Index Fragmentation - Am I interpreting the results correctly?

Yes, that's what it looks like, unless some error crept into the data after it was moved to Excel.

However, these are all tiny, tiny tables. Stop caring about fragmentation on tables with fewer than, say, 1,000 pages.* And even then you probably shouldn't care much until another order or two of magnitude, and even less if you are using SSDs and/or your database fits into memory even with the fragmentation.

The effort you'll spend reorganizing them will not have the impact you expect, and any benefit you do see will be temporary and won't justify the work at all. I would simply filter such small tables out of your queries right from the start.
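
If you want a quick sanity check of whether a table even clears that rough 1,000-page bar before you worry about its fragmentation, something like this does the trick ('dbo.YourTable' is a placeholder name, not from the question):

-- A minimal sketch: check how many pages a table actually occupies
-- before worrying about its fragmentation. 'dbo.YourTable' is a placeholder.
SELECT
    SUM(ps.used_page_count)              AS UsedPages,
    SUM(ps.used_page_count) * 8 / 1024.0 AS UsedMB  -- each page is 8 KB
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.YourTable')
  AND ps.index_id IN (0, 1); -- heap or clustered index = the table itself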

* 1,000 pages is just my ballpark, pull-a-number-out-of-your-you-know-what suggestion. There is no magic number for this. But seriously, for small tables, focus your efforts elsewhere.


It's clearly visible that page_count for all the indexes shown in the figure you attached is < 1500. In that case, even if an index is 100% fragmented, it is NOT going to cause any performance issue.

Below is the recommendation on fragmentation from Microsoft, as it appeared in the SQL Server 2000 version of Books Online (BOL):

Fragmentation affects disk I/O. Therefore, focus on the larger indexes because their pages are less likely to be cached by SQL Server. Use the page count reported by DBCC SHOWCONTIG to get an idea of the size of the indexes (each page is 8 KB in size). Generally, you should not be concerned with fragmentation levels of indexes with less than 1,000 pages. In the tests, indexes containing more than 10,000 pages realized performance gains, with the biggest gains on indexes with significantly more pages (greater than 50,000 pages).
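
For reference, on those older versions the page count that text refers to comes from DBCC SHOWCONTIG itself. A minimal example ('dbo.YourTable' is a placeholder):

-- Legacy sketch matching the BOL 2000 guidance quoted above. DBCC SHOWCONTIG
-- reports Pages Scanned per index; multiply by 8 KB to estimate index size.
DBCC SHOWCONTIG ('dbo.YourTable') WITH ALL_INDEXES, TABLERESULTS;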

Below is a reply from the Microsoft team on an old Connect item (the Connect site has since been retired, and its bug and feature requests were not carried over) that was raised to understand why fragmentation did not decrease even after a rebuild:

For small tables, usually performance impact on fragmentation is undetectable. The first 8 page allocation would be from mixed extents and mixed extents could be anywhere in database files. Rebuilding indexes would not change this nature. If you have a small table, those mixed pages weight a lot during fragmentation calculation; therefore, rebuilding index may not reduce fragmentation. (As matter of fact, I could easily construct a case that fragmentation increases after rebuild.) Those fragmentation would not be a pain for your query performance; so basically you can ignore.
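
If you want to see this behavior for yourself, a rough repro sketch along these lines works (the table name is made up; on newer versions, which allocate from uniform extents by default, the exact numbers may differ):

-- Rough repro sketch: a tiny table can still report high fragmentation
-- after a rebuild, while its page_count stays far below 1,000.
CREATE TABLE dbo.TinyFragDemo (Id INT IDENTITY PRIMARY KEY, Payload CHAR(100));

INSERT INTO dbo.TinyFragDemo (Payload)
SELECT TOP (500) 'x' FROM sys.all_objects;

ALTER INDEX ALL ON dbo.TinyFragDemo REBUILD;

-- avg_fragmentation_in_percent may still be high; page_count will be tiny.
SELECT avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.TinyFragDemo'), NULL, NULL, 'LIMITED');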

You should use the query below; it filters out the irrelevant indexes with page_count < 1500. It is advisable to rebuild only indexes with page_count > 1500.

-- Run this while connected to the target database: the join to
-- sys.indexes resolves object_ids in the current database.
DECLARE @DatabaseId INT = DB_ID('ODS');

SELECT
    OBJECT_NAME(T.object_id, @DatabaseId) AS TableName,
    T2.name                          AS IndexName,
    T.index_id                       AS IndexId,
    T.index_type_desc                AS IndexType,
    T.index_level                    AS IndexLevel,
    T.avg_fragmentation_in_percent   AS AverageFragmentationPercent,
    T.avg_page_space_used_in_percent AS AveragePageSpaceUsedPercent,
    T.page_count                     AS PageCount
FROM sys.dm_db_index_physical_stats(@DatabaseId, NULL, NULL, NULL, 'DETAILED') AS T
    INNER JOIN sys.indexes AS T2
        ON T.index_id = T2.index_id
       AND T.object_id = T2.object_id
WHERE T.page_count > 1500 -- filter out indexes too small to matter
ORDER BY T.avg_fragmentation_in_percent DESC;
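
Once you have that list, the commonly cited rule of thumb is to reorganize at moderate fragmentation and rebuild at heavy fragmentation, with roughly 30% as the dividing line (again a guideline, not a hard rule). The index and table names below are placeholders:

-- Sketch of acting on the query's output; the ~30% cutoff is the
-- commonly cited rule of thumb, not a hard rule.
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE; -- moderate fragmentation (roughly 5-30%)
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;    -- heavy fragmentation (roughly > 30%)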

NOTE: The 1,500-page figure is not something Microsoft has recommended as a hard and fast rule, but it is a widely accepted value; on some forums you will see people using 1,000 instead. The core point is that if an index's page_count is quite low, you should not rebuild or reorganize that index, because it is not actually going to cause any performance issue.