Is it okay to have a fill factor of 100 on the clustered index that is on an ID column?

In most cases you do want a FILLFACTOR of 100; only in special cases would you want to lower that value. Updates only cause page splits if they increase the size of a row such that it no longer fits on the same data page. Deletes here and there shouldn't matter in either case. In all cases, you need to do regular index maintenance and REBUILD the clustered index when fragmentation passes a certain threshold. A REBUILD rewrites the data in logical key order, so it fills in the voids left by DELETE operations and by UPDATE operations that caused page splits (after a split, both the old and the new page typically still have free space).
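For example, here is a minimal sketch of that maintenance check (the table name `dbo.MyTable`, the index name `PK_MyTable`, and the 30% threshold are all placeholders / common starting points, not rules):

```sql
-- Check logical fragmentation and page density for one table's indexes.
-- 'SAMPLED' reads a subset of leaf pages; use 'LIMITED' for very large tables.
SELECT  i.name AS index_name,
        ips.index_type_desc,
        ips.avg_fragmentation_in_percent,
        ips.avg_page_space_used_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'),
                                       NULL, NULL, 'SAMPLED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id;

-- If fragmentation is over your threshold (e.g. 30%), rebuild.
-- (ONLINE = ON is an option if your edition supports it.)
ALTER INDEX PK_MyTable ON dbo.MyTable
    REBUILD WITH (FILLFACTOR = 100);
```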

A lot of deletes can leave sparsely-filled or empty pages and hence some fragmentation, but using a lower FILLFACTOR isn't going to help there. In fact, depending on which rows are being deleted, it could make that worse, since fewer rows fit on each data page in the first place.

But keep in mind:

  • Using a FILLFACTOR of 100 doesn't mean that 100% of the space is used. Rows are placed on a data page only if there is room for the entire row. If a page has 100 bytes left over and the next row is 105 bytes, it can't go there, yet that leftover space can still absorb an update that grows a variable-length column by 50 bytes. (The sketch after this list shows how to check actual page density.)
  • When you set FILLFACTOR to less than 100, you are reserving space on all data pages, not just the ones that will be getting updated. Unless you have a usage pattern that evenly distributes updates across the entire range of the table/index, why reserve space on data pages that contain "older" rows that aren't being updated? That just slows down queries across the entire index / table in the hopes that some of the newer data pages (i.e. rows more likely to be updated) won't page split as early as they would when using a FILLFACTOR of 100.
  • Consider the size of your rows and how many will fit onto a data page when FILLFACTOR is 100, versus with a lower FILLFACTOR such as 90 or even 80. If you have wide rows of 500 - 1000 bytes, the difference between various FILLFACTOR values might be just one or a few rows per page (the sketch after this list includes that rows-per-page arithmetic). Also consider what types of updates are happening: if updates are to fixed-length columns, this isn't even an issue; for variable-length columns, can they realistically grow by 100 - 500 bytes?

    There are several factors to consider here to be truly pragmatic: if, in the course of a day or week (how long between REBUILD operations?), successive UPDATE operations on rows that sort close together are going to grow in size and cause multiple page splits, is postponing those splits by several hours or a day really worth reserving that space across the entire table / index? That reservation takes up more disk space and more memory (i.e. Buffer Pool), makes backups larger, takes longer to read from disk into the Buffer Pool, and makes the index maintenance itself take longer.

  • Fragmentation adversely affects range queries, not singleton queries. If you are fetching individual rows (or finding rows to be updated or deleted), you likely won't see the impact of fragmentation at all, or at least not until it is really bad.
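Here is a minimal sketch covering the two checks referenced above: actual page density under FILLFACTOR 100, and a back-of-the-envelope rows-per-page calculation. The table name `dbo.MyTable`, the ~8,096 bytes of usable space per data page, and the 700-byte average row size are all assumptions for illustration:

```sql
-- Actual page density: FILLFACTOR = 100 does not mean pages are 100% full.
-- 'DETAILED' reads every page; prefer 'SAMPLED' on large tables.
SELECT  avg_page_space_used_in_percent,
        avg_record_size_in_bytes,
        page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'),
                                       1, NULL, 'DETAILED')  -- index_id 1 = clustered index
WHERE   index_level = 0;                                     -- leaf (data page) level only

-- Back-of-the-envelope rows per page at various fill factors,
-- assuming roughly 8,096 usable bytes per page and a 700-byte row:
SELECT  FLOOR(8096 * 1.00 / 700) AS rows_per_page_at_100,  -- 11 rows
        FLOOR(8096 * 0.90 / 700) AS rows_per_page_at_90,   -- 10 rows
        FLOOR(8096 * 0.80 / 700) AS rows_per_page_at_80;   -- 9 rows
```

With rows that wide, dropping from 100 to 80 only frees up space equal to a couple of rows per page, yet every page in the index carries that overhead.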

As with many / most technical decisions, there are various technicalities that influence what is the "best" or "optimal" approach. So you really need to consider the max size of the row, whether the updates will happen to variable-length columns, whether the update activity is going to be mostly spread across the table / index or confined to a more "recent" segment of it, etc. I have worked in places where the policy was for all tables to have a FILLFACTOR of 80, and it was an insane (and irresponsible) practice given that we were using IDENTITY columns for the clustered indexes (usually PKs) and "older" data rarely ever got updated. So we wasted tons of expensive SAN space on many tables -- often containing a million rows -- which bloated the backups (and made both backup and restore operations take longer!) and chewed up more memory, forcing either cached plans or cached data pages to be evicted (i.e. Page Life Expectancy was quite low).

ERGO: Start with 100 and experiment with lowering it in small amounts IF / WHEN you think an optimization might be warranted (i.e. if you find that fragmentation on a particular table is growing to higher levels between rebuilds AND that is also causing performance degradation!).
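If you do decide to experiment, the change itself is just a rebuild option. A sketch, again with placeholder names and a deliberately small step down from 100:

```sql
-- Lower the fill factor in a small step (e.g. 100 -> 95) on just the one index
-- that shows problematic fragmentation between rebuilds:
ALTER INDEX PK_MyTable ON dbo.MyTable
    REBUILD WITH (FILLFACTOR = 95);

-- Confirm what is currently configured for the table's indexes:
SELECT  name, fill_factor
FROM    sys.indexes
WHERE   object_id = OBJECT_ID(N'dbo.MyTable');
```

Then keep measuring fragmentation and query performance between rebuilds before lowering it any further.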