Correct value for fill factor for clustered indexes with surrogate identity keys

Nick's pretty much correct.

If you do updates that increase the size of a record on packed pages, you'll cause page splits; apart from that, with an identity primary key nothing is going to cause page splits in the clustered index.

(Although saying that, there are 5 types of page splits that the Storage Engine can do, and not all of them cause fragmentation and data movement - the one you get when inserting monotonically increasing identity values is an end-of-page split. But I digress...)

I've helped many customers with this and I wrote the BOL around it all - if you want to just pick a value as the stake-in-the-ground, 70% has seen the most success. As Nick says, monitor and tweak as appropriate.
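As a minimal sketch of putting that stake in the ground - assuming a hypothetical table `dbo.Orders` with a clustered primary key index `PK_Orders` (both names are made up for illustration):

```sql
-- Rebuild the clustered index with a 70% fillfactor as a starting point
-- (table and index names are hypothetical)
ALTER INDEX PK_Orders ON dbo.Orders
REBUILD WITH (FILLFACTOR = 70);
```

The fillfactor only applies at rebuild time; pages fill back up as inserts and updates occur, which is why you monitor and tweak.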

Picking a fillfactor for any index is a balancing act of how much activity occurs that pushes the page fullness towards 100% and how often you can take corrective action to reset the fillfactor. You need to think about how much space will initially be 'wasted' on the pages if you set the fillfactor really low, like 50%, but again I've seen this be appropriate in some cases.
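One way to watch how quickly pages are filling up (and fragmenting) is the physical stats DMV - a sketch, again assuming a hypothetical `dbo.Orders` table; note you need `SAMPLED` or `DETAILED` mode to get page-fullness numbers:

```sql
-- Check fragmentation and average page fullness for a table's indexes
-- (table name is hypothetical)
SELECT i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'SAMPLED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id;
```

When `avg_page_space_used_in_percent` creeps up toward 100 on a write-heavy index, you're approaching split territory and it's time for corrective action.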

You should also consider how the index will be used. If it's just for singleton lookups, you might get away with a lower fillfactor and more time between rebuilds/defrags, as you won't waste too much IO and memory keeping sparsely populated clustered index pages cached. For large range scans, you'd want the fillfactor a bit higher, to increase the IO and memory efficiency.

There's also the OLTP vs DW question - usually a DW is unchanging so the indexes would have 100% fillfactor. OLTP is the hard part.

After you've sorted out the clustered index, remember that the nonclustered indexes will need attention too, as they'll most likely get fragmented.

When resetting the fillfactor, remember you have a choice between rebuilding and defragging. DBCC INDEXDEFRAG/ALTER INDEX ... REORGANIZE can compact pages back toward the existing fillfactor for indexes that aren't badly fragmented, but only a rebuild can set a new fillfactor value.
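To make that choice concrete - a sketch with a hypothetical index `PK_Orders` on `dbo.Orders`:

```sql
-- Reorganize: online, lightweight; compacts pages back toward the
-- fillfactor the index already has, but cannot change that value
ALTER INDEX PK_Orders ON dbo.Orders REORGANIZE;

-- Rebuild: heavier, but the only way to set a new fillfactor value
ALTER INDEX PK_Orders ON dbo.Orders REBUILD WITH (FILLFACTOR = 70);
```

A common rule of thumb (not from this answer, just widely used) is to reorganize at low fragmentation and rebuild once fragmentation gets high.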

Hope this helps!

(Sorry for the 'over-answer' - one of my hot-buttons, having written the code :-)


It Depends

It's a balancing act. If your table is read-intensive, with not many updates or deletes, then the default (0, which behaves the same as 100) should be OK.

If your table is very write-intensive, with lots of updates, then a value below 80 might be more appropriate.

There's no magic formula for this stuff (AFAIK - if there is, please let me know). The best thing to do is to have a test environment and a representative workload: make the changes and see how your database performs under that workload.
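When testing a workload, one rough signal worth watching is page-split activity - a sketch using the operational stats DMV (the `leaf_allocation_count` column counts leaf-page allocations, which includes both harmless end-of-page splits and the fragmenting mid-page kind, so treat it as an approximation):

```sql
-- Rough page-split activity per index since the counters were last reset
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.leaf_allocation_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
ORDER BY s.leaf_allocation_count DESC;
```

Run your workload at a couple of different fillfactor settings and compare both this and your query performance numbers before committing to a value.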