If a database only ever has one insert, is it bad to index every possible column combination?

Yes, it will influence initial plan compile time as the optimizer will have many extra access paths to the data to consider.

Since you're on SQL Server 2017, loading once, and running reports, why not just use a clustered columnstore index instead?

That seems to be the ideal solution to your need to index every possible column combination.

Columnstore indexes - Overview
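
As a minimal sketch (the table and column names are made up for illustration, not taken from the question), the load-once-then-report pattern could look like this:

    -- Hypothetical reporting table; all names are placeholders.
    CREATE TABLE dbo.ReportFacts
    (
        ReportDate date          NOT NULL,
        Region     varchar(50)   NOT NULL,
        Product    varchar(50)   NOT NULL,
        Amount     decimal(18,2) NOT NULL
    );

    -- ... the single bulk load happens here ...

    -- One clustered columnstore index instead of thousands of B-trees:
    -- every column is stored compressed and can be scanned independently.
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_ReportFacts ON dbo.ReportFacts;

Because each column is stored and compressed separately, reporting queries can touch whichever columns they need without a dedicated index per column ordering.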


If a table has N columns, the number of possible column combinations is 2^N - 1 (excluding the empty set). For 10 columns that means 1,023 indexes; for 20 columns we end up with a whopping 1,048,575 indexes. Most of those indexes will never be used, yet the optimizer will still have to consider them, and it may even choose a sub-optimal index over a better one. I would not go down the path of generating every conceivable index; instead, I would try to figure out which indexes would actually be beneficial.
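
Spelled out, that count is just the number of non-empty subsets of N columns:

$$\sum_{k=1}^{N}\binom{N}{k} = 2^N - 1, \qquad 2^{10}-1 = 1023, \qquad 2^{20}-1 = 1048575.$$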

EDIT: corrected the number of possible indexes

As Jeff points out, it's even worse than 2^N (the power set), since (3,2,1) is clearly a different index than (1,2,3). For N columns we can choose the first position of an index that contains all columns in N ways, the second position in N-1 ways, and so on, so we end up with N! different full-width indexes. None of these indexes is subsumed by another index in this set, and we cannot add a shorter index that isn't already covered by one of the full-width ones. The number of indexes is therefore N!. The example for 10 columns thus becomes 10! = 3,628,800 indexes, and for 20 (drumroll) 2,432,902,008,176,640,000 indexes. That is a ridiculously large number: if we put a dot for each index one millimetre apart, it would take a light beam 94 days to pass all the dots. All in all, don't ;-)
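
To make the counting argument explicit (my notation, not part of the original answer): counting every ordered, duplicate-free column list of every width gives

$$\sum_{k=1}^{N}\frac{N!}{(N-k)!}$$

candidates, but any list of width k < N is a leading prefix of at least one full-width permutation, so the N! full-width indexes already cover them. The light-beam figure checks out, too:

$$20!\ \text{mm} \approx 2.43 \times 10^{15}\ \text{m}, \qquad \frac{2.43 \times 10^{15}\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 8.1 \times 10^{6}\ \text{s} \approx 94\ \text{days}.$$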


No.

It's not practical to index "everything", but you can index "most" of it.

Here's the thing. If a table has N columns, then the number of possible indexes is N!. Say a table has 10 columns: you don't have just 10 possible indexes, but 10!. That is... 3,628,800... on a single table. That's a lot of disk space, disk I/O, cache, and seek time.
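
If you want to see how quickly that blows up, here is a small sketch (a recursive CTE, nothing table-specific) that tabulates N! for N up to 20:

    -- N! = number of full-width column orderings for an N-column table.
    -- bigint holds up to about 9.2e18, so 20! (about 2.4e18) still fits.
    WITH factorials AS
    (
        SELECT 1 AS n, CAST(1 AS bigint) AS possible_indexes
        UNION ALL
        SELECT n + 1, possible_indexes * (n + 1)
        FROM factorials
        WHERE n < 20
    )
    SELECT n AS column_count, possible_indexes
    FROM factorials;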

Why? A few reasons:

  • Lightweight indexes are usually cached, which makes them lightning fast. If you have 3 million of them, they are NOT going to be cached.

  • The SQL optimizer may take a lot of time deciding which one is best to use, especially when joins are involved.

  • The SQL optimizer may give up on the exhaustive algorithm and fall back to a heuristic one instead, which may be "less than optimal". PostgreSQL, for example, plans small queries exhaustively but switches to heuristics once a query involves enough tables (see the snippet after this list).

  • Indexes are supposed to be lighter than the heap. If you are indexing everything, each full-width index becomes as heavy as the heap itself... which defeats the purpose of the index.
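
On the PostgreSQL point above: the switchover is governed by planner settings rather than a fixed query size; a quick way to inspect them (the defaults noted in the comments are the documented ones):

    -- PostgreSQL planner settings that control when exhaustive join
    -- planning gives way to heuristics:
    SHOW join_collapse_limit;  -- default 8: longer join lists are not fully reordered
    SHOW from_collapse_limit;  -- default 8: same idea for flattened subqueries
    SHOW geqo_threshold;       -- default 12: at this many FROM items or more,
                               -- the genetic query optimizer (GEQO) takes over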