How does case-insensitive collation work?

Indexes can be built against case-insensitive strings, yet the case of the data is persisted. How does this actually work?

This is actually not SQL Server-specific behavior; it's just how these things work in general.

So, the data is the data. Speaking about an index specifically, the data needs to be stored as-is; otherwise, every access would require a look-up in the main table to get the actual value, and there would be no possibility of a covering index (at least not for string types).

The data, whether in the table/clustered index or a non-clustered index, does not contain any collation / sorting info. It is simply data. The collation (locale/culture rules and sensitivities) is just metadata attached to the column and used when a sort operation is called (unless overridden by a COLLATE clause), which includes the creation/rebuild of an index. The rules defined by a non-binary collation are used to generate sort keys, which are binary representations of the string (sort keys are unnecessary in binary collations). These binary representations incorporate all of the locale/culture rules and selected sensitivities. The sort keys are used to place the records in their proper order, but are not themselves stored in the index or table. They aren't stored (at least I haven't seen these values in the index and was told that they aren't stored) because:

  1. They aren't truly needed for sorting, since they would merely be in the same order as the rows in the table or index anyway; the physical order of the index is just sorting, not comparison.
  2. While storing them might make comparisons faster, it would also make the index larger: the minimum size for a single character is 5 bytes, and that's just the "overhead" of the sort-key structure. Most characters add 2 bytes each, plus 1 byte if there's an accent, plus 1 byte if it's upper-case. For example, "e" is a 7-byte key, "E" and "é" are both 8 bytes, and "É" is a 9-byte key. Hence, they are not worth storing in the end.
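To make this concrete, here is a minimal Python sketch of the idea (the weight values are invented for illustration; they are not SQL Server's actual tables): rows are physically ordered by computed sort keys, and the keys themselves are discarded after the sort rather than stored.

```python
# Conceptual sketch only: invented weights, not SQL Server's actual tables.
# Each character maps to (base_weight, diacritic_weight, case_weight).
WEIGHTS = {
    "e": (5, 0, 0),
    "E": (5, 0, 1),
    "é": (5, 1, 0),
    "É": (5, 1, 1),
}

def sort_key(s):
    """Build a binary-comparable sort key from per-character weights."""
    return tuple(WEIGHTS[ch] for ch in s)

rows = ["É", "e", "é", "E"]
# The index stores the original strings, ordered by their sort keys;
# the keys are computed during the sort and then thrown away.
ordered = sorted(rows, key=sort_key)
print(ordered)  # ['e', 'E', 'é', 'É']
```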

There are two types of collations: SQL Server and Windows.

SQL Server

SQL Server collations (those with names starting with SQL_) are the older, pre-SQL Server 2000 way of sorting/comparing (even though SQL_Latin1_General_CP1_CI_AS is still the installation default on US English OSes, quite sadly). In this older, simplistic, non-Unicode model, each combination of locale, code page, and the various sensitivities is given a static mapping of each of the characters in that code page. Each character is assigned a value (i.e. a sort weight) to denote how it equates with the others. Comparisons in this model appear to be a two-pass operation:

  1. First, it removes all accents (such that " ü " becomes " u "), expands characters like " Æ " into " A " and " E ", then does an initial sort so that words are in a natural order (how you would expect to find them in a dictionary).
  2. Then, it goes character by character to determine equality based on these underlying values for each character. This second part is what mustaccio is describing in his answer.
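The two passes above can be sketched in Python (a conceptual illustration only; the folding and comparison here stand in for the static code-page mapping tables, which are not reproduced):

```python
import unicodedata

# Conceptual sketch of the older, code-page-based model.
EXPANSIONS = {"Æ": "AE", "æ": "ae"}

def fold(s):
    """Pass 1: expand ligatures and strip accents (e.g. 'ü' -> 'u')."""
    s = "".join(EXPANSIONS.get(ch, ch) for ch in s)
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def ci_equal(a, b):
    """Pass 2: character-by-character comparison using one static value
    per character; case-insensitive, so both cases share a value."""
    return fold(a).lower() == fold(b).lower()

print(ci_equal("Über", "uber"))  # True
print(ci_equal("Æon", "aeon"))   # True
```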

The only sensitivities that can be adjusted in these collations are: "case" and "accent" ("width", "kana type" and "variation selector" are not available). Also, none of these collations support Supplementary Characters (which makes sense as those are Unicode-specific and these collations only apply to non-Unicode data).

This approach applies only to non-Unicode VARCHAR data. Each unique combination of locale, code page, case-sensitivity, and accent-sensitivity has a specific "sort ID", which you can see in the following example:

SELECT COLLATIONPROPERTY(N'SQL_Latin1_General_CP1_CI_AS', 'SortID'), -- 52
       COLLATIONPROPERTY(N'SQL_Latin1_General_CP1_CS_AS', 'SortID'), -- 51
       COLLATIONPROPERTY(N'Latin1_General_100_CI_AS',     'SortID'); --  0

The only difference between the first two collations is the case-sensitivity. The third collation is a Windows collation and so does not have a static mapping table.

Also, these collations should sort and compare faster than Windows collations, due to being simple character-to-sort-weight lookups. However, they are also far less functional and should generally be avoided if at all possible.

Windows

Windows collations (those with names not starting with SQL_) are the newer (starting in SQL Server 2000) way of sorting/comparing. In this newer, complex, Unicode model, each combination of locale, code page, and the various sensitivities is not given a static mapping. For one thing, there are no code pages in this model. This model assigns a default sort value to each character, and then each locale/culture can re-assign sort values to any number of characters. This allows multiple cultures to use the same characters in different ways. It also has the effect of allowing multiple languages to be sorted naturally using the same collation, if they do not use the same characters (and if one of them does not need to re-assign any values and can simply use the defaults).

The sort values in this model are not single values. They are an array of values that assign relative weights to the base letter, any diacritics (i.e. accents), casing, etc. If the collation is case-sensitive, then the "case" portion of that array is used, otherwise it's ignored (hence, insensitive). If the collation is accent-sensitive, then the "diacritic" portion of the array is used, otherwise it's ignored (hence, insensitive).
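This array-of-weights idea, and how sensitivities switch individual elements on or off, can be sketched as follows (invented weight values, purely illustrative):

```python
# Conceptual sketch: each character carries an array of weights
# (base, diacritic, case); the values are invented, not Windows' tables.
WEIGHTS = {
    "u": (20, 0, 0), "U": (20, 0, 1),
    "ü": (20, 1, 0), "Ü": (20, 1, 1),
}

def effective_key(s, accent_sensitive, case_sensitive):
    """Keep only the weight elements for the sensitivities in effect;
    an ignored element simply drops out of the comparison."""
    key = []
    for ch in s:
        base, diacritic, case = WEIGHTS[ch]
        parts = [base]
        if accent_sensitive:
            parts.append(diacritic)
        if case_sensitive:
            parts.append(case)
        key.append(tuple(parts))
    return tuple(key)

# Case- and accent-insensitive: all four forms of "u" compare equal.
print(effective_key("ü", False, False) == effective_key("U", False, False))  # True
# Accent-sensitive: 'ü' and 'u' now differ.
print(effective_key("ü", True, False) == effective_key("u", True, False))    # False
```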

Comparisons in this model are a multi-pass operation:

  1. First, the string is normalized so that various ways of representing the same character will equate. For example, " ü " could be a single character / code point (U+00FC). You could also combine a non-accented " u " (U+0075) with a Combining Diaeresis " ̈ " (U+0308) to get: " ü ", which not only looks the same when rendered (unless there is a problem with your font), but is also considered to be the same as the single-character version (U+00FC), unless using a binary collation (which compares bytes instead of characters). Normalization breaks the single character into its various pieces, which includes expansions for characters like " Æ " (as noted above for SQL Server collations).
  2. The comparison operation in this model goes character by character, per sensitivity. Sort keys for the strings are determined by applying the appropriate elements of each character's array of collation values, based on which sensitivities are "sensitive". The sort-key values are arranged by all of the primary sensitivities of each character (the base character), followed by all of the secondary sensitivities (diacritic weight), followed by the case weight of each character, and so on.
  3. Sorting is performed based on the calculated sort keys. Because each sensitivity is grouped together, you can get a different sort order than you would with an equivalent SQL Server collation when comparing strings of multiple characters, when accents are involved, and when the collation is accent-sensitive (and even more so if the collation is also case-sensitive).
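The steps above can be sketched in Python (conceptual only; the weights are invented, and Python's unicodedata normalization stands in for the engine's internal normalization):

```python
import unicodedata

# Conceptual sketch of the multi-pass model (invented weights).
WEIGHTS = {"c": (10, 0, 0), "o": (30, 0, 0), "t": (40, 0, 0),
           "e": (20, 0, 0), "é": (20, 1, 0)}

def normalize(s):
    """Pass 1: compose 'e' + combining acute into the single 'é'."""
    return unicodedata.normalize("NFC", s)

def sort_key(s):
    """Passes 2-3: all primary (base) weights first, then all secondary
    (diacritic) weights, then all case weights."""
    chars = [WEIGHTS[ch] for ch in normalize(s)]
    primary = tuple(w[0] for w in chars)
    secondary = tuple(w[1] for w in chars)
    tertiary = tuple(w[2] for w in chars)
    return (primary, secondary, tertiary)

# 'e' + U+0301 normalizes to the same key as the precomposed 'é':
print(sort_key("e\u0301") == sort_key("é"))  # True
# Grouped weights: diacritics are only compared after all base
# letters agree, so 'cote' sorts before 'coté'.
print(sorted(["coté", "cote"], key=sort_key))  # ['cote', 'coté']
```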

For more details on this sorting, I will eventually publish a post that shows the sort key values, how they are calculated, the differences between SQL Server and Windows collations, etc. But for now, please see my answer to: Accent Sensitive Sort (please note that the other answer to that question is a good explanation of the official Unicode algorithm, but SQL Server instead uses a custom, though similar, algorithm, and even a custom weight table).

All sensitivities can be adjusted in these collations: "case", "accent", "width", "kana type", and "variation selector" (starting in SQL Server 2017, and only for the Japanese collations). Also, some of these collations (when used with Unicode data) support Supplementary Characters (starting in SQL Server 2012). This approach applies to both NVARCHAR and VARCHAR data (even non-Unicode data). It applies to non-Unicode VARCHAR data by first converting the value to Unicode internally, and then applying the sort/comparison rules.
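That internal conversion step can be illustrated in Python (a sketch, not SQL Server internals; Python's casefold stands in for the collation's case-insensitive rules):

```python
# Conceptual sketch: non-Unicode (code-page) data is first converted to
# Unicode, then the same rules apply. Here, bytes in Windows-1252.
varchar_bytes = b"r\xe9sum\xe9"            # 'résumé' encoded in cp1252
as_unicode = varchar_bytes.decode("cp1252")

# After conversion, the comparison rules see ordinary Unicode text:
print(as_unicode)                                     # résumé
print(as_unicode.casefold() == "RÉSUMÉ".casefold())   # True
```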


Please note:

  1. There is no universal default collation for SQL Server. There is an installation default which differs based on the current locale/language setting of the OS at time of installation (which is unfortunately SQL_Latin1_General_CP1_CI_AS for US English systems, so please vote for this suggestion). This can be changed during installation. This instance-level collation then sets the collation for the [model] DB which is the template used when creating new DBs, but the collation can be changed when executing CREATE DATABASE by specifying the COLLATE clause. This database-level collation is used for variable and string literals, as well as the default for new (and altered!) columns when the COLLATE clause is not specified (which is the case for the example code in the question).
  2. For more info on Collations / encodings / Unicode, please visit: Collations Info

Typically this is implemented using collation tables that assign a certain score to each character. The sorting routine has a comparator that uses an appropriate table, whether default or explicitly specified, to compare strings, character by character, using their collation scores. If, for example, a particular collation table assigns a score of 1 to "a" and 201 to "A", and a lower score in this particular implementation means higher precedence, then "a" will be sorted before "A". Another table might assign reverse scores: 201 to "a" and 1 to "A", and the sort order will subsequently be reversed. Yet another table might assign equal scores to "a", "A", "Á", and "Å", which would lead to case- and accent-insensitive comparison and sorting.
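A minimal Python sketch of such a score-based comparator (the score values are invented, following the examples above):

```python
# Conceptual sketch of a score-table comparator (invented scores).
CASE_SENSITIVE = {"a": 1, "A": 201}
CASE_INSENSITIVE = {"a": 1, "A": 1, "Á": 1, "Å": 1}

def compare(x, y, table):
    """Compare strings character by character via their collation
    scores; a lower score means higher precedence (sorts first)."""
    for cx, cy in zip(x, y):
        diff = table[cx] - table[cy]
        if diff != 0:
            return diff
    return len(x) - len(y)

# With the case-sensitive table, 'a' (score 1) sorts before 'A' (201):
print(compare("a", "A", CASE_SENSITIVE) < 0)      # True
# With equal scores, 'a', 'A', 'Á', 'Å' all compare equal:
print(compare("a", "Å", CASE_INSENSITIVE) == 0)   # True
```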

Similarly, such a collation table-based comparator is used when comparing an index key with the value supplied in the predicate.