Are Tries still a good idea on modern architectures?

I did some performance testing in C# with a Trie and a Dictionary (a strongly typed hash table). I found that the Dictionary was 5-10 times faster than the Trie. Perhaps my Trie implementation could be optimized a bit, but hardly enough to make it much faster than the Dictionary, or perhaps even as fast.
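For concreteness, here's roughly the shape of Trie I mean. This is a simplified sketch, not my exact benchmark code, and the names (`Trie`, `Node`) are just illustrative:

```csharp
using System.Collections.Generic;

// A minimal string-keyed Trie: one node per character, with a flag
// marking where a complete key ends. Note the irony: each node here
// does a hash lookup per character, while a flat Dictionary hashes
// the whole key exactly once.
class Trie
{
    private sealed class Node
    {
        public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public bool IsTerminal;
    }

    private readonly Node _root = new Node();

    public void Add(string key)
    {
        Node current = _root;
        foreach (char c in key)
        {
            if (!current.Children.TryGetValue(c, out Node child))
            {
                child = new Node();
                current.Children.Add(c, child);
            }
            current = child;
        }
        current.IsTerminal = true;
    }

    public bool Contains(string key)
    {
        Node current = _root;
        foreach (char c in key)
        {
            if (!current.Children.TryGetValue(c, out current))
                return false;
        }
        return current.IsTerminal;
    }
}
```

Even if you replace the per-node dictionaries with arrays, you still pay one pointer chase per character, which is likely where much of the gap comes from.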

The ContainsKey method on a Dictionary is close to an O(1) operation; its cost is dominated by hashing the key once, so it depends on how good (and how fast) the hash function is. It's not easy for another collection to beat that as long as the hashing is reasonably cheap.
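If you want to reproduce the comparison, a naive micro-benchmark along these lines will do. It reuses the Trie sketch above; there's no warm-up or statistical rigor, so treat the numbers as indicative only:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

class Benchmark
{
    static void Main()
    {
        // Build the same key set into both structures.
        var keys = new List<string>();
        for (int i = 0; i < 100000; i++)
            keys.Add("key" + i);

        var dict = new Dictionary<string, bool>();
        var trie = new Trie();
        foreach (string k in keys) { dict[k] = true; trie.Add(k); }

        bool sink = false; // consumed below so the lookups aren't dead code

        var sw = Stopwatch.StartNew();
        foreach (string k in keys) sink |= dict.ContainsKey(k);
        sw.Stop();
        Console.WriteLine($"Dictionary: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        foreach (string k in keys) sink |= trie.Contains(k);
        sw.Stop();
        Console.WriteLine($"Trie:       {sw.ElapsedMilliseconds} ms ({sink})");
    }
}
```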

With a custom IEqualityComparer you can use almost anything as a key in a Dictionary, which makes it rather flexible. A Trie is more limited in what you can use as a key, since the key has to decompose into a sequence of symbols, and that limits its usefulness somewhat.
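For instance, byte arrays don't work as Dictionary keys out of the box, because the default comparer only checks reference equality; a small custom comparer fixes that. An illustrative sketch with a simple polynomial hash, not production-grade hashing:

```csharp
using System;
using System.Collections.Generic;

// Compares byte arrays by content rather than by reference, so they
// can serve as Dictionary keys.
sealed class ByteArrayComparer : IEqualityComparer<byte[]>
{
    public bool Equals(byte[] x, byte[] y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null || x.Length != y.Length) return false;
        for (int i = 0; i < x.Length; i++)
            if (x[i] != y[i]) return false;
        return true;
    }

    public int GetHashCode(byte[] obj)
    {
        unchecked // simple polynomial hash; integer overflow is fine here
        {
            int hash = 17;
            foreach (byte b in obj)
                hash = hash * 31 + b;
            return hash;
        }
    }
}

class ComparerDemo
{
    static void Main()
    {
        var map = new Dictionary<byte[], string>(new ByteArrayComparer());
        map[new byte[] { 1, 2, 3 }] = "found";
        // A *different* array instance with the same content still matches:
        Console.WriteLine(map[new byte[] { 1, 2, 3 }]); // prints "found"
    }
}
```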


I hadn't thought of this as an area of concern before, but now that you mention it, there are times when a standard Trie implementation would be handy. Then again, as far as I know, Tries are already used under the hood by Python, Perl, and the other string-savvy languages I use these days.

Last I checked (which was ages ago), the BSD kernel used Patricia Tries in its networking code to select the best matching route, and hence the interface, for outgoing packets. Wikipedia has some info on Patricia (radix) Tries.
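The trick there is longest-prefix matching: walk the destination address bit by bit and remember the most specific route seen along the way. Here's a toy version with one bit per node for clarity; the real BSD radix code is far more compact, and all the names here are made up:

```csharp
using System;

// Toy longest-prefix match over IPv4 addresses, in the spirit of a
// routing table: the deepest (most specific) matching route wins.
class PrefixTrie
{
    private sealed class Node
    {
        public Node Zero, One;
        public string Interface; // non-null when a route terminates here
    }

    private readonly Node _root = new Node();

    public void AddRoute(uint network, int prefixLength, string iface)
    {
        Node current = _root;
        for (int bit = 31; bit > 31 - prefixLength; bit--)
        {
            bool one = ((network >> bit) & 1) != 0;
            Node next = one ? current.One : current.Zero;
            if (next == null)
            {
                next = new Node();
                if (one) current.One = next; else current.Zero = next;
            }
            current = next;
        }
        current.Interface = iface;
    }

    public string Lookup(uint address)
    {
        Node current = _root;
        string best = null; // most specific route seen so far
        for (int bit = 31; bit >= 0 && current != null; bit--)
        {
            if (current.Interface != null) best = current.Interface;
            current = ((address >> bit) & 1) != 0 ? current.One : current.Zero;
        }
        if (current != null && current.Interface != null) best = current.Interface;
        return best;
    }
}

class RoutingDemo
{
    static void Main()
    {
        var table = new PrefixTrie();
        table.AddRoute(0xC0A80000, 16, "eth0"); // 192.168.0.0/16
        table.AddRoute(0xC0A80100, 24, "eth1"); // 192.168.1.0/24
        Console.WriteLine(table.Lookup(0xC0A80105)); // 192.168.1.5 -> "eth1"
    }
}
```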


You could just build two sample apps and see which one performs better. Memory access is cheap as long as you don't page fault; once you do, it's very expensive. For client application development, it's almost always better to recompute a value than to fetch it from memory, for this very reason: modern processors are ridiculously fast, but cache misses (never mind page faults) still hurt.
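You can see the effect directly by reading the same array sequentially and then in a random order. The number of reads is identical; only the access pattern changes. A rough sketch (sizes are arbitrary and timings vary a lot by machine):

```csharp
using System;
using System.Diagnostics;

class CacheDemo
{
    static void Main()
    {
        const int N = 1 << 24; // ~16M ints (64 MB), well past typical caches
        int[] data = new int[N];
        int[] order = new int[N];
        var rng = new Random(42);
        for (int i = 0; i < N; i++) order[i] = i;
        for (int i = N - 1; i > 0; i--) // Fisher-Yates shuffle
        {
            int j = rng.Next(i + 1);
            int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }

        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) sum += data[i]; // sequential: prefetch-friendly
        Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < N; i++) sum += data[order[i]]; // random: mostly cache misses
        Console.WriteLine($"random:     {sw.ElapsedMilliseconds} ms ({sum})");
    }
}
```

A trie lookup tends to look like the second loop (one pointer chase per character), while a compact hash table looks closer to the first, which is arguably a big part of why the Dictionary wins on modern hardware.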