Fastest C++ map?

Am I better off using unordered_map?

Possibly.

std::map provides consistent performance at O(log n) because it is required to be implemented as a balanced tree. std::unordered_map, on the other hand, is implemented as a hash table: it might give you O(1) performance (with a good hash function and a good distribution of keys across the hash buckets), but it can degrade to O(n) (everything lands in one hash bucket and the table devolves into a list). One would normally expect something in between these extremes.

So you can have reasonable performance (O(log n)) all the time, or you need to make sure everything lines up (hash function, key distribution) to get good performance from a hash.

As with any such question: you need to measure before committing to one approach. Unless your datasets are large you might find there is no significant difference.
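For instance, a minimal measurement sketch (the 32-bit key type, the element count and the helper name are just illustrative assumptions; substitute your real classes, data and access pattern before drawing conclusions) could look like this:

// Rough lookup-time comparison between std::map and std::unordered_map.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <map>
#include <random>
#include <unordered_map>
#include <vector>

template <typename Map>
long long lookup_time_us(const std::vector<std::uint32_t>& keys) {
    Map m;
    for (std::uint32_t k : keys) m[k] = k;                  // populate the map

    auto start = std::chrono::steady_clock::now();
    std::uint64_t sum = 0;
    for (std::uint32_t k : keys) sum += m.find(k)->second;  // look every key up again
    auto stop = std::chrono::steady_clock::now();

    volatile std::uint64_t sink = sum;                      // keep the compiler from
    (void)sink;                                             // optimizing the loop away
    return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}

int main() {
    std::vector<std::uint32_t> keys(1000000);
    std::mt19937 gen(42);
    for (std::uint32_t& k : keys) k = gen();

    std::cout << "std::map:           "
              << lookup_time_us<std::map<std::uint32_t, std::uint32_t>>(keys) << " us\n";
    std::cout << "std::unordered_map: "
              << lookup_time_us<std::unordered_map<std::uint32_t, std::uint32_t>>(keys) << " us\n";
}

Compile with your usual optimization flags and run it on your real workload; the relative numbers can easily shift depending on key type, hash quality and cache behaviour.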


Important warning: Unless you have measured (and your question suggests that you haven't) that map performance substantially influences your application's performance, i.e. that a large percentage of its time is spent searching and updating the map, don't bother making it faster. Stick to std::map (or std::unordered_map or any available hash_map implementation). Speeding up your application by 1% will probably not be worth the effort. Make it bug-free instead.

Echoing Richard's answer: measure performance with different map implementations, using your real classes and real data.

Some additional notes:

  • Understand the difference between expected cost (hash maps usually have the lower one), worst-case cost (O(log n) for a balanced binary tree, but much higher for a hash map if an insert triggers reallocation of the hash array) and amortized cost (total cost divided by the number of operations or elements; it depends on things like the ratio of new to existing elements). You need to find out which one is more constraining in your case. For example, reallocation of a hash map can be too much if you need to adhere to a very low latency limit.

  • Find out where the real bottleneck is. It might be that the cost of searching in the map is insignificant compared to, e.g., the I/O cost.

  • Try a more specialized map implementation. For example, a lot can be gained if you know something more about the map's keys. Authors of generic map implementations have no such knowledge.

In your example (32-bit unsigned integer keys which strongly cluster, e.g. are assigned sequentially) you can use a radix-based approach. A very simple example (treat it as an illustration, not a ready-to-use recipe):

Item *sentinel[65536];  // sentinel page: 65536 Item pointers, all NULL
Item **pages[65536];    // top-level table of pages,
                        // initialized so every entry points to sentinel

Then a search is as simple as:

Item *value = pages[index >> 16][index & 0xFFFF];  // high 16 bits pick the page, low 16 bits the slot

When you need to set a new value:

if (pages[index >> 16] == sentinel) {
  // first write to this page: replace the shared sentinel with a private,
  // NULL-filled page
  pages[index >> 16] = allocate_new_null_filled_page();
}
pages[index >> 16][index & 0xFFFF] = value;

  • Tweak your map implementation.

    • E.g. every hash_map likes to know the approximate number of elements in advance. It helps avoid unnecessary reallocation of the hash table and (possibly) rehashing of all the keys (see the first sketch after these notes).

    • With my specialized example above you would certainly want to try different page sizes, or a three-level version.

    • A common optimization is to provide a specialized memory allocator, to avoid multiple allocations of small objects (see the second sketch after these notes).
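
A minimal sketch of the first note (telling the hash map the expected element count up front); the one-million figure is only an assumption:

#include <cstdint>
#include <unordered_map>

int main() {
    std::unordered_map<std::uint32_t, std::uint32_t> m;
    m.reserve(1000000);   // pre-size the bucket array so the inserts below
                          // never trigger a rehash of already-inserted keys
    for (std::uint32_t i = 0; i < 1000000; ++i)
        m.emplace(i, i * 2);
}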
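
And a minimal sketch of the allocator note, using the standard C++17 polymorphic memory resources (std::pmr); the 64 MiB arena size and the element count are arbitrary assumptions:

#include <cstdint>
#include <map>
#include <memory_resource>

int main() {
    // One big upfront buffer; every tree node is carved out of it instead of
    // each node going through operator new individually.
    std::pmr::monotonic_buffer_resource arena(64 * 1024 * 1024);
    std::pmr::map<std::uint32_t, std::uint32_t> m(&arena);

    for (std::uint32_t i = 0; i < 100000; ++i)
        m.emplace(i, i * 2);
}   // the arena releases all its memory in one go when it is destroyed

A monotonic arena trades per-node deallocation for one cheap release of the whole buffer, which suits maps that are filled once and thrown away together.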