Why use 1<<4 instead of 16?

Writing 1 << 4 instead of 16 doesn't change the behavior here. It's done to emphasize that the number is a power of two, and not a completely arbitrary choice. It thus reminds developers experimenting with different numbers that they should stick to the pattern (e.g., use 1 << 3 or 1 << 5, not 20) so they don't break all the methods which rely on it being a power of two. There is a comment just above:

/**
 * The default initial capacity - MUST be a power of two.
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

No matter how big a java.util.HashMap grows, its table capacity (array length) is maintained as a power of two. This allows the use of a fast bitwise AND operation (&) to select the bucket index where an object is stored, as seen in methods that access the table:

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) { /// <-- bitwise 'AND' here
        ...

There, n is the table capacity, and (n - 1) & hash wraps the hash value to fit that range.
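To see that mask in action outside of HashMap, here is a small standalone sketch (my own class and hash values, not JDK code) applying the same (n - 1) & hash expression:

```java
public class BucketIndexDemo {
    public static void main(String[] args) {
        int n = 16; // table capacity, a power of two
        // Hashes that are 16 apart land in the same bucket;
        // every result falls in the range 0..15.
        for (int hash : new int[] { 5, 21, 37, 1_000_003 }) {
            System.out.println(hash + " -> bucket " + ((n - 1) & hash));
        }
    }
}
```

Here 5, 21, and 37 all map to bucket 5, while 1000003 maps to bucket 3.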

More detail

A hash table has an array of 'buckets' (HashMap calls them Node), where each bucket stores zero or more key-value pairs of the map.

Every time we get or put a key-value pair, we compute the hash of the key. The hash is some arbitrary (maybe huge) number. Then we compute a bucket index from the hash, to select where the object is stored.

Hash values bigger than the number of buckets are "wrapped around" to fit the table. For example, with a table capacity of 100 buckets, the hash values 5, 105, and 205 would all be stored in bucket 5. Think of it like degrees around a circle, or hours on a clock face.
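In code, that wrap-around is just a remainder. A quick sketch (a hypothetical capacity of 100, not how HashMap actually does it):

```java
public class WrapAroundDemo {
    public static void main(String[] args) {
        int capacity = 100; // hypothetical bucket count
        // All three hashes wrap to the same bucket, like hands on a clock.
        for (int hash : new int[] { 5, 105, 205 }) {
            System.out.println(hash + " -> bucket " + (hash % capacity));
        }
    }
}
```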

(Hashes can also be negative. A value of -95 could correspond to bucket 5, or 95, depending on how it was implemented. The exact formula doesn't matter, so long as it distributes hashes roughly evenly among the buckets.)

If our table capacity n were not a power of two, the formula for the bucket would be Math.abs(hash % n), which uses the modulo operator to take the remainder after division by n, and abs to fix negative values. That would work, but it would be slower.
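A sketch of that slower general-purpose formula (note that Math.abs here is a simplification: production code would also need to handle Integer.MIN_VALUE, whose absolute value overflows):

```java
public class ModuloIndexDemo {
    // General bucket formula for an arbitrary (non-power-of-two) capacity.
    static int bucketIndex(int hash, int n) {
        return Math.abs(hash % n); // abs fixes the sign of negative hashes
    }

    public static void main(String[] args) {
        System.out.println(bucketIndex(105, 100)); // bucket 5
        System.out.println(bucketIndex(-95, 100)); // bucket 95
    }
}
```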

Why slower? Imagine an example in decimal, where you have some random hash value 12,459,217, and an arbitrary table length of 1,234. It's not obvious that 12459217 % 1234 happens to be 753. It's a lot of long division. But if your table length is an exact power of ten, the result of 12459217 % 1000 is simply the last 3 digits: 217.

Written in binary, a power of two is a 1 followed by some number of 0s, so the equivalent trick is possible. For example, if the capacity n is decimal 16, that's binary 10000. So, n - 1 is binary 1111, and (n - 1) & hash keeps only the last bits of the hash corresponding to those 1s, zeroing the rest. This also zeroes the sign bit, so the result cannot be negative. The result is from 0 to n-1, inclusive. That's the bucket index.
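You can watch the mask strip the high bits (and the sign bit) using Integer.toBinaryString. This sketch uses an arbitrary negative hash of my choosing:

```java
public class MaskDemo {
    public static void main(String[] args) {
        int n = 16;         // binary 10000
        int mask = n - 1;   // binary  1111
        int hash = -123457; // an arbitrary negative hash value
        int bucket = mask & hash; // keeps only the low 4 bits
        System.out.println("mask   = " + Integer.toBinaryString(mask));
        System.out.println("hash   = " + Integer.toBinaryString(hash));
        System.out.println("bucket = " + bucket); // always in 0..15
    }
}
```

Even though the hash is negative, masking zeroes the sign bit, so the bucket index is never negative.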

Even as CPUs get faster, integer division remains one of the most expensive single-instruction operations you can execute. It can be 50 times slower than a bitwise AND, so avoiding it in frequently executed code paths can give real improvements.


I can't read the developer's mind, but we do things like that to indicate a relationship between the numbers.

Compare this:

int day = 86400;

vs

int day = 60 * 60 * 24; // 86400

The second example clearly shows the relationship between the numbers, and Java is smart enough to compile that as a constant.
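A quick check that the arithmetic really is folded away: constant expressions like this are evaluated at compile time, per the Java Language Specification, so there is no runtime cost to writing the multiplication out.

```java
public class ConstantFoldDemo {
    // javac folds this expression to the literal 86400 at compile time.
    static final int SECONDS_PER_DAY = 60 * 60 * 24;

    public static void main(String[] args) {
        System.out.println(SECONDS_PER_DAY); // prints 86400
    }
}
```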
