Why is index 0 left unused in a heap implemented as an array?

As I found in the CLRS book, the reason is performance: shift instructions are generally very fast.

On most computers, the LEFT procedure can compute 2*i in one instruction by simply shifting the binary representation of i left by one bit position. Similarly, the RIGHT procedure can quickly compute 2*i+1 by shifting the binary representation of i left by one bit position and then adding in a 1 as the low-order bit. The PARENT procedure can compute i/2 by shifting i right one bit position.

So starting the heap at index 1 probably makes computing the parent, left-child, and right-child indexes a little faster, since each reduces to a single shift plus at most one extra bit operation.
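For illustration, here is a minimal C sketch (my own, not taken from CLRS) of those 1-based index helpers; the names mirror the LEFT, RIGHT, and PARENT procedures quoted above:

    #include <stddef.h>

    /* 1-based layout: the root is at index 1 and index 0 is left unused. */
    static size_t heap_left(size_t i)   { return i << 1;       /* 2*i     */ }
    static size_t heap_right(size_t i)  { return (i << 1) | 1; /* 2*i + 1 */ }
    static size_t heap_parent(size_t i) { return i >> 1;       /* i/2     */ }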


There's no reason why a heap implemented in an array has to leave the item at index 0 unused. If you put the root at 0, then the item at array[index] has its children at array[index*2+1] and array[index*2+2]. The node at array[child] has its parent at array[(child-1)/2].

Let's see.

                  root at 0       root at 1
Left child        index*2 + 1     index*2
Right child       index*2 + 2     index*2 + 1
Parent            (index-1)/2     index/2

So having the root at 0 rather than at 1 costs you an extra add to find the left child, and an extra subtraction to find the parent.
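A corresponding 0-based sketch (hypothetical names, assuming size_t indexes) would be:

    #include <stddef.h>

    /* 0-based layout: the root is at index 0 and no slot is wasted. */
    static size_t left_child(size_t i)  { return i * 2 + 1; }
    static size_t right_child(size_t i) { return i * 2 + 2; }
    static size_t parent(size_t i)      { return (i - 1) / 2; } /* valid for i > 0 only */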

For the more general case of a d-ary heap (a 3-heap, 4-heap, etc.), where each node has NUM_CHILDREN children instead of 2, the formulas are:

                  root at 0                           root at 1
First child       index*NUM_CHILDREN + 1              (index-1)*NUM_CHILDREN + 2
Last child        index*NUM_CHILDREN + NUM_CHILDREN   index*NUM_CHILDREN + 1
Parent            (index-1)/NUM_CHILDREN              (index-2)/NUM_CHILDREN + 1

(For NUM_CHILDREN = 2 these reduce to the binary formulas above; for larger NUM_CHILDREN the root-at-1 layout no longer saves any operations.)
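As a sketch of the general 0-based case (hypothetical names, with NUM_CHILDREN passed in as a parameter d):

    #include <stddef.h>

    /* 0-based d-ary layout: node i has children at i*d + 1 through i*d + d. */
    static size_t dary_child(size_t i, size_t d, size_t k) /* k-th child, 0 <= k < d */
    {
        return i * d + 1 + k;
    }

    static size_t dary_parent(size_t i, size_t d) /* valid for i > 0 only */
    {
        return (i - 1) / d;
    }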

I can't see those few extra instructions making much of a difference in the run time.

For reasons why I think it's wrong to start at 1 in a language that has 0-based arrays, see https://stackoverflow.com/a/49806133/56778 and my blog post, "But that's the way we've always done it!"

Tags: algorithm, heap