What is the time complexity of the algorithm below?

There is an algorithm that (1) is recognizable as an implementation of Algorithm 1 and (2) runs in time O(|E| + |V|).

First, let's consider the essence of Algorithm 1. Until the graph is empty, do the following: record the priority of the node with the lowest priority as the core number of that node, and delete that node. The priority of a node is defined dynamically as the maximum of (1) its degree in the residual graph and (2) the core numbers of its deleted neighbors. Observe that the priority of a node never increases: its residual degree never increases, and a deleted neighbor's core number cannot exceed the node's own priority, since a higher-priority neighbor would not be deleted first. Observe also that when the minimum-priority node v (with priority p) is deleted, each surviving neighbor's priority decreases by at most one: the neighbor loses one unit of degree but gains p as a candidate core number, and p is at most the neighbor's current priority. Consequently, the minimum priority in the queue drops by at most one per outer loop iteration.
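
To make the process concrete, here is a direct, unoptimized transcription of that loop in Python. This is a sketch for illustration only (the linear scan for the minimum makes it O(|V|^2 + |E|), not linear); `graph`, assumed to map each node to a set of neighbors, and all other names are choices of this sketch, not part of Algorithm 1:

```python
def core_numbers_naive(graph):
    """Peeling loop as described above; graph: dict node -> set of neighbors."""
    degree = {v: len(graph[v]) for v in graph}  # residual degrees
    best = {v: 0 for v in graph}    # max core number over deleted neighbors
    alive = set(graph)
    core = {}
    while alive:
        # Priority of u = max(residual degree, core numbers of deleted neighbors).
        v = min(alive, key=lambda u: max(degree[u], best[u]))
        core[v] = max(degree[v], best[v])       # record priority as core number
        alive.remove(v)                         # delete v from the residual graph
        for u in graph[v]:
            if u in alive:
                degree[u] -= 1
                best[u] = max(best[u], core[v])
    return core
```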

The data structure support sufficient to achieve O(|E| + |V|) time splits nicely into two pieces; a sketch of how they combine in the main loop appears right after this list.

  1. A graph that, starting from the initial graph (initialization time O(|E| + |V|)), supports

    • Deleting a node (O(degree + 1), summing to O(|E| + |V|) over all nodes),
    • Getting the residual degree of a node (O(1)).
  2. A priority queue that, starting from the initial degrees (initialization time O(|V|)), supports

    • Popping the minimum priority element (O(1) amortized),
    • Decreasing the priority of an element by one (O(1)), never less than zero.
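
Putting the two pieces together, the main loop might look like the following Python sketch. `ResidualDegrees` and `BucketQueue` are hypothetical names for the two components, sketched after the next two paragraphs; only the operations listed above are used.

```python
def core_numbers_linear(adj):
    """O(|E| + |V|) peeling; adj maps each node to a list of neighbors."""
    g = ResidualDegrees(adj)                        # graph component, sketched below
    q = BucketQueue({v: g.degree(v) for v in adj})  # queue component, sketched below
    best = {v: 0 for v in adj}  # max core number over each node's deleted neighbors
    core = {}
    for _ in range(len(adj)):
        v = q.pop_min()
        core[v] = max(g.degree(v), best[v])         # v's priority at deletion time
        for u in g.delete(v):                       # surviving neighbors of v
            best[u] = max(best[u], core[v])
            # By the observation above, u's priority drops by at most one.
            if max(g.degree(u), best[u]) < q.priority(u):
                q.decrease(u)
    return core
```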

A suitable graph implementation uses doubly linked adjacency lists. Each undirected edge has two corresponding list nodes, one in each endpoint's list, which are linked to each other in addition to the previous/next nodes originating from the same vertex; this makes deleting a node cost time proportional to its residual degree. Degrees can be stored in a hash table. It turns out, though, that only the residual degrees are needed to implement this algorithm: scanning a deleted node's static adjacency list and skipping neighbors that are already gone touches each original edge at most twice, which still sums to O(|E|), so don't bother storing the residual graph.
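
Here is a minimal sketch of that residual-degrees-only variant (names are illustrative). A single deletion costs O(original degree + 1) rather than O(residual degree + 1), but the total over all deletions is still O(|E| + |V|):

```python
class ResidualDegrees:
    """Residual degrees over static adjacency lists; no residual graph stored."""
    def __init__(self, adj):            # adj: dict node -> list of neighbors
        self.adj = adj
        self.alive = {v: True for v in adj}
        self.deg = {v: len(adj[v]) for v in adj}

    def degree(self, v):                # O(1)
        return self.deg[v]

    def delete(self, v):                # O(len(adj[v]) + 1); O(|E| + |V|) in total
        self.alive[v] = False
        survivors = []
        for u in self.adj[v]:
            if self.alive[u]:
                self.deg[u] -= 1        # u loses its edge to v
                survivors.append(u)
        return survivors
```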

Since the priorities are numbers between 0 and |V| - 1, a bucket queue works very nicely. This implementation consists of a |V|-element vector of doubly linked lists, where the list at index p contains the elements of priority p. We also track a lower bound on the minimum priority of an element in the queue. To find the minimum-priority element, we examine the lists in order starting from this bound, advancing the bound past empty lists. A single pop costs O(|V|) in the worst case, but the total scanning cost is O(|V|): by the observation above, each pop lowers the minimum priority by at most one, so the bound decreases at most |V| times over the whole run, and therefore increases only O(|V|) times as well. Crediting the algorithm whenever the bound increases and debiting it whenever the bound decreases yields an amortization scheme that offsets the variable cost of the scanning.
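
A matching bucket-queue sketch (illustrative names; hash sets stand in for the doubly linked lists and give the same constant-time bucket updates, in expectation):

```python
class BucketQueue:
    """Min-priority queue for integer priorities in [0, |V|)."""
    def __init__(self, priorities):     # priorities: dict node -> int
        self.prio = dict(priorities)
        self.buckets = [set() for _ in range(len(priorities) + 1)]
        for v, p in priorities.items():
            self.buckets[p].add(v)
        self.bound = 0                  # lower bound on the minimum priority

    def priority(self, v):              # O(1)
        return self.prio[v]

    def pop_min(self):                  # O(1) amortized
        while not self.buckets[self.bound]:
            self.bound += 1             # scan step, paid for by a credit
        v = self.buckets[self.bound].pop()
        del self.prio[v]
        return v

    def decrease(self, v):              # O(1); assumes self.prio[v] > 0
        p = self.prio[v]
        self.buckets[p].remove(v)
        self.prio[v] = p - 1
        self.buckets[p - 1].add(v)
        if p - 1 < self.bound:
            self.bound = p - 1          # debit: the bound decreases
```

With these two structures, the `core_numbers_linear` sketch above performs |V| pops and at most |E| priority decreases, for O(|E| + |V|) in total.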


If you choose a data structure that supports efficient minimum lookup and efficient deletion (such as a self-balancing binary search tree or a min-heap), this can be done in O(|E|log|V|).

  • Creating the data structure in line 1 takes O(|V|) (heapify for a binary heap) or O(|V|log|V|) (repeated insertion into a balanced tree).
  • The loop goes exactly once over each node v in V.
  • For each such node, it needs to extract the minimum value and adjust the data structure so that the next minimum can be found efficiently in the next iteration. O(log|V|) is required for that in the comparison model (otherwise you could implement heap sort in o(nlogn) time; note the little-o notation here).
  • Getting the neighbors and removing items from V and E can be done in O(|neighbors|) using an efficient set implementation, such as a hash table. Since each edge and node is removed exactly once in this step, it sums over all iterations to O(|V| + |E|).
  • The for loop over the neighbors likewise sums to O(|E|), since each edge is visited exactly once here.
  • Inside this loop, however, you might need to update p, and each such update costs O(log|V|), so these updates sum to O(|E|log|V|) overall.

This sums to O(|V|log|V| + |E|log|V|). Assuming the graph is not very sparse (that is, |E| is in Omega(|V|)), this is O(|E|log|V|).
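
For illustration, here is a Python sketch of this approach, assuming Algorithm 1 is the standard minimum-degree peeling for core numbers. Python's heapq has no decrease-key operation, so the sketch uses the usual lazy-deletion trick: every degree update pushes a fresh entry, and stale entries are skipped on pop. At most O(|V| + |E|) entries ever enter the heap, so the O(|E|log|V|) bound is preserved. `graph` (a dict mapping each node to a set of neighbors with mutually comparable labels, for tie-breaking) and the function name are assumptions of the sketch:

```python
import heapq

def core_numbers_heap(graph):
    """O(|E| log |V|) peeling; graph: dict node -> set of neighbors."""
    deg = {v: len(graph[v]) for v in graph}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)                     # O(|V|) initialization
    core = {}
    k = 0
    while heap:
        d, v = heapq.heappop(heap)          # O(log |V|) per pop
        if v in core or d != deg[v]:
            continue                        # stale entry: superseded by a later push
        k = max(k, d)                       # core numbers are the running maximum
        core[v] = k
        for u in graph[v]:
            if u not in core:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))  # the O(log |V|) "update p" step
    return core
```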