Understanding Time complexity calculation for Dijkstra Algorithm

Dijkstra's shortest path algorithm is O(E log V) where:

  • V is the number of vertices
  • E is the total number of edges

Your analysis is correct, but your symbols have different meanings! You say the algorithm is O(VE log V) where:

  • V is the number of vertices
  • E is the maximum number of edges attached to a single node.

Let's rename your E to N. So one analysis says O(E log V) and the other says O(VN log V). Both are correct, and in fact E = O(VN). The difference is that E log V is a tighter estimate.
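To make the O(E log V) count concrete, here is a minimal sketch of the binary-heap ("lazy deletion") variant being analyzed. The function name and graph representation are my own choices, not from the question:

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra with a binary min-heap, lazy-deletion variant.

    adj: {vertex: [(neighbor, weight), ...]}
    Each edge causes at most one push, so the heap holds O(E) entries
    and every push/pop costs O(log E) = O(log V^2) = O(log V),
    giving O(E log V) overall.
    """
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale entry: skip (lazy deletion)
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

For example, `dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}, 'a')` returns `{'a': 0, 'b': 1, 'c': 3}`, since the path a→b→c beats the direct edge a→c.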


Adding a more detailed explanation, as I understood it, just in case:

  • O(for each vertex using min heap: for each edge linearly: push the vertex that edge points to onto the min heap)
  • V = number of vertices
  • O(V * (pop vertex from min heap + find unvisited vertices in edges * push them to min heap))
  • E = maximum number of edges per vertex
  • O(V * (pop vertex from min heap + E * push unvisited vertices to min heap)). Note that we can push the same node multiple times here before we get to "visit" it.
  • O(V * (log(heap size) + E * log(heap size)))
  • O(V * ((E + 1) * log(heap size)))
  • O(V * (E * log(heap size)))
  • In the worst case E = V, because each vertex can have an edge to every other vertex
  • O(V * (V * log(heap size)))
  • O(V^2 * log(heap size))
  • heap size is V^2 because we push to it every time we want to update a distance, and each vertex's distance can be updated up to V times. E.g. the last vertex might first be reached with distance 10 via the 1st vertex, then improved to 9 via the 2nd, then 8 via the 3rd, and so on, with a push each time
  • O(V^2 * log(V^2))
  • O(V^2 * 2 * log(V))
  • O(V^2 * log(V))
  • V^2 is also the total number of edges, so if we let E = V^2 (as in the standard naming), we get O(E * log(V))
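The re-pushing step above can be demonstrated directly. This is a sketch (names and the test graph are mine): the same lazy-deletion Dijkstra, instrumented to count how many times each vertex is pushed, run on a graph built so that every edge into the sink improves its tentative distance:

```python
import heapq

def dijkstra_count_pushes(adj, src):
    # Lazy-deletion Dijkstra that also counts how many times each
    # vertex is pushed, illustrating that a vertex can be re-pushed
    # once per incoming edge that improves its distance.
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    pushes = {v: 0 for v in adj}
    pushes[src] = 1
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale entry: skip
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                pushes[v] += 1
                heapq.heappush(heap, (dist[v], v))
    return dist, pushes

# Chain 0->1->2->3 of weight 1, plus direct edges i->4 chosen so the
# sink's tentative distance improves at every step (8, 7, 6, then 5).
adj = {0: [(1, 1), (4, 8)],
       1: [(2, 1), (4, 6)],
       2: [(3, 1), (4, 4)],
       3: [(4, 2)],
       4: []}
dist, pushes = dijkstra_count_pushes(adj, 0)
# dist[4] == 5, and pushes[4] == 4: one push per incoming edge
```

So the heap can accumulate one entry per edge, which is exactly why the heap size is bounded by E (up to V^2 on a dense graph) rather than by V.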

Let n be the number of vertices and m be the number of edges.

Since with Dijkstra's algorithm you have O(n) delete-mins and O(m) decrease_keys, each costing O(log n), the total run time using binary heaps is O((m + n) log n). The cost of decrease_key can be amortized down to O(1) using Fibonacci heaps, for a total run time of O(n log n + m), but in practice this is often not done: the constant-factor penalties of Fibonacci heaps are fairly large, and on random graphs the number of decrease_keys is far below its upper bound (more in the range of O(n log(m/n)), which is much better on sparse graphs where m = O(n)). So always be aware that the total run time depends both on your data structures and on the input class.
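The two operation counts in this analysis can be tallied directly. A sketch (Python's stdlib has no Fibonacci heap, so this uses the binary-heap lazy-push variant, where each successful relaxation plays the role of a decrease_key):

```python
import heapq

def dijkstra_op_counts(adj, src):
    # Tallies the two quantities the analysis counts: effective
    # delete-mins (at most n) and successful relaxations, which stand
    # in for decrease_key operations (at most m).
    dist = {v: float('inf') for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    delete_mins = relaxations = 0
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:          # stale entry: skip
            continue
        delete_mins += 1
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                relaxations += 1
                heapq.heappush(heap, (dist[v], v))
    return dist, delete_mins, relaxations
```

On any input, `delete_mins <= n` and `relaxations <= m`, so with O(log n) per heap operation the total stays within O((m + n) log n); on typical sparse inputs the relaxation count is far below m, which is the point about input classes above.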