What's the fastest algorithm for sorting a linked list?

It is reasonable to expect that you cannot do better than O(N log N) running time with a comparison-based sort.

The more interesting questions are whether you can sort it in place, whether the sort is stable, what its worst-case behavior is, and so on.

Simon Tatham, of PuTTY fame, explains how to sort a linked list with merge sort. He concludes with the following comments:

Like any self-respecting sort algorithm, this has running time O(N log N). Because this is Mergesort, the worst-case running time is still O(N log N); there are no pathological cases.

Auxiliary storage requirement is small and constant (i.e. a few variables within the sorting routine). Thanks to the inherently different behaviour of linked lists from arrays, this Mergesort implementation avoids the O(N) auxiliary storage cost normally associated with the algorithm.

There is also an example implementation in C that works for both singly and doubly linked lists.
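
To make the idea concrete, here is a minimal sketch of a bottom-up merge sort on a singly linked list. This is an illustration, not Tatham's actual code: the node type, field names, and function name are invented, the payload is assumed to be an int, and the merge keeps the sort stable while using only a handful of pointer variables as auxiliary storage.

    /* Sketch only: bottom-up merge sort of a singly linked list.
     * Node layout and names are assumptions for illustration. */
    #include <stddef.h>

    struct node {
        int value;
        struct node *next;
    };

    struct node *list_merge_sort(struct node *head)
    {
        size_t run = 1;                        /* current sorted-run length */

        if (head == NULL)
            return NULL;

        for (;;) {
            struct node *p = head, *tail = NULL;
            size_t merges = 0;                 /* merges performed this pass */

            head = NULL;
            while (p != NULL) {
                struct node *q = p;
                size_t psize = 0, qsize = run;

                merges++;
                while (psize < run && q != NULL) {   /* locate second run */
                    psize++;
                    q = q->next;
                }

                /* merge the run starting at p with the run starting at q */
                while (psize > 0 || (qsize > 0 && q != NULL)) {
                    struct node *e;
                    if (psize == 0) {
                        e = q; q = q->next; qsize--;
                    } else if (qsize == 0 || q == NULL) {
                        e = p; p = p->next; psize--;
                    } else if (p->value <= q->value) {   /* <= keeps it stable */
                        e = p; p = p->next; psize--;
                    } else {
                        e = q; q = q->next; qsize--;
                    }
                    if (tail) tail->next = e; else head = e;
                    tail = e;
                }
                p = q;                         /* next pair of runs */
            }
            tail->next = NULL;

            if (merges <= 1)                   /* a single run left: sorted */
                return head;
            run *= 2;                          /* double run length each pass */
        }
    }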

As @Jørgen Fogh mentions below, big-O notation can hide constant factors that matter in practice: one algorithm may outperform another because of memory locality, a small number of items, and so on.


Depending on a number of factors, it may actually be faster to copy the list to an array and then use a Quicksort.

The reason this might be faster is that an array has much better cache performance than a linked list. If the nodes in the list are dispersed in memory, you may be generating cache misses all over the place. Then again, if the array is large you will get cache misses anyway.
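
As a rough sketch of that approach (a hypothetical helper, not the test program from the edit below; it assumes the payload is an int and simply rewrites the values back into the existing nodes rather than relinking them):

    /* Sketch only: copy list values into a temporary array, qsort() it,
     * then write the sorted values back into the nodes in order. */
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);          /* avoids overflow of x - y */
    }

    int sort_list_via_array(struct node *head, size_t n)
    {
        int *buf = malloc(n * sizeof *buf);
        struct node *p;
        size_t i = 0;

        if (buf == NULL)
            return -1;                     /* allocation failed */

        for (p = head; p != NULL; p = p->next)
            buf[i++] = p->value;           /* list -> array */

        qsort(buf, n, sizeof *buf, cmp_int);

        i = 0;
        for (p = head; p != NULL; p = p->next)
            p->value = buf[i++];           /* array -> list */

        free(buf);
        return 0;
    }

Rewriting the values in place like this only makes sense when the payload is cheap to copy; for large payloads you would sort an array of node pointers instead and relink the list.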

Mergesort parallelises better, so it may be a better choice if that is what you want. It is also much faster if you perform it directly on the linked list.

Since both algorithms run in O(n * log n), making an informed decision would involve profiling them both on the machine you would like to run them on.

--- EDIT

I decided to test my hypothesis and wrote a C program that measured the time (using clock()) taken to sort a linked list of ints. I tried a linked list where each node was allocated with malloc() and a linked list where the nodes were laid out linearly in an array, so that cache performance would be better. I compared these with the built-in qsort, which included copying everything from the fragmented list to an array and copying the result back again. Each algorithm was run on the same 10 data sets and the results were averaged.
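
The test program itself is not reproduced here, but the clock()-based timing pattern it describes looks roughly like this (a sketch only; the node type matches the earlier examples and the sort under test is passed in as a function pointer):

    /* Sketch only: time one sort run with clock(), as described above. */
    #include <time.h>

    struct node { int value; struct node *next; };

    double time_sort(struct node **head,
                     struct node *(*sort)(struct node *))
    {
        clock_t start = clock();
        *head = sort(*head);               /* run the sort under test */
        return (double)(clock() - start) / CLOCKS_PER_SEC;   /* seconds */
    }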

These are the results:

N = 1000:
    Fragmented list with merge sort: 0.000000 seconds
    Array with qsort:                0.000000 seconds
    Packed list with merge sort:     0.000000 seconds

N = 100000:
    Fragmented list with merge sort: 0.039000 seconds
    Array with qsort:                0.025000 seconds
    Packed list with merge sort:     0.009000 seconds

N = 1000000:
    Fragmented list with merge sort: 1.162000 seconds
    Array with qsort:                0.420000 seconds
    Packed list with merge sort:     0.112000 seconds

N = 100000000:
    Fragmented list with merge sort: 364.797000 seconds
    Array with qsort:                61.166000 seconds
    Packed list with merge sort:     16.525000 seconds

Conclusion:

At least on my machine, copying into an array is well worth it to improve the cache performance, since you rarely have a completely packed linked list in real life. It should be noted that my machine has a 2.8GHz Phenom II, but only 0.6GHz RAM, so the cache is very important.


This is a nice little paper on the topic. The author's empirical conclusion is that treesort is best, followed by quicksort and mergesort. Sediment sort, bubble sort, and selection sort perform very badly.

Ching-Kuang Shene, "A Comparative Study of Linked List Sorting Algorithms",
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.9981