Good choice of a parallelized sorting algorithm to implement as homework?

Quicksort can split the unsorted list into two parts, but unfortunately, the parts aren't guaranteed to be anywhere near even. So one machine (or half of a cluster of machines) could end up with 20 entries while the other gets 20 billion.
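To make that concrete, here's a tiny illustration (my own, not part of the original answer): with a naive first-element pivot on already-sorted input, one side of the partition gets everything.

```python
# Illustrative only: how skewed a quicksort partition can get with a
# naive pivot choice (first element) on already-sorted input.
def partition_sizes(data):
    pivot = data[0]
    left = [x for x in data if x < pivot]           # everything below the pivot
    right = [x for x in data[1:] if x >= pivot]     # everything at or above it
    return len(left), len(right)

print(partition_sizes(list(range(1000))))  # → (0, 999): one side gets it all
```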

I can't think of a good way to make heapsort work in parallel. It can be done, but man, that feels really counterintuitive.

Merge sort is the one I think you want.

  • Each split is exactly 50% of the list, so it's easy to split between processors.
  • You can implement merge sort on two sets of tape drives, which means it doesn't require that the whole list be in memory at one time. For large lists, especially those larger than the memory you have available, that's a must-have.
  • Merge sort is also stable in parallel implementations, if it matters.

Merge sort is a great first parallel sorting technique. The best sort is always machine dependent and generally involves a combination of sorting techniques for different size inputs.
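Here's a minimal sketch of the idea using Python's multiprocessing module (the function names are mine, chosen for illustration): split the list into equal-sized chunks, have each worker sort its chunk, then merge.

```python
# A minimal sketch of a parallel merge sort: equal-sized splits go to
# worker processes, then a stable merge combines the sorted pieces.
from multiprocessing import Pool

def merge(left, right):
    """Merge two sorted lists into one sorted list (stable)."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= preserves stability
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

def parallel_merge_sort(data, workers=2):
    if not data:
        return []
    # Split into (near-)equal chunks -- the even split is what makes
    # merge sort easy to distribute across processors.
    n = len(data)
    chunk = (n + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, n, chunk)]
    with Pool(workers) as pool:
        sorted_parts = pool.map(sorted, parts)   # each worker sorts its chunk
    # The one synchronization point: merging the sorted chunks.
    result = sorted_parts[0]
    for part in sorted_parts[1:]:
        result = merge(result, part)
    return result

if __name__ == "__main__":
    print(parallel_merge_sort([5, 2, 9, 1, 7, 3, 8, 6]))
```

Note the final merge loop is sequential here; a real implementation would merge pairs of chunks in parallel as well.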


As Dean J mentions, merge sort is a good candidate. But it has the disadvantage of requiring a synchronization step when both threads are done (the merge).

Though quicksort has the disadvantage of unpredictable partition sizes, you can choose the first pivot carefully (it's the one that decides the per-processor load) so that the first partition divides the work more or less evenly, and then let the algorithm take its course.

The advantage is that you don't need any synchronization after the processors finish their work. Once they're done, the array is already sorted, with no need for an extra merging step, which might be costly.
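A minimal sketch of that approach (names and the median-of-sample pivot heuristic are my own assumptions, not from the answer): partition once around a pivot estimated from a random sample, hand each side to a worker, and just concatenate the results.

```python
# Sketch: parallel quicksort where the first partition is made
# deliberately balanced, so the final step is plain concatenation.
import random
from multiprocessing import Pool

def balanced_pivot(data, sample_size=101):
    """Estimate the median from a random sample, so the first
    partition splits the load roughly in half."""
    sample = random.sample(data, min(sample_size, len(data)))
    return sorted(sample)[len(sample) // 2]

def parallel_quicksort_2way(data):
    if len(data) < 2:
        return list(data)
    pivot = balanced_pivot(data)
    left = [x for x in data if x < pivot]
    right = [x for x in data if x >= pivot]
    with Pool(2) as pool:
        sorted_left, sorted_right = pool.map(sorted, [left, right])
    # Everything in `left` is strictly less than everything in `right`,
    # so concatenation alone yields the fully sorted result -- no merge.
    return sorted_left + sorted_right

if __name__ == "__main__":
    data = [random.randint(0, 1000) for _ in range(20)]
    print(parallel_quicksort_2way(data))
```

The sampling step is the part that makes or breaks the load balance; with a bad pivot you're back to the 20-vs-20-billion split described above.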