Sorting algorithms
About
A sorting algorithm is an algorithm used to rearrange elements in a list into a certain order - for example, taking a series of numbers and rearranging them in ascending order. Rearranging objects like this is not as easy as it sounds, and so there are several different types of well-known sorting algorithms, some more efficient than others. Below is a brief account of some of the most well-known sorting algorithms in computer science.
NOTE: Some sorting algorithms may build and return new arrays/vectors, but most sorting algorithms try to reduce space by "swapping elements" where they are in the array. In most of my code I show C++ and make use of a "swap()" function which comes with "#include <algorithm>" (in modern C++ it lives in <utility>), but it can easily be written as an inline function as shown below. Incidentally, the O(n log n) "sort()" function in the same library is about the fastest sort you can hope for unless you have some very specific sorting requirements.
inline void swap(int &a, int &b) { int temp = a; a = b; b = temp; }
Python Solution: In Python, names are references to objects, so there is no special swap function - the idiomatic way to swap two values is tuple assignment: first, last = last, first
Bubble Sort
Bubble sort is a simple sorting algorithm that works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items, and swapping them if they are in the wrong order. It's named bubble sort because larger elements "bubble" to the top of the list on each iteration. Bubble sort is simple to code but is also well known as one of the slowest sorting algorithms there is!
Performance: (O(n^2)) average and worst case.
In C++:
void bubbleSort(vector<int> &v)
{
for(int e=v.size(); e>1; e--) // e represents position beyond which all items are sorted.
for(int i=1; i<e; i++)
{
if(v[i-1] > v[i]) // If it's more than the next value: swap these values.
swap(v[i-1], v[i]);
}
}
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
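One common tweak (not shown above): track whether a pass made any swaps and stop early, which gives O(n) behaviour on an already-sorted array. A rough sketch in Python (the function name is my own):

```python
def bubble_sort_early_exit(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # No swaps in a full pass means the array is sorted.
            break
```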
Insertion Sort
Insertion sort builds the sorted list one element at a time from the bottom. For each element it swaps it down until a <= value is found. Insertion sort is more efficient than bubble sort, and can even outperform O(n log n) methods if the array is "almost sorted" (in such cases it's almost O(n)) or n is relatively small.
Performance: (O(n^2)) average case, but (O(n)) best case.
In C++ it can be written in just 3 lines of code:
for(int i=1; i<v.size(); i++)
for(int j=i; j>0 && (v[j] < v[j-1]); j--)
swap(v[j], v[j-1]);
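The same idea translates directly to Python (a sketch; the function name is my own):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        j = i
        while j > 0 and arr[j] < arr[j - 1]:  # Swap down until a <= value is found.
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
```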
.... A Few Other O(n^2) Algorithms:
O(n^2) algorithms like bubble and insertion sort are generally easy to code and for <10,000 elements will generally take less than a second, but they scale horribly if the array gets any larger. A couple of other noteworthy O(n^2) algorithms are:
- Selection Sort - finds the minimum in the list, moves it to the front, and repeats for the remaining (n-1) elements - and thus is O(n^2) in all input cases (just as lousy as bubble sort, though it makes far fewer swaps).
- Shell Sort - moves elements more than 1 position at a time. Shell sort works by running any of the above algorithms over several specially selected "gap" values which start far apart and decrease to 1. As an example, let's use insertion sort and gap values of {5,3,1}. On its first pass (gap=5) it effectively breaks the array into 5 smaller arrays by starting at position 5 and then comparing the following pairs in order: (5&0), (6&1), (7&2), (8&3), (9&4), (10&5;5&0), (11&6;6&1), ... and so on until the end. This approach can help move values in fewer swaps, and the final gap value of 1 is equivalent to insertion sort.
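The gap idea can be sketched in Python as follows (the gap sequence {5,3,1} is just the example from the text; real implementations usually derive the gaps from n):

```python
def shell_sort(arr, gaps=(5, 3, 1)):
    for gap in gaps:
        # Gapped insertion sort: compare each element with the
        # element `gap` positions behind it and swap it down.
        for i in range(gap, len(arr)):
            j = i
            while j >= gap and arr[j] < arr[j - gap]:
                arr[j], arr[j - gap] = arr[j - gap], arr[j]
                j -= gap
```

Because the final gap is 1, the last pass is a plain insertion sort, so the result is always fully sorted.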
Merge Sort
Merge sort is an O(n log n) divide and conquer sorting algorithm. The main steps are:
- Divide the unsorted list into n sublists, each containing 1 element.
- Repeatedly merge sublists to produce new sublists until there is only 1 sublist remaining... the merge involves starting at the front of two sublists and repeatedly removing the smaller value into a new list until both sublists are empty.
Performance: (O(n log n)) average case... (O(n log n)) worst case... con: requires O(n) extra memory.
In code a merge sort works best when broken into two functions - one to perform the breaking, and one to merge two vectors together. Here's some code in C++ to do just that:
void merge(vector<int> &a, vector<int> &b, vector<int> &c) {
c.clear(); // NOTE: to be more efficient we could call "reserve()" here.
int i=0, j=0;
while(i<a.size() && j<b.size()) // Until the end of either array:
c.push_back((a[i]<b[j]) ? a[i++] : b[j++] ); // Add whichever is smaller.
while(i<a.size()) // Add any remaining elements in a.
c.push_back(a[i++]);
while(j<b.size()) // Add any remaining elements in b.
c.push_back(b[j++]);
}
void mergeSort(vector<int> &v) {
if(v.size() <= 1) // base case
return;
int m = v.size()/2; // "middle" index in our vector
vector<int> half1; half1.insert(half1.begin(),v.begin(),v.begin()+m); // |-- Split v into two halves
vector<int> half2; half2.insert(half2.begin(),v.begin()+m,v.end() ); // | (either side of the middle).
mergeSort(half1); // |-- Run mergeSort (recursively) on both halves
mergeSort(half2); // |
merge(half1, half2, v); // Merge the two (sorted) halves back together
}
def merge_sort(arr) -> None:
    if len(arr) <= 1:
        return
    mid = len(arr) // 2  # Floor division.
    left_half = arr[:mid]
    right_half = arr[mid:]
    merge_sort(left_half)
    merge_sort(right_half)
    i, j, k = 0, 0, 0
    while i < len(left_half) and j < len(right_half):
        if left_half[i] < right_half[j]:
            arr[k] = left_half[i]
            i += 1
        else:
            arr[k] = right_half[j]
            j += 1
        k += 1
    while i < len(left_half):  # Wish we could have done `extend()` here!
        arr[k] = left_half[i]
        i += 1
        k += 1
    while j < len(right_half):
        arr[k] = right_half[j]
        j += 1
        k += 1

if __name__ == '__main__':
    arr = [5,4,6,3,2,8,1]
    merge_sort(arr)
    print(arr)
Quicksort
Quicksort is a divide and conquer algorithm where a "pivot" value is selected and all other values are arranged before or after it. It's one of the fastest sorting algorithms and the steps are:
- Pick an element, called a pivot, from the list.
- Reorder the list so that all elements with values less than or equal (<=) to the pivot come before the pivot, while all elements with values greater than or equal (>=) to the pivot come after it (equal values can go either side). After this partitioning, the pivot is in its final position. This is called the partition operation.
- Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
Performance: (O(n log n)) average case... (O(n^2)) worst case (rare).
In pseudocode:
function qsort1('array'):
    if length('array') <= 1: // Base case.
        return 'array'       // An array of zero or one element is already sorted.
    select and remove a pivot value 'pivot' from 'array'
    create empty lists 'less' and 'greater'
    for each 'x' in 'array':
        if 'x' <= 'pivot' then append 'x' to 'less'
        else append 'x' to 'greater'
    return concatenate(qsort1('less'), 'pivot', qsort1('greater')) // Two recursive calls.
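The pseudocode translates almost line for line into Python (a sketch; the random pivot choice and function name are my own):

```python
import random

def qsort1(arr):
    if len(arr) <= 1:                              # Base case.
        return arr
    rest = list(arr)
    pivot = rest.pop(random.randrange(len(rest)))  # Select and remove a pivot.
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return qsort1(less) + [pivot] + qsort1(greater)
```

Because the pivot is removed before partitioning, each recursive call works on a strictly smaller list, so the recursion always terminates even when all values are identical.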
Note that creating extra vectors can be expensive, so another approach is to pass the indices of the lower (l) and upper (u) elements as per this example in C++ (randInt(l,u) is assumed to return a random index in [l,u]):
void qsort2(vector<int> &v, int l, int u)
{
if(l>=u) // Base case.
return;
swap(v[l], v[randInt(l,u)]); // Randomly select a pivot and move to front.
int mid = l; // The position below which all values are < our pivot.
for(int i=l+1; i<=u; i++) // For each element:
{
if(v[i] < v[l]) // If less than pivot value: move appropriate position near front.
swap(v[++mid], v[i]);
}
swap(v[l], v[mid]); // Move the pivot to its correct spot.
qsort2(v, l, mid-1); // Recursive calls.
qsort2(v, mid+1, u); // Recursive calls.
}
Both algorithms above work great when the array is all random numbers, but now let's consider a few extreme cases: (#1) the array is already sorted, (#2) the array is sorted in decreasing order and (#3) all the elements are identical. Choosing the first element as a pivot will take O(n^2) because the pivot will end up at the front (or back for #2) and peel off one element each recursion. To avoid #1 and #2 it's thus a good idea to randomly choose a pivot as demonstrated in the C++ example above. To account for #3 (all the same values) a crude solution might be to place these lines just before the recursive calls:
int halfway = l + (u-l)/2; // The middle point where we ideally want our pivot:
if(mid < halfway) {
    int i = mid+1;
    for(; i<halfway && v[i]==v[mid]; i++); // Iterates towards the middle over all identical values (same val as pivot).
    mid = i-1;
} //... then a similar decrementing loop if mid is > halfway.
A more robust solution is to start at both ends searching for values (i) "<" from the front and (j) ">" from the back, swapping "back to front" values and repeating this until i and j pass each other, at which point the pivot is swapped with position j. This results in more swaps than the first method, but if all values are identical it means the pivot ends up in the middle and thus requires almost exactly (O(n log n)) comparisons.
const int CUTOFF = 1;
void qsort3(vector<int> &v, int l, int u)
{
if(u-l < CUTOFF) // Where a CUTOFF of 1 might be default.
return;
swap(v[l], v[randInt(l,u)]);
int i=l+1; // An iterator which starts at the front.
int j=u; // An iterator which starts at the end.
while(true)
{
for(; i<=u && v[i]<v[l]; i++);
for(; j>l && v[j]>v[l]; j--); // NOTE: if needed we can remove j>l because v[l] acts like a sentinel.
if(i>j) // When iterators overtake: stop.
break;
swap(v[i++], v[j--]); // Swap the out-of-place pair and continue.
}
swap(v[l], v[j]);
qsort3(v, l, j-1);
qsort3(v, j+1, u );
}
Due to its recursive nature, quicksort also carries overhead: the call stack grows to O(log n) levels on average (O(n) in the worst case), and there will be O(n) function calls on ranges of just one element - this gets especially expensive in the pseudocode example where extra arrays are created at each call! To reduce the number of calls a very useful optimization is thus to introduce a "cutoff value" (* see qsort3). The end result is an "almost sorted" array which can be very quickly finished with insertion sort, and a cutoff value of ~50 can double the speed and drastically reduce extra memory.
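A sketch of the cutoff idea in Python (CUTOFF = 50 and the function names are my own; the partition mirrors qsort2's scheme, and a final insertion-sort pass finishes the "almost sorted" array):

```python
import random

CUTOFF = 50  # Assumed value; worth tuning for real input.

def quicksort_with_cutoff(arr, l=0, u=None):
    if u is None:
        u = len(arr) - 1
    if u - l < CUTOFF:          # Leave small ranges "almost sorted".
        return
    p = random.randint(l, u)    # Random pivot, moved to the front.
    arr[l], arr[p] = arr[p], arr[l]
    mid = l                     # Position below which values are < pivot.
    for i in range(l + 1, u + 1):
        if arr[i] < arr[l]:
            mid += 1
            arr[i], arr[mid] = arr[mid], arr[i]
    arr[l], arr[mid] = arr[mid], arr[l]
    quicksort_with_cutoff(arr, l, mid - 1)
    quicksort_with_cutoff(arr, mid + 1, u)

def hybrid_sort(arr):
    quicksort_with_cutoff(arr)
    for i in range(1, len(arr)):  # Insertion sort cleans up the small ranges.
        j = i
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]
            j -= 1
```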
Modifying Quicksort
Because it needs no extra memory, quicksort is one of the fastest and most preferred algorithms. No one sorting algorithm is fastest for all arrays however, so while quicksort is often at the top, some of the other O(n log n) algorithms listed below - especially radix sort - may perform better on certain input. We've already seen how to modify quicksort to make it faster, but another nice feature of quicksort is that it can be modified to find the "Mth" biggest element in just linear time O(n). The following non-recursive function shows how we can find the median value (the middle-most value) in linear time, because it never needs to sort the whole array. The reason we can do this in O(n) is that on each successive loop we know which side of the pivot our desired value is on - and we don't have to worry about the other half anymore. The number of elements halves from n to n/2 to n/4 to n/8... and if you add up all these fractions you need an average of about 2*n operations.
int findMedian(vector<int> &v)
{
int l=0;
int u = v.size()-1;
int targetM = v.size()/2; // Represents the value in a sorted array we want (without needing to sort the whole array).
while(true)
{
swap(v[l], v[randInt(l,u)]); // Randomly select a pivot and move to front.
int m = l; // The positions below which all values are < our pivot.
for(int i=l+1; i<=u; i++)
if(v[i] < v[l])
swap(v[++m], v[i]);
swap(v[l], v[m]);
if (targetM == m) return v[m];
else if (targetM < m) u = m-1; // Desired position is left of the pivot.
else l = m+1;                  // Desired position is right of the pivot.
}
}
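The same selection idea can be sketched in Python; this hypothetical quickselect returns the k-th smallest element (0-based) and, like the C++ version, partially reorders the array as it goes:

```python
import random

def quickselect(arr, k):
    # Returns the k-th smallest element (0-based) without fully sorting.
    l, u = 0, len(arr) - 1
    while True:
        p = random.randint(l, u)      # Random pivot, moved to the front.
        arr[l], arr[p] = arr[p], arr[l]
        m = l                         # Position below which values are < pivot.
        for i in range(l + 1, u + 1):
            if arr[i] < arr[l]:
                m += 1
                arr[i], arr[m] = arr[m], arr[i]
        arr[l], arr[m] = arr[m], arr[l]
        if k == m:
            return arr[m]
        elif k < m:                   # Desired position is left of the pivot.
            u = m - 1
        else:                         # Desired position is right of the pivot.
            l = m + 1
```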
Heap Sort
Heap sort uses a special "max heap" data structure - a binary tree structure where the maximum element is always the top node - to sort elements in the array. Heaps are efficiently stored in arrays where element 0 is the root, 0's children are 1 & 2, 1's children are 3 & 4, 2's children are 5 & 6, and so on. In a heap sort a "heap" grows from the front of the array until the entire array is a heap, and then elements are popped from the very top of the heap to just after the heap, until the heap disappears and what's left is a sorted array. The algorithm works as follows:
- Each element (n) is "added" to the heap by doing a "siftup" - swapped up as long as its parent has a smaller (<) value.
- Each element is removed from the heap in decreasing order by swapping element 0 with n. Element 0 then does a "siftdown" - swapped with its larger child so long as this child has a larger (>) value.
Performance: (O(n log n)) average case.
Below I've written a heap sort in C++.
void heapSort(vector<int> &v)
{
    int totN = v.size();
    int n = 0; // current size of heap
    for(n=0; n<totN; n++) //## build heap by doing a "siftup" on each new element:
    {
        int i=n;       // the newest element added to heap
        int p=(i-1)/2; // its parent (0-based: children of i are 2i+1 and 2i+2)
        for(; i>0 && v[i] > v[p]; i=p, p=(i-1)/2) // while bigger than parent: swap
            swap(v[i], v[p]);
    }
    for(n=totN; n>=1;) //## deconstruct heap by moving max element to the end and doing a "siftdown":
    {
        swap(v[0], v[--n]); // move max element to the end
        int i = 0;          // current element to "siftdown"
        int c = 1;          // first child of current element
        for(; c<n; i=c, c=2*i+1)
        {
            if(c+1<n && v[c+1] > v[c]) // if the other child is bigger: use that child
                c++;
            if(v[i] >= v[c])           // if node is at least as big as its larger child: stop
                break;
            swap(v[c], v[i]);          // else swap with child and continue
        }
    }
}
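For comparison, Python's standard library heapq module (a binary min-heap, so elements pop in increasing order) gives a very short heap sort; note this sketch empties the input list:

```python
import heapq

def heap_sort(arr):
    heapq.heapify(arr)  # O(n) in-place build of a min-heap.
    # Pop the minimum n times; each pop is O(log n).
    return [heapq.heappop(arr) for _ in range(len(arr))]
```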
Radix Sort
Radix sort is a non-comparative integer sorting algorithm which groups keys using their individual digits starting from least to most significant (from right to left) and placing them into buckets each round. Since a computer stores numbers as bits it is most convenient to consider each bit as a digit, but in the example below I have used base 10 notation and show several three digit numbers, hence there are three rounds.
INPUT .... | 1st pass ... | 2nd pass ... | 3rd pass ... |
---|---|---|---|
470 | 470 | 470 | 174 |
293 | 191 | 174 | 191 |
393 | 282 | 282 | 282 |
584 | 293 | 584 | 293 |
191 | 393 | 191 | 393 |
174 | 584 | 293 | 470 |
282 | 174 | 393 | 584 |
Performance: (O(kn)) where k is the average key length (in digits).... so not necessarily better than quick-sort.
void radixSort(vector<int> &v)
{
vector< vector<int>> buckets; // Contains 10 "buckets" with each containing a vector
buckets.resize(10); // of ints with that value in the current digit value.
int pow10 = 1; // Holds increasing powers of 10
int max = *max_element(v.begin(), v.end()); // Maximum int value in v (requires <algorithm> library).
for(; pow10 <= max; pow10 *= 10) // For increasing powers of 10:
{
// STEP 1: POPULATE BUCKETS:
for(int i=0; i<v.size(); i++) { // For each value in array:
int bucket_num = (v[i] / pow10) % 10; // Determine what value (0-9) it has in that digit location..
buckets[ bucket_num ].push_back(v[i]); // And add it to that bucket.
}
// STEP 2: REORDER ARRAY
for(int b=0, i=0; b<buckets.size(); b++) { // For each bucket:
for(int c=0; c<buckets[b].size(); c++) // For each value in bucket:
v[i++] = buckets[b][c]; // Add back into the array (in order).
buckets[b].clear(); // Clear bucket ready for next digit.
}
}
}
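The same base-10 approach can be sketched in Python (the function name is my own; non-negative integers are assumed):

```python
def radix_sort(arr):
    # LSD (least significant digit) radix sort, base 10, in place.
    if not arr:
        return
    pow10 = 1
    max_val = max(arr)
    while pow10 <= max_val:             # One pass per digit position.
        buckets = [[] for _ in range(10)]
        for x in arr:                   # STEP 1: populate buckets (stable).
            buckets[(x // pow10) % 10].append(x)
        # STEP 2: reorder the array from buckets 0..9, in order.
        arr[:] = [x for b in buckets for x in b]
        pow10 *= 10
```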
Bucket Sort
Bucket sort is similar to radix sort in that it works by partitioning an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm, or by recursively applying the bucket sorting algorithm. Bucket sort works as follows:
- Set up an array of initially empty "buckets"... where the buckets contain ranges, eg: (0-9),(10-19),(20-29).. and so on up to a known maximum value.
- Scatter: Go over the original array, putting each object in its bucket.
- Sort each non-empty bucket.
- Gather: Visit the buckets in order and put all elements back into the original array.
function bucketSort(array, n) is
buckets ← new array of n empty lists
for i = 0 to (length(array)-1) do
insert array[i] into buckets[msbits(array[i], k)]
for i = 0 to n - 1 do
nextSort(buckets[i])
return the concatenation of buckets[0], ..., buckets[n-1]
The function msbits(x,k) returns the k most significant bits of x, and different functions can be used to translate the range of elements in array to n buckets, such as translating the letters A-Z to 0-25. If inputs are uniformly distributed over the range, the hope is that not too many numbers fall into each bucket. To produce the output, simply sort the numbers in each bucket and then go through the buckets in order, listing the elements in each.
Performance: (O(n+k)) average case, where k is the number of buckets, but up to (O(n^2)) worst case when most elements land in the same bucket.... so not necessarily better than quick-sort.
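As a rough Python sketch of the steps above (the bucket-index formula, bucket count and function name are my own assumptions; Python's built-in sorted() handles the "sort each bucket" step):

```python
def bucket_sort(arr, num_buckets=10):
    # Assumes non-negative integers; bucket ranges are derived
    # from the maximum value so indices land in 0..num_buckets-1.
    if not arr:
        return []
    buckets = [[] for _ in range(num_buckets)]
    max_val = max(arr)
    for x in arr:                                # Scatter.
        buckets[x * num_buckets // (max_val + 1)].append(x)
    result = []
    for b in buckets:                            # Sort and gather.
        result.extend(sorted(b))
    return result
```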
Extra Topics
Stable Sorts
A stable sort is a sorting algorithm that keeps elements with the same keys in the same relative order. Here's what it looks like, using subscripts to distinguish equal keys: (1, 3a, 4, 3b, 3c, 2 -> 1, 2, 3a, 3b, 3c, 4) - the three 3's keep their original order. Examples of stable sorts are:
- Insertion sort, merge sort, radix sort and several of the other O(n^2) algorithms.
Heapsort is not a stable sort and most implementations of quick-sort are not stable either.
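To see stability concretely, here's a small Python check - Python's built-in sort is documented to be stable, so records with equal keys keep their original relative order:

```python
# Sort (key, tag) pairs by key only; the tags reveal the original order.
records = [(3, 'a'), (1, 'b'), (3, 'c'), (2, 'd'), (3, 'e')]
records.sort(key=lambda r: r[0])
# The three records with key 3 stay in their original order: 'a', 'c', 'e'.
```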