6.2. Sorting Part 2¶
6.2.1. Sorting Part 2¶
6.2.1.1. Shellsort¶
6.2.1.2. Shellsort (2)¶
static void shellsort(int[] A) {
  for (int i=A.length/2; i>2; i/=2) {  // For each increment
    for (int j=0; j<i; j++) {          // Sort each sublist
      inssort2(A, j, i);
    }
  }
  inssort2(A, 0, 1);  // Could call regular inssort here
}

/** Modified Insertion Sort for varying increments */
static void inssort2(int[] A, int start, int incr) {
  for (int i=start+incr; i<A.length; i+=incr)
    for (int j=i; (j>=incr) && (A[j] < A[j-incr]); j-=incr)
      Swap.swap(A, j, j-incr);
}
6.2.1.3. Mergesort¶
6.2.1.5. Mergesort cost¶
Mergesort cost: Θ(n log n) in the best, average, and worst cases, since there are always Θ(log n) levels of merging and each level does Θ(n) work.
Mergesort is also good for sorting linked lists, because merging needs only sequential access to the records.
Mergesort requires twice the space: an auxiliary array of n records to merge into.
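These cost claims can be illustrated with a standard top-down mergesort. This is a sketch, not the book's code; the class and helper names are mine, and the auxiliary array `temp` is the source of the doubled space requirement noted above.

```java
import java.util.Arrays;

class MergesortSketch {
  /** Sort A[left..right], using temp as the auxiliary merge array. */
  static void mergesort(int[] A, int[] temp, int left, int right) {
    if (left >= right) { return; }        // Base case: sublist of length 1
    int mid = (left + right) / 2;
    mergesort(A, temp, left, mid);        // Sort first half
    mergesort(A, temp, mid + 1, right);   // Sort second half
    for (int i = left; i <= right; i++) { temp[i] = A[i]; }  // Copy to temp
    // Merge the two sorted halves back into A
    int i1 = left, i2 = mid + 1;
    for (int cur = left; cur <= right; cur++) {
      if (i1 > mid) { A[cur] = temp[i2++]; }               // Left half exhausted
      else if (i2 > right) { A[cur] = temp[i1++]; }        // Right half exhausted
      else if (temp[i1] <= temp[i2]) { A[cur] = temp[i1++]; }
      else { A[cur] = temp[i2++]; }
    }
  }

  public static void main(String[] args) {
    int[] A = {36, 20, 17, 13, 28, 14, 23, 15};
    mergesort(A, new int[A.length], 0, A.length - 1);
    System.out.println(Arrays.toString(A));  // [13, 14, 15, 17, 20, 23, 28, 36]
  }
}
```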
6.2.1.6. Quicksort¶
static void quicksort(Comparable[] A, int i, int j) { // Quicksort
  int pivotindex = findpivot(A, i, j);  // Pick a pivot
  Swap.swap(A, pivotindex, j);          // Stick pivot at end
  // k will be the first position in the right subarray
  int k = partition(A, i, j-1, A[j]);
  Swap.swap(A, k, j);                   // Put pivot in place
  if ((k-i) > 1) { quicksort(A, i, k-1); }  // Sort left partition
  if ((j-k) > 1) { quicksort(A, k+1, j); }  // Sort right partition
}

static int findpivot(Comparable[] A, int i, int j) {
  return (i+j)/2;
}
6.2.1.7. Quicksort Partition¶
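The quicksort code above calls a `partition` function that is not shown here. Below is a sketch of the usual two-bounds version, assuming (as quicksort above arranges) that the pivot value sits at the end of the range so the left scan cannot run off the array. The inlined `swap` is a stand-in for the book's `Swap.swap` helper.

```java
class PartitionSketch {
  // Stand-in for the book's Swap.swap helper: exchange two slots of A
  static void swap(Comparable[] A, int i, int j) {
    Comparable t = A[i]; A[i] = A[j]; A[j] = t;
  }

  /** Move records so values < pivot precede values >= pivot in A[left..right].
      Returns the first position in the right partition. */
  @SuppressWarnings("unchecked")
  static int partition(Comparable[] A, int left, int right, Comparable pivot) {
    while (left <= right) {  // Move bounds inward until they meet
      while (A[left].compareTo(pivot) < 0) { left++; }
      while ((right >= left) && (A[right].compareTo(pivot) >= 0)) { right--; }
      if (right > left) { swap(A, left, right); }  // Swap out-of-place values
    }
    return left;             // First position in right partition
  }
}
```

Note that each record is inspected at most once per call, so partitioning a sublist of length s costs Θ(s).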
6.2.1.8. Quicksort Partition Cost¶
6.2.1.9. Quicksort Summary¶
6.2.1.10. Quicksort Worst Case¶
6.2.1.12. Quicksort Best Case¶
6.2.1.14. Quicksort Average Case¶
6.2.1.15. Optimizations for Quicksort¶
Better pivot: pick the median of three values instead of a fixed position
Inline instead of function calls
Eliminate recursion
Better algorithm for small sublists: Insertion Sort
Best: Don’t sort small sublists at all; do one final Insertion Sort to clean up.
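The last optimization can be sketched as a hybrid sort. This is an illustration rather than the book's code: the class name, helper names, and the cutoff of 10 are my assumptions (the best cutoff is machine-dependent).

```java
class HybridQuicksort {
  static final int THRESHOLD = 10;  // Assumed cutoff; tune empirically

  static void swap(int[] A, int i, int j) { int t = A[i]; A[i] = A[j]; A[j] = t; }

  static void qsort(int[] A, int i, int j) {
    if (j - i < THRESHOLD) { return; }  // Leave small sublists unsorted
    swap(A, (i + j) / 2, j);            // Middle element as pivot, moved to end
    int pivot = A[j];
    int left = i, right = j - 1;
    while (left <= right) {             // Partition around the pivot value
      while (A[left] < pivot) { left++; }
      while (right >= left && A[right] >= pivot) { right--; }
      if (right > left) { swap(A, left, right); }
    }
    swap(A, left, j);                   // Put pivot in place
    qsort(A, i, left - 1);
    qsort(A, left + 1, j);
  }

  static void sort(int[] A) {
    qsort(A, 0, A.length - 1);
    // One final Insertion Sort: every record is within THRESHOLD of its
    // final position, so this pass is cheap
    for (int i = 1; i < A.length; i++)
      for (int j = i; j > 0 && A[j] < A[j-1]; j--)
        swap(A, j, j-1);
  }
}
```

The point of skipping small sublists is to avoid the overhead of many tiny recursive calls; the single Insertion Sort pass over the nearly sorted array then runs in close to linear time.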
6.2.1.16. Heapsort¶
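As a reference for this section, here is a sketch of heapsort using an in-place max-heap (names and structure are mine, not taken from the slides): build the heap, then repeatedly swap the maximum to the end and restore the heap over the remaining records.

```java
class HeapsortSketch {
  static void swap(int[] A, int i, int j) { int t = A[i]; A[i] = A[j]; A[j] = t; }

  /** Sift A[pos] down until the subtree rooted there (within A[0..n-1]) is a max-heap. */
  static void siftdown(int[] A, int pos, int n) {
    while (2 * pos + 1 < n) {                      // While pos has a left child
      int child = 2 * pos + 1;
      if (child + 1 < n && A[child + 1] > A[child]) { child++; }  // Larger child
      if (A[pos] >= A[child]) { return; }          // Heap property holds
      swap(A, pos, child);
      pos = child;                                 // Continue down the tree
    }
  }

  static void heapsort(int[] A) {
    int n = A.length;
    for (int i = n / 2 - 1; i >= 0; i--) { siftdown(A, i, n); }  // Build max-heap
    for (int i = n - 1; i > 0; i--) {
      swap(A, 0, i);       // Move current maximum to its final slot
      siftdown(A, 0, i);   // Restore heap over the remaining i records
    }
  }
}
```

Building the heap costs Θ(n), and each of the n removals costs O(log n), giving Θ(n log n) overall.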
6.2.1.17. Heapsort Analysis¶
6.2.1.18. Binsort¶
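A minimal sketch of the idea behind Binsort (my illustration, not the book's code), assuming integer keys in a known range 0..maxKey: scatter the records into one bin per key value, then gather the bins in key order.

```java
import java.util.ArrayList;

class BinsortSketch {
  /** Binsort for integer keys in the assumed range 0..maxKey. */
  static void binsort(int[] A, int maxKey) {
    ArrayList<ArrayList<Integer>> bins = new ArrayList<>();
    for (int k = 0; k <= maxKey; k++) { bins.add(new ArrayList<>()); }
    for (int value : A) { bins.get(value).add(value); }  // Scatter by key
    int pos = 0;
    for (ArrayList<Integer> bin : bins)                  // Gather bins in key order
      for (int value : bin) { A[pos++] = value; }
  }
}
```

The cost is Θ(n + maxKey), which is only attractive when the key range is not much larger than n; Radix Sort, next, removes this restriction by processing the key one digit at a time.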
6.2.1.19. Radix Sort: Linked List¶
6.2.1.21. Radix Sort: Array¶
6.2.1.22. Radix Sort Implementation¶
static void radix(Integer[] A, int k, int r) {
  Integer[] B = new Integer[A.length];
  int[] count = new int[r];  // count[i] stores number of records with digit value i
  int i, j, rtok;

  for (i=0, rtok=1; i<k; i++, rtok*=r) {   // For k digits
    for (j=0; j<r; j++) { count[j] = 0; }  // Initialize count

    // Count the number of records for each bin on this pass
    for (j=0; j<A.length; j++) { count[(A[j]/rtok)%r]++; }

    // count[j] will be index in B for last slot of bin j.
    // First, reduce count[0] because indexing starts at 0, not 1
    count[0] = count[0] - 1;
    for (j=1; j<r; j++) { count[j] = count[j-1] + count[j]; }

    // Put records into bins, working from bottom of bin
    // Since bins fill from bottom, j counts downwards
    for (j=A.length-1; j>=0; j--) {
      B[count[(A[j]/rtok)%r]] = A[j];
      count[(A[j]/rtok)%r] = count[(A[j]/rtok)%r] - 1;
    }

    for (j=0; j<A.length; j++) { A[j] = B[j]; }  // Copy B back
  }
}
6.2.1.24. Radix Sort Analysis¶
6.2.1.25. Empirical Analysis¶
Sort        10      100    1K    10K     100K     1M       Up     Down
Insertion   .00023  .007   0.66  64.98   7381.0   674420   0.04   129.05
Bubble      .00035  .020   2.25  277.94  27691.0  2820680  70.64  108.69
Selection   .00039  .012   0.69  72.47   7356.0   780000   69.76  69.58
Shell       .00034  .008   0.14  1.99    30.2     554      0.44   0.79
Shell/O     .00034  .008   0.12  1.91    29.0     530      0.36   0.64
Merge       .00050  .010   0.12  1.61    19.3     219      0.83   0.79
Merge/O     .00024  .007   0.10  1.31    17.2     197      0.47   0.66
Quick       .00048  .008   0.11  1.37    15.7     162      0.37   0.40
Quick/O     .00031  .006   0.09  1.14    13.6     143      0.32   0.36
Heap        .00050  .011   0.16  2.08    26.7     391      1.57   1.56
Heap/O      .00033  .007   0.11  1.61    20.8     334      1.01   1.04
Radix/4     .00838  .081   0.79  7.99    79.9     808      7.97   7.97
Radix/8     .00799  .044   0.40  3.99    40.0     404      4.00   3.99
6.2.1.26. Sorting Lower Bound (1)¶
We would like to know a lower bound for the problem of sorting.
Sorting is O(n log n) (in the average and worst cases) because we know of algorithms with this upper bound.
Sorting I/O takes Ω(n) time: you have to look at all n records even to tell whether the list is sorted.
We will now prove an Ω(n log n) lower bound for sorting.