Sorting: 10.1 Aims

The document discusses sorting algorithms. It begins by defining the sorting problem as arranging a set of records so their key values are in non-decreasing order. It then describes three O(n2) sorting algorithms - Bubble Sort, Insertion Sort, and Selection Sort. Bubble Sort compares adjacent elements and swaps them if out of order until the list is fully sorted. Insertion Sort builds the sorted list one item at a time by inserting each element into the correct position. Selection Sort iteratively finds the minimum element and swaps it into the proper place.

SORTING

We sort many things in our everyday lives: Bills and other piles of paper; jars of spices; and
so on. And we have many intuitive strategies that we can use to do the sorting, depending
on how many objects we have to sort and how hard they are to move around. Sorting is also
one of the most frequently performed computing tasks. We might sort the records in a
database so that we can search the collection efficiently. Because sorting is so important,
naturally it has been studied intensively and many algorithms have been devised. Some of
these algorithms are straightforward adaptations of schemes we use in everyday life. The
present unit covers several standard algorithms appropriate for sorting a collection of
records that fit in the computer’s main memory, stored in an array or a list.

10.1 Aims

After completing this unit, you should understand:


 The sorting problem in computer science.

You will also be able to:


 Characterize the sorting problem.
 Characterize the main sorting algorithms.
 Implement the main sorting algorithms.

10.2 The sorting problem

Except where noted otherwise, input to the sorting algorithms presented in this chapter is a
collection of records stored in an array or a list. Records are compared to one another by a
key value or, in an object-oriented programming language, through a comparator
interface.

Given a set of records r1, r2, ..., rn with key values k1, k2, ..., kn, the
SORTING PROBLEM is to find a permutation s1, s2, ..., sn of the indices
such that the reordered records rs1, rs2, ..., rsn have keys obeying the property
ks1 ≤ ks2 ≤ ... ≤ ksn.

In other words, the sorting problem is to arrange a set of records so that the values of their
key fields are in non-decreasing order.

As defined, the Sorting Problem allows input with two or more records that have the same
key value. Certain applications require that the input contain no duplicate key values; the
sorting algorithms presented in this unit can handle duplicate key values. When duplicate
key values are allowed, there might be an implicit ordering among the duplicates, typically based
on their order of occurrence within the input, and it might be desirable to maintain this initial
ordering. A sorting algorithm is said to be stable if it does not change the
relative ordering of records with identical key values. Many, but not all, of the sorting
algorithms presented in this unit are stable, or can be made stable with minor changes.
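
Stability is easy to observe in practice. Python's built-in sorted is guaranteed stable, so a quick sketch (the record values here are invented for illustration) shows duplicates keeping their input order:

```python
# Records as (key, payload) pairs; two records share the key 2.
records = [(2, "first"), (1, "a"), (2, "second"), (0, "b")]

# Sort by key only. A stable sort keeps (2, "first") ahead of
# (2, "second"), preserving their relative order in the input.
ordered = sorted(records, key=lambda r: r[0])
print(ordered)
# [(0, 'b'), (1, 'a'), (2, 'first'), (2, 'second')]
```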

Efficient sorting is important for optimizing the use of other algorithms which require input
data to be in sorted lists; it is also often useful for canonicalizing data and for producing
human-readable output. More formally, the output must satisfy two conditions:

1. The output is in non-decreasing order (each element is no smaller than the previous
element according to the desired total order).
2. The output is a permutation (reordering) of the input.

The sorting problem has attracted a great deal of research, perhaps due to the complexity of
solving it efficiently despite its simple, familiar statement.

Sorting algorithms are often classified by:


 Computational complexity: Worst, average and best behavior of element
comparisons in terms of the size of the list (n).
o Good behavior is O(n log n).
o Bad behavior is O(n2).
 The computational complexity of swaps: for "in place" algorithms.
 Memory usage. In particular, some sorting algorithms are "in place". Strictly, an “in
place” sort needs only O(1) memory beyond the items being sorted; sometimes
O(log(n)) additional memory is considered "in place".
 Recursion. Some algorithms are either recursive or non-recursive while others may
be both.
 Stability: stable sorting algorithms maintain the relative order of records with equal
keys or values.
 Whether or not they are a comparison sort. A comparison sort examines the data
only by comparing two elements with a comparison operator.
 General method: insertion, exchange, selection, merging, etc.
o Exchange sorts include bubble sort and quicksort.
o Selection sorts include shaker sort and heapsort.
 Adaptability: Whether or not the presortedness of the input affects the running
time. Algorithms that take presortedness into account are known as adaptive.

In this unit, we use the computational-complexity criterion to classify the sorting algorithms.

10.3 O(n2) Sorting Algorithms

This section presents three simple sorting algorithms. These algorithms are easy to
understand and implement, but as we will soon see, they are unacceptably slow when there are
many records to sort. Nonetheless, there are situations where one of these simple
algorithms is the best tool for the job.

10.3.1 Bubble Sort

Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data
set. It compares the first two elements, and if the first is greater than the second, it swaps
them. It continues doing this for each pair of adjacent elements to the end of the data set. It
then starts again with the first two elements, repeating until no swaps have occurred on the
last pass. This algorithm's average and worst-case performance is O(n2), so it is rarely used
to sort large, unordered data sets.

Bubble sort can be used to sort a small number of items, where its asymptotic inefficiency is
not a high penalty. Bubble sort can also be used efficiently on a list of any length that is
nearly sorted; that is, the elements are not significantly out of place. For example, if any
number of elements is out of place by only one position, bubble sort's exchange will get
them in order on the first pass, the second pass will find all elements in order, and so the
sort will take only 2n time.

Properties:
 Stable.
 O(1) extra space.
 O(n2) comparisons and swaps.
 Adaptive: O(n) when nearly sorted.

Bubble Sort Algorithm

Algorithm: BubbleSort( T[] a, integer n )


Input: An array with the elements to sort (a) and the length of the array (n)
Output: The elements sorted in ascending order
Variables
Integer i, j
Boolean swapped
Begin
For i ← 0 to n-2
swapped ← false
For j ← n-1 downto i+1
If a[j] < a[j-1] then
Swap(a, j, j-1)
swapped ← true
End If
End For
If not swapped then
Return
End If
End For
End

Bubble Sort consists of a simple double for loop. The first iteration of the inner for loop
moves through the record array from bottom to top, comparing adjacent keys. If the lower-
indexed key’s value is greater than its higher-indexed neighbor, then the two values are
swapped. Once the smallest value is encountered, this process will cause it to “bubble” up to
the top of the array. The second pass through the array repeats this process. However,
because we know that the smallest value reached the top of the array on the first pass, there
is no need to compare the top two elements on the second pass. Likewise, each succeeding
pass through the array compares adjacent elements, looking at one less value than the
preceding pass. The process ends when a complete pass requires no swaps.

Let us take the array of numbers (5, 1, 4, 2, 8) and sort it from the lowest to the greatest
number using bubble sort. At each step, the pair of adjacent elements being compared is
shown. Three passes will be required. (This walkthrough uses the common left-to-right
formulation of a pass; the pseudocode above scans right-to-left, but the passes work the
same way.)

First Pass:
1.1. ( 5, 1, 4, 2, 8 ) → ( 1, 5, 4, 2, 8 ), Swap since 5 > 1.
1.2. ( 1, 5, 4, 2, 8 ) → ( 1, 4, 5, 2, 8 ), Swap since 5 > 4.
1.3. ( 1, 4, 5, 2, 8 ) → ( 1, 4, 2, 5, 8 ), Swap since 5 > 2.
1.4. ( 1, 4, 2, 5, 8 ) → ( 1, 4, 2, 5, 8 ), No swap since 8 > 5.

Second Pass:
2.1. ( 1, 4, 2, 5, 8 ) → ( 1, 4, 2, 5, 8 )
2.2. ( 1, 4, 2, 5, 8 ) → ( 1, 2, 4, 5, 8 ), Swap since 4 > 2
2.3. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )
2.4. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )

Now, the array is already sorted, but our algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
3.1. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )
3.2. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )
3.3. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )
3.4. ( 1, 2, 4, 5, 8 ) → ( 1, 2, 4, 5, 8 )
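
The passes above can be sketched directly in Python; this is an illustrative translation of the unit's pseudocode (scanning from the end of the array toward the sorted prefix), not production code:

```python
def bubble_sort(a):
    """Sort list a in place, ascending; returns a for convenience."""
    n = len(a)
    for i in range(n):
        swapped = False
        # Walk from the end of the array down to position i+1,
        # bubbling the smallest remaining value up to index i.
        for j in range(n - 1, i, -1):
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
                swapped = True
        if not swapped:  # a full pass made no swaps: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```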

10.3.2 Insertion Sort

Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item
at a time. When humans manually sort something (for example, a deck of playing cards),
most use a method that is similar to insertion sort.

Imagine that you have a stack of phone bills from the past two years and that you wish to
organize them by date. A fairly natural way to do this might be to look at the first two bills and
put them in order. Then take the third bill and put it into the right order with respect to the first
two, and so on. As you take each bill, you would add it to the sorted pile that you have
already made.

Insertion sort iterates, consuming one input element each repetition and growing a sorted
output list. On a repetition, insertion sort removes one element from the input data, finds the
location it belongs within the sorted list and inserts it there. It repeats until no input elements
remain.

Sorting is typically done in-place, by iterating up the array, growing the sorted list behind it.
At each array position, it checks the value there against the largest value in the sorted list
(which happens to be next to it, in the previous array-position checked). If larger, it leaves
the element in place and moves to the next. If smaller, it finds the correct position within the
sorted list, shifts all the larger values up to make a space, and inserts into that correct
position.

Properties:
 Stable.
 O(1) extra space.
 O(n2) comparisons and swaps.
 Adaptive: O(n) when nearly sorted.
 Very low overhead.
Insertion Sort Algorithm

Algorithm: InsertionSort( T[] a, integer n )


Input: An array with the elements to sort (a) and the length of the array (n)
Output: The elements sorted in ascending order
Variables
Integer i, j
Begin
For i ← 1 to n-1
j ← i
While j > 0 and a[j-1] > a[j] do
Swap(a, j, j-1)
j ← j-1
End While
End For
End

Let us take the array of numbers (5, 1, 4, 2, 8) and sort it from the lowest to the greatest
number using insertion sort. Each line shows the array after the swap performed at that
step.

First iteration
i=1 j=1 (5, 1, 4, 2, 8) → (1, 5, 4, 2, 8)
Second iteration
i=2 j=2 (1, 5, 4, 2, 8) → (1, 4, 5, 2, 8)
Third iteration
i=3 j=3 (1, 4, 5, 2, 8) → (1, 4, 2, 5, 8)
j=2 (1, 4, 2, 5, 8) → (1, 2, 4, 5, 8)
Fourth iteration
i=4 j=4 (1, 2, 4, 5, 8) → (1, 2, 4, 5, 8), no swap

At the end of the algorithm the array is sorted (1, 2, 4, 5, 8).
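
As a sketch, the pseudocode translates to Python as follows (illustrative, not the unit's required implementation; the strict > comparison is what keeps equal keys in their input order):

```python
def insertion_sort(a):
    """Sort list a in place, ascending; returns a for convenience."""
    for i in range(1, len(a)):
        j = i
        # Move a[i] left until it reaches its place in the sorted prefix.
        # Strict > leaves equal keys in input order, so the sort is stable.
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```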

10.3.3 Selection Sort

Selection sort is a sorting algorithm, specifically an in-place comparison sort. It has O(n2)
time complexity, making it inefficient on large lists, and generally performs worse than the
similar insertion sort. Selection sort is noted for its simplicity, and it has performance
advantages over more complicated algorithms in certain situations, particularly where
auxiliary memory is limited.

The algorithm divides the input list or array into two parts: the sublist of items already sorted,
which is built up from left to right at the front (left) of the list, and the sublist of items
remaining to be sorted that occupy the rest of the list. Initially, the sorted sublist is empty and
the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest
element in the unsorted sublist, exchanging it with the leftmost unsorted element, putting it in
sorted order, and moving the sublist boundary one element to the right.

Properties:
 Not stable.
 O(1) extra space.
 O(n2) comparisons.
 O(n) swaps.
 Non-adaptive.
Selection Sort Algorithm

Algorithm: SelectionSort( T[] a, integer n )


Input: An array with the elements to sort (a) and the length of the array (n)
Output: The elements sorted in ascending order
Variables
Integer i, j, min
Begin
For i ← 0 to n-2
min ← i
For j ← i+1 to n-1
If a[j] < a[min] then
min ← j
End If
End For
If min <> i then
Swap(a, i, min)
End If
End For
End

Selection sort is not difficult to analyze compared to other sorting algorithms since none of
the loops depend on the data in the array. Selecting the lowest element requires scanning all
n elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding
the next lowest element requires scanning the remaining n − 1 elements, and so on, for (n −
1) + (n − 2) + ... + 2 + 1 = n(n − 1) / 2 ∈ Θ(n2) comparisons (an arithmetic progression).
Each scan ends with at most one swap, so the whole sort performs at most n − 1 swaps
(the final element is already in place).

Let us take the array of numbers (5, 1, 4, 2, 8) and sort it from the lowest to the greatest
number using selection sort. Each inner step shows the comparison of a[j] against the
current minimum; the swap happens once per outer iteration, after the scan.

First iteration
i=0 j=1: 1 < 5, so min ← 1
j=2: 4 < 1? No
j=3: 2 < 1? No
j=4: 8 < 1? No
Swap(a, 0, 1): (5, 1, 4, 2, 8) → (1, 5, 4, 2, 8)

Second iteration
i=1 j=2: 4 < 5, so min ← 2
j=3: 2 < 4, so min ← 3
j=4: 8 < 2? No
Swap(a, 1, 3): (1, 5, 4, 2, 8) → (1, 2, 4, 5, 8)

Third iteration
i=2 j=3: 5 < 4? No
j=4: 8 < 4? No
min = i, so no swap: (1, 2, 4, 5, 8)

Fourth iteration
i=3 j=4: 8 < 5? No
min = i, so no swap: (1, 2, 4, 5, 8)

At the end of the algorithm the array is sorted (1, 2, 4, 5, 8).
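
A Python rendering of the algorithm (an illustrative sketch, not the unit's required implementation), highlighting that at most one swap happens per outer iteration:

```python
def selection_sort(a):
    """Sort list a in place, ascending, using at most n-1 swaps."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        # Scan the unsorted suffix for the smallest element.
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:  # one swap per outer iteration, at most
            a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```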

10.4 O(n•log(n)) sorting algorithms

10.4.1 Merge sort

A natural approach to problem solving is divide-and-conquer. In terms of sorting, we might
consider breaking the list to be sorted into pieces, processing the pieces, and then putting
them back together somehow. A simple way to do this would be to split the list in half, sort
the halves, and then merge the sorted halves together. This is the idea behind Merge sort.

Merge sort (also commonly spelled mergesort) is an O(n log n) comparison-based sorting
algorithm. Most implementations produce a stable sort, which means that the
implementation preserves the input order of equal elements in the sorted output. Merge sort
is a divide-and-conquer algorithm that was invented by John von Neumann in 1945. A
detailed description and analysis of bottom-up merge sort appeared in a report by Goldstine
and von Neumann as early as 1948.

Mergesort is one of the simplest sorting algorithms conceptually and has good performance
both in the asymptotic sense and in empirical running time. Surprisingly, even though it is
based on a simple concept, it is relatively difficult to implement in practice.

Properties:
 Stable.
 O(n) extra space.
 Non adaptive.
 Does not require random access to data.

Merge Sort Algorithm

Algorithm: MergeSort( T[] a, integer n )


Input: An array with the elements to sort (a) and the length of the array (n)
Output: The elements sorted in ascending order
Variables
Integer i, half
T[] left, right
Begin
If n > 1 then
half ← n / 2
// Copy the first half elements to the left part
For i ← 0 to half-1
left[i] ← a[i]
End For
// Copy the remaining n-half elements to the right part
For i ← half to n-1
right[i-half] ← a[i]
End For
// Sort left and right parts
left ← MergeSort(left, half) // Recursive call
right ← MergeSort(right, n-half) // Recursive call
// Merge left and right
Return Merge(left, half, right, n-half)
Else
Return a
End If
End
Merge algorithm

Algorithm: Merge( T[] left, integer l_size, T[] right, integer r_size )
Input: Two sorted arrays and their lengths.
Output: The arrays merged into one array in ascending order
Variables
T[] result
Begin
// assign the element of the sublists to 'result' variable until there is
// no element to merge.
While length(left) > 0 or length(right) > 0 do
If length(left) > 0 and length(right) > 0 then
// compare the first elements of the two sublists;
// the smaller one is moved to 'result' first.
If first(left) <= first(right) then
// copy the smaller element to 'result' and
// remove it (the first element) from its sublist.
append first(left) to result
left ← rest(left)
Else
// same operation as above (in the right sublist).
append first(right) to result
right ← rest(right)
End If
Else
If length(left) > 0 then
// one sublist is exhausted; copy the remaining elements
// of the other sublist to 'result'.
append first(left) to result
left ← rest(left)
Else
if length(right) > 0 then
// same operation as the above (in the right sublist).
append first(right) to result
right ← rest(right)
End If
End If
End If
End while
// return the merged result; at the top level of the
// recursion this is the fully sorted array.
Return result
End

Conceptually, a merge sort works as follows

1. Divide the unsorted list into n sublists, each containing 1 element (a list of 1 element
is considered sorted).
2. Repeatedly merge sublists to produce new sublists until there is only 1 sublist
remaining. This will be the sorted list.

Example: Let us take the array of numbers (38, 27, 43, 3, 9, 82, 10) and sort it from the
lowest to the greatest number using merge sort.
Figure 1. Merge sort illustration

Merge sort is very predictable. It makes between 0.5·lg(n) and lg(n) comparisons per
element, and between lg(n) and 1.5·lg(n) swaps per element. The minima are achieved for
already sorted data; the maxima are achieved, on average, for random data. If using Θ(n)
extra space is of no concern, then merge sort is an excellent choice: it is simple to
implement, and it is the classic stable O(n·lg(n)) sorting algorithm.
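
The two-step description above (split, then merge) can be sketched in Python; this illustrative version returns a new list rather than sorting in place:

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        # <= takes equal keys from `left` first, preserving stability.
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # at most one of these two
    result.extend(right[j:])  # extends is non-empty
    return result

def merge_sort(a):
    """Return a new sorted list; the input list is not modified."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    return merge(merge_sort(a[:half]), merge_sort(a[half:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```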

10.4.2 Quicksort

Quicksort is aptly named because, when properly implemented, it is the fastest known
general-purpose in-memory sorting algorithm in the average case. It does not require the
extra array needed by Mergesort, so it is space efficient as well. Quicksort is widely used
and is typically the algorithm implemented in programming language libraries.

Quicksort is a divide-and-conquer algorithm. Quicksort first divides a large list into two
smaller sub-lists: the low elements and the high elements. Quicksort can then recursively
sort the sub-lists.

The steps are:


1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements with values less than the pivot come before the
pivot while all elements with values greater than the pivot come after it (equal values
can go either way). After this partitioning, the pivot is in its final position. This is called
the partition operation.
3. Recursively apply the above steps to the sub-list of elements with smaller values and
separately the sub-list of elements with greater values.

The base cases of the recursion are lists of size zero or one, which never need to be sorted.

Quicksort first selects a value called the pivot. Assume that the input array contains k values
less than the pivot. The records are then rearranged in such a way that the k values less
than the pivot are placed in the first, or leftmost, k positions in the array, and the values
greater than or equal to the pivot are placed in the last, or rightmost, n − k positions. This is
called a partition of the array. The values placed in a given partition need not (and typically
will not) be sorted with respect to each other. All that is required is that all values end up in
the correct partition.
The pivot value itself is placed in position k. Quicksort then proceeds to sort the resulting
sub-arrays now on either side of the pivot, one of size k and the other of size n − k − 1. How
are these values sorted? Because Quicksort is such a good algorithm, using Quicksort on
the sub-arrays would be appropriate.

Properties:
 Not stable.
 O(lg(n)) extra space.
 O(n2) worst-case time, but typically O(n·lg(n)) time.
 Not adaptive.

Quicksort Algorithm

Algorithm: Quicksort( T[] a, integer lb, integer rb )


Input: An array with the elements to sort and the left and right boundaries
Output: The elements sorted in ascending order
Variables
Integer k, pivot
Begin
// Pick a pivot
pivot ← findPivot(a, lb, rb)
// Stick pivot at end
Swap(a, pivot, rb)
// k will be the first position in the right subarray
k ← partition(a, lb-1, rb, a[rb])
// Put pivot in place
Swap(a, k, rb)
// Sort left partition
If (k - lb) > 1 then
Quicksort(a, lb, k-1)
End If
// Sort right partition
If (rb - k) > 1 then
Quicksort(a, k+1, rb)
End If
End

Selecting a pivot can be done in many ways. The simplest is to use the first key. However, if
the input is sorted or reverse sorted, this will produce a poor partitioning with all values to
one side of the pivot. It is better to pick a value at random, thereby reducing the chance of a
bad input order affecting the sort. Here is a simple findPivot function:

Algorithm: findPivot( T[] a, integer lb, integer rb )


Input: An array with the elements and its boundaries
Output: The index of the pivot for Quicksort
Begin
Return (lb+rb)/2
End

Example:
Figure 2. Quicksort illustration
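
The partition step is the heart of the algorithm. Here is an illustrative Python sketch using the middle-element pivot described above (partition details vary between textbooks; this one moves elements smaller than the pivot to the front, then drops the pivot into its final slot):

```python
def quicksort(a, lb=0, rb=None):
    """Sort a[lb..rb] in place, ascending; returns a for convenience."""
    if rb is None:
        rb = len(a) - 1
    if lb >= rb:  # sublists of size 0 or 1 are already sorted
        return a
    # Middle-element pivot, as in the findPivot sketch above.
    pivot_index = (lb + rb) // 2
    a[pivot_index], a[rb] = a[rb], a[pivot_index]  # stick pivot at end
    pivot = a[rb]
    # Partition: k ends up as the first position of the right subarray.
    k = lb
    for j in range(lb, rb):
        if a[j] < pivot:
            a[j], a[k] = a[k], a[j]
            k += 1
    a[k], a[rb] = a[rb], a[k]  # put pivot in its final place
    quicksort(a, lb, k - 1)
    quicksort(a, k + 1, rb)
    return a

print(quicksort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```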

10.5 Conclusions

After studying the content of this unit the conclusions are:


 Bubble sort has many of the same properties as insertion sort but has slightly higher
overhead. In the case of nearly sorted data, bubble sort takes O(n) time but requires
at least 2 passes through the data.
 Insertion sort is the algorithm of choice either when the data is nearly sorted or
when the problem size is small.
 Selection sort has the property of minimizing the number of swaps. In applications
where the cost of swapping items is high, selection sort very well may be the
algorithm of choice.
 Merge sort is the algorithm of choice for a variety of situations: when stability is
required, when sorting linked lists, and when random access is much more expensive
than sequential access.
 Quick sort is robust and has low overhead when carefully implemented. When a
stable sort is not needed, quick sort is an excellent general-purpose sort.

10.6 To Do List and Exercises

To Do List

1. From reference 3, read Chapter 3.


2. From reference 4, read Chapter 7.

Exercises

Exercise 1. Fill in the following table.

Algorithm      | Idea | Time Complexity | Stable | Space | Adaptive
Bubble sort    |      |                 |        |       |
Insertion sort |      |                 |        |       |
Selection sort |      |                 |        |       |
Merge sort     |      |                 |        |       |
Quicksort      |      |                 |        |       |

Exercise 2.
a) Implement all sorting algorithm characterized in the unit,
b) Run the algorithm with the same data and analyze the time complexity.

10.7 References
1. Preiss, Bruno. Data Structures and Algorithms with Object-Oriented Design
Patterns in C#. 2001.
2. Drake, Peter. Data Structures and Algorithms in Java. Prentice Hall. 2005.
3. Lafore, Robert. Data Structures & Algorithms in Java. Sams. 2003.
4. Shaffer, Clifford. Data Structures and Algorithm Analysis, Edition 3.2 (Java Version).
Virginia Tech, Blacksburg, VA.
5. Drozdek, Adam. Data Structures and Algorithms in C++, Second Edition. 2001.
6. http://codecodex.com/wiki/Bubble_sort
7. http://www.sorting-algorithms.com/bubble-sort
8. http://lecture.ecc.u-tokyo.ac.jp/~ueda/JavaApplet/BubbleSort.html
9. https://en.wikipedia.org/wiki/Bubble_sort
10. http://www.sgi.com/tech/stl/stable_sort.html
11. http://www.sorting-algorithms.com/
12. http://www.sorting-algorithms.com/insertion-sort
13. http://www.sorting-algorithms.com/selection-sort
14. http://www.nist.gov/dads/HTML/bingosort.html
15. http://www.nist.gov/dads/HTML/mergesort.html
16. http://opendatastructures.org/versions/edition-0.1e/ods-
java/11_1_Comparison_Based_Sorti.html#SECTION001411000000000000000
