A divide-and-conquer algorithm for this problem would proceed as
follows: Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem.
Here n is the number of elements in the list a[i], ..., a[j], and we are
interested in finding the maximum and minimum of this list. Let Small(P) be
true when n ≤ 2. If n = 1, the maximum and minimum are both a[i]. If n = 2,
the problem can be solved by making one comparison.
If the list has more than two elements, P has to be divided into smaller
instances. For example, we might divide P into the two instances
P1 = (n/2, a[1], ..., a[n/2]) and P2 = (n - n/2, a[n/2 + 1], ..., a[n]). After
having divided P into two smaller subproblems, we can solve them by
recursively invoking the same divide-and-conquer algorithm.
Now the question is: how can we combine the solutions for P1 and P2
to obtain the solution for P? If MAX(P) and MIN(P) are the maximum and
minimum of the elements of P, then MAX(P) is the larger of MAX(P1) and
MAX(P2); likewise, MIN(P) is the smaller of MIN(P1) and MIN(P2).
MaxMin is a recursive algorithm that finds the maximum and minimum
of the set of elements {a[i], a[i+1], ..., a[j]}. The situations of set sizes one
(i = j) and two (i = j - 1) are handled separately. For sets containing more
than two elements, the midpoint is determined and two new subproblems
are generated. When the maxima and minima of these subproblems are
determined, the two maxima are compared and the two minima are
compared to achieve the solution for the entire set.
The procedure is initially invoked by the statement MaxMin(1, n, x, y). For
this algorithm, each node has four items of information: i, j, max, and min.
Suppose we simulate MaxMin on the following nine elements:

a:  [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]
     22   13   -5   -8   15   60   17   31   47
A good way of keeping track of recursive calls is to build a tree by
adding a node each time a new call is made. On the array a[ ] above, the
following tree is produced.
[Recursion tree of the MaxMin calls on a[1:9]; each node records (i, j, max, min).]
We see that the root node contains 1 and 9 as the values of i and j
corresponding to the initial call to MaxMin. This execution produces two new
calls to MaxMin, where i and j have the values 1, 5 and 6, 9, thus splitting the
set into two subsets of almost the same size. From the tree we can immediately
see that the maximum depth of recursion is four (including the first call).
Algorithm for maximum and minimum using divide-and-conquer
Algorithm MaxMin(i, j, max, min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1≤i≤j≤n. The effect is to set max and min to the largest and
// smallest values in a[i:j].
{
    if (i = j) then max := min := a[i]; // Small(P)
    else if (i = j-1) then // Another case of Small(P)
    {
        if (a[i] < a[j]) then
        {
            max := a[j]; min := a[i];
        }
        else
        {
            max := a[i]; min := a[j];
        }
    }
    else
    {
        // If P is not small, divide P into subproblems.
        // Find where to split the set.
        mid := (i + j)/2;
        // Solve the subproblems.
        MaxMin(i, mid, max, min);
        MaxMin(mid+1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}
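As a runnable illustration, here is a minimal Python sketch of the same strategy (the function name max_min and the 0-based indexing are choices of this sketch, not part of the pseudocode above); it returns the (max, min) pair instead of setting reference parameters:

def max_min(a, i, j):
    # Find (maximum, minimum) of a[i..j], inclusive, 0-based.
    if i == j:                           # Small(P): one element
        return a[i], a[i]
    if i == j - 1:                       # Small(P): two elements, one comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2                   # split the instance in half
    max1, min1 = max_min(a, i, mid)      # solve the left subproblem
    max2, min2 = max_min(a, mid + 1, j)  # solve the right subproblem
    # Combine: larger of the two maxima, smaller of the two minima.
    return max(max1, max2), min(min1, min2)

a = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(a, 0, len(a) - 1))         # (60, -8)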
Merge Sort Algorithm
Merge sort is a sorting technique based on the divide-and-conquer technique.
With worst-case time complexity Ο(n log n), it is one of the most respected
algorithms. Merge sort first divides the array into equal halves and then
combines them in a sorted manner.
How Does Merge Sort Work?
To understand merge sort, we take the following unsorted array: 14, 33, 27, 10, 35, 19, 42, 44.
We know that merge sort first divides the whole array recursively into equal
halves until atomic values (single elements) are reached. We see here that an
array of 8 items is divided into two arrays of size 4.
This does not change the sequence of appearance of items in the original.
Now we divide these two arrays into halves.
We further divide these arrays until we reach atomic values which can no
longer be divided.
Now, we combine them in exactly the same manner as they were broken
down.
We first compare the elements of each pair of lists and then combine them
into another list in sorted order. We see that 14 and 33 are already in sorted
positions. We compare 27 and 10, and in the target list of 2 values we put 10
first, followed by 27. We change the order of 19 and 35, whereas 42 and 44
are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data
values and merge them into lists of four data values, placing all in sorted
order.
After the final merging, the list becomes: 10, 14, 19, 27, 33, 35, 42, 44.
Let’s consider a sequence of n elements split as (a[1] to a[n/2]) and
(a[n/2 + 1] to a[n]). Each set has to be individually sorted, and the resulting
sorted sequences are merged to produce a single sorted sequence of n
elements. The merge sort algorithm uses recursion and a function Merge,
which merges two sorted sets.
Algorithm MergeSort(low, high)
// a[low:high] is a global array to be sorted.
// Small(P) is true if there is only one element
// to sort. In this case the list is already sorted.
{
    if (low < high) then // If there is more than one element
    {
        // Divide P into subproblems.
        // Find where to split the set.
        mid := (low + high)/2;
        // Solve the subproblems.
        MergeSort(low, mid);
        MergeSort(mid+1, high);
        // Combine the solutions.
        Merge(low, mid, high);
    }
}
Merging two sorted subarrays using auxiliary storage
Algorithm Merge(low, mid, high)
// a[low:high] is a global array containing two sorted
// subsets in a[low:mid] and in a[mid+1:high]. The goal
// is to merge these two sets into a single set residing
// in a[low:high]. b[] is an auxiliary global array.
{
    h := low; i := low; j := mid + 1;
    while ((h ≤ mid) and (j ≤ high)) do
    {
        if (a[h] ≤ a[j]) then
        {
            b[i] := a[h]; h := h + 1;
        }
        else
        {
            b[i] := a[j]; j := j + 1;
        }
        i := i + 1;
    }
    if (h > mid) then
        for k := j to high do
        {
            b[i] := a[k]; i := i + 1;
        }
    else
        for k := h to mid do
        {
            b[i] := a[k]; i := i + 1;
        }
    for k := low to high do a[k] := b[k];
}
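The following compact Python sketch mirrors MergeSort and Merge above (the 0-based indexing and the local buffer are choices of this sketch; the pseudocode uses a global auxiliary array b[]):

def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high] via a buffer.
    b = []
    h, j = low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])           # copy whatever remains of the left run
    b.extend(a[j:high + 1])          # ... or of the right run
    a[low:high + 1] = b              # copy the merged result back

def merge_sort(a, low, high):
    # Sort a[low..high] in place, mirroring MergeSort above.
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)          # solve the left subproblem
        merge_sort(a, mid + 1, high)     # solve the right subproblem
        merge(a, low, mid, high)         # combine the two sorted runs

data = [14, 33, 27, 10, 35, 19, 42, 44]
merge_sort(data, 0, len(data) - 1)
print(data)                              # [10, 14, 19, 27, 33, 35, 42, 44]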
QUICK SORT
Quick Sort is also based on the concept of divide and conquer, just like
merge sort. But in quick sort all the heavy lifting (major work) is done while
dividing the array into subarrays, while in merge sort all the real work
happens during merging the subarrays. In quick sort, the combine step does
absolutely nothing.
It is also called partition-exchange sort. This algorithm divides the list into
three main parts:
1. Elements less than the pivot element
2. The pivot element (central element)
3. Elements greater than the pivot element
Basic Ideas
1. Pick an element, say P (the pivot).
2. Re-arrange the elements into 3 sub-blocks:
   those less than or equal to (≤) P (the left block S1),
   P (the only element in the middle block),
   those greater than or equal to (≥) P (the right block S2).
3. Repeat the process recursively for the left and right sub-blocks.
4. Return {quicksort(S1), P, quicksort(S2)}. (That is, the results of
   quicksort(S1), followed by P, followed by the results of
   quicksort(S2).)
Pick a “pivot” value P, and create 2 new sets without P.
Note: The main idea is to find the “right” position for the pivot element P. After
each “pass”, the pivot element P should be “in place”. Eventually, the
elements are sorted, since each pass puts at least one element (i.e., P) into its
final position.
Function Partition in the following algorithm accomplishes an in-place
partitioning of the elements of a[m:p-1]. It is assumed that a[p] ≥ a[m] and
that a[m] is the partitioning element. If m = 1 and p-1 = n, then a[n+1] must
be defined and must be greater than or equal to all elements in a[1:n]. The
function Interchange(a, i, j) exchanges a[i] with a[j].
Algorithm Partition(a, m, p)
// Within a[m], a[m+1], ..., a[p-1] the elements are
// rearranged in such a manner that if initially t = a[m],
// then after completion a[q] = t for some q between m
// and p-1, a[k] ≤ t for m ≤ k < q, and a[k] ≥ t
// for q < k < p. q is returned. Set a[p] = ∞.
{
    v := a[m]; i := m; j := p;
    repeat
    {
        repeat
            i := i + 1;
        until (a[i] ≥ v);
        repeat
            j := j - 1;
        until (a[j] ≤ v);
        if (i < j) then Interchange(a, i, j);
    } until (i ≥ j);
    a[m] := a[j]; a[j] := v; return j;
}
Algorithm Interchange(a, i, j)
// Exchange a[i] with a[j].
{
    p := a[i];
    a[i] := a[j];
    a[j] := p;
}
Sorting by partitioning
Algorithm QuickSort(p, q)
// Sorts the elements a[p], ..., a[q], which reside in the global
// array a[1:n], into ascending order; a[n+1] is considered to
// be defined and must be ≥ all the elements in a[1:n].
{
    if (p < q) then // If there is more than one element
    {
        // Divide P into two subproblems.
        j := Partition(a, p, q+1);
        // j is the position of the partitioning element.
        // Solve the subproblems.
        QuickSort(p, j-1);
        QuickSort(j+1, q);
        // There is no need for combining solutions.
    }
}
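A Python sketch of the same scheme follows (0-based indexing is assumed; instead of the sentinel a[p] = ∞ that the pseudocode relies on, the left-to-right scan is bounded explicitly):

def partition(a, m, p):
    # Rearranges a[m..p-1] around the pivot v = a[m] and returns the
    # pivot's final index j.
    v = a[m]
    i, j = m, p
    while True:
        i += 1
        while i < p and a[i] < v:    # scan right for an element >= v
            i += 1
        j -= 1
        while a[j] > v:              # scan left for an element <= v
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]      # Interchange(a, i, j)
    a[m], a[j] = a[j], v             # place the pivot at position j
    return j

def quick_sort(a, p, q):
    # Sort a[p..q] in place; there is no combine step.
    if p < q:
        j = partition(a, p, q + 1)
        quick_sort(a, p, j - 1)
        quick_sort(a, j + 1, q)

data = [14, 33, 27, 10, 35, 19, 42, 44]
quick_sort(data, 0, len(data) - 1)
print(data)                          # [10, 14, 19, 27, 33, 35, 42, 44]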
Performance Measurement of Quick Sort
The advantage of this quicksort is that we can sort “in-place”, i.e., without
the need for a temporary buffer depending on the size of the inputs (cf.
merge sort). Partitioning step: the time complexity is θ(n). Recall that
quicksort involves partitioning and 2 recursive calls. This gives the basic
quicksort relation: T(n) = θ(n) + T(i) + T(n-i-1) = cn + T(i) + T(n-i-1), where i
is the size of the first sub-block after partitioning. We shall take T(0) = T(1) = 1
as the initial conditions. To find the solution for this relation, consider three
cases:
1. The Worst-case
2. The Best-case
3. The Average-case
Worst case (data is already sorted): When the pivot is the smallest (or largest)
element at partitioning on a block of size n, the result yields one empty sub-
block, one element (the pivot) in the “correct” place, and one sub-block of size
(n-1); the partitioning takes θ(n) time.
Recurrence equation: T(1) = 1, T(n) = T(n-1) + cn. Solution: θ(n²).
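To see where θ(n²) comes from, unroll the recurrence:
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = ...
     = T(1) + c(2 + 3 + ... + n) = θ(n²).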
Best case: The pivot is in the middle (the median) at each partition step, i.e.,
after each partitioning on a block of size n, the result yields two sub-blocks of
approximately equal size, and placing the pivot element in the “middle”
position takes n data comparisons.
The recurrence equation becomes T(1) = 1, T(n) = 2T(n/2) + cn. Solution: θ(n log n).
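Unrolling this recurrence (with n a power of 2) likewise gives the solution:
T(n) = 2T(n/2) + cn
     = 4T(n/4) + cn + cn
     = ...
     = nT(1) + cn·log₂n = θ(n log n).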
Average case: It turns out that the average-case running time is also θ(n log n).
Selection Sort
Selection sort is a simple sorting algorithm. This sorting algorithm is an in-
place comparison-based algorithm in which the list is divided into two parts,
the sorted part at the left end and the unsorted part at the right end.
Initially, the sorted part is empty and the unsorted part is the entire list.
Selection sort works by repeatedly selecting the smallest remaining element.
It works as follows: first find the smallest element in the array and exchange
it with the element in the first position, then find the second smallest element
and exchange it with the element in the second position, and continue in this
way until the entire array is sorted.
Selection sort is among the simplest of sorting techniques, and it works very
well for small files. It has a quite important property: each item is actually
moved at most once.
Selection sort is a method of choice for sorting files with very large objects
(records) and small keys. The worst case occurs when the array is already
sorted in descending order and we want to sort it in ascending order.
Selection sort spends most of its time trying to find the minimum element in
the unsorted part of the array. This clearly shows the similarity between
selection sort and bubble sort: bubble sort also selects the maximum
remaining element at each stage, but wastes some effort imparting some
order to the unsorted part of the array.
Selection sort is quadratic in both the worst and the average case, and
requires no extra memory. This algorithm is not suitable for large data sets,
as its average- and worst-case complexities are Ο(n²), where n is the number
of items.
How Does Selection Sort Work?
Consider the following array as an example: 14, 33, 27, 10, 35, 19, 42, 44.
For the first position in the sorted list, the whole list is scanned sequentially.
With 14 presently stored in the first position, we search the whole list and
find that 10 is the lowest value. So we swap 14 with 10. After one iteration,
10, which happens to be the minimum value in the list, appears in the first
position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the
list in a linear manner.
We find that 14 is the second lowest value in the list and it should appear at
the second place. We swap these values.
After two iterations, the two lowest values are positioned at the beginning in
sorted order.
The same process is applied to the rest of the items in the array.
Analysis:
1. Input: n elements are given.
2. Output: the number of comparisons required to sort them.
3. Logic: if we have n elements, then selection sort requires n-1 passes to
   produce a sorted array, as sketched below.
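As an illustration, here is a minimal Python version of the n-1 passes described above (the function name and 0-based indexing are choices of this sketch):

def selection_sort(a):
    # Pass i selects the smallest element of the unsorted part a[i..n-1]
    # and swaps it into position i, so at most n-1 swaps are done in total.
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]

data = [14, 33, 27, 10, 35, 19, 42, 44]
selection_sort(data)
print(data)                          # [10, 14, 19, 27, 33, 35, 42, 44]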
Related to selection is the function Select1, which places the kth-smallest
element into position a[k] and partitions the remaining elements so that
a[i] ≤ a[k] for 1 ≤ i < k, and a[i] ≥ a[k] for k < i ≤ n.
Algorithm Select1(a, n, k)
// Selects the kth-smallest element in a[1:n] and places it
// in the kth position of a[]. The remaining elements are
// rearranged such that a[m] ≤ a[k] for 1 ≤ m < k, and
// a[m] ≥ a[k] for k < m ≤ n.
{
    low := 1; up := n + 1;
    a[n+1] := ∞; // a[n+1] is set to infinity.
    repeat
    {
        // Each time the loop is entered,
        // 1 ≤ low ≤ k ≤ up ≤ n+1.
        j := Partition(a, low, up);
        // j is such that a[j] is the jth-smallest value in a[].
        if (k = j) then return;
        else if (k < j) then up := j; // j is the new upper limit.
        else low := j + 1; // j+1 is the new lower limit.
    } until (false);
}
In analyzing Select1, we make the same assumptions that were made for QuickSort:
1. The n elements are distinct.
2. The input distribution is such that the partition element can be the
   ith-smallest element of a[m:p-1] with an equal probability for each i,
   1 ≤ i ≤ p-m.
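A Python sketch of the Select1 idea follows, reusing the partition function sketched in the quick sort section (0-based array indexing with k still 1-based; like the analysis above, it assumes distinct elements, and it bounds the scans instead of using the sentinel a[n+1] = ∞):

def select(a, k):
    # Return the kth-smallest element of a (k is 1-based, as in Select1).
    low, up = 0, len(a)
    target = k - 1                       # 0-based rank of the answer
    while True:
        j = partition(a, low, up)        # a[j] is now in its final place
        if j == target:
            return a[j]
        elif target < j:
            up = j                       # the answer lies left of j
        else:
            low = j + 1                  # the answer lies right of j

data = [14, 33, 27, 10, 35, 19, 42, 44]
print(select(data, 3))                   # 19, the 3rd-smallest element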
STRASSEN’S MATRIX MULTIPLICATION
Let A and B be two n x n matrices. The product matrix C = AB is also an n x n
matrix, whose (i, j)th element is formed by taking the elements in the ith row
of A and the jth column of B and multiplying them pairwise:
c[i, j] = Σ a[i, k]·b[k, j], summed over 1 ≤ k ≤ n, for all i and j between 1
and n. To compute every c[i, j] using this formula, we require n³
multiplications.
The divide-and-conquer strategy suggests another way to compute the
product of two n x n matrices. We will assume that n is a power of 2; in case
n is not a power of 2, enough rows and columns of zeros may be added to
both A and B so that the resulting dimensions are a power of 2. Imagine that
A and B are each partitioned into four submatrices, each having dimension
n/2 x n/2. Then the product AB can be computed by using the formula for
the product of 2 x 2 matrices:
[ A11  A12 ] [ B11  B12 ]   [ C11  C12 ]
[ A21  A22 ] [ B21  B22 ] = [ C21  C22 ]
Then
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
If n = 2, then the above formula is computed using a multiplication operation
for the elements of A and B. For n > 2, the elements of C can be computed
using matrix multiplication and addition operations applied to matrices of
size n/2 x n/2. Since n is a power of 2, these matrix products can be
recursively computed by the same algorithm used for the n x n case.
Strassen formulated a method to reduce the number of matrix
multiplications so as to reduce the complexity of the algorithm. This method
uses only 7 multiplications and 18 additions or subtractions. It first computes
seven n/2 x n/2 matrices P, Q, R, S, T, U, and V as below:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U
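Since each level of recursion performs 7 multiplications on half-size matrices, the running time satisfies T(n) = 7T(n/2) + an², whose solution is O(n^log₂7) ≈ O(n^2.81), compared with O(n³) for the direct method. The following NumPy-based Python sketch (a minimal illustration, assuming n is a power of 2 as above) implements these formulas:

import numpy as np

def strassen(A, B):
    # Multiply two n x n matrices (n a power of 2) using 7 recursive
    # multiplications per level instead of 8.
    n = A.shape[0]
    if n == 1:
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    P = strassen(A11 + A22, B11 + B22)
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)
    C11 = P + S - T + V
    C12 = R + T
    C21 = Q + S
    C22 = P + R - Q + U
    return np.block([[C11, C12], [C21, C22]])

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))   # True: matches direct product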
Limitations of Strassen’s Algorithm
From a practical point of view, Strassen’s algorithm is often not the method
of choice for matrix multiplication, for the following four reasons:
1. The constant factor hidden in the running time of Strassen’s
   algorithm is larger than the constant factor in the naive θ(n³)
   method.
2. When the matrices are sparse, methods tailored for sparse matrices are
   faster.
3. Strassen’s algorithm is not quite as numerically stable as the naive method.
4. The submatrices formed at the levels of recursion consume space.