ALGORITHM ANALYSIS
• Prepared by: Engr. Basharat Jehan
• Qualification: MS Software Engineering
• Lecturer, Agriculture University Peshawar, Amir Muhammad Khan Campus Mardan, KP, Pakistan
• Email: Basharatjehan1987@gmail.com
Algorithm
• An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.
OR
• An algorithm is any well-defined
computational procedure that takes some
value, or set of values, as input and produces
some value, or set of values, as output. An
algorithm is thus a sequence of
computational steps that transform the input
into the output.
• For example, a sorting algorithm takes a sequence of numbers as input and produces a sorted list as output.
The problem of sorting
• An algorithm is said to be correct if, for every
input instance, it halts with the correct
output. We say that a correct algorithm solves
the given computational problem. An
incorrect algorithm might not halt at all on
some input instances, or it might halt with an
incorrect answer.
Data Structure
• A data structure is a way to store and organize data in order to facilitate access and modifications.
Analyzing algorithms
• Analyzing an algorithm has come to mean
predicting the resources that the algorithm
requires.
• Analyzing an algorithm determines the amount of
“time” that algorithm takes to execute. This is not
really a number of seconds or any other clock
measurement but rather an approximation of the
number of operations that an algorithm
performs. The number of operations is related to
the execution time, so we will sometimes use the
word time to describe an algorithm’s
computational complexity.
• Most of what we will be discussing is going to
be how efficient various algorithms are in
terms of time, but some forms of analysis
could be done based on how much space an
algorithm needs to complete its task. This
space complexity analysis was critical in the
early days of computing when storage space
on a computer (both internal and external)
was limited.
Why Analyze an Algorithm?
• There are several answers to this basic
question, depending on one’s frame of
reference: the intended use of the algorithm,
the importance of the algorithm in relationship
to others from both practical and theoretical
standpoints, the difficulty of analysis, and the
accuracy and precision of the required answer.
• We want to know how long an implementation of
a particular algorithm will run on a particular
computer, and how much space it will require.
We generally strive to keep the analysis
independent of particular implementations—we
concentrate instead on obtaining results for
essential characteristics of the algorithm that can
be used to derive precise estimates of true
resource requirements on various actual
machines.
Kinds of analyses
• An algorithm can require different times to solve different problems of the same size.
– E.g., searching for an item in a list of n elements using sequential search. Cost: 1, 2, ..., n
• Worst-Case Analysis – The maximum amount of time that an algorithm requires to solve a problem of size n.
– This gives an upper bound for the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.
• Best-Case Analysis – The minimum amount of time that an algorithm requires to solve a problem of size n.
– The best-case behavior of an algorithm is NOT so useful.
• Average-Case Analysis – The average amount of time that an algorithm requires to solve a problem of size n.
– Sometimes, it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n and the probability distribution of these organizations.
– Worst-case analysis is more common than average-case analysis.
Analysis of algorithms
The theoretical study of computer-program
performance and resource usage.
What’s more important than performance?
• modularity
• correctness
• maintainability
• functionality
• robustness
• user-friendliness
• programmer time
• simplicity
• extensibility
• reliability
Problem Solving Process
(Dr Nazir A. Zafar, Advanced Algorithms Analysis and Design)
• Problem
• Strategy
• Algorithm
– Input
– Output
– Steps
• Analysis
– Correctness
– Time & Space
– Optimality
• Implementation
• Verification
Algorithm for finding maximum
number in an array
Algorithm arrayMax(A, n)
  Input: array A of n integers
  Output: maximum element of A
  currentMax ← A[0]
  for i ← 1 to n − 1 do
    if A[i] > currentMax then
      currentMax ← A[i]
  return currentMax
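• A minimal Python sketch of the arrayMax pseudocode above (Python is used here only for illustration; it is not part of the original slides):

def array_max(A, n):
    """Return the maximum of A[0..n-1], following arrayMax(A, n) above."""
    current_max = A[0]
    for i in range(1, n):
        if A[i] > current_max:      # a larger element was found
            current_max = A[i]
    return current_max

print(array_max([3, 9, 1, 7], 4))   # 9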
Searching Algorithms
• Necessary components to search a list of data
– Array containing the list
– Length of the list
– Item for which you are searching
• After search completed
– If item found, report “success,” return location in array
– If item not found, report “not found” or “failure”
Sequential Search
• In computer science, linear search or
sequential search is a method for finding a
particular value in a list that checks each
element in sequence until the desired element
is found or the list is exhausted. The list need
not be ordered.
• Suppose that you want to determine whether 27 is in the list
• First compare 27 with list[0]; that is, compare 27 with
35
• Because list[0] ≠ 27, you then compare 27 with
list[1]
• Because list[1] ≠ 27, you compare 27 with the next
element in the list
• Because list[2] = 27, the search stops
• This search is successful!
Searching Algorithms (Cont’d)
Figure 1: Array list with seven (07) elements
• Let’s now search for 10
• The search starts at the first element in the list; that
is, at list[0]
• Proceeding as before, we see that this time the
search item, which is 10, is compared with every
item in the list
• Eventually, no more data is left in the list to compare
with the search item; this is an unsuccessful search
Searching Algorithms (Cont’d)
Algorithm of Sequential Searching
• The complete algorithm for sequential search is
// list: the elements to be searched
// target: the value being searched for
// N: the number of elements in the list
SequentialSearch(list, target, N)
  for i = 1 to N do
    if (target = list[i])
      return i
    end if
  end for
  return 0
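• A Python sketch of the sequential search above. The pseudocode uses 1-based indexing and returns 0 on failure; this sketch uses Python's 0-based indexing and returns -1. The list below is hypothetical, chosen only so that (as in the figure) list[0] = 35 and list[2] = 27:

def sequential_search(lst, target):
    """Check each element in sequence until target is found or the list is exhausted."""
    for i, value in enumerate(lst):
        if value == target:
            return i
    return -1

print(sequential_search([35, 12, 27, 18, 45, 16, 38], 27))   # 2 (successful search)
print(sequential_search([35, 12, 27, 18, 45, 16, 38], 10))   # -1 (unsuccessful search)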
Binary Search Algorithm
• Can only be performed on a sorted list!
• Uses the divide and conquer technique to search the list
• Search item is compared with middle element of
list
• If search item < middle element of list, search is
restricted to first half of the list
• If search item > middle element of list, search
second half of the list
• If search item = middle element, search is
complete
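• A minimal Python sketch of the steps above, assuming a sorted list (an illustration, not part of the original slides; the example list is hypothetical since the figure values are not in the text):

def binary_search(lst, target):
    """Repeatedly compare the target with the middle element of the remaining range."""
    low, high = 0, len(lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if lst[mid] == target:
            return mid              # search item = middle element: search is complete
        elif target < lst[mid]:
            high = mid - 1          # restrict the search to the first half
        else:
            low = mid + 1           # restrict the search to the second half
    return -1                       # target is not in the list

print(binary_search([4, 8, 19, 25, 34, 39, 45, 48, 66, 75, 89, 95], 75))   # 9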
Binary Search Algorithm (Cont’d)
• Determine whether 75 is in the list
Binary Search Algorithm (Cont’d)
Figure 2: Array list with twelve (12) elements
Figure 3: Search list, list[0] … list[11]
Binary Search Algorithm (Cont’d)
Figure 4: Search list, list[6] … list[11]
RATES OF GROWTH
In analysis of algorithms, it is not important to know exactly how many operations an algorithm does. Of greater concern is the rate of increase in operations for an algorithm to solve a problem as the size of the problem increases. This is referred to as the rate of growth of the algorithm. What happens with small sets of input data is not as interesting as what happens when the data set gets large.
Because we are interested in general behavior, we just look at the overall growth rate of algorithms, not at the details. If we look closely at the graph in Fig. 1.1, we will see some trends. The function based on x² increases slowly at first, but as the problem size gets larger, it begins to grow at a rapid rate. The functions that are based on x both grow at a steady rate for the entire length of the graph. The function based on log x seems to not grow at all, but this is because it is actually growing at a very slow rate. The relative height of the functions is also different when we have small values versus large ones. Consider the value of the functions when x is 2.
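• Fig. 1.1 itself is not reproduced in the text; the small Python sketch below (an illustration, not part of the slides) simply tabulates typical growth functions at a small and at larger input sizes to make the same point:

import math

for x in (2, 16, 1024):
    print(f"x = {x:5d}   log2(x) = {math.log2(x):6.1f}   "
          f"x*log2(x) = {x * math.log2(x):9.1f}   x^2 = {x * x:8d}")
# At x = 2 the functions are close together; at x = 1024 the x^2 curve dominates.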
Time Complexity
In computer science, the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
How to analyze Time Complexity
Exchange Sort
A method of arranging records or other types
of data into a specified order, in which
adjacent pairs of records are
exchanged until the correct order is achieved.
Exchange sort Algorithm
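• The algorithm slide itself is a figure in the source; the following is a minimal Python sketch of one common formulation of exchange sort (an assumption for illustration, not taken from the slides):

def exchange_sort(a):
    """Compare a[i] with every later element and swap whenever the pair is out of order."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]   # exchange the out-of-order pair
    return a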
Exchange Sort Analysis
• Best Case: O(n)
• Average and Worst Case: O(n²)
Bubble sort
• Bubble sort, sometimes referred to as sinking
sort, is a simple sorting algorithm that
repeatedly steps through the list to be sorted,
compares each pair of adjacent items and
swaps them if they are in the wrong order.
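• A minimal Python sketch of bubble sort with the usual early-exit flag, so an already-sorted list finishes in one pass (illustrative only, not part of the original slides):

def bubble_sort(a):
    """Repeatedly step through the list, swapping adjacent items that are out of order."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:             # no swaps in a full pass: the list is sorted
            break
    return a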
Bubble Sort Algorithm Analysis
• Best Case: O(n)
• Average and Worst Case: O(n²)
The Execution Time of Algorithms (cont.)
Example: Simple Loop
                      Cost    Times
i = 1;                c1      1
sum = 0;              c2      1
while (i <= n) {      c3      n+1
  i = i + 1;          c4      n
  sum = sum + i;      c5      n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
=> The time required for this algorithm is proportional to n.
The Execution Time of Algorithms (cont.)
Example: Nested Loop
                      Cost    Times
i = 1;                c1      1
sum = 0;              c2      1
while (i <= n) {      c3      n+1
  j = 1;              c4      n
  while (j <= n) {    c5      n*(n+1)
    sum = sum + i;    c6      n*n
    j = j + 1;        c7      n*n
  }
  i = i + 1;          c8      n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
=> The time required for this algorithm is proportional to n².
Insertion Sort
• A sorting method in which the algorithm scans the whole list one element at a time and, in each iteration, places the current element in its correct position among the already-sorted elements.
Example
Algorithm
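• The algorithm slide content is a figure in the source; a minimal Python sketch of the standard insertion sort (an assumption for illustration, not taken from the slides):

def insertion_sort(a):
    """Scan the list left to right; in each iteration, insert a[i] into its
    correct position among the already-sorted elements a[0..i-1]."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]         # shift larger elements one place to the right
            j -= 1
        a[j + 1] = key
    return a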
See an animated example at
• http://courses.cs.vt.edu/csonline/Algorithms/Lessons/InsertionCardSort/insertioncardsort.swf
Selection Sort
• It works as follows: first find the smallest in
the array and exchange it with the element in
the first position, then find the second
smallest element and exchange it with the
element in the second position, and continue
in this way until the entire array is sorted.
Algorithm
MIN(A, K, N, LOC)
1. Set MIN := A[K] and LOC := K.
2. Repeat for J = K+1, K+2, ..., N:
   If MIN > A[J], then set MIN := A[J] and LOC := J.
3. Interchange A[K] and A[LOC].
4. Exit.
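• A minimal Python sketch of selection sort built around the MIN pass above (0-based indexing instead of the 1-based pseudocode; illustrative only):

def selection_sort(a):
    """For each position k, find the smallest element of a[k..n-1] and interchange it with a[k]."""
    n = len(a)
    for k in range(n - 1):
        loc = k
        for j in range(k + 1, n):
            if a[j] < a[loc]:           # a new minimum was found: remember its location
                loc = j
        a[k], a[loc] = a[loc], a[k]     # interchange A[K] and A[LOC]
    return a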
Algorithm
Example
• Final Term Course
Quicksort
• Quicksort is a divide and conquer algorithm. Quicksort
first divides a large array into two smaller sub-arrays:
the low elements and the high elements. Quicksort can
then recursively sort the sub-arrays.
The steps are:
• Pick an element, called a pivot, from the array.
• Reorder the array so that all elements with values less than the pivot come before the pivot, while all elements with values greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
• Recursively apply the above steps to the sub-array of elements with smaller values and to the sub-array of elements with greater values.
• Example 2.3 Suppose the array contains these numbers in sequence:
15 22 13 27 12 10 20 25   (the pivot is the first item, 15)
• Partition the array so that all items smaller than the pivot item are to the left of it and all items larger are to the right:
» 10 13 12 15 22 27 20 25
• Sort the sub-arrays:
10 12 13 15 20 22 25 27
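• A minimal Python sketch of quicksort. It uses extra lists rather than the in-place partition shown above, but it follows the same divide-and-conquer steps (pick a pivot, partition, recursively sort the sub-arrays):

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[0]                                   # pick the first element as the pivot
    smaller = [x for x in a[1:] if x <= pivot]     # elements that go before the pivot
    larger = [x for x in a[1:] if x > pivot]       # elements that go after the pivot
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([15, 22, 13, 27, 12, 10, 20, 25]))
# [10, 12, 13, 15, 20, 22, 25, 27]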
Merge Sort
• A process related to sorting is merging. By two-way merging we mean combining two sorted arrays into one sorted array. By repeatedly applying the merging procedure, we can sort an array. For example, to sort an array of 16 items, we can divide it into two subarrays, each of size 8, sort the two subarrays, and then merge them to produce the sorted array. In the same way, each subarray of size 8 can be divided into two subarrays of size 4, and these subarrays can be sorted and merged. Eventually, the size of the subarrays will become 1, and an array of size 1 is trivially sorted. This procedure is called "Mergesort."
Steps in Merge Sort
1) Divide the array into two subarrays each with
n/2 items.
2) Conquer (solve) each subarray by sorting it.
Unless the array is sufficiently small, use
recursion to do this.
3) Combine the solutions to the subarrays by
merging them into a single sorted array
Example
• Suppose the array contains these numbers in
sequence:
– 27 10 12 20 25 13 15 22.
• Divide the array:
– 27 10 12 20 and 25 13 15 22.
• Sort each subarray:
– 10 12 20 27 and 13 15 22 25.
• Merge the subarrays:
– 10 12 13 15 20 22 25 27.
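• A minimal Python sketch of merge sort following the three steps above (illustrative, not part of the original slides):

def merge(left, right):
    """Two-way merge: combine two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(a):
    """Divide the array in half, recursively sort each half, then merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

print(merge_sort([27, 10, 12, 20, 25, 13, 15, 22]))
# [10, 12, 13, 15, 20, 22, 25, 27]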
See slides for the difference between quick sort, merge sort, and heap sort:
• http://www.slideshare.net/MohammedHussein8/quick-sort-merge-sort-heap-sort
Decision Trees for Sorting Algorithms
typedef int keytype;

/* Sorts the three keys S[1], S[2], S[3] using at most three comparisons. */
void sortthree(keytype S[])
{
    keytype a = S[1], b = S[2], c = S[3];

    if (a < b) {
        if (b < c)      { S[1] = a; S[2] = b; S[3] = c; }   /* order: a, b, c */
        else if (a < c) { S[1] = a; S[2] = c; S[3] = b; }   /* order: a, c, b */
        else            { S[1] = c; S[2] = a; S[3] = b; }   /* order: c, a, b */
    } else if (b < c) {
        if (a < c)      { S[1] = b; S[2] = a; S[3] = c; }   /* order: b, a, c */
        else            { S[1] = b; S[2] = c; S[3] = a; }   /* order: b, c, a */
    } else {
        S[1] = c; S[2] = b; S[3] = a;                       /* order: c, b, a */
    }
}
Every-Case Time Complexity
• If T(n) is the time complexity, then it is called an every-case complexity if, for every instance of size n, the algorithm performs the same number of basic operations.
Computational Complexity
• Computational complexity, which is a field
that runs hand-in-hand with algorithm design
and analysis, is the study of all possible
algorithms that can solve a given problem. A
computational complexity analysis tries to
determine a lower bound (Ω) on the efficiency
of all algorithms for a given problem.
• We introduce computational complexity analysis by studying the
Sorting problem. There are two reasons for choosing this problem.
First, quite a few algorithms have been devised that solve the
problem. By studying and comparing these algorithms, we can gain
insight into how to choose among several algorithms for the same
problem and how to improve a given algorithm. Second, the
problem of sorting is one of the few problems for which we have
been successful in developing algorithms whose time complexities
are about as good as our lower bound. That is, for a large class of
sorting algorithms, we have determined a lower bound of Ω(n lg n),
and we have developed Θ(n lg n) algorithms. Therefore, we can say
that we have solved the Sorting problem as far as this class of
algorithms is concerned.
Permutation
• A permutation of the first n positive integers
can be thought of as an ordering of those
integers. Because there are n! permutations of
the first n positive integers, there are n! different orderings of those integers. For example, the following six permutations are all the orderings of the first three positive integers:
[1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1]
Theorem
• “Any algorithm that sorts n distinct keys only by comparisons of keys and removes at most one inversion after each comparison must, in the worst case, do at least n(n−1)/2 comparisons of keys and, on average, at least n(n−1)/4 comparisons of keys.”
Proof
• Suppose that currently the array S contains the
permutation [2, 4, 3, 1] and we are comparing 2
with 1. After that comparison, 2 and 1 will be
exchanged, thereby removing the inversions
(2, 1), (4, 1), and (3, 1). However, the inversions (4,
2) and (3, 2) have been added, and the net
reduction in inversions is only one. This example
illustrates the general result that Exchange Sort
always has a net reduction of at most one
inversion after each comparison.
Dynamic Programming
• Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure.
Examples of dynamic programming
Pascal triangle
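• The Pascal triangle slide is a figure in the source; a minimal Python sketch of building it with dynamic programming (each entry is computed from the previously stored row, so subproblems are solved once and reused):

def pascal_triangle(rows):
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]                       # the previously computed row
        row = [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
        triangle.append(row)
    return triangle

for row in pascal_triangle(5):
    print(row)
# [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]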
Solve the given graph using Floyd’s
algorithm
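• The graph for this exercise is given only as a figure in the source; the sketch below shows Floyd's all-pairs shortest-path algorithm on a small hypothetical 4-vertex weight matrix (the matrix is an assumption for illustration):

INF = float('inf')

def floyd(dist):
    """dist[i][j] is the direct edge weight (INF if there is no edge, 0 on the diagonal)."""
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]   # a shorter path through vertex k
    return d

example = [[0, 3, INF, 7],
           [8, 0, 2, INF],
           [5, INF, 0, 1],
           [2, INF, INF, 0]]
for row in floyd(example):
    print(row)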
The Greedy Algorithm
• A greedy algorithm proceeds by grabbing data items in sequence, each time taking the one that is deemed "best" according to some criterion, without regard for the choices it has made before or will make in the future. One should not get the impression that there is something wrong with greedy algorithms because of the negative connotations of Scrooge and the word "greedy"; they often lead to very efficient and simple solutions.
Prim’s Algorithm for Minimum Spanning Tree
Step 3: Create Table
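• The Prim's-algorithm slides contain only figures and tables in the source; a minimal Python sketch of the algorithm (grow the tree from a start vertex, always adding the cheapest edge that reaches a new vertex; the adjacency-list format is an assumption):

import heapq

def prim(graph, start):
    """graph maps each vertex to a list of (neighbour, weight) pairs."""
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)              # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return tree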
Kruskal’s Algorithm
Work with edges, rather than nodes
Two steps:
– Sort edges by increasing edge weight
– Select the first |V| – 1 edges that do not
generate a cycle
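• A minimal Python sketch of the two steps above, using a simple union-find structure for the cycle test; the edge list matches the walk-through that follows:

def kruskal(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                       # representative of v's component
        while parent[v] != v:
            v = parent[v]
        return v

    tree = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # sort by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components, so no cycle
            parent[ru] = rv
            tree.append((u, v, w))
        if len(tree) == len(vertices) - 1:
            break
    return tree

edges = [('D','E',1), ('D','G',2), ('E','G',3), ('C','D',3), ('G','H',3),
         ('C','F',3), ('B','C',4), ('B','E',4), ('B','F',4), ('B','H',4),
         ('A','H',5), ('D','F',6), ('A','B',8), ('A','F',10)]
mst = kruskal('ABCDEFGH', edges)
print(mst, sum(w for _, _, w in mst))   # total weight 21, as in the walk-through below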
Walk-Through
Consider an undirected, weighted graph on the vertices A–H (the original slides show the graph as a figure; its edges and weights are listed in the table below).
Sort the edges by increasing edge weight:
edge   dv
(D,E)   1
(D,G)   2
(E,G)   3
(C,D)   3
(G,H)   3
(C,F)   3
(B,C)   4
(B,E)   4
(B,F)   4
(B,H)   4
(A,H)   5
(D,F)   6
(A,B)   8
(A,F)  10
Select the first |V| – 1 = 7 edges that do not generate a cycle, taking the edges in the sorted order:
• (D,E) 1 – accepted
• (D,G) 2 – accepted
• (E,G) 3 – rejected; accepting edge (E,G) would create a cycle
• (C,D) 3 – accepted
• (G,H) 3 – accepted
• (C,F) 3 – accepted
• (B,C) 4 – accepted
• (B,E) 4 – rejected (would create a cycle)
• (B,F) 4 – rejected (would create a cycle)
• (B,H) 4 – rejected (would create a cycle)
• (A,H) 5 – accepted; the tree now has |V| – 1 = 7 edges
• (D,F) 6, (A,B) 8, (A,F) 10 – not considered
Done. The minimum spanning tree consists of the edges (D,E), (D,G), (C,D), (G,H), (C,F), (B,C), and (A,H).
Total Cost = Σ dv = 1 + 2 + 3 + 3 + 3 + 4 + 5 = 21