CHAPTER TWO: COMPLEXITY ANALYSIS
6/28/2022 DSA- CH-2: Complexity Analysis
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
Introduction to Data Structures and Algorithms Analysis
A program is written in order to solve a problem.
A solution consists of two things:
  A way to organize the data
  A sequence of steps to solve the problem
The way data are organized in a computer's memory is called a data structure, and
the clearly defined sequence of computational steps the computer follows to solve a problem is called an algorithm.
Therefore, a program is nothing but data structures plus algorithms.
Algorithm Analysis - Concepts
Algorithm analysis refers to the process of determining the amount of computing time and storage space required by different algorithms.
In other words, it is the process of predicting the resource requirements of algorithms in a given environment.
To solve a problem, there are many possible algorithms; one has to be able to choose the best algorithm for the problem at hand using some scientific method.
To classify some data structures and algorithms as good, we need precise ways of analyzing them in terms of resource requirements.
The main resources are: running time, memory usage and communication bandwidth.
Running time is usually treated as the most important resource, since computational time is the most precious resource in most problem domains.
Two approaches to measure the efficiency of algorithms:
  Theoretical: determining mathematically the quantity of resources (execution time, memory space, etc.) needed by each algorithm.
  Empirical: programming competing algorithms and trying them on different problem instances.
Theoretical - we can analyze an algorithm according to the number of operations required, rather than according to an absolute amount of time involved.
This can show how an algorithm's efficiency changes according to the size of the input.
Empirical - however, it is difficult to use actual clock-time as a consistent measure of an algorithm's efficiency, because clock-time can vary based on many things. For example:
  Specific processor speed
  Current processor load
  Specific data for a particular run of the program (input size and input properties)
  Operating environment
Important terms
Computational complexity
  A measure of the amount of time, storage, or other resources needed to execute an algorithm, used to compare the efficiency of algorithms.
Asymptotic complexity
  Asymptotic time complexity: the limiting behavior of the execution time of an algorithm as the size of the problem goes to infinity. This is usually denoted in big-O notation.
  Asymptotic space complexity: the limiting behavior of the memory use of an algorithm as the size of the problem goes to infinity. This is usually denoted in big-O notation.
Theoretical Approach - Complexity Analysis
Complexity analysis is the systematic study of the cost of computation, measured either in time units, in operations performed, or in the amount of storage space required.
The goal is to have a meaningful measure that permits comparison of algorithms independent of the operating platform.
There are two things to consider:
  Time Complexity: determine the approximate number of operations required to solve a problem of size n.
  Space Complexity: determine the approximate memory required to solve a problem of size n.
Complexity analysis involves two distinct phases:
1. Algorithm Analysis: analysis of the algorithm or data structure to produce a function T(n) that describes the algorithm in terms of the operations performed, in order to measure the complexity of the algorithm.
2. Order of Magnitude Analysis: analysis of the function T(n) to determine the general complexity category to which it belongs.
Algorithm Analysis Rules:
OPTION 1: Exact count
There is no generally accepted set of rules for algorithm analysis. However, an exact count of operations is commonly used.
We assume an arbitrary time unit.
1. Execution of each of the following operations takes time 1:
  Assignment operation
  Single input/output operation
  Single Boolean operation
  Single arithmetic operation
  Function return
2. The running time of a selection statement (if, switch) is the time for the condition evaluation + the maximum of the running times of the individual clauses in the selection.
3. Loops: the running time of a loop is equal to the running time of the statements inside the loop * the number of iterations.
4. For nested loops, analyze inside out. Always assume that the loop executes the maximum number of iterations possible.
5. The total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the loops.
6. The running time of a function call is 1 for setup + the time for any parameter calculations + the time required for the execution of the function body.
Complexity Analysis: Loops
We start by considering how to count operations in for-loops.
We use integer division throughout.
First of all, we should know the number of iterations of the loop; say it is x.
  Then the loop condition is executed x + 1 times.
  Each of the statements in the loop body is executed x times.
  The loop-index update statement is executed x times.
Complexity Analysis: Loops (with <)
In the following for-loops:
for (int i = k; i < n; i += m)      for (int i = n; i > k; i -= m)
{                                   {
    statement1;                         statement1;
    statement2;                         statement2;
}                                   }
The number of iterations is: (n – k) / m
The initialization statement, i = k, is executed one time.
The condition, i < n, is executed (n – k) / m + 1 times.
The update statement, i = i + m, is executed (n – k) / m times.
Each of statement1 and statement2 is executed (n – k) / m times.
Complexity Analysis: Loops (with <=)
In the following for-loops:
for (int i = k; i <= n; i += m)     for (int i = n; i >= k; i -= m)
{                                   {
    statement1;                         statement1;
    statement2;                         statement2;
}                                   }
The number of iterations is: (n – k) / m + 1
The initialization statement, i = k, is executed one time.
The condition, i <= n, is executed (n – k) / m + 2 times.
The update statement, i = i + m, is executed (n – k) / m + 1 times.
Each of statement1 and statement2 is executed (n – k) / m + 1 times.
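The iteration-count formulas for both loop forms can be checked empirically. The sketch below (function names are mine) simply counts body executions; the formulas are exact when m divides (n – k):

```cpp
#include <cassert>

// Counts iterations of "for (int i = k; i < n; i += m)".
// Matches (n - k) / m exactly when m divides (n - k).
int iterationsLess(int k, int n, int m) {
    int count = 0;
    for (int i = k; i < n; i += m) count++;
    return count;
}

// Counts iterations of "for (int i = k; i <= n; i += m)".
// Matches (n - k) / m + 1 exactly when m divides (n - k).
int iterationsLessEqual(int k, int n, int m) {
    int count = 0;
    for (int i = k; i <= n; i += m) count++;
    return count;
}
```

When m does not divide (n – k), the exact count is the ceiling of (n – k)/m; the slide's integer-division convention assumes the divisible case.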
Complexity Analysis: Loops With Logarithmic Iterations
In the following for-loops (with <):
for (int i = k; i < n; i *= m) {    for (int i = n; i > k; i /= m) {
    statement1;                         statement1;
    statement2;                         statement2;
}                                   }
The number of iterations is: log_m(n / k)
In the following for-loops (with <=):
for (int i = k; i <= n; i *= m) {   for (int i = n; i >= k; i /= m) {
    statement1;                         statement1;
    statement2;                         statement2;
}                                   }
The number of iterations is: log_m(n / k) + 1
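The logarithmic count can be verified the same way; a minimal sketch (names are mine), exact when n / k is a power of m:

```cpp
#include <cassert>

// Counts iterations of "for (int i = k; i < n; i *= m)".
// Matches log_m(n / k) exactly when n / k is a power of m.
int iterationsLog(int k, int n, int m) {
    int count = 0;
    for (int i = k; i < n; i *= m) count++;
    return count;
}
```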
Example 1
Time Units to Compute
Line 3: 1 for the assignment statement: int k = 0
Line 4: 1 for the output statement.
Line 5: 1 for the input statement.
Line 6: in the for loop: 1 assignment, n + 1 tests, and n increments.
Line 7: n loops of 2 units each for an assignment and an addition.
Line 8: 1 for the return statement.
T(n) = 1 + 1 + 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 6 = O(n)
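The code this analysis refers to is not reproduced on the slide. Below is a hypothetical reconstruction (the names and loop body are my assumptions, chosen only to match the per-line costs above), instrumented to tally the assumed time units instead of performing real I/O:

```cpp
#include <cassert>

// Hypothetical reconstruction of Example 1, instrumented to count time units.
// Each charge mirrors the slide's accounting; I/O is only charged, not performed.
long timeUnits(long n) {
    long t = 0;
    t += 1;                      // line 3: int k = 0 (assignment)
    t += 1;                      // line 4: output statement
    t += 1;                      // line 5: input statement (reads n)
    long k = 0;
    t += 1;                      // line 6: loop initialization i = 0
    for (long i = 0; i < n; i++) {
        t += 1;                  //         loop test i < n (n true tests)
        k = k + 1;               // line 7: assignment + addition
        t += 2;
        t += 1;                  //         increment i++
    }
    t += 1;                      // line 6: final (failing) loop test
    t += 1;                      // line 8: return statement
    return t;                    // equals 4n + 6
}
```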
Example 2
Time Units to Compute
Line 3: 1 for the assignment statement: int sum = 0
Line 4: in the for loop: 1 assignment, n + 1 tests, and n increments.
Line 5: n loops of 2 units each for an assignment and an addition.
Line 6: 1 for the return statement.
T(n) = 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 4 = O(n)
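The slide's code is not shown; a hypothetical reconstruction consistent with the analysis above (the loop bounds and names are my assumptions) is:

```cpp
#include <cassert>

// Hypothetical reconstruction of Example 2: summing 1..n.
int sumUpTo(int n) {
    int sum = 0;                      // line 3: 1 unit
    for (int i = 1; i <= n; i++)      // line 4: 1 + (n+1) + n units
        sum = sum + i;                // line 5: 2n units
    return sum;                       // line 6: 1 unit
}
```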
Example 3
Time Units to Compute
Line 2: 2 for the assignment statements: x = 2.5, y = 3.0
Line 3: in the for loop: 1 assignment, n + 1 tests, and n increments.
Line 5: n loops of 2 units each for an assignment and a multiplication.
Line 6: n loops of 2 units each for an assignment and a multiplication.
Line 7: n loops of 2 units each for an assignment and an addition.
T(n) = 2 + (1 + n + 1 + n) + 2n + 2n + 2n = 8n + 4 = O(n)
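Again the slide's code is not shown. A hypothetical reconstruction (the doubling operations are my assumptions, chosen only to match "assignment + multiplication" and "assignment + addition" on each line) is:

```cpp
#include <cassert>

// Hypothetical reconstruction of Example 3.
double compute(int n) {
    double x = 2.5, y = 3.0, z = 0.0;  // line 2
    for (int i = 0; i < n; i++) {      // line 3
        x = x * 2;                     // line 5: assignment + multiplication
        y = y * 2;                     // line 6: assignment + multiplication
        z = x + y;                     // line 7: assignment + addition
    }
    return z;
}
```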
Exercise 1
Exercise 2
OPTION 2: Formal Approach to Algorithm Analysis
Analysis can be simplified by using a formal approach, in which case we can ignore initializations, loop control, and bookkeeping.
For Loops: Formally
In general, a for loop translates to a summation. The index and bounds of the summation are the same as the index and bounds of the for loop.
for (int i = 1; i <= N; i++) {
    sum = sum + i;
}
Suppose we count the number of additions that are done. There is 1 addition per iteration of the loop, hence N additions in total:
Σ (i = 1 to N) 1 = N
Nested Loops: Formally
Nested for loops translate into multiple summations, one for each for loop.
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= M; j++) {
        sum = sum + i + j;
    }
}
Again, count the number of additions (2 per inner iteration). The outer summation is for the outer for loop:
Σ (i = 1 to N) Σ (j = 1 to M) 2 = Σ (i = 1 to N) 2M = 2MN
Consecutive Statements: Formally
Add the running times of the separate blocks of your code.
for (int i = 1; i <= N; i++) {
    sum = sum + i;
}
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= N; j++) {
        sum = sum + i + j;
    }
}
Counting additions:
Σ (i = 1 to N) 1 + Σ (i = 1 to N) Σ (j = 1 to N) 2 = N + 2N²
Conditionals: Formally
If (test) s1 else s2: compute the maximum of the running times for s1 and s2.
if (test == 1) {
    for (int i = 1; i <= N; i++) {
        sum = sum + i;
    }
}
else {
    for (int i = 1; i <= N; i++) {
        for (int j = 1; j <= N; j++) {
            sum = sum + i + j;
        }
    }
}
Counting additions:
max( Σ (i = 1 to N) 1, Σ (i = 1 to N) Σ (j = 1 to N) 2 ) = max(N, 2N²) = 2N²
Example 4
Suppose we have hardware capable of executing 10⁶ instructions per second. How long would it take to execute an algorithm whose complexity function is T(n) = 2n² on an input of size n = 10⁸?
The total number of operations to be performed would be T(10⁸):
T(10⁸) = 2*(10⁸)² = 2*10¹⁶
The required number of seconds is given by T(10⁸)/10⁶, so:
Running time = 2*10¹⁶/10⁶ = 2*10¹⁰ seconds
Days? 2*10¹⁰ / 86,400 ≈ 231,481 days
Years? ≈ 634 years
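The arithmetic above can be sketched as a small helper (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Running time in seconds for T(n) = 2n^2 on a machine executing
// 10^6 instructions per second.
double runningTimeSeconds(double n) {
    double operations = 2.0 * n * n;   // T(n) = 2n^2
    return operations / 1e6;           // divide by machine speed
}
```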
Important mathematical series formulas
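The slide's formula list is not reproduced in this text; the series most commonly used in loop analysis (and sufficient for the exercises below) include:

```latex
\sum_{i=1}^{n} 1 = n \qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \qquad
\sum_{i=0}^{n} a^i = \frac{a^{n+1}-1}{a-1} \quad (a \neq 1)
```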
Exercises
Determine the run time equation and complexity of each of the following code segments.
Exercise 3
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        sum = sum + i + j;
    }
}
Exercise 4
for (int i = 1; i <= n; i++)
{
    for (int j = 1; j <= i; j++)
    {
        sum++;
    }
}
What is the value of sum if n = 20?
Exercise 5
int k = 0;
for (int i = 0; i < n; i++)
{
    for (int j = i; j < n; j++)
    {
        k++;
    }
}
What is the value of k when n is equal to 20?
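A quick way to check an answer to this kind of question is to run the fragment; a minimal harness for Exercise 5 (the wrapper function is added for testing):

```cpp
#include <cassert>

// Exercise 5 fragment wrapped in a function so the final k can be inspected.
// The inner loop runs n, n-1, ..., 1 times as i increases.
int exercise5(int n) {
    int k = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {
            k++;
        }
    }
    return k;   // n + (n-1) + ... + 1 = n(n+1)/2
}
```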
Exercise 6
int k = 0;
for (int i = 1; i < n; i *= 2)
{
    for (int j = 1; j < n; j++)
    {
        k++;
    }
}
What is the value of k when n is equal to 20?
Exercise 7
int x = 0;
for (int i = 1; i < n; i = i + 5)
    x++;
What is the value of x when n = 25?

int x = 0;
for (int k = n; k >= n/3; k = k - 5)
    x++;
What is the value of x when n = 25?
Exercise 8
int x = 0;
for (int i = 1; i < n; i = i + 5)
    for (int k = n; k >= n/3; k = k - 5)
        x++;
What is the value of x when n = 25?

int x = 0;
for (int i = 1; i < n; i = i + 5)
    for (int j = 0; j < i; j++)
        for (int k = n; k >= n/2; k = k - 3)
            x++;
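As with the earlier exercises, the triply nested fragment can be checked by running it; a sketch wrapper (the function name is mine, n/2 uses integer division):

```cpp
#include <cassert>

// Exercise 8, second fragment, wrapped for testing: three nested loops whose
// bounds depend on n (and, for the middle loop, on the outer index i).
int exercise8(int n) {
    int x = 0;
    for (int i = 1; i < n; i = i + 5)
        for (int j = 0; j < i; j++)
            for (int k = n; k >= n / 2; k = k - 3)
                x++;
    return x;
}
```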
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
Order of Magnitude Analysis - Measures of Times
In order to determine the running time of an algorithm, it is possible to define three functions Tbest(n), Tavg(n) and Tworst(n) as the best, average and worst case running times of the algorithm, respectively.
1. Average case (Tavg): the amount of time the algorithm takes on an "average" set of inputs.
2. Worst case (Tworst): the amount of time the algorithm takes on the worst possible set of inputs.
3. Best case (Tbest): the amount of time the algorithm takes on the best possible set of inputs.
We are usually interested in the worst-case time, since it provides a guarantee that holds for every input.
Asymptotic Analysis
Asymptotic analysis is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.
There are five notations used to describe a running time function:
  Big-Oh Notation (O) (<=)
  Big-Omega Notation (Ω) (>=)
  Theta Notation (Θ) (=)
  Little-o Notation (o) (<)
  Little-Omega Notation (ω) (>)
The Big-Oh Notation
Big-Oh notation is a way of comparing algorithms and is used for computing the complexity of algorithms, i.e., the amount of time that it takes for a computer program to run.
It is only concerned with what happens for very large values of n; therefore only the largest term in the expression (function) is needed.
For example, if the number of operations in an algorithm is n² – n, then n is insignificant compared to n² for large values of n, and so the n term is ignored. Of course, for small values of n it may be important; however, Big-Oh is mainly concerned with large values of n.
Formal Definition: f(n) = O(g(n)) if there exist c, k ∊ ℝ⁺ such that for all n ≥ k, f(n) ≤ c·g(n).
By the definition above, demonstrating that a function f is big-O of a function g requires that we find specific constants c and k for which the inequality holds (and show that the inequality does, in fact, hold).
Big-O expresses an upper bound on the growth rate of a function for sufficiently large values of n.
Example: take the function f(n) = 3n² + 5n – 3 = O(n²).
Standard Method to Prove Big-Oh
1. Choose k = 1.
2. Assuming n > 1, find/derive a c such that f(n)/g(n) ≤ c.
This shows that n > 1 implies f(n) ≤ c·g(n).
Keep in mind:
  n > 1 implies 1 < n, n < n², n² < n³, ...
  Increase the numerator to simplify the fraction.
Example 1: Proving Big-Oh
Show that f(n) = n² + 2n + 1 is O(n²).
Choose k = 1.
Assuming n > 1, then
f(n)/g(n) = (n² + 2n + 1)/n² < (n² + 2n² + n²)/n² = (4n²)/n² = 4
Choose c = 4. Note that 2n < 2n² and 1 < n².
Thus, n² + 2n + 1 is O(n²), because n² + 2n + 1 ≤ 4n² whenever n > 1.
Example 2: Proving Big-Oh
Show that f(n) = 4n + 6 is O(n).
Choose k = 1.
Assuming n > 1, then
f(n)/g(n) = (4n + 6)/n < (4n + 6n)/n = (10n)/n = 10
Choose c = 10. Note that 6 < 6n.
Thus, 4n + 6 is O(n), because 4n + 6 ≤ 10n whenever n > 1.
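The chosen constants can be spot-checked numerically. A small sketch for Example 2 (c = 10, k = 1; the function name is mine):

```cpp
#include <cassert>

// Checks the witness inequality f(n) = 4n + 6 <= c * g(n) = 10n
// for a single value of n > k = 1.
bool witnessHolds(long n) {
    long f = 4 * n + 6;
    long cg = 10 * n;    // c = 10, g(n) = n
    return f <= cg;
}
```

A numeric check is not a proof, of course, but it catches a wrongly chosen c quickly.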
Exercise 1: Proving Big-Oh
Show that f(n) = (n + 1)³ is O(n³),
where (n + 1)³ = n³ + 3n² + 3n + 1.
???
Standard Method to Prove Not Big-Oh
1. Assume n > 1.
2. Show that f(n)/g(n) > h(n) for some increasing, invertible function h.
3. Then n > h⁻¹(c) implies h(n) > c, which implies f(n) > c·g(n).
So choosing n > 1, n > k, and n > h⁻¹(c) implies n > k and f(n) > c·g(n), contradicting the definition for any candidate c and k.
Example 3: Proving Not Big-Oh
Show that f(n) = n² – 2n + 1 is not O(n).
Assuming n > 1, then
f(n)/g(n) = (n² – 2n + 1)/n > (n² – 2n)/n = n – 2
n > c + 2 implies n – 2 > c and f(n) > cn.
So choosing n > 1, n > k, and n > c + 2 implies n > k and f(n) > cn.
Decrease the numerator to simplify the fraction.
Example 4: Proving Not Big-Oh
Show that f(n) = n²/2 is not O(n).
Assuming n > 1, then
f(n)/g(n) = (n²/2)/n = n/2
n > 2c implies n/2 > c and f(n) > cn.
So choosing n > 1, n > k, and n > 2c implies n > k and f(n) > cn.
Exercise 2: Proving Not Big-Oh
Show that f(n) = (n + 1)³ is not O(n²),
where (n + 1)³ = n³ + 3n² + 3n + 1.
???
Examples: the following facts can be used in Big-Oh problems:
  1 ≤ n for all n ≥ 1
  n ≤ n² for all n ≥ 1
  2ⁿ ≤ n! for all n ≥ 4
  log₂n ≤ n for all n ≥ 2
  n ≤ n·log₂n for all n ≥ 2
Big-Oh Rules
If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
1. Drop lower-order terms
2. Drop constant factors
Use the smallest possible class of functions:
  2n is O(n) instead of 2n is O(n²)
Use the simplest expression of the class:
  3n + 5 is O(n) instead of 3n + 5 is O(3n)
Big-O Theorems
For all the following theorems, assume that f(n) is a non-negative function of n and that K is an arbitrary constant.
Theorem 1: K is O(1).
Theorem 2: a polynomial is O(the term containing the highest power of n).
  f(n) = 7n⁴ + 3n² + 5n + 1000 is O(7n⁴)
Theorem 3: K*f(n) is O(f(n)), i.e., constant coefficients can be dropped.
  g(n) = 7n⁴ is O(n⁴)
Theorem 4: if f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)). [transitivity]
  f(n) = 6n² + 3n – 8 is O(6n²) = O(g(n))
  g(n) = 6n² is O(n³) = O(h(n))
  h(n) = n³
  f(n) = O(h(n))
Theorem 5: for any base b, logb(n) is O(log(n)).
  All logarithms grow at the same rate: logb(n) is O(logd(n)) where b, d > 1.
Theorem 6: each of the following functions is strictly big-O of its successors:
  K [constant]
  logb(n) [always log base 2 if no base is shown]
  n
  n·logb(n)
  n²
  n to higher powers
  2ⁿ
  3ⁿ
  larger constants to the n-th power
  n! [n factorial]
  nⁿ
f(n) = 3n·logb(n) + 4·logb(n) + 2 is O(n·logb(n)) and O(n²) and O(2ⁿ).
Theorem 7: in general, f(n) is big-O of the dominant term of f(n), where "dominant" may usually be determined from Theorem 6.
  f(n) = 7n² + 3n·log(n) + 5n + 1000 is O(n²)
  g(n) = 7n⁴ + 3ⁿ + 1000000 is O(3ⁿ)
  h(n) = 7n(n + log(n)) is O(n²)
Properties of Big-O
Higher powers grow faster:
  n^r is O(n^s) if 0 ≤ r ≤ s
The fastest growing term dominates a sum:
  if f(n) is O(g(n)), then f(n) + g(n) is O(g(n))
  e.g. 5n⁴ + 6n³ is O(n⁴)
Exponential functions grow faster than powers:
  e.g. n²⁰ is O(1.05ⁿ)
Logarithms grow more slowly than powers:
  e.g. log n is O(n^0.5)
Big-Omega Notation (Ω)
Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound.
Formal Definition: a function f(n) is Ω(g(n)) if there exist constants c, k ∊ ℝ⁺ such that f(n) ≥ c·g(n) for all n ≥ k.
f(n) = Ω(g(n)) means that f(n) is greater than or equal to some constant multiple of g(n) for all values of n greater than or equal to k.
Example: if f(n) = n², then f(n) = Ω(n).
In simple terms, f(n) = Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
Theta Notation (Θ)
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large values of n.
Formal Definition: a function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)). In other words, there exist constants c1, c2, and k > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ k.
If f(n) = Θ(g(n)), then g(n) is an asymptotically tight bound for f(n).
In simple terms, f(n) = Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Examples:
1. If f(n) = 2n + 1, then f(n) = Θ(n).
2. If f(n) = 2n², then
   f(n) = O(n⁴)
   f(n) = O(n³)
   f(n) = O(n²)
All these are technically correct, but the last expression is the best, tight one. Since 2n² and n² have the same growth rate, we can write f(n) = Θ(n²).
Little-o Notation
A Big-Oh bound may or may not be asymptotically tight; for example:
  2n² = O(n²) is tight, but 2n² = O(n³) is not.
f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n ≥ k.
Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.
Example: f(n) = 3n + 4 is o(n²).
In simple terms, f(n) has a lower growth rate than g(n).
For g(n) = 2n²: g(n) = o(n³) and g(n) = O(n²), but g(n) is not o(n²).
Little-Omega Notation (ω)
Little-omega (ω) notation is to big-omega (Ω) notation as little-o notation is to Big-Oh notation.
We use ω notation to denote a lower bound that is not asymptotically tight.
Formal Definition: f(n) = ω(g(n)) if for all c > 0 there exists a constant k > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ k.
Example: 2n² = ω(n), but 2n² is not ω(n²).
Relational Properties of the Asymptotic Notations
Transitivity
  If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n));
  if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n));
  if f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n));
  if f(n) = o(g(n)) and g(n) = o(h(n)), then f(n) = o(h(n)); and
  if f(n) = ω(g(n)) and g(n) = ω(h(n)), then f(n) = ω(h(n)).
Symmetry
  f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry
  f(n) = O(g(n)) if and only if g(n) = Ω(f(n));
  f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
Reflexivity
  f(n) = Θ(f(n)),
  f(n) = O(f(n)),
  f(n) = Ω(f(n)).
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
Common complexity classes
Algorithms can be classified by their time and space complexities, and in this respect several classes of algorithms can be distinguished.
For example, an algorithm is called constant if its execution time remains the same for any number of elements, and quadratic if its execution time is O(n²).
For each class, the number of operations is shown below along with the real time needed to execute them on a machine able to perform 1 million operations per second, i.e. one operation per microsecond (µsec).
Class               f(n)        n = 10                n = 10²                      n = 10³
Constant            O(1)        1       1 µsec        1       1 µsec               1        1 µsec
Logarithmic         O(log n)    3.3     3 µsec        6.6     7 µsec               9.97     10 µsec
Linear              O(n)        10      10 µsec       10²     100 µsec             10³      1 msec
Linear logarithmic  O(n log n)  33.2    33 µsec       664     664 µsec             9970     10 msec
Quadratic           O(n²)       10²     100 µsec      10⁴     10 msec              10⁶      1 sec
Cubic               O(n³)       10³     1 msec        10⁶     1 sec                10⁹      16.7 min
Exponential         O(2ⁿ)       1024    1 msec        10³⁰    3.17*10¹⁶ yr         10³⁰¹    –
Class               f(n)        n = 10⁴                  n = 10⁵                 n = 10⁶
Constant            O(1)        1         1 µsec         1         1 µsec        1           1 µsec
Logarithmic         O(log n)    13.3      13 µsec        16.6      17 µsec       19.93       20 µsec
Linear              O(n)        10⁴       10 msec        10⁵       0.1 sec       10⁶         1 sec
Linear logarithmic  O(n log n)  133*10³   133 msec       166*10⁴   1.6 sec       199.3*10⁵   20 sec
Quadratic           O(n²)       10⁸       1.7 min        10¹⁰      2.8 hr        10¹²        11.6 days
Cubic               O(n³)       10¹²      11.6 days      10¹⁵      31.7 yr       10¹⁸        31,709 yr
Exponential         O(2ⁿ)       10³⁰¹⁰    –              10³⁰¹⁰³   –             10³⁰¹⁰³⁰    –
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
Types of analysis
The term analysis of algorithms is used to describe approaches to the study of the performance of algorithms. In this course we will perform the following types of analysis:
  The worst-case runtime
  The best-case runtime
  The average-case runtime
  The amortized runtime
The worst-case runtime complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.
The best-case runtime complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.
The average-case runtime complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n.
The amortized runtime complexity of the algorithm is the function defined by a sequence of operations applied to the algorithm, averaged over that sequence.
Example
Let us consider an algorithm of sequential searching in an array of size n.
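The searching code itself is not shown on the slide; a typical sequential search, written here as a sketch, would be:

```cpp
#include <cassert>

// A typical sequential search (a sketch; the slide's own code is not shown).
// Returns the index of key in a[0..n-1], or -1 if it is not present.
int sequentialSearch(const int a[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (a[i] == key)
            return i;     // best case: key at index 0, one comparison, O(1)
    }
    return -1;            // worst case: key absent, all n elements examined, O(n)
}
```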
Its worst-case runtime complexity is O(n).
Its best-case runtime complexity is O(1).
Its average-case runtime complexity is O(n/2) = O(n).
Questions?
Thank You