CMP202 Data Structures and Algorithms
Complexity Analysis of Algorithms
Prof. Dr. Güray YILMAZ
Algorithm Analysis
❑ I have a computational problem that needs to be solved.
❑ Emre and Selin have different ideas for how to solve it.
❑ How do we know which is better?
❑ Easy! Have them both write the code and run it,
and see which is faster.
❑ THIS IS A TERRIBLE IDEA!
❑ How can we analyze their suggestions
o before they have to write the code
o independently of their machines?
Motivations for Complexity Analysis
❑ There are often many different algorithms which can be used to solve
the same problem.
❑ Develop some techniques that allow us to:
▪ compare different algorithms with respect to their “efficiency”
▪ choose the most efficient algorithm for the problem
Motivations for Complexity Analysis
❑ Complexity analysis helps evaluate the
efficiency of an algorithm in terms of
time and space.
▪ time efficiency: the time it takes to execute
▪ space efficiency: the memory space it uses
❑ By understanding its computational resources
(time and memory usage), you can select the
most efficient algorithm for a given problem,
especially when working with large datasets.
❑ We will focus on an algorithm’s efficiency
with respect to time.
Motivations for Complexity Analysis
❑ Scalability Assessment
▪ As the size of the input grows, an algorithm’s performance can drastically change.
▪ Analyzing its complexity helps predict how well the algorithm will scale and
whether it will remain feasible for large inputs.
▪ This ensures that the algorithm remains practical and efficient under heavy workloads.
Machine independence
❑ The evaluation of efficiency should be as machine independent as possible.
❑ It is not useful to measure how fast the algorithm runs, as this depends on the particular computer, OS, programming language, compiler, and kind of inputs used in testing.
❑ Instead,
▪ we count the number of basic operations the algorithm performs.
▪ we calculate how this number depends on the size of the input.
❑ A basic operation takes a constant amount of time to execute.
❑ Hence, the efficiency of an algorithm is the number of basic operations it
performs. This number is a function of the input size n.
Example of Basic Operations:
❑ Arithmetic operations: *, /, %, +, -
❑ Assignment statements of simple data types: x = 5
❑ Reading of primitive types: x = scanner.nextInt()
❑ Writing of primitive types: System.out.print(x)
❑ Simple conditional tests: if (x < 12) ...
❑ Method call (the execution time of the method body itself may depend on the parameter values, and it may not be constant): c = calculate(a, b)
❑ A method's return statement: return x
❑ We consider operations such as ++, +=, and *= as consisting of two basic operations
❑ Memory access
Note: To simplify complexity analysis, we will not count memory access (fetch or store) operations.
Example of Basic Operations:
❑ Consider an algorithm that finds the maximum value in an unsorted array of
integers.
❑ The algorithm simply scans through the entire array and keeps track of the
largest number found so far.
Algorithm:
❑ Input: an array arr[] of size n.
❑ Process: start by assuming the first element is the maximum,
then iterate through the rest of the array, updating the maximum if a larger element is found.
❑ Output: the maximum value in the array.
Example of Basic Operations:
Pseudocode:
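A minimal Java sketch of the scan described above (the method name findMax is illustrative):

// Returns the maximum value in an unsorted array.
// Assumes arr is non-null and contains at least one element.
public static int findMax(int[] arr) {
    int max_value = arr[0];              // assume the first element is the maximum
    for (int i = 1; i < arr.length; i++) {
        if (arr[i] > max_value)          // compare with the largest value seen so far
            max_value = arr[i];          // update the maximum
    }
    return max_value;
}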
Example of Basic Operations:
Step 1: Identify Basic Operations
❑ In this algorithm, the basic operations can be considered as:
1. Comparison: arr[i] > max_value
2. Assignment: max_value = arr[i]
❑ These are constant-time operations because their execution time does not depend on
the size of the input.
Example of Basic Operations:
Step 2: Count Basic Operations
❑ For an array of size n, the loop runs from index 1 to n-1, so it iterates n-1 times.
❑ In each iteration, up to two basic operations are performed:
• one comparison: arr[i] > max_value
• one assignment: max_value = arr[i] (only when the comparison is true)
❑ Thus, in the worst case, the total number of basic operations will be:
• comparison: n-1 comparisons.
• assignment: In the worst case, an assignment will occur in each of the n-1 iterations, so
there are n-1 assignments.
Example of Basic Operations:
Step 3: Calculate Efficiency
The total number of basic operations in the algorithm is:
❑ Total operations = (n−1)(comparisons) + (n−1)(assignments) = 2(n−1)
❑ This result gives us a linear function of the input size n.
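To see the count concretely, a hedged instrumentation sketch (the class name and counter variables are illustrative additions, not part of the algorithm); a strictly increasing array triggers the worst case, n - 1 comparisons and n - 1 assignments:

class OperationCountDemo {
    public static void main(String[] args) {
        int[] arr = {1, 2, 3, 4, 5};     // strictly increasing: worst case for assignments
        int comparisons = 0, assignments = 0;
        int max_value = arr[0];
        for (int i = 1; i < arr.length; i++) {
            comparisons++;               // counts the test: arr[i] > max_value
            if (arr[i] > max_value) {
                assignments++;           // counts the update: max_value = arr[i]
                max_value = arr[i];
            }
        }
        // For n = 5 this prints "4 comparisons, 4 assignments": 2(n-1) = 8 operations
        System.out.println(comparisons + " comparisons, " + assignments + " assignments");
    }
}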
Best (Ω), Average (Θ) and Worst (O) case complexities
❑ We are usually interested in the worst-case complexity ➔ the maximum number of operations that might be performed for a given problem size.
❑ We will mostly set aside the other cases -- best case and average case.
❑ Why we focus on worst-case analysis:
▪ easier to compute
▪ usually close to the actual running time
▪ crucial for real-time systems (e.g. air-traffic control)
Best (Ω), Average (Θ) and Worst (O) case complexities
❑ Example: Linear Search Algorithm Complexity
❑ Best Case: Item found at the beginning ➔ 1 comparison
❑ Worst Case: Item found at the end ➔ n comparisons
❑ Average Case: Item may be found at index 0, or 1, or 2, . . . or n - 1
▪ average number of comparisons ➔ (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons
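For reference, a minimal Java sketch of the linear search analyzed above (the method name linearSearch is illustrative); the three cases count how often the comparison in the loop runs:

// Returns the index of target in arr, or -1 if target is absent.
public static int linearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == target)            // best case: true at i = 0 (1 comparison)
            return i;
    }
    return -1;                           // worst case: n comparisons were made
}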
Asymptotic Complexity
❑ Finding the exact complexity of an algorithm (the exact number of basic operations, f(n)) is difficult.
❑ We approximate f(n) by a function g(n) in a way that does not substantially change the magnitude of f(n).
❑ The function g(n) is sufficiently close to f(n) for large values of the input size n.
❑ This "approximate" measure of efficiency is called asymptotic complexity.
Asymptotic Complexity
❑ An asymptotic complexity measure does not give the exact number of operations of an algorithm, but it shows how that number grows with the size of the input.
❑ This allows us to understand the efficiency of an algorithm in the long term, independent of specific machine parameters (such as CPU speed or memory size).
[Figure: growth of running time versus input size n]
Big-O (asymptotic) Notation
❑ Defn: f(n) = O(g(n)) ➔ if there exist positive constants c and n0 such that:
f(n) ≤ c*g(n) for all n ≥ n0
Big-O (asymptotic) Notation
Implication of the definition:
❑ For all sufficiently large n, c*g(n) is an upper bound of f(n)
Note: By the definition of Big-O:
f(n) = 3n + 4 is O(n)
it is also O(n²),
it is also O(n³),
...
it is also O(nⁿ)
❑ However, when Big-O notation is used, the function g(n) is CHOSEN
TO BE AS SMALL AS POSSIBLE
❑ We call such a function g(n) a tight asymptotic bound of f(n)
Big-O (asymptotic) Notation
O(1) Constant
❑ Some Big-O complexity classes in O(log(n)) Logarithmic
order of magnitude from smallest
O(n) Linear
to highest:
O(n log(n)) Linearithmic
O(nx) {e.g., O(n2), O(n3), etc} Polynomial
O(an) {e.g., O(1.6n), O(2n), ...} Exponential
O(nn) Exponential
O(n!) Factorial
Examples of Algorithms and their big-O complexity
Big-O Notation: Examples of Algorithms
O(1) – constant: push/pop (stack operations), enqueue/dequeue (queue operations), accessing an array element
O(log(n)) – logarithmic: Binary search
O(n) – linear: Linear search
O(n log(n)) – linearithmic: Heap sort, Quick sort (average case), Merge sort
O(n²) – quadratic: Selection sort, Insertion sort, Bubble sort
O(n³) – cubic: Matrix multiplication
O(2ⁿ) – exponential: Towers of Hanoi
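As one entry from the table, a minimal Java sketch of binary search on a sorted array; the search interval halves on every iteration, which is where the O(log(n)) bound comes from:

// Returns the index of target in a sorted array, or -1 if target is absent.
public static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;    // midpoint, written to avoid int overflow
        if (sorted[mid] == target)
            return mid;
        else if (sorted[mid] < target)
            lo = mid + 1;                // discard the left half
        else
            hi = mid - 1;                // discard the right half
    }
    return -1;                           // at most about log2(n) + 1 iterations
}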
Warnings about O-Notation
❑ Big-O notation cannot compare algorithms in the same complexity class.
❑ It only gives sensible comparisons of algorithms in different complexity
classes when n is large.
❑ Consider two algorithms for the same task:
Linear: f(n) = 1000n
Quadratic: h(n) = n²/1000
The quadratic one is faster for n < 1,000,000, since 1000n = n²/1000 exactly when n = 1,000,000.
Rules for using big-O
For large values of the input n, constant factors and lower-order terms of n are ignored.
1. Multiplicative Constants Rule: Ignoring constant factors
O(c f(n)) = O(f(n)), where c is a constant
Example: O(20n³) = O(n³)
2. Addition Rule: Ignoring smaller terms
If O(f(n)) > O(h(n)) then O(f(n) + h(n)) = O(f(n))
Example: O(n² log n + n³) = O(n³)
O(2000n³ + 2n! + n⁸⁰⁰ + 10ⁿ + 27n log n + 5) = O(n!)
3. Multiplication Rule: O(f(n)*h(n)) = O(f(n))*O(h(n))
Example: O((n³ + 2n² + 3n log n + 7)(8n² + 5n + 2)) = O(n⁵)
Proving Big-O Complexity
To prove that f(n) is O(g(n)), we find any pair of values n0 and c that satisfies:
f(n) ≤ c*g(n) for n ≥ n0
Note: The pair (n0, c) is not unique. If such a pair exists, then there are infinitely many such pairs.
Example: Prove that f(n) = 3n² + 5 is O(n²)
We try to find some values of n0 and c by solving the following inequality:
3n² + 5 ≤ cn²   OR   3 + 5/n² ≤ c
(By putting in different values for n0, we get corresponding values for c)
n0:   1     2      3      4        → ∞
c:    8     4.25   3.55   3.3125   → 3
Proving Big-O Complexity
Example:
Prove that f(n) = 3n² + 4n log n + 10 is O(n²) by finding appropriate values for c and n0.
We try to find some values of n0 and c by solving the following inequality:
3n² + 4n log n + 10 ≤ cn²   OR   3 + 4 log n / n + 10/n² ≤ c
(We used log base 2, but another base could be used as well)
n0:   1     2     3      4      → ∞
c:    13    7.5   6.22   5.62   → 3
More Big-O Examples
◼ 7n - 2
7n - 2 is O(n)
need c > 0 and n0 ≥ 1 such that 7n - 2 ≤ c*n for n ≥ n0
this is true for c = 7 and n0 = 1
◼ 3n³ + 20n² + 5
3n³ + 20n² + 5 is O(n³)
need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c*n³ for n ≥ n0
this is true for c = 4 and n0 = 21
◼ 3 log n + 5
3 log n + 5 is O(log n)
need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c*log n for n ≥ n0
this is true for c = 8 and n0 = 2
How to determine complexity of code structures
Loops: for, while, and do-while:
Complexity is determined by multiplying the number of iterations of the loop
by the complexity of the operations within the loop.
Examples:
for (int i = 0; i < n; i++)
    sum = sum - i;                    O(n)

for (int i = 0; i < n * n; i++)
    sum = sum + i;                    O(n²)

i = 1;
while (i < n) {
    sum = sum + i;
    i = i * 2;
}                                     O(log n), since i doubles each iteration
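A small hedged demo (the class name, sample values, and printout are illustrative) that reproduces the doubling loop's update rule and counts its iterations, comparing the count with log2(n):

class DoublingLoopDemo {
    public static void main(String[] args) {
        for (int n : new int[] {10, 1000, 1000000}) {
            int i = 1, iterations = 0;
            while (i < n) {              // the doubling loop from the slide
                i = i * 2;
                iterations++;
            }
            // iterations is the smallest k with 2^k >= n, i.e. ceil(log2(n))
            System.out.println("n = " + n + ": " + iterations + " iterations, log2(n) = "
                    + (Math.log(n) / Math.log(2)));
        }
    }
}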
How to determine complexity of code structures
Nested Loops:
Complexity of inner loop * complexity of outer loop.
Examples:
sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        sum += i * j;                 O(n²)

i = 1;
while (i <= n) {
    j = 1;
    while (j <= n) {
        statements of constant complexity
        j = j * 2;
    }
    i = i + 1;
}                                     O(n log n): the inner loop is O(log n), run n times
How to determine complexity of code structures
Sequence of statements: Use Addition rule
O(s1; s2; s3; … sk) = O(s1) + O(s2) + O(s3) + … + O(sk)
= O(max(s1, s2, s3, . . . , sk))
Example:
for (int j = 0; j < n * n; j++)
    sum = sum + j;
for (int k = 0; k < n; k++)
    sum = sum - k;
System.out.print("sum is now " + sum);
Complexity is O(n²) + O(n) + O(1) = O(n²)
How to determine complexity of code structures
Switch: Take the complexity of the most expensive case
char key;
int[] X = new int[n];
int[][] Y = new int[n][n];
........
switch (key) {
case 'a':
    for (int i = 0; i < X.length; i++)
        sum += X[i];                          O(n)
    break;
case 'b':
    for (int i = 0; i < Y.length; i++)
        for (int j = 0; j < Y[0].length; j++)
            sum += Y[i][j];                   O(n²)
    break;
} // end of switch block
Overall Complexity: O(n²)
How to determine complexity of code structures
If Statement: Take the complexity of the most expensive case
char key;
int[][] A = new int[n][n];
int[][] B = new int[n][n];
int[][] C = new int[n][n];
........
if (key == '+') {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = A[i][j] + B[i][j];            O(n²)
} // end of if block
else if (key == 'x')
    C = matrixMult(A, B);                           O(n³)
else
    System.out.println("Error! Enter '+' or 'x'!"); O(1)
Overall complexity: O(n³)
How to determine complexity of code structures
• Sometimes if-else statements must be checked carefully
O(if-else) = O(Condition)+ Max[O(if), O(else)]
int[] integers = new int[n];
........
if (hasPrimes(integers) == true)
    integers[0] = 20;                 O(1)
else
    integers[0] = -20;                O(1)

public boolean hasPrimes(int[] arr) {
    for (int i = 0; i < arr.length; i++)
        ..........
        ..........                    O(n)
} // End of hasPrimes()

O(if-else) = O(Condition) = O(n)
How to determine complexity of code structures
Note: Sometimes a loop may cause the if-else rule not to be applicable.
• Consider the following loop:
while (n > 0) {
    if (n % 2 == 0) {
        System.out.println(n);
        n = n / 2;
    } else {
        System.out.println(n);
        n = n - 1;
    }
}
• If the else-branch (n = n - 1) ran on every iteration, one would conclude that the loop is O(n).
• However, the if-branch dominates: decrementing an odd n always yields an even n, which is then halved, so n is at least halved every two iterations. For example, if n is 61, then the sequence of n is: 60, 30, 15, 14, 7, 6, 3, 2, 1, and 0. Hence the loop is logarithmic and its complexity is O(log n).
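A hedged check (the class name and sample values are illustrative) that reproduces the loop's update rule and counts iterations; the counts track roughly 2*log2(n) rather than n:

class HalvingLoopDemo {
    public static void main(String[] args) {
        for (int n : new int[] {61, 1000, 1000000}) {
            int m = n, iterations = 0;
            while (m > 0) {
                if (m % 2 == 0) m = m / 2;   // if-branch: halve
                else m = m - 1;              // else-branch: step down to an even number
                iterations++;
            }
            // m is at least halved every two iterations, so iterations <= 2*log2(n) + 2
            System.out.println("n = " + n + ": " + iterations + " iterations");
        }
    }
}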
O – Omega – Theta
❑ Big-O is an upper bound
▪ the code will run in at most this order of time
❑ Big-Omega (Ω) is a lower bound
❑ Big-Theta (Θ) is a tight bound (both an upper and a lower bound):
c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0
Graph Examples: O
Graph Examples: Omega (Ω)
Graph Examples: Theta (Θ)
Omega (Ω) notation example
• f(n) = 2n + 5, g(n) = n → is f(n) = Ω(g(n))? How?
▪ 2n + 5 ≥ 2n, for all n ≥ 1
▪ c = 2 and n0 = 1
Omega () notation example
❑ f(n) = 5n2-3n, g(n) = n2 → is f(n) = (g(n)) ? How?
▪ A function f(n) is Ω(g(n)), if there exist positive constants c and n0 such that
for all n≥n0, the following inequality holds: f(n)≥c⋅g(n)
▪ Applying the Definition: f(n)=5n2−3n g(n)=n2
▪ We need to find constants c>0 and n0 such that for all n≥n0: 5n2−3n ≥ c⋅n2
▪ Simplifying the Inequality: Divide both sides by n2 ➔ 5−3/n ≥ c
▪ As n increases, the term 3/n decreases and approaches 0
▪ Therefore, for sufficiently large n, 3/n becomes negligible
▪ Choosing Constants: Let's choose c=4, then we need ➔ 5−3/n≥4
▪ Subtract 4 from both sides ➔ 1≥3/n
▪ Multiply both sides by n ➔ n≥3
▪ Therefore, for n≥3 and c=4 ➔ f(n) = (g(n))
Omega (Ω) notation example
Theta (Θ) notation example
Big-Theta: c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0
• f(n) = 2n + 5 → Θ(n)? How?
▪ 2n ≤ 2n + 5 ≤ 3n ➔ for all n ≥ 5
▪ c1 = 2, c2 = 3 and n0 = 5
• f(n) = 5n² - 3n → Θ(n²)? How?
▪ 4n² ≤ 5n² - 3n ≤ 5n² ➔ for all n ≥ 4
▪ c1 = 4, c2 = 5 and n0 = 4
Theta (Θ) notation example