Asymptotic Analysis
1. Introduction to Asymptotic Analysis
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. Three notations, Big O, Omega, and Theta, are most commonly used to represent the time complexity of algorithms; they are described in Section 3.
Asymptotic analysis refers to the study of an algorithm's behavior (in terms of time or space
complexity) as the input size becomes very large. The aim is to determine how the running
time or space requirement of an algorithm scales with respect to the size of the input.
The input size is typically denoted by n, and asymptotic analysis uses mathematical
expressions to describe the algorithm's performance for large n.
Why Asymptotic Analysis?
Comparing Algorithms: Helps compare the efficiency of different algorithms.
Scaling: Predicts how algorithms perform with large inputs.
Optimization: Guides towards more efficient algorithmic choices.
2. Time and Space Complexity
Time Complexity: The amount of time an algorithm takes to complete as a function of the input size n.
o Example: If an algorithm takes 3n² operations, its time complexity is O(n²).
Space Complexity: The amount of extra memory an algorithm uses as a function of the input size n.
o Example: If an algorithm uses n extra memory cells, its space complexity is O(n).
Both cases are illustrated by the sketch below.
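The following Python sketch (function names made up for illustration) shows both cases at once: count_pairs runs a doubly nested loop over n items, so its operation count grows on the order of n², while copy_list allocates a new list of n elements, i.e., O(n) extra space.

# Illustrative sketch only; the function names are made up for this example.

def count_pairs(items):
    """Nested loops over n items: roughly n*n comparisons, so O(n^2) time."""
    n = len(items)
    count = 0
    for i in range(n):
        for j in range(n):
            if items[i] == items[j]:
                count += 1
    return count

def copy_list(items):
    """Allocates a new list of n elements: O(n) extra space."""
    return [x for x in items]

print(count_pairs([1, 2, 2, 3]))   # 6 equal pairs (each element also matches itself)
print(copy_list([1, 2, 3]))        # [1, 2, 3]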
3. Notations Used in Asymptotic Analysis
Asymptotic analysis uses several standard notations to describe the growth rates of
algorithms. These notations represent the worst-case, average-case, and best-case
complexities of algorithms.
a) Big O Notation (O)
Definition: Describes an upper bound of the algorithm's growth rate. It represents the
worst-case scenario of the algorithm’s time or space complexity.
Example: If an algorithm takes at most 5n² + 3n + 1 steps, then its time complexity is
O(n²).
Properties:
o Focuses on the upper bound of the performance.
o Excludes constant factors and lower-order terms.
The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort: it takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²); note that O(n²) also covers linear time. The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm, and many times we can easily find such an upper bound simply by looking at the algorithm.
For a given function g(n), we denote by O(g(n)) the following set of functions:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c×g(n) for all n ≥ n0 }
Example: f(n) = 2n + 3
2n + 3 ≤ 10n ∀ n ≥ 1
Here, c = 10, n0 = 1, g(n) = n
=> f(n) = O(n)
Also, 2n + 3 ≤ 2n + 3n, so
2n + 3 ≤ 5n ∀ n ≥ 1
And, 2n + 3 ≤ 2n² + 3n²
2n + 3 ≤ 5n² ∀ n ≥ 1
=> f(n) = O(n²)
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(3ⁿ) < O(nⁿ)
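The witnesses c and n0 in the definition can be checked numerically over a finite range of n. The Python sketch below (illustrative names; a finite check is evidence, not a proof) verifies the constants used in the example above for f(n) = 2n + 3 and g(n) = n.

# Sketch: numerically check a Big O witness (c, n0) over a finite range of n.

def upper_bound_holds(f, g, c, n0, n_max=10_000):
    """True if f(n) <= c*g(n) for every integer n in [n0, n_max]."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: 2 * n + 3    # f(n) = 2n + 3
g = lambda n: n            # g(n) = n

print(upper_bound_holds(f, g, c=10, n0=1))  # True:  f(n) = O(n) with c = 10, n0 = 1
print(upper_bound_holds(f, g, c=5, n0=1))   # True:  c = 5 also works, as shown above
print(upper_bound_holds(f, g, c=1, n0=1))   # False: c = 1 is too small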
b) Omega Notation (Ω)
Definition: Describes a lower bound of the algorithm's growth rate. It represents the
best-case scenario or the minimum amount of work done by the algorithm.
Example: An algorithm with time complexity of Ω(n) means that the algorithm takes
at least linear time in the best case.
Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound. Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed previously, the best-case performance of an algorithm is generally not very useful, so the Omega notation is the least used of the three.
For a given function g(n), we denote by Ω(g(n)) the following set of functions:
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c×g(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information, as we are generally interested in the worst case and sometimes in the average case.
Example :
f(n) = 2n + 3
2n + 3 ≥ n ∀ n ≥ 1
Here, c=1, n0=1, g(n)=n
=> f(n) = Ω(n)
Also, f(n) = Ω(log n)
f(n) = Ω(√n)
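To connect the Ω(n) lower bound for Insertion Sort with observable behaviour, the sketch below counts key comparisons (one possible instrumentation, chosen only for illustration): an already-sorted input needs about n - 1 comparisons (linear, matching the Ω(n) best case), while a reverse-sorted input needs about n²/2 (the quadratic worst case).

# Sketch: count key comparisons in insertion sort to contrast the linear best case
# (already-sorted input) with the quadratic worst case (reverse-sorted input).

def insertion_sort_comparisons(arr):
    """Sort a copy of arr with insertion sort and return the number of comparisons."""
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 1000
print(insertion_sort_comparisons(list(range(n))))          # 999    (~n, best case)
print(insertion_sort_comparisons(list(range(n, 0, -1))))   # 499500 (~n^2/2, worst case)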
c) Theta Notation (Θ)
Definition: Provides a tight bound (both upper and lower) for the algorithm's growth
rate. If an algorithm is Θ(f(n)), it means its time or space complexity grows exactly at
the rate of f(n) for large n.
Example: If an algorithm has a time complexity of Θ(n log n), it means the
performance is bounded above and below by n log n as n increases.
The Theta notation bounds a function from above and below, so it defines the exact asymptotic behaviour. A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constant. For example, consider the following expression:
3n³ + 6n² + 6000 = Θ(n³)
Dropping lower-order terms is always fine because there will always be an n0 beyond which the n³ term dominates the lower-order terms, irrespective of the constants involved. For a given function g(n), we denote by Θ(g(n)) the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1×g(n) ≤ f(n) ≤ c2×g(n) for
all n ≥ n0}
The above definition means that if f(n) is Theta of g(n), then the value of f(n) always lies between c1×g(n) and c2×g(n) for large values of n (n ≥ n0). The definition of Theta also requires that f(n) be non-negative for all n ≥ n0.
Example :
f(n) = 2n + 3
1 * n ≤ 2n + 3 ≤ 5n ∀ n ≥ 1
Here, c1=1, c2 = 5, n0=1, g(n)=n
=> f(n) = Θ(n)
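Analogously to the Big O check earlier, the sketch below numerically verifies both Theta witnesses, c1 = 1 and c2 = 5, for f(n) = 2n + 3 and g(n) = n over a finite range (again evidence rather than a proof; names are illustrative).

# Sketch: check the two Theta witnesses c1, c2 over a finite range of n.

def theta_bounds_hold(f, g, c1, c2, n0, n_max=10_000):
    """True if c1*g(n) <= f(n) <= c2*g(n) for every integer n in [n0, n_max]."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

f = lambda n: 2 * n + 3    # f(n) = 2n + 3
g = lambda n: n            # g(n) = n

print(theta_bounds_hold(f, g, c1=1, c2=5, n0=1))  # True:  f(n) = Theta(n)
print(theta_bounds_hold(f, g, c1=3, c2=5, n0=1))  # False: 3n exceeds 2n + 3 once n > 3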
4. Types of Complexities
a) Worst-Case Complexity
This represents the maximum number of operations that the algorithm will need to
perform, given the worst possible input.
Example: In a sorting algorithm, the worst-case complexity for Bubble Sort is O(n²).
b) Best-Case Complexity
This represents the minimum number of operations the algorithm needs to perform,
assuming the best-case input.
Example: The best case for Merge Sort is still O(n log n), even if the input is already sorted.
c) Average-Case Complexity
This represents the expected number of operations the algorithm will perform,
averaged over all possible inputs.
Example: In Quick Sort, the average-case time complexity is O(n log n).
5. Common Time Complexities
Here are some common time complexities and their implications:
O(1) (constant time): The algorithm takes the same time regardless of input size.
O(log n) (logarithmic time): Common in divide-and-conquer algorithms like binary search.
O(n) (linear time): The algorithm's running time grows linearly with the input size.
O(n log n) (log-linear time): Common in efficient sorting algorithms like Merge Sort and Quick Sort.
O(n²) (quadratic time): Common in algorithms with nested loops (e.g., Bubble Sort).
O(2ⁿ) (exponential time): Algorithms with exponential complexity are usually infeasible for large n (e.g., recursive Fibonacci).
O(n!) (factorial time): Very inefficient; occurs in algorithms like brute-force permutations.
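To make the differences between these growth rates concrete, the short sketch below prints the value of each function for a few input sizes (factorial is omitted because its values overflow floating point almost immediately); it is purely illustrative.

import math

# Sketch: compare common growth functions for a few input sizes.
for n in (10, 100, 1000):
    print(f"n={n:>5}: log n={math.log2(n):6.1f}  n log n={n * math.log2(n):10.0f}  "
          f"n^2={n**2:8d}  2^n={2**n:.3e}")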
6. Space Complexity
Just like time complexity, space complexity represents the amount of extra memory the
algorithm needs as the input size increases.
Common Space Complexities
O(1): Constant space. The algorithm uses a fixed amount of space regardless of input
size.
O(n): Linear space. The algorithm uses space proportional to the input size.
O(n²): Quadratic space. The algorithm uses space proportional to the square of the
input size.
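A minimal sketch contrasting constant and linear extra space (function names are illustrative): running_total keeps only a single accumulator regardless of input size, while prefix_sums builds a new list with one entry per input element.

# Sketch: constant vs. linear extra space (illustrative names).

def running_total(values):
    """O(1) extra space: a single accumulator, regardless of len(values)."""
    total = 0
    for v in values:
        total += v
    return total

def prefix_sums(values):
    """O(n) extra space: builds a new list with one entry per input element."""
    sums = []
    total = 0
    for v in values:
        total += v
        sums.append(total)
    return sums

print(running_total([1, 2, 3, 4]))  # 10
print(prefix_sums([1, 2, 3, 4]))    # [1, 3, 6, 10]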
7. Analyzing Algorithms Using Asymptotic Notation
Example 1: Linear Search
Description: Search for an element in an array by checking each element in turn (sketched below).
Time Complexity:
o Worst case: O(n) (if the element is at the end or not present).
o Best case: O(1) (if the element is the first one).
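A minimal linear search in Python: it scans at most n elements, which gives the O(n) worst case and O(1) best case listed above.

def linear_search(arr, target):
    """Scan the array left to right; O(n) worst case, O(1) best case."""
    for i, value in enumerate(arr):
        if value == target:
            return i          # best case: found at the first position
    return -1                 # worst case: scanned all n elements

print(linear_search([7, 3, 9, 3], 9))   # 2
print(linear_search([7, 3, 9, 3], 5))   # -1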
Example 2: Binary Search
Description: Search for an element in a sorted array by repeatedly dividing the search interval in half (sketched below).
Time Complexity:
o Worst case: O(log n) (every step halves the size of the search space).
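A minimal iterative binary search (the input array must already be sorted): each iteration halves the search interval, giving the O(log n) worst case.

def binary_search(sorted_arr, target):
    """Repeatedly halve the search interval; O(log n) worst case."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1      # discard the left half
        else:
            hi = mid - 1      # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1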
Example 3: Merge Sort
Description: Divide the array into two halves, recursively sort each half, and merge the sorted halves (sketched below).
Time Complexity:
o Worst, Best, and Average case: O(n log n).
Space Complexity: O(n) (due to auxiliary space used during merge).
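A minimal merge sort sketch: the array is split in half, each half is sorted recursively, and the sorted halves are merged; the merge step builds an auxiliary list, which is the source of the O(n) extra space noted above.

def merge_sort(arr):
    """Divide, recursively sort, and merge; O(n log n) time, O(n) auxiliary space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into an auxiliary list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]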
Example 4: Quick Sort
Description: Partition the array around a pivot and recursively sort each partition (sketched below).
Time Complexity:
o Worst case: O(n²) (when the chosen pivots repeatedly produce highly unbalanced partitions, e.g., an already-sorted array with the last element as pivot).
o Average case: O(n log n).
o Best case: O(n log n).
Space Complexity: O(log n) (recursive stack space).
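A minimal in-place quick sort sketch using the last element as the pivot (one common choice; consistently poor pivots cause the O(n²) worst case mentioned above). It sorts in place, so the extra space is the O(log n) recursion stack on average.

def quick_sort(arr, lo=0, hi=None):
    """In-place quick sort; average O(n log n) time, worst case O(n^2)."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = _partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)     # sort elements left of the pivot
        quick_sort(arr, p + 1, hi)     # sort elements right of the pivot

def _partition(arr, lo, hi):
    """Lomuto partition around arr[hi]; returns the pivot's final index."""
    pivot = arr[hi]
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
    return i + 1

data = [5, 2, 9, 1, 5, 6]
quick_sort(data)
print(data)   # [1, 2, 5, 5, 6, 9]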
For recursive algorithms such as Merge Sort and Quick Sort, the running time can be expressed as a recurrence of the form T(n) = a×T(n/b) + f(n); the Master Theorem then gives the solution by comparing f(n) with n^(log_b a).