Chapter 3 – Basics of Algorithm Analysis
By Dr. S. Sridhar
Basics of Algorithm Complexity
• Algorithm complexity theory is a branch of algorithm study that deals
with the analysis of algorithms in terms of computational resources
such as time and space
• An efficient algorithm utilizes few computer resources in terms of
time and space
• Computability theory, in contrast, deals with the theoretical question of whether a problem is solvable at all
• A statement such as ‘algorithm A takes 0.366 seconds on machine B’ is not a meaningful measure of efficiency
• Absolute running times cannot determine the efficiency of an algorithm
Basics of Algorithm Complexity – Cont’d
• The speed of an algorithm is dependent on many factors such as
• machine speed,
• programming language,
• efficiency of compilers that deal with the language translation,
• input size, and
• data structures used.
• Quality of an algorithm depends on factors such as its ease of
implementation, style, readability, understandability, and simplicity
• An analysis of the effective complexity of an algorithm is required for the
following reasons:
• To estimate the efficiency of any given algorithm by predicting the algorithm
behaviour
• To provide a framework for comparing algorithms or solutions for the given problem
Measures of Algorithm Analysis
• Time complexity is more important than space complexity
• due to the declining cost of primary and secondary memory devices
• The implementation of a program involves two time factors
• Run-time: also called execution time
• Compile-time
• Run-time is expressed in terms of CPU cycles rather than in general SI time units
• Time efficiency or time complexity is a measure of how much run-time an
algorithm requires for execution when the input size is scaled
• Scaling the input means increasing the input size
• A variable N or n is usually reserved to represent the input size of an
algorithm
• Time complexity is always expressed as a function of the input size
• Input size is the length of the input of an algorithm
Introduction to Time Complexity
• To determine the run-time function t(n) or T(n), two decisions are
important
• Select/identify an appropriate computational model
• Find a suitable measuring unit to characterize the algorithm complexity
• Complexity of an algorithm should be independent of any machine
• With a suitable computational model such as the RAM (random access machine), time complexity analysis of an
algorithm can be performed in the following two ways:
• Mathematical Analysis
• Empirical Analysis
Mathematical Analysis vs Empirical Analysis
Analysis of Iterative Algorithm
• Steps involved in carrying out mathematical analysis for the iterative
(or non-recursive) algorithm framework are:
1. Measure the input size, that is, the length of the input data
2. Measure the running time using either step count or operation
count
3. In the case of operation count, find the worst-, best-, and average-
case efficiencies of the algorithm.
4. Identify the rate of growth.
Measuring Input Size
• The input size (denoted by n or N ) is defined formally as the number
of binary bits used to represent the input data
• The efficiency of an algorithm is always expressed as a function of the
input size
• The analysis procedure itself is independent of the actual data; the efficiency expression, however, is a function of the input size
Usual Inputs to Algorithms
• Integer: Integers are typical algorithm inputs
• The common practice is to use the number of bits required to store an input
• Note: the logarithm function is used because an integer n requires about log₂ n bits (see the sketch after this list)
• Matrices: For algorithms involving matrices, the order (dimensions) of the matrix can be used as the input size
• Arrays: For algorithms involving arrays (or sets or tuples) of many
elements, the order of the array can be used as the input size
• Graphs: For algorithms involving graphs, either the number of edges
or the number of vertices, or both can be used as the input size of the
graph
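As an illustrative sketch (an assumed example, not from the book), the bit-based input size of an integer can be obtained directly in Python:

n = 1_000_000                   # example integer input (assumed)
input_size = n.bit_length()     # number of bits = floor(log2(n)) + 1 for n > 0
print(input_size)               # prints 20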
Method of Measuring Running Time
Step Count
• The idea is to count the number of instructions or steps that are used
by a given algorithm
• The first step is to find steps per execution and frequency (or number
of times) of statements
• The number of steps per execution is 0 for non-executable statements
and generally 1 for executable statements
• This number varies only when a function is invoked or composite structures
are processed
• If a function is invoked, then the step count depends on the size of
the function
Step Count – Specific Ways of Measuring
• Declarative statements with no initializations have a step count of 0
• If an initialization is made, the step count is 1
• Comments and brackets such as begin/end/end if/end while/end for all
have a step count of 0
• Expressions have a step count of 1
• Assignment statement, function invocation, return statements, and other
statements such as break/continue/go to all have a step count of 1
• Advantage: Evaluation of time complexity using step count is easy
• Disadvantage: The step count does not take the nature or size of the operands into account
Example: (Step Count)
Consider the following algorithm segment:
Find the step count of this algorithm segment
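The original segment is not reproduced in this text, so the following is an illustrative sketch (an assumed segment, not the book's): a loop that sums the first n integers, annotated with steps per execution and frequency.

n = 5                                # example input size (assumed)
total = 0                            # steps/execution: 1, frequency: 1      -> 1
for i in range(1, n + 1):            # steps/execution: 1, frequency: n + 1  -> n + 1
    total = total + i                # steps/execution: 1, frequency: n      -> n
# Total step count: 1 + (n + 1) + n = 2n + 2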
Operation Count
• Operations can be divided into two categories:
• elementary (or basic)
• non-elementary (non-basic) operations
• Elementary (or basic) operations are primitive operations that are implemented directly with little effort
• Assignment operations
• Comparison operations
• Arithmetic operations
• Logical operations
• Non-elementary (or non-basic) operations are not atomic in nature and
involve many elementary operations
• Sorting
• Maximum or Minimum of an array
Steps in Performing an Operation Count
• Count the number of basic operations of a program and express it as
a formula
• Simplify the formula
• Represent time complexity as a function of operation count
Operation Count of a Sequence S
Operation Count of a Selection S
Operation Count of a Repetition S
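Since the detailed tables are not reproduced in this text, the standard counting rules for the three control structures can be summarised as follows, where t(S) denotes the operation count of statement S and the loop header is conventionally counted n + 1 times:

t(S1; S2) = t(S1) + t(S2)                                 (sequence)
t(if C then S1 else S2) = t(C) + max(t(S1), t(S2))        (selection, worst case)
t(for i = 1 to n do S) = (n + 1) + n × t(S)               (repetition)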
Example: Operation Count
Consider the following algorithm segment and perform complexity analysis using operation count
Solution:
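The original segment and its solution are not reproduced here; as an illustrative sketch (assumed, not the book's), consider finding the maximum of an array of n elements and counting the comparison as the basic operation:

a = [7, 3, 9, 4]                 # example input array (assumed)
n = len(a)
maximum = a[0]                   # 1 assignment
for i in range(1, n):            # loop body runs n - 1 times
    if a[i] > maximum:           # basic operation: n - 1 comparisons
        maximum = a[i]           # 0 to n - 1 assignments, depending on the input
# Operation count (comparisons): t(n) = n - 1, which grows linearly with n.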
Example: Operation Count
• Let us assume that the analysis of an algorithm yields t(n) as follows:
  t(n) = Σ_{i=1}^{n} 6i(i + 1)
• What is the estimate of the total number of operations?
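A worked simplification using the standard summation formulas (this derivation is supplied here, not reproduced from the slides):

t(n) = Σ_{i=1}^{n} 6i(i + 1)
     = 6 [ Σ_{i=1}^{n} i² + Σ_{i=1}^{n} i ]
     = 6 [ n(n + 1)(2n + 1)/6 + n(n + 1)/2 ]
     = n(n + 1)(2n + 1) + 3n(n + 1)
     = 2n(n + 1)(n + 2) ≈ 2n³ for large n

So the total number of operations grows cubically with the input size.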
Worst-case Complexity and Upper Bound
• The worst-case complexity is a pessimistic analysis assuming the worst
scenario
• The worst-case complexity w(n) is a complexity function of the worst-case
input
• On a worst-case input, the algorithm takes the maximum time and runs the slowest
• The algorithm needs to perform the maximum number of steps or operations to process the input and accomplish the task
• Worst-case complexity of an algorithm gives us an upper bound on the
algorithm
• The upper bound of an algorithm is crucial, as it measures the maximum
computational effort required to solve a given problem
Best-Case Complexity and Lower Bound
• The best-case complexity is the minimum computation time of the algorithm over all instances of the input domain
• Best cases are rare in real-world problems and rarely reflect the efficiency
of an algorithm
• Represents the ideal or lower bound of an algorithm
• Algorithm requires a minimum amount of effort
• The lower bound is a useful measure in algorithm study and is applied to
the problem, not the algorithm
• If a problem can be solved by k possible algorithms, the lower bound of the problem is obtained by considering all k of them
• Difficult to compute the lower bound of an algorithm
Average-Case Complexity
• The average-case analysis assumes that the input is random
• provides a prediction about the running time of an algorithm for a
random input
• The average-case complexity (A(n)) is neither best nor worst
• Indicates the average performance of an algorithm
Example: Find the best, worst, and average case complexity of linear search
• A linear search problem is given an array of numbers and a target key
• Problem is about checking the presence or absence of the target key
• The worst case of linear search occurs when the item is either present as the last element or not present in the list at all
• Worst-case complexity of this algorithm can be given as T(n) = n
• The best case of linear search occurs when the item is present as the
first element of the list
• T(n) = 1
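A minimal Python sketch of linear search (illustrative; not the book's listing), with the best and worst cases marked in the comments:

def linear_search(a, key):
    # Scan the list from left to right and return the index of key, or -1.
    for i, value in enumerate(a):
        if value == key:          # best case: key is a[0] -> 1 comparison, T(n) = 1
            return i
    return -1                     # worst case: key is last or absent -> n comparisons, T(n) = n

print(linear_search([4, 8, 15, 16], 15))   # prints 2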
Example: Linear Search – Cont’d
• The average-case complexity for a linear search problem can be
calculated as follows:
• The item can be present anywhere in the list
• The probability that the item is at any particular position in the array is the same for all positions: p = 1/n
• Number of comparisons required is as follows
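Under the usual assumption that the key is present and each of the n positions is equally likely, the expected number of comparisons is

A(n) = Σ_{i=1}^{n} i × (1/n) = (1/n) × n(n + 1)/2 = (n + 1)/2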
Rate of Growth
• Primary focus of the rate of growth is to find the behaviour of an
algorithm
• The rate of growth of an algorithm is the rate at which its running time increases as a function of the input size n
• The rate of growth is used to:
• Measure the complexity of an algorithm t(n) for larger values of n
• Compare it with the collection of functions that have the same growth rate,
so that the algorithm can be classified and ranked
Measuring Larger Inputs
• Rate of growth is helpful in understanding the behaviour of the
algorithms
Example: Measuring Larger Inputs
• Consider three algorithms A, B, and C with running times T_A(n) = 7n, T_B(n) = 7 log₁₀ n, and T_C(n) = n², respectively. Compare their rates of growth when the input is scaled from n = 100 to n = 10,000.
Solution
• T_A(10,000) / T_A(100) = (7 × 10,000) / (7 × 100) = 100
• T_B(10,000) / T_B(100) = (7 × log₁₀ 10,000) / (7 × log₁₀ 100) = 4 / 2 = 2
• T_C(10,000) / T_C(100) = 10,000² / 100² = 10,000
• The second algorithm is the best, since its running time grows the slowest
Example: Rate of Growth
• Let us consider four algorithms with the following time complexities:
  T_1(n) = n, T_2(n) = log₂ n, T_3(n) = n², T_4(n) = 2ⁿ
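A small Python sketch (illustrative, not from the book) that tabulates the four functions for a few input sizes makes the difference in growth rates visible:

import math

# Tabulate the four growth functions for a few sample input sizes.
growth = {
    "T1(n) = n":      lambda n: n,
    "T2(n) = log2 n": lambda n: math.log2(n),
    "T3(n) = n^2":    lambda n: n ** 2,
    "T4(n) = 2^n":    lambda n: 2 ** n,
}

for n in (10, 20, 30):
    values = ", ".join(f"{name} = {f(n):g}" for name, f in growth.items())
    print(f"n = {n}: {values}")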
Algorithm Classes
Asymptotic Analysis
• Asymptotic analysis is the theory of approximation
• Asymptotic analysis is a very old discipline in mathematics; an asymptotic expression is basically an approximation
• An asymptote of a curve is a line that the curve approaches ever more closely
• but never actually touches at any finite point
• Time complexity can be accurate only when many parameters that
are dependent on the machine are included
• These additional machine dependent parameters affect the generic
nature of the algorithm
History of Asymptotic Notations
Example: Asymptotic Analysis
Let there be the following two algorithms:
1. Algorithm A with running time t(n) = n² + n
2. Algorithm B with running time t(n) = n²
Assume that each algorithm takes one second to complete the task for n = 100. How long will each take for n = 1,000?
Solution: The time computations for Algorithms A and B are
• Time_A = 1 × (1,000² + 1,000) / (100² + 100) ≈ 99.1 seconds
• Time_B = 1 × 1,000² / 100² = 100 seconds
• It can be observed that there is no significant difference between Time_A and Time_B; the lower-order term n contributes little for large n, which is why asymptotic analysis ignores it
O-Notations (Big-oh Notations)
• Expresses the upper bound or the worst-case complexity of an
algorithm
• Mathematically,
• t(n) = O(g(n)) if there exists a constant c > 0 such that t(n) ≤ c × g(n) for all sufficiently large n
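For instance (an illustrative example, not from the slides):
  3n + 2 ≤ 4n for all n ≥ 2, so choosing c = 4 shows that 3n + 2 = O(n)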
Ω -Notations (Big-Omega Notations)
• Expresses the lower bound or the best-case complexity of an
algorithm
• Mathematically,
• t(n) = Ω(g(n)) if there exists a constant c > 0 such that t(n) ≥ c × g(n) for all sufficiently large n
θ-Notations (Big-Theta Notations)
• Expresses a tight (exact-order) bound of an algorithm; it is commonly used to describe the average-case complexity
• Mathematically,
• t(n) = θ(g(n)) if there exist constants c₁, c₂ > 0 such that c₁ × g(n) ≤ t(n) ≤ c₂ × g(n) for all sufficiently large n
o-notation (Little-oh Notation)
• Expresses the loose upper bound of an algorithm
• Mathematically,
• t(n) = o(g(n)) if t(n) < c × g(n) for every constant c > 0 and all sufficiently large n
𝛚 -notation (Little-omega Notation)
• Expresses the loose lower bound of an algorithm
• Mathematically,
• t(n) = ω(g(n)) if t(n) > c × g(n) for every constant c > 0 and all sufficiently large n
Big-oh Notation vs. Little-oh Notation
Asymptotic Notations
Limits and Asymptotic Notations
lim_{n→∞} t(n) / g(n) = c
If lim_{n→∞} t(n) / g(n) ≠ ∞, then t(n) = O(g(n))
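As an illustrative application of this rule (not from the slides):
  lim_{n→∞} n / n² = lim_{n→∞} 1/n = 0 ≠ ∞, hence n = O(n²); since the limit is 0, in fact n = o(n²)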
Algorithm Complexity Classes
• Algorithm A that grows slower than
algorithm B can be written as A ≺ B.
• This can also be written as A ⊂ B
using set notation
O(1) ⊂ O(log n) ⊂ O(n) ⊂ O(n log n) ⊂ O(n²) ⊂ O(n³) ⊂ O(nᵏ) ⊂ O(2ⁿ) ⊂ O(n!)
Empirical Analysis and Algorithm Visualization
• Efficiency analysis of a larger program using mathematical analysis is
difficult
• Algorithm efficiency of such programs can be analysed using empirical
analysis or a posteriori analysis
• Empirical analysis is useful for carrying out various functions such as
estimating the time for a larger algorithm and for performing comparison
between two or more algorithms
• Asymptotic analysis ignores machine-dependent parameters and thus is
independent of the machine
• Empirical analysis, on the other hand, takes the machine-dependent constants into account and can be used to tune algorithm parameters toward an optimal solution
• Empirical analysis is useful for verifying the results of the a priori analysis
Empirical Analysis
• The steps of empirical analysis are:
• Determine the experimental purpose and decide on efficiency metric
• Generate a sample input dataset
• Analyse the results of an algorithm using statistical tests
• The aim of empirical analysis is to run multiple algorithms on the
same set of data
• Determine the efficient algorithm based on the counts of basic
operations, memory references, comparisons, or any other
operations
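A minimal sketch of such an experiment (an assumed example; sum_loop and sum_formula are hypothetical functions, not from the book), using the standard-library timeit module introduced on the next slide:

import timeit

def sum_loop(n):
    # Sum the first n integers with an explicit loop (linear time).
    total = 0
    for i in range(n):
        total += i
    return total

def sum_formula(n):
    # Sum the first n integers with the closed-form formula (constant time).
    return n * (n - 1) // 2

n = 10_000
t_loop = timeit.timeit(lambda: sum_loop(n), number=1_000)
t_formula = timeit.timeit(lambda: sum_formula(n), number=1_000)
print(f"loop: {t_loop:.4f} s, formula: {t_formula:.4f} s")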
Usage of Timeit in Python
>>> import timeit
>>> statement = '''
... a = 5
... b = 5
... power = a ** b
... '''
>>> print("Time taken for executing " + statement + "\n is ",
...       timeit.timeit(stmt=statement))
Usage of Timeit in Python – Output
Time taken for executing
a=5
b=5
power = a ** b
is 0.5711772000067867
Profiling in Python
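A minimal profiling sketch using the standard-library cProfile module (illustrative; the slide's own example is not reproduced here):

import cProfile

def build_squares(n):
    # A small workload to profile: build a list of the first n squares.
    return [i * i for i in range(n)]

# Run the call under the profiler and print per-function timing statistics.
cProfile.run("build_squares(100_000)")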