Analysis of Algorithm Efficiency
Presented by
Dr. R. Vasanthi, M.E., Ph.D.
Professor and Head/CSE
Tagore Institute of Engineering and Technology
Analysis of algorithms
• Issues:
– correctness
– time efficiency
– space efficiency
– optimality
• Approaches:
– theoretical analysis
– empirical analysis
Theoretical analysis of time efficiency
Time efficiency is analyzed by determining the
number of repetitions of the basic operation as
a function of input size
• Basic operation: the operation that contributes
most towards the running time of the algorithm
T(n) ≈ c_op × C(n)
where T(n) is the running time, c_op is the execution time of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.
Analysis of Algorithms
• An investigation of an algorithm’s efficiency
with respect to two resources:
– Running time (time efficiency or time
complexity)
– Memory space (space efficiency or space
complexity)
Analysis Framework
• Measuring an input’s size
• Measuring Running Time
• Orders of Growth (of algorithm’s efficiency
function)
• Worst-case, Best-case, and Average-case
efficiency
Measuring Input Size
• An algorithm's efficiency is measured as a function of a parameter n indicating the size of its input
• Examples: number of elements in a list for searching or sorting; matrix dimension n for matrix operations; for algorithms working on numbers, the number of bits in the input
Units of measuring running time
• Measure running time in milliseconds, seconds,
etc.
– Depends on which computer
• Count the number of times each operation is
executed
– Difficult and unnecessary
• Count the number of times an algorithm’s “basic
operation” is executed
Measure running time in terms of # of basic
operations
• Basic operation: the operation that
contributes the most to the total running time
of an algorithm
• Usually the most time consuming operation in
the algorithm’s innermost loop
Input size and basic operation
examples
Problem                           Measure of input size           Basic operation
Search for a key in a list        # of items in the list          Key comparison
  of n items
Add two n×n matrices              Dimensions of the matrices, n   Addition
Polynomial evaluation             Order of the polynomial         Multiplication
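For instance, Horner's rule evaluates a polynomial of degree n with exactly n multiplications, the basic operation named in the table above. A minimal Python sketch (not from the slides; the counter is added for illustration):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule.

    coeffs[0] is the leading coefficient, so the degree is
    n = len(coeffs) - 1.  Performs exactly n multiplications,
    the basic operation for polynomial evaluation.
    """
    result = coeffs[0]
    mults = 0                    # counts the basic operation
    for c in coeffs[1:]:
        result = result * x + c  # one multiplication per remaining coefficient
        mults += 1
    return result, mults

# 2x^2 + 3x + 1 at x = 4: value 45, exactly n = 2 multiplications
value, count = horner([2, 3, 1], 4)
```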
Theoretical Analysis of Time
Efficiency
• Count the number of times the algorithm’s basic
operation is executed on inputs of size n: C(n)
T(n) ≈ c_op × C(n)
where T(n) is the running time, c_op is the execution time of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.
• Ignore c_op; focus on the order of growth of C(n)
Orders of Growth
• Why do we care about the order of growth of an
algorithm’s efficiency function, i.e., the total
number of basic operations?
Example: Euclid's algorithm vs. consecutive integer checking (# of divisions)

Input                               Euclid's    Consecutive integer checking
gcd(60, 24)                         3           13
gcd(31415, 14142)                   10          14142
gcd(218922995834555169026,
    135301852344706746049)          97          > 10²⁰
How fast efficiency function grows
as n gets larger and larger…
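The contrast in the table can be reproduced by counting each algorithm's basic operation. A sketch (the counters are added for illustration):

```python
def gcd_euclid(m, n):
    """Euclid's algorithm; counts divisions, its basic operation."""
    divisions = 0
    while n != 0:
        m, n = n, m % n
        divisions += 1
    return m, divisions

def gcd_consecutive(m, n):
    """Consecutive integer checking; counts candidates tried."""
    t = min(m, n)
    checks = 0
    while True:
        checks += 1
        if m % t == 0 and n % t == 0:
            return t, checks
        t -= 1

# gcd(60, 24): Euclid needs only a couple of divisions, while
# consecutive checking tries t = 24, 23, ..., 12 -- thirteen candidates.
```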
Orders of Growth
Values of some important functions as n → ∞
Orders of Growth (contd.)
• Plots of growth…
• Consider only the leading term
• Ignore the constant coefficients
Worst, Best, Average Cases
• Efficiency depends on input size n
• For some algorithms, efficiency depends on
the type of input
• Example: Sequential Search
– Given a list of n elements and a search key k,
find if k is in the list
– Scan list, compare elements with k until either
found a match (success), or list is exhausted
(failure)
Sequential Search Algorithm
ALGORITHM SequentialSearch(A[0..n-1], k)
//Input: An array A[0..n-1] and a search key k
//Output: Index of the first match, or -1 if there is no match
i ← 0
while i < n and A[i] ≠ k do
    i ← i + 1
if i < n
    return i    // A[i] = k
else
    return -1
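A direct Python translation of the pseudocode, usable for experimenting with best-, worst-, and average-case inputs:

```python
def sequential_search(a, k):
    """Return the index of the first occurrence of k in a, or -1.

    Each loop iteration performs one key comparison (a[i] != k),
    the algorithm's basic operation.
    """
    i = 0
    while i < len(a) and a[i] != k:
        i += 1
    return i if i < len(a) else -1

# Best case: key in first position (1 comparison).
# Worst case: key in last position or absent (n comparisons).
```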
Different cases
• Worst case efficiency
– Efficiency (# of times the basic op. will be executed)
for the worst case input of size n
– Runs longest among all possible inputs of size n
• Best case efficiency
– Runs fastest among all possible inputs of size n
• Average case efficiency
– Efficiency for a typical/random input of size n
– NOT the average of worst and best cases
– How do we find average case efficiency?
Average Case of Sequential Search
• Two assumptions:
– Probability of successful search is p (0 ≤ p ≤ 1)
– Search key can be at any index with equal
probability (uniform distribution)
Cavg(n) = Expected # of comparisons
        = (Expected # of comparisons for a successful search) + (Expected # of comparisons when k is not in the list)
        = p × (n + 1)/2 + n × (1 − p)
(A successful search finding the key at position i takes i comparisons, each position having probability p/n; an unsuccessful search takes n comparisons with probability 1 − p.)
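The closed form can be checked against the expectation computed directly from the two assumptions; a small sketch:

```python
def c_avg(n, p):
    """Closed-form average number of comparisons for sequential search:
    p * (n + 1) / 2 for success plus n * (1 - p) for failure."""
    return p * (n + 1) / 2 + n * (1 - p)

def c_avg_by_enumeration(n, p):
    """Same expectation computed term by term: the key sits at
    position i (i comparisons) with probability p / n each."""
    success = sum((p / n) * i for i in range(1, n + 1))
    failure = (1 - p) * n
    return success + failure

# With p = 1 (key always present) the average is (n + 1) / 2;
# with p = 0 (key never present) it is n.
```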
Summary of Analysis Framework
• Time and space efficiencies are functions of input size
• Time efficiency is # of times basic operation is executed
• Space efficiency is # of extra memory units consumed
• Efficiencies of some algorithms depend on type of input:
requiring worst, best, average case analysis
• Focus is on order of growth of running time (or extra
memory units) as input size goes to infinity
Asymptotic order of growth
A way of comparing functions that ignores constant factors and small input sizes:
• Big-oh, O(g(n)): the class of functions t(n) that grow no faster than g(n)
• Big-theta, Θ(g(n)): the class of functions t(n) that grow at the same rate as g(n)
• Big-omega, Ω(g(n)): the class of functions t(n) that grow at least as fast as g(n)
O(big oh)-Notation
[Figure: t(n) and c × g(n) plotted against n; for all n ≥ n0, t(n) lies on or below c × g(n), so t(n) ∈ O(g(n)). Values of n below n0 don't matter.]
O-Notation (contd.)
• Definition: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some positive constant multiple of g(n) for all sufficiently large n.
• That is, if we can find positive constants c and n0 such that
t(n) ≤ c × g(n) for all n ≥ n0
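A quick numeric check of the definition with illustrative witnesses (t, g, c, and n0 below are example choices, not from the slides): t(n) = 3n + 5 is in O(n), since 3n + 5 ≤ 4n exactly when n ≥ 5.

```python
# Witnesses for t(n) = 3n + 5 being in O(n): c = 4, n0 = 5.
t = lambda n: 3 * n + 5
g = lambda n: n
c, n0 = 4, 5

# The definition's inequality holds for every n >= n0 we try:
assert all(t(n) <= c * g(n) for n in range(n0, 1000))
# ...but fails just below n0, which is why n0 matters:
assert t(n0 - 1) > c * g(n0 - 1)
```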
Ω(big omega)-Notation
[Figure: t(n) and c × g(n) plotted against n; for all n ≥ n0, t(n) lies on or above c × g(n), so t(n) ∈ Ω(g(n)). Values of n below n0 don't matter.]
Ω-Notation (contd.)
• Definition: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all sufficiently large n.
• That is, if we can find positive constants c and n0 such that
t(n) ≥ c × g(n) for all n ≥ n0
Θ(big theta)-Notation
[Figure: c1 × g(n), t(n), and c2 × g(n) plotted against n; for all n ≥ n0, t(n) lies between c2 × g(n) and c1 × g(n), so t(n) ∈ Θ(g(n)). Values of n below n0 don't matter.]
Θ-Notation (contd.)
• Definition: A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by positive constant multiples of g(n) for all sufficiently large n.
• That is, if we can find positive constants c1, c2, and n0 such that
c2 × g(n) ≤ t(n) ≤ c1 × g(n) for all n ≥ n0
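As a concrete check (the witnesses below are example choices, not from the slides): t(n) = n(n − 1)/2, the comparison count of selection sort, is in Θ(n²) with c2 = 1/4, c1 = 1/2, n0 = 2.

```python
# t(n) = n(n - 1)/2 is in Theta(n^2):
# upper bound: n(n - 1)/2 <= n^2 / 2 for all n,
# lower bound: n(n - 1)/2 >= n^2 / 4 whenever n >= 2.
t = lambda n: n * (n - 1) / 2
g = lambda n: n * n
c1, c2, n0 = 0.5, 0.25, 2

assert all(c2 * g(n) <= t(n) <= c1 * g(n) for n in range(n0, 1000))
```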


Editor's Notes

  • #1 Efficiency analysis (also called complexity analysis) is mainly used to compare algorithms in order to establish which is more efficient, i.e., which uses fewer computing resources. Usually the amount of computing resources depends on the dimension of the input data, also called the input's size (or the problem's size). Analysis of algorithms studies an algorithm's efficiency with respect to two resources: time and space.
  • #2 The amount of time required to execute an algorithm is its time efficiency; the amount of space required is its space efficiency. Correctness refers to the input-output behaviour of the algorithm (i.e., for each input it produces the expected output). An empirical approach is based on some kind of observation; a theoretical approach is based on an existing theory.
  • #19 Asymptotic notation is used to make meaningful statements about the efficiency of a program. Three notations are used to compare orders of growth of an algorithm's basic-operation count.