Introduction to Design and Analysis of Algorithms
Design and Analysis of Algorithms
Introduction
 Design-
 the way in which something is planned and made or
arranged.
 Analysis –
 the careful examination of the different parts or
details of something.
 Algorithm –
 a set of rules that must be followed when solving a
particular problem.
Algorithm
 an algorithm is a set of well-defined
instructions to solve a particular problem. It
takes a set of input and produces a desired
output.
 add two numbers
 largest among three numbers
 find all the roots of the quadratic equation
 find the factorial
 check prime number
 Qualities of Good Algorithms
 Input and output should be defined precisely.
 Each step in the algorithm should be clear and
unambiguous.
 Algorithms should be most effective among many
different ways to solve a problem.
 An algorithm shouldn't include computer code.
Instead, the algorithm should be written in such a
way that it can be used in different programming
languages.
Need for an algorithm
 Grasping the fundamental concept of the
problem
 To comprehend the problem’s flow
 It provides the designer with a concise
definition of the problem’s needs and aim.
 Discover a solution to the situation
 Enhance the effectiveness of existing methods
 It is the easiest way to describe something
without going into too much detail about how it
works.
 evaluate and assess the complexity (in terms of
time and space) of input size problems without
developing and executing them
 determine the algorithm’s resource
requirements (memory, input-output cycles)
 assess the techniques’ performance in all
circumstances (best cases, worst cases, and
average cases)
 int main() { int a=0; a = 5<2 ? 4 : 3; printf("%d",a); return 0; }
 int main() { int a=0; a = printf("4"); printf("%d",a); return 0; }
 int main() { int a=0; a = 5>2 ? printf("4") : 3; printf("%d",a); return 0; }
 int main() { int a=0; a = (5>2) ? : 8; printf("%d",a); return 0; }
 int main() { if( 4 > 5 ) { printf("Hurray..\n"); } printf("Yes"); return 0; }
 int main() { if( 10 > 9 ) printf("Singapore\n"); else if(4%2 == 0) printf("England\n"); printf("Poland"); return 0; }
#include <stdio.h>

/* Reads integers until the k-th odd number (or the sentinel -1)
   is entered, and returns that number. */
int find_odd(int k)
{
    int n = 0, count = 0;      /* count of odd inputs seen so far */
    while (n != -1) {
        scanf("%d", &n);
        if (n % 2 == 1) {
            count++;
            if (count == k)
                return n;      /* the k-th odd number */
        }
    }
    return n;                  /* -1: fewer than k odd numbers entered */
}

int main(void)
{
    int k;
    scanf("%d", &k);
    printf("%d", find_odd(k));
    return 0;
}
NOTION OF AN ALGORITHM
 An algorithm is a sequence of unambiguous
instructions for solving a problem, i.e., for obtaining a
required output for any legitimate input in a finite
amount of time.
[Figure: a problem to be solved → algorithm → computer program, which transforms input into output]
 The notion of the algorithm illustrates some
important points:
 The non-ambiguity requirement for each step of an
algorithm cannot be compromised.
 The range of inputs for which an algorithm works has
to be specified carefully.
 The same algorithm can be represented in several
different ways.
 There may exist several algorithms for solving the
same problem.
 Algorithms for the same problem can be based on
very different ideas and can solve the problem with
dramatically different speeds.
Characteristics of an algorithm
 Input: Zero or more quantities are externally supplied.
 Output: At least one quantity is produced.
 Definiteness: Each instruction is clear and unambiguous.
 Finiteness: If the instructions of an algorithm are traced, then for all cases the algorithm must terminate after a finite number of steps.
 Efficiency: Every instruction must be very basic and run in a short time.
Steps for writing an algorithm
An algorithm is a procedure with two parts: the first part is the head and the second part is the body.
The head section consists of the keyword Algorithm and the name of the algorithm with its parameter list.
E.g. Algorithm name1(p1, p2, …, pn)
The head section also has the following comment lines:
//Problem Description:
//Input:
//Output:
The body of the algorithm contains the steps for solving the problem.
Example:
 The greatest common divisor(GCD) of two
nonnegative integers m and n (not-both-zero),
denoted gcd(m, n), is defined as the largest
integer that divides both m and n evenly,
i.e., with a remainder of zero.
 Euclid’s Algorithm
 Consecutive integer checking algorithm
 Middle school procedure
 Euclid’s algorithm is based on applying
repeatedly the equality gcd(m, n) = gcd(n, m
mod n), where m mod n is the remainder of
the division of m by n, until m mod n is equal
to 0. Since gcd(m, 0) = m, the last value of m is
also the greatest common divisor of the initial
m and n. gcd(60, 24) can be computed as
follows:gcd(60, 24) = gcd(24, 12) = gcd(12, 0) =
12.
Euclid’s algorithm for computing gcd(m, n)
in simple steps
 Step 1 If n = 0, return the value of m as the
answer and stop; otherwise, proceed to Step 2.
 Step 2 Divide m by n and assign the value of the
remainder to r.
 Step 3 Assign the value of n to m and the value of
r to n. Go to Step 1.
Euclid’s algorithm for computing gcd(m, n)
expressed in pseudocode
Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m ← n
n ← r
return m
 Why it stops: the value of n decreases on every iteration and becomes zero after a finite number of iterations
Consecutive integer checking algorithm
 t = min{m, n}
 Check whether t divides both m and n; if so, t is the answer
 If not, decrease t by 1 and repeat until it divides both values
 Example: GCD(60,24)
 Step 1: Assign the value of min{m,n} to t
 Step 2: Divide m by t. if the remainder is zero,
goto step 3; otherwise goto step 4
 Step 3: divide n by t. if remainder is zero,
return t as output; otherwise goto step 4
 Step 4: decrease the value of t by 1. goto step 2
Unlike Euclid’s, this algorithm does not work correctly when one of the inputs is zero, so it is necessary to specify the range of the algorithm’s inputs carefully
Middle school method
 gcd(60, 24)
 60 = 2 · 2 · 3 · 5
 24 = 2 · 2 · 2 · 3
 gcd(60, 24) = 2 · 2 · 3 = 12
 Try: gcd(20, 36)
Middle school method
 Step 1: find the prime factors of m
 Step 2: find the prime factors of n
 Step 3: identify all the common factors in the
two prime expansions found in step 1 and step
2 (if p is a common factor occurring pm and pn
times in m and n respectively, it should be
repeated min{pm,pn} times)
 Step 4: compute the product of all the common
factors and return it as GCD of the numbers
given
Generating prime numbers less than n
 Invented in ancient Greece – the sieve of Eratosthenes
 On every iteration it eliminates the multiples of the current number
FUNDAMENTALS OF ALGORITHMIC
PROBLEM SOLVING
 Understanding the Problem
 Careful examination of the problem
 Workout manually with some inputs and special cases
 Input- instances of algorithm
 Specify range of instances including boundary values
 The algorithm should work correctly for all legitimate inputs, not just most of them
 Decision making on:
 Ascertaining the Capabilities of the Computational Device
 RAM (random-access machine) model
 Earlier systems execute instructions sequentially – sequential algorithms
 Parallel algorithms – instructions executed concurrently
 Problems in scientific applications – independent of parameters like speed and cost – machine independent
 Practical applications – depend on the machine (speed, memory)
 Choosing between Exact and Approximate
Problem Solving
 Certain problems cannot be solved exactly – extracting square roots, solving non-linear equations, evaluating definite integrals
 Solving a problem exactly can be unacceptably slow because of the problem’s intrinsic complexity – e.g., the traveling salesman problem
 Deciding on Appropriate Data Structures
 Depending on the problem type, it may demand an ingenious data structure
 Algorithms + Data Structures = Programs
 Data structures are crucial for DAA
 Algorithm Design Techniques
 An Algorithm Design Technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing
 Existing techniques provide guidance for designing new algorithms
 Classify algorithms based on their design ideas and use them appropriately
 Methods of Specifying an Algorithm
 There are three ways to specify an algorithm. They are:
 Natural language
 Pseudocode
 Flowchart
 Natural Language
 Pseudocode
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.
Pseudocode Sum(a,b)
//Problem Description: This algorithm
performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c
 Flowchart
 Proving an Algorithm’s Correctness
 Correctness – the algorithm yields the required output for every legitimate input in a finite amount of time
 Euclid’s algorithm
 Proving correctness can be easy or complex depending on the problem
 Mathematical induction
 Tracing the algorithm on specific inputs cannot prove correctness conclusively
 To prove an algorithm incorrect – show that it does not work for at least one input
 If incorrect, redesign – the data structure or the design technique
 For approximation algorithms, correctness means the error does not exceed a predefined limit
 Analyzing an Algorithm
 Two types of efficiency
 Time – how fast the algorithm runs
 Space – how much extra memory required
 Desirable characteristics of an algorithm
 Simplicity – finding GCD(m,n)
 Simple algorithm – easy to understand and program – few
bugs
 Sometimes more efficient than complex alternative
 Generality
 Two issues
 Generality of the problem it solves – e.g., computing gcd(m, n) is more general than checking whether two numbers are relatively prime / solving general quadratic equations
 Range of inputs it accepts – for GCD, excluding 1 from the inputs would be unnatural
 Coding an Algorithm
 Algorithm → program (presents both a risk and an opportunity)
 Risk: coding it incorrectly or inefficiently – testing is an art rather than a science
 Verifying the full range of inputs
 Code optimization – can increase speed by 10-50%
 Empirical analysis for time efficiency
 Algorithm’s optimality – complexity of the problem –
minimum effort required to solve a problem
 Whether a problem can be solved or not – finding real roots of
quadratic equation with negative discriminant
IMPORTANT PROBLEM TYPES
 The most important problem types are:
 Sorting
 Arranging elements in some order
 Sorting numbers, alphabets
 Sorting student records – names/ roll nos/ marks – key
 sorting makes searching easier – eg: dictionary, telephone directories
 Sorting algorithm selection – programming time, execution time, memory
space
 Properties of sorting algorithms
 Stable – equal keys aren’t reordered (students with the same GPA stay sorted alphabetically)
 In-place – does not require extra memory
 Types
 Internal sorting – data sorted and retained in main memory
 bubble, selection, insertion, shell, quick and heap sorts
 External sorting – moving data from secondary storage to main memory for sorting
 merge sort
 Searching
 Finding a given value called search key in a given set
 Linear, binary, hashing
 tree-based – binary, balanced
 Used for storing and retrieving information from large
database
 Issues
 Frequently changing data relative to number of searches -
Addition and deletion of data
 Appropriate algorithm and data structures to be chosen
 String processing
 Sequence of characters (eg: text string – letters, numbers,
special char, bit strings – 0’s and 1’s)
 Pattern matching
 Process of searching for an occurrence of a given word or a
pattern in a string
 String matching algorithm by Knuth-Morris-Pratt
 Compares characters of a pattern from left to right
 String matching algorithm by Boyer-Moore
 Compares characters of a pattern from right to left
 Graph problems
 Collection of vertices connected by edges
 Ex: real-time applications – transportation,
communication networks
 Graph algorithms
 Graph traversal – how to visit all the points in the network
 BFS, DFS
 Shortest-path algorithms – finding the best route
 Dijkstra’s (shortest paths); Prim’s and Kruskal’s (minimum spanning trees)
 Travelling salesman problem – finding shortest path through
n cities that visits all the cities exactly once
 Graph-coloring problem
 Assign the smallest number of colors to the vertices of a graph so that no two adjacent vertices get the same color
 Combinatorial problems
 Problems that ask, explicitly or implicitly, to find a combinatorial object like a permutation / combination / subset that satisfies certain constraints and has some desired property
 Ex: the travelling salesman problem and the graph coloring problem
 Among the most difficult problems in computing – the number of combinatorial objects grows extremely fast with problem size, and no known algorithm solves such problems in acceptable time
 Geometric problems
 Deals with geometric objects such as lines,
points, polygons etc.
Ex: closest-pair – given n points, finding the
closest pair, convex-hull – finding the smallest
convex polygon that includes all the points of a
given set, dihedral angle problem
 Numerical problems
 Involves mathematical objects of continuous
nature
 Mathematical problems can be solved only
approximately
 Solving equations
 Evaluating functions
 Solving requires handling real numbers- done
only approximately
FUNDAMENTALS OF THE ANALYSIS OF
ALGORITHM EFFICIENCY
 Analysis Framework.
 Asymptotic Notations and its properties.
 Mathematical analysis for Recursive
algorithms.
 Mathematical analysis for Non-recursive
algorithms.
 Empirical Analysis of Algorithms
 Algorithm Visualization
FUNDAMENTALS OF THE ANALYSIS OF
ALGORITHM EFFICIENCY
 The efficiency of an algorithm can be in terms of
time and space. The algorithm efficiency can be
analyzed by the following ways.
 Analysis Framework.
 Time efficiency, indicating how fast the algorithm runs,
and
 Space efficiency, indicating how much extra memory it
uses.
 The algorithm analysis framework consists of the following:
 Measuring an Input’s Size
 Units for Measuring Running Time
 Orders of Growth
 Worst-Case, Best-Case, and Average-Case Efficiencies
Measuring an Input’s size
 Longer inputs run longer (ex: sorting large arrays, multiplying large matrices)
 Efficiency is considered as a function of input size
 Input size depends on the problem
 Input size for sorting / searching n numbers / finding the list’s smallest element?
 n
 Polynomial p(x) = aₙxⁿ + … + a₀?
 the polynomial’s degree, or the number of coefficients (larger by 1 than the degree)
 Product of two n×n matrices?
 the matrix order n, or the total number N of elements in the matrices
 Spell-checking algorithm, if it examines individual characters?
 number of characters
 If it processes words – number of words
 Algorithms involving numbers – e.g., checking whether a number n is prime – use the number of bits in n’s binary representation:
b = ⌊log₂ n⌋ + 1
Units for measuring running time
 Running time – can be measured as sec, millisec,
microsec, etc….
 But it depends on factors like:
 Speed of system
 Quality of program
 Compiler used
 For measuring efficiency – metric required that are
independent of these factors
 So, count number of times the algorithm
operations are executed – difficult and unnecessary
 Best approach – identify basic operation and count
the number of times repeated
 Basic operation – the most time-consuming operation in the algorithm’s innermost loop
 Ex: sorting – key comparison
 Matrix multiplication – multiplication / addition
 Time efficiency – let cₒₚ be the execution time of the algorithm’s basic operation, C(n) the number of times the operation is performed for input size n, and T(n) the running time:
T(n) ≈ cₒₚ · C(n)
C(n) – an approximate count that ignores the other statements
cₒₚ – an approximate time
The formula gives a reasonable estimate, unless n is extremely large or very small
 How much faster would this algorithm run on a machine that is ten times faster than the one we have? – Obviously, 10 times
 How much longer will it run if its input size is doubled?
Let C(n) = ½ n(n − 1) ≈ ½ n²; then
T(2n) / T(n) ≈ cₒₚ C(2n) / (cₒₚ C(n)) ≈ (½ (2n)²) / (½ n²) = 4
(the answer ignores the multiplicative constant cₒₚ and concentrates on the count’s order of growth for large input sizes)
Order of growth
 Why count’s order of growth for larger input
sizes?
 Difference in running time for small input – does
not distinguish between efficient and inefficient
ones
 Eg: GCD of two small numbers – we cannot perceive the superiority of Euclid’s algorithm over the other two
 For large values of n, it is the function’s order of growth that counts
 Slowest-growing function – the logarithmic function
logₐ n = logₐ b · log♭ n with any base b,
so logarithmic bases are omitted and we write log n
 Fastest-growing functions – the exponential 2ⁿ and the factorial n!
 Ex: effect of a 2-fold increase in the value of n
 log₂ n – increases by 1
(log₂ 2n = log₂ 2 + log₂ n = 1 + log₂ n)
 quadratic function n² – four-fold increase
 cubic function n³ – 8-fold increase
 2ⁿ – gets squared (2²ⁿ = (2ⁿ)²)
Worst-case, best-case and average-case
 As said earlier running time not only depends
on the size of input but also the “specifics of
input”
 Ex: sequential search
 Worst case:
 No matching elements or the matching element is at
the end of the array
 Algorithm makes largest key comparison – n times
 Cworst (n) = n
 Worst-case efficiency of an algorithm is its efficiency
for the worst-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs
the longest among all possible inputs of that
size.
 Best-case
 Best-case efficiency of an algorithm is its efficiency
for the best-case input of size n, which is an input
(or inputs) of size n for which the algorithm runs
the fastest among all possible inputs of that
size.
 Best-case efficiency for sequential search: Cbest(n) = 1 (the matching element is in the first position)
 Average-case
 Probability of a successful search equals p (0 ≤ p ≤ 1)
 Probability of the first match occurring in the ith position of the list is the same for every i
 Calculating Cavg(n):
 Successful search: probability p/n for each position i, costing i comparisons
 Unsuccessful search: probability (1 − p), costing n comparisons
Cavg(n) = [1 · p/n + … + n · p/n] + n · (1 − p)
= (p/n)[1 + … + n] + n(1 − p)
= (p/n) · n(n + 1)/2 + n(1 − p)
= p(n + 1)/2 + n(1 − p)
When p = 1 (successful search): Cavg(n) = (n + 1)/2
When p = 0 (unsuccessful search): Cavg(n) = n
The average case is not the average of the best- and worst-case efficiencies
 Amortized efficiency
 It applies not to a single run of an algorithm, but
rather to a sequence of operations performed
on the same data structure
 a single operation can be expensive, but the total time for an entire sequence of n such operations is better than the worst-case efficiency multiplied by n
 the high cost of a worst-case occurrence is thus amortized over the sequence
How do you measure the efficiency of an
algorithm?
 Both time and space efficiencies are measured
as functions of the algorithm’s input size
 Time efficiency is measured by counting the
number of times the algorithm basic operation
is executed. Space efficiency is measured by
extra memory consumed by the algorithm
 Efficiency of some algorithms may differ
significantly for the inputs of the same size. For
such algorithms, distinguish between the
worst-case, average-case and best-case
efficiencies
ASYMPTOTIC NOTATIONS AND ITS
PROPERTIES
 Asymptotic notation is notation used to make meaningful statements about the efficiency of a program.
 The efficiency analysis framework concentrates
on the order of growth of an algorithm’s basic
operation count as the principal indicator of the
algorithm’s efficiency.
 To compare and rank such orders of growth,
computer scientists use three notations, they are:
 O - Big oh notation
 Ω - Big omega notation
 Θ - Big theta notation
O - Big oh notation
 A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
 t(n) ≤ c·g(n) for all n ≥ n0.
 Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 O = asymptotic upper bound = useful for worst-case analysis = loose bound
Big-oh notation: t(n) ∈ O(g(n))
 Example 2: Prove the assertion 100n + 5 ∈ O(n²).
Proof: 100n + 5 ≤ 100n + n (for all n ≥ 5)
= 101n
≤ 101n² (since n ≤ n²)
 The definition gives us a lot of freedom in choosing specific values for the constants c and n0; here c = 101 and n0 = 5.
Ω - Big omega notation
 A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
 t(n) ≥ c·g(n) for all n ≥ n0.
 Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 Ω = asymptotic lower bound = useful for best-case analysis = loose bound
Big-omega notation: t(n) ∈ Ω(g(n))
Example 4: Prove the assertion n³ + 10n² + 4n + 2 ∈ Ω(n²).
Proof: n³ + 10n² + 4n + 2 ≥ n² (for all n ≥ 0)
i.e., by definition t(n) ≥ c·g(n), with c = 1 and n0 = 0
Θ - Big theta notation
 A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that
 c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
 Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 Θ = asymptotic tight bound = useful for average-case analysis
Big-theta notation: t(n) ∈ Θ(g(n))
Useful Property Involving the Asymptotic Notations
 THEOREM: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). (The analogous assertions are true for the Ω and Θ notations as well.)
 PROOF: The proof extends to orders of growth the following simple fact about four arbitrary real numbers a1, b1, a2, b2: if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
 Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer n1 such that
 t1(n) ≤ c1g1(n) for all n ≥ n1.
 Similarly, since t2(n) ∈ O(g2(n)),
 t2(n) ≤ c2g2(n) for all n ≥ n2.
 Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we can use both inequalities. Adding them yields the following:
 t1(n) + t2(n) ≤ c1g1(n) + c2g2(n)
 ≤ c3g1(n) + c3g2(n)
 = c3[g1(n) + g2(n)]
 ≤ 2c3 max{g1(n), g2(n)}.
 Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0 required by the definition of O being 2c3 = 2 max{c1, c2} and max{n1, n2}, respectively.
MATHEMATICAL ANALYSIS FOR
RECURSIVE ALGORITHMS
 General Plan for Analyzing the Time Efficiency of Recursive Algorithms
 Decide on a parameter (or parameters) indicating an input’s size.
 Identify the algorithm’s basic operation.
 Check whether the number of times the basic
operation is executed can vary on different inputs of
the same size; if it can, the worst-case, average-case,
and best-case efficiencies must be investigated
separately.
 Set up a recurrence relation, with an appropriate
initial condition, for the number of times the basic
operation is executed.
 Solve the recurrence or, at least, ascertain the order
of growth of its solution.
MATHEMATICAL ANALYSIS FOR NON-
RECURSIVE ALGORITHMS
 General Plan for Analyzing the Time Efficiency of
Nonrecursive Algorithms
 Decide on a parameter (or parameters) indicating an input’s
size.
 Identify the algorithm’s basic operation (in the innermost loop).
 Check whether the number of times the basic operation is
executed depends only on the size of an input. If it also
depends on some additional property, the worst-case, average-
case, and, if necessary, best-case efficiencies have to be
investigated separately.
 Set up a sum expressing the number of times the algorithm’s
basic operation is executed.
 Using standard formulas and rules of sum manipulation either
find a closed form formula for the count or at the least,
establish its order of growth.
Empirical Analysis of Algorithms
 The principal alternative to the mathematical analysis
of an algorithm’s efficiency is its empirical analysis
 General Plan for the Empirical Analysis of
Algorithm Time Efficiency
 Understand the experiment’s purpose.
 Decide on the efficiency metric M to be
measured and the measurement unit (an
operation count vs. a time unit).
 Decide on characteristics of the input sample
(its range, size, and so on).
 Prepare a program implementing the algorithm
(or algorithms) for the experimentation.
 Generate a sample of inputs.
 Run the algorithm (or algorithms) on the sample’s inputs and
record the data observed.
 Analyze the data obtained.
 Several different goals in analyzing algorithms empirically.
 checking the accuracy of a theoretical assertion about the
algorithm’s efficiency,
 comparing the efficiency of several algorithms for solving
the same problem or different implementations of the same
algorithm,
 developing a hypothesis about the algorithm’s efficiency
class,
 ascertaining the efficiency of the program implementing
the algorithm on a particular machine
 To measure algorithm Efficiency
 insert a counter (or counters) into a program
implementing the algorithm to count the number
of times the algorithm’s basic operation is
executed - basic operation is executed in several
places in the program
 time the program implementing the algorithm in question – e.g., the time command in UNIX (t_finish − t_start)
 C and C++ – clock(); Java – the method currentTimeMillis() in the System class
 important to keep several facts in mind
 system’s time is typically not very accurate – repeated
number of times and consider average
 given the high speed of modern computers, the running time
may fail to register at all and be reported as zero - run the
program in an extra loop many times, measure the total
running time, and then divide it by the number of the loop’s
repetitions
 time-sharing system such as UNIX, the reported time may
include the time spent by the CPU on other programs
 decide on a sample of inputs for the experiment
 Inputs following a pattern – may produce the same behavior/output
 Random inputs – generated, e.g., by the linear congruential method
 Data can be presented numerically in a table or
graphically in a scatterplot
Algorithm Visualization
 third way to study algorithms
 Algorithm Visualization - use of images to
convey some useful information about algorithms
 There are two principal variations of algorithm
visualization:
 Static algorithm visualization
 Dynamic algorithm visualization, also called
algorithm animation
 Static algorithm visualization shows an
algorithm’s progress through a series of still
images.
 Algorithm animation, on the other hand, shows
a continuous, movie-like presentation of an
algorithm’s operations.
 Ex: Sorting Out Sorting
Initial and final screens of a typical
visualization of a sorting algorithm using
the scatterplot representation.
 two principal applications of algorithm
visualization: research and education.
 research use is based on the expectation that algorithm visualization may help uncover unknown features of algorithms – e.g., coloring odd- and even-numbered disks differently
 it helps students learn algorithms more easily

Introduction to Design and Analysis of Algorithms

  • 1.
    Design and Analysisof Algorithms Introduction
  • 2.
     Design-  theway in which something is planned and made or arranged.  Analysis –  the careful examination of the different parts or details of something.  Algorithm –  a set of rules that must be followed when solving a particular problem.
  • 3.
    Algorithm  an algorithmis a set of well-defined instructions to solve a particular problem. It takes a set of input and produces a desired output.  add two numbers  largest among three numbers  find all the roots of the quadratic equation  find the factorial  check prime number
  • 4.
     Qualities ofGood Algorithms  Input and output should be defined precisely.  Each step in the algorithm should be clear and unambiguous.  Algorithms should be most effective among many different ways to solve a problem.  An algorithm shouldn't include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.
  • 5.
    Need for analgorithm  Grasping the fundamental concept of the problem  To comprehend the problem’s flow  It provides the designer with a concise definition of the problem’s needs and aim.  Discover a solution to the situation  Enhance the effectiveness of existing methods  It is the easiest way to describe something without going into too much detail about how it works.
  • 6.
     evaluate andassess the complexity (in terms of time and space) of input size problems without developing and executing them  determine the algorithm’s resource requirements (memory, input-output cycles)  assess the techniques’ performance in all circumstances (best cases, worst cases, and average cases)
  • 7.
     int main(){ int a=0; a = 5<2 ? 4 : 3; printf("%d",a); return 0; }  int main() { int a=0; a = printf("4"); printf("%d",a); return 0; }  int main() { int a=0; a = 5>2 ? printf("4"): 3; printf("%d",a); return 0; }  int main() { int a=0; a = (5>2) ? : 8; printf("%d",a); return 0; }
  • 8.
     int main(){ if( 4 > 5 ) { printf("Hurray..n"); } printf("Yes"); return 0; }  int main() { if( 10 > 9 ) printf("Singaporen"); else if(4%2 == 0) printf("Englandn"); printf("Poland"); return 0; }
  • 9.
    # include<stdio.h> int find_odd(intk) { int n=0,n1,n2; while (n!=-1) { scanf("%d",&n); if(n%2==1) { n1++; if(n1==k) { return n; break; } } else if(n==-1) { return n; break; } else continue; }} void main() { int a[100],i,k; scanf("%d",&k); printf("%d",find_odd(k)); }
  • 10.
    NOTION OF ANALGORITHM  An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time. Problem to be solved Algorithm Computer Program Output Input
  • 11.
     The notionof the algorithm illustrates some important points:  The non-ambiguity requirement for each step of an algorithm cannot be compromised.  The range of inputs for which an algorithm works has to be specified carefully.  The same algorithm can be represented in several different ways.  There may exist several algorithms for solving the same problem.  Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.
  • 12.
    Characteristics of analgorithm  Input: Zero / more quantities are externally supplied.  Output: At least one quantity is produced  Definiteness: Each instruction is clear and unambiguous  Finiteness: If the instructions of an algorithm is traced then for all cases the algorithm must terminates after a finite number of steps.  Efficiency: Every instruction must be very basic and runs in short time.
  • 13.
    Steps for writingan algorithm An algorithm is a procedure. It has two parts; the first part is head and the second part is body. The Head section consists of keyword Algorithm and Name of the algorithm with parameter list. E.g. Algorithm name1(p1, p2,…,p3) The head section also has the following: //Problem Description: //Input: //Output: In the body of an algorithm – steps for solving the problem
  • 14.
    Example:  The greatestcommon divisor(GCD) of two nonnegative integers m and n (not-both-zero), denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.  Euclid’s Algorithm  Consecutive integer checking algorithm  Middle school procedure
  • 15.
     Euclid’s algorithmis based on applying repeatedly the equality gcd(m, n) = gcd(n, m mod n), where m mod n is the remainder of the division of m by n, until m mod n is equal to 0. Since gcd(m, 0) = m, the last value of m is also the greatest common divisor of the initial m and n. gcd(60, 24) can be computed as follows:gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.
  • 16.
 Euclid’s algorithm for computing gcd(m, n) in simple steps
 Step 1: If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
 Step 2: Divide m by n and assign the value of the remainder to r.
 Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
  • 17.
 Euclid’s algorithm for computing gcd(m, n) expressed in pseudocode
Euclid_gcd(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
 How it stops: the value of n is reduced on each iteration and becomes zero after a finite number of iterations.
  • 18.
 Consecutive integer checking algorithm
 t = min{m, n}
 Check whether t divides both m and n; if so, then t is the answer
 If not, decrease t until it divides both values
 Example: gcd(60, 24)
  • 19.
 Step 1: Assign the value of min{m, n} to t.
 Step 2: Divide m by t. If the remainder is zero, go to Step 3; otherwise go to Step 4.
 Step 3: Divide n by t. If the remainder is zero, return t as the output; otherwise go to Step 4.
 Step 4: Decrease the value of t by 1. Go to Step 2.
Unlike Euclid’s, this algorithm does not work correctly when one of the inputs is zero, so it is necessary to specify the range of the algorithm’s inputs.
  • 20.
    Middle school method Gcd(60,24)  60=2.2.3.5  24=2.2.2.3  Gcd(60,24)=2.2.3=12  GCD (20,36)
  • 21.
    Middle school method Step 1: find the prime factors of m  Step 2: find the prime factors of n  Step 3: identify all the common factors in the two prime expansions found in step 1 and step 2 (if p is a common factor occurring pm and pn times in m and n respectively, it should be repeated min{pm,pn} times)  Step 4: compute the product of all the common factors and return it as GCD of the numbers given
  • 22.
 Generating prime numbers less than n
 Invented in ancient Greece – the sieve of Eratosthenes
 On every iteration – eliminates the multiples of the next remaining number
  • 24.
  • 26.
 Understanding the Problem
 Careful examination of the problem
 Work it out manually with some inputs and special cases
 Input – instances of the algorithm
 Specify the range of instances, including boundary values
 An algorithm should not only work all the time but should work for all legitimate inputs
 Decision making on: Ascertaining the Capabilities of the Computational Device
 RAM: earlier systems execute in a sequential manner – sequential algorithms
 Parallel algorithms – instructions executing concurrently
 Problems related to scientific applications – independent of parameters like speed and cost – machine independent
 Practical applications – depend on the problem (speed, memory)
  • 27.
 Choosing between Exact and Approximate Problem Solving
 Certain problems cannot be solved exactly – computing square roots, solving non-linear equations, evaluating definite integrals
 Solving a problem exactly may be a slow process – problem complexity – traveling salesman problem
 Deciding on Appropriate Data Structures
 Depending on the problem type, it may demand an ingenious data structure
 Algorithm + Data Structure = Program
 Data structures are crucial for DAA
  • 28.
 Algorithm Design Techniques
 An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving problems algorithmically that is applicable to a variety of problems from different areas of computing
 Existing algorithms provide guidance for designing new algorithms
 Classify algorithms based on design ideas and use them appropriately
 Methods of Specifying an Algorithm
 There are three ways to specify an algorithm:
 Natural language
 Pseudocode
 Flowchart
  • 29.
     Natural Language Pseudocode Step 1: Read the first number, say a. Step 2: Read the first number, say b. Step 3: Add the above two numbers and store the result in c. Step 4: Display the result from c. Pseudocode Sum(a,b) //Problem Description: This algorithm performs addition of two numbers //Input: Two integers a and b //Output: Addition of two integers c←a+b return c
  • 30.
  • 31.
 Proving an Algorithm’s Correctness
 Correctness – the algorithm yields the required output for every legitimate input in a finite amount of time
 Euclid’s algorithm
 Proving correctness can be easy or complex, based on the problem
 Mathematical induction
 Tracing of the algorithm – cannot prove correctness conclusively
 To prove an algorithm incorrect – show it does not work for at least one input
 Redesign if incorrect – data structure, design technique
 For an approximation algorithm, correctness means the error does not exceed a predefined limit
  • 32.
 Analyzing an Algorithm
 Two types of efficiency
 Time – how fast the algorithm runs
 Space – how much extra memory is required
 Desirable characteristics of an algorithm
 Simplicity – finding gcd(m, n)
 A simple algorithm is easy to understand and program – few bugs
 Sometimes more efficient than a complex alternative
 Generality
 Two issues
 Generality of the problem it solves – e.g., checking whether two numbers are relatively prime / solving a general quadratic equation
 Range of inputs it accepts – for GCD, exclude 1
  • 33.
 Coding an Algorithm
 Algorithm – program (both a risk and an opportunity)
 Risk: coding incorrectly or inefficiently – testing (an art rather than a science)
 Verifying over the full range of inputs
 Code optimization – can increase speed by 10–50%
 Empirical analysis for time efficiency
 Algorithm’s optimality – complexity of the problem – the minimum effort required to solve the problem
 Whether a problem can be solved at all – e.g., finding real roots of a quadratic equation with a negative discriminant
  • 34.
 IMPORTANT PROBLEM TYPES
The most important problem types are:
 Sorting
 Arranging elements in some order
 Sorting numbers, alphabets
 Sorting student records – by names / roll nos / marks – the key
 Sorting makes searching easier – e.g., dictionaries, telephone directories
 Sorting algorithm selection – programming time, execution time, memory space
 Properties of a sorting algorithm
 Stable – equal keys aren’t reordered (students with the same GPA stay sorted alphabetically)
 In-place – does not require extra memory
 Types
 Internal sorting – data sorted and retained in main memory
 Bubble, selection, insertion, shell, quick and heap
 External sorting – moving data from secondary storage to main memory for sorting
 Merge
  • 35.
     Searching  Findinga given value called search key in a given set  Linear, binary, hashing  tree-based – binary, balanced  Used for storing and retrieving information from large database  Issues  Frequently changing data relative to number of searches - Addition and deletion of data  Appropriate algorithm and data structures to be chosen
  • 36.
     String processing Sequence of characters (eg: text string – letters, numbers, special char, bit strings – 0’s and 1’s)  Pattern matching  Process of searching for an occurrence of a given word or a pattern in a string  String matching algorithm by knuth-morris-pratt  Compares character of a pattern from left to right  String matching algorithm by boyer-moore  Compares characters of a pattern from right to left
  • 37.
     Graph problems Collection of vertices connected by edges  Ex: real-time applications – transportation, communication networks  Graph algorithms  Graph traversal – how to visit all the points in the network  BFS, DFS  Shortest path algorithm – finding best route  Dijikstra’s, Prims, Kruskals  Travelling salesman problem – finding shortest path through n cities that visits all the cities exactly once  Graph-coloring problem  Smallest number of colors to vertices of a graph so that no two adjacet vertices are of same color
  • 38.
     Combinatorial problems Problem that ask explicitly or implicitly to find combinatorial object like permutation/ combination / subset that satisfies certain constraints and has some desired propoerty  Ex: travelling salesman problem and graph coloring problem  More difficult problems in computing – number of combinatorial objects grows with problem size, no algorithm for solving such problem in acceptable time
  • 39.
     Geometric problems Deals with geometric objects such as lines, points, polygons etc. Ex: closest-pair – given n points, finding the closest pair, convex-hull – finding the smallest convex polygon that includes all the points of a given set, dihedral angle problem  Numerical problems  Involves mathematical objects of continuous nature  Mathematical problems can be solved only approximately  Solving equations  Evaluating functions  Solving requires handling real numbers- done only approximately
  • 40.
 FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY
 Analysis Framework
 Asymptotic Notations and their properties
 Mathematical analysis of recursive algorithms
 Mathematical analysis of non-recursive algorithms
 Empirical Analysis of Algorithms
 Algorithm Visualization
  • 41.
 FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY
 The efficiency of an algorithm can be measured in terms of time and space. Algorithm efficiency can be analyzed in the following ways.
 Analysis Framework
 Time efficiency, indicating how fast the algorithm runs, and
 Space efficiency, indicating how much extra memory it uses.
 The algorithm analysis framework consists of the following:
 Measuring an Input’s Size
 Units for Measuring Running Time
 Orders of Growth
 Worst-Case, Best-Case, and Average-Case Efficiencies
  • 42.
 Measuring an Input’s Size
 Longer inputs run longer (ex: sorting large arrays, multiplying large matrices)
 Efficiency is considered as a function of input size
 Input size depends on the problem
 Input size for sorting / searching n numbers / finding the list’s smallest element? – n
 Polynomial p(x) = aₙxⁿ + … + a₀ – the polynomial’s degree or the number of coefficients (larger by 1 than the degree)
 Product of two n×n matrices – the matrix order n / the total number of elements N in the matrix
  • 43.
 Spell-checking algorithm – if it examines individual characters, input size is the number of characters
 If it processes words – the number of words
 Algorithms involving numbers – e.g., checking whether a number n is prime – use the number of bits in its binary representation: b = ⌊log₂ n⌋ + 1
  • 44.
 Units for measuring running time
 Running time can be measured in seconds, milliseconds, microseconds, etc.
 But it depends on factors like:
 Speed of the system
 Quality of the program
 Compiler used
 For measuring efficiency, a metric is needed that is independent of these factors
 Counting the number of times every operation of the algorithm is executed – difficult and unnecessary
 Best approach – identify the basic operation and count the number of times it is repeated
  • 45.
 Basic operation – the most time-consuming operation in the algorithm’s innermost loop
 Ex: sorting – key comparison
 Matrix multiplication – multiplication / addition
 Time efficiency – let cop be the execution time of the algorithm’s basic operation, C(n) the number of times the operation is performed for input size n, and T(n) the running time:
T(n) ≈ cop · C(n)
 C(n) – approximate count, without considering other statements
 cop – approximate time
 Provides a reasonable estimate, except for extremely large or very small n
  • 46.
 “How much faster would this algorithm run on a machine that is ten times faster than the one we have?” – 10 times faster
 How much longer does it run if the input size is doubled? Let C(n) = ½n(n − 1) ≈ ½n²; then
T(2n) / T(n) ≈ cop C(2n) / cop C(n) ≈ ½(2n)² / ½n² = 4
 (This ignores the multiplicative constant and concentrates on the count’s order of growth for large input sizes.)
  • 47.
    Order of growth Why count’s order of growth for larger input sizes?  Difference in running time for small input – does not distinguish between efficient and inefficient ones  Eg: GCD for two small numbers – efficiency of euclid’s than other two algorithms  For value of n – function’s order of growth that counts
  • 49.
 Slowest-growing function – the logarithmic function
 loga n = loga b · logb n, so logarithmic bases are omitted and written as log n
 Exponential 2ⁿ and factorial n! grow the fastest
  • 50.
 Ex: effect of a 2-fold increase in the value of n
 log₂ n – increases by 1 (log₂ 2n = log₂ 2 + log₂ n = 1 + log₂ n)
 Quadratic function n² – 4-fold increase
 Cubic function n³ – 8-fold increase
 For 2ⁿ – the value gets squared (2²ⁿ = (2ⁿ)²)
  • 51.
 Worst-case, best-case and average-case
 As said earlier, running time depends not only on the size of the input but also on the “specifics” of the input
 Ex: sequential search
  • 53.
     Worst case: No matching elements or the matching element is at the end of the array  Algorithm makes largest key comparison – n times  Cworst (n) = n  Worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size.
  • 54.
     Best-case  Best-caseefficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size.  Best-case efficiency for sequential search - ?  Average-case  Probability of a successful search is equal to p (0≤p≤1)  Probability of the first match occurring in the ith position of the list is the same for every i
  • 55.
 Calculating Cavg(n):
 Probability of the first match at position i (successful search) = p/n for every i
 Probability of an unsuccessful search = 1 − p
Cavg(n) = [1 · p/n + 2 · p/n + … + n · p/n] + n · (1 − p)
        = p/n [1 + 2 + … + n] + n(1 − p)
        = p/n · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p)
 When p = 1 (the search is always successful): Cavg(n) = (n + 1)/2
 When p = 0 (the search is always unsuccessful): Cavg(n) = n
 The average case is not the average of the best- and worst-case efficiencies
  • 56.
     Amortized efficiency It applies not to a single run of an algorithm, but rather to a sequence of operations performed on the same data structure  single operation can be expensive but total time for entire sequence of n such operations is better than the worst-case efficiency multiplied by n  So amortizing the high cost of worst –case occurrence
  • 57.
 How do you measure the efficiency of an algorithm?
 Both time and space efficiencies are measured as functions of the algorithm’s input size
 Time efficiency is measured by counting the number of times the algorithm’s basic operation is executed. Space efficiency is measured by the extra memory consumed by the algorithm
 The efficiency of some algorithms may differ significantly for inputs of the same size. For such algorithms, distinguish between the worst-case, average-case and best-case efficiencies
  • 58.
 ASYMPTOTIC NOTATIONS AND THEIR PROPERTIES
 Asymptotic notation is a notation used to make meaningful statements about the efficiency of a program.
 The efficiency analysis framework concentrates on the order of growth of an algorithm’s basic operation count as the principal indicator of the algorithm’s efficiency.
 To compare and rank such orders of growth, computer scientists use three notations:
 O – Big oh notation
 Ω – Big omega notation
 Θ – Big theta notation
  • 60.
 O – Big oh notation
 A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that
t(n) ≤ c·g(n) for all n ≥ n₀.
 Here t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 O = asymptotic upper bound = useful for worst-case analysis = loose bound
  • 61.
 Big-oh notation: t(n) ∈ O(g(n))
  • 62.
 Example 2: Prove the assertion 100n + 5 ∈ O(n²).
Proof: 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n² (since n ≤ n²)
 Since the definition gives us a lot of freedom in choosing specific values for the constants c and n₀, we have c = 101 and n₀ = 5.
  • 63.
 Ω – Big omega notation
 A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n₀ such that
t(n) ≥ c·g(n) for all n ≥ n₀.
 Here t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 Ω = asymptotic lower bound = useful for best-case analysis = loose bound
  • 64.
    Big-omega notation: t(n) ∈ Ω (g(n))
  • 65.
 Example 4: Prove the assertion n³ + 10n² + 4n + 2 ∈ Ω(n²).
Proof: n³ + 10n² + 4n + 2 ≥ n² (for all n ≥ 0)
i.e., by definition t(n) ≥ c·g(n), where c = 1 and n₀ = 0
  • 66.
 Θ – Big theta notation
 A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some nonnegative integer n₀ such that
c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀.
 Here t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
 Θ = asymptotic tight bound = useful for average-case analysis
  • 67.
 Big-theta notation: t(n) ∈ Θ(g(n))
  • 68.
 Useful Property Involving the Asymptotic Notations
 THEOREM: If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}). (The analogous assertions are true for the Ω and Θ notations as well.)
 PROOF: The proof extends to orders of growth the following simple fact about four arbitrary real numbers a₁, b₁, a₂, b₂: if a₁ ≤ b₁ and a₂ ≤ b₂, then a₁ + a₂ ≤ 2 max{b₁, b₂}.
 Since t₁(n) ∈ O(g₁(n)), there exist some positive constant c₁ and some nonnegative integer n₁ such that
t₁(n) ≤ c₁g₁(n) for all n ≥ n₁.
 Similarly, since t₂(n) ∈ O(g₂(n)),
t₂(n) ≤ c₂g₂(n) for all n ≥ n₂.
 Let us denote c₃ = max{c₁, c₂} and consider n ≥ max{n₁, n₂} so that we can use both inequalities. Adding them yields the following:
t₁(n) + t₂(n) ≤ c₁g₁(n) + c₂g₂(n)
             ≤ c₃g₁(n) + c₃g₂(n)
             = c₃[g₁(n) + g₂(n)]
             ≤ c₃ · 2 max{g₁(n), g₂(n)}.
 Hence, t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}), with the constants c and n₀ required by the O definition being 2c₃ = 2 max{c₁, c₂} and max{n₁, n₂}, respectively.
  • 69.
 MATHEMATICAL ANALYSIS FOR RECURSIVE ALGORITHMS
 General Plan for Analyzing the Time Efficiency of Recursive Algorithms
 Decide on a parameter (or parameters) indicating an input’s size.
 Identify the algorithm’s basic operation.
 Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
 Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
 Solve the recurrence or, at least, ascertain the order of growth of its solution.
  • 70.
 MATHEMATICAL ANALYSIS FOR NON-RECURSIVE ALGORITHMS
 General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
 Decide on a parameter (or parameters) indicating an input’s size.
 Identify the algorithm’s basic operation (in the innermost loop).
 Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
 Set up a sum expressing the number of times the algorithm’s basic operation is executed.
 Using standard formulas and rules of sum manipulation, either find a closed-form formula for the count or, at the least, establish its order of growth.
  • 71.
 Empirical Analysis of Algorithms
 The principal alternative to the mathematical analysis of an algorithm’s efficiency is its empirical analysis
 General Plan for the Empirical Analysis of Algorithm Time Efficiency
 Understand the experiment’s purpose.
 Decide on the efficiency metric M to be measured and the measurement unit (an operation count vs. a time unit).
 Decide on characteristics of the input sample (its range, size, and so on).
 Prepare a program implementing the algorithm (or algorithms) for the experimentation.
  • 72.
 Generate a sample of inputs.
 Run the algorithm (or algorithms) on the sample’s inputs and record the data observed.
 Analyze the data obtained.
 There are several different goals in analyzing algorithms empirically:
 checking the accuracy of a theoretical assertion about the algorithm’s efficiency,
 comparing the efficiency of several algorithms for solving the same problem, or of different implementations of the same algorithm,
 developing a hypothesis about the algorithm’s efficiency class,
 ascertaining the efficiency of the program implementing the algorithm on a particular machine
  • 73.
 To measure algorithm efficiency:
 insert a counter (or counters) into a program implementing the algorithm to count the number of times the algorithm’s basic operation is executed – tricky if the basic operation is executed in several places in the program
 time the program implementing the algorithm in question – time command in UNIX (tfinish − tstart)
 C and C++ – clock; Java – method currentTimeMillis() in the System class
 It is important to keep several facts in mind:
 the system’s time is typically not very accurate – repeat the measurement several times and take the average
  • 74.
 given the high speed of modern computers, the running time may fail to register at all and be reported as zero – run the program in an extra loop many times, measure the total running time, and then divide it by the number of the loop’s repetitions
 on a time-sharing system such as UNIX, the reported time may include the time spent by the CPU on other programs
 decide on a sample of inputs for the experiment
 Patterned inputs – may all give the same output
 Random numbers – linear congruential method
  • 76.
 Data can be presented numerically in a table or graphically in a scatterplot
  • 77.
    Algorithm Visualization  thirdway to study algorithms  Algorithm Visualization - use of images to convey some useful information about algorithms  There are two principal variations of algorithm visualization:  Static algorithm visualization  Dynamic algorithm visualization, also called algorithm animation  Static algorithm visualization shows an algorithm’s progress through a series of still images.
  • 78.
 Algorithm animation, on the other hand, shows a continuous, movie-like presentation of an algorithm’s operations.
 Ex: Sorting Out Sorting
  • 79.
 Initial and final screens of a typical visualization of a sorting algorithm using the scatterplot representation.
  • 80.
 Two principal applications of algorithm visualization: research and education.
 Researchers are motivated by the expectation that algorithm visualization may help uncover some unknown features of algorithms – e.g., using different colors for odd- and even-numbered disks
 It also helps students learn algorithms more easily