
Daa Unit 1 Notes

An algorithm is a sequence of clear instructions designed to solve a problem with specified inputs and outputs, ensuring it terminates in a finite time. Key characteristics include definiteness, finiteness, and efficiency, while various methods for specifying algorithms include natural language, pseudocode, and flowcharts. The document also covers algorithm design, correctness, analysis of efficiency in terms of time and space, and the importance of coding algorithms effectively.


1.1 NOTION OF AN ALGORITHM

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for


obtaining a required output for any legitimate input in a finite amount of time.

FIGURE 1.1 The notion of the algorithm: a problem is solved by designing an algorithm; a computer program implementing it transforms the input into the required output.

In other words, an algorithm is a step-by-step procedure that takes the input and solves the problem in a finite amount of time to obtain the required output.

The notion of the algorithm illustrates some important points:


 The non-ambiguity requirement for each step of an algorithm cannot be compromised.
 The range of inputs for which an algorithm works has to be specified carefully.
 The same algorithm can be represented in several different ways.
 There may exist several algorithms for solving the same problem.
 Algorithms for the same problem can be based on very different ideas and can solve the
problem with dramatically different speeds.

Characteristics of an algorithm:
Input: Zero or more quantities are externally supplied.
Output: At least one quantity is produced.
Definiteness: Each instruction is clear and unambiguous.
Finiteness: If the instructions of an algorithm are traced, then for all cases the algorithm must terminate after a finite number of steps.
Efficiency: Every instruction must be very basic and must run in a short time.
Steps for writing an algorithm:
1. An algorithm is a procedure. It has two parts; the first part is head and the second part is
body.
2. The Head section consists of keyword Algorithm and Name of the algorithm with
parameter list, e.g., Algorithm Name(p1, p2, …, pn)
The head section also has the following:
//Problem Description:
//Input:
//Output:
3. In the body of an algorithm various programming constructs like if, for, while and some
statements like assignments are used.
4. Compound statements may be enclosed within { and } brackets. if, for, while can be closed by endif, endfor, endwhile respectively. Proper indentation is a must for each block.
5. Comments are written using // at the beginning.
6. An identifier should begin with a letter, not a digit; it may contain alphanumeric characters after the first letter. There is no need to mention data types.
7. The left arrow “←” used as assignment operator. E.g. v←10
8. Boolean operators (TRUE, FALSE), Logical operators (AND, OR, NOT) and Relational
operators (<,<=, >, >=,=, ≠, <>) are also used.
9. Input and Output can be done using read and write.
10. Array[], if then else condition, branch and loop can be also used in algorithm.

Example:
The greatest common divisor (GCD) of two nonnegative integers m and n (not both zero), denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of zero.
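The gcd defined above is naturally computed with Euclid's method, gcd(m, n) = gcd(n, m mod n), which reappears later in this unit; a minimal Python sketch:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m  # gcd(m, 0) = m

print(gcd(60, 24))  # 12
```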

1.2 FUNDAMENTALS OF ALGORITHMIC PROBLEM SOLVING


A sequence of steps involved in designing and analyzing an algorithm is shown in the figure
below.

FIGURE 1.2 Algorithm design and analysis process.


(i) Understanding the Problem
 This is the first step in designing an algorithm.
 Read the problem description carefully to understand the problem statement completely.
 Ask questions to clarify doubts about the problem.
 Identify the problem type and use an existing algorithm to find a solution.
 The input (instance) to the problem and the range of the input get fixed.

(ii) Decision making


The Decision making is done on the following:
(a) Ascertaining the Capabilities of the Computational Device
 In random-access machine (RAM), instructions are executed one after another (The
central assumption is that one operation at a time). Accordingly, algorithms
designed to be executed on such machines are called sequential algorithms.
 In some newer computers, operations are executed concurrently, i.e., in parallel.
Algorithms that take advantage of this capability are called parallel algorithms.
 The choice of computational devices like processor and memory is mainly based on space and time efficiency.
(b) Choosing between Exact and Approximate Problem Solving
 The next principal decision is to choose between solving the problem exactly or
solving it approximately.
 An algorithm used to solve the problem exactly and produce correct result is called
an exact algorithm.
 If the problem is so complex and not able to get exact solution, then we have to
choose an algorithm called an approximation algorithm. i.e., produces an
approximate answer. E.g., extracting square roots, solving nonlinear equations, and
evaluating definite integrals.
(c) Algorithm Design Techniques
 An algorithm design technique (or “strategy” or “paradigm”) is a general approach
to solving problems algorithmically that is applicable to a variety of problems from
different areas of computing.
Algorithms+ Data Structures = Programs
 Though algorithms and data structures are independent, they are combined together to develop a program. Hence the choice of a proper data structure is required before designing the algorithm.
 Implementation of an algorithm is possible only with the help of both algorithms and data structures.
 Algorithmic strategy / technique / paradigm are a general approach by which
many problems can be solved algorithmically. E.g., Brute Force, Divide and
Conquer, Dynamic Programming, Greedy Technique and so on.

(iii) Methods of Specifying an Algorithm

There are three ways to specify an algorithm. They are:


a. Natural language
b. Pseudocode
c. Flowchart

FIGURE 1.3 Algorithm specifications: natural language, pseudocode, flowchart.

Pseudocode and flowchart are the two options that are most widely used nowadays for specifying
algorithms.

a. Natural Language
It is very simple and easy to specify an algorithm using natural language. But many times the specification of an algorithm in natural language is not clear, and we thereby get only a vague specification.
Example: An algorithm to perform addition of two numbers.
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.
Such a specification creates difficulty while actually implementing it. Hence many programmers
prefer to have specification of algorithm by means of Pseudocode.
b. Pseudocode
 Pseudocode is a mixture of a natural language and programming language constructs.
Pseudocode is usually more precise than natural language.
 For Assignment operation left arrow “←”, for comments two slashes “//”,if condition, for,
while loops are used.

ALGORITHM Sum(a,b)
//Problem Description: This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c
This specification is more useful for implementation in any programming language.

c. Flowchart
In the earlier days of computing, the dominant method for specifying algorithms was a flowchart; however, this representation technique has proved to be inconvenient.
A flowchart is a graphical representation of an algorithm. It is a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm's steps.
Symbols: Start / Stop state, Transition / Assignment, Processing, Input and Output, Condition / Decision, Flow connectivity.
Example: Addition of a and b:
Start → Input the value of a → Input the value of b → c = a + b → Display the value of c → Stop

FIGURE 1.4 Flowchart symbols and an example for two-integer addition.

(iv) Proving an Algorithm’s Correctness

 Once an algorithm has been specified, its correctness must be proved.
 An algorithm must yield the required result for every legitimate input in a finite amount of time.
 For example, the correctness of Euclid’s algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
 A common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs.
 The notion of correctness for approximation algorithms is less straightforward than it is for
exact algorithms. The error produced by the algorithm should not exceed a predefined
limit.

(v) Analyzing an Algorithm


 For an algorithm, the most important property is efficiency. In fact, there are two kinds of algorithm efficiency. They are:
 Time efficiency, indicating how fast the algorithm runs, and
 Space efficiency, indicating how much extra memory it uses.
 The efficiency of an algorithm is determined by measuring both time efficiency and space
efficiency.
 So factors to analyze an algorithm are:
 Time efficiency of an algorithm
 Space efficiency of an algorithm
 Simplicity of an algorithm
 Generality of an algorithm

(vi) Coding an Algorithm


 The coding / implementation of an algorithm is done by a suitable programming language
like C, C++, JAVA.
 The transition from an algorithm to a program can be done either incorrectly or very inefficiently. Implementing an algorithm correctly is necessary, and the algorithm's power should not be reduced by an inefficient implementation.
 Standard tricks like computing a loop’s invariant (an expression that does not change its
value) outside the loop, collecting common subexpressions, replacing expensive
operations by cheap ones, selection of programming language and so on should be known to
the programmer.
 Typically, such improvements can speed up a program only by a constant factor, whereas a
better algorithm can make a difference in running time by orders of magnitude. But once
an algorithm is selected, a 10–50% speedup may be worth an effort.
 It is very essential to write an optimized code (efficient code) to reduce the burden of
compiler.
FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY
The efficiency of an algorithm can be in terms of time and space. The algorithm efficiency
can be analyzed by the following ways.
a. Analysis Framework.
b. Asymptotic Notations and its properties.
c. Mathematical analysis for Recursive algorithms.
d. Mathematical analysis for Non-recursive algorithms.

1.3 Analysis Framework


There are two kinds of efficiencies to analyze the efficiency of any algorithm. They are:
 Time efficiency, indicating how fast the algorithm runs, and
 Space efficiency, indicating how much extra memory it uses.

The algorithm analysis framework consists of the following:


 Measuring an Input’s Size
 Units for Measuring Running Time
 Orders of Growth
 Worst-Case, Best-Case, and Average-Case Efficiencies

(i) Measuring an Input’s Size


 An algorithm’s efficiency is defined as a function of some parameter n indicating the
algorithm’s input size. In most cases, selecting such a parameter is quite straightforward.
For example, it will be the size of the list for problems of sorting, searching.
 For the problem of evaluating a polynomial p(x) = a_n·x^n + . . . + a_0 of degree n, the input-size parameter will be the polynomial's degree or the number of its coefficients, which is larger by 1 than its degree.
 In computing the product of two n × n matrices, the choice of a parameter indicating an
input size does matter.
 Consider a spell-checking algorithm. If the algorithm examines individual characters of its
input, then the size is measured by the number of characters.
 In measuring input size for algorithms solving problems such as checking the primality of a positive integer n, the input is just one number.
 The input size is then measured by the number b of bits in n's binary representation: b = ⌊log2 n⌋ + 1.

(ii) Units for Measuring Running Time


Some standard unit of time measurement such as a second, or millisecond, and so on can be
used to measure the running time of a program after implementing the algorithm.
Drawbacks
 Dependence on the speed of a particular computer.
 Dependence on the quality of a program implementing the algorithm.
 The compiler used in generating the machine code.
 The difficulty of clocking the actual running time of the program.
So, we need metric to measure an algorithm’s efficiency that does not depend on these
extraneous factors.
One possible approach is to count the number of times each of the algorithm’s operations
is executed. This approach is excessively difficult.
The operation contributing the most to the total running time is called the basic operation (typically the most time-consuming operation in the algorithm's innermost loop, e.g., +, -, *, /). Computing the number of times the basic operation is executed is easy, and the total running time is determined by the basic operation count.

(iii) Orders of Growth


 A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones.
 For example, when computing the greatest common divisor of two small numbers, it is not immediately clear how much more efficient Euclid's algorithm is compared to the other algorithms; the difference in algorithm efficiencies becomes clear only for larger numbers.
 For large values of n, it is the function’s order of growth that counts just like the Table 1.1,
which contains values of a few functions particularly important for analysis of algorithms.

TABLE 1.1 Values (approximate) of several functions important for analysis of algorithms

n      √n       log2 n   n      n·log2 n   n²       n³       2ⁿ        n!
1      1        0        1      0          1        1        2         1
2      1.4      1        2      2          4        8        4         2
4      2        2        4      8          16       64       16        24
8      2.8      3        8      2.4·10¹    64       5.1·10²  2.6·10²   4.0·10⁴
10     3.2      3.3      10     3.3·10¹    10²      10³      10³       3.6·10⁶
16     4        4        16     6.4·10¹    2.6·10²  4.1·10³  6.5·10⁴   2.1·10¹³
10²    10       6.6      10²    6.6·10²    10⁴      10⁶      1.3·10³⁰  9.3·10¹⁵⁷
10³    31       10       10³    1.0·10⁴    10⁶      10⁹      very big
10⁴    10²      13       10⁴    1.3·10⁵    10⁸      10¹²     computation
10⁵    3.2·10²  17       10⁵    1.7·10⁶    10¹⁰     10¹⁵
10⁶    10³      20       10⁶    2.0·10⁷    10¹²     10¹⁸

(iv) Worst-Case, Best-Case, and Average-Case Efficiencies


Consider the sequential search algorithm, which searches for some search key K in a given list.
ALGORITHM SequentialSearch(A[0..n - 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n - 1] and a search key K
//Output: The index of the first element in A that matches K or -1 if there are no
// matching elements
i ←0
while i < n and A[i] ≠ K do
i ←i + 1
if i < n return i
else return -1
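The pseudocode above translates almost line for line into Python; a minimal sketch:

```python
def sequential_search(a, key):
    """Return the index of the first element equal to key, or -1 if absent."""
    i = 0
    while i < len(a) and a[i] != key:  # basic operation: the comparison a[i] != key
        i += 1
    return i if i < len(a) else -1

print(sequential_search([9, 7, 5, 3], 5))  # 2
```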
Clearly, the running time of this algorithm can be quite different for the same list size n.

In the worst case, there is no matching element, or the first matching element is the last one on the list. In the best case, the first element on the list matches the search key.

Worst-case efficiency
 The worst-case efficiency of an algorithm is its efficiency for the worst case input of size n.
 The algorithm runs the longest among all possible inputs of that size.
 For the input of size n, the running time is Cworst(n) = n.

Best case efficiency


 The best-case efficiency of an algorithm is its efficiency for the best case input of size n.
 The algorithm runs the fastest among all possible inputs of that size n.
 In sequential search, if the first element of the list equals the search key, then the running time is Cbest(n) = 1.

Average case efficiency


 The Average case efficiency lies between best case and worst case.
 To analyze the algorithm’s average case efficiency, we must make some assumptions about
possible inputs of size n.
 The standard assumptions are that
o The probability of a successful search is equal to p (0 ≤ p ≤ 1) and
o The probability of the first match occurring in the ith position of the list is the same for every i.
 Under these assumptions, Cavg(n) = [1 + 2 + . . . + n]·(p/n) + n·(1 − p) = p(n + 1)/2 + n(1 − p).

Yet another type of efficiency is called amortized efficiency. It applies not to a single run of
an algorithm but rather to a sequence of operations performed on the same data structure.

ASYMPTOTIC NOTATIONS AND ITS PROPERTIES

Asymptotic notation is a notation used to make meaningful statements about the efficiency of a program.
The efficiency analysis framework concentrates on the order of growth of an algorithm’s
basic operation count as the principal indicator of the algorithm’s efficiency.
To compare and rank such orders of growth, computer scientists use three notations, they
are:
 O - Big oh notation
 Ω - Big omega notation
 Θ - Big theta notation
Let t(n) and g(n) be any nonnegative functions defined on the set of natural numbers. The algorithm's running time t(n) is usually indicated by its basic operation count C(n), and g(n) is some simple function to compare the count with.


(i) O - Big oh notation


A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0,
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
O = Asymptotic upper bound = Useful for worst case analysis = Loose bound

FIGURE 1.5 Big-oh notation: t(n) ∈ O(g(n)).

Example 2: Prove the assertion 100n + 5 ∈ O(n²).

Proof: 100n + 5 ≤ 100n + n (for all n ≥ 5)
= 101n
≤ 101n² (since n ≤ n²)
The definition gives us a lot of freedom in choosing specific values for the constants c and n0; here we may take c = 101 and n0 = 5.
Example 3: Prove the assertion 100n + 5 ∈ O(n).
Proof: 100n + 5 ≤ 100n + 5n (for all n ≥ 1)
= 105n
i.e., 100n + 5 ≤ 105n
i.e., t(n) ≤ c·g(n)
∴ 100n + 5 ∈ O(n) with c = 105 and n0 = 1.
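The constants found in Example 3 can be checked numerically; a small Python sketch that verifies the bound 100n + 5 ≤ 105n over a sample of n ≥ n0 = 1:

```python
# Verify t(n) <= c*g(n) for the Example 3 choice c = 105, n0 = 1,
# over a finite sample of n values (the bound itself holds for all n >= 1).
c, n0 = 105, 1
for n in range(n0, 1001):
    assert 100 * n + 5 <= c * n
print("bound holds for n = 1..1000")
```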

(ii) Ω - Big omega notation


A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by
some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c
and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0,
where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Ω = Asymptotic lower bound = Useful for best case analysis = Loose bound

FIGURE 1.6 Big-omega notation: t (n) ∈ Ω (g(n)).

Example 4: Prove the assertion n³ + 10n² + 4n + 2 ∈ Ω(n²).

Proof: n³ + 10n² + 4n + 2 ≥ n² (for all n ≥ 0),
i.e., by definition t(n) ≥ c·g(n), where c = 1 and n0 = 0.

(iii) Θ - Big theta notation


A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.
Where t(n) and g(n) are nonnegative functions defined on the set of natural numbers.
Θ = Asymptotic tight bound = Useful for average case analysis

FIGURE 1.7 Big-theta notation: t (n) ∈ Θ(g(n)).

Example 5: Prove the assertion (1/2)·n(n − 1) ∈ Θ(n²).

Proof: First we prove the right inequality (the upper bound):
(1/2)·n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0.
Second, we prove the left inequality (the lower bound):
(1/2)·n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n·(1/2)n = (1/4)n² for all n ≥ 2.
∴ (1/4)n² ≤ (1/2)·n(n − 1) ≤ (1/2)n²,
hence (1/2)·n(n − 1) ∈ Θ(n²), where c2 = 1/4, c1 = 1/2, and n0 = 2.
Note: asymptotic notation can be thought of as "relational operators" for functions similar to the
corresponding relational operators for values.
= ⇒ Θ(), ≤ ⇒ O(), ≥ ⇒ Ω(), < ⇒ o(), > ⇒ ω()

Useful Property Involving the Asymptotic Notations


The following property, in particular, is useful in analyzing algorithms that comprise two
consecutively executed parts.

THEOREM: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
(The analogous assertions are true for the Ω and Θ notations as well.)

PROOF: The proof extends to orders of growth the following simple fact about four arbitrary real
numbers a1, b1, a2, b2: if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer
n1 such that
t1(n) ≤ c1g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)),
t2(n) ≤ c2g2(n) for all n ≥ n2.
Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we can use
both inequalities. Adding them yields the following:
t1(n) + t2(n) ≤ c1g1(n) + c2g2(n)
≤ c3g1(n) + c3g2(n)
= c3[g1(n) + g2(n)]
≤ 2c3 max{g1(n), g2(n)}.

Hence, t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constants c and n0 required by the definition of O being 2c3 = 2 max{c1, c2} and max{n1, n2}, respectively.
The property implies that the algorithm's overall efficiency will be determined by the part with a higher order of growth, i.e., its least efficient part.
∴ If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).

Basic rules of sum manipulation:
(R1) Σ(i=l..u) c·ai = c·Σ(i=l..u) ai
(R2) Σ(i=l..u) (ai ± bi) = Σ(i=l..u) ai ± Σ(i=l..u) bi

Summation formulas:
(S1) Σ(i=l..u) 1 = u − l + 1; in particular, Σ(i=1..n) 1 = n
(S2) Σ(i=1..n) i = 1 + 2 + . . . + n = n(n + 1)/2 ≈ (1/2)n² ∈ Θ(n²)

MATHEMATICAL ANALYSIS FOR RECURSIVE ALGORITHMS


General Plan for Analyzing the Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n.
Since n! = 1 · . . . · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1 by definition, we can compute
F(n) = F(n − 1) · n with the following recursive algorithm. (ND 2015)
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n

Algorithm analysis
 For simplicity, we consider n itself as the indicator of this algorithm's input size (rather than the number of bits in n's binary representation).
 The basic operation of the algorithm is multiplication, whose number of executions we
denote M(n). Since the function F(n) is computed according to the formula F(n) = F(n −1)•n
for n > 0.
 The number of multiplications M(n) needed to compute it must satisfy the equality
M(n) = M(n − 1) + 1 for n > 0,
where M(n − 1) multiplications are spent to compute F(n − 1), and one more multiplication is needed to multiply the result by n.

Recurrence relations
The last equation defines the sequence M(n) that we need to find. This equation defines
M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its value at another point,
namely n − 1. Such equations are called recurrence relations or recurrences.
Solve the recurrence relation M(n) = M(n − 1) + 1, i.e., find an explicit formula for M(n) in terms of n only.
To determine a solution uniquely, we need an initial condition that tells us the value with
which the sequence starts. We can obtain this value by inspecting the condition that makes the
algorithm stop its recursive calls:
if n = 0 return 1.
This tells us two things. First, since the calls stop when n = 0, the smallest value of n for
which this algorithm is executed and hence M(n) defined is 0. Second, by inspecting the
pseudocode’s exiting line, we can see that when n = 0, the algorithm performs no multiplications.

Thus, the recurrence relation and initial condition for the algorithm’s number of multiplications
M(n):
M(n) = M(n − 1) + 1 for n > 0,
M(0) = 0 for n = 0.

Method of backward substitutions


M(n) = M(n − 1) + 1 substitute M(n − 1) = M(n − 2) + 1
= [M(n − 2) + 1]+ 1
= M(n − 2) + 2 substitute M(n − 2) = M(n − 3) + 1
= [M(n − 3) + 1]+ 2
= M(n − 3) + 3

= M(n − i) + i

= M(n − n) + n
= n.
Therefore M(n)=n
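The result M(n) = n can be checked empirically; a Python sketch of the recursive factorial, instrumented (the returned multiplication count is my addition) to count the basic operation:

```python
def factorial(n):
    """Recursive n! that also returns the number of multiplications performed."""
    if n == 0:
        return 1, 0          # 0! = 1, no multiplications: M(0) = 0
    f, m = factorial(n - 1)  # M(n-1) multiplications to obtain F(n-1)
    return f * n, m + 1      # one more multiplication: F(n-1) * n

f, m = factorial(5)
print(f, m)  # 120 5 -> M(n) = n
```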

EXAMPLE 2: Consider the educational workhorse of recursive algorithms: the Tower of Hanoi puzzle. We have n disks of different sizes that can slide onto any of three pegs: A (source), B (auxiliary), and C (destination). Initially, all the disks are on the first peg in order of size, the largest on the bottom and the smallest on top. The goal is to move all the disks to the third peg, using the second one as an auxiliary.

FIGURE 1.8 Recursive solution to the Tower of Hanoi puzzle.



ALGORITHM TOH(n, A, C, B)
//Move disks from source to destination recursively
//Input: n disks and 3 pegs A, B, and C
//Output: Disks moved to destination as in the source order.
if n = 1
Move the disk from A to C
else
Move the top n − 1 disks from A to B using C: TOH(n − 1, A, B, C)
Move the largest disk from A to C
Move the n − 1 disks from B to C using A: TOH(n − 1, B, C, A)

Algorithm analysis
The number of moves M(n) depends on n only, and we get the following recurrence
equation for it: M(n) = M(n − 1) + 1+ M(n − 1) for n > 1.
With the obvious initial condition M(1) = 1, we have the following recurrence relation for the
number of moves M(n):
M(n) = 2M(n − 1) + 1 for n > 1,
M(1) = 1.
We solve this recurrence by the same method of backward substitutions:
M(n) = 2M(n − 1) + 1 sub. M(n − 1) = 2M(n − 2) + 1
= 2[2M(n − 2) + 1] + 1
= 2²M(n − 2) + 2 + 1 sub. M(n − 2) = 2M(n − 3) + 1
= 2²[2M(n − 3) + 1] + 2 + 1
= 2³M(n − 3) + 2² + 2 + 1 sub. M(n − 3) = 2M(n − 4) + 1
= 2⁴M(n − 4) + 2³ + 2² + 2 + 1
...
= 2^i·M(n − i) + 2^(i−1) + 2^(i−2) + . . . + 2 + 1 = 2^i·M(n − i) + 2^i − 1.

Since the initial condition is specified for n = 1, which is achieved for i = n − 1,
M(n) = 2^(n−1)·M(n − (n − 1)) + 2^(n−1) − 1 = 2^(n−1)·M(1) + 2^(n−1) − 1 = 2^(n−1) + 2^(n−1) − 1 = 2ⁿ − 1.
Thus, we have an exponential-time algorithm.
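A Python sketch of the recursive solution, recording each move (the `moves` list is an instrumentation of mine) so the total can be compared with M(n) = 2ⁿ − 1:

```python
def toh(n, source, dest, aux, moves):
    """Move n disks from source to dest via aux, appending each move made."""
    if n == 1:
        moves.append((source, dest))
    else:
        toh(n - 1, source, aux, dest, moves)  # top n-1 disks: source -> aux
        moves.append((source, dest))          # largest disk: source -> dest
        toh(n - 1, aux, dest, source, moves)  # n-1 disks: aux -> dest

moves = []
toh(3, 'A', 'C', 'B', moves)
print(len(moves))  # 7 = 2^3 - 1
```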

EXAMPLE 3: An investigation of a recursive version of the algorithm which finds the number of
binary digits in the binary representation of a positive decimal integer.

ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1

Algorithm analysis
The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition is made by the algorithm to increase the returned value by 1. This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and no additions are made then, the initial condition is A(1) = 0.


The standard approach to solving such a recurrence is to solve it only for n = 2^k:
A(2^k) = A(2^(k−1)) + 1 for k > 0,
A(2^0) = 0.

Backward substitutions:
A(2^k) = A(2^(k−1)) + 1 substitute A(2^(k−1)) = A(2^(k−2)) + 1
= [A(2^(k−2)) + 1] + 1 = A(2^(k−2)) + 2 substitute A(2^(k−2)) = A(2^(k−3)) + 1
= [A(2^(k−3)) + 1] + 2 = A(2^(k−3)) + 3
...
= A(2^(k−i)) + i
...
= A(2^(k−k)) + k.
Thus, we end up with A(2^k) = A(1) + k = k, or, after returning to the original variable n = 2^k and hence k = log2 n,
A(n) = log2 n ∈ Θ(log n).
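A Python sketch of BinRec, using integer division for ⌊n/2⌋:

```python
def bin_rec(n):
    """Number of binary digits in n's binary representation (n >= 1)."""
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1  # floor(n/2) via integer division

print(bin_rec(13))  # 4, since 13 = 1101 in binary
```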

MATHEMATICAL ANALYSIS FOR NON-RECURSIVE ALGORITHMS


General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation (in the innermost loop).
3. Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case,
and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation either find a closed form formula
for the count or at the least, establish its order of growth.

EXAMPLE 1: Consider the problem of finding the value of the largest element in a list of n
numbers. Assume that the list is implemented as an array for simplicity.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ←A[0]
for i ←1 to n − 1 do
if A[i]>maxval
maxval←A[i]
return maxval

Algorithm analysis
 The measure of an input’s size here is the number of elements in the array, i.e., n.
 There are two operations in the for loop’s body:
o The comparison A[i]> maxval and
o The assignment maxval←A[i].

 The comparison operation is considered the algorithm's basic operation, because the comparison is executed on each repetition of the loop while the assignment is not.
 The number of comparisons will be the same for all arrays of size n; therefore, there is no
need to distinguish among the worst, average, and best cases here.
 Let C(n) denotes the number of times this comparison is executed. The algorithm makes
one comparison on each execution of the loop, which is repeated for each value of the
loop’s variable i within the bounds 1 and n − 1, inclusive. Therefore, the sum for C(n) is
calculated as follows:
C(n) = Σ(i=1..n−1) 1,
i.e., sum up 1 repeated n − 1 times:
C(n) = Σ(i=1..n−1) 1 = n − 1 ∈ Θ(n).
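A Python sketch of MaxElement, instrumented (the comparison counter is my addition) to confirm C(n) = n − 1:

```python
def max_element(a):
    """Return the largest value in a and the number of comparisons made."""
    maxval, comparisons = a[0], 0
    for i in range(1, len(a)):
        comparisons += 1      # basic operation: the comparison a[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

print(max_element([3, 9, 2, 8]))  # (9, 3) -> C(n) = n - 1 for n = 4
```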
=

EXAMPLE 2: Consider the element uniqueness problem: check whether all the Elements in a
given array of n elements are distinct.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct and “false” otherwise
for i ←0 to n − 2 do
for j ←i + 1 to n − 1 do
if A[i]= A[j ] return false
return true

Algorithm analysis
 The natural measure of the input’s size here is again n (the number of elements in the array).
 Since the innermost loop contains a single operation (the comparison of two elements), we
should consider it as the algorithm’s basic operation.
 The number of element comparisons depends not only on n but also on whether there are
equal elements in the array and, if there are, which array positions they occupy. We will
limit our investigation to the worst case only.
 One comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Accordingly, in the worst case,
Cworst(n) = Σ(i=0..n−2) Σ(j=i+1..n−1) 1 = Σ(i=0..n−2) (n − 1 − i) = (n − 1) + (n − 2) + . . . + 1 = n(n − 1)/2 ∈ Θ(n²).
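A Python sketch of UniqueElements, instrumented (the comparison counter is my addition); on an all-distinct array it performs the worst-case n(n − 1)/2 comparisons:

```python
def unique_elements(a):
    """Return (all_distinct, comparisons_made) for the given array."""
    n, comparisons = len(a), 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1          # basic operation: a[i] == a[j]
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons

print(unique_elements([1, 2, 3, 4]))  # (True, 6) -> n(n-1)/2 = 6 for n = 4
```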

EXAMPLE 3: Consider matrix multiplication. Given two n × n matrices A and B, find the time efficiency of the definition-based algorithm for computing their product C = AB. By definition, C is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:

where C[i, j ]= A[i, 0]B[0, j]+ . . . + A[i, k]B[k, j]+ . . . + A[i, n − 1]B[n − 1, j] for every pair of
indices 0 ≤ i, j ≤ n − 1.

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])


//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ←0 to n − 1 do
for j ←0 to n − 1 do
C[i, j ]←0.0
for k←0 to n − 1 do
C[i, j ]←C[i, j ]+ A[i, k] ∗ B[k, j]
return C
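The definition-based algorithm translates directly to Python (matrix_multiply is an illustrative name; matrices are plain lists of lists):

```python
def matrix_multiply(a, b):
    """Definition-based product C = AB of two n x n matrices.

    Three nested loops; the innermost performs one multiplication
    and one addition per iteration, as in the pseudocode.
    """
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
    return c
```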
Algorithm analysis
 An input’s size is matrix order n.
 There are two arithmetical operations (multiplication and addition) in the innermost loop.
But we consider multiplication as the basic operation.
 Let us set up a sum for the total number of multiplications M(n) executed by the algorithm.
Since this count depends only on the size of the input matrices, we do not have to
investigate the worst-case, average-case, and best-case efficiencies separately.
 There is just one multiplication executed on each repetition of the algorithm’s innermost
loop, which is governed by the variable k ranging from the lower bound 0 to the upper
bound n − 1.
 Therefore, the number of multiplications made for every pair of specific values of variables
i and j is

Σ (from k = 0 to n − 1) 1 = n

The total number of multiplications M(n) is expressed by the following triple sum:

M(n) = Σ (from i = 0 to n − 1) Σ (from j = 0 to n − 1) Σ (from k = 0 to n − 1) 1

Now, we can compute this sum by using formula (S1) and rule (R1):

M(n) = Σ (from i = 0 to n − 1) Σ (from j = 0 to n − 1) n = Σ (from i = 0 to n − 1) n^2 = n^3

To estimate the running time of the algorithm on a particular machine, we can take the product

T(n) ≈ cm M(n) = cm n^3,

where cm is the time of one multiplication on the machine in question. If we consider the time
spent on the additions too, then the total time on the machine is

T(n) ≈ cm n^3 + ca n^3 = (cm + ca) n^3.

EXAMPLE 4 The following algorithm finds the number of binary digits in the binary
representation of a positive decimal integer.
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ←1
while n > 1 do
count ←count + 1
n ← ⌊n/2⌋
return count
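A Python rendering of the same loop (binary_digit_count is my own name for it):

```python
def binary_digit_count(n):
    """Number of binary digits in the representation of a positive integer n.

    The value of n is halved (integer division) on each repetition,
    so the loop runs about log2(n) times.
    """
    count = 1
    while n > 1:
        count += 1
        n //= 2
    return count
```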
Algorithm analysis
 An input’s size is n.
 The loop variable takes on only a few values between its lower and upper limits.
 Since the value of n is about halved on each repetition of the loop, the answer should be
about log2 n.
 The exact formula for the number of times the comparison n > 1 is executed is ⌊log2 n⌋ + 1.
Divide and Conquer

Divide and conquer is a problem-solving technique used to solve problems by dividing the
main problem into subproblems, solving them individually and then merging their solutions
to obtain the solution of the original problem.

The basic idea is to recursively divide the problem into smaller subproblems until they
become simple enough to be solved directly. Once the solutions to the subproblems are
obtained, they are combined to produce the solution to the overall problem.

The divide and conquer strategy involves three steps:

1. Divide: Break down the given problem into smaller subproblems, each of which represents
a part of the overall problem. The goal is to keep dividing until no further division is
possible.
2. Conquer: Solve each of the smaller subproblems individually. If a subproblem is small
enough, solve it directly; otherwise, solve the larger subproblems recursively, treating
the subproblems as independent problems.
3. Combine: Recursively combine the solutions of the subproblems to formulate the solution
of the original problem by merging the solutions of the divided subproblems.
The general (control abstraction) divide and conquer algorithm:

Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, . . . , Pk, k ≥ 1;
        apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), . . . , DAndC(Pk));
    }
}
Recurrence relation for Divide and Conquer:

If the size of problem P is n and the sizes of the k subproblems are n1, n2, . . . , nk,
then the computing time of divide and conquer is described by the recurrence relation

T(n) = g(n)                                     for small n
T(n) = T(n1) + T(n2) + . . . + T(nk) + f(n)     otherwise

where
 T(n) is the time for DAndC on any input of size n,
 g(n) is the time to compute the answer directly for small inputs, and
 f(n) is the time for dividing the problem P and combining the solutions of the
subproblems.

The complexity of many divide and conquer algorithms is given by recurrences of the form

T(n) = T(1)               n = 1
T(n) = a T(n/b) + f(n)    n > 1

where a and b are known constants. We assume that T(1) is known and that n is a power of b.
Here f(n) is a function of the time spent on dividing the problem into smaller ones and on
combining their solutions.
The recurrence relation can be solved by:
1) Substitution method
2) Master's theorem

In the substitution method, we repeatedly make substitutions for each occurrence of the
function T on the right-hand side until all such occurrences disappear. This simplifies the
analysis of many divide and conquer algorithms of the form

T(n) = a T(n/b) + f(n), a ≥ 1, b > 1.
Example: Solve the following recurrence relation by the substitution method:
T(n) = 2T(n/2) + n, T(1) = 2.

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
     . . .
     = 2^k T(n/2^k) + kn

The maximum value of k is reached when n = 2^k, i.e., k = log2 n. Then

T(n) = nT(1) + n log2 n = n log2 n + 2n ∈ O(n log n).

Here a = 2, b = 2, f(n) = n.

Example: Solve by the substitution method: T(n) = T(n/2) + c, T(1) = 1, where c is a
simple constant.

T(n) = T(n/2) + c
     = T(n/4) + 2c
     = T(n/8) + 3c
     . . .
     = T(n/2^k) + kc

With n = 2^k, i.e., k = log2 n,

T(n) = T(1) + c log2 n ∈ O(log n).

Here a = 1, b = 2, f(n) = c.
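A quick numeric sanity check of the first example (the helper t is mine, not from the notes): evaluating the recurrence directly should agree with the closed form n log2 n + 2n for powers of 2.

```python
def t(n):
    """Evaluate T(n) = 2*T(n/2) + n with T(1) = 2, for n a power of 2."""
    if n == 1:
        return 2
    return 2 * t(n // 2) + n

# Closed form obtained by substitution: T(n) = n*log2(n) + 2n.
for k in range(6):
    n = 2 ** k
    assert t(n) == n * k + 2 * n   # here k = log2(n)
```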

Binary Search

Binary search applies the divide and conquer technique to searching: the key is compared
with the middle element of a sorted array, and at each step half of the remaining elements
are discarded. The precondition is that the array should be sorted.

Iterative version:

Algorithm BinSearch(a, n, key)
{
    start := 0; end := n − 1;
    while (start ≤ end) do
    {
        mid := ⌊(start + end)/2⌋;
        if (key = a[mid]) then return mid;
        else if (key < a[mid]) then end := mid − 1;
        else start := mid + 1;
    }
    return −1;
}

Example: when searching a sorted array for a key, each comparison with a[mid] either finds
the key or halves the search interval (end := mid − 1 when key < a[mid], start := mid + 1
when key > a[mid]); the search stops when the key is found or when start crosses end.
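The iterative pseudocode carries over to Python almost verbatim (binary_search is an assumed name; Python lists are 0-indexed):

```python
def binary_search(a, key):
    """Iterative binary search on a sorted list; returns an index or -1."""
    start, end = 0, len(a) - 1
    while start <= end:
        mid = (start + end) // 2
        if key == a[mid]:
            return mid
        elif key < a[mid]:
            end = mid - 1      # discard the right half
        else:
            start = mid + 1    # discard the left half
    return -1
```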
Recursive version: given an array a[i..l] of elements sorted in nondecreasing order,
determine whether x is present. At each step the algorithm picks the middle element: if it
is equal to x the search ends; if x is smaller, only the left half is searched; otherwise
only the right half is searched.

Algorithm BinSrch(a, i, l, x)
// Searches a[i..l], sorted in ascending order, for x
{
    if (l = i) then   // if Small(P)
    {
        if (x = a[i]) then return i;
        else return 0;
    }
    else
    {   // reduce P into a smaller subproblem
        mid := ⌊(i + l)/2⌋;
        if (x = a[mid]) then return mid;
        else if (x < a[mid]) then return BinSrch(a, i, mid − 1, x);
        else return BinSrch(a, mid + 1, l, x);
    }
}
Algorithm analysis
 The basic operation is the comparison of the key with a[mid].
 For binary search, the recursive and the iterative versions make the same comparisons,
so their counts are the same.
 Successful search: best case Θ(1), average case Θ(log n), worst case Θ(log n).
 Unsuccessful search: Θ(log n) in every case.
 Space: the iterative version requires only constant extra space, while the recursive
version needs stack space proportional to the depth of recursion, O(log n), to maintain
the pending calls.
Merge Sort

Merge sort uses the divide and conquer technique: divide the given array into two halves,
sort the two halves recursively, and then merge the two sorted halves into a single sorted
array.

Algorithm MergeSort(low, high)
// Sorts the global array a[low..high]
{
    if (low < high) then
    {
        mid := ⌊(low + high)/2⌋;   // divide the array into two subarrays
        MergeSort(low, mid);        // sort the left half
        MergeSort(mid + 1, high);   // sort the right half
        Merge(low, mid, high);      // combine the sorted halves
    }
}

The array is repeatedly halved until subarrays of size one are reached; these are then
merged back, pairwise, into larger and larger sorted subarrays.

Algorithm Merge(low, mid, high)
// Merges the sorted subarrays a[low..mid] and a[mid+1..high] using an auxiliary array b
{
    i := low; j := mid + 1; k := low;
    while (i ≤ mid and j ≤ high) do
    {
        if (a[i] ≤ a[j]) then { b[k] := a[i]; i := i + 1; }
        else { b[k] := a[j]; j := j + 1; }
        k := k + 1;
    }
    while (i ≤ mid) do { b[k] := a[i]; i := i + 1; k := k + 1; }
    while (j ≤ high) do { b[k] := a[j]; j := j + 1; k := k + 1; }
    copy b[low..high] back into a[low..high];
}

While merging, the smaller of the two current elements is copied first, so the merged
subarray remains sorted.

The recurrence relation for merge sort is

T(n) = 2T(n/2) + cn, T(1) = c,

which solves to T(n) ∈ O(n log n).
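A compact Python sketch of merge sort (this version returns a new sorted list rather than sorting a global array in place, which is a deliberate simplification):

```python
def merge_sort(a):
    """Return a sorted copy of the list a using top-down merge sort."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid])     # sort the left half
    right = merge_sort(a[mid:])    # sort the right half
    # merge the two sorted halves into an auxiliary list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])        # copy any leftovers
    merged.extend(right[j:])
    return merged
```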
Quick Sort

The main reason for the slowness of Algorithms like SIS is that all comparisons and
exchanges between keys in a sequence w1, w2, . . . . , wn take place between adjacent
pairs. In this way it takes a relatively long time for a key that is badly out of place to work
its way into its proper position in the sorted sequence.

Hoare devised a very efficient way of implementing this idea in the early 1960s. It
improves on the O(n2) behavior of the SIS algorithm, with an expected performance of
O(n log n).

In essence, the quick sort algorithm partitions the original array by rearranging it into
two groups. The first group contains those elements less than some arbitrary chosen value
taken from the set, and the second group contains those elements greater than or equal to
the chosen value.

The chosen value is known as the pivot element. Once the array has been rearranged in
this way with respect to the pivot, the very same partitioning is recursively applied to each
of the two subsets. When all the subsets have been partitioned and rearranged, the original
array is sorted.

The function partition() makes use of two pointers ‘i’ and ‘j’ which are moved toward
each other in the following fashion:

 Repeatedly increase the pointer ‘i’ until a[i] >= pivot.

 Repeatedly decrease the pointer ‘j’ until a[j] <= pivot.


 If j > i, interchange a[j] with a[i]

 Repeat the steps 1, 2 and 3 till the ‘i’ pointer crosses the ‘j’ pointer. If ‘i’
pointer crosses ‘j’ pointer, the position for pivot is found and place pivot
element in ‘j’ pointer position.

The program uses a recursive function quicksort(). The algorithm of quick sort function
sorts all elements in an array ‘a’ between positions ‘low’ and ‘high’.

 It terminates when the condition low >= high is satisfied. This condition
will be satisfied only when the array is completely sorted.

 Here we choose the first element as the ‘pivot’. So, pivot = x[low]. Now it
calls the partition function to find the proper position j of the element x[low]
i.e. pivot. Then we will have two sub-arrays x[low], x[low+1], . . . .
. . . x[j-1] and x[j+1], x[j+2], . . .x[high].

 It calls itself recursively to sort the left sub-array x[low], x[low+1], . . . . .


. . x[j-1] between positions low and j-1 (where j is returned by the
partition function).

 It calls itself recursively to sort the right sub-array x[j+1], x[j+2], . . . . . .


. . . x[high] between positions j+1 and high.
Algorithm

QUICKSORT(low, high)
/* sorts the elements a(low), . . . , a(high) which reside in the global array A(1 :
n) into ascending order; A(n + 1) is considered to be defined and must be greater than all
elements in A(1 : n), i.e., A(n + 1) = +∞ */
{
if low < high then
{
j := PARTITION(a, low, high+1);
// J is the position of the partitioning element
QUICKSORT(low, j – 1);
QUICKSORT(j + 1 , high);
}
}

Algorithm PARTITION(a, m, p)
{
    v := a[m]; i := m; j := p; // a[m] is the partition element
    do
    {
        repeat i := i + 1 until a[i] ≥ v; // i moves left to right
        repeat j := j − 1 until a[j] ≤ v; // j moves right to left
        if (i < j) then INTERCHANGE(a, i, j);
    } while (i < j);
    a[m] := a[j]; a[j] := v; // the partition element belongs at position j
    return j;
}
Algorithm INTERCHANGE(a, i, j)
{
    p := a[i];
    a[i] := a[j];
    a[j] := p;
}
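Putting QUICKSORT, PARTITION and INTERCHANGE together, a runnable Python sketch might look like this (first element as pivot, Hoare-style pointer movement; the bounds checks replace the +∞ sentinel, and the names are mine):

```python
def partition(a, low, high):
    """Rearrange a[low..high] around pivot a[low]; return the pivot's final index."""
    pivot = a[low]
    i, j = low + 1, high
    while True:
        # move i right past elements smaller than the pivot
        while i <= high and a[i] < pivot:
            i += 1
        # move j left past elements larger than the pivot
        while a[j] > pivot:
            j -= 1
        if i >= j:                 # pointers have crossed
            break
        a[i], a[j] = a[j], a[i]    # interchange out-of-place pair
        i += 1
        j -= 1
    a[low], a[j] = a[j], a[low]    # pivot lands in its final position j
    return j

def quicksort(a, low=0, high=None):
    """Sort the list a in place between positions low and high."""
    if high is None:
        high = len(a) - 1
    if low < high:
        j = partition(a, low, high)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)
    return a
```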

Example

Select first element as the pivot element. Move ‘i’ pointer from left to right in search of an
element larger than pivot. Move the ‘j’ pointer from right to left in search of an element
smaller than pivot. If such elements are found, the elements are swapped. This process
continues till the ‘i’ pointer crosses the ‘j’ pointer. If ‘i’ pointer crosses ‘j’ pointer, the
position for pivot is found and interchange pivot and element at ‘j’ position.

Let us consider the following example with 13 elements to analyze quick sort:

38 08 16 06 79 57 24 56 02 58 04 70 45

Pass 1 (pivot = 38):
i stops at 79, j stops at 04: swap i & j → 38 08 16 06 04 57 24 56 02 58 79 70 45
i stops at 57, j stops at 02: swap i & j → 38 08 16 06 04 02 24 56 57 58 79 70 45
i stops at 56, j stops at 24: the pointers have crossed, so swap pivot & j →
(24 08 16 06 04 02) 38 (56 57 58 79 70 45)

Sorting the left sublist:
(24 08 16 06 04 02): pivot 24; the pointers cross at 02, swap pivot & j → (02 08 16 06 04) 24
(02 08 16 06 04): pivot 02 is already in place → 02 (08 16 06 04)
(08 16 06 04): pivot 08; i stops at 16, j stops at 04, swap i & j → 08 04 06 16; the
pointers cross at 06, swap pivot & j → (06 04) 08 (16)
(06 04): pivot 06; the pointers cross at 04, swap pivot & j → (04) 06
Left part sorted: 02 04 06 08 16 24

Sorting the right sublist:
(56 57 58 79 70 45): pivot 56; i stops at 57, j stops at 45, swap i & j → 56 45 58 79 70 57;
the pointers cross at 45, swap pivot & j → (45) 56 (58 79 70 57)
(58 79 70 57): pivot 58; i stops at 79, j stops at 57, swap i & j → 58 57 70 79; the
pointers cross at 57, swap pivot & j → (57) 58 (70 79)
(70 79): pivot 70 is already in place → 70 (79)
Right part sorted: 45 56 57 58 70 79

Sorted array: 02 04 06 08 16 24 38 45 56 57 58 70 79

Analysis of Quick Sort:

Like merge sort, quick sort is recursive, and hence its analysis requires solving a
recurrence formula. We will do the analysis for a quick sort, assuming a random pivot
(and no cut off for small files).

We will take T (0) = T (1) = 1, as in merge sort.

The running time of quick sort is equal to the running time of the two recursive calls plus
the linear time spent in the partition (The pivot selection takes only constant time). This
gives the basic quick sort relation:

T (n) = T (i) + T (n – i – 1) + C n - (1)

Where, i = |S1| is the number of elements in S1.

Worst Case Analysis

The pivot is the smallest element, all the time. Then i=0 and if we ignore T(0)=1, which is
insignificant, the recurrence is:

T (n) = T (n – 1) + C n n > 1 - (2)

Using equation (2) repeatedly, thus


T (n – 1) = T (n – 2) + C (n – 1)

T (n – 2) = T (n – 3) + C (n – 2)

------- -

T (2) = T (1) + C (2)

Adding up all these equations yields

T(n) = T(1) + C (2 + 3 + . . . + n) = O(n^2) - (3)

Best Case Analysis


In the best case, the pivot is in the middle. To simply the math, we assume that the two
sub-files are each exactly half the size of the original and although this gives a slight over
estimate, this is acceptable because we are only interested in a Big – oh answer.

T (n) = 2 T (n/2) + C n - (4)

Divide both sides by n:

T(n)/n = T(n/2)/(n/2) + C - (5)

Substitute n/2 for n in equation (5):

T(n/2)/(n/2) = T(n/4)/(n/4) + C - (6)

Substitute n/4 for n in equation (6):

T(n/4)/(n/4) = T(n/8)/(n/8) + C - (7)

--------
--------
Continuing in this manner, we obtain:

T(2)/2 = T(1)/1 + C - (8)

We add all the equations from (5) to (8) and note that there are log n of them:

T(n)/n = T(1)/1 + C log n - (9)
Which yields, T (n) = C n log n + n = O(n log n) - (10)

This is exactly the same analysis as merge sort, hence we get the same answer.

Average Case Analysis

The number of comparisons for first call on partition: Assume left_to_right moves over k
smaller element and thus k comparisons. So when right_to_left crosses left_to_right it has
made n-k+1 comparisons. So, first call on partition makes n+1 comparisons. The average
case complexity of quicksort is

T(n) = comparisons for the first call on partition
       + average cost of the two recursive calls
     = (n + 1) + (1/n) Σ (from i = 0 to n − 1) [T(i) + T(n − 1 − i)]
     = (n + 1) + (2/n) [T(0) + T(1) + T(2) + . . . + T(n − 1)]

nT(n) = n(n + 1) + 2 [T(0) + T(1) + T(2) + . . . + T(n − 2) + T(n − 1)]

(n − 1)T(n − 1) = (n − 1)n + 2 [T(0) + T(1) + T(2) + . . . + T(n − 2)]

Subtracting the two equations:

nT(n) − (n − 1)T(n − 1) = [n(n + 1) − (n − 1)n] + 2T(n − 1) = 2n + 2T(n − 1)

nT(n) = 2n + (n − 1)T(n − 1) + 2T(n − 1) = 2n + (n + 1)T(n − 1)

T(n) = 2 + (n + 1)T(n − 1)/n

The recurrence relation obtained is:

T(n)/(n + 1) = 2/(n + 1) + T(n − 1)/n

Using the method of subsititution:

T(n)/(n+1) = 2/(n+1) + T(n-1)/n


T(n-1)/n = 2/n + T(n-2)/(n-1)
T(n-2)/(n-1) = 2/(n-1) + T(n-3)/(n-2)
T(n-3)/(n-2) = 2/(n-2) + T(n-4)/(n-3)
. .
. .
T(3)/4 = 2/4 + T(2)/3
T(2)/3 = 2/3 + T(1)/2
T(1)/2 = 2/2 + T(0)
Adding both sides:
T(n)/(n+1) + [T(n-1)/n + T(n-2)/(n-1) + . . . + T(2)/3 + T(1)/2]
= [T(n-1)/n + T(n-2)/(n-1) + . . . + T(2)/3 + T(1)/2] + T(0) +
[2/(n+1) + 2/n + 2/(n-1) + . . . + 2/4 + 2/3]
Cancelling the common terms:
T(n)/(n+1) = 2[1/2 + 1/3 + 1/4 + . . . + 1/n + 1/(n+1)]
T(n) = 2(n+1) Σ (from k = 2 to n+1) 1/k
     ≈ 2(n+1)[log (n+1) – log 2]
     = 2n log (n+1) + 2 log (n+1) – 2n log 2 – 2 log 2
T(n) = O(n log n)
Strassen’s Matrix Multiplication:

The matrix multiplication algorithm due to Strassen (1969) is the most dramatic example of
the divide and conquer technique.

The usual way to multiply two n x n matrices A and B, yielding result matrix ‘C’ as
follows :

for i := 1 to n do
for j :=1 to n do
c[i, j] := 0;
for K: = 1 to n do
c[i, j] := c[i, j] + a[i, k] * b[k, j];
This algorithm requires n^3 scalar multiplications (i.e. multiplications of single
numbers) and n^3 scalar additions, so it seems we cannot improve upon it.

We apply divide and conquer to this problem. Let us consider each of A, B and C
partitioned into four (n/2) × (n/2) submatrices:

A = | A11 A12 |    B = | B11 B12 |    C = | C11 C12 |
    | A21 A22 |        | B21 B22 |        | C21 C22 |

Then Cij can be found by the usual matrix multiplication algorithm:

C11 = A11 . B11 + A12 . B21
C12 = A11 . B12 + A12 . B22
C21 = A21 . B11 + A22 . B21
C22 = A21 . B12 + A22 . B22

This leads to a divide–and–conquer algorithm, which performs nxn matrix multiplication


by partitioning the matrices into quarters and performing eight (n/2)x(n/2) matrix
multiplications and four (n/2)x(n/2) matrix additions.

T(1) = 1
T(n) = 8 T(n/2)

Which leads to T (n) = O (n^3), where n is a power of 2.

Strassens insight was to find an alternative method for calculating the C ij, requiring seven
(n/2) x (n/2) matrix multiplications and eighteen (n/2) x (n/2) matrix additions and
subtractions:

P = (A11 + A22) (B11 + B22)


Q= (A21 + A22) B11

R = A11 (B12 – B22)


S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 – A11) (B11 + B12)

V = (A12 – A22) (B21 + B22)

C11 = P + S – T + V

C12 = R + T

C21 = Q + S

C22 = P + R - Q + U.
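For the base 2 × 2 case, the seven Strassen products can be checked numerically against the direct product (strassen_2x2 is an illustrative helper; the variable names follow P, Q, R, S, T, U, V above):

```python
def strassen_2x2(a, b):
    """Multiply two 2x2 matrices using Strassen's seven products."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    p = (a11 + a22) * (b11 + b22)
    q = (a21 + a22) * b11
    r = a11 * (b12 - b22)
    s = a22 * (b21 - b11)
    t = (a11 + a12) * b22
    u = (a21 - a11) * (b11 + b12)
    v = (a12 - a22) * (b21 + b22)
    # combine the seven products into the four entries of C
    return [[p + s - t + v, r + t],
            [q + s, p + r - q + u]]
```

For A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]], this agrees with the direct product [[19, 22], [43, 50]].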

This method is used recursively to perform the seven (n/2) x (n/2) matrix multiplications,
then the recurrence equation for the number of scalar multiplications performed is:

T(1) = 1
T(n) = 7 T(n/2)

Solving this for the case n = 2^k is easy:

T(2^k) = 7 T(2^(k−1))
       = 7^2 T(2^(k−2))
       = ------
       = 7^i T(2^(k−i))

Put i = k:

       = 7^k T(1) = 7^k

That is, T(n) = 7^(log2 n) = n^(log2 7) = O(n^(log2 7)) = O(n^2.81)

So we conclude that Strassen's algorithm is asymptotically more efficient
than the standard algorithm. In practice, however, the overhead of managing the
many small matrices does not pay off until n reaches the hundreds.
Max-Min Problem
The Max-Min Problem in algorithm analysis is finding the maximum and minimum value in an array.
Solution
To find the maximum and minimum numbers in a given array numbers[] of size n, the following
algorithms can be used. First we present the naive method and then the
divide and conquer approach.

Naïve Method
Naïve method is a basic method to solve any problem. In this method, the maximum and minimum
numbers are found separately. To find the maximum and minimum numbers, the following
straightforward algorithm can be used.

Algorithm: Max-Min-Element(numbers[])

max := numbers[1]
min := numbers[1]
for i = 2 to n do
    if numbers[i] > max then max := numbers[i]
    if numbers[i] < min then min := numbers[i]
return (max, min)
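A Python sketch of the naive method (max_min_naive is an assumed name; 0-based indexing replaces the pseudocode's 1-based loop):

```python
def max_min_naive(numbers):
    """Find (max, min) with the straightforward scan: 2(n - 1) comparisons."""
    mx = mn = numbers[0]
    for x in numbers[1:]:
        if x > mx:
            mx = x
        if x < mn:
            mn = x
    return mx, mn
```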

Analysis
The number of comparisons in Naive method is 2n - 2.
The number of comparisons can be reduced using the divide and conquer approach. Following is
the technique.

Divide and Conquer Approach


In this approach, the array is divided into two halves. Then, using the recursive approach,
the maximum and minimum numbers in each half are found. Later, return the maximum of the
two maxima of each half and the minimum of the two minima of each half.

In this given problem, the number of elements in the array is y − x + 1, where y is greater
than or equal to x. Max-Min(x, y) will return the maximum and minimum values of the
array numbers[x..y].
Algorithm: Max-Min(x, y)
if y − x ≤ 1 then
    return (max(numbers[x], numbers[y]), min(numbers[x], numbers[y]))
else
    (max1, min1) := Max-Min(x, ⌊(x + y)/2⌋)
    (max2, min2) := Max-Min(⌊(x + y)/2⌋ + 1, y)
    return (max(max1, max2), min(min1, min2))
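A Python sketch of the divide and conquer version (max_min is an assumed name; x and y are inclusive 0-based indices):

```python
def max_min(numbers, x, y):
    """Divide-and-conquer max/min on numbers[x..y] (inclusive indices)."""
    if y - x <= 1:   # one or two elements: solve directly
        return max(numbers[x], numbers[y]), min(numbers[x], numbers[y])
    mid = (x + y) // 2
    max1, min1 = max_min(numbers, x, mid)        # left half
    max2, min2 = max_min(numbers, mid + 1, y)    # right half
    return max(max1, max2), min(min1, min2)      # combine
```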

Analysis

Let T(n) be the number of comparisons made by Max-Min(x, y), where n = y − x + 1 is the
number of elements. For n a power of 2, the recurrence is

T(n) = 1 for n = 2
T(n) = 2T(n/2) + 2 for n > 2

which solves to T(n) = 3n/2 − 2. Thus the divide and conquer approach makes 3n/2 − 2
comparisons, compared with the 2n − 2 comparisons of the naive method.