Daa Module 2 Notes Part 1


MODULE II

2 DYNAMIC PROGRAMMING
Dynamic programming is a computer programming technique in which an
algorithmic problem is first broken down into sub-problems, the results of
those sub-problems are saved, and the saved results are reused to build up
the overall solution, which usually involves finding a maximum or minimum
value for the algorithmic query.

Working of Dynamic Programming


The following are the steps that dynamic programming follows:
o It breaks down the complex problem into simpler sub-problems.

o It finds the optimal solution to these sub-problems.

o It stores the results of the sub-problems. This process of storing the
  results of sub-problems is known as memoization.

o It reuses the stored results so that the same sub-problem is not
  calculated more than once.

o Finally, it combines these results to calculate the result of the
  complex problem.

Problems that can be solved using dynamic programming


Various problems that can be solved using dynamic programming are:
o Computing the nth Fibonacci number
o Computing the binomial coefficient
o Warshall's Algorithm
o Floyd's Algorithm
o Optimal binary search trees
o Multistage Graph
o Matrix Chain Multiplication

Principle of Optimality
Dynamic programming makes use of the principle of optimality when finding
the solution to a given problem. The principle of optimality states that in
an optimal sequence of choices or decisions, each subsequence must also be
optimal.

For example, while constructing an optimal binary search tree we always
select as root the key k that yields the minimum cost; thus the
construction follows the principle of optimality.

Elements of dynamic programming
There are two important elements of dynamic programming

1) Overlapping sub problem


When the solutions to the same sub-problem are needed more than once to
solve the main problem, we say that the sub-problems overlap. For
overlapping sub-problems, solutions are stored in a table so they can be
looked up repeatedly instead of being recalculated.

The recursive program for the Fibonacci numbers has several overlapping
sub-problems, whereas binary search has no overlapping sub-problems.

Example
For example, when finding the nth Fibonacci number, the problem F(n) is
broken down into finding F(n-1) and F(n-2). F(n-1) is in turn broken down
into a sub-problem involving F(n-2). In this scenario, F(n-2) is reused,
and thus the Fibonacci computation can be said to exhibit the overlapping
sub-problem property.

The Fibonacci sequence is the sequence of numbers in which every next item
is the sum of the previous two items, and each number of the sequence is
called a Fibonacci number.
0, 1, 1, 2, 3, 5, 8, 13, 21, ... is the Fibonacci sequence.

The Fibonacci numbers are defined as follows:


F0 = 0
F1 = 1
Fn = F(n-1) + F(n-2), for n >= 2

FIB (n)
1. If (n < 2)
2. then return n
3. else return FIB (n - 1) + FIB (n - 2)

There are two ways to handle overlapping sub-problems:


a) Top down approach
b) Bottom up approach

a) Top down approach/Memoization Technique
The sub-problems are solved and their solutions are remembered in case they
are needed again in the future. These solutions are stored in a suitable
data structure so that they can be reused efficiently.

Memoization is equal to the sum of recursion and caching: recursion means
the function calls itself, while caching means storing the intermediate
results.

Example: the figure (omitted here) shows four levels of recursion for the
call fib(8).

[Figure: Recursive calls during computation of a Fibonacci number]

A single recursive call to fib(n) results in one recursive call to
fib(n - 1), two recursive calls to fib(n - 2), three recursive calls to
fib(n - 3), five recursive calls to fib(n - 4) and, in general, F(k+1)
recursive calls to fib(n - k). We can avoid this unneeded repetition by
writing down the results of recursive calls and looking them up again if we
need them later. This process is called memoization.

Algorithm with Memorization


MEMOFIB (n)
1 if (n < 2)
2 then return n
3 if (F[n] is undefined)
4 then F[n] ← MEMOFIB (n - 1) + MEMOFIB (n - 2)
5 return F[n]
This algorithm clearly takes only O(n) time to compute Fn.

b) Bottom up approach/Tabulation Technique
In the bottom-up method, once a solution to a problem is written in terms
of its sub-problems, the problem can be solved by computing the smaller
sub-problems first and then using their solutions to solve the larger
sub-problems. Unlike the top-down approach, the bottom-up approach removes
the recursion.

Example
Suppose we have an array with the values 0 and 1 at positions a[0] and
a[1], respectively.

Since the bottom-up approach starts from the lower values, the values at
a[0] and a[1] are added to find the value of a[2], which becomes 1.

The value of a[3] is calculated by adding a[1] and a[2], and it becomes 2.

The value of a[4] is calculated by adding a[2] and a[3], and it becomes 3.

The value of a[5] is calculated by adding a[4] and a[3], and it becomes 5.

Algorithm
The code for implementing the Fibonacci series using the bottom-up approach
is given below:
int fib(int n)
{
    int A[n + 1];   /* table of sub-problem results, for n >= 1 */
    int i;
    A[0] = 0, A[1] = 1;
    for (i = 2; i <= n; i++)
    {
        A[i] = A[i - 1] + A[i - 2];
    }
    return A[n];
}

2) Optimal sub structure


The optimal substructure property of a problem says that an optimal
solution to the problem can be obtained by combining optimal solutions to
its sub-problems. For example, if a node p lies on the shortest path from a
source node t to a destination node w, then the shortest path from t to w
is the combination of the shortest path from t to p and the shortest path
from p to w.

2.1 MATRIX CHAIN MULTIPLICATION
