Dynamic Programming
Lecture:17 DEBOLINA PAN
Date: 01.10.13 I.D-110811026
1. Dynamic Programming
2. Comparison with divide and conquer
3. Where used?
4. Matrix Multiplication
5. Algorithm to multiply 2 matrices
6. # scalar multiplications used in matrix multiplication
7. Costs & parenthesization
8. Matrix-chain multiplication problem
9. Counting the # parenthesizations
10. Brute force method
11. Computing the optimal cost
12. Matrix-Chain-Order
13. Overlapping subproblems
14. Memoization
 Dynamic programming is a method for
solving complex problems by breaking them
down into simpler subproblems, where the
subproblems are dependent (they overlap).
 Programming is an expression of an algorithm.
In dynamic programming, the language used does
not refer to any computer language; rather, it
refers to a formal language.
 Dynamic programming, like the divide-and-conquer method,
solves problems by combining the solutions to subproblems.
 Divide-and-conquer algorithms partition the problem into
disjoint subproblems, solve the subproblems recursively, and
then combine their solutions to solve the original problem. In
contrast, dynamic programming applies when the subproblems
overlap, that is, when subproblems share subsubproblems.
 In that case, a divide-and-conquer algorithm does more work
than necessary, repeatedly solving the common subsubproblems.
A dynamic-programming algorithm solves each subsubproblem
just once and then saves its answer in a table, thereby avoiding
the work of recomputing the answer every time it solves each
subsubproblem.
 We typically apply dynamic programming to
optimization problems.
 Such problems can have many possible
solutions. Each solution has a value, and we
wish to find a solution with the optimal
(minimum or maximum) value. We call such a
solution an optimal solution to the problem,
as opposed to the optimal solution, since
there may be several solutions that achieve
the optimal value.
 Say there are two matrices, A and B.
 We can multiply two matrices only if they are
compatible.
 Condition for multiplication: # columns of A
= # rows of B.
 If the resultant matrix is C, then
A(m×n) × B(n×p) = C(m×p)
MATRIX-MULTIPLY(A, B)
  if A.columns ≠ B.rows
      error "incompatible dimensions"
  let C be a new A.rows × B.columns matrix
  for i = 1 to A.rows
      for j = 1 to B.columns
          c[i, j] = 0
          for k = 1 to A.columns
              c[i, j] = c[i, j] + a[i, k] · b[k, j]
  return C
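The pseudocode above translates almost line for line into Python. The sketch below is only illustrative (the function name matrix_multiply and the explicit multiplication counter are additions, not part of the slides); it multiplies two matrices given as lists of lists and counts the scalar multiplications performed, which is the cost measure used in the following slides.

def matrix_multiply(A, B):
    """Multiply A (m x n) by B (n x p); return (C, # scalar multiplications)."""
    m, n = len(A), len(A[0])
    if n != len(B):                    # columns of A must equal rows of B
        raise ValueError("incompatible dimensions")
    p = len(B[0])
    C = [[0] * p for _ in range(m)]
    mults = 0                          # count of scalar multiplications
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

# With m=2, n=3, p=4 we expect m*n*p = 24 scalar multiplications.
A = [[1, 2, 3], [4, 5, 6]]                       # 2 x 3
B = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]   # 3 x 4
C, mults = matrix_multiply(A, B)
print(mults)                                     # 24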
 In the above example of matrix multiplication,
if we consider
 m=2, n=3, p=4, the total number of scalar
multiplications used is 24, i.e. m×n×p.
 We shall express costs in terms of the
number of scalar multiplications.
 Example:
A1 -> 10×100
A2 -> 100×5
A3 -> 5×50
If (A1*A2)*A3 => # scalar multiplications:
(10·100·5) + (10·5·50) = 7500
If A1*(A2*A3) => # scalar multiplications:
(10·100·50) + (100·5·50) = 75000
=> The ordering of the parenthesization is important.
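As a quick check of the arithmetic, the cost of an ordering is the sum of p·q·r over the individual products it performs. A tiny hypothetical helper (not from the slides):

def chain_cost(*triples):
    """Sum the cost p*q*r of each individual matrix product in an ordering."""
    return sum(p * q * r for (p, q, r) in triples)

# (A1*A2)*A3: A1*A2 costs 10*100*5; the 10x5 result times A3 costs 10*5*50.
print(chain_cost((10, 100, 5), (10, 5, 50)))    # 7500
# A1*(A2*A3): A1 times the 100x50 result costs 10*100*50; A2*A3 costs 100*5*50.
print(chain_cost((10, 100, 50), (100, 5, 50)))  # 75000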
 The matrix-chain multiplication problem can be
stated as follows: given a chain
<A1, A2, …, An> of n matrices, where for
i = 1, 2, …, n, matrix Ai has dimension p(i-1) × p(i),
fully parenthesize the product A1 A2 … An in a
way that minimizes the number of scalar
multiplications.
 Our goal is only to determine an order for
multiplying matrices that has the lowest cost.
 P(n) is the number of alternative
parenthesizations of a sequence of n
matrices.
 When n=1, P(n)=1: there is only one matrix, so
only one way to parenthesize.
 n=2, P(n)=1: only one way to parenthesize.
 n=3, P(n)=2: (A1*A2)*A3 and A1*(A2*A3)
 n=4, P(n)=5:
 (((A1*A2)*A3)*A4)   (A1*((A2*A3)*A4))
 ((A1*A2)*(A3*A4))   (A1*(A2*(A3*A4)))
 ((A1*(A2*A3))*A4)
 Thus we obtain the recurrence:
P(n) = 1                              if n = 1
P(n) = ∑_{k=1}^{n-1} P(k)·P(n-k)      if n ≥ 2
 A fully parenthesized matrix product is the
product of two fully parenthesized matrix
subproducts, and the split between the two
subproducts may occur between the k-th and
(k+1)-th matrix for any k = 1, …, n-1.
 Rate of growth: P(n) = Ω(4^n / n^(3/2))
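The recurrence is easy to evaluate directly. A short sketch (the function name P is chosen to match the slides' notation; the memoization decorator is an addition) that computes the first few values, which are the Catalan numbers shifted by one:

from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of alternative parenthesizations of a chain of n matrices."""
    if n == 1:
        return 1
    # Split between the k-th and (k+1)-th matrix for each k = 1..n-1.
    return sum(P(k) * P(n - k) for k in range(1, n))

print([P(n) for n in range(1, 8)])   # [1, 1, 2, 5, 14, 42, 132]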
 Brute force is a general problem-solving technique that
consists of systematically enumerating all
possible candidates for the solution and
checking whether each candidate satisfies the
problem's statement.
 A lower bound on the number of candidates, and hence
on the running time of the brute-force method, is
Ω(4^n / n^(3/2)).
 This means the brute-force method is not a good
solution to this problem.
 At this point, we could easily write a recursive
algorithm based on the recurrence to compute the
minimum cost m[1, n] for multiplying A1, A2, …, An.
However, as we saw for the rod-cutting problem, this
recursive algorithm takes exponential time, which is
no better than the brute-force method of checking
each way of parenthesizing the product.
 Observe that we have relatively few distinct subproblems: one
subproblem for each choice of i and j satisfying 1 ≤ i ≤
j ≤ n, or C(n,2) + n = n(n-1)/2 + n = Θ(n²) in all. A recursive algorithm
may encounter each subproblem many times in
different branches of its recursion tree. This property
of overlapping subproblems is the second hallmark of
when dynamic programming applies.
 Instead of computing the solution to the recurrence
recursively, we compute the optimal cost by using a
tabular, bottom-up approach.
 We shall implement the tabular, bottom-up method
in the procedure MATRIX-CHAIN-ORDER, which
appears below. This procedure assumes that matrix
Ai has dimensions p(i-1) × p(i) for i = 1, 2, …, n. Its input is
a sequence p = <p0, p1, …, pn>, where p.length =
n+1. The procedure uses an auxiliary table
m[1…n, 1…n] for storing the m[i, j] costs and another
auxiliary table s[1…n, 1…n] that records which index
k achieved the optimal cost in computing m[i, j].
We shall use the table s to construct an optimal
solution.
 In order to implement the bottom-up approach, we
must determine which entries of the table we refer to
when computing m[i, j]. The cost m[i, j] of computing a
matrix-chain product of j-i+1 matrices depends only
on the costs of computing matrix-chain products of
fewer than j-i+1 matrices. That is, for k = i, i+1, …, j-1,
the matrix A(i…k) is a product of k-i+1 < j-i+1
matrices and the matrix A(k+1…j) is a product of j-k <
j-i+1 matrices. Thus, the algorithm should fill in the
table m in a manner that corresponds to solving the
parenthesization problem on matrix chains of
increasing length. For the subproblem of optimally
parenthesizing the chain Ai, A(i+1), …, Aj, we consider
the subproblem size to be the length j-i+1 of the
chain.
MATRIX-CHAIN-ORDER(p)
  n = p.length - 1
  let m[1…n, 1…n] and s[1…n-1, 2…n] be new tables
  for i = 1 to n
      m[i, i] = 0
  for l = 2 to n                    // l is the chain length
      for i = 1 to n - l + 1
          j = i + l - 1
          m[i, j] = ∞
          for k = i to j - 1
              q = m[i, k] + m[k+1, j] + p(i-1)·p(k)·p(j)
              if q < m[i, j]
                  m[i, j] = q
                  s[i, j] = k
  return m and s
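A runnable Python version of the same procedure, offered as a sketch (the function name and 1-indexed list layout are choices of this writeup, not the slides). It takes the dimension sequence p of length n+1 and returns the m and s tables:

import math

def matrix_chain_order(p):
    """Bottom-up DP; p has length n+1 and matrix Ai is p[i-1] x p[i]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min cost (1-indexed)
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split index k
    for l in range(2, n + 1):                   # l is the chain length
        for i in range(1, n - l + 2):           # i = 1 .. n-l+1
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# Dimensions from the earlier example: A1 is 10x100, A2 is 100x5, A3 is 5x50.
m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])   # 7500 2 -> split after A2, i.e. (A1*A2)*A3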
 The running time of this algorithm is O(n³); it is
in fact also Ω(n³), hence Θ(n³). The algorithm requires Θ(n²) space
to store the m and s tables. Thus, MATRIX-
CHAIN-ORDER is much more efficient than the
exponential-time method of enumerating all
possible parenthesizations and checking each
one.
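The slides note that the table s lets us construct an optimal solution. A minimal recursive sketch of that reconstruction, continuing the previous code and using its s table (the function name is a stand-in for the textbook's PRINT-OPTIMAL-PARENS idea):

def optimal_parens(s, i, j):
    """Return a string showing the optimal parenthesization of Ai..Aj."""
    if i == j:
        return "A%d" % i
    k = s[i][j]   # optimal split: (Ai..Ak)(A(k+1)..Aj)
    return "(" + optimal_parens(s, i, k) + optimal_parens(s, k + 1, j) + ")"

print(optimal_parens(s, 1, 3))   # ((A1A2)A3)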
 In dynamic programming,
the subproblems are
dependent.
 Dynamic-programming
algorithms typically take
advantage of overlapping
subproblems by solving
each subproblem once and
then storing the solution in
a table where it can be
looked up when needed,
using constant time per
lookup.
 Cost is reduced. For example, the two parenthesizations
(A1*A2) * (A3*(A4*A5))
(A1*A2) * ((A3*A4)*A5)
share the subproduct A1*A2: an overlapping
subproblem that needs to be computed only once.
 A memoized recursive algorithm maintains an
entry in a table for the solution to each
subproblem. Each table entry initially
contains a special value to indicate that the
entry has yet to be filled in. When the
subproblem is first encountered as the
recursive algorithm unfolds, its solution is
computed and then stored in the table. Each
subsequent time that we encounter this
subproblem, we simply look up the value
stored in the table and return it.
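A memoized top-down version of the matrix-chain recurrence, as a sketch under the same assumptions as the bottom-up code above (names are mine; absence of a key from the dictionary plays the role of the special "not yet filled in" value):

def memoized_matrix_chain(p):
    """Top-down matrix-chain cost with memoization; p as in matrix_chain_order."""
    n = len(p) - 1
    memo = {}                          # missing entry = not yet filled in
    def lookup(i, j):
        if (i, j) in memo:             # subsequent encounter: constant-time lookup
            return memo[(i, j)]
        if i == j:
            cost = 0                   # a single matrix needs no multiplication
        else:
            cost = min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))
        memo[(i, j)] = cost            # first encounter: compute and store
        return cost
    return lookup(1, n)

print(memoized_matrix_chain([10, 100, 5, 50]))   # 7500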