Dynamic Programming in design and analysis .pptx
Dynamic Programming
• Dynamic Programming is an algorithm design technique
for solving problems defined by recurrences with
overlapping subproblems.
• Used for optimization problems
– A set of choices must be made to get an optimal solution
– Find a solution with the optimal value (minimum or
maximum)
– There may be many solutions that lead to an optimal value.
Dynamic Programming Algorithm
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution in a bottom-up or top-down fashion
4. Construct an optimal solution from the computed information.
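As a small illustration of computing subproblem values top-down or bottom-up (a minimal Python sketch using the Fibonacci recurrence, which is not an optimization problem but has the same overlapping-subproblem structure; the function names are made up for illustration):

from functools import lru_cache

# top-down: cache each subproblem so it is solved only once
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# bottom-up: solve subproblems in increasing order of size
def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_top_down(10), fib_bottom_up(10))   # both print 55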
Knapsack Problem using Dynamic Programming Approach
Given n items with
weights: w1, w2, …, wn
values: v1, v2, …, vn, and a knapsack of capacity W, find the
most valuable subset of the items that fits into the knapsack.
The problem asks:
“Which items should be placed in the knapsack so that the value or profit
obtained by placing the items in the knapsack is maximum?”
The knapsack problem has the following two variants:
•Fractional Knapsack Problem
•0/1 Knapsack Problem
0/1 Knapsack Problem:
•As the name suggests, items are indivisible here.
•We cannot take a fraction of any item.
•We have to either take an item completely or leave it completely.
•It is solved using the dynamic programming approach.
Step-01:
Draw a table, say ‘T’, with (n+1) rows and (W+1) columns.
Fill all the boxes of the 0th row and the 0th column with zeroes as shown.
Step-02: Start filling the table row wise, top to bottom, from left to right, using
T(i, j) = max { T(i-1, j) , value_i + T(i-1, j – weight_i) }
where T(i, j) is the maximum value of the selected items if we can take items 1 to i and we
have a weight restriction of j.
Step-03:
After filling the table completely, the value in the last box (the bottom-right entry)
represents the maximum possible value that can be put in the knapsack.
Step-04:
To identify the items that must be put in the knapsack to obtain the maximum
profit, consider the last column of the table and start scanning the entries from
bottom to top.
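Steps 01 to 04 can be summarised in a short routine (a minimal Python sketch; the function name and the traceback loop are illustrative, following the rule described in Step-04):

def knapsack_01(weights, values, W):
    # T[i][j] = best value using items 1..i with capacity j (Steps 01-02)
    n = len(weights)
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            T[i][j] = T[i - 1][j]                      # leave item i
            if weights[i - 1] <= j:                    # take item i only if it fits
                T[i][j] = max(T[i][j], values[i - 1] + T[i - 1][j - weights[i - 1]])
    # Step-04: walk back from T[n][W] to recover the chosen items
    chosen, j = [], W
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:                     # item i was taken
            chosen.append(i)
            j -= weights[i - 1]
    return T[n][W], sorted(chosen)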
Problem-
For the given set of items and knapsack capacity = 5 kg, find the optimal
solution for the 0/1 knapsack problem using the dynamic programming
approach.
Item Weight Value
1 2 3
2 3 4
3 4 5
4 5 6
Knapsack capacity (W) = 5 kg
Number of items (n) = 4
Step-01:
Draw a table, say ‘T’, with (n+1) = 4 + 1 = 5 rows and
(W+1) = 5 + 1 = 6 columns.
Fill all the boxes of the 0th row and the 0th column with 0.
Step-02:
Finding T(1,1)-
T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }
T(1,1) = max { T(0,1) , 3 + T(0,-1) }
T(1,1) = T(0,1) { Ignore T(0,-1) }
T(1,1) = 0
Finding T(1,2)-
T(1,2) = max { T(1-1 , 2) , 3 + T(1-1 , 2-2) }
T(1,2) = max { T(0,2) , 3 + T(0,0) }
T(1,2) = max {0 , 3+0}
T(1,2) = 3
Finding T(1,3)
T(1,3) = max { T(1-1 , 3) , 3 + T(1-1 , 3-2) }
T(1,3) = max { T(0,3) , 3 + T(0,1) }
T(1,3) = max {0 , 3+0}
T(1,3) = 3
Finding T(1,4)-
T(1,4) = max { T(1-1 , 4) , 3 + T(1-1 , 4-2) }
T(1,4) = max { T(0,4) , 3 + T(0,2) }
T(1,4) = max {0 , 3+0}
T(1,4) = 3
Finding T(1,5)
T(1,5) = max { T(1-1 , 5) , 3 + T(1-1 , 5-2) }
T(1,5) = max { T(0,5) , 3 + T(0,3) }
T(1,5) = max {0 , 3+0}
T(1,5) = 3
Finding T(2,1)-
T(2,1) = max { T(2-1 , 1) , 4 + T(2-1 , 1-3) }
T(2,1) = max { T(1,1) , 4 + T(1,-2) }
T(2,1) = T(1,1) { Ignore T(1,-2) }
T(2,1) = 0
Finding T(2,2)-
T(2,2) = max { T(2-1 , 2) , 4 + T(2-1 , 2-3) }
T(2,2) = max { T(1,2) , 4 + T(1,-1) }
T(2,2) = T(1,2) { Ignore T(1,-1) }
T(2,2) = 3
Finding T(2,3)-
T(2,3) = max { T(2-1 , 3) , 4 + T(2-1 , 3-3) }
T(2,3) = max { T(1,3) , 4 + T(1,0) }
T(2,3) = max { 3 , 4+0 }
T(2,3) = 4
Finding T(2,4)-
T(2,4) = max { T(2-1 , 4) , 4 + T(2-1 , 4-3) }
T(2,4) = max { T(1,4) , 4 + T(1,1) }
T(2,4) = max { 3 , 4+0 }
T(2,4) = 4
Finding T(2,5)-
T(2,5) = max { T(2-1 , 5) , 4 + T(2-1 , 5-3) }
T(2,5) = max { T(1,5) , 4 + T(1,2) }
T(2,5) = max { 3 , 4+3 }
T(2,5) = 7
Thus, the maximum possible value that can be put in the knapsack = 7.
Identifying the items that must be put in the knapsack-
Start from the bottom entry of the last column and scan upwards.
If an entry is encountered whose value is not the same as the value stored in the
entry immediately above it, the item of that row is included: mark the label of that
row, subtract that item's weight from the current column, and continue the scan
in the new column.
Following this, we mark the rows labelled “1” and “2”.
Thus, the items that must be put in the knapsack to obtain the maximum value
7 are Item-1 and Item-2.
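As a quick check with the illustrative knapsack_01 sketch given after Step-04: filling the remaining rows with the same recurrence gives T(3, 1..5) = 0, 3, 4, 5, 7 and T(4, 1..5) = 0, 3, 4, 5, 7, so the bottom-right entry T(4, 5) is again 7 and the traceback recovers items 1 and 2.

print(knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5))   # prints (7, [1, 2])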
All Pairs Shortest Path Problem (Floyd-Warshall Algorithm)
The Floyd-Warshall algorithm finds all-pairs shortest paths in a given
weighted graph. It produces a matrix that represents the minimum
distance from every node to every other node in the graph.
The matrix M is initialised as follows:
M[ i ][ j ] = 0 // for all diagonal elements, value = 0
If (i , j) is an edge in E, M[ i ][ j ] = weight(i, j)
// if there is a direct edge between the vertices, value = weight of the edge
Else
M[ i ][ j ] = ∞ // if there is no direct edge between the vertices
for k from 1 to |V|
for i from 1 to |V|
for j from 1 to |V|
if M[ i ][ j ] > M[ i ][ k ] + M[ k ][ j ]
M[ i ][ j ] = M[ i ][ k ] + M[ k ][ j ]
The asymptotic complexity of the Floyd-Warshall algorithm is O(n³), where n is
the number of nodes in the given graph.
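The pseudocode above maps almost line for line onto Python (a minimal sketch; the function name and the use of float('inf') for ∞ are illustrative choices):

INF = float('inf')

def floyd_warshall(M):
    # M = initial distance matrix: 0 on the diagonal, edge weight for a direct
    # edge, INF where there is no direct edge. Returns all-pairs shortest distances.
    n = len(M)
    D = [row[:] for row in M]             # work on a copy, keep the input unchanged
    for k in range(n):                    # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Example call (0-indexed vertices): floyd_warshall([[0, 3, INF], [INF, 0, 1], [2, INF, 0]])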
Step-01:
Remove all self-loops and parallel edges (keeping the edge with the lowest weight) from the
graph, if any.
Step-02:
Write the initial distance matrix representing the distance between every pair of vertices as follows:
For diagonal elements (representing self-loops), value = 0
For vertices having a direct edge between them, value = weight of that edge
For vertices having no direct edge between them, value = ∞
Step-03:
Update the matrix by considering each vertex in turn as an intermediate vertex, replacing
M[ i ][ j ] by M[ i ][ k ] + M[ k ][ j ] whenever the latter is smaller, as in the pseudocode above.
Total number of possible Binary Search Trees with n different keys, countBST(n) =
Catalan number Cn = (2n)! / ((n + 1)! * n!).
For n = 1, 2, 3, 4, … the values of the Catalan numbers are 1, 2, 5, 14, …, and so are the
numbers of Binary Search Trees.
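These counts can be checked with a short computation (an illustrative sketch using the formula above; the function name is made up):

from math import comb

def count_bst(n):
    # Catalan number Cn = (2n)! / ((n + 1)! * n!), equivalently C(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

print([count_bst(n) for n in range(1, 6)])   # [1, 2, 5, 14, 42]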
Optimal Binary Search Tree
A binary search tree is a special kind of binary tree. In a binary search tree, the
elements in the left and right sub-trees of each node are respectively smaller
and greater than the element of that node.
[Figure: an example binary search tree over the keys a1, a2, a3; in the tree costed below, a3 is at level 1, a2 at level 2, and a1 at level 3.]
The cost of the tree is obtained by summing, over all items, the probability of the
item multiplied by the level of the item in the tree.
a1 = 0.4 , a2 = 0.3 , a3 = 0.3
Cost of BST = 3*0.4 + 2*0.3 + 1*0.3 = 2.1
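For comparison, if a2 is placed at the root with a1 and a3 as its two children (levels 2, 1, 2 for a1, a2, a3), the cost is 2*0.4 + 1*0.3 + 2*0.3 = 1.7, which is lower than 2.1; this is why the different possible tree shapes have to be compared to find the optimal BST.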
Dynamic Programming Approach
The idea is that one of the keys a1, …, an, say ak, where 1 ≤ k ≤ n,
must be the root. By the binary search tree property, the left
subtree of ak contains a1, ..., ak-1 and the right subtree of ak
contains ak+1, ..., an.
So, the idea is to examine all candidate roots ak, for 1 ≤ k ≤ n,
and determine the optimal BSTs containing a1, ..., ak-1 and
containing ak+1, ..., an.
Algorithm for constructing the OBST
[Figure: the dynamic programming table C[i, j], with rows i = 1 … n+1 and columns j = 0 … n. The diagonal entries C[i, i-1] are 0, the entries C[i, i] just above them are p1, p2, …, pn, and the goal entry C[1, n] sits in the upper-right corner; the table is filled diagonal by diagonal.]
Example: key A B C D
probability 0.1 0.2 0.4 0.3
The left table is filled using the recurrence
C[i, j] = min over i ≤ k ≤ j of { C[i, k-1] + C[k+1, j] } + Σ ps (sum over s = i to j),
with C[i, i] = pi and C[i, i-1] = 0.
The right table saves the tree roots, i.e. the values of k that give the minimum.
Cost table C (rows i = 1 … 5, columns j = 0 … 4):
C[1, 0..4] = 0, .1, .4, 1.1, 1.7
C[2, 1..4] = 0, .2, .8, 1.4
C[3, 2..4] = 0, .4, 1.0
C[4, 3..4] = 0, .3
C[5, 4] = 0
Root table R:
R[1, 1..4] = 1, 2, 3, 3
R[2, 2..4] = 2, 3, 3
R[3, 3..4] = 3, 3
R[4, 4] = 4
[Figure: the resulting optimal BST has C at the root, B as C's left child, A as B's left child, and D as C's right child; its cost is C[1, 4] = 1.7.]
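The two tables can be produced directly from this recurrence (a minimal Python sketch; the function name and the 1-indexed probability list with an unused p[0] are assumptions made for illustration):

def optimal_bst(p):
    # p[1..n] = key probabilities (p[0] is unused padding)
    n = len(p) - 1
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][i-1] = 0 by initialisation
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]                            # single-key trees
        R[i][i] = i
    for length in range(1, n):                    # length = j - i
        for i in range(1, n - length + 1):
            j = i + length
            best, best_k = float('inf'), i
            for k in range(i, j + 1):             # try every key as the root
                val = C[i][k - 1] + C[k + 1][j]
                if val < best:
                    best, best_k = val, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C, R

# For p = [0, 0.1, 0.2, 0.4, 0.3] (keys A, B, C, D) this reproduces the tables
# above: C[1][4] is approximately 1.7 and R[1][4] = 3, i.e. key C at the root.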
• Example 1: key A B C D E
probability 0.25 0.2 0.05 0.2 0.3
• Example 2: Keys: 10, 11, 12
Frequencies: 2, 1, 6
ki 1 2 3
pi 0.3 0.1 0.6
Travelling Salesman Problem
Cost adjacency matrix of the graph:
   1  2  3  4
1  0 10 15 20
2  5  0  9 10
3  6 13  0 12
4  8  8  9  0
From this cost matrix, the following values are computed.
When S = Φ:
Cost(2, Φ, 1) = d(2, 1) = 5
Cost(3, Φ, 1) = d(3, 1) = 6
Cost(4, Φ, 1) = d(4, 1) = 8
When |S| = 1, using
Cost(i, S, 1) = min { d[i, j] + Cost(j, S – {j}, 1) }, minimised over j ∈ S:
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 10 + 8 = 18
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 13 + 5 = 18
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 12 + 8 = 20
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 8 + 5 = 13
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 9 + 6 = 15
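Carrying the same recurrence one step further (these values follow directly from the entries above):
Cost(2, {3, 4}, 1) = min { 9 + 20, 10 + 15 } = 25
Cost(3, {2, 4}, 1) = min { 13 + 18, 12 + 13 } = 25
Cost(4, {2, 3}, 1) = min { 8 + 15, 9 + 18 } = 23
Cost(1, {2, 3, 4}, 1) = min { 10 + 25, 15 + 25, 20 + 23 } = 35,
so the minimum-cost tour for this matrix is 1 – 2 – 4 – 3 – 1 with length 35.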
Algorithm for the Traveling Salesman Problem
Step 1: Let d[i, j] indicate the distance between cities i and j. The function
C(x, V – { x }) gives the cost of the path starting from city x. V is the set of
cities/vertices in the given graph. The aim of TSP is to minimize this cost
function.
Step 2: Assume that the graph contains n vertices V1, V2, ..., Vn. TSP
finds a tour covering all vertices exactly once and, at the same time,
tries to minimize the overall travelling distance.
Step 3: The mathematical formula to find the minimum distance is stated
below: C(i, V) = min { d[i, j] + C(j, V – { j }) }, where j ∈ V and i ∉ V. The TSP
problem possesses the principle of optimality, i.e. for d[V1, Vn] to be
minimum, any intermediate path (Vi, Vj) must be minimum.
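This recurrence can be implemented directly with a bitmask over the set V (an illustrative Python sketch; the function name, the 0-indexed cities, and the choice of city 0 as the fixed start are assumptions, not notation from the slides):

from functools import lru_cache

def tsp(d):
    # d[i][j] = distance from city i to city j; returns the minimum tour length
    n = len(d)
    FULL = (1 << n) - 2                       # bitmask of cities 1..n-1 still to visit

    @lru_cache(maxsize=None)
    def cost(i, remaining):
        # cheapest way to leave city i, visit every city in 'remaining', return to 0
        if remaining == 0:
            return d[i][0]
        return min(d[i][j] + cost(j, remaining & ~(1 << j))
                   for j in range(1, n) if remaining & (1 << j))

    return cost(0, FULL)

print(tsp([[0, 10, 15, 20],
           [5, 0, 9, 10],
           [6, 13, 0, 12],
           [8, 8, 9, 0]]))                    # 35 for the first cost matrix above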
    1   2   3   4   5
1   –  24  11  10   9
2   8   –   2   5  11
3  26  12   –   8   7
4  11  23  24   –   6
5   5   4   8  11   –
Problem: Solve the traveling salesman problem with the associated cost adjacency matrix
using dynamic programming.
Solution:
Let us start our tour from city 1.
Step 1: Initially, we find the distance between city 1 and each of the cities {2, 3, 4, 5}
without visiting any intermediate city.
Cost(x, S, 1) represents the cost of going from city x to city 1 with the cities in the set S
as intermediate cities.
Cost(2, Φ, 1) = d[2, 1] = 24
Cost(3, Φ, 1) = d[3, 1] = 11
Cost(4, Φ , 1) = d[4, 1] = 10
Cost(5, Φ , 1) = d[5, 1] = 9
Step 2: In this step, we find the minimum distance obtained by visiting one city as an
intermediate city.
Cost(2, {3}, 1) = d[2, 3] + Cost(3, Φ, 1) = 2 + 11 = 13
Cost(2, {4}, 1) = d[2, 4] + Cost(4, Φ, 1) = 5 + 10 = 15
Cost(2, {5}, 1) = d[2, 5] + Cost(5, Φ, 1) = 11 + 9 = 20
Cost(3, {2}, 1) = d[3, 2] + Cost(2, Φ, 1) = 12 + 24 = 36
Cost(3, {4}, 1) = d[3, 4] + Cost(4, Φ, 1) = 8 + 10 = 18
Cost(3, {5}, 1) = d[3, 5] + Cost(5, Φ, 1) = 7 + 9 = 16
Cost(4, {2}, 1) = d[4, 2] + Cost(2, Φ, 1) = 23 + 24 = 47
Cost(4, {3}, 1) = d[4, 3] + Cost(3, Φ, 1) = 24 + 11 = 35
Cost(4, {5}, 1) = d[4, 5] + Cost(5, Φ, 1) = 6 + 9 = 15
Cost(5, {2}, 1) = d[5, 2] + Cost(2, Φ, 1) = 4 + 24 = 28
Cost(5, {3}, 1) = d[5, 3] + Cost(3, Φ, 1) = 8 + 11 = 19
Cost(5, {4}, 1) = d[5, 4] + Cost(4, Φ, 1) = 11 + 10 = 21
Step 3: In this step, we find the minimum distance obtained by visiting two cities as
intermediate cities.
Cost(2, {3, 4}, 1) = min { d[2, 3] + Cost(3, {4}, 1), d[2, 4] + Cost(4, {3}, 1) }
= min { [2 + 18], [5 + 35] } = min{20, 40} = 20
Cost(2, {4, 5}, 1) = min { d[2, 4] + Cost(4, {5}, 1), d[2, 5] + Cost(5, {4}, 1) }
= min { [5 + 15], [11 + 21] } = min{20, 32} = 20
Cost(2, {3, 5}, 1) = min { d[2, 3] + Cost(3, {5}, 1), d[2, 5] + Cost(5, {3}, 1) }
= min { [2 + 16], [11 + 19] } = min{18, 30} = 18
Cost(3, {2, 4}, 1) = min { d[3, 2] + Cost(2, {4}, 1), d[3, 4] + Cost(4, {2}, 1) }
= min { [12 + 15], [8 + 47] } = min{27, 55} = 27
Cost(3, {4, 5}, 1) = min { d[3, 4] + Cost(4, {5}, 1), d[3, 5] + Cost(5, {4}, 1) }
= min { [8 + 15], [7 + 21] } = min{23, 28} = 23
Cost(3, {2, 5}, 1) = min { d[3, 2] + Cost(2, {5}, 1), d[3, 5] + Cost(5, {2}, 1) }
= min { [12 + 20], [7 + 28] } = min{32, 35} = 32
Cost(4, {2, 3}, 1) = min { d[4, 2] + Cost(2, {3}, 1), d[4, 3] + Cost(3, {2}, 1) }
= min { [23 + 13], [24 + 36] } = min{36, 60} = 36
Cost(4, {3, 5}, 1) = min { d[4, 3] + Cost(3, {5}, 1), d[4, 5] + Cost(5, {3}, 1) }
= min { [24 + 16], [6 + 19] } = min{40, 25} = 25
Cost(4, {2, 5}, 1) = min { d[4, 2] + Cost(2, {5}, 1), d[4, 5] + Cost(5, {2}, 1) }
= min { [23 + 20], [6 + 28] } = min{43, 34} = 34
Cost(5, {2, 3}, 1) = min { d[5, 2] + Cost(2, {3}, 1), d[5, 3] + Cost(3, {2}, 1) }
= min { [4 + 13], [8 + 36] } = min{17, 44} = 17
Cost(5, {3, 4}, 1) = min { d[5, 3] + Cost(3, {4}, 1), d[5, 4] + Cost(4, {3}, 1) }
= min { [8 + 18], [11 + 35] } = min{26, 46} = 26
Cost(5, {2, 4}, 1) = min { d[5, 2] + Cost(2, {4}, 1), d[5, 4] + Cost(4, {2}, 1) }
= min { [4 + 15], [11 + 47] } = min{19, 58} = 19
Step 4: In this step, we find the minimum distance obtained by visiting three cities as
intermediate cities.
Cost(2, {3, 4, 5}, 1) = min { d[2, 3] + Cost(3, {4, 5}, 1),
d[2, 4] + Cost(4, {3, 5}, 1),
d[2, 5] + Cost(5, {3, 4}, 1) }
= min { 2 + 23, 5 + 25, 11 + 26 }
= min{25, 30, 37} = 25
Cost(3, {2, 4, 5}, 1) = min { d[3, 2] + Cost(2, {4, 5}, 1),
d[3, 4] + Cost(4, {2, 5}, 1),
d[3, 5] + Cost(5, {2, 4}, 1) }
= min { 12 + 20, 8 + 34, 7 + 19 }
= min{32, 42, 26} = 26
Cost(4, {2, 3, 5}, 1) = min { d[4, 2] + Cost(2, {3, 5}, 1),
d[4, 3] + Cost(3, {2, 5}, 1),
d[4, 5] + Cost(5, {2, 3}, 1) }
= min { 23 + 18, 24 + 32, 6 + 17 }
= min{41, 56, 23} = 23
Cost(5, {2, 3, 4}, 1) = min { d[5, 2] + Cost(2, {3, 4}, 1),
d[5, 3] + Cost(3, {2, 4}, 1),
d[5, 4] + Cost(4, {2, 3}, 1) }
= min { 4 + 20, 8 + 27, 11 + 36 }
= min{24, 35, 47} = 24
Step 5: In this step, we find the minimum distance for the complete tour, with all four
remaining cities as intermediate cities.
Cost(1, {2, 3, 4, 5}, 1) = min { d[1, 2] + Cost(2, {3, 4, 5}, 1),
d[1, 3] + Cost(3, {2, 4, 5}, 1),
d[1, 4] + Cost(4, {2, 3, 5}, 1),
d[1, 5] + Cost(5, {2, 3, 4}, 1) }
= min { 24 + 25, 11 + 26, 10 + 23, 9 + 24 }
= min{49, 37, 33, 33} = 33
Thus, the minimum tour length is 33.
Trace the path:
Let us find the path that gives the distance of 33.
Cost(1, {2, 3, 4, 5}, 1) is minimum due to d[1, 4], so move from 1 to 4. Path = {1, 4}.
Cost(4, {2, 3, 5}, 1) is minimum due to d[4, 5], so move from 4 to 5. Path = {1, 4, 5}.
Cost(5, {2, 3}, 1) is minimum due to d[5, 2], so move from 5 to 2. Path = {1, 4, 5, 2}.
Cost(2, {3}, 1) is minimum due to d[2, 3], so move from 2 to 3. Path = {1, 4, 5, 2, 3}.
All cities are visited, so return to 1. Hence the optimum tour is
1 – 4 – 5 – 2 – 3 – 1.
Greedy Algorithms vs Dynamic Programming