As per the New Credit System Syllabus (2019 Course) of
Savitribai Phule Pune University w.e.f. academic year 2022-2023
Design and Analysis
of Algorithms
(Code : 410241)
Semester VII - Computer Engineering
Dr. Mahesh M. Goyani
TechKnowledge Publications
Design and Analysis of Algorithms (SPPU) : Table of Contents

P Problems vs. NP Problems
NP-Complete vs. NP-Hard
Chapter 3 : Greedy Algorithms (3-1 to 3-20)
3.1 Introduction
    3.1.1 Principle
    3.1.2 Control Abstraction
    3.1.3 Characteristics
    3.1.4 Applications of the Greedy Approach
3.2 Knapsack Problem
    3.2.1 0/1 Knapsack
        3.2.1(A) Working Mechanism
        3.2.1(B) Algorithm
        3.2.1(C) Complexity Analysis
        3.2.1(D) Examples
    3.2.2 Fractional Knapsack
        3.2.2(A) Working Mechanism
3.3 Scheduling Algorithms
    3.3.1 Job Scheduling Problem
        3.3.1(A) Working Mechanism
        3.3.1(B) Algorithm
        3.3.1(C) Complexity Analysis
        3.3.1(D) Examples
    3.3.2 Activity Selection Problem
        3.3.2(A) Working Mechanism
        3.3.2(B) Algorithm
        3.3.2(C) Complexity Analysis
        3.3.2(D) Examples
Chapter 4 : Dynamic Programming
4.1 Introduction
    4.1.1 General Strategy
    4.1.2 Control Abstraction
    4.1.3 Characteristics of Dynamic Programming
    4.1.4 Applications of Dynamic Programming
    4.1.5 Dynamic Programming vs. Greedy Algorithm
4.2 Principle of Optimality
4.3 Binomial Coefficients
    4.3.2 Algorithm
    4.3.3 Complexity Analysis
4.4 Optimal Binary Search Tree
    4.4.1 Working Mechanism
    4.4.2 Algorithm
    4.4.3 Complexity Analysis
    4.4.4 Examples
4.5 0/1 Knapsack Problem
    4.5.1 First Approach
    4.5.2 Second Approach
4.6 Chain Matrix Multiplication
    4.6.1 Working Mechanism
    4.6.2 Algorithm
    4.6.3 Complexity Analysis
    4.6.4 Examples
Chapter 5 : Backtracking (5-1 to 5-24)
5.1 Introduction
    5.1.1 Principle
    5.1.2 Control Abstraction
        5.1.2(A) Recursive Backtracking Method
        5.1.2(B) Iterative Backtracking Method
    5.1.3 Time Analysis
    5.1.4 Terminology
    5.1.5 Applications of Backtracking
5.2 N-Queen Problem
    5.2.1 The 4-Queen Problem
    5.2.2 The 8-Queen Problem
    5.2.3 Examples
5.3 Graph Coloring Problem
    5.3.1 Working Mechanism
    5.3.2 Algorithm
    5.3.3 Complexity Analysis
    5.3.4 Applications
    5.3.5 Bipartite Graph
    5.3.6 Examples
5.4 Sum of Subsets Problem
    5.4.1 Working Mechanism
    5.4.2 Algorithm
    5.4.3 Complexity Analysis
    5.4.4 Examples
Chapter 6 : Branch and Bound (6-1 to 6-42)
6.1 Introduction
    6.1.1 Principle
    6.1.2 Backtracking vs. Branch and Bound
    6.1.3 Dynamic Programming vs. Branch and Bound
    6.1.4 Applications of Branch and Bound
    6.1.5 Time Analysis
6.2 Control Abstraction
    6.2.1 General Approach
    6.2.2 Control Abstraction for FIFO Branch and Bound
    6.2.3 Control Abstraction for LIFO Branch and Bound
    6.2.4 Control Abstraction for Least Cost (LC) Branch and Bound
6.3 Bounding
6.4 Travelling Salesman Problem
    6.4.1 LCBB using Static State Space Tree
    6.4.2 LCBB using Dynamic State Space Tree
6.5 Knapsack Problem
    6.5.1 Working Mechanism
    6.5.2 Algorithm
    6.5.3 LCBB for Knapsack
    6.5.4 FIFO Branch and Bound for Knapsack
Chapter 7 : Amortized Analysis
7.1 Amortized Analysis
7.2 Aggregate Method
    7.2.1 Stack Operations
    7.2.2 Binary Counter
7.3 Accounting Method
    7.3.1 Stack Operations
    7.3.2 Binary Counter
7.4 Potential Method
    7.4.1 Stack Operations
    7.4.2 Binary Counter
8.1 Tractable and Non-Tractable Problems
8.2 Randomized Algorithms
    8.2.1 Randomized Quick Sort
    8.2.2 Randomized Min Cut
8.3 Approximate Algorithms
    8.3.1 Introduction
    8.3.2 Characterizing Approximation Algorithm
    8.3.3 Performance Ratios
    8.3.4 Vertex Cover Problem
    8.3.5 Travelling Salesman Problem
8.4 Embedded Algorithms
    8.4.1 Embedded System Scheduling (Power Optimized Scheduling Algorithm)
    8.4.2 Sorting Algorithm for Embedded Systems
Chapter 9 : Multithreaded and Distributed Algorithms
9.1 Multithreaded Algorithms
    9.1.1 Introduction
    9.1.2 Models of Communication
    9.1.4 Analyzing Multithreaded Algorithms
        9.1.4(A) Parallel Loops
        9.1.4(B) Race Conditions
    9.1.5 Problem-Solving using Multithreaded Algorithms
        9.1.5(A) Multithreaded Matrix Multiplication
        9.1.5(B) Multithreaded Merge Sort
9.2 Distributed Algorithms
    9.2.1 Introduction
    9.2.3 Distributed Minimum Spanning Tree
Chapter 10 : String Matching Algorithms

Greedy Algorithms

Syllabus : Greedy Strategy : Principle, control abstraction, time analysis of control abstraction, knapsack problem, scheduling algorithms - Job scheduling and activity selection problem.
3.1 Introduction
3.1.1 Principle
Q. What is the greedy algorithmic approach ?
• When the problem has many feasible solutions with different costs or benefits, finding the best solution is known as an optimization problem, and the best solution is known as the optimal solution.
• In a real-life scenario, there exist uncountable optimization problems, such as making a change, knapsack, shortest path, job sequencing, activity selection and so on.
• Like dynamic programming, greedy algorithms are also used to solve optimization problems.
• The beauty of greedy algorithms is that they are very simple. They are easy to understand and quick to build. Perhaps greedy algorithms are the least complex among all optimization techniques.
• The greedy algorithm derives the solution step by step, by looking at the information available at the current moment. It does not look at future prospects. Decisions are completely locally optimal. This method constructs the solution simply by looking at current benefits without exploring future possibilities, and hence it is known as greedy.
• The choice made under the greedy solution procedure is irrevocable, which means once we have selected the local best solution, it cannot be backtracked.
• Thus, a choice made at each step in the greedy method should be :
    • Feasible : The choice should satisfy the problem constraints.
    • Locally optimal : The best solution from all feasible solutions at the current stage should be selected.
    • Irrevocable : Once the choice is made, it cannot be altered, i.e. if a feasible solution is selected (rejected) in a step, it cannot be rejected (selected) in subsequent stages.
• Greedy algorithms are based on five pillars :
1. Candidate set : It is a pool of choices from which selections are made. Examples include a set of nodes, vertices, items, denominations etc.
2. Selection function : This function chooses the best optimal solution based on some heuristic, depending on whether it is a maximization or minimization problem.
3. Feasibility function : The candidate selected by the selection function from the candidate set is tested against the feasibility function. The feasibility function tells us whether or not to add a particular candidate to the solution set.
4. Objective function : The objective function assigns a benefit or cost to a partial solution. It decides whether a problem can be solved by maximizing cost/profit or by minimizing it. Certain problems are maximization problems whereas others are minimization problems. The knapsack is a maximization problem, and making a change is a minimization problem.
5. Solution function : It indicates whether a complete solution has been found. If a complete solution is built, then stop; otherwise continue selecting the next candidates from the candidate set until a solution is found or the candidate set is exhausted.
3.1.2 Control Abstraction
Q. Write control abstraction for the greedy approach.
The greedy algorithm derives the solution step by step, by looking at the information available at the current moment. It does not look at future prospects. Decisions are completely locally optimal. The choice made under the greedy solution procedure is irrevocable, which means once we have selected the local best solution, it cannot be backtracked.
Control abstraction for the greedy approach is described below :
Algorithm GREEDY_APPROACH(L, n)
// Description : Solve the given problem using a greedy approach
// Input : L : List of possible choices, n : the size of the solution
// Output : Set Solution containing the solution of the given problem
Solution ← { }
for i ← 1 to n do
    Choice ← Select(L)                        // Select locally optimal, irrevocable choice from L
    if (feasible(Choice ∪ Solution)) then     // Check if the selected choice is feasible or not
        Solution ← Choice ∪ Solution          // Add feasible choice to the partial solution
    end
end
return Solution
The greedy approach does not ensure the optimal solution, but in most cases, it provides a good approximation. For certain problems, greedy always produces an optimal solution (like Prim's and Kruskal's algorithms).
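The control abstraction above can be sketched as a small Python skeleton. This is a minimal sketch, not the book's code : the `select` and `feasible` callables, and the coin-change data used to exercise it, are illustrative assumptions.

```python
# Generic greedy skeleton mirroring the GREEDY_APPROACH control abstraction.
# `select` and `feasible` are problem-specific callables supplied by the caller.
def greedy_approach(choices, select, feasible):
    solution = []
    remaining = list(choices)
    while remaining:
        choice = select(remaining)             # locally optimal, irrevocable choice
        remaining.remove(choice)
        if feasible(solution + [choice]):      # check the problem constraints
            solution.append(choice)            # add feasible choice to partial solution
    return solution

# Hypothetical make-a-change instance: largest-denomination-first heuristic.
coins = [5, 5, 2, 2, 2, 1, 1, 1]
target = 8
result = greedy_approach(
    coins,
    select=max,                                # pick the largest available coin
    feasible=lambda sol: sum(sol) <= target,   # never exceed the target amount
)
print(result, sum(result))  # [5, 2, 1] 8
```

Once a coin is rejected (e.g. the second 5, which would overshoot the target), it is never reconsidered, which is exactly the irrevocability described above.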
3.1.3 Characteristics
Q. Enlist and explain the characteristics of greedy algorithms.
Q. Discuss the elements of a greedy algorithm.
(1) Greedy choice property
(2) Optimal substructure
Fig. 3.1.1 : Properties of the greedy method
1. Greedy choice property
• The global optimal solution is derived by making locally optimal choices, i.e. the choice which looks best at the moment.
• If the choice is feasible, then add it to the solution set and reduce the problem size by the same amount.
• The current choice may depend on previously made choices, but it does not depend on future choices.
• The greedy choice is irrevocable, so it does not reconsider any choice.
• The greedy choice property helps to derive the optimal solution by reducing the problem size by solving locally optimal subproblems.
2. Optimal substructure
• We say a given problem exhibits optimal substructure if the optimal solution to the given problem contains optimal solutions to its subproblems too.
• In a problem which possesses the optimal substructure, the best next choice always leads to an optimal solution.
Cases of failure
In many cases, the greedy algorithm fails to generate the best solution; sometimes it may even produce the worst solution.
Fig. 3.1.3 : Cost curve (global maximum, local maxima and local minima)
• Suppose we want to find the maximum number in the binary tree of Fig. 3.1.2. We start from the root. Using greedy to find the maximum element, at each level we compare the local maximum element with its children and take a decision based on that level only, without exploring future branches.
Fig. 3.1.2 : Binary tree
• At level 2, the algorithm selects the branch having the maximum value, so it selects the left branch, because the left branch has a value of 42, which is higher than the right branch value, i.e. 12.
• And hence the algorithm terminates with 73, instead of finding 91 as the best solution. The greedy approach fails to find the maximum element in this case.
• In the same way, in the same search space, suppose we are at some point p as shown in Fig. 3.1.3. Whether the problem is one of maximization or minimization, greedy cannot generate an optimal solution from such a point.
• Greedy can achieve only a local maximum or local minimum solution in the discussed case.
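The tree failure case can be demonstrated in a few lines of Python. This is a sketch : the 42, 12, 73 and 91 come from the discussion above, while the root and remaining node values are illustrative assumptions, not the book's exact figure.

```python
# Greedy descent vs. exhaustive search for the maximum in a binary tree.
# Shaped like Fig. 3.1.2: the left child (42) looks better than the right (12),
# but the global maximum 91 hides in the right subtree.
# Each node is (value, left_subtree, right_subtree); leaves have None children.
tree = (25, (42, (73, None, None), (55, None, None)),
            (12, (91, None, None), (30, None, None)))

def greedy_max(node):
    value, left, right = node
    best = value
    while left or right:
        # compare only the immediate children -- no look-ahead
        child = max((c for c in (left, right) if c), key=lambda n: n[0])
        best = max(best, child[0])
        _, left, right = child
    return best

def true_max(node):
    if node is None:
        return float('-inf')
    value, left, right = node
    return max(value, true_max(left), true_max(right))

print(greedy_max(tree))  # 73 -- greedy follows 42 and terminates at 73
print(true_max(tree))    # 91 -- the actual maximum, under the 12 branch
```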
3.1.4 Applications of the Greedy Approach
Q. List out a few applications of the greedy approach.
Greedy algorithms are used to find an optimal or near-optimal solution to many real-life problems. A few of them are listed below :
1. Make a change problem
2. Knapsack problem
3. Minimum spanning tree
4. Single source shortest path
5. Activity selection problem
6. Job sequencing problem
7. Huffman code generation
3.2 Knapsack Problem
The knapsack problem has two variants :
• Binary or 0/1 knapsack : An item cannot be broken down into parts.
• Fractional knapsack : An item can be divided into parts.
3.2.1 0/1 Knapsack
3.2.1(A) Working Mechanism
Problem
• Given a set of items, each having some weight and value or profit associated with it, the knapsack problem is to find the set of items such that the total weight of selected items is less than or equal to a given limit (i.e. the capacity of the knapsack) and the total value or profit earned is as large as possible.
• The knapsack is an optimization problem and it is useful in solving resource allocation problems.
• Let X = <x1, x2, ..., xn> be the set of items. Let W = <w1, w2, ..., wn> and V = <v1, v2, ..., vn> be the sets of weights and values associated with each item in X, respectively. Let M be the capacity of the knapsack.
• Select items one by one from the set of items X and fill the knapsack such that it maximizes the profit. The knapsack problem has two variants. The 0/1 knapsack does not allow the breaking of items : either add the entire item to the knapsack or reject it. It is also known as a binary knapsack. A fractional knapsack allows the breaking of items, so the profit will also be considered accordingly.
• The knapsack problem can be formulated as follows :
Maximize ∑(i = 1 to n) vi·xi subject to ∑(i = 1 to n) wi·xi ≤ M
xi ∈ {0, 1} for binary knapsack
xi ∈ [0, 1] for fractional knapsack
3.2.1(B) Algorithm
The algorithm for the binary knapsack is shown below :
Algorithm BINARY_KNAPSACK(W, V, M)
// W is an array of weights of items
// V is an array of values or profits of items
// M is the capacity of the knapsack
// Items are pre-sorted in decreasing order of pi = vi / wi ratio
S ← { }       // Set of selected items
SW ← 0        // Variable to keep track of the weight of selected items
P ← 0         // Variable to store earned profit
i ← 1
while i ≤ n do
    if (SW + w[i]) ≤ M then
        S ← S ∪ {Ii}          // Select item i
        SW ← SW + w[i]
        P ← P + v[i]
    end
    i ← i + 1
end
return S, P
Initially, SW = 0, SP = 0, S = { }; knapsack capacity M = 20.
• Iteration 1 :
SW = (SW + w1) = 0 + 15 = 15
SW ≤ M, so select I1
S = {I1}, SW = 15, SP = 0 + 25 = 25
• Iteration 2 :
SW = (SW + w2) = 15 + 18 = 33
SW > M, so reject I2
S = {I1}, SW = 15, SP = 25
• Iteration 3 :
SW = (SW + w3) = 15 + 20 = 35
SW > M, so reject I3
S = {I1}, SW = 15, SP = 25
All items are tested. The greedy algorithm selects items {I1}, and it gives a profit of 25 units.

Ex. 3.2.3 : Consider the following instance of a simple knapsack problem. Find the solution using the greedy method.
n = 8
V = (11, 21, 31, 33, 43, 53, 55, 65)
W = (1, 11, 21, 23, 33, 43, 45, 55)
M = 110
Soln. :
Let us arrange items in decreasing order of profit density. Assume that items are labelled X = (I1, I2, ..., I8), with profits V = (11, 21, 31, 33, 43, 53, 55, 65) and weights W = (1, 11, 21, 23, 33, 43, 45, 55).

Item (xi) | Value (vi) | Weight (wi) | pi = vi / wi
I1 | 11 | 1 | 11.00
I2 | 21 | 11 | 1.91
I3 | 31 | 21 | 1.48
I4 | 33 | 23 | 1.43
I5 | 43 | 33 | 1.30
I6 | 53 | 43 | 1.23
I7 | 55 | 45 | 1.22
I8 | 65 | 55 | 1.18
We shall select items one by one from the above table. We check the feasibility of each item : if the inclusion of the item does not cross the knapsack capacity, then add it; otherwise, skip the current item and process the next. We stop when the knapsack is full or all items are scanned.
Initialize : Weight of selected items, SW = 0
Profit of selected items, SP = 0
Set of selected items, S = { }
Here, knapsack capacity M = 110.
• Iteration 1 :
SW = (SW + w1) = 0 + 1 = 1
SW ≤ M, so select I1
S = {I1}, SW = 1, SP = 0 + 11 = 11
• Iteration 2 :
SW = (SW + w2) = 1 + 11 = 12
SW ≤ M, so select I2
S = {I1, I2}, SW = 12, SP = 11 + 21 = 32
• Iteration 3 :
SW = (SW + w3) = 12 + 21 = 33
SW ≤ M, so select I3
S = {I1, I2, I3}, SW = 33, SP = 32 + 31 = 63
• Iteration 4 :
SW = (SW + w4) = 33 + 23 = 56
SW ≤ M, so select I4
S = {I1, I2, I3, I4}, SW = 56, SP = 63 + 33 = 96
• Iteration 5 :
SW = (SW + w5) = 56 + 33 = 89
SW ≤ M, so select I5
S = {I1, I2, I3, I4, I5}, SW = 89, SP = 96 + 43 = 139
• Iteration 6 :
SW = (SW + w6) = 89 + 43 = 132
SW > M, so reject I6
S = {I1, I2, I3, I4, I5}, SW = 89, SP = 139
• Iteration 7 :
SW = (SW + w7) = 89 + 45 = 134
SW > M, so reject I7
S = {I1, I2, I3, I4, I5}, SW = 89, SP = 139
• Iteration 8 :
SW = (SW + w8) = 89 + 55 = 144
SW > M, so reject I8
S = {I1, I2, I3, I4, I5}, SW = 89, SP = 139
All items are tested. The greedy algorithm selects items {I1, I2, I3, I4, I5} and gives a profit of 139 units.
3.2.2 Fractional Knapsack
3.2.2(A) Working Mechanism
Q. Explain and write the algorithm for the knapsack problem using a greedy approach.
Problem
• Let X = <x1, x2, ..., xn> be the set of n items, and let W = <w1, w2, ..., wn> and V = <v1, v2, ..., vn> be the weights and values associated with the items. Let M be the capacity of the knapsack, i.e. the knapsack cannot hold more than M units of weight. The knapsack problem is the problem of filling the knapsack using items in X such that it maximizes the profit, and the collective weight of the added items does not cross the knapsack capacity.
• A binary knapsack does not guarantee an optimal solution, but when items are sorted in decreasing order of the vi / wi ratio, the fractional knapsack guarantees the optimal solution, and the knapsack is completely full when the algorithm halts.
The knapsack problem can be formulated as,
Maximize ∑(i = 1 to n) vi·xi subject to ∑(i = 1 to n) wi·xi ≤ M
xi ∈ {0, 1} for binary knapsack
xi ∈ [0, 1] for fractional knapsack
3.2.2(B) Algorithm
The algorithm for the fractional knapsack is described below :
Algorithm GREEDY_FRACTIONAL_KNAPSACK(X, V, W, M)
// Description : Solve the knapsack problem using a greedy approach
// Input : X : An array of n items
//         V : An array of profits associated with each item
//         W : An array of weights associated with each item
//         M : Capacity of the knapsack
// Output : SW : Weight of selected items, SP : Profit of selected items
// Items are presorted in decreasing order of pi = vi / wi ratio
S ← { }       // Set of selected items, initially empty
SW ← 0        // Weight of selected items
SP ← 0        // Profit of selected items
i ← 1
while i ≤ n do
    if (SW + w[i]) ≤ M then
        // Addition of the item does not cross the knapsack capacity, so add the whole item
        S ← S ∪ {Ii}
        SW ← SW + w[i]
        SP ← SP + v[i]
    else
        // Add the fraction of the item that fills the remaining capacity
        x[i] ← (M − SW) / w[i]
        SP ← SP + v[i] · x[i]
        SW ← SW + w[i] · x[i]        // SW becomes M
        break
    end
    i ← i + 1
end
return SP, SW
SW = (SW + w1) = 15 + 18 = 33 > M, so break down item I1.
• The remaining capacity of the knapsack is 5 units, so select only 5/18 of item I1.
Portion of item I1, x[1] = (M − SW) / w1 = (20 − 15) / 18 = 5/18
SP = SP + v1 × 5/18 = 25 + 24 × 5/18 = 25 + 6.67 = 31.67
SW = SW + w1 × 5/18 = 15 + 18 × 5/18 = 15 + 5 = 20
S = {I2, I1 × 5/18}
• The knapsack is full. The fractional greedy algorithm selects items {I2, I1 × 5/18}, and it gives a profit of 31.67 units.
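A minimal Python sketch of the fractional algorithm, run on the n = 3, M = 20 instance used above (the same data as Q.8 at the end of the chapter). The function name is an assumption.

```python
# Fractional knapsack: take whole items in decreasing v/w order, then a
# fraction of the first item that no longer fits, and stop.
def greedy_fractional_knapsack(values, weights, capacity):
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    sw = sp = 0.0
    for i in order:
        if sw + weights[i] <= capacity:
            sw += weights[i]               # whole item fits
            sp += values[i]
        else:
            frac = (capacity - sw) / weights[i]   # fill remaining capacity
            sp += values[i] * frac
            sw = capacity
            break
    return sp

# Instance: n = 3, M = 20, P = (24, 25, 18), W = (18, 15, 20)
P = [24, 25, 18]
W = [18, 15, 20]
print(round(greedy_fractional_knapsack(P, W, 20), 2))  # 31.67
```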
3.3 Scheduling Algorithms
• In this section, we will discuss two scheduling problems :
    • Job scheduling problem
    • Activity selection problem
• Scheduling algorithms are used in resource allocation.
3.3.1 Job Scheduling Problem
3.3.1(A) Working Mechanism
Q. State and solve the job sequencing problem using a greedy approach.
Q. What is a job scheduling problem ? Discuss the greedy approach to solve the job scheduling problem.
Problem : Schedule some n jobs out of a set of N jobs on a single processor so as to maximize profit as much as possible.
Explanation
• We have N jobs, each taking unit time for execution. Each job has some profit and a deadline associated with it. We can earn the profit only if the job is completed on or before its deadline; otherwise, we have to pay a penalty. Each job has a deadline di ≥ 1 and profit pi ≥ 0. At a time, only one job can be active on the processor.
• A job is feasible only if it can be finished on or before its deadline. A feasible solution is a subset of jobs such that each job in it can be completed on or before its deadline. An optimal solution is a feasible solution with maximum profit.
• A simple solution is to generate all subsets of the given set of jobs and find the feasible subset that maximizes the profit, but this approach runs in O(2^N) time.
• However, the greedy approach produces an optimal result in fairly less time. As each job takes the same amount of time, we can think of a schedule S consisting of a sequence of job slots 1, 2, 3, ..., N, where S(t) indicates the job scheduled in slot t. Slot t has a span of (t − 1) to t. S(t) = 0 implies no job is scheduled in slot t.
• Schedule S is an array of slots S(t), S(t) ∈ {1, 2, 3, ..., N} for each t ∈ {1, 2, 3, ..., N}.
• Schedule S is feasible if :
1. If S(t) = j, then job j must be finished by its deadline, i.e. t ≤ dj.
• Iteration 3 :
Job J3 is not feasible, because the first two slots are already occupied, and if we schedule J3 any time later, it cannot be finished before its deadline. So job J3 is discarded.
S = {J1, J2} and Profit SP = (100, 27)
• Iteration 4 :
Job J4 is not feasible, because the first two slots are already occupied, and if we schedule J4 any time later, it cannot be finished before its deadline. So job J4 is discarded.
S = {J1, J2} and Profit SP = (100, 27)
Thus, with the greedy approach, we will be able to schedule two jobs {J1, J2}, which gives a profit of 100 + 27 = 127 units.

Ex. 3.3.2 : Solve the following instance of the "job sequencing with deadlines" problem : n = 7, profits (p1, p2, ..., p7) = (3, 5, 20, 18, 1, 6, 30) and deadlines (d1, d2, ..., d7) = (1, 3, 4, 3, 2, 1, 2). Schedule the jobs in such a way as to get maximum profit.

Job : J1 J2 J3 J4 J5 J6 J7
pi : 3 5 20 18 1 6 30
di : 1 3 4 3 2 1 2
Soln. :
Sort all jobs in descending order of profit.
So, P = (30, 20, 18, 6, 5, 3, 1), J = (J7, J3, J4, J6, J2, J1, J5) and D = (2, 4, 3, 1, 3, 1, 2). We will select jobs one by one from the list of sorted jobs J and check whether each satisfies its deadline. If so, schedule the job in the latest free slot. If no such slot is free, skip the current job and process the next one.
Initially,
Solution set S = { }
Profit of scheduled jobs, SP = 0
Iteration 1
The deadline of job J7 is 2. Slot 2 (t = 1 to t = 2) is free, so schedule J7 in slot 2.
Solution set S = {J7}, and Profit SP = (30)
Iteration 2
The deadline for job J3 is 4. Slot 4 (t = 3 to t = 4) is free, so schedule it in slot 4.
Solution set S = {J7, J3} and Profit SP = (30, 20)
Iteration 3
The deadline for job J4 is 3. Slot 3 (t = 2 to t = 3) is free, so schedule it in slot 3.
Solution set S = {J7, J3, J4} and Profit SP = (30, 20, 18)
Iteration 4
The deadline for job J6 is 1. Slot 1 (t = 0 to t = 1) is free, so schedule it in slot 1.
Solution set S = {J6, J7, J4, J3} and Profit SP = (30, 20, 18, 6)
Now all four slots are occupied and no remaining job has a free slot on or before its deadline, so none of the remaining jobs can be scheduled.
Thus, with the greedy approach, we will be able to schedule four jobs (J6, J7, J4, J3), which gives a profit of 30 + 20 + 18 + 6 = 74 units.
Ex. 3.3.3 : Obtain the optimal job sequence for the following data of jobs :
(p1, p2, p3, p4, p5) = (25, 15, 12, 5, 1) and (d1, d2, d3, d4, d5) = (3, 2, 1, 3, 3).
Soln. :
Sort all jobs in descending order of profit.
So, P = (25, 15, 12, 5, 1), J = (J1, J2, J3, J4, J5) and D = (3, 2, 1, 3, 3). We will select jobs one by one from the list of sorted jobs and check whether each satisfies its deadline. If so, schedule the job in the latest free slot. If no such slot is free, skip the current job and process the next one.
Initially,
Solution set S = { }
Profit of scheduled jobs, SP = 0
Iteration 1
The deadline for job J1 is 3. Slot 3 (t = 2 to t = 3) is free, so schedule it in slot 3.
Solution set S = {J1}, and Profit SP = (25)
Iteration 2
The deadline for job J2 is 2. Slot 2 (t = 1 to t = 2) is free, so schedule it in slot 2.
Solution set S = {J2, J1} and Profit SP = (25, 15)
Iteration 3
The deadline for job J3 is 1. Slot 1 (t = 0 to t = 1) is free, so schedule it in slot 1.
Solution set S = {J3, J2, J1} and Profit SP = (25, 15, 12)
Iteration 4
Job J4 is not feasible, because the first three slots are already occupied, and if we schedule J4 any time later, it cannot be finished before its deadline 3. So job J4 is discarded.
Solution set S = {J3, J2, J1} and Profit SP = (25, 15, 12)
Iteration 5
Job J5 is not feasible, because the first three slots are already occupied, and if we schedule J5 any time later, it cannot be finished before its deadline 3. So job J5 is discarded.
Solution set S = {J3, J2, J1} and Profit SP = (25, 15, 12)
Thus, with the greedy approach, we will be able to schedule three jobs (J1, J2, J3), which gives a profit of 25 + 15 + 12 = 52 units.
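The job sequencing procedure can be sketched in Python and checked against Ex. 3.3.2. The function name is an assumption; the latest-free-slot placement follows the text.

```python
# Job sequencing with deadlines: consider jobs in decreasing profit order,
# placing each in the latest free slot on or before its deadline.
def job_sequencing(profits, deadlines):
    n = len(profits)
    jobs = sorted(range(n), key=lambda j: profits[j], reverse=True)
    slots = [None] * (max(deadlines) + 1)     # slots[1..max_d]; index 0 unused
    total = 0
    for j in jobs:
        for t in range(deadlines[j], 0, -1):  # latest free slot first
            if slots[t] is None:
                slots[t] = j + 1              # store 1-based job label
                total += profits[j]
                break
    return slots[1:], total

# Instance of Ex. 3.3.2
P = [3, 5, 20, 18, 1, 6, 30]
D = [1, 3, 4, 3, 2, 1, 2]
print(job_sequencing(P, D))  # ([6, 7, 4, 3], 74)
```

The printed schedule places J6, J7, J4, J3 in slots 1 through 4, matching the iterations worked above.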
3.3.2 Activity Selection Problem
Q. What is the activity selection problem ? How to solve it using the greedy method ? Write the algorithm for the same.
3.3.2(A) Working Mechanism
• Problem : "Schedule the maximum number of compatible activities that need exclusive access to resources like a processor, classroom, event venue etc."
• The span of an activity is defined by its start time and finishing time. Suppose we have N such activities and we want to schedule them.
• The algorithm aims to find an optimal schedule with the maximum number of activities to be carried out with limited resources. Suppose S = {a1, a2, ..., an} is the set of activities that we want to schedule.
• Scheduled activities must be compatible with each other. Let's say the start time of activity ai is si and its finish time is fi. Activities i and j are called compatible if and only if fi ≤ sj or fj ≤ si. In other words, two activities are compatible if their time durations do not overlap.
• Consider the timeline of activities below : sets of activities whose time spans do not overlap are compatible, whereas activities whose time spans overlap are not compatible.
• For given n activities, there may exist multiple such schedules. The activity selection problem is to find a schedule with the maximum number of compatible activities.
• The greedy approach sorts activities by finishing time in increasing order. It schedules the first activity in the sorted list, and subsequently selects every activity whose start time is not earlier than the finish time of the previously scheduled activity. It runs through all activities in the sorted order.
• Consider the activity set S = (A1, A2, ..., A9), with start times s = (1, 4, 5, 2, 6, 3, 10, 12, 11) and finish times f = (5, 7, 9, 10, 14, 15, 16, 17, ...). If we first select activity A4 for scheduling, we will end up with only two activities.
• Whereas the greedy algorithm schedules four activities, which is the largest possible set.
3.3.2(B) Algorithm
Algorithm GREEDY_ACTIVITY_SELECTION(A, s, f)
// A is a set of n activities sorted by finishing time
// s and f are the arrays of start and finish times
// S is the solution set
S ← {A[1]}       // By default, schedule the first activity from sorted list A
j ← 1
for i ← 2 to n do
    if s[i] ≥ f[j] then
        S ← S ∪ {A[i]}       // A[i] is compatible with the last scheduled activity
        j ← i
    end
end
return S
First of all, sort all activities in non-decreasing order of their finishing time. Let us now check the feasible set. By default, the first activity in the sorted list is scheduled. Each subsequent activity is compared with the most recently scheduled one : if its start time is not earlier than that activity's finish time, the two are compatible and it is scheduled; otherwise it is skipped. The following chart shows the timeline of all activities, from which the final schedule S is read off.
Ex. 3.3.5 : Given the following data, determine the optimal schedule for activity selection using a greedy algorithm.
A = <A1, A2, A3, A4, A5, A6, A7, A8>, S = <1, 2, 3, 4, 5, 6, 7, 8>,
F = <4, 3, 7, 5, 6, 6, 10, 9>
Soln. :
First of all, sort all activities in non-decreasing order of their finishing time.

Activity : A2 A1 A4 A5 A6 A3 A8 A7
si : 2 1 4 5 6 3 8 7
fi : 3 4 5 6 6 7 9 10

Let us now check the feasible set of activities. Activities i and j are compatible only if fi ≤ sj. By default, the first activity is scheduled, so S = <A2>.
• s1 < f2 (1 < 3), so A2 and A1 are not compatible. Skip A1 and check the next activity.
• s4 ≥ f2 (4 ≥ 3), so A2 and A4 are compatible. Schedule A4 : S = <A2, A4>.
• s5 ≥ f4 (5 ≥ 5), so A4 and A5 are compatible. Schedule A5 : S = <A2, A4, A5>.
• s6 ≥ f5 (6 ≥ 6), so A5 and A6 are compatible. Schedule A6 : S = <A2, A4, A5, A6>.
• s3 < f6 (3 < 6), so A6 and A3 are not compatible. Skip A3.
• s8 ≥ f6 (8 ≥ 6), so A6 and A8 are compatible. Schedule A8 : S = <A2, A4, A5, A6, A8>.
• s7 < f8 (7 < 9), so A8 and A7 are not compatible. Skip A7.
Hence the final schedule is S = <A2, A4, A5, A6, A8>.
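The steps of Ex. 3.3.5 can be sketched in Python; the function name is an assumption, and the logic is exactly the sort-by-finish-time selection described above.

```python
# Activity selection: sort by finish time, then keep every activity whose
# start time is not earlier than the finish time of the last scheduled one.
def activity_selection(names, start, finish):
    order = sorted(range(len(names)), key=lambda i: finish[i])
    schedule = [names[order[0]]]          # the earliest finisher is always taken
    last_finish = finish[order[0]]
    for i in order[1:]:
        if start[i] >= last_finish:       # compatible with the previous pick
            schedule.append(names[i])
            last_finish = finish[i]
    return schedule

# Instance of Ex. 3.3.5
A = ['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8']
S = [1, 2, 3, 4, 5, 6, 7, 8]
F = [4, 3, 7, 5, 6, 6, 10, 9]
print(activity_selection(A, S, F))  # ['A2', 'A4', 'A5', 'A6', 'A8']
```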
Q. 1 Discuss the basic principle of the greedy method.
Q. 2 Define : Optimization problem.
Q. 3 Write control abstraction for the greedy method.
Q. 4 Discuss characteristics of the greedy method.
Q. 5 How to solve the 0/1 knapsack problem using the greedy method ?
Q. 6 Find an optimal solution for the following knapsack problem : n = 4, M = 70, w = {10, 20, 30, 40}, v = {20, 30, 40, 50}.
Q. 7 How to solve the fractional knapsack problem using the greedy method ?
Q. 8 Consider the following instance of the fractional knapsack problem : n = 3, M = 20, (p1, p2, p3) = (24, 25, 18) and (w1, w2, w3) = (18, 15, 20). Find the feasible solutions.
Q. 9 Explain the job sequencing problem using the greedy method.
Q. 10 Explain the activity selection problem using the greedy method.
Dynamic Programming

Syllabus : Dynamic Programming : Principle, control abstraction, time analysis of control abstraction, binomial coefficients, OBST, 0/1 knapsack, chain matrix multiplication.
4.1 Introduction
• Dynamic programming was invented by U.S. mathematician Richard Bellman in 1950. Like greedy, dynamic programming is also used to solve optimization problems. But unlike the greedy approach, dynamic programming always ensures optimal / best solutions.
• A feasible solution is a solution that satisfies the constraints of the problem. When the problem has many feasible solutions with different costs, the solution with the minimum cost or maximum profit is called the optimal solution.
• The cost metric depends on the problem. For sorting problems, the cost metric may be a number of comparisons or a number of swaps. For matrix multiplication, the cost metric is a number of multiplications. For the knapsack problem, the cost metric is the total profit earned.
4.1.1 General Strategy
Q. What is dynamic programming ?
• Dynamic programming is a powerful design technique for optimization problems. Here the word 'programming' refers to the planning or construction of a solution; it does not have any resemblance to computer programming.
• Divide and conquer divides the problem into small sub-problems, which are solved recursively. Unlike divide and conquer, sub-problems in dynamic programming are not independent : sub-problems overlap with each other. Solutions to sub-problems are merged to get the solution of the larger problem.
• In divide and conquer, sub-problems are solved independently, and hence repeated sub-problems are solved multiple times. Dynamic programming instead stores the solution in a table, so when the same problem is encountered again, the answer is retrieved from the table. Dynamic programming solves each sub-problem exactly once and uses the solution of a smaller problem to build the solution of the larger problem.
• The method applies only to those problems which possess the property of the principle of optimality.
• We must keep track of partial solutions.
• Dynamic programming is more complex and time-consuming than the greedy approach.
Swe 4ign and Analysis of Algorithms (SPPU) rs
ee Ki
x Control Abstraction Dynamic Programming
42
tro ion
J bisa th contol abstraction for dynam programming
‘q,__Whatis the dynamic programming approach to solve the problem?
• Dynamic programming (DP) splits the large problem at every possible point. When the problem becomes sufficiently small, DP solves it directly.
• Dynamic programming is a bottom-up approach : it finds the solution to the smallest problem and constructs the solution of the larger problem from already solved smaller problems.
• To avoid recomputation of the same problem, DP saves the results of sub-problems in a table. When the same problem is encountered the next time, the answer is retrieved from the table by a lookup procedure.
• Control abstraction for dynamic programming is shown below :
Algorithm DYNAMIC_PROGRAMMING(P)
if solved(P) then
    return lookup(P)          // Answer is already in the table
else
    Ans ← SOLVE(P)
    store(P, Ans)
end

Function SOLVE(P)
if sufficiently_small(P) then
    return solution(P)        // Find solution for a sufficiently small problem
else
    Divide P into smaller sub-problems P1, P2, ..., Pn
    Ans1 ← DYNAMIC_PROGRAMMING(P1)
    Ans2 ← DYNAMIC_PROGRAMMING(P2)
    ...
    Ansn ← DYNAMIC_PROGRAMMING(Pn)
    // Combine solutions of smaller sub-problems in order to achieve the solution to the larger problem
    return combine(Ans1, Ans2, ..., Ansn)
end
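The control abstraction can be sketched in Python with a dictionary playing the role of the table. This is a minimal sketch, not the book's code; Fibonacci is used as an assumed stand-in for a problem with overlapping sub-problems.

```python
# Minimal sketch of the DYNAMIC_PROGRAMMING control abstraction: a table
# (dict) caches solved sub-problems so each is computed exactly once.
table = {}

def dynamic_programming(p, solve):
    if p in table:                 # solved(P): answer already stored
        return table[p]
    ans = solve(p)                 # SOLVE(P): split, recurse, combine
    table[p] = ans                 # store(P, Ans)
    return ans

# Example solver: Fibonacci numbers (classic overlapping sub-problems).
def fib(p):
    if p <= 1:                     # sufficiently small problem
        return p
    return (dynamic_programming(p - 1, fib) +
            dynamic_programming(p - 2, fib))   # combine sub-answers

print(dynamic_programming(30, fib))  # 832040, using only 31 table entries
```

Without the table, the same recursion would re-solve sub-problems an exponential number of times; with it, each of the 31 sub-problems is solved once.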
4.1.3 Characteristics of Dynamic Programming
Q. What are the characteristics of dynamic programming ?
Dynamic programming works on the following principles :
1. Characterize the structure of the optimal solution, i.e. build a mathematical model of the solution.
2. Recursively define the value of the optimal solution.
3. Using the bottom-up approach, compute the value of the optimal solution for each possible sub-problem.
4. Construct an optimal solution for the original problem using the information computed in the previous step.
4.1.4 Applications of Dynamic Programming
Dynamic programming is used for optimization problems. It is used to solve many real-life problems, such as :
1. Knapsack problem
2. Optimal binary search tree
3. Travelling salesman problem
4. Assembly line scheduling
5. All pair shortest path problem
6. Multi-stage graph problem
4.1.5 Dynamic Programming vs. Greedy Algorithm
Q. Give the difference between dynamic programming and the greedy method.

Sr. No. | Dynamic Programming | Greedy Approach
1. | It guarantees optimal solutions. | It does not guarantee an optimal solution.
2. | Sub-problems overlap. | Sub-problems do not overlap.
3. | It does more work. | It does little work.
4. | Considers the future choices. | Only considers the current choices.
5. | There is no specialized set of feasible choices. | Constructs the solution from a set of feasible choices.
6. | Selects the choice which is globally optimum. | Selects the choice which is locally optimum.
7. | Employs memoization. | There is no concept of memoization.
4.2 _ Principle of Optimality
'@. Whats the dynamic programming approaeti to’ solve the problern?: : -
Q. Define the principle of optimality. : 2
Q.__ Explain the principle of optimality.
+ Principle of optimali it
jtimality : “In an optimal sequence of decisions or choices, each subsequence ‘
optimal”,
© The principle of optimality i
Mes ue ete heart of dynamic programming, It states thatto find the optimal an
solution using dynamic pro te €ich subproblem also must be optimal. itis not possible © aie
gramming if the problem does not possess the principle of optimality
•	The shortest path problem satisfies the principle of optimality. If A – X1 – X2 – ... – B is the shortest path between nodes A and B, then any subpath Xi to Xj must be the shortest. If there exist multiple paths from Xi to Xj and the selected path is not minimum, then obviously the path from A to B cannot be the shortest.
•	For example : In Fig. 4.2.1(a), the shortest path from A to C is A – B – C. There exist two paths from B to C : one is B – C and the other is B – D – C. But B – C is the shortest path, so we should add that one in the solution.
•	The longest path problem, in contrast, does not satisfy the principle of optimality : a sub-path of the longest path need not itself be the longest.

(Fig. 4.2.1 : Illustration of shortest and longest path problems)
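The subpath argument can be checked by brute force on a small graph. The graph below is a hypothetical example chosen for illustration (not the graph of Fig. 4.2.1): every simple path is enumerated, and the shortest B-to-C subpath of the shortest A-to-C path is itself the shortest B-to-C path.

```python
def all_paths(g, u, v, seen=()):
    """Enumerate every simple path from u to v together with its cost."""
    if u == v:
        yield [u], 0
    for nxt, wt in g.get(u, {}).items():
        if nxt not in seen:
            for path, cost in all_paths(g, nxt, v, seen + (u,)):
                yield [u] + path, wt + cost

def shortest(g, u, v):
    """Brute-force shortest path (fine for tiny graphs)."""
    return min(all_paths(g, u, v), key=lambda pc: pc[1])

# Hypothetical directed graph: two ways to get from B to C.
graph = {'A': {'B': 1}, 'B': {'C': 2, 'D': 1}, 'D': {'C': 3}, 'C': {}}

print(shortest(graph, 'A', 'C'))   # (['A', 'B', 'C'], 3)
print(shortest(graph, 'B', 'C'))   # (['B', 'C'], 2) - the subpath is itself shortest
```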
4.3	Binomial Coefficients

4.3.1	Working Mechanism

•	Many sub-problems are called again and again, since the problem has the overlapping subproblems property. Re-computations of the same subproblems are avoided by storing their results in a temporary array C[i, j] in a bottom-up manner.
•	The optimal substructure for the binomial coefficient using dynamic programming is stated as,

	C[i, j] = 1					, if j = 0 or i = j
	C[i, j] = C[i – 1, j – 1] + C[i – 1, j]		, otherwise

•	In Table 4.3.1, the index i indicates the row and the index j indicates the column. Each interior entry is the sum of the entry above-left of it and the entry just above it.

Table 4.3.1

	j →	0	1	2	3	...	k
	i = 0	1
	i = 1	1	1
	i = 2	1	2	1
	i = 3	1	3	3	1
	...
	i = n – 1		...	C[n – 1, k – 1]	C[n – 1, k]
	i = n			...		C[n, k]

•	This tabular representation of binomial coefficients is also known as Pascal's triangle.
4.3.2	Algorithm

•	The algorithm to solve the binomial coefficient using dynamic programming is shown below :

Algorithm DP_BINOM_COEFF (n, k)
// n is the total number of items
// k is the number of items to be selected from n
for i ← 0 to n do
	for j ← 0 to min(i, k) do
		if j == 0 or j == i then
			C[i, j] ← 1
		else
			C[i, j] ← C[i – 1, j – 1] + C[i – 1, j]
		end
	end
end
return C[n][k]
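The algorithm above translates directly to Python. This is a sketch; the function name `dp_binom_coeff` mirrors the pseudocode but is otherwise an assumption.

```python
def dp_binom_coeff(n, k):
    """Fill Pascal's triangle bottom-up: C[i][j] = C[i-1][j-1] + C[i-1][j]."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                      # boundary of the triangle
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(dp_binom_coeff(5, 2))   # 10
print(dp_binom_coeff(6, 3))   # 20
```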
4.3.3	Complexity Analysis

•	The cost of the algorithm is the cost of filling out the table, and addition is the basic operation. The table splits into two parts : for i < k only part of the row (the upper triangle) is filled out, and for i ≥ k the row is filled out up to column k (the lower rectangle).
•	The number of additions is,

	T(n, k)	= sum for the upper triangle + sum for the lower rectangle
		= Σ (i = 1 to k) Σ (j = 1 to i – 1) 1 + Σ (i = k + 1 to n) Σ (j = 1 to k) 1
		= (1 + 2 + ... + (k – 1)) + k(n – k)
		= k(k – 1)/2 + k(n – k)
		= nk – k²/2 – k/2 ∈ O(nk)
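The operation count can be verified empirically by counting the additions performed while filling the table and comparing against the closed form k(k – 1)/2 + k(n – k). The helper below is illustrative, not part of the algorithm itself.

```python
def count_additions(n, k):
    """Count basic operations (additions) made by DP_BINOM_COEFF."""
    adds = 0
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j != 0 and j != i:   # only interior cells need an addition
                adds += 1
    return adds

for n, k in [(10, 3), (12, 5), (20, 20)]:
    closed_form = k * (k - 1) // 2 + k * (n - k)
    print(n, k, count_additions(n, k), closed_form)
```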
4.4	Optimal Binary Search Tree

4.4.1	Working Mechanism

•	Consider the sequence of keys K = < k1, k2, ..., kn > of distinct probability in sorted order such that k1 < k2 < ... < kn. Words between each pair of the keys lead to an unsuccessful search, so with n keys, the binary search tree contains n + 1 dummy keys di representing unsuccessful searches.
•	With n nodes, there exist a large number of distinct binary search trees, so an exhaustive search for an optimal binary search tree leads to a huge amount of time.
•	Each search leads to either a successful or an unsuccessful search, so the total probability is defined as,

	Σ (m = 1 to n) p_m + Σ (m = 0 to n) q_m = 1

•	The cost of searching in the tree is a combination of both successful and unsuccessful searches :

	cost	= Σ (m = 1 to n) (depth(k_m) + 1) · p_m + Σ (m = 0 to n) (depth(d_m) + 1) · q_m
		= 1 + Σ (m = 1 to n) depth(k_m) · p_m + Σ (m = 0 to n) depth(d_m) · q_m

•	We try to construct a tree which minimizes the total search cost. Such a tree is called an optimal binary search tree (OBST). An OBST does not claim minimum height. It is also not necessary that the parent of a sub tree has higher probability than its child.

Mathematical formulation

•	We formulate the OBST with the following observations : any subtree in an OBST contains keys in sorted order ki, ..., kj, where 1 ≤ i ≤ j ≤ n.
	e[2, 3]	= min { e[2, 1] + e[3, 3] + w(2, 3), e[2, 2] + e[4, 3] + w(2, 3) }
		= min { 0.10 + 0.25 + 0.35, 0.40 + 0.05 + 0.35 } = min { 0.70, 0.80 } = 0.70

	e[1, 3]	= min { e[1, 0] + e[2, 3] + w(1, 3), e[1, 1] + e[3, 3] + w(1, 3), e[1, 2] + e[4, 3] + w(1, 3) }
		= min { 0.15 + 0.70 + 1.0, 1.0 + 0.25 + 1.0, 1.45 + 0.05 + 1.0 } = min { 1.85, 2.25, 2.50 } = 1.85

So finally,
	e[1, 3] is minimum where r = 1
	e[2, 3] is minimum where r = 2
	e[1, 2] is minimum where r = 1
	e[3, 3] is minimum where r = 3
	e[2, 2] is minimum where r = 2
	e[1, 1] is minimum where r = 1

Thus, finally, we get the root table. Let us now construct the OBST for the given data.
	r[1, 3] = 1, so k1 will be at the root; k2 and k3 will be on the right side of k1.
	r[2, 3] = 2, so k2 will be the root of this sub-tree; k3 will be on the right of k2.
Ex. 4.4.2 :	Compute and construct an OBST for the given values using dynamic programming :
		n = 4, with p(1..4) = (3, 3, 1, 1) and q(0..4) = (2, 3, 1, 1, 1).
Soln. :
Given that,

	i	1	2	3	4
	p_i	3	3	1	1
	q_i (i = 0..4)	2	3	1	1	1

The recursive formula for OBST is,

	e[i, j] = q[i – 1]							, if j = i – 1
	e[i, j] = min over i ≤ r ≤ j of { e[i, r – 1] + e[r + 1, j] + w(i, j) }	, if i ≤ j

where w(i, j) = Σ (m = i to j) p_m + Σ (m = i – 1 to j) q_m.

Initially,
	w[1, 0] = q0 = 2
	w[2, 1] = q1 = 3
	w[3, 2] = q2 = 1
	w[4, 3] = q3 = 1
	w[5, 4] = q4 = 1

	w[1, 1] = p1 + q0 + q1 = 3 + 2 + 3 = 8
	w[2, 2] = p2 + q1 + q2 = 3 + 3 + 1 = 7
	w[3, 3] = p3 + q2 + q3 = 1 + 1 + 1 = 3
	w[4, 4] = p4 + q3 + q4 = 1 + 1 + 1 = 3

	w[1, 2] = (p1 + p2) + (q0 + q1 + q2) = 6 + 6 = 12
	w[2, 3] = (p2 + p3) + (q1 + q2 + q3) = 4 + 5 = 9
	w[3, 4] = (p3 + p4) + (q2 + q3 + q4) = 2 + 3 = 5

	w[1, 3] = (p1 + p2 + p3) + (q0 + q1 + q2 + q3) = 7 + 7 = 14
	w[2, 4] = (p2 + p3 + p4) + (q1 + q2 + q3 + q4) = 5 + 6 = 11

	w[1, 4] = (p1 + p2 + p3 + p4) + (q0 + q1 + q2 + q3 + q4) = 8 + 8 = 16

Let us compute e[i, j] :
	e[1, 0] = q0 = 2,  e[2, 1] = q1 = 3,  e[3, 2] = q2 = 1,  e[4, 3] = q3 = 1,  e[5, 4] = q4 = 1

	e[1, 1] = e[1, 0] + e[2, 1] + w(1, 1) = 2 + 3 + 8 = 13
	e[2, 2] = e[2, 1] + e[3, 2] + w(2, 2) = 3 + 1 + 7 = 11
	e[3, 3] = e[3, 2] + e[4, 3] + w(3, 3) = 1 + 1 + 3 = 5
	e[4, 4] = e[4, 3] + e[5, 4] + w(4, 4) = 1 + 1 + 3 = 5

	e[1, 2]	= min { e[1, 0] + e[2, 2] + w(1, 2), e[1, 1] + e[3, 2] + w(1, 2) }
		= min { 2 + 11 + 12, 13 + 1 + 12 } = min { 25, 26 } = 25		(r = 1)

	e[2, 3]	= min { e[2, 1] + e[3, 3] + w(2, 3), e[2, 2] + e[4, 3] + w(2, 3) }
		= min { 3 + 5 + 9, 11 + 1 + 9 } = min { 17, 21 } = 17			(r = 2)

	e[3, 4]	= min { e[3, 2] + e[4, 4] + w(3, 4), e[3, 3] + e[5, 4] + w(3, 4) }
		= min { 1 + 5 + 5, 5 + 1 + 5 } = min { 11, 11 } = 11			(r = 3 or r = 4)

	e[1, 3]	= min { e[1, 0] + e[2, 3] + w(1, 3), e[1, 1] + e[3, 3] + w(1, 3), e[1, 2] + e[4, 3] + w(1, 3) }
		= min { 2 + 17 + 14, 13 + 5 + 14, 25 + 1 + 14 } = min { 33, 32, 40 } = 32	(r = 2)

	e[2, 4]	= min { e[2, 1] + e[3, 4] + w(2, 4), e[2, 2] + e[4, 4] + w(2, 4), e[2, 3] + e[5, 4] + w(2, 4) }
		= min { 3 + 11 + 11, 11 + 5 + 11, 17 + 1 + 11 } = min { 25, 27, 29 } = 25	(r = 2)

	e[1, 4]	= min { e[1, 0] + e[2, 4] + w(1, 4), e[1, 1] + e[3, 4] + w(1, 4), e[1, 2] + e[4, 4] + w(1, 4), e[1, 3] + e[5, 4] + w(1, 4) }
		= min { 2 + 25 + 16, 13 + 11 + 16, 25 + 5 + 16, 32 + 1 + 16 } = min { 43, 40, 46, 49 } = 40	(r = 2)

So finally,
	e[1, 4] is minimum where r = 2
	e[1, 3] is minimum where r = 2
	e[2, 4] is minimum where r = 2
	e[1, 2] is minimum where r = 1
	e[2, 3] is minimum where r = 2
	e[3, 4] is minimum where r = 3 or r = 4
	e[1, 1], e[2, 2], e[3, 3], e[4, 4] are minimum where r = 1, 2, 3, 4 respectively

So we have the root table. Let us construct the OBST from it :
	r[1, 4] = 2, so k2 will be at the root; k1 is the left child of k2, and k3, k4 are on the right side of k2.
	r[3, 4] = 3, so k3 will be the root of this subtree; k4 would be on the right of k3.
So the tree would look like this : k2 at the root, k1 as its left child, k3 as its right child, and k4 as the right child of k3.
Ex. 4.4.3 :	Compute and construct an OBST for n = 5 using dynamic programming, given
		p(1..5) = (0.15, 0.10, 0.05, 0.10, 0.20) and q(0..5) = (0.05, 0.10, 0.05, 0.05, 0.05, 0.10).
Soln. :
The recursive formula for OBST is,

	e[i, j] = q[i – 1]							, if j = i – 1
	e[i, j] = min over i ≤ r ≤ j of { e[i, r – 1] + e[r + 1, j] + w(i, j) }	, if i ≤ j

where w(i, j) = Σ (m = i to j) p_m + Σ (m = i – 1 to j) q_m.

Let us first find the w matrix. Initially,
	w[1, 0] = q0 = 0.05
	w[2, 1] = q1 = 0.10
	w[3, 2] = q2 = 0.05
	w[4, 3] = q3 = 0.05
	w[5, 4] = q4 = 0.05
	w[6, 5] = q5 = 0.10

	w[1, 1] = p1 + q0 + q1 = 0.15 + 0.05 + 0.10 = 0.30
	w[2, 2] = p2 + q1 + q2 = 0.10 + 0.10 + 0.05 = 0.25
	w[3, 3] = p3 + q2 + q3 = 0.05 + 0.05 + 0.05 = 0.15
	w[4, 4] = p4 + q3 + q4 = 0.10 + 0.05 + 0.05 = 0.20
	w[5, 5] = p5 + q4 + q5 = 0.20 + 0.05 + 0.10 = 0.35

So now,
	w[1, 2] = (p1 + p2) + (q0 + q1 + q2) = 0.25 + 0.20 = 0.45
	w[2, 3] = (p2 + p3) + (q1 + q2 + q3) = 0.15 + 0.20 = 0.35
	w[3, 4] = (p3 + p4) + (q2 + q3 + q4) = 0.15 + 0.15 = 0.30
	w[4, 5] = (p4 + p5) + (q3 + q4 + q5) = 0.30 + 0.20 = 0.50

	w[1, 3] = (p1 + p2 + p3) + (q0 + q1 + q2 + q3) = 0.30 + 0.25 = 0.55
	w[2, 4] = (p2 + p3 + p4) + (q1 + q2 + q3 + q4) = 0.25 + 0.25 = 0.50
	w[3, 5] = (p3 + p4 + p5) + (q2 + q3 + q4 + q5) = 0.35 + 0.25 = 0.60

	w[1, 4] = (p1 + p2 + p3 + p4) + (q0 + q1 + q2 + q3 + q4) = 0.40 + 0.30 = 0.70
	w[2, 5] = (p2 + p3 + p4 + p5) + (q1 + q2 + q3 + q4 + q5) = 0.45 + 0.35 = 0.80

	w[1, 5] = (p1 + ... + p5) + (q0 + ... + q5) = 0.60 + 0.40 = 1.00

Now let us compute e[i, j]. Initially, the matrix e[i, j] is empty.
	e[1, 0] = q0 = 0.05	(∵ j = i – 1)
	e[2, 1] = q1 = 0.10	(∵ j = i – 1)
	e[3, 2] = q2 = 0.05	(∵ j = i – 1)
	e[4, 3] = q3 = 0.05	(∵ j = i – 1)
	e[5, 4] = q4 = 0.05	(∵ j = i – 1)
	e[6, 5] = q5 = 0.10	(∵ j = i – 1)

	e[1, 1] = min { e[1, 0] + e[2, 1] + w(1, 1) } = min { 0.05 + 0.10 + 0.30 } = 0.45
	e[2, 2] = min { e[2, 1] + e[3, 2] + w(2, 2) } = min { 0.10 + 0.05 + 0.25 } = 0.40
	e[3, 3] = min { e[3, 2] + e[4, 3] + w(3, 3) } = min { 0.05 + 0.05 + 0.15 } = 0.25
	e[4, 4] = min { e[4, 3] + e[5, 4] + w(4, 4) } = min { 0.05 + 0.05 + 0.20 } = 0.30
	e[5, 5] = min { e[5, 4] + e[6, 5] + w(5, 5) } = min { 0.05 + 0.10 + 0.35 } = 0.50

	e[1, 2]	= min { e[1, 0] + e[2, 2] + w(1, 2), e[1, 1] + e[3, 2] + w(1, 2) }
		= min { 0.05 + 0.40 + 0.45, 0.45 + 0.05 + 0.45 } = min { 0.90, 0.95 } = 0.90

	e[2, 3]	= min { e[2, 1] + e[3, 3] + w(2, 3), e[2, 2] + e[4, 3] + w(2, 3) }
		= min { 0.10 + 0.25 + 0.35, 0.40 + 0.05 + 0.35 } = min { 0.70, 0.80 } = 0.70

	e[3, 4]	= min { e[3, 2] + e[4, 4] + w(3, 4), e[3, 3] + e[5, 4] + w(3, 4) }
		= min { 0.05 + 0.30 + 0.30, 0.25 + 0.05 + 0.30 } = min { 0.65, 0.60 } = 0.60

	e[4, 5]	= min { e[4, 3] + e[5, 5] + w(4, 5), e[4, 4] + e[6, 5] + w(4, 5) }
		= min { 0.05 + 0.50 + 0.50, 0.30 + 0.10 + 0.50 } = min { 1.05, 0.90 } = 0.90

	e[1, 3]	= min { e[1, 0] + e[2, 3] + w(1, 3), e[1, 1] + e[3, 3] + w(1, 3), e[1, 2] + e[4, 3] + w(1, 3) }
		= min { 0.05 + 0.70 + 0.55, 0.45 + 0.25 + 0.55, 0.90 + 0.05 + 0.55 } = min { 1.30, 1.25, 1.50 } = 1.25

	e[2, 4]	= min { e[2, 1] + e[3, 4] + w(2, 4), e[2, 2] + e[4, 4] + w(2, 4), e[2, 3] + e[5, 4] + w(2, 4) }
		= min { 0.10 + 0.60 + 0.50, 0.40 + 0.30 + 0.50, 0.70 + 0.05 + 0.50 } = min { 1.20, 1.20, 1.25 } = 1.20

	e[3, 5]	= min { e[3, 2] + e[4, 5] + w(3, 5), e[3, 3] + e[5, 5] + w(3, 5), e[3, 4] + e[6, 5] + w(3, 5) }
		= min { 0.05 + 0.90 + 0.60, 0.25 + 0.50 + 0.60, 0.60 + 0.10 + 0.60 } = min { 1.55, 1.35, 1.30 } = 1.30

	e[1, 4]	= min { e[1, 0] + e[2, 4] + w(1, 4), e[1, 1] + e[3, 4] + w(1, 4), e[1, 2] + e[4, 4] + w(1, 4), e[1, 3] + e[5, 4] + w(1, 4) }
		= min { 0.05 + 1.20 + 0.70, 0.45 + 0.60 + 0.70, 0.90 + 0.30 + 0.70, 1.25 + 0.05 + 0.70 } = min { 1.95, 1.75, 1.90, 2.00 } = 1.75

	e[2, 5]	= min { e[2, 1] + e[3, 5] + w(2, 5), e[2, 2] + e[4, 5] + w(2, 5), e[2, 3] + e[5, 5] + w(2, 5), e[2, 4] + e[6, 5] + w(2, 5) }
		= min { 0.10 + 1.30 + 0.80, 0.40 + 0.90 + 0.80, 0.70 + 0.50 + 0.80, 1.20 + 0.10 + 0.80 } = min { 2.20, 2.10, 2.00, 2.10 } = 2.00

	e[1, 5]	= min { e[1, 0] + e[2, 5] + w(1, 5), e[1, 1] + e[3, 5] + w(1, 5), e[1, 2] + e[4, 5] + w(1, 5), e[1, 3] + e[5, 5] + w(1, 5), e[1, 4] + e[6, 5] + w(1, 5) }
		= min { 0.05 + 2.00 + 1.00, 0.45 + 1.30 + 1.00, 0.90 + 0.90 + 1.00, 1.25 + 0.50 + 1.00, 1.75 + 0.10 + 1.00 }
		= min { 3.05, 2.75, 2.80, 2.75, 2.85 } = 2.75

So finally,
	e[1, 5] is minimum where r = 2
	e[2, 5] is minimum where r = 4
	e[1, 4] is minimum where r = 2
	e[3, 5] is minimum where r = 5
	e[2, 4] is minimum where r = 2
	e[1, 3] is minimum where r = 2
	e[4, 5] is minimum where r = 5
	e[3, 4] is minimum where r = 4
	e[2, 3] is minimum where r = 2
	e[1, 2] is minimum where r = 1
	e[1, 1], e[2, 2], e[3, 3], e[4, 4], e[5, 5] are minimum where r = 1, 2, 3, 4, 5 respectively

So we have the root table. Let us construct the OBST from it :
	r[1, 5] = 2, so k2 will be at the root; k1 will be the left child of k2, and the subtree over k3..k5 will be the right child of k2.
	r[3, 5] = 5, so k5 will be the root of the subtree k3..k5; k3 and k4 will be on the left of k5.
	r[3, 4] = 4, so k4 will be the root of this subtree; k3 will be on the left of k4.
So the final OBST would look like this : k2 at the root with left child k1 and right child k5; k4 is the left child of k5, and k3 is the left child of k4.
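The whole e/w/root computation carried out by hand above can be reproduced with a short sketch. The function name and layout below are assumptions; the recurrence is the one used in this section, with e[i][i – 1] = q[i – 1] for empty subtrees.

```python
def optimal_bst(p, q):
    """Compute the expected search cost e[1][n] and the root table for an
    OBST, where p[1..n] are key probabilities and q[0..n] are dummy-key
    probabilities (p[0] is a placeholder)."""
    n = len(p) - 1
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 2):
        e[i][i - 1] = w[i][i - 1] = q[i - 1]      # empty subtree d_{i-1}
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            e[i][j] = float('inf')
            for r in range(i, j + 1):             # try every key as the root
                t = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if t < e[i][j]:
                    e[i][j], root[i][j] = t, r
    return e[1][n], root

# Data of the example above.
p = [0, 0.15, 0.10, 0.05, 0.10, 0.20]
q = [0.05, 0.10, 0.05, 0.05, 0.05, 0.10]
cost, root = optimal_bst(p, q)
print(round(cost, 2), root[1][5], root[3][5], root[3][4])   # 2.75 2 5 4
```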
4.5	0/1 Knapsack Problem

Q.	Explain 0/1 knapsack using dynamic programming with an example.

Problem : Given a set of items, each having a different weight and a value or profit associated with it, find the set of items such that the total weight is less than or equal to the capacity of the knapsack and the total value earned is as large as possible.

•	The knapsack problem is a combinatorial optimization problem. Let X = < x1, x2, ..., xn > be the set of n items, W = < w1, w2, ..., wn > and V = < v1, v2, ..., vn > be the weights and values associated with the items, and M be the knapsack capacity.

4.5.1	First Approach

•	Build a table V, where V[i, j] is the maximum value that can be earned using the first i items with knapsack capacity j :

Algorithm DP_KNAPSACK (w, v, M)
// w is an array of the weights of items
// v is an array of the values of items
// M is the knapsack capacity
for i ← 1 to n do
	for j ← 0 to M do
		if j < w[i] then
			V[i, j] ← V[i – 1, j]
		else
			V[i, j] ← max { V[i – 1, j], v[i] + V[i – 1, j – w[i]] }
		end
	end
end

•	The above algorithm will just tell us the maximum value we can earn with dynamic programming. It does not say anything about which items should be selected. We can find the items that give the optimum result using the following algorithm.
Algorithm TRACE_KNAPSACK (w, v, M)
// w is an array of the weights of items
// v is an array of the values of items
// M is the knapsack capacity
SW ← { }
SP ← { }
i ← n, j ← M
while (j > 0) do
	if (V[i, j] == V[i – 1, j]) then
		i ← i – 1		// item i is not selected
	else
		V[i, j] ← V[i, j] – v[i]
		j ← j – w[i]		// reduce the problem size
		SW ← SW ∪ { w[i] }
		SP ← SP ∪ { v[i] }
		i ← i – 1
	end
end
Complexity analysis

•	With n items, there exist 2ⁿ subsets; the brute force approach examines all subsets to find the optimal solution. Hence, the running time of the brute force approach is O(2ⁿ). This is unacceptable for large n.
•	Dynamic programming finds an optimal solution by constructing a table of size n × M, where n is the number of items and M is the capacity of the knapsack. This table can be filled up in O(nM) time, and the same is the space complexity.
•	So the running time of the brute force approach is O(2ⁿ), while the running time using dynamic programming is O(n · M).
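The tabulation and trace-back described above can be sketched as follows (the function name is an assumption; the sample data matches the worked example that follows, w = (4, 6, 8), v = (10, 12, 15), M = 10):

```python
def dp_knapsack(w, v, M):
    """First approach: V[i][j] = best value using items 1..i, capacity j.
    Returns the optimal value and the (1-based) indices of chosen items."""
    n = len(w)
    V = [[0] * (M + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(M + 1):
            if j < w[i - 1]:
                V[i][j] = V[i - 1][j]
            else:
                V[i][j] = max(V[i - 1][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    items, j = [], M                   # trace back, as in TRACE_KNAPSACK
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:     # value changed => item i was taken
            items.append(i)
            j -= w[i - 1]
    return V[n][M], sorted(items)

print(dp_knapsack([4, 6, 8], [10, 12, 15], 10))   # (22, [1, 2])
```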
Ex. 4.5.1 :	Consider the 0/1 knapsack problem with n = 3, w = (4, 6, 8) and v = (10, 12, 15). Write the recurrence relation for the problem and determine the optimal profit for the knapsack of capacity M = 10.
Soln. :
Here, the knapsack capacity is sufficiently small, so we can use the first approach to solve the problem.
The solution of the knapsack problem is defined as,

	V[i, j] = V[i – 1, j]					, if j < w_i
	V[i, j] = max { V[i – 1, j], v_i + V[i – 1, j – w_i] }	, if j ≥ w_i

We have the following stats about the problem,

Table P. 4.5.1
	Item detail	Weight (w_i)	Value (v_i)
	i = 1		4		10
	i = 2		6		12
	i = 3		8		15
Boundary conditions would be V[0, j] = V[i, 0] = 0. The initial configuration of the table looks like this :

	Item detail	| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
	i = 0		| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1, w1 = 4, v1 = 10	| 0
	i = 2, w2 = 6, v2 = 12	| 0
	i = 3, w3 = 8, v3 = 15	| 0

Filling column j = 1 :
	V[1, 1] ⇒ i = 1, j = 1, w1 = 4. As j < w1, V[i, j] = V[i – 1, j], ∴ V[1, 1] = V[0, 1] = 0
	V[2, 1] ⇒ i = 2, j = 1, w2 = 6. As j < w2, V[2, 1] = V[1, 1] = 0
	V[3, 1] ⇒ i = 3, j = 1, w3 = 8. As j < w3, V[3, 1] = V[2, 1] = 0

Filling column j = 2 :
	V[1, 2] ⇒ i = 1, j = 2, w1 = 4. As j < w1, V[1, 2] = V[0, 2] = 0
	V[2, 2] ⇒ i = 2, j = 2, w2 = 6. As j < w2, V[2, 2] = V[1, 2] = 0
	V[3, 2] ⇒ i = 3, j = 2, w3 = 8. As j < w3, V[3, 2] = V[2, 2] = 0

	Item detail	| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
	i = 0		| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1		| 0 | 0 | 0
	i = 2		| 0 | 0 | 0
	i = 3		| 0 | 0 | 0
Filling column j = 3 :
	V[1, 3] ⇒ i = 1, j = 3, w1 = 4. As j < w1, V[1, 3] = V[0, 3] = 0
	V[2, 3] ⇒ i = 2, j = 3, w2 = 6. As j < w2, V[2, 3] = V[1, 3] = 0
	V[3, 3] ⇒ i = 3, j = 3, w3 = 8. As j < w3, V[3, 3] = V[2, 3] = 0

Filling column j = 4 :
	V[1, 4] ⇒ i = 1, j = 4, w1 = 4, v1 = 10
		As j ≥ w1, V[i, j] = max { V[i – 1, j], v_i + V[i – 1, j – w_i] } = max { V[0, 4], 10 + V[0, 0] }
		∴ V[1, 4] = max { 0, 10 + 0 } = 10
	V[2, 4] ⇒ i = 2, j = 4, w2 = 6. As j < w2, V[2, 4] = V[1, 4] = 10
	V[3, 4] ⇒ i = 3, j = 4, w3 = 8. As j < w3, V[3, 4] = V[2, 4] = 10

	Item detail	| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
	i = 0		| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1		| 0 | 0 | 0 | 0 | 10
	i = 2		| 0 | 0 | 0 | 0 | 10
	i = 3		| 0 | 0 | 0 | 0 | 10
Filling column j = 5 :
	V[1, 5] ⇒ i = 1, j = 5, w1 = 4, v1 = 10
		As j ≥ w1, V[1, 5] = max { V[0, 5], 10 + V[0, 1] } = max { 0, 10 + 0 } = 10
	V[2, 5] ⇒ i = 2, j = 5, w2 = 6. As j < w2, V[2, 5] = V[1, 5] = 10
	V[3, 5] ⇒ i = 3, j = 5, w3 = 8. As j < w3, V[3, 5] = V[2, 5] = 10

Filling column j = 6 :
	V[1, 6] ⇒ As j ≥ w1, V[1, 6] = max { V[0, 6], 10 + V[0, 2] } = max { 0, 10 + 0 } = 10
	V[2, 6] ⇒ i = 2, j = 6, w2 = 6, v2 = 12
		As j ≥ w2, V[2, 6] = max { V[1, 6], 12 + V[1, 0] } = max { 10, 12 + 0 } = 12
	V[3, 6] ⇒ i = 3, j = 6, w3 = 8. As j < w3, V[3, 6] = V[2, 6] = 12

Filling column j = 7 :
	V[1, 7] ⇒ i = 1, j = 7, w1 = 4, v1 = 10
		As j ≥ w1, V[1, 7] = max { V[0, 7], 10 + V[0, 3] } = max { 0, 10 + 0 } = 10
	V[2, 7] ⇒ i = 2, j = 7, w2 = 6, v2 = 12
		As j ≥ w2, V[2, 7] = max { V[1, 7], 12 + V[1, 1] } = max { 10, 12 + 0 } = 12
	V[3, 7] ⇒ i = 3, j = 7, w3 = 8. As j < w3, V[3, 7] = V[2, 7] = 12
Filling column j = 8 :
	V[1, 8] ⇒ i = 1, j = 8, w1 = 4, v1 = 10
		As j ≥ w1, V[1, 8] = max { V[0, 8], 10 + V[0, 4] } = max { 0, 10 + 0 } = 10
	V[2, 8] ⇒ i = 2, j = 8, w2 = 6, v2 = 12
		As j ≥ w2, V[2, 8] = max { V[1, 8], 12 + V[1, 2] } = max { 10, 12 + 0 } = 12
	V[3, 8] ⇒ i = 3, j = 8, w3 = 8, v3 = 15
		As j ≥ w3, V[3, 8] = max { V[2, 8], 15 + V[2, 0] } = max { 12, 15 + 0 } = 15

Filling column j = 9 :
	V[1, 9] ⇒ As j ≥ w1, V[1, 9] = max { V[0, 9], 10 + V[0, 5] } = max { 0, 10 + 0 } = 10
	V[2, 9] ⇒ As j ≥ w2, V[2, 9] = max { V[1, 9], 12 + V[1, 3] } = max { 10, 12 + 0 } = 12
	V[3, 9] ⇒ As j ≥ w3, V[3, 9] = max { V[2, 9], 15 + V[2, 1] } = max { 12, 15 + 0 } = 15

	Item detail		| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
	i = 0			| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1, w1 = 4, v1 = 10	| 0 | 0 | 0 | 0 | 10 | 10 | 10 | 10 | 10 | 10
	i = 2, w2 = 6, v2 = 12	| 0 | 0 | 0 | 0 | 10 | 10 | 12 | 12 | 12 | 12
	i = 3, w3 = 8, v3 = 15	| 0 | 0 | 0 | 0 | 10 | 10 | 12 | 12 | 15 | 15
Filling column j = 10 :
	V[1, 10] ⇒ i = 1, j = 10, w1 = 4, v1 = 10
		As j ≥ w1, V[1, 10] = max { V[0, 10], 10 + V[0, 6] } = max { 0, 10 + 0 } = 10
	V[2, 10] ⇒ i = 2, j = 10, w2 = 6, v2 = 12
		As j ≥ w2, V[2, 10] = max { V[1, 10], 12 + V[1, 4] } = max { 10, 12 + 10 } = 22
	V[3, 10] ⇒ i = 3, j = 10, w3 = 8, v3 = 15
		As j ≥ w3, V[3, 10] = max { V[2, 10], 15 + V[2, 2] } = max { 22, 15 + 0 } = 22

The final table would be,
	Item detail		| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
	i = 0			| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1, w1 = 4, v1 = 10	| 0 | 0 | 0 | 0 | 10 | 10 | 10 | 10 | 10 | 10 | 10
	i = 2, w2 = 6, v2 = 12	| 0 | 0 | 0 | 0 | 10 | 10 | 12 | 12 | 12 | 12 | 22
	i = 3, w3 = 8, v3 = 15	| 0 | 0 | 0 | 0 | 10 | 10 | 12 | 12 | 15 | 15 | 22

Trace for M = 10 :
Step 1 :	i = 3, j = 10
	V[i, j] = V[i – 1, j], so item I3 is not selected; reduce the problem size by setting i = i – 1 = 2.
Step 2 :	i = 2, j = 10
	V[i, j] ≠ V[i – 1, j], so add item I2 in the solution set; reduce the problem size j by w2 = 6.
	j = j – w2 = 10 – 6 = 4, i = i – 1 = 1
	Solution Set S = { I2 }
Step 3 :	i = 1, j = 4
	V[i, j] ≠ V[i – 1, j], so add item I1 in the solution set; reduce the problem size j by w1 = 4.
	j = j – w1 = 4 – 4 = 0
	Solution Set S = { I1, I2 }
Thus, items I1 and I2 are selected, earning the maximum profit of 22.
Ex. 4.5.2 :	Find an optimal solution for the following 0/1 knapsack problem using dynamic programming : number of objects n = 4, knapsack capacity M = 5, weights (w1, w2, w3, w4) = (2, 3, 4, 5) and profits (p1, p2, p3, p4) = (3, 4, 5, 6).
Soln. :
•	The solution of the knapsack problem is defined as,

	V[i, j] = V[i – 1, j]					, if j < w_i
	V[i, j] = max { V[i – 1, j], v_i + V[i – 1, j – w_i] }	, if j ≥ w_i

•	We have the following stats about the problem,

Table P. 4.5.2
	Item detail	Weight (w_i)	Value (v_i)
	i = 1		2		3
	i = 2		3		4
	i = 3		4		5
	i = 4		5		6

•	Boundary conditions would be V[0, j] = V[i, 0] = 0. The initial configuration of the table looks like this :

	Item detail		| j → 0 | 1 | 2 | 3 | 4 | 5
	i = 0			| 0 | 0 | 0 | 0 | 0 | 0
	i = 1, w1 = 2, v1 = 3	| 0
	i = 2, w2 = 3, v2 = 4	| 0
	i = 3, w3 = 4, v3 = 5	| 0
	i = 4, w4 = 5, v4 = 6	| 0

Filling the table column by column with the recurrence gives, in particular, for the last column :

	V[1, 5] = max { V[0, 5], 3 + V[0, 3] } = max { 0, 3 + 0 } = 3
	V[2, 5] = max { V[1, 5], 4 + V[1, 2] } = max { 3, 4 + 3 } = 7
	V[3, 5] = max { V[2, 5], 5 + V[2, 1] } = max { 7, 5 + 0 } = 7
	V[4, 5] = max { V[3, 5], 6 + V[3, 0] } = max { 7, 6 + 0 } = 7

The final table would be,
	Item detail	| j → 0 | 1 | 2 | 3 | 4 | 5
	i = 0		| 0 | 0 | 0 | 0 | 0 | 0
	i = 1		| 0 | 0 | 3 | 3 | 3 | 3
	i = 2		| 0 | 0 | 3 | 4 | 4 | 7
	i = 3		| 0 | 0 | 3 | 4 | 5 | 7
	i = 4		| 0 | 0 | 3 | 4 | 5 | 7

Trace for M = 5 :
Step 1 :	i = 4, j = 5. V[4, 5] = V[3, 5], so item I4 is not selected; i = i – 1 = 3.
Step 2 :	i = 3, j = 5. V[3, 5] = V[2, 5], so item I3 is not selected; i = i – 1 = 2.
Step 3 :	i = 2, j = 5. V[2, 5] ≠ V[1, 5], so add item I2 in the solution set; j = j – w2 = 5 – 3 = 2, i = 1.
Step 4 :	i = 1, j = 2. V[1, 2] ≠ V[0, 2], so add item I1 in the solution set; j = j – w1 = 2 – 2 = 0.
Solution Set S = { I1, I2 }, with maximum profit 7.
Ex. 4.5.3 :	Solve the following 0/1 knapsack problem using the tabulation approach : n = 5, knapsack capacity M = 11, weights w = (1, 2, 5, 6, 7) and profits p = (1, 6, 18, 22, 28).
Soln. :
The solution of the knapsack problem is defined as,

	V[i, j] = V[i – 1, j]					, if j < w_i
	V[i, j] = max { V[i – 1, j], v_i + V[i – 1, j – w_i] }	, if j ≥ w_i

We have the following stats about the problem,

Table P. 4.5.3
	Item detail	Weight (w_i)	Value (v_i)
	i = 1		1		1
	i = 2		2		6
	i = 3		5		18
	i = 4		6		22
	i = 5		7		28

Boundary conditions would be V[0, j] = V[i, 0] = 0.

Filling column j = 1 :
	V[1, 1] ⇒ i = 1, j = 1, w1 = 1, v1 = 1. As j ≥ w1, V[1, 1] = max { V[0, 1], 1 + V[0, 0] } = 1
	Similarly, V[2, 1] = V[3, 1] = V[4, 1] = V[5, 1] = 1

Filling column j = 2 :
	V[2, 2] ⇒ i = 2, j = 2, w2 = 2, v2 = 6. As j ≥ w2, V[2, 2] = max { V[1, 2], 6 + V[1, 0] } = max { 1, 6 + 0 } = 6
	Similarly, V[3, 2] = V[4, 2] = V[5, 2] = 6

Filling column j = 3 :
	V[2, 3] ⇒ i = 2, j = 3, w2 = 2, v2 = 6. As j ≥ w2, V[2, 3] = max { V[1, 3], 6 + V[1, 1] } = max { 1, 6 + 1 } = 7
	Similarly, V[3, 3] = V[4, 3] = V[5, 3] = 7

Filling column j = 4 :
	V[2, 4] = max { V[1, 4], 6 + V[1, 2] } = max { 1, 6 + 1 } = 7
	Similarly, V[3, 4] = V[4, 4] = V[5, 4] = 7

Filling column j = 6 :
	V[3, 6] ⇒ i = 3, j = 6, w3 = 5, v3 = 18. As j ≥ w3, V[3, 6] = max { V[2, 6], 18 + V[2, 1] } = max { 7, 18 + 1 } = 19
	V[4, 6] ⇒ i = 4, j = 6, w4 = 6, v4 = 22. As j ≥ w4, V[4, 6] = max { V[3, 6], 22 + V[3, 0] } = max { 19, 22 + 0 } = 22
	Similarly, V[5, 6] = 22

If we continue similarly, finally the table would look like this :

	Item detail	| j → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
	i = 0		| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
	i = 1		| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
	i = 2		| 0 | 1 | 6 | 7 | 7 | 7 | 7 | 7 | 7 | 7 | 7 | 7
	i = 3		| 0 | 1 | 6 | 7 | 7 | 18 | 19 | 24 | 25 | 25 | 25 | 25
	i = 4		| 0 | 1 | 6 | 7 | 7 | 18 | 22 | 24 | 28 | 29 | 29 | 40
	i = 5		| 0 | 1 | 6 | 7 | 7 | 18 | 22 | 28 | 29 | 34 | 35 | 40

Trace for M = 11 :
Step 1 :	i = 5, j = 11. V[5, 11] = V[4, 11], so item I5 is not selected; i = i – 1 = 5 – 1 = 4.
Step 2 :	i = 4, j = 11. V[i, j] ≠ V[i – 1, j], so add item I4 in the solution set; reduce the problem size j by w4 = 6.
	j = j – w4 = 11 – 6 = 5, i = i – 1 = 4 – 1 = 3
Step 3 :	i = 3, j = 5. V[i, j] ≠ V[i – 1, j], so add item I3 in the solution set; reduce the problem size j by w3 = 5.
	j = j – w3 = 5 – 5 = 0
Solution Set S = { I3, I4 }, with maximum profit 18 + 22 = 40.
4.5.2	Second Approach

•	The first approach is suitable when the knapsack capacity is small. With a large knapsack, the first approach is not advisable from a computation as well as memory requirement point of view.
•	The second approach discussed below is more suitable when the problem instance is large.
•	In the forward approach, dynamic programming solves the knapsack problem as follows :
1.	Set S^0 = { (0, 0) }.
2.	Compute S1^i = { (p, w) | (p – p_{i+1}, w – w_{i+1}) ∈ S^i }, i.e. S1^i is obtained by adding the pair (p_{i+1}, w_{i+1}) to each state of S^i :
	– if x_{i+1} = 0 (the item is excluded), the state stays in S^i;
	– if x_{i+1} = 1 (the item is included), the state belongs to S1^i.
3.	S^{i+1} = MERGE_PURGE (S^i, S1^i). MERGE_PURGE does the following :
	– For two pairs (p_j, w_j) ∈ S^{i+1} and (p_k, w_k) ∈ S^{i+1}, if p_j ≤ p_k and w_j ≥ w_k, we say that (p_j, w_j) is dominated by (p_k, w_k), and the pair (p_j, w_j) is discarded.
	– Also purge all the pairs (p, w) from S^{i+1} with w > M, i.e. whose weight exceeds the knapsack capacity.
4.	Repeat steps 2 and 3 n times. S^n holds the solution of KNAPSACK(1, n, M).
•	To find the solution vector, trace backwards : for the pair (p, w) ∈ S^n under consideration,
	– if (p, w) ∈ S^{n–1}, then set x_n = 0;
	– otherwise, set x_n = 1 and update p = p – p_n, w = w – w_n.
Algorithm KNAPSACK (p, w, M)
S^0 ← { (0, 0) }
for i ← 1 to n do
	S1^(i – 1) ← { (P, W) | (P – p_i, W – w_i) ∈ S^(i – 1) and W ≤ M }
	S^i ← MERGE_PURGE (S^(i – 1), S1^(i – 1))
end
for i ← n to 1 do
	Select the last pair (P_x, W_x) ∈ S^i
	if (P_x, W_x) ∈ S^(i – 1) then
		x_i ← 0
	else
		x_i ← 1
	end
end
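A sketch of this set-based merge-and-purge approach in Python (illustrative; the names are assumptions). The sample data matches the instance n = 4, M = 25, p = (10, 12, 14, 16), w = (9, 8, 12, 14) worked out in the next example.

```python
def knapsack_sets(p, w, M):
    """Second approach: grow S^i as a list of (profit, weight) pairs,
    discarding dominated pairs and pairs whose weight exceeds M."""
    S = [[(0, 0)]]                      # S[0] = S^0
    for pi, wi in zip(p, w):
        S1 = [(pp + pi, ww + wi) for pp, ww in S[-1] if ww + wi <= M]
        merged = sorted(set(S[-1] + S1), key=lambda t: (t[1], -t[0]))
        kept, best = [], -1
        for pair in merged:             # keep only non-dominated pairs
            if pair[0] > best:
                kept.append(pair)
                best = pair[0]
        S.append(kept)
    profit = S[-1][-1][0]               # last pair holds the optimum
    pair, x = S[-1][-1], [0] * len(p)   # trace the solution vector
    for i in range(len(p) - 1, -1, -1):
        if pair not in S[i]:            # pair absent from S^i => item i+1 taken
            x[i] = 1
            pair = (pair[0] - p[i], pair[1] - w[i])
    return profit, x

# Instance: p = (10, 12, 14, 16), w = (9, 8, 12, 14), M = 25.
print(knapsack_sets([10, 12, 14, 16], [9, 8, 12, 14], 25))   # (28, [0, 1, 0, 1])
```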
Ex. 4.5.4 :	Solve the instance of the 0/1 knapsack problem using dynamic programming : n = 4, M = 25, (p1, p2, p3, p4) = (10, 12, 14, 16), (w1, w2, w3, w4) = (9, 8, 12, 14).
Soln. :
The knapsack capacity is very large, i.e. 25, so a tabular approach won't be suitable. We will use the second approach to solve this problem.
Initially, S^0 = { (0, 0) }
Iteration 1
	Obtain S1^0 by adding the pair (p1, w1) = (10, 9) to each pair of S^0 :
	S1^0 = S^0 + (10, 9) = { (10, 9) }
	Obtain S^1 by merging and purging S^0 and S1^0 :
	S^1 = MERGE_PURGE (S^0, S1^0) = { (0, 0), (10, 9) }
Iteration 2
	Obtain S1^1 by adding the pair (p2, w2) = (12, 8) to each pair of S^1 :
	S1^1 = S^1 + (12, 8) = { (12, 8), (22, 17) }
	Obtain S^2 by merging and purging S^1 and S1^1 :
	S^2 = MERGE_PURGE (S^1, S1^1) = { (0, 0), (12, 8), (22, 17) }
	The pair (10, 9) is discarded because the pair (12, 8) dominates (10, 9).
Iteration 3
	Obtain S1^2 by adding the pair (p3, w3) = (14, 12) to each pair of S^2 :
	S1^2 = S^2 + (14, 12) = { (14, 12), (26, 20), (36, 29) }
	Obtain S^3 by merging and purging S^2 and S1^2 :
	S^3 = MERGE_PURGE (S^2, S1^2) = { (0, 0), (12, 8), (14, 12), (22, 17), (26, 20) }
	The pair (36, 29) is discarded because its w > M.
Iteration 4
	Obtain S1^3 by adding the pair (p4, w4) = (16, 14) to each pair of S^3 :
	S1^3 = S^3 + (16, 14) = { (16, 14), (28, 22), (30, 26), (38, 31), (42, 34) }
	Obtain S^4 by merging and purging S^3 and S1^3 :
	S^4 = MERGE_PURGE (S^3, S1^3) = { (0, 0), (12, 8), (14, 12), (16, 14), (22, 17), (26, 20), (28, 22) }
	The pairs (30, 26), (38, 31) and (42, 34) are discarded because their w > M.
Find the optimal solution :
	Here, n = 4. Start with the last pair in S^4, i.e. (28, 22).
	Pair (28, 22) ∈ S^4 but (28, 22) ∉ S^3, so set x4 = 1. Update,
		p = p – p4 = 28 – 16 = 12
		w = w – w4 = 22 – 14 = 8
	Pair (12, 8) ∈ S^3 and (12, 8) ∈ S^2, so x3 = 0.
	Pair (12, 8) ∈ S^2 but (12, 8) ∉ S^1, so set x2 = 1. Update,
		p = p – p2 = 12 – 12 = 0
		w = w – w2 = 8 – 8 = 0
	The problem size is 0, so stop; x1 = 0.
The optimal solution vector is (x1, x2, x3, x4) = (0, 1, 0, 1).
Thus, this approach selects the pairs (12, 8) and (16, 14), which give a profit of 28.
Ex. 4.5.5 :	Generate the sets S^i, 0 ≤ i ≤ 3, for the following knapsack instance : n = 3, (w1, w2, w3) = (2, 3, 4) and (p1, p2, p3) = (1, 2, 5) with M = 6. Find the optimal solution.
Soln. :
Initially, S^0 = { (0, 0) }
Iteration 1
	Obtain S1^0 by adding the pair (p1, w1) = (1, 2) to each pair of S^0 :
	S1^0 = S^0 + (1, 2) = { (1, 2) }
	Obtain S^1 by merging and purging S^0 and S1^0 :
	S^1 = MERGE_PURGE (S^0, S1^0) = { (0, 0), (1, 2) }
Iteration 2
	Obtain S1^1 by adding the pair (p2, w2) = (2, 3) to each pair of S^1 :
	S1^1 = S^1 + (2, 3) = { (2, 3), (3, 5) }
	Obtain S^2 by merging and purging S^1 and S1^1 :
	S^2 = MERGE_PURGE (S^1, S1^1) = { (0, 0), (1, 2), (2, 3), (3, 5) }
Iteration 3
	Obtain S1^2 by adding the pair (p3, w3) = (5, 4) to each pair of S^2 :
	S1^2 = S^2 + (5, 4) = { (5, 4), (6, 6), (7, 7), (8, 9) }
	Obtain S^3 by merging and purging S^2 and S1^2 :
	S^3 = MERGE_PURGE (S^2, S1^2) = { (0, 0), (1, 2), (2, 3), (5, 4), (6, 6) }
	The pairs (7, 7) and (8, 9) are discarded because their w > M.
	The pair (3, 5) is discarded because the pair (5, 4) dominates (3, 5).
Find the optimal solution :
	Here, n = 3. Start with the last pair in S^3, i.e. (6, 6).
	Pair (6, 6) ∈ S^3 but (6, 6) ∉ S^2, so set x3 = 1. Update,
		p = p – p3 = 6 – 5 = 1
		w = w – w3 = 6 – 4 = 2
	Pair (1, 2) ∈ S^2 and (1, 2) ∈ S^1, so x2 = 0.
	Pair (1, 2) ∈ S^1 but (1, 2) ∉ S^0, so set x1 = 1.
The optimal solution vector is (x1, x2, x3) = (1, 0, 1).
Thus, this approach selects the pairs (1, 2) and (5, 4).
Ex. 4.5.6 :	Consider the following instance of the knapsack problem : n = 6, (p1, p2, p3, p4, p5, p6) = (100, 50, 20, 10, 7, 3), with w_i = p_i for each i, and M = 165. Solve the problem using the dynamic programming approach.
Soln. :
The knapsack capacity is very large, so we will follow the second approach.
Here, n = 6 and for each i, (p_i, w_i) = (p_i, p_i). As all p_i = w_i, for simplicity we write the single value p instead of the pair (p_i, w_i).
Initially, S^0 = { 0 }
Iteration 1
	S1^0 = S^0 + {100} = { 100 }
	S^1 = MERGE_PURGE (S^0, S1^0) = { 0, 100 }
Iteration 2
	S1^1 = S^1 + {50} = { 50, 150 }
	S^2 = MERGE_PURGE (S^1, S1^1) = { 0, 50, 100, 150 }
Iteration 3
	S1^2 = S^2 + {20} = { 20, 70, 120, 170 }
	S^3 = MERGE_PURGE (S^2, S1^2) = { 0, 20, 50, 70, 100, 120, 150 }
	170 is purged from S^3 because 170 > M.
Iteration 4
	S1^3 = S^3 + {10} = { 10, 30, 60, 80, 110, 130, 160 }
	S^4 = MERGE_PURGE (S^3, S1^3)
	     = { 0, 10, 20, 30, 50, 60, 70, 80, 100, 110, 120, 130, 150, 160 }
Iteration 5
	S1^4 = S^4 + {7} = { 7, 17, 27, 37, 57, 67, 77, 87, 107, 117, 127, 137, 157, 167 }
	S^5 = MERGE_PURGE (S^4, S1^4)
	     = { 0, 7, 10, 17, 20, 27, 30, 37, 50, 57, 60, 67, 70, 77, 80, 87, 100, 107, 110, 117, 120, 127, 130, 137, 150, 157, 160 }
	167 is purged from S^5 because 167 > M.
Iteration 6
	S1^5 = S^5 + {3} = { 3, 10, 13, 20, 23, 30, 33, 40, 53, 60, 63, 70, 73, 80, 83, 90, 103, 110, 113, 120, 123, 130, 133, 140, 153, 160, 163 }
	S^6 = MERGE_PURGE (S^5, S1^5)
	     = { 0, 3, 7, 10, 13, 17, 20, 23, 27, 30, 33, 37, 40, 50, 53, 57, 60, 63, 67, 70, 73, 77, 80, 83, 87, 90, 100, 103, 107, 110, 113, 117, 120, 123, 127, 130, 133, 137, 140, 150, 153, 157, 160, 163 }
Find the optimal solution :
	Here, n = 6. Start with the last value in S^6, i.e. 163.
	163 ∈ S^6 but 163 ∉ S^5, so set x6 = 1.