Unacademy - Algorithms
Introduction

• An algorithm is a sequence of finite steps to solve a problem. All steps should be finite and compulsory (if you remove any one step, then the algorithm will be affected).
• It should terminate after a finite time and produce one or more outputs by taking zero or more inputs.
• Which algorithm is best suited for a given application depends on the input size, the architecture of the computer, and the kind of storage to be used: disks, main memory, or even tapes.

Need for Analysis

• A problem may have many candidate solutions, the majority of which do not solve the problem in hand. Finding a solution that does, or one that is 'best', can be quite a challenge.
• How much time a program will take and how much space it will occupy while running is known as analysis of the program, or analysis of the algorithm.
• An algorithm analysis mostly depends on time and space complexity.
• Knowing the efficiency of an algorithm lets us compare its performance relative to an alternative algorithm.

Generally, we perform the following types of analysis:
⚪ Worst case: The maximum number of steps or resources required to solve a problem.
⚪ Best case: The minimum number of steps or resources required to solve a problem.
⚪ Average case: The average number of steps or resources required to solve a problem.
⚪ Amortized: A sequence of operations applied on an input of size n, averaged over time.

Asymptotic Notations

• We study the asymptotic efficiency of an algorithm to know how the running time of the algorithm increases with the size of the input.
• We use asymptotic notation primarily to describe the running times of algorithms: the order of growth of the running-time curve, as it varies with the size of the input, helps us compare two positive functions in terms of running time.

θ-Notation
Let f(n) and g(n) be two positive functions.
f(n) = θ(g(n)) if and only if
    f(n) ≤ c₁·g(n) and f(n) ≥ c₂·g(n), ∀ n ≥ n₀
such that there exist three positive constants c₁ > 0, c₂ > 0 and n₀ ≥ 1.
[Fig. 1.2 (A): f(n) lies between c₂·g(n) and c₁·g(n) for all n ≥ n₀.]

O-Notation [Pronounced "big-oh"]
Let f(n) and g(n) be two positive functions.
f(n) = O(g(n)) if and only if
    f(n) ≤ c·g(n) whenever n ≥ n₀
such that there exist two positive constants c > 0, n₀ ≥ 1.
[Fig. 1.2 (B): f(n) stays below c·g(n) beyond n₀.]
e.g., f(n) = 3n + 2, g(n) = n:
f(n) = O(g(n)) if and only if 3n + 2 ≤ cn, which holds with c = 4 and n₀ ≥ 2.

Ω-Notation [Pronounced "big-omega"]
• Ω-notation provides an asymptotic lower bound for a given function g(n), denoted by Ω(g(n)).
f(n) = Ω(g(n)) if and only if
    f(n) ≥ c·g(n), ∀ n ≥ n₀
such that there exist two positive constants c > 0, n₀ ≥ 1.
[Fig. 1.2 (C): f(n) stays above c·g(n) beyond n₀.]
Two functions f(n) and g(n) are equivalent if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)), i.e., f(n) = θ(g(n)).

o-Notation [Pronounced "little-oh"]
For a function f(n), we denote by o(g(n)) the set of functions
f(n) = o(g(n)) if and only if
    f(n) < c·g(n), for every constant c > 0, whenever n ≥ n₀ (n₀ ≥ 1).

ω-Notation [Pronounced "little-omega"]
• By analogy, ω-notation is to Ω-notation as o-notation is to O-notation.
For a function f(n), we denote by ω(g(n)) the set of functions
f(n) = ω(g(n)) if and only if
    f(n) > c·g(n), for every constant c > 0, whenever n ≥ n₀ (n₀ ≥ 1).

Some of the important properties of asymptotic notation are:

Transitivity:
• If f(n) = θ(g(n)) and g(n) = θ(h(n)) then f(n) = θ(h(n))
• If f(n) = O(g(n)) and g(n) = O(h(n)) then f(n) = O(h(n))
• If f(n) = Ω(g(n)) and g(n) = Ω(h(n)) then f(n) = Ω(h(n))
• If f(n) = o(g(n)) and g(n) = o(h(n)) then f(n) = o(h(n))
• If f(n) = ω(g(n)) and g(n) = ω(h(n)) then f(n) = ω(h(n))

Reflexivity:
• f(n) = θ(f(n))
• f(n) = O(f(n))
• f(n) = Ω(f(n))

Symmetry:
f(n) = θ(g(n)) if and only if g(n) = θ(f(n))

Transpose symmetry:
f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
f(n) = o(g(n)) if and only if g(n) = ω(f(n))
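The big-oh example above (3n + 2 ≤ 4n for n ≥ 2) can be spot-checked numerically; this is a small sketch, not part of the original text:

```python
# Numerically check the witness pair (c, n0) = (4, 2) for f(n) = 3n + 2 = O(n).
def f(n):
    return 3 * n + 2

c, n0 = 4, 2
# f(n) <= c * g(n) must hold for every n >= n0 (spot-check a large range).
assert all(f(n) <= c * n for n in range(n0, 10**6))
# The bound fails below n0, which is why n0 >= 2 is needed here.
assert f(1) > c * 1
print("f(n) = 3n + 2 is O(n) with c = 4, n0 = 2")
```

Any larger c (or larger n₀) would work equally well; the definition only asks for one witness pair.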
Note:
• log(n!) = θ(n log n) (by Stirling's approximation)
• Fibonacci numbers are closely related to the golden ratio φ and to its conjugate φ̂, which are the two roots of the equation x² = x + 1.

4. The following are some functions in increasing order of asymptotic (big-O) growth:

    4^(log log n) < 4^(log n) < n^(1/2)·log⁴(n) < 5n < n⁴ < (log n)^(5 log n) < n^(log n) < 5^n < 5^(5n) < (n/4)^(n/4) < n^n

∴ n^n grows faster than n^(log n).

c) Master theorem:
If a recurrence relation is in the form

    T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are positive constants and f(n) is a positive function, then the master theorem gives a tight bound directly.
• The recurrence describes the running time of an algorithm that divides a problem into 'a' subproblems, where each is restricted by size n/b.
• The 'a' subproblems are solved recursively.
• Case 3 additionally requires the regularity condition: a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n.

Recursion-tree examples:

i) T(n) = 3T(n/4) + cn²
The total cost over all nodes at depth i, for i = 0, 1, …, log₄n − 1, is 3^i · c(n/4^i)² = (3/16)^i cn².

    T(n) = Σ_{i=0}^{log₄n − 1} (3/16)^i cn² + θ(n^(log₄3))
         < (1 / (1 − 3/16)) cn² + θ(n^(log₄3))
         = (16/13) cn² + θ(n^(log₄3))

∴ T(n) = O(n²)

ii) T(n) = T(n/3) + T(2n/3) + O(n), i.e., T(n) ≤ T(n/3) + T(2n/3) + cn
In the recursion tree, every level contributes a cost of at most cn, and the longest root-to-leaf path has length log₃⁄₂ n, so T(n) = O(n log n).

iii) T(n) = 2T(n/2) + c if n > 1; T(n) = c if n = 1
[Fig.: recursion tree with level costs c, 2c, 4c, 8c, … and leaves T(n/8) shown at depth 3.]
The total cost over all nodes at depth i, for i = 0, 1, 2, …, log n, is 2^i c.

    T(n) = Σ_i 2^i c = c + 2c + 4c + … + nc = (2n − 1)c

∴ T(n) = θ(n)
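The recursion-tree bound for example i) can be confirmed by evaluating the recurrence directly. This is a sketch assuming c = 1 and a base case T(1) = 1 (the exact base value only shifts lower-order terms):

```python
from functools import lru_cache

# Evaluate T(n) = 3*T(n/4) + n^2 with T(1) = 1 and compare against n^2.
# The recursion-tree analysis says T(n) < (16/13) n^2 + lower-order terms.
@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 3 * T(n // 4) + n * n

for k in range(5, 11):
    n = 4 ** k                       # powers of 4, so n // 4 is exact
    ratio = T(n) / (n * n)
    assert ratio < 16 / 13 + 0.01    # matches the (16/13)cn^2 bound
print("T(n)/n^2 stays below ~16/13, so T(n) = O(n^2)")
```

The ratio converges to the geometric-series constant 16/13 ≈ 1.23, which is exactly the factor the recursion tree predicts.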
2. T(n) = T(2n/3) + 1
Here a = 1, b = 3/2, f(n) = 1, and n^(log_b a) = n^(log₃⁄₂ 1) = n⁰ = 1.
Case 2 of the master theorem applies, since f(n) = θ(n^(log_b a)) = θ(1).
∴ T(n) = θ(log n)

(Similarly, for T(n) = 7T(n/2) + θ(n²): f(n) = O(n^(log₂7 − ε)) for ε ≈ 0.8, so Case 1 applies and T(n) = θ(n^(log₂7)).)

3. Give the tight asymptotic bound for the recurrence relation T(n) = 3T(n/4) + n log n.
Here f(n) = n log n = Ω(n^(log₄3 + ε)), and the regularity condition 3·(n/4)·log(n/4) ≤ (3/4)·n log n holds, so Case 3 applies.
∴ T(n) = θ(n log n)

Rack Your Brain

The asymptotic notation of fun in terms of θ notation is:

    fun (n) {
        for (i = 1 ; i < n ; i++) {
            for (j = 1 ; j < n ; j += i) {
                printf ("%d %d", i, j) ;
            }
        }
    }

(A) θ(n√n)  (B) θ(n²)  (C) θ(n log n)  (D) θ(n² log n)
Solution: (C)
For each i, the inner loop runs about n/i times, so the total work is Σ_{i=1}^{n−1} n/i = θ(n log n).

Previous Years' Question

For an undirected graph with n vertices and m edges in adjacency-list representation, what is the time complexity of the best algorithm to set the twin pointer in each entry in each adjacency list? [GATE CSE 2016 (Set-2)]
(A) Θ(n²)  (B) Θ(n + m)  (C) Θ(m²)  (D) Θ(n⁴)
Solution: (B)
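The θ(n log n) answer for the nested loop above can be confirmed by brute-force counting; this sketch simply counts how many times the printf would execute:

```python
import math

def count_prints(n):
    # Count the inner-loop iterations of fun(n).
    total = 0
    for i in range(1, n):
        # j = 1, 1+i, 1+2i, ... while j < n  ->  ceil((n-1)/i) iterations.
        total += len(range(1, n, i))
    return total

# sum_{i=1}^{n-1} n/i ~ n ln n, so count / (n ln n) approaches a constant near 1.
for n in (1000, 4000, 16000):
    ratio = count_prints(n) / (n * math.log(n))
    assert 0.8 < ratio < 1.3
print("the print count grows as Theta(n log n)")
```

The ratio drifts slowly toward 1 (the harmonic-series constant γ and the ceiling corrections contribute the O(n) slack).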
Worst case of quicksort:
• When the first iteration is done, one subarray contains 0 elements, and the other subarray has n−1 elements (this happens when the pivot is the smallest or largest element).
• Quicksort is applied recursively on this second subarray of n−1 elements.
• The partition step needs at least n−1 comparisons.
• The recurrence for the worst-case running time is

    T(n) = T(n−1) + Θ(n), T(1) = θ(1)

which solves to T(n) = θ(n²).
• However, quicksort is fast on "randomly scattered" pivots.

Average case of quicksort:
Multiplying the average-case recurrence T(n) = (2/n) Σ_{i=1}^{n} T(i−1) + (n−1) by n, and its instance for n−1 by (n−1), gives

    nT(n) = 2 Σ_{i=1}^{n} T(i−1) + (n−1)n                  …(i)
    (n−1)T(n−1) = 2 Σ_{i=1}^{n−1} T(i−1) + (n−1)(n−2)      …(ii)

Subtracting equation (ii) from (i), we get

    nT(n) − (n−1)T(n−1) = 2T(n−1) + 2(n−1)
    nT(n) − (n+1)T(n−1) = 2(n−1)

    T(n)/(n+1) − T(n−1)/n = 2(n−1)/(n(n+1))

i.e., g(n) − g(n−1) = 2(n−1)/(n(n+1)), where g(n) = T(n)/(n+1).
Summing this telescoping series (with H(n) the nth harmonic number):

    T(n)/(n+1) = 2H(n) − 4n/(n+1)
    T(n) = 2(n+1)H(n) − 4n

Since H(n) = logₑn + O(1),

    T(n) = 2(n+1)logₑn + 1.16(n+1) − 4n
         ≈ 2n logₑn − 2.84n + O(log n)
         = 1.39n log₂n − O(n)

So the average number of comparisons during quicksort on n elements approaches 1.39n log₂n, and the average-case time complexity of quicksort is O(n log n).

Previous Years' Question

In quicksort, for sorting n elements, the (n/4)th smallest element is selected as pivot using an O(n) time algorithm. What is the worst-case time complexity of the quicksort? [CS 2009]
(A) θ(n)  (B) θ(n log n)  (C) θ(n²)  (D) θ(n² log n)
Solution: (B)
The recurrence becomes T(n) = T(n/4) + T(3n/4) + θ(n), which gives T(n) = θ(n log n).

Previous Years' Question

Randomised quicksort is an extension of quicksort where the pivot is chosen randomly. What is the worst-case complexity of sorting n numbers using randomised quicksort? [CS 2001]
(A) O(n)  (B) O(n log n)  (C) O(n²)  (D) O(n!)
Solution: (C)
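The closed form T(n) = 2(n+1)H(n) − 4n derived above can be verified exactly against the recurrence using rational arithmetic; a short sketch:

```python
from fractions import Fraction

# Exact check that T(n) = 2(n+1)H(n) - 4n solves the average-case recurrence
#   T(n) = (2/n) * sum_{i=0}^{n-1} T(i) + (n - 1),  with T(0) = 0.
def closed_form(n):
    H = sum(Fraction(1, k) for k in range(1, n + 1))   # nth harmonic number
    return 2 * (n + 1) * H - 4 * n

T = [Fraction(0)]                                      # T(0) = 0
for n in range(1, 60):
    T.append(Fraction(2, n) * sum(T) + (n - 1))        # evaluate the recurrence
    assert T[n] == closed_form(n)                      # exact rational equality
print("T(n) = 2(n+1)H(n) - 4n verified for n = 1..59")
```

Because Fraction keeps everything exact, the assertion is an identity check, not a floating-point approximation.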
Randomised quicksort:

Randomised_Partition (Arr, p, r)
{
    i = Random(p, r)    /* generates a random number between p and r, including p and r */
    exchange Arr[r] ↔ Arr[i]
    return Partition (Arr, p, r)
}

Randomised_Quicksort (Arr, p, r)
{
    if p < r
        q = Randomised_Partition (Arr, p, r)
        Randomised_Quicksort (Arr, p, q−1)
        Randomised_Quicksort (Arr, q+1, r)
}

Time complexity:
• Let T(n) be the expected number of comparisons needed to sort n numbers using quicksort.
• Every partition requires (n−1) comparisons, because (n−1) elements have to be compared with the pivot element, and each split position occurs with probability 1/n. Taking the expectation over the pivot position j:

    T(n) = (1/n) Σ_{j=1}^{n} ( T(j−1) + T(n−j) + n − 1 )

• Note that T(0) = 0, T(1) = 0.
• In Σ_{j=1}^{n} T(j−1) the terms vary from T(0) to T(n−1); in Σ_{j=1}^{n} T(n−j) they vary from T(n−1) to T(0). So every term T(0) to T(n−1) appears twice, and we can write:

    T(n) = (2/n) Σ_{i=0}^{n−1} T(i) + (n − 1)                 …(1)

    or,  Σ_{i=0}^{n−1} T(i) = (n/2)( T(n) − n + 1 )           …(2)

Replacing n by n−1 in (1):

    T(n−1) = (2/(n−1)) Σ_{i=0}^{n−2} T(i) + (n − 2)

    or,  Σ_{i=0}^{n−2} T(i) = ((n−1)/2)( T(n−1) − n + 2 )     …(3)

From (1) and (3):

    T(n) = (2/n) [ Σ_{i=0}^{n−2} T(i) + T(n−1) ] + n − 1
         = ((n−1)/n)( T(n−1) − n + 2 ) + (2/n) T(n−1) + n − 1
         = ((n+1)/n) T(n−1) + 2(n−1)/n

So, we got

    T(n) = ((n+1)/n) T(n−1) + 2(n−1)/n < ((n+1)/n) T(n−1) + 2

Expanding repeatedly:

    T(n) < ((n+1)/(n−1)) T(n−2) + 2(n+1)/n + 2
         < ((n+1)/(n−2)) T(n−3) + 2(n+1)( 1/(n−1) + 1/n ) + 2
         ⋮
         < ((n+1)/3) T(2) + 2(n+1)( 1/4 + … + 1/(n−1) + 1/n ) + 2
         = O(n log₂ n)

∴ The expected number of comparisons is O(n log n). But the worst-case time complexity of randomised quicksort is O(n²).
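The pseudocode above can be sketched as runnable code. This is a minimal version using the Lomuto partition scheme (the book's Partition routine is not shown in this chunk, so the partition body here is a standard stand-in):

```python
import random

# A minimal sketch of randomised quicksort following the pseudocode above.
def randomised_quicksort(arr, p=0, r=None):
    if r is None:
        r = len(arr) - 1
    if p < r:
        q = randomised_partition(arr, p, r)
        randomised_quicksort(arr, p, q - 1)
        randomised_quicksort(arr, q + 1, r)
    return arr

def randomised_partition(arr, p, r):
    i = random.randint(p, r)            # Random(p, r), inclusive of p and r
    arr[r], arr[i] = arr[i], arr[r]     # exchange Arr[r] <-> Arr[i]
    pivot, store = arr[r], p
    for j in range(p, r):               # the n-1 comparisons with the pivot
        if arr[j] <= pivot:
            arr[store], arr[j] = arr[j], arr[store]
            store += 1
    arr[store], arr[r] = arr[r], arr[store]
    return store                        # final pivot position q

data = [9, 3, 7, 1, 8, 2, 5]
assert randomised_quicksort(data) == [1, 2, 3, 5, 7, 8, 9]
```

Whatever the random pivot choices, the output is sorted; only the number of comparisons varies between runs.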
Greedy Techniques

• The greedy technique means selecting locally optimal solutions, which may lead us to a global optimal solution. In other words, at each step we choose the choice that looks best at the moment.
• Greedy algorithms rely on the optimal substructure property: an optimal solution contains optimal solutions to its subproblems.

Huffman Codes
• Huffman coding compresses data very effectively.
Example: for seven characters with frequencies 7, 11, 12, 19, 21, 25 and 35:
⚪ Fixed-length codewords need 3 bits per character, so the total number of bits is
   (7 + 11 + 12 + 19 + 21 + 25 + 35) × 3 = 390 bits.
⚪ For the variable-length (Huffman) code, the total number of bits is
   4×7 + 4×11 + 3×12 + 3×19 + 3×21 + 2×25 + 2×35 = 348 bits.
• The ideal prefix code is represented by the final tree, built by repeatedly merging the two least-frequent nodes.
• The straightforward path from the root to a letter is that letter's codeword.
[Fig.: successive merge steps for frequencies c:12, b:13, d:16, a:45, …, producing internal nodes 14, 25, 30 and the final Huffman tree with 0/1 edge labels.]

Analysis:
• We assume that the queue Q is implemented as a binary min-heap in the algorithm.
• In line 2, the queue Q is initialised in O(n) for a character set of n characters (using BUILD_MIN_HEAP).
• From lines 3–8, the for loop iterates exactly n−1 times, and each heap operation takes O(log n), which results in O(n log n) time.
• Hence, the total time taken is O(n log n), where n is the number of characters present in the character set C.

Previous Years' Question

Consider the string abbccddeee. Each letter in the string must be assigned a binary code satisfying the following properties:
1. For any two letters, the code assigned to one letter must not be a prefix of the code assigned to the other letter.
2. For any two letters of the same frequency, the letter which occurs earlier in the dictionary order is assigned a code whose length is at most the length of the code assigned to the other letter.
Among the set of all binary code assignments which satisfy the above two properties, what is the minimum length of the encoded string? [2021 (Set-2)]
(A) 21  (B) 23  (C) 25  (D) 30
Solution: (B)

Previous Years' Question

A message is made up entirely of characters from the set X = (P, Q, R, S, T). The table of probabilities of each character is shown below:

Character   Probability
P           0.22
Q           0.34
R           0.17
S           0.19
T           0.08
Total       1.00

A message of 100 characters over X is encoded using Huffman coding. Then the expected length of the encoded message in bits is __ [2017 (Set-2)]
(A) 225  (B) 226  (C) 227  (D) 228
Solution: (A)

Fractional Knapsack Problem
• Let's assume a thief went to rob a store containing n items. Let wᵢ be the weight of the iᵗʰ item and vᵢ the value of the iᵗʰ item. He has a knapsack that can handle a weight of w (an integer).
• The items can be picked in fractions, and the problem is to find the set of items whose weight sums up to w and whose value is maximum.

Example:
When the thief walks into the store, he notices the following items, and the knapsack holds w = 4 pounds:

Items    A         B         C
Cost     100       10        120
Weight   2 pounds  2 pounds  3 pounds

Since the thief can take a fraction of an item,
Solution = 2 pounds of item A + 2 pounds of item C
Value = 100 + 80 = 180

Greedy solution for fractional knapsack:
• Given a set of items I with weights w₁, w₂, …, wₙ and costs C₁, C₂, …, Cₙ.
Let P denote the problem of selecting items from I, with weight limit K, such that the resulting cost (value) is maximum.
(1) Calculate vᵢ = Cᵢ/wᵢ, for i = 1, 2, …, n.
(2) Sort the items by decreasing vᵢ. Let the sorted item sequence be 1, 2, …, i, …, n, and the corresponding value and weight be vᵢ and wᵢ, respectively.
(3) Let k be the current weight limit (initially, k = K). In each iteration, we choose item i from the head of the unselected list. If k ≥ wᵢ, we take item i and update k = k − wᵢ, then consider the next unselected item. If k < wᵢ, we take a fraction f = k/wᵢ (< 1) of item i, which weighs exactly k; then the algorithm is finished.
• The algorithm can take an item in fraction, i.e., an item need not be taken completely.

Time complexity:
• The sorting of all items in decreasing order of their cost/weight ratio is the most time-consuming step. It takes O(n log n) time.
• Iterating through every item in step (3) takes O(n) time if the items are already arranged in the required order.
Total time complexity = O(n log n) + O(n) = O(n log n)

Previous Years' Question

Consider the weights and values of items listed below. Note that there is only one unit of each item.

Item number   Weight (in kgs)   Value (in Rupees)
1             10                60
2             7                 28
3             4                 20
4             2                 24

The task is to pick a subset of these items such that their total weight is no more than 11 kgs and their total value is maximised. Moreover, no item may be split. The total value of items picked by an optimal algorithm is denoted by Vopt. A greedy algorithm sorts the items by their value-to-weight ratios in descending order and packs them greedily, starting from the first item in the ordered list. The total value of items picked by the greedy algorithm is denoted by Vgreedy. The value of Vopt − Vgreedy is ___ [NAT]
Range: [16 to 16]
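The greedy fractional-knapsack procedure above can be sketched in a few lines, run here on the store example (A: 100 for 2 lb, B: 10 for 2 lb, C: 120 for 3 lb, capacity 4 lb):

```python
# Sketch of the greedy fractional knapsack described in steps (1)-(3).
def fractional_knapsack(items, capacity):
    # items: list of (name, cost, weight); sort by cost/weight ratio, descending.
    items = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    total, k = 0.0, capacity
    for name, cost, weight in items:
        if k >= weight:                   # take the whole item
            total += cost
            k -= weight
        else:                             # take the fraction f = k / weight and stop
            total += cost * (k / weight)
            break
    return total

value = fractional_knapsack([("A", 100, 2), ("B", 10, 2), ("C", 120, 3)], 4)
assert abs(value - 180) < 1e-9            # 2 pounds of A + 2 pounds of C
```

Item A has ratio 50 and C has ratio 40, so the greedy order is A, C, B; only 2 of C's 3 pounds fit, contributing 120 × (2/3) = 80.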
Job sequencing with deadlines:
• At any time, the shortest deadline possible for any job is 1.
• When only one job can be scheduled at a time, how can the total profit be maximised?

Example:
Input:

Jobs      A    B    C    D
Deadline  1    4    1    1
Profit    10   20   30   40

Output: a schedule of the jobs giving the maximum profit.
• Consider the jobs in decreasing order of profit. For each job, find the latest open time slot on or before its deadline that gives the current greatest profit. Keep the job in this slot and mark this slot filled.

Time complexity:
• For sorting the jobs in decreasing order of profit: O(n log n).
• For each job, checking the open time slots between its deadline and 0 is O(n); for 'n' jobs → n · O(n) = O(n²).
∴ Total time complexity = O(n log n) + O(n²) = O(n²)

Graph representations

Adjacency-list representation:
• The graph G(V,E) is expressed as an array Adj of size |V|, with one element of Adj (a list) for each vertex in V.
• For each vertex u in the set of vertices V, there exists an entry Adj[u] with a list of all vertices adjacent to u, i.e., Adj[u] has links to all the vertices adjacent to u in G, one by one. For an undirected graph, each edge (u,v) ∈ E appears with vertex v in u's adjacency list and with u in v's list.
• Both directed and undirected graphs need Θ(V+E) memory for their adjacency-list representation.
• One of the disadvantages of the adjacency list is its O(V) time complexity to search whether an edge (u,v) exists in the graph or not.

[Fig.: an undirected graph G with 5 vertices and 7 edges, and its adjacency-list representation.]

Adjacency-matrix representation:
• In an adjacency matrix, a graph G(V,E) is represented as a matrix of size |V| × |V|, with

    aᵢⱼ = 1 if (i, j) ∈ E, 0 otherwise

• An adjacency matrix of a graph requires Θ(V²) space, independent of the number of edges in the graph.

Weighted graphs:
• Graphs in which a weight w is given to each edge (u, v).
• An adjacency matrix can represent a weighted graph also. For example, if G(V,E) is a weighted graph with edge-weight function w, we can simply store the weight w(u,v) of the edge (u,v) ∈ E as the entry in row u and column v of the adjacency matrix. For the 5-vertex, 7-edge graph above:

        1  2  3  4  5
    1   0  1  0  0  6
    2   1  0  3  4  2
    3   0  3  0  6  0
    4   0  4  6  0  8
    5   6  2  0  8  0

Minimum Spanning Trees
A minimum spanning tree T is a subset of the set of edges E in a connected and undirected graph G(V, E) such that it forms an acyclic graph connecting all the nodes, with

    w(T) = Σ_{(u,v)∈T} w(u, v)

minimised.

Definitions
• A complete graph is a graph in which each pair of graph vertices is connected by an edge. A complete graph with n vertices is represented by the symbol Kₙ.
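The greedy job-sequencing procedure above can be sketched as follows. The resulting schedule and total profit for the A–D example are not stated in the original and are derived here (slot 1 gets D, slot 4 gets B, for a profit of 60):

```python
# Greedy job sequencing: sort by profit (descending), then place each job in
# the latest free slot on or before its deadline.
def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit)
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_slot = max(d for _, d, _ in jobs)
    slots = [None] * (max_slot + 1)          # slots[1..max_slot]
    profit = 0
    for name, deadline, p in jobs:
        for t in range(deadline, 0, -1):     # try the latest free slot first
            if slots[t] is None:
                slots[t] = name
                profit += p
                break                        # job placed; otherwise rejected
    return slots[1:], profit

schedule, profit = job_sequencing(
    [("A", 1, 10), ("B", 4, 20), ("C", 1, 30), ("D", 1, 40)])
assert profit == 60 and schedule[0] == "D" and schedule[3] == "B"
```

Jobs C and A are rejected: they share deadline 1 with the more profitable D, and only one job fits in that slot.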
• For non-complete graphs, we can use Kirchhoff's theorem to find the number of spanning trees.

Kirchhoff's theorem:
1. For the given graph, create an adjacency matrix.
2. Substitute the degree of each node for the corresponding diagonal element.
3. Substitute −1 for all non-diagonal 1's.
4. Calculate the co-factor of any element; this value is the number of spanning trees.
Example: for the given graph, the co-factor = 16 − 4 − 4 = 8
∴ A total of 8 spanning trees are possible for the given graph.

The two commonly used algorithms to solve minimum spanning tree problems are:
1. Kruskal's algorithm
2. Prim's algorithm

Kruskal's algorithm:

MST_KRUSKAL (G, w)
1. A = φ
2. for each vertex v ∈ G.V
3.     MAKE_SET(v)
4. sort the edges of G.E into non-decreasing order by weight w
5. for each edge (u, v) ∈ G.E, taken in non-decreasing order by weight
6.     if FIND_SET(u) ≠ FIND_SET(v)
7.         A = A ∪ {(u, v)}
8.         UNION(u, v)
9. return A

Time complexity analysis:
• Initialising set A to empty in line 1 takes O(1) time.
• The for loop of lines 2–3 takes O(V) time (MAKE_SET operations).
• Sorting the edges takes O(E log E) in line 4.
• The for loop takes O(E log V) to perform the O(E) UNION and FIND_SET operations on the disjoint-set forest in lines 5–8.
∴ T(n) = O(1) + O(V) + O(E log E) + O(E log V)
       = O(E log E) = O(E log (V(V−1)/2)) = O(E log V)

Example:
Applying Kruskal's algorithm on this graph:
[Fig.: the example graph with edge weights 8, 7, 9, 4, 2, 11, 14, 7, 6, 8, 10, 1, 2; the edges are added to the tree in increasing order of weight.]
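The pseudocode above can be sketched compactly in code; this version uses a simple union-find forest with path compression (the example graph is a small hypothetical one, not the figure from the text):

```python
# A compact sketch of Kruskal's algorithm with a union-find (disjoint-set) forest.
def kruskal(n, edges):
    # n: number of vertices (0..n-1); edges: list of (weight, u, v)
    parent = list(range(n))                  # MAKE_SET for every vertex

    def find(x):                             # FIND_SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):            # non-decreasing order by weight
        ru, rv = find(u), find(v)
        if ru != rv:                         # u and v lie in different trees
            parent[ru] = rv                  # UNION
            mst.append((u, v, w))
            total += w
    return mst, total

# Example: a 4-vertex graph; the MST uses weights 1 + 2 + 3 = 6.
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
mst, total = kruskal(4, edges)
assert total == 6 and len(mst) == 3
```

Sorting dominates, matching the O(E log E) = O(E log V) bound in the analysis above.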
Prim's algorithm:

PRIM_MST (G, w, r)
1. for each u ∈ G.V
2.     u.key = ∞
3.     u.π = NIL
4. r.key = 0
5. Q = G.V

• Lines 1–5 in the pseudocode make the key attribute of each vertex initialised to ∞ except the root. Since the root is the first vertex to be processed, its key is initialised to 0.
• During execution, the partial minimum spanning tree is A = {(v, v·π) : v ∈ V − {r} − Q}.
• When the algorithm terminates, the min-priority queue Q is empty, and the minimum spanning tree A for G is thus A = {(v, v·π) : v ∈ V − {r}}.

Previous Years' Question
[The weighted graph for this question is not reproduced here; the question asks for the weight of its minimum spanning tree.]
(A) 29  (B) 31  (C) 38  (D) 41
Solution: (B)

The Bellman-Ford algorithm:

BELLMAN_FORD (G, w, s)
1. INITIALISE_SINGLE_SOURCE(G, s)
2. for i = 1 to (number of vertices) − 1
3.     for every edge (u, v) ∈ G.E
4.         RELAX (u, v, w)
5. for every edge (u, v) ∈ G.E (edges of graph)
6.     if v.d > u.d + w(u, v)
7.         return FALSE
8. return TRUE

• The Bellman-Ford algorithm is a single-source shortest-path algorithm. It works on digraphs with negatively weighted edges, also.
• The Bellman-Ford algorithm returns a boolean value indicating whether a negative-weight cycle is present or not in the graph.
• The Bellman-Ford algorithm runs in time O(VE), since the initialisation in line 1 takes θ(V) time, each of the |V|−1 passes over the edges in lines 2–4 takes θ(E) time, and the for loop of lines 5–7 takes O(E) time.

Previous Years' Question
[The graph for this shortest-path question is not reproduced here.]
(A) SDT  (B) SBDT  (C) SACDT  (D) SACET
Solution: (D)

Solved examples:

Fractional knapsack (greedy method). Sort the objects in non-increasing order of value/weight ratio:

Item No.  Weight (in kg)  Value (in rupees)  Value/weight
2         4               28                 7
4         3               18                 6
3         5               25                 5
5         3               9                  3

The task is to pick a subset of these items using the greedy method such that their total weight is no more than 15 kgs and their total value is maximised. The total value of items picked by the greedy algorithm is denoted by Vgreedy. The value of Vgreedy is ___
Solution: 80
First we pick item 2, then item 4, then item 3, and at last item 5. The total weight of these objects is 15 kg and the total value is 80. So, the overall profit is 80.

Another instance: the task is to pick a subset of items using the greedy method such that their total weight is no more than 4 kgs and their total value is maximised. The value of Vgreedy is ___
Solution: 8
Here, the value/weight ratio of both items is 2, so we can take either item first. Let's take item 1 first. Hence, the total capacity used is 4, and the profit is 8.

3. The characters a to f have the set of frequencies shown below:
a:40, b:30, c:20, d:5, e:3, f:2
A Huffman code is used to represent the characters. What is the weighted external path length?
Solution: 205
Step 1: 40, 30, 20, 5, 3, 2 — repeatedly merge the two smallest frequencies (3 + 2 = 5, then 5 + 5 = 10, 10 + 20 = 30, 30 + 30 = 60, 60 + 40 = 100).
Weighted external path length = 1 × 40 + 2 × 30 + 3 × 20 + 4 × 5 + 5 × 3 + 5 × 2
= 40 + 60 + 60 + 20 + 15 + 10
= 205

Job sequencing:

Jobs      J1    J2    J3    J4    J5    J6
Deadline  5     3     3     2     4     2
Profit    200   180   190   300   120   100

The Gantt chart is shown below:
[Fig.: Gantt chart — slots 1 to 5 filled with J2, J4, J3, J5, J1.]
Profit = 990

Huffman decoding exercise:
[Fig.: Huffman-tree construction steps over the characters c, d, e, f, g, h; intermediate merge rows such as (d e f g h: 7 4 6 9 14 22), (f g h: 7 10 9 14 22) and (g h: 10 16 14 22) appear in the original figures.]
So, the Huffman code is
d: 000
e: 001
g: 01
h: 11
f: 101
c: 1000
a: 10010
b: 10011
Group the encoded string into characters:
11 000 001 01 101 1000 10010 10011
h  d   e   g  f   c    a     b

7. What is the number of spanning trees possible from the given graph?
Solution: 125
Since it is a complete graph, the number of spanning trees for a complete graph Kₙ is n^(n−2) (where n = number of vertices); here n = 5, so 5³ = 125.

[Fig.: MST construction steps — Step 2: now add either (B, C) or (A, E); we are adding (A, E). Step 4: now we can add either (B, D) or (D, E), so we will get two minimum spanning trees.]
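The Huffman costs quoted in this chapter can all be checked with a min-heap: the total weighted external path length equals the sum of the merged weights of the internal nodes. A small sketch:

```python
import heapq

# Total weighted external path length of a Huffman code (= total encoded bits):
# every merge adds one bit to the depth of each symbol below it, so summing the
# merged weights over all merges gives the total cost.
def huffman_cost(freqs):
    heap = list(freqs)
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)      # the two smallest frequencies
        b = heapq.heappop(heap)
        cost += a + b
        heapq.heappush(heap, a + b)
    return cost

# a..f example above: weighted external path length = 205.
assert huffman_cost([40, 30, 20, 5, 3, 2]) == 205
# Earlier seven-frequency example: 348 bits for the variable-length code.
assert huffman_cost([7, 11, 12, 19, 21, 25, 35]) == 348
```

The same helper confirms the two Previous Years' Questions: `huffman_cost([1, 2, 2, 2, 3])` is 23 (abbccddeee) and `huffman_cost([22, 34, 17, 19, 8])` is 225.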
Graph-Based Algorithms

• Line 17 adds each vertex discovered into the queue, and line 18 marks the explored vertex black, i.e., u.colour = black.
• This procedure (from line 10 to line 18) is done until the queue gets empty.
• The order of vertices resulting from BFS depends on the order of adjacent vertices visited in line 12: the BFS may be different, but the distances computed by the algorithm do not.

Analysis
• All the vertices in the graph are white at first except the source S, and the vertices are not changed back into white further in the algorithm; vertices are enqueued and dequeued only once, which is tested in line 13.
• The enqueuing and dequeuing processes of a vertex take O(1) time, which in turn takes O(V) time overall.
• The algorithm scans the adjacency list of each vertex, which takes O(E) time.
• Since initialisation takes O(V) time, the total time taken is O(V+E).
• Hence, BFS runs in time equal to the size of the adjacency-list representation of G, i.e., O(V+E).

Breadth-First Traversal
BFT can be executed using BFS.

Algorithm:
BFT (G, n)  /* G is the graph, and 'n' is the number of vertices */
{
    for i = 1 to n do
        visited[i] = 0;   /* visited[] is a global array of vertices. A '0' value
                             indicates not visited and '1' indicates visited. */
    for i = 1 to n do
        if (visited[i] == 0) then
            BFS(i);
}

• For a given graph, BFT calls BFS on every unvisited node. When BFS is called on a single node, we are working on a smaller part of the graph and then continue with the remaining part.
• So, it is as good as running the breadth-first search on the entire graph exactly once.

Connectivity check:
• We can initialise the visited array of all the vertices to zero (i.e., visited[1 to n] = 0) and then run the breadth-first search starting from one of the vertices; after the search finishes, we examine the visited array.
• If the visited[] array indicates that some of the vertices are not visited, then the reason is definitely that the graph is not connected.
• Therefore, by using breadth-first search, we can say whether the graph is connected or not.

Time complexity:
• For the time complexity of breadth-first traversal (BFT), we have to do an aggregate analysis, which considers the overall work done.
• If we use the adjacency-list representation, then every node has an adjacency list. Whenever we call BFS, some part of the adjacency list will be completely visited, exactly once.
• Next time, when we call BFS on the remaining nodes, the remaining adjacency lists will also be visited exactly once.
• Therefore, overall, all the nodes will be visited exactly once. In the case of an undirected graph, the adjacency lists contain 2E nodes in total.
• Initialisation of the visited[] array takes O(V) time.
• Therefore, the total time complexity = O(V + 2E) = O(V + E).
• The time complexity is the same for the directed graph as well as for the undirected graph.
• Space complexity is also going to be the same as breadth-first search.

Conclusion
• The time and space complexity of breadth-first traversal is the same as breadth-first search.

Applications of Breadth-First Traversal
• Shortest path and minimum spanning tree for unweighted graphs
• Path finding
• To test if a graph is bipartite
• Finding all nodes within one connected component
• Cycle detection in an undirected graph
• Social networking websites
• Crawlers in search engines

Previous Years' Question

The Breadth-First Search (BFS) algorithm has been implemented using the queue data structure. Which one of the following is a possible order of visiting the nodes in the graph below?

    M   N   O
    R   Q   P

(A) MNOPQR  (B) NQMPOR  (C) QMNPRO  (D) QMNPOR
Solution: (D)  [2017 (Set-2)]

Traversal vs. Search
• Traversal of a graph or tree involves examining every node in the graph or tree; a traversal is a search which involves all the nodes in the graph.
• A search may or may not involve all the nodes while visiting the graph in a systematic manner.
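The BFT-based connectivity check described above can be sketched as follows (0-indexed vertices and a boolean result instead of the book's visited[1..n] array):

```python
from collections import deque

# BFS from one source, recording visits in a shared visited array.
def bfs(adj, source, visited):
    visited[source] = 1
    q = deque([source])
    order = [source]
    while q:
        u = q.popleft()
        for v in adj[u]:                 # scan u's adjacency list exactly once
            if not visited[v]:
                visited[v] = 1
                q.append(v)
                order.append(v)
    return order

def is_connected(adj):
    visited = [0] * len(adj)
    bfs(adj, 0, visited)
    return all(visited)                  # an unvisited vertex => not connected

triangle = [[1, 2], [0, 2], [0, 1]]      # one component
two_parts = [[1], [0], [3], [2]]         # two components
assert is_connected(triangle) is True
assert is_connected(two_parts) is False
```

Running the full BFT instead (calling `bfs` from every unvisited vertex) visits every adjacency list once, giving the O(V + E) aggregate bound from the text.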
Depth First Search (DFS)
• Depth-first search (DFS) is an algorithm for searching the vertices of a graph (traversing a graph, to be specific) that works by exploring as far into the graph as possible before backtracking.
• Input: graph G(V,E), where V is the set of vertices and E is the set of edges.
• To keep track of progress, a depth-first search colours each vertex:
⚪ White: vertices are white before they are discovered.
⚪ Grey: vertices are grey when they are discovered, but their outgoing edges are still in the process of being explored.
⚪ Black: vertices are black when the entire subtree starting at the vertex has been explored.
• The moment DFS discovers a vertex v in the adjacency list of an already visited vertex u, the algorithm marks u as the predecessor of v.

The code below is DFS with graph G (directed or undirected) as input. Time is a global variable.

DFS (G):
1. for each vertex u ∈ G.V
2.     u.color ← WHITE
3.     u.π ← NULL
4. time ← 0
5. for each vertex u ∈ G.V
6.     if u.color is WHITE
7.         DFS-VISIT(G, u)

DFS-VISIT (G, u):
1. time ← time + 1    // white vertex u has just been discovered
2. u.d ← time
3. u.color ← GREY
4. for each v ∈ G.Adj[u]    // explore edge (u, v)
5.     if v.color is WHITE
6.         v.π ← u
7.         DFS-VISIT(G, v)
8. u.color ← BLACK    // blacken u; it is finished
9. time ← time + 1
10. u.f ← time

• After all the vertices adjacent to u are explored, lines 8–10 of the algorithm colour the vertex black, increment time, and note u.f.

Note:
The order of vertices returned by DFS depends on the order of vertices discovered in line 5 of the DFS algorithm, and line 4 of the DFS-VISIT algorithm.

Analysis:
• Apart from the time to execute the calls of DFS_VISIT, the loops in lines 1–3 and 5–7 of DFS give Θ(V) time complexity.
• The algorithm DFS_VISIT discovers every vertex exactly once: the vertex on which DFS_VISIT is called must be a white vertex, and DFS_VISIT colours it grey in the very first step.
• The loop of lines 4–7 executes |Adj[v]| times in the execution of DFS-VISIT, and Σ_{v∈V} |Adj[v]| = Θ(E), so the total time for the loop is Θ(E); DFS overall is Θ(V + E).
• With an adjacency-matrix representation, BFS = BFT = DFS = DFT = O(V²).
• Space complexity of BFS = BFT = DFS = DFT = O(V).

Note:
If there is a back edge in a DFS, the graph has a cycle. Hence DFS can detect a cycle in a graph.

Applications of Depth First Search
• Detecting a cycle in a graph.
• Path finding: the DFS algorithm can be tweaked to find a path between two given vertices, u and z.
• Used as the logic behind topological sorting.
• To test if a graph is bipartite.
• Finding strongly connected components of a graph.

Tree edges, back edges and cross edges in a DFS of a graph
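The DFS/DFS-VISIT pseudocode above can be sketched directly, with colours, predecessors, and the discovery/finishing timestamps u.d and u.f:

```python
# Sketch of DFS with colours and d/f timestamps, mirroring the pseudocode
# (recursion takes the place of an explicit stack).
def dfs(adj):
    n = len(adj)
    color = ["WHITE"] * n
    d, f, pi = [0] * n, [0] * n, [None] * n
    time = 0

    def visit(u):
        nonlocal time
        time += 1
        d[u] = time                 # u.d: discovery time
        color[u] = "GREY"
        for v in adj[u]:            # explore edge (u, v)
            if color[v] == "WHITE":
                pi[v] = u
                visit(v)
        color[u] = "BLACK"          # u is finished
        time += 1
        f[u] = time                 # u.f: finishing time

    for u in range(n):
        if color[u] == "WHITE":
            visit(u)
    return d, f, pi

# Chain 0 -> 1 -> 2, plus an isolated vertex 3.
d, f, pi = dfs([[1], [2], [], []])
assert d[0] == 1 and f[0] == 6      # the whole chain nests inside [d[0], f[0]]
assert pi == [None, 0, 1, None]
```

The nesting of the [d, f] intervals is what topological sort (next section) exploits: a vertex finishes only after everything reachable from it has finished.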
Difference between DFS and BFS

Depth-First Search                              Breadth-First Search
1. Backtracking is possible from a dead end.    1. Backtracking is not possible.
2. A stack is used to traverse the vertices     2. A queue is used to traverse the vertices
   in LIFO order.                                  in FIFO order.
3. Search is done in one particular direction.  3. The vertices at the same level are
                                                   maintained in parallel.

Topological sort:
• DFS is the logic behind topological sort in a directed acyclic graph (DAG).
• A topological sort of a DAG G = (V, E) is a linear ordering of all its vertices, such that if G contains an edge (u, v), then u appears before v in the ordering.
• If a cycle exists in a graph, there will be no linear ordering possible.

Topological_Sorting (G)
Step 1: Call DFS on the graph, so that it calculates the finishing time of each vertex.
Step 2: Based on the finishing time, insert the vertices into a linked list and return the list of vertices.

Analysis
• DFS takes Θ(V + E) time, and O(V) to add all vertices one by one into the linked list.
• Therefore, the topological sort takes O(V + E) time in total.

Previous Years' Question

Consider the DAG with V = {1, 2, 3, 4, 5, 6}, shown below. Which of the following is NOT a topological ordering? [2007]

      2     5
    1         4
      3     6

(A) 1 2 3 4 5 6  (B) 1 3 2 4 5 6
(C) 1 3 2 4 6 5  (D) 3 2 4 1 6 5
Solution: (D)

DAG-SHORTEST-PATHS (G, w, s)
a) sort the vertices of G topologically
b) start with the source: INITIALISE-SINGLE-SOURCE(G, s)
c) for each vertex u, taken in topologically sorted order
d)     for each vertex v ∈ G.Adj[u]
e)         RELAX(u, v, w)

Analysis
• The topological sort of line a) takes Θ(V + E) time.
• The call of INITIALISE-SINGLE-SOURCE in line b) takes Θ(V) time.
• The for loop of lines c)–e) makes one iteration per vertex; altogether, the for loop of lines d)–e) relaxes each edge exactly once.
• Because each iteration of the inner for loop takes Θ(1) time, the total time is Θ(V + E).

Example of a directed acyclic graph (DAG):
[Fig.: a DAG with vertices r, s, t, x, y, z and edge weights 6, 1, 5, 2, 7, −1, −2, 3, 4, 2, shown with its shortest-path estimates before and after the relaxation passes from the source.]

Solved Examples

1. Consider the following graph (G):

    C   D   F
        G
    B   A   E

Topological sort can be applied to which of the following?
(C) Directed acyclic graph
(D) None of the above
Solution: (C)
Topological sort can be applied to directed acyclic graphs.

2. The number of cut vertices or articulation points of the graph above is ___
Solution: 2
In an undirected graph, a cut vertex (or articulation point) is a vertex such that if we remove it, then the graph splits into two disconnected components.
Removal of the "D" vertex divides the graph into two disconnected components.

3. Consider the directed acyclic graph with V = {V1, V2, V3, V5, V6, V7} shown below.

    V2      V5
        V1
    V3      V7
Solution: (C)
Here, every edge is a dependency: an edge V2 → V1 means that V2 comes before V1 in the topological ordering. Initially, V2 and V5 do not have any dependency, so either of them can be processed independently. So, we start with either V2 or V5, and the topological orderings are:
V2 V5 V1 V7 V3 V6
or
V5 V2 V1 V7 V3 V6
Other topological orderings are also possible, but V1 V5 V2 V7 V3 V6 is not a correct topological ordering, because it places V1 before V2 and V5.
Hence, (C) is the correct option.

4. Consider the following sequences of nodes for the undirected graph given below.
1. a b d h e f c g    2. a c g h f e b d
3. a b d h f c g e    4. a b d h g c f e
A Depth-First Search (DFS) is started at node a using a stack data structure, and the nodes are listed in the order they are first visited.
[Fig.: undirected graph with vertex a at the top; b and c below it; d, e, f, g below them; h at the bottom.]
Which of the above is/are not possible output(s)?
(A) 1, 3, and 4 only
(B) 1 and 3 only
(C) 1, 2, 3, and 4 only
(D) None of the above
Solution: (D)
All the given sequences of nodes are possible outputs of Depth-First Search (DFS).

5. The Breadth-First Search algorithm has been implemented using the queue data structure. The possible order(s) of visiting the nodes of the following graph is/are (MSQ):
[Fig.: the same graph as in Example 4 — a; b, c; d, e, f, g; h.]
(A) abcdefgh
(B) acbfgdeh
(C) abcedgfh
(D) abchdefg
Solution: (A), (B), and (C)
The sequences of nodes given in options (A), (B), and (C) are possible orders of visiting the nodes using breadth-first search, because breadth-first search visits the "breadth" first, i.e., when it visits a node, it visits that node's neighbour nodes (children) first before moving on to the next-level neighbours.

Chapter Summary

● BFS – Simple algorithm for traversing a graph (breadth-wise).
● Traversal vs. Search – A traversal goes through each vertex, but a search may or may not.
● DFS – Algorithm used for traversing a graph (depth-wise).

Difference between DFS and BFS

Depth-First Search:
1. Backtracking from a dead end is possible.
2. Vertices from which exploration is incomplete are processed in a LIFO order.
3. Search is done in one particular direction.

Breadth-First Search:
1. Backtracking is not possible.
2. The vertices to be explored are organised as a FIFO queue.
3. The vertices at the same level are maintained in parallel.

● Tree edge, Back edge, and Cross edge in DFS of a graph:
Tree edge: Formed after applying DFS on a graph. Red edges in the figure are tree edges.
Forward edge: An edge (u, v) in which v is a descendant of u. e.g.: 9.
Back edge: An edge (u, v) in which v is an ancestor of u. e.g.: 10.
Cross edge: An edge (u, v) in which v is neither an ancestor nor a descendant of u. e.g.: 5.
● Topological Sort: "A topological sort of a DAG G = (V, E) is a linear ordering of all its vertices, such that if G contains an edge (u, v), then u appears before v in the ordering."

Graph-Based Algorithms
5  Dynamic Programming

● By using dynamic programming, we can avoid recalculating the same result multiple times.
● As in the divide-and-conquer technique, we divide bigger problems into simpler subproblems as the algorithm requires.
● Dynamic programming works with both top-down and bottom-up approaches.
● We usually start by dividing the large task into smaller subtasks, storing the results of those subtasks to be used by other subtasks at later stages of problem solving, until we get the desired solution.
● Using this technique, the complexity of various algorithms can be reduced.
● The greedy method generates only one decision sequence at a time, whereas dynamic programming takes care of all the possibilities.
● Problems with overlapping subproblems are generally solved by the dynamic programming technique; problems that do not have this kind of structure are difficult to solve using dynamic programming.
● Dynamic programming follows four steps:
  ○ Identify the pattern of a suitable solution.
  ○ Recursively define the solution.
  ○ Use the bottom-up technique to calculate the value of the solution.
  ○ Build a suitable solution from the calculated result.
● Steps 1–3 form the foundation of the DP technique.
● If we need only the value of a solution, we can skip step 4.
● For step 4, we store additional information during step 3 so that we can easily construct a suitable solution.

Understanding Dynamic Programming

Before jumping to problems, let us try to understand how dynamic programming works through examples.

Fibonacci series:
A series of numbers in which the present number is the sum of the last two numbers. The Fibonacci series is defined as follows:

Fib(n) = 0,                     if n = 0
         1,                     if n = 1
         Fib(n-1) + Fib(n-2),   if n > 1

Recurrence relation and time complexity:
● The recursive call T(n) is divided into T(n-1) and T(n-2) until the base conditions T(1) and T(0) are reached.
● Therefore, the depth of the recursion tree will be O(n).
● A full binary tree of depth n has 2^n leaves. Since each recursive call takes O(1) time, the total time is O(2^n).

[Fig. 5.2: recursion tree of fib(5), expanding into fib(4) and fib(3) and so on, down to fib(1) and fib(0).]

● In the above case, fib(2) was evaluated three times (overlapping of subproblems).
● If n is large, then other values of fib (subproblems) are also re-evaluated many times, which leads to an exponential-time algorithm.
● Instead of solving the same problem again and again, it is better to store the result once and reuse it; this is a way to reduce the complexity.

Improving:
● Now, we see how DP brings this problem's complexity down from exponential to polynomial.
● We have two methods for this:

1. Bottom-up approach: Start with the lower values of the input and slowly calculate higher values.

int fib[n];
int fib(int n)
{
    if (n <= 1)
        return n;
    fib[0] ← 0
    fib[1] ← 1
    for i ← 2 to n
        fib[i] ← fib[i-1] + fib[i-2];
    return fib[n];
}

2. Top-down approach (memoization): We simply save the values of recursive calls and use them in the future.
● Memoization works like this: Begin with a recursive function and add a table that maps the parameters of the function to the results calculated by the function.
● When the function is called with parameter values identical to ones it has already been called with, the stored value is returned from the table.

int fib[n];
for (i ← 0 to n)
    fib[i] ← 0;   /* initialisation */

int fib(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    if (fib[n] != 0)   /* check if already calculated */
        return fib[n];
    return fib[n] ← fib(n-1) + fib(n-2);
}

● In both methods, the Fibonacci series complexity is reduced to O(n), because we reuse already computed values from the table.
TC: O(n)
SC: O(n)
Note: Both approaches can be applied to dynamic programming problems.

Matrix Chain Multiplication

Problem:
A series of matrices is given, A1 × A2 × A3 × ... × An, with their dimensions. What is the right way to parenthesise them so that the total number of multiplications is minimum? Assume that we are using standard matrix multiplication and not Strassen's matrix multiplication algorithm.

Input:
A chain of matrices A1 × A2 × A3 × ... × An, where Ai has dimensions (size) Pi-1 × Pi. The values are specified in an array P.

Goal:
Finding a way of multiplying the matrices (in parenthesised form) such that it gives the minimum number of multiplications.

Solution:
● We know from mathematics that matrix multiplication is associative.
● Parenthesising does not have any effect on the result.
● Here, the number of subproblems is (nw), and the time required to calculate each subproblem is O(1). Hence, time complexity = (nw) × O(1) = O(nw).

ks(1, 5) = max( ks(0, 5), p1 + ks(0, 4) ) = max(0, 10 + 0) = 10
ks(1, 6) = max( ks(0, 6), p1 + ks(0, 5) ) = max(0, 10 + 0) = 10
ks(2, 1) = ks(1, 1) = 10
ks(2, 2) = max( ks(1, 2), p2 + ks(1, 0) ) = max(10, 12 + 0) = 12
ks(2, 3) = max( ks(1, 3), p2 + ks(1, 1) ) = max(10, 12 + 10) = 22
ks(2, 4) = max( ks(1, 4), p2 + ks(1, 2) ) = max(10, 12 + 10) = 22
ks(2, 5) = max( ks(1, 5), p2 + ks(1, 3) ) = max(10, 12 + 10) = 22
ks(2, 6) = max( ks(1, 6), p2 + ks(1, 4) ) = max(10, 12 + 10) = 22
ks(3, 1) = ks(2, 1) = 10
ks(3, 2) = ks(2, 2) = 12
ks(3, 3) = ks(2, 3) = 22
ks(3, 4) = max( ks(2, 4), p3 + ks(2, 0) ) = max(22, 28 + 0) = 28
ks(3, 5) = max( ks(2, 5), p3 + ks(2, 1) ) = max(22, 28 + 10) = 38
ks(3, 6) = max( ks(2, 6), p3 + ks(2, 2) ) = max(22, 28 + 12) = 40

Time complexity of 0/1 knapsack = minimum of O(2^n) and O(nw) (n is the number of objects and w is the total capacity of the knapsack).

Optimal binary search tree:
● Among the two trees, the first tree gives the better result. Here, "r" indicates the position of the root element in the array.
● Here, frequencies are not given, and if we want to search all elements, then the above simple calculation is enough for deciding the best tree.
● If the frequencies are given, then each comparison must be weighted by its frequency.
● If we replace the left-subtree and right-subtree costs with their corresponding recursive calls, then the expression becomes a recurrence that tries each r as the root and adds the optimal costs of the two subtrees.

Travelling salesman problem:

T(1, {2, 3, 4}) = min( (1,2) + T(2, {3,4}), (1,3) + T(3, {2,4}), (1,4) + T(4, {2,3}) )
                = min(1 + 4, 2 + 7, 3 + 4) = 5
T(2, {3, 4}) = min( (2,3) + T(3, {4}), (2,4) + T(4, {3}) ) = min(4 + 8, 2 + 2) = 4
T(3, {2, 4}) = min( (3,4) + T(4, {2}), (3,2) + T(2, {4}) ) = min(5 + 5, 2 + 5) = 7
T(4, {2, 3}) = min( (4,2) + T(2, {3}), (4,3) + T(3, {2}) ) = min(4 + 5, 1 + 3) = 4
T(3, {4}) = (3,4) + T(4, 1) = 5 + 3 = 8
T(4, {3}) = (4,3) + T(3, 1) = 1 + 1 = 2
T(2, {4}) = (2,4) + T(4, 1) = 2 + 3 = 5
T(4, {2}) = (4,2) + T(2, 1) = 4 + 1 = 5
T(2, {3}) = (2,3) + T(3, 1) = 4 + 1 = 5
T(3, {2}) = (3,2) + T(2, 1) = 2 + 1 = 3

So, the minimum cost is 5 and the path is 1 → 2 → 4 → 3 → 1.

Floyd-Warshall:
● The resulting running time is O(V²E), which for a dense graph is O(V⁴).
● P1 is a shortest path from vertex i to vertex k with all intermediate vertices in the set {1, 2, …, k}. In fact, we can make a slightly better observation.
● In the Floyd-Warshall algorithm, negative-weight edges are allowed.

5. for i = 1 to n
6.     for j = 1 to n
7.         dij(k) = min( dij(k−1), dik(k−1) + dkj(k−1) )
8. return D(n)

e.g.,

D(5) =
  0    1   -3    2   -4
  3    0   -4    1   -1
  7    4    0    5    3
  2   -1   -5    0   -2
  8    5    1    6    0

Space complexity:
Here, the space complexity is O(n³), but we can reduce it to O(n²) by keeping a single n × n matrix and updating it in place.

Solved example (all-pairs shortest paths):

(B)
 0  11   1   6
11   0   7   3
 1   7   0   2
 6   3   2   0

(C)
 0  11   1   6
11   0   7   3
 1   7   0   2
 3   3   2   0

(D) None of the above

Solution: (A)
Let Di denote the matrix of shortest-path distances between every pair of vertices such that each path is allowed to pass only through intermediate vertices 1 to i. So,

D0 =
    1   2   3   4
1   0  11   1   6
2  11   0   7   3
3   1   7   0   2
4   6   3   2   0

D2 =
    1   2   3   4
1   0  11   1   6
2  11   0   7   3
3   1   7   0   2
4   6   3   2   0

D3 =
    1   2   3   4
1   0   8   1   3
2   8   0   7   3
3   1   7   0   2
4   3   3   2   0

D4 =
    1   2   3   4
1   0   6   1   3
2   6   0   5   3
3   1   5   0   2
4   3   3   2   0

● Subset-sum problem: Given a sequence of n positive numbers A1, A2, …, An, give an algorithm which checks whether there exists a subset of A whose sum of all numbers is T.
● The time complexity of the brute-force method for the subset-sum problem is O(2^n).
● Let us assume that SS(i, S) denotes whether some subset of a1, …, ai has a sum equal to some number 'S'.
● The recursive equation of subset-sum is given below:
SS(i, S) = SS(i−1, S) OR SS(i−1, S − ai)
● Time complexity of subset-sum = minimum of O(2^n) and O(nw).

8. Subset-sum can be computed by:
(A) Row-major order only
All-Pairs Shortest Paths: Floyd-Warshall

Problem: Given a weighted directed graph G = (V, E), where V = {1, 2, …, n}, find the shortest path between all pairs of nodes in the graph.

● The running time of Dijkstra's algorithm, if we use it to find all-pairs shortest paths, is O(VElogV).
● The running time of the Bellman-Ford algorithm, if we use it to find all-pairs shortest paths, is O(V²E).
● Let dij(k) be the weight of a shortest path from vertex i to vertex j for which all intermediate vertices are in the set {1, 2, …, k}.

The recurrence equation is:

dij(k) = wij,                                    if k = 0
         min( dij(k−1), dik(k−1) + dkj(k−1) ),   if k ≥ 1