Rubrics for the course
• CAT-1 : 15 marks
• CAT-2 : 15 marks
• FAT : 40 marks
• Quiz-1: 05 marks (online)
• Quiz-2: 05 marks (online)
• Quiz-3: 05 marks (offline)
• DA-1: 10 marks (easy-intermediate)
• DA-2: 05 marks (very easy)
Why Learn DSA?
• DSA (Data Structures and Algorithms) is the study of organizing
data efficiently using data structures like arrays, stacks, and trees,
paired with step-by-step procedures (or algorithms) to solve
problems effectively. Data structures manage how data is stored
and accessed, while algorithms focus on processing this data.
• Learning DSA boosts your problem-solving abilities and makes you
a stronger programmer.
• DSA is the foundation of almost every piece of software: GPS, search
engines, AI chatbots, gaming apps, databases, web applications,
etc.
• Top companies like Google, Microsoft, Amazon, Apple,
Meta, and many others focus heavily on DSA in interviews.
What is DATA?
• Data is all around us.
• Any information available to us is data.
• Information is knowledge, and to gain that information or
knowledge we need to refer to data.
• For example: while studying we refer to books; books have
information, or we can say books have the data that we can
refer to for the exam.
What is a Data Structure?
• The information or knowledge needs to be stored
somewhere in order to refer to it in the future.
• From the previous example, the book is a data structure
because it stores the data that is important to us.
• Therefore, a data structure is a storage structure that helps
us store data.
• Examples of data structures are: arrays, stacks, queues,
priority queues, hash maps/hash tables, trees, graphs,
heaps, AVL trees, etc.
• We will study these data structures in detail in this
course.
What is an Algorithm?
• When you prepare tea, for instance, there is a certain set of
instructions we perform in order to prepare it.
For example:
1. Igniting the gas
2. Pouring milk and water in a sauce pan
3. Heating the pan
4. Adding ginger
5. Boiling it for 5-10 mins
6. Serving the tea in a cup
7. Drinking the tea while it’s still hot
What is an Algorithm?
• An algorithm is a similar step-by-step sequence of instructions that
is followed in order to solve a problem.
• Eg: Sort the given list/array [7,3,6,4,5,2,9,1,8]
for (i = 0; i < n; i++) {
    for (j = i + 1; j < n; j++) {
        if (list[i] > list[j]) {
            swap(list[i], list[j]);
        }
    }
}
// A simple exchange sort (a selection-sort variant; often loosely called bubble sort)
What is an Algorithm?
• On the previous page, the steps written in order to sort the list
are known as an algorithm.
• Two very well-known classes of algorithms are searching
and sorting algorithms.
• Searching algorithms are used to search for an
element in a data structure. Eg: searching for a word in a
dictionary.
• Sorting algorithms are used to arrange data items in a data
structure in increasing or decreasing order.
FIND OUT WHAT MONOTONIC
INCREASING/DECREASING MEANS
FIND OUT WHAT LOCAL SEARCHING IS
Time Complexity and Space
Complexity
• Often there is more than one way to solve a
problem, using different algorithms, and we need a way to
compare them.
• There are situations where we would like to know how much
time and how many resources an algorithm might take when
implemented.
• Time Complexity: The time complexity of an algorithm
quantifies the amount of time taken by the algorithm to run
as a function of the length of the input. Note that the
running time is expressed as a function of the length of the
input, not as the actual execution time on the machine the
algorithm runs on.
• The time required by the algorithm to solve a given problem is
therefore measured in terms of the input size n.
Example 1: Addition of two scalar variables.
Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C
The addition of two scalar numbers requires one addition operation,
so the time complexity of this algorithm is constant: T(n) = O(1).
To calculate the time complexity of an algorithm, it is assumed that
a constant time c is taken to execute one operation; the total number
of operations for an input of length N is then counted.
int a[n];
for (int i = 0; i < n; i++)
    cin >> a[i];
// Check whether any pair of distinct elements sums to z
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (i != j && a[i] + a[j] == z)
            return true;
return false;
• N*c operations are required for reading the input.
• The outer (i) loop runs N times.
• For each i, the inner (j) loop runs N times.
So the total execution time is N*c + N*N*c + c.
The highest-order term is taken (without constants), which is N*N in this case, so the complexity is O(N²).
int count = 0;
for(int i = N; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count++;
This is a tricky case. At first look, it seems like the complexity is O(N * log
N): N for the j loop and log(N) for the i loop. But that is wrong. Let's see why.
Think about how many times count++ will run.
•When i = N, it will run N times.
•When i = N / 2, it will run N / 2 times.
•When i = N / 4, it will run N / 4 times.
•And so on.
The total number of times count++ runs is N + N/2 + N/4 + ... + 1 < 2 * N. So
the time complexity will be O(N).
Asymptotic Notations for Analysis of Algorithms
• Asymptotic Notations are mathematical tools used to analyze the performance of
algorithms by understanding how their efficiency changes as the input size grows.
• These notations provide a concise way to express the behavior of an algorithm's
time or space complexity as the input size approaches infinity.
• Rather than comparing algorithms directly, asymptotic analysis focuses on
understanding the relative growth rates of algorithms' complexities.
• It enables comparisons of algorithms' efficiency by abstracting away machine-
specific constants and implementation details, focusing instead on fundamental
trends.
• Asymptotic analysis allows for the comparison of algorithms' space and time
complexities by examining their performance characteristics as the input size varies.
• By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can
categorize algorithms based on their worst-case, best-case, or average-case time or
space complexities, providing valuable insights into their efficiency.
16.
Asymptotic Notations for Analysis of Algorithms
• Execution time of an algorithm depends on the instruction set, processor
speed, disk I/O speed, etc. Hence, we estimate the efficiency of an
algorithm asymptotically.
• Time function of an algorithm is represented by T(n), where n is the input
size.
• Different types of asymptotic notations are used to represent the
complexity of an algorithm. Following asymptotic notations are used to
calculate the running time complexity of an algorithm.
• O − Big Oh notation
• Ω − Big Omega notation
• θ − Big Theta notation
• o − Little Oh notation
• ω − Little Omega notation
Big Oh, O: Asymptotic Upper Bound
• The notation O(g(n)) is the formal way to express the upper bound of an
algorithm's running time. It is the most commonly used notation. It
measures the worst-case time complexity, or the longest amount of
time an algorithm can possibly take to complete.
• A function f(n) can be represented as the order of g(n), that is O(g(n)), if
there exist a positive integer n0 and a positive
constant c such that
f(n) ⩽ c·g(n) for all n > n0
• Hence, function g(n) is an upper bound for function f(n), as g(n) grows
at least as fast as f(n).
Big Omega, Ω: Asymptotic Lower Bound
• The notation Ω(g(n)) is the formal way to express the lower
bound of an algorithm's running time. It measures the best-case
time complexity, or the least amount of time an
algorithm can possibly take to complete.
• We say that f(n) = Ω(g(n)) when there exists a positive
constant c such that f(n) ⩾ c·g(n) for all sufficiently large values of n.
Here n is a positive integer. It means function g is a lower bound for
function f; after a certain value of n, f will never go below g.
Example
Let us consider a given function, f(n) = 4·n³ + 10·n² + 5·n + 1.
Considering g(n) = n³, f(n) ⩾ 4·g(n) for all values of n > 0.
Hence, the complexity of f(n) can be represented as Ω(g(n)), i.e. Ω(n³).
Theta, θ: Asymptotic Tight Bound
• The notation θ(g(n)) is the formal way to express both the lower
bound and the upper bound of an algorithm's running time.
Some may confuse theta notation with the average-case
time complexity; while big theta could often be used to
describe the average case, other notations could be used as well.
• We say that f(n) = θ(g(n)) when there exist positive
constants c1 and c2 such that c1·g(n) ⩽ f(n) ⩽ c2·g(n) for all
sufficiently large values of n. Here n is a positive integer.
This means function g is a tight bound for function f.
Example
Let us consider a given function, f(n) = 4·n³ + 10·n² + 5·n + 1.
Considering g(n) = n³, 4·g(n) ⩽ f(n) ⩽ 5·g(n) for all large values of n.
Hence, the complexity of f(n) can be represented as θ(g(n)),
i.e. θ(n³).
Recurrence Relations
• A recurrence relation is a mathematical expression that
defines a sequence in terms of its previous terms. In the
context of algorithmic analysis, it is often used to model the
time complexity of recursive algorithms.
a(n) = f(a(n-1), a(n-2), ..., a(n-k))
where f is a function that defines the relationship between the current
term and the previous terms.
Recurrence relations play a significant role in analyzing and
optimizing the complexity of algorithms, and a strong understanding
of them goes a long way in developing an individual's
problem-solving skills.
Example recurrence relations:
• Fibonacci sequence: F(n) = F(n-1) + F(n-2)
• Factorial of a number n: F(n) = n * F(n-1)
• Merge sort: T(n) = 2*T(n/2) + O(n)
• Tower of Hanoi: H(n) = 2*H(n-1) + 1
• Binary search: T(n) = T(n/2) + 1
1. Linear Recurrence Relations:
• T(n) = T(n-1) + n for n > 0 and T(0) = 1
• T(n) = T(n-1) + n
= T(n-2) + (n-1) + n
= T(n-k) + (n-(k-1)) + ... + (n-1) + n
• T(n) = T(0) + 1 + 2 + ... + n = 1 + n(n+1)/2 = O(n²)
2. Divide and conquer recurrence relations:
T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + √n
Substitution Method:
• For example, consider the recurrence T(n) = 2T(n/2) + n.
• We guess the solution as T(n) = O(n·log n). Now we use induction
to prove our guess.
• We need to prove that T(n) <= c·n·log n (log base 2). We can assume
that it is true for values smaller than n.
• T(n) = 2T(n/2) + n
      <= 2·c·(n/2)·log(n/2) + n
      = c·n·log n - c·n·log 2 + n
      = c·n·log n - c·n + n
      <= c·n·log n (for c >= 1)
Recurrence Tree Method:
• For example, consider the recurrence relation
  T(n) = T(n/4) + T(n/2) + cn²

              cn²
             /    \
        T(n/4)    T(n/2)

• If we further break down the expressions T(n/4) and T(n/2),
we get the following recursion tree:

              cn²
             /    \
      c(n²)/16    c(n²)/4
       /    \      /    \
  T(n/16) T(n/8) T(n/8) T(n/4)
• Breaking down further gives us the following:

              cn²
             /    \
      c(n²)/16    c(n²)/4
       /    \      /    \
c(n²)/256 c(n²)/64 c(n²)/64 c(n²)/16
   /  \     /  \     /  \     /  \

• To know the value of T(n), we need to calculate the sum of the tree
nodes level by level. If we sum the above tree level by level,
we get the following series:
T(n) = c(n² + 5n²/16 + 25n²/256 + ...)
The above series is a geometric progression with a ratio of 5/16.
• To get an upper bound, we can sum the infinite series. We get the
sum as cn²/(1 - 5/16), which is O(n²).
Master Method:
• T(n) = aT(n/b) + f(n), where a >= 1 and b > 1
There are the following three cases:
• If f(n) = O(n^c) where c < log_b(a), then T(n) = θ(n^(log_b(a)))
• If f(n) = θ(n^c) where c = log_b(a), then T(n) = θ(n^c · log n)
• If f(n) = Ω(n^c) where c > log_b(a) (and f satisfies the regularity
condition a·f(n/b) <= k·f(n) for some constant k < 1), then T(n) = θ(f(n))