
DSA UNIT TEST MATERIAL

Unit I: Introduction to Data Structures and Complexity Analysis
Q1. Define and differentiate Linear vs. Non-linear data structures with examples.

1. Linear vs Non-linear Data Structures

Data structures are classified based on how data elements are organized and accessed.
Linear data structures store elements in a sequential manner, where each element is
connected to its previous and next element. Examples include arrays, linked lists, stacks,
and queues. These structures are simple to implement and are ideal for scenarios where
data is processed in order.

In contrast, non-linear data structures organize data hierarchically or in a networked
format. Examples include trees and graphs. In trees, each node may have multiple
children but only one parent, forming a hierarchy. Graphs allow complex relationships
where nodes can be connected arbitrarily, useful in modeling networks like social media
or transportation systems.

Q2. Explain Abstract Data Types (ADT) with examples like Stack and Queue.

An Abstract Data Type (ADT) is a theoretical concept that defines a data structure by its behavior
rather than its implementation. ADTs specify what operations can be performed and what results to
expect, without revealing how these operations are implemented. This abstraction allows
programmers to focus on logic rather than low-level details.

Common ADTs include:

• List: A collection of elements with operations like insert, delete, and search.

• Stack: Follows Last-In-First-Out (LIFO). Operations: push, pop, peek.

• Queue: Follows First-In-First-Out (FIFO). Operations: enqueue, dequeue.

• Deque: Double-ended queue allowing insertion/deletion from both ends.

For example, a Stack ADT supports:

• push(x): Add element x to the top.

• pop(): Remove and return the top element.

• peek(): View the top element without removing.

The actual implementation could be via arrays or linked lists, but the user interacts only with the
defined operations.
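To make the abstraction concrete, here is a minimal array-based Stack ADT sketch in C. The names (`Stack`, `push`, `pop`, `peek`) and the capacity `MAX` are illustrative choices, not a prescribed implementation; the key point is that callers only use the operations, never the array or the `top` index directly.

```c
#include <stdbool.h>

#define MAX 100  /* capacity chosen for illustration */

/* Array-based Stack: the internal representation is hidden
   behind push/pop/peek, which is the essence of an ADT. */
typedef struct {
    int data[MAX];
    int top;              /* index of the top element, -1 when empty */
} Stack;

void initStack(Stack *s)     { s->top = -1; }
bool isEmpty(const Stack *s) { return s->top == -1; }

bool push(Stack *s, int x) {
    if (s->top == MAX - 1) return false;  /* overflow: stack is full */
    s->data[++s->top] = x;
    return true;
}

int pop(Stack *s)        { return s->data[s->top--]; }  /* caller checks isEmpty first */
int peek(const Stack *s) { return s->data[s->top]; }
```

The same interface could be backed by a linked list instead; callers would not need to change.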

Benefits of ADT:

• Encapsulation: Hides internal details.

• Reusability: Can be used across multiple programs.

• Modularity: Easier to debug and maintain.

In real-world applications, ADTs are used in compilers (expression evaluation), operating systems
(process scheduling), and databases (query processing).

Q3. Define Time and Space Complexity with examples.

Time complexity refers to the amount of time an algorithm takes to run, depending on the input
size. Space complexity refers to the amount of memory used. These metrics help evaluate the
efficiency of algorithms.

For example, consider Linear Search:

for(i = 0; i < n; i++) { if(arr[i] == key) return i; }

Time complexity is O(n) because in the worst case, all elements are checked. Space complexity is
O(1) since no extra memory is used.

Types of Time Complexity:


• Constant: O(1)

• Linear: O(n)

• Logarithmic: O(log n)

• Quadratic: O(n²)

Space Complexity Example: A recursive function may use stack memory for each call. If a function
calls itself n times, space complexity becomes O(n).
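The recursive space cost can be seen in a small example. Factorial (used here purely as an illustration) makes n nested calls, so n stack frames are live at the deepest point, giving O(n) space:

```c
/* factorial(n) recurses n times before unwinding, so the call
   stack holds n frames at its deepest: O(n) space.
   An iterative loop would compute the same result in O(1) space. */
long factorial(int n) {
    if (n <= 1) return 1;          /* base case: recursion stops here */
    return n * factorial(n - 1);   /* each call waits on one deeper frame */
}
```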

Understanding these complexities helps in choosing optimal algorithms. For instance, Binary
Search is preferred over Linear Search for sorted data due to its O(log n) time.


Q4. Explain Asymptotic Notations: Big O, Omega, Theta with graphical interpretation.

Asymptotic notations describe the behavior of algorithms as input size grows towards infinity. They
help analyze performance without focusing on machine-specific details.

• Big O (O): Describes the worst-case upper bound. It tells us the maximum time an algorithm
can take.
Example: In Bubble Sort, even if the array is sorted, it still performs comparisons → O(n²).

• Omega (Ω): Describes the best-case lower bound. It tells us the minimum time an algorithm
can take.
Example: In Linear Search, if the element is at the first position → Ω(1).

• Theta (Θ): Describes a tight bound, bounding the running time both from above and
below (note: this is not the same thing as the average case).
Example: Binary Search takes Θ(log n) time in its worst and average cases.

Graphical Representation: Imagine a graph with input size n on the x-axis and time on the y-axis.

• Big O is the ceiling curve.

• Omega is the floor curve.

• Theta is the curve that fits tightly between both.

Why It Matters: These notations help compare algorithms. For example, choosing between Bubble
Sort (O(n²)) and Merge Sort (O(n log n)) becomes easier when analyzing large datasets.
Real-world Use: In competitive programming and system design, Big O helps predict scalability. For
instance, a search engine must use algorithms with O(log n) or better to handle millions of queries
efficiently.

Q5. Compare Best, Worst, and Average Case analysis for sorting algorithms.

Algorithm performance varies depending on input. Understanding best, worst, and average cases
helps in realistic evaluation.

• Best Case: Minimum operations.
Example: In Linear Search, if the element is at the first position → 1 comparison.

• Worst Case: Maximum operations.
Example: In Binary Search, if the element is not present → log₂n comparisons.

• Average Case: Expected number of operations over all inputs.
Example: In Linear Search, average comparisons ≈ n/2.

Case Study: Bubble Sort

• Best Case: Array already sorted → O(n)

• Worst Case: Array in reverse order → O(n²)

• Average Case: Random order → O(n²)

Why It’s Important: Best case gives hope, worst case sets limits, and average case reflects reality.
For example, in real-time systems, worst-case analysis ensures deadlines are met.

Practical Tip: Always mention all three cases in exams to show complete understanding. Use
examples and diagrams to support your analysis.

Q6. Write and explain Linear Search and Binary Search algorithms with time complexity.

Linear Search is the simplest search technique. It checks each element one by one until the desired
element is found or the list ends.

Algorithm:

for (i = 0; i < n; i++) {
    if (arr[i] == key) return i;
}
return -1;

Time Complexity:

• Best Case: O(1) → Element at first position

• Worst Case: O(n) → Element at last or not present

• Average Case: O(n) → About n/2 comparisons on average

Space Complexity: O(1) → No extra space used


Advantages:

• Works on unsorted data

• Easy to implement

Disadvantages:

• Inefficient for large datasets

• Slower than Binary Search

Use Case: Useful when data is small or unsorted, like searching a name in a short list.

Diagram: Imagine an array [10, 20, 30, 40] and key = 30.
Linear search checks 10 → 20 → 30 → found.
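The algorithm above can be wrapped into a self-contained function and checked against the [10, 20, 30, 40] trace:

```c
/* Linear search as described: scan left to right and return
   the index of key, or -1 if it is absent. */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key) return i;  /* found: best case is index 0 */
    }
    return -1;                        /* worst case: all n elements checked */
}
```

On the array [10, 20, 30, 40] with key = 30, the function checks 10, then 20, then 30 and returns index 2.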

Q7. Binary Search

Binary Search is an efficient algorithm used to find an element in a sorted array. It works by
repeatedly dividing the search interval in half. If the target value is less than the middle element,
the search continues in the left half; otherwise, it continues in the right half.

Algorithm Steps:

1. Set low = 0, high = n - 1.

2. While low <= high:

o Calculate mid = (low + high)/2.

o If arr[mid] == key, return mid.

o If arr[mid] < key, search right half (low = mid + 1).

o Else, search left half (high = mid - 1).

3. If not found, return -1.

Pseudocode:

int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

Time Complexity:

• Best Case: O(1) → Element at mid

• Worst Case: O(log n)

• Average Case: O(log n)


Space Complexity: O(1)

Diagram: For array [10, 20, 30, 40, 50], searching for 30:

• Step 1: mid = 2 → arr[2] = 30 → found!

Advantages:

• Much faster than linear search for large datasets

• Efficient for sorted arrays

Limitations:

• Only works on sorted data

• Requires index-based access (not suitable for linked lists)

Use Case: Used in databases, search engines, and any system requiring fast lookup.

Q8. Sort the array using Bubble Sort, show each pass.

Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the list,
compares adjacent elements, and swaps them if they are in the wrong order. This process
continues until the list is sorted.

Algorithm Steps:

1. Traverse the array from start to end.

2. Compare adjacent elements.

3. Swap if the left element is greater than the right.

4. Repeat for n-1 passes.

Pseudocode:

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                swap(arr[j], arr[j + 1]);
            }
        }
    }
}

Time Complexity:

• Best Case: O(n) → Already sorted

• Worst Case: O(n²)

• Average Case: O(n²)

Space Complexity: O(1)

Diagram: Array [5, 3, 4]

• Pass 1: 5 > 3 → swap → [3, 5, 4]; 5 > 4 → swap → [3, 4, 5]

• Pass 2: no swaps → array is sorted: [3, 4, 5]
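To actually "show each pass" as the question asks, the sort can print the array state after every outer-loop iteration. This is a sketch with an explicit in-place swap (no `swap` helper assumed):

```c
#include <stdio.h>

/* Bubble Sort that prints the array after every pass, making
   each intermediate state visible. */
void bubbleSortVerbose(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {           /* adjacent pair out of order */
                int t = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = t;
            }
        }
        printf("Pass %d:", i + 1);               /* state after this pass */
        for (int k = 0; k < n; k++) printf(" %d", arr[k]);
        printf("\n");
    }
}
```

On [5, 3, 4] both swaps happen during the first pass, so it prints "Pass 1: 3 4 5" and then "Pass 2: 3 4 5".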

Advantages:

• Easy to implement

• Good for small datasets

Disadvantages:

• Inefficient for large datasets

• High time complexity

Use Case: Used in teaching sorting concepts and small-scale applications

Q9. Compare Selection Sort vs Insertion Sort with time complexity.

Selection Sort and Insertion Sort are basic sorting algorithms used for educational and small-scale
purposes.

Selection Sort:

• Finds the minimum element and places it at the beginning.

• Repeats for the remaining unsorted part.

Pseudocode:

for (i = 0; i < n - 1; i++) {
    min = i;
    for (j = i + 1; j < n; j++) {
        if (arr[j] < arr[min]) min = j;
    }
    swap(arr[i], arr[min]);
}

Time Complexity: O(n²) in all cases


Space Complexity: O(1)

Insertion Sort:

• Builds the sorted array one element at a time.

• Inserts each element into its correct position.

Pseudocode:

for (i = 1; i < n; i++) {
    key = arr[i];
    j = i - 1;
    while (j >= 0 && arr[j] > key) {
        arr[j + 1] = arr[j];
        j--;
    }
    arr[j + 1] = key;
}

Time Complexity:

• Best Case: O(n) → Already sorted

• Worst Case: O(n²)

• Average Case: O(n²)

Space Complexity: O(1)
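Both pseudocode sketches above can be turned into complete C functions (using a plain in-place swap rather than an assumed `swap` helper):

```c
/* Selection Sort: repeatedly select the minimum of the unsorted
   suffix and move it to the front. O(n²) comparisons in all cases. */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min = i;                         /* index of smallest in arr[i..n-1] */
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min]) min = j;
        int t = arr[i]; arr[i] = arr[min]; arr[min] = t;
    }
}

/* Insertion Sort: grow a sorted prefix, inserting each new
   element into its correct position. O(n) when already sorted. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i], j = i - 1;
        while (j >= 0 && arr[j] > key) {     /* shift larger elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                    /* drop key into the gap */
    }
}
```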


Q10. What is the difference between static and dynamic data structures?

Static data structures are those whose size and memory are fixed at compile time. They use
contiguous memory allocation and are simple to implement. Examples include arrays and static
stacks. Once declared, their size cannot be changed during runtime.

Dynamic data structures, on the other hand, allocate memory at runtime. They can grow or shrink
as needed, making them more flexible. Examples include linked lists, dynamic stacks, queues, and
trees. These structures use pointers to connect elements and are stored in non-contiguous memory.

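The contrast is visible even at the level of a single array. Below is a small illustrative sketch: the static array's size is fixed at compile time, while the dynamic one is sized (and can be resized) at runtime via the heap:

```c
#include <stdlib.h>

/* Static: size fixed when the program is compiled; cannot change. */
int staticArr[10];                                 /* always exactly 10 ints */

/* Dynamic: size chosen and changed at runtime. realloc may move
   the block, so the caller must use the returned pointer. */
int *grow(int *arr, int newSize) {
    return realloc(arr, newSize * sizeof(int));
}
```

A linked list takes the dynamic idea further by allocating one node at a time, in non-contiguous memory.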
Q11. Explain Divide and Conquer strategy with example and time analysis.

Divide and Conquer is a powerful algorithmic paradigm that breaks a problem into smaller sub-
problems, solves them recursively, and combines their solutions. It’s used in many efficient
algorithms like Merge Sort, Quick Sort, and Binary Search.

Steps:

1. Divide: Split the problem into smaller parts.

2. Conquer: Solve each sub-problem recursively.

3. Combine: Merge the solutions to form the final result.

Example: Merge Sort

• Divide the array into halves.

• Recursively sort each half.

• Merge the sorted halves.

Time Complexity:

• Depth of recursion (levels of division): O(log n)

• Merge work per level: O(n)

• Total: O(n log n)

Diagram: Array [8, 4, 5, 2] →


Divide: [8, 4], [5, 2] →
Sort: [4, 8], [2, 5] →
Merge: [2, 4, 5, 8]

Advantages:

• Efficient for large datasets

• Reduces time complexity

Use Case: Used in sorting, searching, and optimization problems.

Pseudocode

mergeSort(arr[], l, r) {
    if (l < r) {
        mid = (l + r) / 2;
        mergeSort(arr, l, mid);
        mergeSort(arr, mid + 1, r);
        merge(arr, l, mid, r);
    }
}
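The pseudocode calls a merge() step it does not define. Here is one complete C sketch, with an illustrative scratch-buffer cap MAXN (a real implementation would allocate the buffer dynamically):

```c
#include <string.h>

#define MAXN 1000  /* illustrative cap on the scratch buffer */

/* Combine two already-sorted halves arr[l..mid] and arr[mid+1..r]. */
static void merge(int arr[], int l, int mid, int r) {
    int tmp[MAXN];
    int i = l, j = mid + 1, k = 0;
    while (i <= mid && j <= r)                 /* take the smaller front element */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];      /* copy leftover left half */
    while (j <= r)   tmp[k++] = arr[j++];      /* copy leftover right half */
    memcpy(&arr[l], tmp, k * sizeof(int));     /* write merged run back */
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int mid = (l + r) / 2;
        mergeSort(arr, l, mid);        /* divide: sort left half */
        mergeSort(arr, mid + 1, r);    /* divide: sort right half */
        merge(arr, l, mid, r);         /* combine the two halves */
    }
}
```

Running this on the diagram's array [8, 4, 5, 2] yields [2, 4, 5, 8].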

Q12. Write pseudocode for Binary Search and explain its working.

Binary Search works on sorted arrays by repeatedly dividing the search space in half.

Working:

• Start with full array.

• Check middle element.

• If key < mid → search left half.

• If key > mid → search right half.

• Repeat until found or range is empty.


Example: Array: [10, 20, 30, 40, 50], key = 30

• mid = 2 → arr[2] = 30 → found!

Time Complexity: O(log n)


Space Complexity: O(1)

Use Case: Used in search engines, databases, and sorted lists.

Pseudocode:

binarySearch(arr[], n, key) {
    low = 0;
    high = n - 1;
    while (low <= high) {
        mid = (low + high) / 2;
        if (arr[mid] == key) return mid;
        else if (arr[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

Q13. Explain complexity analysis of an algorithm and notations used.

Complexity analysis evaluates the efficiency of an algorithm in terms of time and space. It helps
predict performance and scalability.

Time Complexity: Measures execution time


Space Complexity: Measures memory usage

Asymptotic Notations:

• Big O (O): Worst-case upper bound

• Omega (Ω): Best-case lower bound

• Theta (Θ): Tight bound (both upper and lower)

Example: Linear Search → O(n)


Binary Search → O(log n)

Why It Matters: Helps choose optimal algorithms, especially for large inputs.

Use Case: In system design, choosing O(log n) algorithms ensures fast response.

Q14. Write short notes on Greedy strategy and algorithm characteristics.

Greedy strategy solves problems by making the best choice at each step, hoping it leads to the
global optimum. It doesn’t backtrack or reconsider decisions.

Characteristics:

• Local optimization

• No backtracking or re-examination of earlier choices (unlike dynamic programming)

• Fast and simple

Example: Coin Change Problem Given coins of denominations [1, 5, 10], to make 18:

• Pick 10 → remaining 8

• Pick 5 → remaining 3

• Pick 1 × 3 → done

Pseudocode:

while (amount > 0) {
    pick largest coin ≤ amount;
    subtract it from amount;
}

Use Case: Used in scheduling, Huffman coding, and network routing.

Limitations: May not always give optimal solution (e.g., coin change with denominations [1, 3, 4]
for amount 6).
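The coin-change walkthrough above can be sketched as a small C function. It assumes the denominations are passed in descending order (a hypothetical convention for this sketch) and returns the number of coins used:

```c
/* Greedy coin change: always take the largest coin that still fits.
   coins[] must be sorted in descending order; returns the coin count. */
int greedyChange(const int coins[], int m, int amount) {
    int count = 0;
    for (int i = 0; i < m && amount > 0; i++) {
        while (coins[i] <= amount) {   /* take this coin as often as it fits */
            amount -= coins[i];
            count++;
        }
    }
    return count;
}
```

For denominations [10, 5, 1] and amount 18 this picks 10 + 5 + 1 + 1 + 1, i.e. 5 coins. For [4, 3, 1] and amount 6 it picks 4 + 1 + 1 (3 coins), while the optimum is 3 + 3 (2 coins), demonstrating the limitation noted above.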
