Algorithm Notes (Recovered)

Unit - 1

The word Algorithm means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations", or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".

Time Complexity:

The time complexity of an algorithm refers to the amount of time that is required by the
algorithm to execute and get the result. This can be for normal operations, conditional if-
else statements, loop statements, etc.

How to calculate Time Complexity?


The time complexity of an algorithm is calculated by determining the following two components:
 Constant time part: Any instruction that is executed just once comes in this part. For example, input, output, if-else, switch, arithmetic operations, etc.
 Variable time part: Any instruction that is executed more than once, say n times, comes in this part. For example, loops, recursion, etc.
Therefore, the time complexity of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
Example: For the Linear Search algorithm (given later in this unit), the time complexity is calculated as follows:
Step 1: Constant Time
Step 2: Variable Time (taking n inputs)
Step 3: Variable Time (till the length of the array (n) or the index of the found element)
Step 4: Constant Time
Step 5: Constant Time
Step 6: Constant Time
Hence, T(P) = 5 + n, which we write as T(n).
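As a concrete illustration of the two parts, here is a minimal Python sketch (the function and its name are our own example, not part of the notes):

def sum_list(arr):
    total = 0        # constant part: executed once
    for x in arr:    # variable part: the loop body executes n times
        total += x
    return total     # constant part: executed once

# Here C = 2 (the two constant-time statements) and TP(I) = n,
# so T(P) = 2 + n, i.e. T(n) grows linearly with the input size.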

How to express an Algorithm?

1. Natural Language: Here we express the algorithm in natural English. It is hard to understand the algorithm from it.
2. Flow Chart: Here we express the algorithm as a graphical/pictorial representation. It is easier to understand than natural language.
3. Pseudo Code: Here we express the algorithm as annotations and informative text written in plain English. It is very similar to real code, but as it has no syntax of any programming language, it cannot be compiled or interpreted by a computer. It is the best way to express an algorithm, because it can be understood even by a layman with some school-level programming knowledge.
Space Complexity:
Space complexity is the space taken by the algorithm to solve the problem. It includes the space used by the necessary input variables plus any extra space (excluding the space taken by the inputs) that the algorithm uses. For example, if we use a hash table (a kind of data structure), we need an array to store values, so this is extra space occupied and hence counts towards the space complexity of the algorithm. This extra space is known as Auxiliary Space.

Cases in Complexity:
There are two commonly studied cases of complexity in algorithms:

1. Best-case complexity: The best-case scenario for an algorithm is the scenario in which the algorithm performs the minimum amount of work (e.g. takes the shortest amount of time, uses the least amount of memory, etc.).
2. Worst-case complexity: The worst-case scenario for an algorithm is the scenario in which the algorithm performs the maximum amount of work (e.g. takes the longest amount of time, uses the most amount of memory, etc.).
In analyzing the complexity of an algorithm, it is often more informative to study the worst-
case scenario, as this gives a guaranteed upper bound on the performance of the algorithm.
Best-case scenario analysis is sometimes performed, but is generally less important as it
provides a lower bound that is often trivial to achieve.

Advantages of Algorithms
 Easy to understand: Since it is a stepwise representation of a solution to a given
problem, it is easy to understand.
 Language Independent: It is not dependent on any programming language, so it can
easily be understood by anyone.
 Debug / Error Finding: Every step is independent and in a flow, so it is easy to spot and fix errors.
 Sub-Problems: It is written as a flow of steps, so the programmer can divide the task into sub-problems, which makes them easier to code.
Disadvantages of Algorithms
 Creating efficient algorithms is time-consuming and requires good logical skills.
 It is difficult to represent branching and looping in algorithms.

The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that doesn't depend on machine-specific constants and doesn't require algorithms to be implemented and the running times of programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis.

Asymptotic Notations:

There are mainly three asymptotic notations:


1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation) :
Theta notation encloses the function from above and below. Since it represents the upper
and the lower bound of the running time of an algorithm, it is used for analyzing
the average-case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
Mathematical Representation of Theta notation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0}

Note: Θ(g) is a set


The above expression can be read as: if f(n) is theta of g(n), then the value of f(n) is always between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also requires that f(n) be non-negative for all n ≥ n0.

The execution time serves as both a lower and an upper bound on the algorithm's time complexity: Θ gives both the most and the least an algorithm can take for a given input size.
A simple way to get the Theta notation of an expression is to drop the low-order terms and ignore the leading constants. For example, consider the expression 3n³ + 6n² + 6000 = Θ(n³); dropping the lower-order terms is always fine, because there will always be a number n0 after which n³ has higher values than n² irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the set of functions defined above.
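To make the definition concrete, the following Python check verifies the Θ claim numerically; the witness constants c1 = 3, c2 = 4 and n0 = 21 are our own choice and are not the only ones possible:

# Verify c1*g(n) <= f(n) <= c2*g(n) for f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3.
c1, c2, n0 = 3, 4, 21
for n in range(n0, 10000):
    f = 3 * n**3 + 6 * n**2 + 6000
    assert c1 * n**3 <= f <= c2 * n**3  # holds for every n >= n0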
2. Big-O Notation (O-notation) :
Big-O notation represents the upper bound of the running time of an algorithm. Therefore,
it gives the worst-case complexity of an algorithm.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c * g(n) for all n ≥ n0.

It gives the highest possible growth of the running time (big-O) for a given input size.


The execution time serves as an upper bound on the algorithm’s time complexity.
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥
n0 }

For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n²).
Note: O(n²) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:

 The worst-case time complexity of Insertion Sort is Θ(n²).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of
an algorithm. Many times we easily find an upper bound by simply looking at the
algorithm.

3. Omega Notation (Ω- Notation):


Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.

The execution time serves as a lower bound on the algorithm's time complexity.
It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation:
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c * g(n) ≤ f(n) for all n ≥ n0 }

Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information, as we are generally interested in the worst case and sometimes in the average case.

Analysis of Algorithms:

Analysis of algorithms is the process of finding the computational complexity of an algorithm. By computational complexity, we are referring to the amount of time, space, and any other resources needed to execute (run) the algorithm.

Analysis Types:

The algorithm complexity can be analyzed in the best, average, or worst case. The analysis can be expressed using Big-O notation. The best, worst, and average cases of a given algorithm express what the resource usage is at least, at most, and on average, respectively. Big-O notation simplifies the comparison of algorithms.

Best Case

Best-case performance is used in computer science to describe an algorithm's behavior under optimal conditions. An example of best-case performance would be trying to sort a list that is already sorted using some sorting algorithm, e.g. [1,2,3] --> [1,2,3].

Average Case

Average-case performance is measured using average conditions for solving the problem, for example a list in neither best-case nor worst-case order that you want sorted in a certain order, e.g. [2,1,5,3] --> [1,2,3,5] or [2,1,5,3] --> [5,3,2,1].
Worst Case

Worst-case performance is used to analyze the algorithm's behavior under the worst-case input, where it takes the longest to solve the problem. It determines when the algorithm will perform worst for the given inputs. An example of worst-case performance would be a list of names already sorted in ascending order that you want to sort in descending order, e.g. [Abby, Bill, Catherine] --> [Catherine, Bill, Abby].

Recurrence Relation:

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence.

For example, the worst-case running time T(n) of the MERGE SORT procedure is described by the recurrence:

T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1

There are four methods for solving recurrences:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

1. Substitution Method:

The Substitution Method consists of two main steps:

1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the guess is correct.

Example 1: Solve the following recurrence by the Substitution Method:

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

1. Guess: T(n) ≤ c log n.
Put this in the given recurrence equation:

T(n) ≤ c log(n/2) + 1
     = c log n - c log 2 + 1
     ≤ c log n for c ≥ 1

Thus T(n) = O(log n).

Example 2: Consider the recurrence

T(n) = 2T(n/2) + n for n > 1

Find an asymptotic bound on T.

2. Iteration Method:

It means to expand the recurrence and express it as a summation of terms of n and the initial condition.
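For instance, iterating the recurrence from Example 2 above:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = ...
     = 2^k T(n/2^k) + k*n

The expansion stops when n/2^k = 1, i.e. k = log₂ n, so T(n) = n*T(1) + n log₂ n = O(n log n).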

3. Recursion Tree Method:

1. The Recursion Tree Method is a pictorial representation of the iteration method, in the form of a tree where at each level the nodes are expanded.

2. In general, we consider the second term in the recurrence as the root.

3. It is useful when the Divide & Conquer paradigm is used.

4. It is sometimes difficult to come up with a good guess. In the recursion tree, each root and child represents the cost of a single subproblem.

4. Master Method:

The Master Method is used for solving recurrences of the following type:

T(n) = a T(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is a function.

Let T(n) be defined on the non-negative integers by the recurrence

T(n) = a T(n/b) + f(n)

In the analysis of a recursive algorithm, the constants and function take on the following significance:

o n is the size of the problem.

o a is the number of subproblems in the recursion.

o n/b is the size of each subproblem. (Here it is assumed that all subproblems are essentially the same size.)
o f(n) is the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of combining the solutions to the subproblems.
o It is not always possible to bound the function exactly, so we distinguish three cases which tell us what kind of bound we can apply to the function; they are listed below.
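For reference, with T(n) = a T(n/b) + f(n), the three standard cases of the Master Theorem are:

Case 1: if f(n) = O(n^(log_b a - ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
Case 3: if f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

For example, T(n) = 2T(n/2) + n has a = b = 2, so n^(log_b a) = n; f(n) = n falls under Case 2 and T(n) = Θ(n log n).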

Lower Bound Theory

The Lower Bound Theory concept is based on calculating the minimum time required to execute an algorithm; this minimum is known as a lower bound.

Lower Bound Theory uses a number of methods/techniques to find out the lower bound.

Concept/Aim: The main aim is to calculate a minimum number of comparisons required to


execute an algorithm.

Techniques:

The techniques which are used by Lower Bound Theory are:

1. Comparison Trees
2. Oracle and Adversary Argument
3. State Space Method
1. Comparison Trees:

In a comparison sort, we use only comparisons between elements to gain order information about an input sequence (a1, a2, ..., an).

Given ai, aj from (a1, a2, ..., an), we perform one of the comparisons:

o ai < aj (less than)

o ai ≤ aj (less than or equal to)

o ai > aj (greater than)

o ai ≥ aj (greater than or equal to)

o ai = aj (equal to)

To determine their relative order, if we assume all elements are distinct, then ai = aj is excluded, and the comparisons ≤, ≥, >, < are all equivalent; hence we need only consider comparisons of the form ai ≤ aj.

Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible orderings:

(a1, a2, a3), (a1, a3, a2), (a2, a1, a3), (a2, a3, a1), (a3, a1, a2), (a3, a2, a1)

A comparison-based algorithm defines a decision tree.

Decision Tree:

A decision tree is a full binary tree that shows the comparisons between elements that are executed by an appropriate sorting algorithm operating on an input of a given size. Control, data movement, and all other aspects of the algorithm are ignored.

For an input array of length n, the decision tree has n! leaves, one for each possible ordering of the input. The worst-case number of comparisons equals the height of the tree, which must be at least log₂(n!) = Ω(n log n); this is the lower bound for any comparison sort.

Searching Algorithm:
Searching Algorithms are designed to check for an element or retrieve an element from any
data structure where it is stored.

Based on the type of search operation, these algorithms are generally


classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every element
is checked. For example: Linear Search.

Linear search:-

Linear Search is defined as a sequential search algorithm that starts at one end and goes through each element of a list until the desired element is found; otherwise, the search continues till the end of the data set. It is the simplest searching algorithm.

Linear Search Algorithm

Step 1: First, read the search element (target element) of the array.
Step 2: Compare the search element with the first element in the array.
Step 3: If both match, display "Target element is found" and terminate the Linear Search function.
Step 4: If they do not match, compare the search element with the next element in the array.
Step 5: Repeat steps 3 and 4 until the search (target) element has been compared with the last element of the array.
Step 6: If the last element in the list does not match either, terminate the Linear Search function and display the message "Element is not found".
Examples:
Input: arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}, x = 110
Output: 6
Explanation: Element x is present at index 6.
Input: arr[] = {10, 20, 80, 30, 60, 50, 110, 100, 130, 170}, x = 175
Output: -1
Explanation: Element x is not present in arr[].
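A minimal Python sketch of the steps above, run on the examples (the function name is ours):

def linear_search(arr, x):
    for i in range(len(arr)):   # Steps 2-5: compare x with each element in turn
        if arr[i] == x:
            return i            # Step 3: target found, return its index
    return -1                   # Step 6: target not present

print(linear_search([10, 20, 80, 30, 60, 50, 110, 100, 130, 170], 110))  # 6
print(linear_search([10, 20, 80, 30, 60, 50, 110, 100, 130, 170], 175))  # -1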

2. Interval Search: These algorithms are specifically designed for searching in sorted data structures. These types of searching algorithms are much more efficient than Linear Search, as they repeatedly target the center of the search structure and divide the search space in half. For example: Binary Search.

Binary Search:

1. In the Binary Search technique, we search for an element in a sorted array by recursively dividing the interval in half.

2. Firstly, we take the whole array as the interval.

3. If the pivot element (the item to be searched) is less than the item in the middle of the interval, we discard the second half of the list and recursively repeat the process for the first half of the list by calculating the new middle and last elements.

4. If the pivot element (the item to be searched) is greater than the item in the middle of the interval, we discard the first half of the list and work recursively on the second half by calculating the new beginning and middle elements.

5. We check repeatedly until the value is found or the interval is empty.
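An iterative Python sketch of these rules (identifier names are ours; assumes an ascending-order input):

def binary_search(arr, item):
    beg, end = 0, len(arr) - 1           # the whole array is the initial interval
    while beg <= end:                    # repeat until found or interval empty
        mid = (beg + end) // 2
        if arr[mid] == item:
            return mid                   # found: return its location
        elif item < arr[mid]:
            end = mid - 1                # discard the second half
        else:
            beg = mid + 1                # discard the first half
    return -1                            # interval empty: item not present

print(binary_search([10, 20, 30, 40, 50, 60], 40))  # 3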

Analysis:
1. Input: an array A of size n, already sorted in ascending or descending order.
2. Output: analyze the search for an element item in the sorted array of size n.
3. Logic: Let T(n) = number of comparisons of an item with n elements in a sorted array.

o Set BEG = 1 and END = n.

o Find mid = (BEG + END) / 2.

o Compare the search item with the mid item.

Case 1: item = A[mid]; then LOC = mid. This is the best case and T(n) = 1.

Case 2: item ≠ A[mid]; then we split the array into two equal parts of size n/2, again find the midpoint of the half-sorted array, and compare with the search element. We repeat the same process until the search element is found, which gives the recurrence

T(n) = T(n/2) + 1 ...... (Equation 1)

{one comparison of the search element with the mid element, plus the time to search the selected half of the array}

When only one element is left, exactly one comparison is done, so the last term of the expanded equation is 1. Expanding Equation 1 gives T(n) = T(n/2^k) + k; the expansion stops when n/2^k = 1, i.e. k = log₂ n, so T(n) = O(log n).
Interpolation Search:

The Interpolation Search is an improvement over Binary Search for instances where the values in a sorted array are uniformly distributed. Interpolation constructs new data points within the range of a discrete set of known data points. Binary Search always probes the middle element, whereas interpolation search may probe different locations according to the value of the key being searched. For example, if the value of the key is closer to the last element, interpolation search is likely to start searching toward the end.

To find the position to be probed, it uses the following formula:

pos = lo + ((x - arr[lo]) * (hi - lo)) / (arr[hi] - arr[lo])

where
x ==> element to be searched
lo ==> starting index in arr[]
hi ==> ending index in arr[]

The idea of the formula is to return a higher value of pos when the element to be searched is closer to arr[hi], and a smaller value when it is closer to arr[lo].
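A Python sketch of the probing rule (names ours; assumes arr is sorted in ascending order):

def interpolation_search(arr, x):
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[hi] == arr[lo]:                 # remaining values equal: avoid dividing by zero
            return lo if arr[lo] == x else -1
        # Probe position estimated from the value of x (the formula above)
        pos = lo + (x - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == x:
            return pos
        elif arr[pos] < x:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1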

Pattern Search:

The pattern searching algorithms are sometimes also referred to as string searching algorithms and are considered a part of the string algorithms. These algorithms are useful when searching for a string within another string.

The Naive String Matching Algorithm:

The naïve approach tests all possible placements of the pattern P[1...m] relative to the text T[1...n]. We try each shift s = 0, 1, ..., n-m successively, and for each shift s we compare T[s+1...s+m] with P[1...m].

The naïve algorithm finds all valid shifts using a loop that checks the condition P[1...m] = T[s+1...s+m] for each of the n - m + 1 possible values of s.

NAIVE-STRING-MATCHER (T, P)

1. n ← length [T]

2. m ← length [P]
3. for s ← 0 to n -m

4. do if P [1.....m] = T [s + 1....s + m]

5. then print "Pattern occurs with shift" s

Analysis: The for loop from line 3 to line 5 executes n - m + 1 times (we need at least m characters at the end), and in each iteration we do up to m comparisons. So the total complexity is O((n - m + 1) m).

Example:
1. Suppose T = 1011101110
2. P = 111
3. Find all the valid shifts.
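A direct Python sketch of the naive matcher, run on this example (shifts are 0-indexed, matching s = 0..n-m in the pseudocode):

def naive_string_matcher(T, P):
    n, m = len(T), len(P)
    shifts = []
    for s in range(n - m + 1):   # try every shift s = 0, 1, ..., n-m
        if T[s:s + m] == P:      # compare m characters at this shift
            shifts.append(s)
    return shifts

print(naive_string_matcher("1011101110", "111"))  # [2, 6] -- the valid shifts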
The Rabin-Karp Algorithm:

The Rabin-Karp string matching algorithm calculates a hash value for the pattern, as well as for each m-character subsequence of the text to be compared. If the hash values are unequal, the algorithm computes the hash value for the next m-character subsequence. If the hash values are equal, the algorithm compares the pattern and the m-character subsequence character by character. In this way, there is only one hash comparison per text subsequence, and character matching is only required when the hash values match.

RABIN-KARP-MATCHER (T, P, d, q)

1. n ← length [T]

2. m ← length [P]

3. h ← d^(m-1) mod q

4. p ← 0

5. t0 ← 0

6. for i ← 1 to m

7. do p ← (dp + P[i]) mod q

8. t0 ← (dt0+T [i]) mod q

9. for s ← 0 to n-m

10. do if p = ts

11. then if P [1.....m] = T [s+1.....s + m]

12. then print "Pattern occurs with shift" s

13. if s < n - m

14. then ts+1 ← (d (ts - T [s + 1] h) + T [s + m + 1]) mod q


Complexity:

The running time of RABIN-KARP-MATCHER in the worst case is O((n - m + 1) m), but it has a good average-case running time. If the expected number of valid shifts is small (O(1)) and the prime q is chosen to be quite large, then the Rabin-Karp algorithm can be expected to run in time O(n + m) plus the time required to process spurious hits.
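A compact Python sketch of the rolling-hash idea (the radix d = 256 and the prime q = 101 are assumed example parameters, not prescribed by the notes):

def rabin_karp_matcher(T, P, d=256, q=101):
    n, m = len(T), len(P)
    h = pow(d, m - 1, q)                     # h = d^(m-1) mod q
    p = t = 0
    for i in range(m):                       # preprocessing: hash P and T[0..m-1]
        p = (d * p + ord(P[i])) % q
        t = (d * t + ord(T[i])) % q
    shifts = []
    for s in range(n - m + 1):
        if p == t and T[s:s + m] == P:       # verify on hash match (spurious hits possible)
            shifts.append(s)
        if s < n - m:                        # roll the hash to the next window
            t = (d * (t - ord(T[s]) * h) + ord(T[s + m])) % q
    return shifts

print(rabin_karp_matcher("1011101110", "111"))  # [2, 6]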

Knuth-Morris-Pratt (KMP)Algorithm

Knuth, Morris, and Pratt introduced a linear-time algorithm for the string matching problem. A matching time of O(n) is achieved by avoiding comparisons with elements of 'S' that have previously been involved in a comparison with some element of the pattern 'p' to be matched, i.e., backtracking on the string 'S' never occurs.

Components of KMP Algorithm:

1. The Prefix Function (Π): The prefix function Π for a pattern encapsulates knowledge about how the pattern matches against shifts of itself. This information can be used to avoid useless shifts of the pattern 'p'. In other words, this enables avoiding backtracking on the string 'S'.

2. The KMP Matcher: With string 'S', pattern 'p' and prefix function 'Π' as inputs, it finds the occurrences of 'p' in 'S' and returns the number of shifts of 'p' after which occurrences are found.

The KMP matcher, with the pattern 'p', the string 'S' and prefix function 'Π' as input, finds matches of p in S. The following pseudocode computes the matching component of the KMP algorithm:

KMP-MATCHER (T, P)

1. n ← length [T]

2. m ← length [P]

3. Π← COMPUTE-PREFIX-FUNCTION (P)

4. q ← 0 // number of characters matched

5. for i ← 1 to n // scan S from left to right

6. do while q > 0 and P [q + 1] ≠ T [i]

7. do q ← Π [q] // next character does not match

8. If P [q + 1] = T [i]

9. then q ← q + 1 // next character matches

10. If q = m // is all of p matched?

11. then print "Pattern occurs with shift" i - m

12. q ← Π [q] // look for the next match

Running Time Analysis:

The for loop beginning in step 5 runs n times, i.e., as long as the length of the string 'S'. Since steps 1 to 4 take constant time, the running time is dominated by this for loop. Thus the running time of the matching function is O(n).
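A Python sketch of both components, using 0-indexed strings (so the prefix function below is the standard 0-indexed variant of Π):

def compute_prefix_function(P):
    m = len(P)
    pi = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and P[k] != P[q]:
            k = pi[k - 1]                # fall back within the pattern
        if P[k] == P[q]:
            k += 1
        pi[q] = k                        # longest proper prefix of P[0..q] that is also a suffix
    return pi

def kmp_matcher(T, P):
    n, m = len(T), len(P)
    pi = compute_prefix_function(P)
    q = 0                                # number of characters matched
    shifts = []
    for i in range(n):                   # scan the string left to right; i never moves back
        while q > 0 and P[q] != T[i]:
            q = pi[q - 1]                # next character does not match
        if P[q] == T[i]:
            q += 1                       # next character matches
        if q == m:                       # all of P matched
            shifts.append(i - m + 1)
            q = pi[q - 1]                # look for the next match
    return shifts

print(kmp_matcher("1011101110", "111"))  # [2, 6]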

Insertion Sort

Insertion sort is one of the simplest sorting algorithms, because it places a single element into its correct position at each step. It is not the best sorting algorithm in terms of performance, but it is slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive sorting technique.

Let's consider the example of cards to have a better understanding of the logic behind the insertion
sort.

Suppose we have a set of cards in our hand, such that we want to arrange these cards in ascending
order. To sort these cards, we have a number of intuitive ways.

ALGORITHM: INSERTION SORT (A)

1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j-1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
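The same procedure in runnable Python (0-indexed, so the outer loop starts at index 1):

def insertion_sort(A):
    for j in range(1, len(A)):           # A[0..j-1] is already sorted
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:     # shift larger elements one place right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                   # insert key into its correct position
    return A

print(insertion_sort([41, 22, 63, 14, 55, 36]))  # [14, 22, 36, 41, 55, 63]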

How Insertion Sort Works

1. We start by assuming that the very first element of the array is already sorted, and we store the second element in the key.

Next, we compare the first element with the key: if the key is smaller than the first element, we interchange their positions, i.e., place the key at the first index. After doing this, we notice that the first two elements are sorted.

2. Now, we will move on to the third element and compare it with the left-hand side elements. If it
is the smallest element, then we will place the third element at the first index.

Else, if it is greater than the first element and smaller than the second element, we will place it between the first and second elements. After doing this, we will have our first three elements in a sorted manner.
3. Similarly, we will sort the rest of the elements and place them in their correct position.

Consider the following example of an unsorted array that we will sort with the help of the Insertion
Sort algorithm.

A = (41, 22, 63, 14, 55, 36)

Initially, the array is A = (41, 22, 63, 14, 55, 36).

1st Iteration:

Set key = 22

Compare a1 with a0

Since a0 > a1, swap both of them.

2nd Iteration:

Set key = 63
Compare a2 with a1 and a0

Since a2 > a1 > a0, keep the array as it is.

3rd Iteration:

Set key = 14

Compare a3 with a2, a1 and a0

Since a3 is the smallest among all the elements on the left-hand side, place a3 at the beginning of
the array.
4th Iteration:

Set key = 55

Compare a4 with a3, a2, a1 and a0.

As a4 < a3, swap both of them.

5th Iteration:

Set key = 36

Compare a5 with a4, a3, a2, a1 and a0.


Since a5 < a2 but a5 > a1, we place 36 between 22 and 41, in its correct position.

Hence the array is now arranged in ascending order, and no more swapping is required.

Complexity Analysis of Insertion Sort

Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass, it will make n-1 comparisons; in the
second pass, it will do n-2; in the third pass, it will do n-3 and so on. Thus, the total number of
comparisons can be found by;

(n-1) + (n-2) + (n-3) + (n-4) + ... + 1

Sum = n(n-1)/2

i.e., O(n²)
Therefore, the insertion sort algorithm has a time complexity of O(n²) and a space complexity of O(1), because it needs only a constant amount of extra memory (the key variable) to perform its swaps.

Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity of O(n) for an already sorted array, because here only the outer loop runs n times and the inner loop never executes.
o Average Case Complexity: The average-case time complexity for the insertion sort algorithm is O(n²), which is incurred when the elements are in jumbled order, i.e., neither in ascending nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs when the array is sorted in the reverse of the desired order. In this case, every individual element is compared with all the elements before it, so up to n-1 comparisons are made for the nth element.

The insertion sort algorithm is highly recommended when only a few elements are left to sort or when the array contains only a few elements.

Space Complexity

The insertion sort has a space complexity of O(1), the only extra storage being the single variable key.

Insertion Sort Applications

The insertion sort algorithm is used in the following cases:

o When the array contains only a few elements.

o When only a few elements are left to sort (the array is nearly sorted).

Advantages of Insertion Sort

1. It is simple to implement.
2. It is efficient on small datasets.

Heap Sort:
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort, where we first find the minimum element and place it at the beginning, then repeat the same process for the remaining elements.
 Heap sort is an in-place algorithm.
 Its typical implementation is not stable, but it can be made stable.
 It is typically 2-3 times slower than a well-implemented QuickSort. The reason for the slowness is a lack of locality of reference.
Advantages of Heap Sort:
 Efficiency – Heap sort runs in O(n log n) time in the worst, average, and best cases, while simpler algorithms such as bubble sort or insertion sort can degrade to O(n²) as the number of items to sort increases. This sorting algorithm is very efficient.
 Memory Usage – Memory usage is minimal: apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
 Simplicity – It is simpler to understand than other equally efficient sorting algorithms, because it does not use advanced concepts such as recursion.

Disadvantages of Heap Sort:

 Costly: Heap sort has higher constant factors than comparable O(n log n) algorithms such as quicksort.
 Unstable: Heap sort is unstable; it might rearrange the relative order of equal elements.
 Inefficient on complex data: Heap sort is not very efficient when working with highly complex data, partly because of its poor locality of reference.
Applications of HeapSort:
 Heapsort is mainly used in hybrid algorithms like the IntroSort.
 Sort a nearly sorted (or K sorted) array
 k largest(or smallest) elements in an array
The heap sort algorithm has limited uses because Quicksort and Merge sort are better in practice. Nevertheless, the Heap data structure itself is used enormously.
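A minimal sketch using Python's standard heapq module, which matches the minimum-first description above (note that, unlike the classical algorithm, this version is not in-place):

import heapq

def heap_sort(arr):
    heap = list(arr)
    heapq.heapify(heap)      # build a min-heap in O(n)
    # Repeatedly remove the minimum; each pop costs O(log n), O(n log n) in total.
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([41, 22, 63, 14, 55, 36]))  # [14, 22, 36, 41, 55, 63]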
