Data Structure & Time Complexity

The document provides an overview of Data Structures and Algorithms (DSA), emphasizing their importance in programming languages for organizing and processing data efficiently. It discusses the characteristics, complexities, and types of data structures and algorithms, as well as the significance of time and space complexities in algorithm performance. Additionally, it covers asymptotic notations like Big-O, Omega, and Theta for analyzing algorithm efficiency.

Introduction to Data Structures and Algorithms

Data Structures vs Algorithms


• Data structures and algorithms (DSA) are two important
aspects of any programming language.
• Every programming language has its own data structures and
different types of algorithms to handle these data structures.
• Data structures are used to organize and store data so that it can be
used effectively when performing data operations.
• An algorithm is a step-by-step procedure that defines a set of
instructions to be executed in a certain order to obtain the desired
output. Algorithms are generally designed independently of the
underlying programming language.
Why Learn Data Structures & Algorithms (DSA)?

• Three common problems that applications face:

  – Data Search − Consider an inventory of 1 million (10^6) items in a
  store. If the application has to search for an item, it must scan through
  all 1 million items every time, which slows down the search. As the data
  grows, the search becomes even slower (a minimal search sketch follows
  this list).

  – Processor speed − Although processor speed is very high, it becomes a
  limiting factor when the data grows to billions of records.

  – Multiple requests − Since thousands of users can search the data
  simultaneously on a web server, even a very fast server can fail while
  searching the data.
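
A minimal sketch of the data-search problem above, assuming Python; the inventory of item IDs is hypothetical. A plain list forces a full scan for every lookup, while a hash-based set sidesteps it:

    # Hypothetical inventory of 1 million item IDs (illustration only).
    inventory_list = list(range(1_000_000))
    inventory_set = set(inventory_list)      # hash-based index over the same data

    target = 999_999

    # Linear search: in the worst case all 10^6 items are examined per lookup.
    found_slow = target in inventory_list

    # Hash lookup: roughly constant time, regardless of how large the inventory grows.
    found_fast = target in inventory_set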
Learning Data Structures & Algorithms (DSA)
• Time and Space complexities
  – Time complexity measures the amount of time required to execute the
  code, and space complexity measures the amount of memory required to
  execute it.
• Different Data Structures
  – Data structures such as Array, Stack, Queue, Linked List, etc.
• Different Algorithms
  – These include searching, sorting, and other common algorithms.
What is Data Structure?
• Data Structure is a systematic way to organize data in order to
use it efficiently.
  – Interface − Each data structure has an interface. The interface
  represents the set of operations that the data structure supports; it
  specifies only the supported operations, the types of parameters they
  accept, and their return types.
  – Implementation − The implementation provides the internal representation
  of the data structure, along with the definitions of the algorithms used
  in its operations (a minimal sketch contrasting the two follows below).
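
A minimal sketch of the interface/implementation split described above, assuming Python; the StackInterface and ArrayStack names are illustrative, not taken from the slides:

    from abc import ABC, abstractmethod

    class StackInterface(ABC):
        # Interface: only lists the supported operations, their parameters and return types.
        @abstractmethod
        def push(self, item) -> None: ...

        @abstractmethod
        def pop(self): ...

    class ArrayStack(StackInterface):
        # Implementation: internal representation (a Python list) plus the operation logic.
        def __init__(self):
            self._items = []

        def push(self, item) -> None:
            self._items.append(item)

        def pop(self):
            return self._items.pop()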
Characteristics of a Data Structure
• Correctness − Data structure implementation should implement its
interface correctly.
• Time Complexity − Running time or the execution time of operations of
the data structure must be as small as possible.
• Space Complexity − Memory usage of a data structure operation should be
as little as possible.
Execution Time Cases
• Worst Case − The scenario in which a particular data structure operation
takes the maximum time it can take. If an operation's worst-case time is
ƒ(n), then the operation will not take more than ƒ(n) time, where ƒ(n) is a
function of n.
• Average Case − The scenario depicting the average execution time of an
operation of a data structure. If an operation takes ƒ(n) time on average,
then m such operations will take m × ƒ(n) time.
• Best Case − The scenario depicting the least possible execution time of
an operation of a data structure. If an operation's best-case time is ƒ(n),
then the operation will never take less than ƒ(n) time (the linear-search
sketch below illustrates all three cases).
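
A small linear-search sketch, assuming Python, to make the three cases concrete; the example data is illustrative:

    def linear_search(items, target):
        # Best case: the target is the first element, so only 1 comparison is made.
        # Worst case: the target is last or absent, so all n elements are compared.
        # Average case: roughly n/2 comparisons when the target is present.
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    data = [7, 3, 9, 4, 1]
    linear_search(data, 7)   # best case: found at the first position
    linear_search(data, 1)   # worst case: found only at the last position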
Algorithm Basics
• From the data structure point of view, the following are some important
categories of algorithms (a short sketch after this list shows each of them
on a Python list):
  – Search − Algorithm to search for an item in a data structure.
  – Sort − Algorithm to sort items in a certain order.
  – Insert − Algorithm to insert an item into a data structure.
  – Update − Algorithm to update an existing item in a data structure.
  – Delete − Algorithm to delete an existing item from a data structure.
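
A brief sketch of the five categories above, assuming Python; the built-in list operations merely stand in for the corresponding algorithms and the data is made up:

    data = [30, 10, 20]

    position = data.index(20)   # Search: locate an item in the structure
    data.sort()                 # Sort: arrange items in ascending order
    data.insert(1, 15)          # Insert: add an item at a given position
    data[0] = 5                 # Update: overwrite an existing item
    data.remove(30)             # Delete: remove an existing item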


How to write an algorithm
• Problem − Design an algorithm to add two numbers and display
the result.
– Step 1 − START
– Step 2 − declare three integers a, b & c
– Step 3 − define values of a & b
– Step 4 − add values of a & b
– Step 5 − store output of step 4 to c
– Step 6 − print c
– Step 7 − STOP
A more precise way of writing the algorithm
• Step 1 − START ADD
• Step 2 − get values of a & b
• Step 3 − c ← a + b
• Step 4 − display c
• Step 5 − STOP
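
The same ADD algorithm as a minimal Python function; the example values for a and b are assumed for illustration:

    def add(a, b):
        c = a + b      # Step 3: c <- a + b
        print(c)       # Step 4: display c

    add(2, 3)          # Step 2: values of a and b; prints 5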
Characteristics of an Algorithm
• Definiteness − The algorithm should be clear and unambiguous. Each of its
steps (or phases), and their inputs/outputs, should be clear and must lead
to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs, and
should match the desired output.
• Finiteness − Algorithms must terminate after a finite number of steps.
• Feasibility − Should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions, which
should be independent of any programming code.
Algorithm Complexity
• Suppose X is an algorithm and n is the size of the input data. The time
and space used by the algorithm X are the two main factors that decide the
efficiency of X.
  – Time Factor − Time is measured by counting the number of key
  operations, such as comparisons in a sorting algorithm (see the
  comparison-counting sketch below).
  – Space Factor − Space is measured by counting the maximum memory space
  required by the algorithm.
• The complexity of an algorithm f(n) gives the running time and/or the
storage space required by the algorithm in terms of n, the size of the
input data.
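
As a sketch of the time factor, a selection sort (assuming Python) that counts its key operation, the comparison; for n items it performs n(n-1)/2 comparisons:

    def selection_sort_with_count(items):
        items = list(items)              # work on a copy
        comparisons = 0
        n = len(items)
        for i in range(n - 1):
            smallest = i
            for j in range(i + 1, n):
                comparisons += 1         # the key operation being counted
                if items[j] < items[smallest]:
                    smallest = j
            items[i], items[smallest] = items[smallest], items[i]
        return items, comparisons

    selection_sort_with_count([5, 2, 4, 1, 3])   # 5 items -> 10 comparisons (5*4/2)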
Space Complexity
• Space complexity of an algorithm represents the amount of
memory space required by the algorithm in its life cycle. The
space required by an algorithm is equal to the sum of the
following two components −
  – A fixed part: the space required to store certain data and variables
  that are independent of the size of the problem. For example, simple
  variables and constants, program size, etc.
  – A variable part: the space required by variables whose size depends on
  the size of the problem. For example, dynamic memory allocation,
  recursion stack space, etc. (both parts are annotated in the sketch below).
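
A small sketch, assuming Python, with the two components annotated; the recursive sum and list builder are illustrative:

    def recursive_sum(n):
        # Fixed part: the code itself and simple locals such as n occupy
        # constant space, independent of the problem size.
        if n == 0:
            return 0
        # Variable part: each recursive call adds a stack frame, so the
        # recursion stack grows in proportion to n.
        return n + recursive_sum(n - 1)

    def build_list(n):
        # Variable part: dynamically allocated storage that grows with n.
        return [0] * n

    recursive_sum(100)
    build_list(100)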
Time Complexity
• Time complexity of an algorithm represents the amount of time required
by the algorithm to run to completion. Time requirements can be defined as
a numerical function T(n), where T(n) can be measured as the number of
steps, provided each step consumes constant time.
• For example, the addition of two n-bit integers takes n steps.
Consequently, the total computational time is T(n) = c ∗ n, where c is the
time taken for the addition of two bits. Here, we observe that T(n) grows
linearly as the input size increases.
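
A sketch of the n-bit addition example, assuming Python with the bits stored least-significant first; the loop body is the constant-time step, so the step count is exactly n:

    def add_n_bit(a_bits, b_bits):
        # a_bits and b_bits hold n bits each, least-significant bit first.
        n = len(a_bits)
        result, carry, steps = [], 0, 0
        for i in range(n):
            steps += 1                           # one constant-time bit addition
            total = a_bits[i] + b_bits[i] + carry
            result.append(total % 2)
            carry = total // 2
        result.append(carry)
        return result, steps                     # steps == n, so T(n) = c * n

    add_n_bit([1, 0, 1, 1], [1, 1, 0, 1])        # 4-bit inputs -> 4 steps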
Analysis of Algorithms
• Asymptotic Notations are mathematical tools used to analyze the performance of
algorithms by understanding how their efficiency changes as the input size grows.
• These notations provide a concise way to express the behavior of an algorithm’s time
or space complexity as the input size approaches infinity.
• Rather than comparing algorithms directly, asymptotic analysis focuses on
understanding the relative growth rates of algorithms’ complexities.
• It enables comparisons of algorithms’ efficiency by abstracting away machine-
specific constants and implementation details, focusing instead on fundamental
trends.
• Asymptotic analysis allows for the comparison of algorithms’ space and time
complexities by examining their performance characteristics as the input size varies.
• By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can
categorize algorithms based on their worst-case, best-case, or average-case time or
space complexities, providing valuable insights into their efficiency.
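
As a quick worked example (the operation counts are assumed for illustration): if an algorithm performs roughly 3n^2 + 5n + 2 basic operations on an input of size n, the n^2 term dominates as n grows, so its complexity is O(n^2), Ω(n^2), and therefore Θ(n^2); the constants 3, 5, and 2 are exactly the machine-specific details that asymptotic analysis abstracts away.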
Big-O Notation (O-notation):
• Big-O notation represents the upper bound of the running time of an
algorithm. Therefore, it gives the worst-case complexity of an algorithm.
• It is the most widely used notation for Asymptotic analysis.
• It specifies the upper bound of a function.
• It describes the maximum time required by an algorithm, i.e. the
worst-case time complexity.
• It gives the highest possible growth rate of the running time for a given
input size.
• Big-O (worst case) is defined as the condition that allows an algorithm
to complete statement execution in the longest amount of time possible.
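
A small sketch, assuming Python, of a worst-case bound: checking for a duplicate with two nested loops compares up to n(n-1)/2 pairs, so its running time is O(n^2):

    def has_duplicate(items):
        n = len(items)
        for i in range(n):
            for j in range(i + 1, n):
                # Worst case (no duplicates): every pair is compared,
                # about n*(n-1)/2 comparisons, hence O(n^2).
                if items[i] == items[j]:
                    return True
        return False

    has_duplicate([4, 1, 3, 2])   # worst case: no duplicate, so all pairs are checked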
Omega Notation (Ω-Notation):
• Omega notation represents the lower bound of the running time
of an algorithm. Thus, it provides the best case complexity of
an algorithm.
• The execution time serves as a lower bound on the algorithm’s
time complexity.
• It is defined as the condition that allows an algorithm to
complete statement execution in the shortest amount of time.
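
A sketch of a best-case bound, assuming Python: a bubble sort with an early exit finishes after a single pass over an already-sorted input, so its best-case running time is Ω(n):

    def bubble_sort(items):
        items = list(items)
        n = len(items)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
                    swapped = True
            if not swapped:          # best case: the input is already sorted,
                break                # so one pass of n - 1 comparisons suffices
        return items

    bubble_sort([1, 2, 3, 4, 5])     # best case: a single pass, then the early exit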
Theta Notation (Θ-Notation):
• Theta notation encloses the function from above and below.
Since it represents the upper and the lower bound of the
running time of an algorithm, it is used for analyzing the
average-case complexity of an algorithm.
• The execution time serves as both a lower and upper bound on
the algorithm’s time complexity.
• It serves as both the greatest and the least boundary of the running time
for a given input size.
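
A sketch of a tight bound, assuming Python: finding the maximum of n items always examines every element, no more and no less, so its running time is Θ(n) in every case:

    def find_max(items):
        largest = items[0]
        for value in items[1:]:      # always exactly n - 1 comparisons,
            if value > largest:      # regardless of the input's order,
                largest = value      # so the running time is Theta(n)
        return largest

    find_max([8, 3, 9, 1])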
