
Introduction to Data Structure

The name "Data Structure" is a combination of two words: "Data" and "Structure". Let's
go over each word individually:

 Data: Data is information that must be processed and stored in a computer. Data
can be anything from numbers and characters to dates, photos, and videos.
 Structure: Structure refers to how the data is organised. It specifies the relationship
between various data elements and how they are kept in memory. The structure
has an impact on the efficiency of data operations.

A data structure is a systematic way of storing and managing data in a computer so that
it can be retrieved and used efficiently.

What is Data Structure?


A data structure is a method of organising and storing data in a computer so that it
can be retrieved and used efficiently. The need for data structures arises from the
desire to manage massive volumes of data effectively. This can be achieved by
structuring data in a way that makes operations like search, insertion, and deletion
easier. Data structures give a framework for organising and manipulating data, making
information management and analysis easier. They also enable the development of
efficient data-processing algorithms, allowing difficult problems to be solved in a
reasonable amount of time.

Why Should We Learn Data Structures?


There are various reasons why learning data structures is important:

 Problem Solving: Data structures are an important tool for handling many real-
world problems. Understanding data structures can help you build more efficient
and effective programs.
 Algorithm Design: To store and modify data, many algorithms rely on data
structures. A solid understanding of data structures is required for the
development of effective algorithms.
 Job Requirements: Data structure knowledge is frequently required for
software engineering and computer science jobs. Candidates with a
thorough understanding of data structures and algorithms are highly valued by
employers.
 Competitive Programming: Data structure knowledge is essential for
competitive programming and coding competitions.
 Better Understanding of Computer Science: Data structures are a
fundamental concept in computer science and understanding them can deepen
your overall understanding of the field.

In summary, learning data structures is important for problem-solving, algorithm design,
job requirements, competitive programming, and gaining a better understanding of
computer science.

What is the Need for Data Structures?


We need data structures because they provide efficient solutions to organizing and
manipulating data. They help to store and retrieve data in an organized manner, making
it easier to manage and analyze the information. Some specific reasons why data
structures are important include:

1. Improved Time Complexity: Using appropriate data structures can lead to
better time complexity, making it possible to solve problems more quickly. For
example, searching for an element in a sorted array is faster than searching for it
in an unsorted array.
2. Better Space Complexity: Data structures can help to reduce the amount of
memory needed to store data. For example, using a linked list instead of an array
can reduce the amount of memory needed to store the same data.
3. Efficient Data Retrieval: Data structures make it easier to retrieve specific data
efficiently. For example, a hash table can retrieve data in constant time, while
searching through an unsorted array takes linear time.
4. Better Data Management: Data structures make it easier to manage and
manipulate data. For example, a stack can be used to implement an undo
functionality in an application.
5. Solving Complex Problems: Data structures can provide the foundation for
efficient algorithms, making it possible to solve complex problems in a
reasonable amount of time. For example, graph algorithms can be used to find
the shortest path between two points or to find the minimum spanning tree of a
graph.

In summary, data structures play a crucial role in solving problems related to data
management and analysis by providing efficient solutions for organizing, retrieving, and
manipulating data.

Characteristics of Data Structure


The following are some of the main characteristics of data structures:

1. Representation of Data: Data structures define a way of representing data in a
computer's memory, making it possible to store, manipulate, and access data
efficiently.
2. Access Techniques: Different data structures provide different techniques for
accessing data stored within them, such as random access or sequential access.
3. Storage Organization: Data structures define the organization of data in
memory, such as linear or hierarchical organization.
4. Insertion and Deletion Operations: Different data structures support different
methods for adding and removing elements, such as insertion at the end or
deletion from the front.
5. Time and Space Complexity: Data structures can have different time and space
complexities, depending on the operations they support and the way they
organize data.
6. Adaptability: Some data structures are more adaptable to certain types of data
and operations than others. For example, a stack is more suitable for problems
that require Last-In-First-Out (LIFO) behavior, while a queue is better suited for
problems that require First-In-First-Out (FIFO) behavior.
7. Flexibility: Different data structures have different degrees of flexibility, such as
the ability to dynamically grow or shrink in size, or the ability to efficiently insert or
delete elements in the middle.

In summary, these are some of the key characteristics of data structures that affect their
efficiency, suitability, and versatility for different types of problems and applications.

Advantages of Data Structure


The following are some of the main advantages of using data structures:

1. Better Data Organization: Data structures provide a way of organizing data in a
meaningful and efficient manner, making it easier to access and manipulate data.
2. Increased Data Retrieval Efficiency: Data structures can provide fast and
efficient retrieval of data, which is essential in many real-world applications.
3. Efficient Data Manipulation: Data structures can provide efficient methods for
adding, deleting, and modifying data, which is important in dynamic applications.
4. Improved Code Reusability: Reusable data structures can be used in many
different applications, reducing the time and effort required to write and maintain
code.
5. Better Problem-Solving Capability: Data structures provide a way of modeling
real-world problems and solving them in a more efficient and elegant manner.
6. Reduced Memory Requirements: Data structures can be designed to use
memory more efficiently, reducing the overall memory requirements of an
application.
7. Increased Data Security: Data structures can be designed to provide additional
security features, such as data encryption and protection against unauthorized
access.

==============================================================

Types of Data Structure


A data structure is a way of organizing, storing, and manipulating data in a computer so
that it can be efficiently used and retrieved. Data structures can be classified into two
categories: Primitive and Non-Primitive.
Primitive Data Structures
These are the most basic and fundamental data structures, such as integers, floats,
and Booleans, that are directly built into a programming language. They have a clear
and straightforward representation and are functionally constrained.

Here are a few points about Primitive Data Structures:

1. Basic: Primitive data structures are the most basic and fundamental data
structures that are directly built into a programming language.
2. Simple Representation: They have a simple and straightforward representation,
such as integers, floating-point numbers, and Booleans.
3. Limited Functionality: Primitive data structures are limited in their functionality
and can only store and manipulate simple data.
4. Built-in Types: Primitive data structures are typically built-in types in most
programming languages and do not require any special implementation.
5. Fast Access: Primitive data structures are stored in contiguous memory
locations, which allows for fast access to their elements.
6. Efficient Storage: Since primitive data structures have a simple representation,
they use less memory compared to non-primitive data structures.
7. Commonly used: Primitive data structures are commonly used in programming
to store basic information and are used as building blocks for more complex data
structures.

Non-Primitive Data Structures


These are more advanced and complex data structures that are built using primitive
data structures. Arrays, linked lists, trees, graphs, and hash tables are some examples.
They offer more advanced functionality and allow us to use the efficient storing and
manipulation of large volumes of data. They are used to handle complex problems that
primitive data structures alone cannot solve.

Here are a few points about Non-Primitive Data Structures:

1. Complex: Non-primitive data structures are more complex and sophisticated
data structures that are built using primitive data structures.
2. Advanced Functionality: They provide more advanced functionality and enable
the efficient storage and manipulation of large amounts of data.
3. Used to solve complex problems: Non-primitive data structures are used to
solve complex problems that cannot be solved using primitive data structures
alone.
4. Dynamic: Non-primitive data structures can grow or shrink in size during
runtime, whereas primitive data structures have a fixed size.
5. More Memory: Non-primitive data structures use more memory compared to
primitive data structures due to their complex representation.
6. Higher Abstraction: Non-primitive data structures provide a higher level of
abstraction compared to primitive data structures and hide the underlying
implementation details.
7. Efficient Access: Non-primitive data structures are designed to provide efficient
access to elements based on the problem they are used to solve.

Non-Primitive Data Structures are more complex data structures built using primitive
data structures. Examples of Non-Primitive Data Structures can be divided into two
categories: Linear and Non-Linear.

 Linear Data Structures: These data structures are organized in a linear or
sequential manner. Examples include: Array, Linked List, Queue, Stack
 Non-Linear Data Structures: These data structures are not organized in a
sequential or linear manner. Examples include: Trees and Graphs

Types of Linear Data Structures

Arrays

An Array is a collection of elements of the same data type stored in contiguous memory
locations. It is a linear data structure that provides efficient access to its elements based
on their indices. Each element in an array is referred to by its index, which is an integer
that starts from 0. The elements are stored in a contiguous block of memory, which
allows for fast access to elements based on their indices.

Arrays can be used to store a variety of data types, including numbers, strings, and
objects. Arrays can have a single dimension (also called a one-dimensional array) or
multiple dimensions (e.g. two-dimensional arrays, three-dimensional arrays, etc.).
Arrays are widely used in programming to store and manipulate data. They are used as
building blocks for more complex data structures and are used in a variety of algorithms
and applications.

Array data structures have many applications, including:

1. Storing collections of data, such as lists of items or numbers


2. Implementing mathematical objects such as matrices and vectors
3. Implementing dynamic data structures such as stacks, queues, and hash tables
4. Implementing tables, databases, and look-up tables
5. Implementing trees and graphs
6. Image processing and computer graphics
7. Representing strings and text data
8. Performing numerical computations and simulations.
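The indexing behaviour described above can be sketched in Python (an illustrative choice, since the text does not name a language), where the built-in list acts as a dynamic array:

```python
# A one-dimensional array: elements of the same kind, 0-based indices.
numbers = [10, 20, 30, 40, 50]

# Index-based access is fast because elements sit in a contiguous block.
first = numbers[0]   # element at index 0 -> 10
third = numbers[2]   # element at index 2 -> 30

# A two-dimensional array (matrix) as a list of lists.
matrix = [[1, 2],
          [3, 4]]

print(first, third, matrix[1][0])  # 10 30 3
```

Python lists are heap-allocated dynamic arrays, so they also grow as needed; fixed-size arrays in languages like C behave the same way for indexing but not for resizing.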

Linked List

A linked list is a data structure consisting of a sequence of nodes, where each node
holds data and a reference (link) to the next node in the sequence. The reference is
stored in a field called "next". The last node in the list contains a null reference
indicating the end of the list.

There are two main types of linked lists: singly linked lists and doubly linked lists. In
a singly linked list, each node only has a reference to the next node in the list, while in a
doubly linked list, each node has a reference to both the next node and the previous
node.

A linked list has the following advantages over an array:

1. Dynamic size: Linked lists can grow or shrink in size during execution of a
program, while the size of an array needs to be fixed when it is declared.
2. Ease of insertion and deletion: Inserting or deleting an element in a linked list
is easier compared to arrays, as elements in linked lists do not need to be
shifted.
3. No wastage of space: In arrays, unused space is wasted, while in linked lists,
space is used only as needed.

On the other hand, linked lists have some disadvantages, such as slow random access
time and higher overhead compared to arrays due to the extra memory required to store
the links.

Linked lists are commonly used for dynamic data structures such as stacks, queues,
and associative arrays. They can also be used for implementing trees, graphs, and
other complex data structures.
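The node-and-reference layout described above can be sketched as a minimal singly linked list in Python (an illustrative choice; the class and method names are this sketch's own):

```python
class Node:
    """A node holding data and a reference to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None  # a null reference marks the end of the list

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, data):
        # Insertion at the front needs no shifting of elements,
        # unlike insertion into an array.
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        # Traverse node by node until the null reference.
        out, current = [], self.head
        while current is not None:
            out.append(current.data)
            current = current.next
        return out

lst = SinglyLinkedList()
for value in (3, 2, 1):
    lst.insert_front(value)
print(lst.to_list())  # [1, 2, 3]
```

A doubly linked list would add a `prev` reference to each node, trading extra memory per node for traversal in both directions.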

Queue

A queue is a type of linear data structure that operates on the First In First Out (FIFO)
principle. That is, the first item added to the queue will be the first to be removed, and
the last item added will be the last to be removed.

A queue performs two important operations:

 Enqueue: Adds an item to the end (rear or tail) of the queue.
 Dequeue: Removes an item from the front (head) of the queue.

An array or a linked list can be used to implement a queue. The implementation used is
determined by the specific use case and needs.

Queues are widely used in a variety of algorithms and real-world applications, including:

 Task scheduling in operating systems.


 Message queue in communication systems.
 Print spooler in printers.
 BFS in graph algorithms.
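The enqueue and dequeue operations can be sketched in Python using `collections.deque` from the standard library, which supports efficient insertion and removal at both ends (Python is an illustrative choice here):

```python
from collections import deque

queue = deque()

# Enqueue: add items at the rear.
queue.append("task1")
queue.append("task2")
queue.append("task3")

# Dequeue: remove items from the front, in FIFO order.
print(queue.popleft())  # task1
print(queue.popleft())  # task2
```

A plain Python list would also work, but `list.pop(0)` shifts every remaining element, while `deque.popleft()` is constant time.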

Stack

A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. It
means the last element added to the stack will be the first one to be removed.
A stack can be thought of as a list or a collection of elements, where elements can only
be added or removed from the top of the stack. The top of the stack is the element that
was added most recently and the bottom of the stack is the element that was added
first in the stack.

Examples of real-life stack:

 Stack of plates or books, where you can only remove the top plate or book.
 The back button on a web browser, where you can only go back to the previously
visited page.

Examples of stack in computer science:

 Undo/Redo operations in text editors.


 Function calls in programming languages, where the program keeps track of
which function to return to after the current function returns.
 Compiler syntax parsing.

In computer science, a stack can be implemented using an array or linked list data
structure. Some of the common operations performed on a stack include push (add an
element to the top of the stack), pop (remove the top element from the stack), and peek
(return the top element of the stack without removing it).
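The push, pop, and peek operations can be sketched in Python (an illustrative choice), where a plain list serves as the stack and its last element is the top:

```python
stack = []

# push: add an element to the top of the stack.
stack.append("page1")
stack.append("page2")
stack.append("page3")

# peek: read the top element without removing it.
print(stack[-1])    # page3

# pop: remove elements from the top, in LIFO order
# (like the back button revisiting the most recent page first).
print(stack.pop())  # page3
print(stack.pop())  # page2
```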

Non-Linear Data Structures


These data structures are not organized in a sequential or linear manner. Examples
include: Trees and Graphs

Trees

A tree is a hierarchical data structure that consists of nodes connected by edges. It is a
non-linear data structure that represents a parent-child relationship between nodes.

A tree has a root node, which is the topmost node and has no parent. Each node in the
tree can have zero or more child nodes. A node with no children is called a leaf node.
The depth of a node is the number of edges from the root node to that node. The height
of the tree is the maximum depth of any node in the tree.
Trees have several applications in computer science, including:

 Representing hierarchical relationships, such as the file system on a computer.


 Storing data in a way that allows efficient search and retrieval, such as binary
search trees.
 Representing a graph data structure.

Examples of tree data structures:

 File system directory structure


 Expression tree
 Decision tree in machine learning
 Trie in computer science
 Huffman coding tree in data compression algorithms.

In computer science, a tree can be implemented using an array or linked list data
structure. Some of the common operations performed on a tree include insertion,
deletion, searching, and traversal (visiting all nodes in the tree).
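The parent-child structure, leaf nodes, and height described above can be sketched in Python (an illustrative choice; the names `TreeNode`, `height`, and `preorder` are this sketch's own):

```python
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.children = []  # zero or more child nodes

def height(node):
    # Height of the tree = maximum depth of any node, counted in edges.
    if not node.children:  # a node with no children is a leaf
        return 0
    return 1 + max(height(child) for child in node.children)

def preorder(node, out):
    # Traversal: visit the node, then recurse into each child.
    out.append(node.value)
    for child in node.children:
        preorder(child, out)
    return out

root = TreeNode("root")          # the topmost node, with no parent
a, b = TreeNode("a"), TreeNode("b")
root.children = [a, b]
a.children = [TreeNode("a1")]    # "a1" and "b" are leaf nodes

print(height(root))        # 2
print(preorder(root, []))  # ['root', 'a', 'a1', 'b']
```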

Graph

A graph is a mathematical structure used to model relationships between objects. It
consists of a set of vertices (also known as nodes) and a set of edges, which connect
pairs of vertices. The vertices represent the objects, and the edges represent the
relationships between the objects.

Note that graphs can be either directed or undirected, depending on whether the
relationships represented by the edges have a direction. In a directed graph, the edges
have a direction and are called arcs, while in an undirected graph, the edges have no
direction and are simply called edges.

(Figures: an undirected graph and a directed graph.)

Graphs can be used to model many different types of relationships, including social
networks, transportation networks, and relationships between web pages.

Examples of graphs:

1. Social network graph - vertices represent individuals, edges represent
connections (e.g., friendship) between individuals.
2. Road network graph - vertices represent intersections or locations, edges
represent roads connecting the locations.
3. Internet graph - vertices represent web pages, edges represent hyperlinks
between web pages.
4. Graph of flight routes - vertices represent airports, edges represent direct flights
between airports.
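An undirected graph like the road or social networks above is often stored as an adjacency list, mapping each vertex to the set of its neighbours. A minimal Python sketch (the vertex names are hypothetical):

```python
# An undirected graph as an adjacency list: vertex -> set of neighbours.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}

def neighbors(vertex):
    return graph[vertex]

# In an undirected graph every edge is symmetric:
# if C-D is an edge, D appears in C's set and C in D's.
print("D" in neighbors("C"))  # True
print("C" in neighbors("D"))  # True
```

For a directed graph, each arc would appear only in the source vertex's set, not in both.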

================================================================

Different Types of Data Structure


Operations with Examples
Data structure operations refer to the various actions that can be performed on a data
structure. A data structure is a way of organizing and storing data in a computer so that
it can be accessed and manipulated efficiently. Different data structures have different
capabilities and efficiencies for different operations.

Data structure operations can be broadly categorized into two types:

1. Basic Operations: These are the fundamental operations that can be performed
on a data structure. Examples include insertion, deletion, search, and access.

2. Advanced Operations: These are more complex operations that are built on top
of basic operations. Examples include sorting, merging, splitting, and traversal.

It's important to note that the specific operations that can be performed on a data
structure will depend on the type of data structure being used. For example, operations
like "pop" (removal of an item) and "push" (addition of an item) are specific to stack data
structures.

Here is a list of common data structure operations with examples:

1. Insertion: Adding an element to the data structure. For example, inserting an
element into an array or a linked list.

2. Deletion: Removing an element from the data structure. For example, removing
an element from an array or a linked list.

3. Search: Finding an element in the data structure. For example, searching for a
specific key or value in an array or a linked list.

4. Access: Retrieving an element from the data structure. For example, accessing
the first element of a queue.

5. Traversal: Visiting every element in the data structure. For example, printing all
elements of an array or a linked list.

6. Sorting: Arranging elements in the data structure in a particular order. For
example, sorting an array of integers in ascending or descending order.

7. Merging: Combining two data structures into one. For example, merging two
sorted linked lists into a single sorted linked list.
8. Splitting: Breaking a data structure into smaller parts. For example, splitting a
linked list into two equal halves.

9. Reversal: Reversing the order of elements in the data structure. For example,
reversing the elements of a stack.

10. Indexing: Accessing an element of the data structure using an index or a key.
For example, accessing the element at index i in an array.
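Several of the operations above can be sketched on a single Python list (an illustrative choice of both language and data structure):

```python
data = [5, 1, 4]

data.append(2)        # insertion at the end   -> [5, 1, 4, 2]
data.remove(1)        # deletion by value      -> [5, 4, 2]
found = 4 in data     # search                 -> True
first = data[0]       # access / indexing      -> 5
data.sort()           # sorting (ascending)    -> [2, 4, 5]
data.reverse()        # reversal               -> [5, 4, 2]

for item in data:     # traversal: visit every element
    print(item)
```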

===========================================================

What are the Real World Applications of Data Structures?

Data structures are a fundamental concept in computer science and have applications
in many different fields.

Here is a list of some common applications of Data structures, along with examples:

1. Database Management Systems - Data structures are used to store and
organize large amounts of data in a structured manner. For example, a relational
database uses tables and indices to store data, and a NoSQL database uses
document-oriented or key-value data structures.

2. Operating Systems - Data structures are used to manage resources, such as
memory allocation and process scheduling. For example, a priority queue data
structure can be used to manage the scheduling of processes in an operating
system.

3. Graphical User Interfaces - Data structures are used to store and display
hierarchical data, such as the file system in a computer's file explorer. For
example, a tree data structure can be used to represent the file hierarchy in a file
explorer.

4. Computer Networks - Data structures are used to store information about
network topology and to implement routing algorithms. For example, a graph
data structure can be used to represent a computer network, with nodes
representing devices and edges representing connections between devices.

5. Compiler Design - Data structures are used to store and process the syntax and
semantics of programming languages. For example, a parse tree data structure
can be used to represent the structure of a program and to generate intermediate
code.

6. Artificial Intelligence and Machine Learning - Data structures are used to
store and manipulate large amounts of data in order to train algorithms and make
predictions. For example, a decision tree data structure can be used in a
machine learning algorithm to make predictions based on input data.
7. Computer Graphics - Data structures are used to store and manipulate
geometric models for rendering and animation. For example, a 3D mesh data
structure can be used to represent a 3D object in computer graphics.

8. Bioinformatics - Data structures are used to store and analyze large amounts of
biological data, such as DNA sequences and protein structures. For example, a
suffix tree data structure can be used to efficiently store and search for patterns
in DNA sequences.

9. Web Search Engines - Data structures are used to store and retrieve large
amounts of web pages and to rank them based on relevance. For example, an
inverted index data structure can be used to store keywords and their
corresponding web pages, allowing for fast searching and retrieval.

10. Computer Algorithms and Optimization - Data structures are used to design
and analyze algorithms for solving complex problems. For example, a priority
queue data structure can be used in Dijkstra's algorithm for finding the shortest
path in a graph.

11. Geographic Information Systems (GIS) - Data structures are used to store and
analyze geographical data, such as maps, satellite images, and census data. For
example, a quadtree data structure can be used to efficiently store and search for
geographical features, such as cities or roads.

12. Video Games - Data structures are used to store and manipulate game objects
and their interactions. For example, a spatial partitioning data structure can be
used to efficiently search for and detect collisions between game objects.

13. Financial Systems - Data structures are used to store and manage financial
data, such as stock prices and transaction history. For example, a balanced
binary search tree data structure can be used to efficiently store and retrieve
stock prices.

14. Cryptography - Data structures are used to store and process encrypted data
and to implement cryptographic algorithms. For example, a hash table data
structure can be used to store and look up values for encryption and decryption.

15. Natural Language Processing (NLP) - Data structures are used to store and
process large amounts of text data, such as for text classification, sentiment
analysis, and text generation. For example, a trie data structure can be used to
efficiently store and search for words in a large text corpus.

==============================================================

Algorithm in Data Structure


An algorithm is a set of rules or instructions used to solve a specific computational
problem or achieve a desired outcome. In the context of data structures, an algorithm
refers to a procedure for manipulating or processing data stored in a particular data
structure such as an array, list, tree, graph, etc.

Algorithms can be designed using various computational paradigms, such as divide and
conquer, greedy, dynamic programming, and others. They can be expressed using
pseudocode or a high-level programming language. The efficiency and effectiveness of
an algorithm are determined by its time complexity and space complexity, which
describe the amount of time and memory required to execute the algorithm,
respectively.

Here are some examples of algorithms commonly used in data structures:

 Sorting algorithms: Algorithms used to sort data in a particular order, such as
bubble sort, insertion sort, quick sort, and merge sort.

 Search algorithms: Algorithms used to search for specific data in a data
structure, such as linear search and binary search.

 Tree algorithms: Algorithms used to manipulate tree data structures, such as
breadth-first search and depth-first search.

 Graph algorithms: Algorithms used to manipulate graph data structures, such
as Dijkstra's shortest path algorithm and Kruskal's minimum spanning tree
algorithm.

 Hash table algorithms: Algorithms used to implement hash tables, a data
structure that maps keys to values using a hash function.
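As one concrete instance of the search algorithms listed above, binary search on a sorted array can be sketched in Python (an illustrative choice; the function name is this sketch's own):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1    # target can only be in the right half
        else:
            high = mid - 1   # target can only be in the left half
    return -1

sorted_numbers = [2, 3, 5, 7, 11, 13]
print(binary_search(sorted_numbers, 7))  # 3
print(binary_search(sorted_numbers, 4))  # -1
```

Each step halves the remaining range, so the search runs in O(log n) time, compared with O(n) for a linear scan of an unsorted array.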

Why Do We Need Algorithms?


We need algorithms because they provide us with a systematic and efficient way to
solve computational problems. Algorithms allow us to process large amounts of data,
make complex decisions, and solve complex problems in a reliable and efficient
manner.

Here's an example to illustrate why we need algorithms:

Imagine you want to sort a large list of names in alphabetical order. You could do this
manually, but it would be a time-consuming and error-prone process. Instead, you can
use a sorting algorithm such as quicksort or mergesort, which are designed to sort data
in an efficient and reliable manner. By using a well-established algorithm, you can be
confident that the results will be accurate and the process will be much faster than
manual sorting.
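The name-sorting example above can be sketched in Python, whose built-in `sorted` uses Timsort, a hybrid of merge sort and insertion sort (Python is an illustrative choice here):

```python
names = ["Charlie", "alice", "Bob"]

# Case-insensitive alphabetical order, as a manual sort would produce.
sorted_names = sorted(names, key=str.lower)
print(sorted_names)  # ['alice', 'Bob', 'Charlie']
```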

In general, algorithms are essential for achieving computational efficiency, solving
complex problems, and automating repetitive tasks. They are used in many areas of
computer science, including artificial intelligence, cryptography, databases, and more.

Importance of Algorithms
Algorithms play a critical role in computer science and other fields, and their importance
can be seen in several ways:

1. Problem Solving: Algorithms provide a systematic approach to solving complex
problems and finding solutions. They can be used to process, analyze, and
visualize data, and to automate repetitive tasks.
2. Efficiency: Algorithms are designed to be efficient in terms of time and space
complexity. By optimizing algorithms, we can ensure that they run quickly and
use minimal memory, which is especially important for large data sets.

3. Accuracy: Algorithms can be used to process large amounts of data accurately
and quickly, without the risk of human error. This is particularly important in fields
such as medicine, finance, and scientific research where accurate data
processing is critical.

4. Automation: Algorithms can be automated to perform repetitive tasks quickly
and accurately, freeing up time and resources for other tasks.

5. Innovation: Algorithms drive innovation in fields such as machine learning,
artificial intelligence, and data science, enabling new and improved solutions to
complex problems.

6. Real-world Applications: Algorithms are used in a wide range of real-world
applications, including search engines, recommendation systems, navigation
systems, and e-commerce websites.

7. Job creation: The increasing demand for algorithms in various industries has
created job opportunities in areas such as software development, data analysis,
and machine learning.

In conclusion, algorithms are a fundamental tool in computer science and play a crucial
role in solving complex problems, improving efficiency, and driving innovation.

Characteristics of an Algorithm
An algorithm is a set of well-defined instructions used to solve a specific computational
problem. The following are the important characteristics of an algorithm:

1. Well-Defined Inputs: An algorithm must have well-defined inputs, which can be
any data or information required to perform the calculation or solve the problem.

2. Unambiguous Instructions: The instructions of an algorithm must be clear,
concise, and unambiguous. This means that the instructions must specify exactly
what needs to be done without leaving any room for interpretation.

3. Finite Number of Steps: An algorithm must have a finite number of steps,
meaning that it must come to an end after a certain number of operations have
been performed. This is an important characteristic because it ensures that the
algorithm will terminate in a reasonable amount of time and will not run
indefinitely.

4. Defined Outputs: An algorithm must have well-defined outputs, which are the
results of the calculation or problem-solving process.

5. Feasibility: An algorithm must be feasible, meaning that it can be implemented
in practice and its operations can be performed in a reasonable amount of time
and memory.

6. Consistency: An algorithm must be consistent, meaning that it produces the
same output for the same inputs every time it is run.
7. Generality: An algorithm should be general, meaning that it can be applied to a
wide range of problems, not just a specific instance.

8. Optimization: An algorithm should be optimized for performance, meaning that it
should be designed to perform its operations in the most efficient way possible.

By following these characteristics, an algorithm can be designed to be effective,
efficient, and reliable, making it an essential tool for solving computational problems and
achieving desired outcomes.

Dataflow of an Algorithm
The data flow of an algorithm refers to the sequence in which data is processed,
transformed and produced as output. It encompasses the input of data, the operations
performed on the data, and the resulting output. A well-designed data flow should be
efficient, understandable, and maintain the integrity of the data. Some points about the
dataflow of an algorithm are as follows:

1. Input of Data: The first step in the data flow of an algorithm is the input of data.
This can be from a user, a file, or any other source.

2. Data Processing: The next step is to process the data, which includes
performing various operations such as arithmetic calculations, string
manipulations, or other transformations.

3. Decision Making: The algorithm may need to make decisions based on the data
and execute different operations accordingly. This can be represented as a
branching structure in the flow chart.

4. Data Storage: The intermediate results of the processing may need to be stored
temporarily for later use.

5. Data Output: The final step is to produce the output, which can be a result, a
visual representation of the data, or any other form of representation.

6. Data Validation: Before processing the data, it is important to validate the data
to ensure that it is in the correct format and meets the requirements of the
algorithm.

7. Error Handling: The data flow should also include error handling mechanisms to
handle unexpected or incorrect input, or to handle other types of exceptions that
may arise during the processing.

8. Performance Optimization: The data flow can be optimized for performance by reducing the amount of data that needs to be processed, using efficient data structures and algorithms, and minimizing the use of memory and other resources.

Overall, the data flow of an algorithm should be well-designed and efficient to produce
accurate and meaningful results.

Algorithm for Calculating Simple Interest


1. Start

2. Input: Take the principal amount (P), the rate of interest (R) and the time period
(T) as inputs.

3. Calculate the interest: Multiply the principal amount (P) by the rate of interest (R)
and the time period (T) to calculate the interest (I = P * R * T).

4. Output: Display the calculated interest (I).

5. End.

This algorithm uses the simple interest formula, which is I = P * R * T, where P is the
principal amount, R is the rate of interest, T is the time period, and I is the interest. The
algorithm takes these values as input and calculates the interest, which is then
displayed as output.
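The steps above can be sketched as a small C function. Here the rate R is assumed to be given as a fraction (e.g. 0.05 for 5% per period), since the formula I = P * R * T does not divide by 100; the function name is illustrative:

```c
/* Simple interest: I = P * R * T.
   p: principal amount, r: rate of interest as a fraction, t: time period. */
double simple_interest(double p, double r, double t)
{
    return p * r * t;
}
```

For example, a principal of 1000 at a rate of 0.5 for 2 time periods yields an interest of 1000 * 0.5 * 2 = 1000.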

Factors of an Algorithm
The factors that contribute to the effectiveness and efficiency of an algorithm can be broadly categorized as follows:

1. Time Complexity: Time complexity refers to the amount of time an algorithm takes to run as a function of the size of the input data. Time complexity is a crucial factor in determining the performance of an algorithm, especially for large input data.

2. Space Complexity: Space complexity refers to the amount of memory an algorithm requires to run as a function of the size of the input data. Space complexity is important to consider, especially when memory is limited or when the algorithm needs to be scalable.

3. Correctness: Correctness refers to the ability of an algorithm to produce the correct results for all possible inputs. An algorithm should be tested thoroughly for different input cases to ensure that it produces the correct results.

4. Readability and Maintainability: The readability and maintainability of an algorithm refer to the ease with which the algorithm can be understood and modified. A well-designed algorithm should be readable, understandable, and maintainable to allow for future modifications or updates.

5. Simplicity: A simple algorithm is generally easier to understand, implement, and debug than a complex algorithm. A simple algorithm is also less likely to contain bugs or errors.

6. Ease of Implementation: The ease of implementation of an algorithm refers to how straightforward and uncomplicated it is to implement the algorithm in a programming language.

7. Scalability: Scalability refers to the ability of an algorithm to handle larger input sizes without significant performance degradation.

These factors should be considered and balanced when designing an algorithm to achieve the desired level of performance, accuracy, and efficiency.
Issues of Algorithms
There are several issues that can arise when algorithms are used in data structures:

1. Time Complexity: The time complexity of an algorithm is a measure of how long it takes to run as a function of the size of the input data. Some algorithms may be efficient when dealing with small amounts of data, but become inefficient when the data size grows larger.

2. Space Complexity: Space complexity refers to the amount of memory an algorithm requires to run as a function of the size of the input data. Some algorithms may be memory-intensive, leading to increased memory usage and decreased performance.

3. Scalability: Some algorithms may not be scalable, meaning that their performance degrades as the size of the input data increases.

4. Overhead: Some algorithms may have a high overhead, meaning that they consume a large amount of resources, such as time and memory, even when the data size is small.

5. Inefficient Use of Resources: Some algorithms may not use resources efficiently, leading to wasted memory or processing time.

6. Unintended Consequences: Algorithms can have unintended consequences, such as creating new forms of discrimination, amplifying existing biases, or causing harm to individuals or communities.

7. Limited Problem Solving: Some algorithms may be limited in their problem-solving abilities, meaning that they may not be suitable for all types of data structures or all types of problems.

These issues highlight the importance of careful selection of algorithms when working
with data structures. The right algorithm can help to solve problems efficiently and
effectively, while the wrong algorithm can lead to inefficiency, waste of resources, and
unintended consequences.

Approaches to Algorithm Design


There are several approaches to designing algorithms:

1. Brute Force: This approach involves trying all possible solutions to a problem
and selecting the best one. This approach can be used for small problems, but is
often too slow for larger problems.

2. Divide and Conquer: This approach involves breaking a problem into smaller
sub-problems and solving each sub-problem separately. The results from the
sub-problems are then combined to find the solution to the larger problem.

3. Dynamic Programming: This approach involves breaking a problem into smaller sub-problems and storing the solutions to these sub-problems to avoid redundant computation. This approach can be used to solve problems with overlapping sub-problems, such as optimization problems.

4. Greedy Algorithms: This approach involves making the best locally optimal choice at each step and hoping that this leads to a globally optimal solution. This approach can be used for problems with clear optimal substructure, such as scheduling problems.

5. Backtracking: This approach involves searching through all possible solutions to a problem by incrementally building and testing partial solutions. This approach can be used for problems with many possible solutions, such as the traveling salesman problem.

6. Randomized Algorithms: This approach involves using randomness to solve problems. Randomized algorithms can be used for problems where an exact solution is difficult or impossible to find, such as optimization problems.

7. Approximation Algorithms: This approach involves finding a solution that is close to the optimal solution, but not necessarily the optimal solution. This approach can be used for problems where finding the optimal solution is computationally infeasible.
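As a concrete sketch of the divide-and-conquer approach, consider binary search: at each step the sorted input is split in half and only one half is searched further. This is a minimal illustration, not part of the original text:

```c
/* Binary search on a sorted array (divide and conquer):
   each iteration halves the portion of the array still under
   consideration. Returns the index of target, or -1 if absent. */
int binary_search(const int *a, int n, int target)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;  /* middle of the current range */
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            lo = mid + 1;              /* discard the left half */
        else
            hi = mid - 1;              /* discard the right half */
    }
    return -1;
}
```

Because the search range halves on every iteration, the running time grows logarithmically with n, in contrast to the brute-force approach of checking every element.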

==================================================================

Algorithm Analysis
Algorithm analysis is the process of evaluating the performance of an algorithm and
measuring its computational efficiency. The goal of algorithm analysis is to determine
the time and/or space complexity of an algorithm and to compare it to other algorithms
for the same problem.

There are two main aspects of algorithm analysis: Time complexity and Space
complexity. Time complexity is a measure of the amount of time an algorithm takes to
run as a function of the size of the input data. Space complexity is a measure of the
amount of memory an algorithm requires to run as a function of the size of the input
data.

There are several methods for analyzing the performance of algorithms, including:

1. Asymptotic Analysis: Asymptotic analysis is the process of describing an algorithm's running time in mathematical units to determine the program's limitations. It helps to study the long-term behavior of algorithms and estimate their efficiency in terms of time and space complexity.

2. Worst-Case Analysis: Worst-case analysis is a method of evaluating the performance of an algorithm by analyzing the maximum amount of resources (such as time or memory) that the algorithm would require for any input of a given size. In other words, it's a way to determine how the algorithm would perform in the worst possible scenario, allowing us to understand the upper bound of its performance. This information is useful in determining the reliability and stability of the algorithm, especially in mission-critical applications where high performance is crucial.

3. Average-Case Analysis: Average-case analysis is a method of evaluating the performance of an algorithm by analyzing the expected amount of resources (such as time or memory) that the algorithm would require to solve a random input of a given size. It provides an estimation of the average performance of an algorithm over all possible inputs.

4. Best-Case Analysis: Best-case analysis is a method of evaluating the performance of an algorithm by analyzing the minimum amount of resources (such as time or memory) that the algorithm would require for any input of a given size. In other words, it's a way to determine how the algorithm would perform in the best possible scenario, giving the lower bound of its performance. This information is useful in determining the minimum resources required to solve a problem with the algorithm.

5. Amortized Analysis: Amortized analysis is a method of evaluating the average performance of an algorithm over multiple operations, by assigning a cost to each operation and spreading it out over a sequence of operations. This allows us to determine the average performance of an algorithm over a sequence of operations, even if some of the operations are more expensive than others. For example, consider a dynamic array that doubles its capacity when it becomes full. An insertion that triggers a resize is expensive, because every existing element must be copied, but if the cost of that resize is spread over all the cheap insertions that preceded it, the average cost of each insertion remains small.

6. Experimental Analysis: Experimental analysis in data structures refers to the study of the performance of different data structures and algorithms and their implementations using empirical experiments. This involves testing and evaluating different data structures on various inputs and operations to measure the time and space complexity of each implementation and compare the results. The goal of experimental analysis in data structures is to determine the most efficient data structure and algorithm for a particular problem and make informed decisions about choosing the right data structure for a specific use case.

===========================================================

What is Algorithm Complexity?


Algorithm complexity refers to the number of steps an algorithm needs to take in order
to solve a particular problem. When given an input of size n, an algorithm's complexity is
determined by how long it would take to complete a particular problem. If an algorithm
needs to scale, even for very large values of n, it should still be able to compute the
result in a finite and reasonable amount of time. Because of this, complexity is
estimated asymptotically as n gets closer to infinity. While complexity is typically
measured in terms of time, it can also be measured in terms of space, or more
specifically, in terms of the amount of memory used by the algorithm.

Examples to Understand Algorithm Complexity


 If there is a big, unsorted list and you are searching for a certain item, you will likely compare each item. Search time is proportional to the list size. In this case, complexity is linear.
 On the other hand, searching for a word in a dictionary is quicker because the words are arranged in order; you know the order and can quickly determine whether you need to turn to earlier or later pages. This is an illustration of logarithmic complexity.
 No matter how many words there are in the dictionary, selecting the first word from it always takes a constant amount of time. The complexity of joining the end of a bank queue is also constant regardless of how long the queue is. This is an illustration of constant complexity.
 The difficulty increases quadratically if you are asked to find all duplicates in an unsorted list. Searching for duplicates of a single item is linear; if we apply this to every item, complexity takes a quadratic form.
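The first and third examples above can be contrasted in C: linear search compares up to n elements, while selecting an element by index is a single constant-time step (both functions below are illustrative sketches):

```c
/* Linear complexity: in the worst case every one of the n elements
   is compared, so the time grows in proportion to n. */
int linear_search(const int *a, int n, int target)
{
    for (int i = 0; i < n; i++)
        if (a[i] == target)
            return i;
    return -1;
}

/* Constant complexity: one index computation, independent of n. */
int first_element(const int *a)
{
    return a[0];
}
```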

What is the time space trade off?


A space-time tradeoff is a strategy used in computer science to either solve a problem
or calculation quickly by using more storage space, or slowly and carefully by using very
little space.

An algorithm or software in this situation trades off more space usage for less time. Thus, space refers to the amount of data storage used (RAM, HDD, etc.) and time refers to the amount of time needed to complete a certain activity (computation time or response time).

Need of Time Space Tradeoff

 Less time by using more memory.
 Solving a problem in very little space by spending a long time.
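A classic example of trading space for time is memoization. The sketch below stores already-computed Fibonacci numbers in a cache array, using extra memory so that each value is computed only once (the cache size of 50 is an arbitrary assumption for this sketch):

```c
#define CACHE_SIZE 50                 /* arbitrary limit for this sketch */

static long long cache[CACHE_SIZE];   /* extra space bought to save time */

/* Fibonacci with memoization: valid for 0 <= n < CACHE_SIZE.
   Without the cache this recursion takes exponential time; with it,
   each value is computed once and then looked up. */
long long fib(int n)
{
    if (n < 2)
        return n;
    if (cache[n] != 0)
        return cache[n];              /* already computed: reuse it */
    cache[n] = fib(n - 1) + fib(n - 2);
    return cache[n];
}
```

Without the cache, fib(40) performs hundreds of millions of recursive calls; with it, each value from 2 to 40 is computed exactly once.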

Asymptotic Analysis
Asymptotic notations are mathematical notations for analysing the runtime of a given algorithm for a large input. They allow us to compare the runtimes of various algorithms without having to measure them directly. To evaluate how well an algorithm performs for various inputs, we must determine how long it takes to complete each step. In principle this time could be measured manually, but such runtimes are often too small and machine-dependent to record by hand, so manually measuring an algorithm's runtime is not practical.

Asymptotic notation is a mathematical notation that is used to analyze the time complexity and the runtime of an algorithm for a large input.

An algorithm's asymptotic analysis involves determining the mathematical limits or framing of its run-time performance. We may easily draw conclusions about an algorithm's best case, average case, and worst case scenarios using asymptotic analysis.

An algorithm's time requirement often falls into one of three categories.

 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.
Types of Asymptotic Notations
The complexity of algorithms for asymptotic analysis is represented using asymptotic
notations. These symbols are tools used in mathematics to express complexity. Three
notations are usually applied.

 Big Oh (Ο) Notation
 Big-Omega (Ω) Notation
 Big-Theta (Θ) Notation

1. Big Oh (O) Notation

The formal notation for expressing the upper bound of an algorithm's execution time is O(n). It describes the worst-case time complexity, or the maximum time an algorithm can take to run.

Consider two functions f(n) and g(n) defined on an unbounded set of positive real numbers, with g(n) strictly positive for every large value of n.

The function f is said to be O(g) (read as big-oh of g) if there exist a constant c > 0 and a natural number n0 such that f(n) ≤ c*g(n) for all n ≥ n0. This can be written as below:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n), for all n ≥ n0 }

2. Big-Omega (Ω) Notation

The formal notation for expressing the lower bound of the execution time of an algorithm is Ω(n). It measures the best-case time complexity, or the fastest possible time in which an algorithm may run.

We write f(n) = Ω(g(n)) if there exist positive constants c and n0 such that, to the right of n0, f(n) always lies on or above c*g(n).

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n), for all n ≥ n0 }

3. Big-Theta (Θ) Notation
The formal notation for expressing both the lower and upper bounds of an algorithm's execution time is Θ(n). It gives a tight bound on the running time and is often used to describe the average case of an algorithm. It is defined as follows:

We write f(n) = Θ(g(n)) if there exist positive constants c1, c2 and n0 such that, to the right of n0, f(n) always lies between c1*g(n) and c2*g(n) inclusive.

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n), for all n ≥ n0 }
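As a worked example, f(n) = 3n + 2 is O(n): taking c = 4 and n0 = 2 gives 3n + 2 ≤ 4n for all n ≥ 2. The small C helper below spot-checks this inequality over a range of n (a demonstration of the definition, not a proof):

```c
/* Returns 1 if 3n + 2 <= 4n holds for every n in [n0, limit],
   i.e. a finite spot-check of the big-O witness c = 4, n0 = 2. */
int bound_holds(int n0, int limit)
{
    for (int n = n0; n <= limit; n++)
        if (3 * n + 2 > 4 * n)
            return 0;                 /* inequality violated */
    return 1;
}
```

Starting the check at n0 = 1 fails (3*1 + 2 = 5 > 4), which is exactly why the definition only requires the bound to hold from some threshold n0 onward.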

=============================================================

What are arrays in data structures?


In this tutorial you will learn about arrays: their definition, declaration, and initialisation, with examples.

Definition of Array
An array is a collection of elements of the same data type, stored in contiguous
memory locations. Arrays in C are useful when you need to store and manage a large
amount of data of the same data type. They enable you to store multiple values in a
single variable and simply access and manipulate them using a single variable name
and an index.

Types of Arrays
An array in C is a collection of elements of the same data type, stored in contiguous
memory locations. There are two types of arrays:

 One Dimensional Array
 Two Dimensional Array

One Dimensional Array


A one-dimensional (1D) array is a linear list of elements of the same data type, such as
integers or characters. The elements are stored in contiguous memory locations and
can be accessed using a single index. The index of the elements in a 1D array starts
from 0.
Two Dimensional Array
A two-dimensional (2D) array is a table-like structure that stores data in rows and
columns. The elements are stored in contiguous memory locations and can be
accessed using two indices, one for the row and one for the column. The index of the
elements in a 2D array starts from 0.

Memory Representation of Array


In computer science, an array is a data structure that stores a collection of elements of
the same type in contiguous memory locations. The memory representation of an array
refers to how the elements of the array are stored in memory.

The most common way to represent an array in memory is to store the elements sequentially in a block of memory. For example, if we have an array of integers a with 5 elements and each integer occupies 4 bytes, the elements a[0] through a[4] are stored one after another, e.g. at addresses 100, 104, 108, 112 and 116.

Row Major Order & Column Major Order


There are two main ways to represent an array in memory: row-major order and
column-major order.
Row Major Order
In row-major order, the elements of the array are stored row by row in memory. For
example, in a two-dimensional array A[i][j], the elements of the first row are stored
first, followed by the elements of the second row, and so on. Within each row, the
elements are stored in order of increasing column index. This means that the memory
location of an element A[i][j] can be computed as &A[0][0] + i * n + j, where n is
the number of columns in the array.

Column Major Order


In column-major order, the elements of the array are stored column by column in
memory. For example, in a two-dimensional array A[i][j], the elements of the first
column are stored first, followed by the elements of the second column, and so on.
Within each column, the elements are stored in order of increasing row index. This
means that the memory location of an element A[i][j] can be computed as &A[0][0]
+ j * m + i, where m is the number of rows in the array.

The choice of memory representation can have an impact on the performance of array
operations. For example, if a program needs to access all the elements of a row in an
array, it is more efficient to use row-major order because the elements of the row are
stored contiguously in memory. Similarly, if a program needs to access all the elements
of a column in an array, it is more efficient to use column-major order.

In general, the memory representation of an array should be chosen based on the access patterns of the program that uses the array. If the program primarily accesses rows of the array, row-major order should be used, and if the program primarily accesses columns of the array, column-major order should be used.
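The two orderings can be captured as small index-mapping helpers in C. For an array with m rows and n columns, each function returns the element offset of A[i][j] under the corresponding layout (the function names are illustrative):

```c
/* Offset (in elements) of A[i][j] when the m x n array is stored
   in row-major order: rows are laid out one after another. */
int row_major_offset(int i, int j, int n)
{
    return i * n + j;
}

/* Offset (in elements) of A[i][j] when the m x n array is stored
   in column-major order: columns are laid out one after another. */
int col_major_offset(int i, int j, int m)
{
    return j * m + i;
}
```

Multiplying the returned offset by the element size and adding it to the base address yields the byte address of the element.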

Address Calculation in One Dimensional Array


To access an element in an array, the computer needs to know its memory address.
The memory address of an array element can be calculated using the following formula:

address = base_address + index * size_of_element


Where:

 base_address: the memory address of the first element in the array.
 index: the index or subscript of the element you want to access.
 size_of_element: the size (in bytes) of each element in the array.

For example, let's say you have an array of integers with 5 elements and the base
address of the array is 100. The size of each integer element is 4 bytes. To access the
third element a[2] of the array, you would use the following formula:

address of a[2] = 100 + 2 * 4 = 108

So the memory address of the third element in the array is 108.

It's important to note that array indices start at 0, so the first element in the array has an
index of 0, the second element has an index of 1, and so on.
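The formula translates directly into a small C helper (the function name is illustrative):

```c
#include <stddef.h>

/* Address of element `index`, following
   address = base_address + index * size_of_element. */
size_t element_address(size_t base_address, size_t index,
                       size_t size_of_element)
{
    return base_address + index * size_of_element;
}
```

With a base address of 100 and 4-byte integers, element_address(100, 2, 4) gives 108, matching the worked example above.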

Address Calculation in Two Dimensional Array


To access an element in a 2D array, the computer needs to know its memory address. The memory address of an element in a 2D array stored in row-major order can be calculated using the following formula:

address = base_address + (row_index * num_columns + column_index) * size_of_element

Where:

 base_address: the memory address of the first element in the 2D array.
 row_index: the index of the row containing the element you want to access.
 column_index: the index of the column containing the element you want to access.
 num_columns: the number of columns in the 2D array.
 size_of_element: the size (in bytes) of each element in the 2D array.

For example, let's say you have a 2D array of integers with 3 rows and 4 columns, and
the base address of the array is 100. The size of each integer element is 4 bytes. To
access the element at row 1, column 2 which is a[1][2] you would use the following
formula:

address of a[1][2] = 100 + (1 * 4 + 2) * 4 = 100 + 24 = 124

So the memory address of the element at row 1, column 2 in the 2D array is 124.
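The row-major formula can likewise be expressed as a C helper (the function name is illustrative):

```c
#include <stddef.h>

/* Row-major address of element [row][col], following
   address = base + (row * num_columns + col) * size_of_element. */
size_t element_address_2d(size_t base, size_t row, size_t col,
                          size_t num_columns, size_t size_of_element)
{
    return base + (row * num_columns + col) * size_of_element;
}
```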

===============================================================

Operations on Arrays in Data Structures


Arrays are a fundamental data structure that allows for the storage and manipulation of
a collection of elements of the same data type. In addition to traversal, arrays can
support a variety of operations that make them useful in a wide range of applications.
Here are some common operations on arrays in data structures:

1. Traversal: Array traversal refers to the process of accessing each element of an array in a sequential order from the beginning to the end or from the end to the beginning.

2. Insertion: Insertion is the process of adding a new element to an array. Depending on the application, the new element can be inserted at the beginning, end, or any other location within the array. When inserting an element, the other elements in the array may need to be shifted to accommodate the new element.

3. Deletion: Deletion is the process of removing an existing element from an array. Depending on the application, the element to be deleted can be at any location within the array. When deleting an element, the other elements in the array may need to be shifted to fill the gap left by the deleted element.

4. Search: Search is the process of finding a specific element within an array. This is done by comparing the target element with each element in the array until a match is found. There are various search algorithms that can be used, such as linear search and binary search.

5. Sorting: Sorting is the process of arranging the elements in an array in a specific order. The most common sorting algorithms are bubble sort, insertion sort, selection sort, merge sort, and quicksort. Sorting an array can help with searching and other operations, as it allows for efficient data processing.

6. Access: Access is the process of retrieving the value of an element from an array. This is done by specifying the index of the desired element within the array. Array access is fast and efficient, as it does not require traversal of the entire array.

These are just a few of the common operations that can be performed on arrays in data
structures. Depending on the application, there may be other operations that are
required to effectively manipulate and process array data.

Traversal Operation
Array traversal refers to the process of accessing each element of an array in a
sequential order. In data structures, an array is a collection of similar data items that are
stored in contiguous memory locations. Traversing an array involves visiting each
element of the array in a specified order, typically from the beginning to the end or from
the end to the beginning.

Algorithm for Traversing an Array


Traversal (data,LB,UB)

Here data is array name and LB is Lower Bound (start) index of the first element of an
array. UB is Upper Bound (End) is the index of the last element

 Step 1: Start
 Step 2: [Initialize variable. ] Set i = LB
 Step 3: Repeat steps 4 and 5 While i <= UB
 Step 4: Apply process (Print ) to data[i].
 Step 5: Increment i=i+1
 Step 6: End of loop
 Step 7: Stop

Program in C to perform traversal of an array

#include <stdio.h>
#include <conio.h>
void main()
{
// Declare the array and its size
int data[10], i = 0, ub;
printf("Enter the size of the array: ");
scanf("%d", &ub);

// Read the array elements
for (i = 0; i < ub; i++)
{
scanf("%d", &data[i]);
}

// Traverse the array and print each element
for (i = 0; i < ub; i++)
{
printf("%d ", data[i]);
}

getch();
}

Insertion Operation
Insertion in an array refers to the process of adding a new element to a specific position
within the array while shifting the existing elements to make room for the new element.

Here's a simple algorithm in simple language to perform an insertion in an array:

1. Define the array and the element to be inserted.
2. Determine the position where the element should be inserted.
3. Shift the elements after the insertion position to the right by one index to make room for the new element.
4. Insert the new element at the desired position.
5. Update the length of the array.

Algorithm to Insert an element in an Array


Here size is the array size, pos is the position where the element is to be inserted, and item is the new value to be placed in the array.

 Step 1: Start
 Step 2: [Initialize variable ] Set i = size - 1
 Step 3: Repeat Step 4 and 5 While i >= pos-1
 Step 4: [Move ith element forward. ] set arr[i+1] = arr[i]
 Step 5: [Decrease counter. ] Set i = i - 1
 Step 6: [End of step 3 loop. ]
 Step 7: [Insert element. ] Set arr[pos-1] = item
 Step 8: Stop

Program in C to insert a new item in an array

#include <stdio.h>
#include <conio.h>
void main()
{
int arr[50], n, pos, item,i;

// Input the size of the array


printf("Enter the size of the array: ");
scanf("%d", &n);

// Input the elements of the array


printf("Enter the elements of the array:");
for ( i = 0; i < n; i++)
{
scanf("%d", &arr[i]);
}

// Input the position and value of the new element to be inserted
printf("Enter the position and value of the new element to be inserted: ");
scanf("%d%d", &pos, &item);

// Shift the elements after the insertion position to the right


for (i = n - 1; i >= pos-1; i--)
{
arr[i+1] = arr[i];
}

// Insert the new element at the desired position


arr[pos-1] = item;

// Increment the size of the array


n++;

// Print the updated array


printf("The updated array is:");
for (i = 0; i < n; i++)
{
printf("%d ", arr[i]);
}
getch();
}

Deletion Operation
Deletion in an array means removing an element from an array and shifting the
remaining elements to fill the empty space.

Here is the algorithm to delete an element from an array in C:

1. Initialize the array and the index of the element to be deleted.
2. Traverse the array from the index of the element to be deleted until the end of the array.
3. Shift each element one position to the left.
4. Decrement the size of the array by 1.

Algorithm to Delete an element from an Array:

 Step 1: Start
 Step 2: [Initialize variable] Set i = pos - 1
 Step 3: Repeat Step 4 and 5 for i = pos - 1 to i < size
 Step 4: [Move ith element left side ] set a[i] = a[i+1]
 Step 5: [Increment ] Set i = i + 1
 Step 6: [End of step 03 loop. ]
 Step 7: [Update Array Size] set size = size - 1
 Step 8: Stop

Program in C to perform deletion in array

#include <stdio.h>
#include <conio.h>
void main()
{
int array[100], position, i, n;

printf("Enter the number of elements in array");


scanf("%d", &n);

printf("Enter %d elements", n);


for (i = 0; i < n; i++)
{
scanf("%d", &array[i]);
}
printf("Enter the location where you wish to delete element");
scanf("%d", &position);

if (position >= n+1)


{
printf("Deletion not possible.");
}
else
{
for (i = position-1; i < n-1; i++)
{
array[i] = array[i+1];
}
printf("Final array is");

for (i = 0; i < n-1; i++)


{
printf("%d ", array[i]);
}
}
getch();
}

==================================================================

Sparse Matrix in Data Structure


What is a Matrix?
In computer science, a matrix is a data structure that represents a collection of values
arranged in a two-dimensional grid of rows and columns. Each element in the matrix is
identified by its row and column indices. Matrices are commonly used to represent data
in many fields, such as mathematics, physics, engineering, computer graphics, and
machine learning.
A matrix can be represented in many different ways, including using arrays, linked lists, or other data structures. The most common representation of a matrix is a two-dimensional array, where the rows and columns correspond to the first and second indices, respectively. For example, a matrix with three rows and four columns can be represented as a 3x4 two-dimensional array.

What is Sparse Matrix?


A sparse matrix is a matrix that contains a large number of zero elements compared to
non-zero elements. In such matrices, it is not efficient to use traditional representations,
which store all elements, because a significant amount of memory will be wasted on
storing zero elements. Therefore, special techniques are used to represent sparse
matrices, which can save a lot of memory.

Representation of Sparse Matrix


Sparse matrices can be represented using both arrays and linked lists. Let's explore
both representations below:

1. Array Representation: In this representation, we use a two-dimensional array to store the non-zero elements of the sparse matrix. In the 2D array representation of a sparse matrix, three fields are used, named ROW, COLUMN and VALUE.
2. Linked List Representation: In this representation, we use a linked list to store the non-zero elements of the sparse matrix. Each node of the linked list represents a non-zero element and contains four fields: the row index of the element, the column index of the element, the value of the element, and a pointer storing the address of the next node.

Array Representation of Sparse Matrix


In the array representation of a sparse matrix, we store only the non-zero elements of
the matrix along with their row and column indices. We use three arrays to represent the
sparse matrix:

1. row_index: It stores the row indices of the corresponding elements in the array.
2. col_index: It stores the column indices of the corresponding elements in the array.
3. value : It stores the non-zero elements of the matrix in row-major order.

The size of these arrays depends on the number of non-zero elements in the matrix.
Here's an example of how we can represent a sparse matrix using the array
representation:

Program in C to implement array representation of sparse matrix


#include <stdio.h>
#include <conio.h>
void main()
{
//sm is sparse matrix
int sm[100][100], num_rows, num_cols, i, j, num_non_zero = 0;
printf("Enter the number of rows and columns of the matrix: ");
scanf("%d %d", &num_rows, &num_cols);

// Read the matrix elements from the user


printf("Enter the matrix elements:");
for (i = 0; i < num_rows; i++)
{
for (j = 0; j < num_cols; j++)
{
scanf("%d", &sm[i][j]);

if (sm[i][j] != 0)
{
// Increment the number of non-zero elements
num_non_zero++;
}
}
}

// Defining final matrix


int fmatrix[3][num_non_zero];
int k=0;

// final matrix from Sparse Matrix


for(int i=0; i<num_rows; i++)
{
for(int j=0; j<num_cols; j++)
{
if(sm[i][j]!=0)
{
fmatrix[0][k] = i;
fmatrix[1][k] = j;
fmatrix[2][k] = sm[i][j];
k++;
}
}
}

// Print the Final sparse matrix


for(int i=0; i<3; i++)
{
for(int j=0; j<num_non_zero; j++)
{
printf("%d ", fmatrix[i][j]);
}
printf("\n");
}
getch();
}

Linked List Representation of Sparse Matrix


A linked list representation of a sparse matrix is a way of storing a matrix that has a
large number of zero values efficiently by using linked lists. Each node in the linked list
contains the value of the non-zero element, its column index, row index, and a pointer to
the next node in the list. This representation reduces the amount of memory required to
store the matrix and allows for efficient manipulation of the non-zero elements.

The benefit of using a linked list instead of an array to represent the sparse matrix is
that it is simpler to add or remove nodes from a linked list than from an array.

 Row - Row where the non-zero element is located.


 Column - Column where the non-zero element is located.
 Value - It is the value of the non-zero element that is located at the index (row, column).
 Next node - Address of the next node.
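As a sketch, the node described above can be written in C like this; the names SparseNode and buildSparseList are illustrative (not from this tutorial), and a fixed 3x3 matrix is assumed for brevity:

```c
#include <stdlib.h>

// One node per non-zero element: row, column, value, and a link to the next node.
struct SparseNode {
    int row;
    int col;
    int value;
    struct SparseNode *next;
};

// Build a linked list (in row-major order) from a 3x3 matrix, keeping only non-zeros.
struct SparseNode *buildSparseList(int m[3][3]) {
    struct SparseNode *head = NULL, *tail = NULL;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (m[i][j] != 0) {
                struct SparseNode *n = malloc(sizeof(struct SparseNode));
                n->row = i;
                n->col = j;
                n->value = m[i][j];
                n->next = NULL;
                if (head == NULL)
                    head = n;        // first non-zero element becomes the head
                else
                    tail->next = n;  // append after the current last node
                tail = n;
            }
        }
    }
    return head;
}
```

Appending through a tail pointer keeps the nodes in row-major order without re-traversing the list for every insertion.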
Linked List Data Structure
A linked list is a fundamental data structure used in computer science to store and
manipulate data. It is a dynamic data structure, which means that its size may change
during runtime. A linked list is a sequence of nodes; each node includes data and a
reference (or pointer) to the next node in the sequence.

The first node in the list is known as the head node, and the final node in the list is
known as the tail node. If a node's reference is null, it denotes the end of the list.

The head is the first node in a linked list. It is the starting point for the linked list and
contains a reference to the first element in the list. The data for the first element in the
list is stored in the head node, along with a reference to the next node in the list.

Advantages of Linked List


Here are some advantages of using linked lists as a data structure:

1. Dynamic size: Linked lists are dynamic data structures, which means that their
size can be changed at run time. This makes it easier to manage data sets when
the number of items changes over time.

2. Efficient Insertions and Deletions: Inserting or deleting an element in an array
requires moving all the items to the right or left of the insertion or deletion point.
For big arrays, this can be time-consuming. By merely changing the pointers in
the relevant nodes, linked lists can insert or remove an element in constant time
(O(1)).

3. No Need for Contiguous Memory: Unlike arrays, linked lists do not require
contiguous memory allocation; their nodes can be stored in non-contiguous
memory locations.

4. Memory efficiency: When the number of items is small or the size of the
elements varies, linked lists can be more memory efficient than arrays. This is
due to the fact that linked lists only consume the memory necessary for the data
and pointers to the next node, but arrays allocate a set amount of memory for
each element regardless of whether it is used or not.
5. Flexibility: Linked lists can be singly or doubly linked, and they can be used to
build a variety of data structures such as queues, stacks, and hash tables.

Overall, linked lists offer several advantages over other data structures, including
efficient insertions and deletions, dynamic size, and flexibility. However, there are also
some disadvantages, such as slower access times and the need for more memory due
to the additional pointers.

Disadvantages of Linked List


Here are some disadvantages of using linked lists as a data structure:

1. Slower Access Time: Linked lists do not support constant-time access to
individual elements, in contrast to arrays. Instead, you must start at the head
node and work your way down the list until you reach the element you need to
access. If the list is lengthy, this may take longer than accessing an entry in
an array.

2. Greater Memory Overhead: When compared to arrays, linked lists use more
memory. Linked lists must hold pointers to the following nodes in addition to the
data components. This may result in the linked list using more memory,
particularly if the list is lengthy.

3. Difficulty in Reversing: Reversing a linked list is a challenging task because
the pointer in every node must be changed, which can be error-prone and
time-consuming.

4. Difficulty in Accessing Random Elements: Linked lists do not support
constant-time random access; an element cannot be reached directly by its
index, so the list must be traversed from the head.

5. No built-in Support for Sorting: Linked lists do not have built-in functionality for
sorting. It can take longer to sort a linked list than an array using built-in sorting
algorithms. In Linked List we have to arrange all the pointers for sorting. It is very
difficult as compared to arrays.

Memory Representation of Linked List


In most computer languages, linked lists are commonly implemented using pointers. A
linked list can also be represented using arrays, however this is less common. We may
use two arrays to represent a linked list with arrays, one for storing the values of the
nodes and another for storing the indices of the next nodes.
Other way of memory representation of Linked List using arrays.

In the pointer-based representation, a linked list consists of several nodes, each
containing two parts: data and a pointer to the next node in the list. The first node is
called the head and the last node points to null. The data in each node can be of any
data type, and the pointer holds the memory address of the next node. The linked list
data structure allows for efficient insertion and deletion of nodes, but requires traversal
of the list to access a specific node.

Types of Linked List


A linear data structure called a linked list consists of nodes that are linked together via
pointers. Two fields are present in each node: a data field and a pointer (Link) field
pointing to the next node in the list. Linked Lists come in a variety of types, which are
listed below:
Singly Linked List
Each element (or node) in a singly linked list stores a value and a pointer to the next
element (or node) in the list. The first node is known as the head node, while the last
node is known as the tail node.

The structure of a singly linked list is as follows:
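A minimal sketch of this structure in C (the names Node and buildSample are illustrative, not from this tutorial):

```c
#include <stdlib.h>

// A singly linked list node: a value plus a pointer to the next node.
struct Node {
    int data;
    struct Node *next;
};

// Build the list 1 -> 2 -> NULL by hand and return its head.
struct Node *buildSample(void) {
    struct Node *second = malloc(sizeof(struct Node));
    second->data = 2;
    second->next = NULL;             // tail node: next is NULL
    struct Node *head = malloc(sizeof(struct Node));
    head->data = 1;
    head->next = second;             // head node points to the tail
    return head;
}
```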

Doubly Linked List


In a Doubly Linked List, each node has two pointers: one pointing to the next node in
the list and the other to the previous node. This allows for easy traversal in both
directions.
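A doubly linked node can be sketched in C like this (the names DNode and linkPair are illustrative, not from this tutorial):

```c
#include <stdlib.h>

// Each node keeps a pointer to both of its neighbours.
struct DNode {
    int data;
    struct DNode *prev;
    struct DNode *next;
};

// Link two nodes a <-> b into a two-element list and return the head (a).
struct DNode *linkPair(struct DNode *a, struct DNode *b) {
    a->prev = NULL;   // a is the head, so it has no predecessor
    a->next = b;
    b->prev = a;
    b->next = NULL;   // b is the tail, so it has no successor
    return a;
}
```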

Circular Linked List


In a Circular Linked List, the last node in the list points to the first node, forming a circle.
This allows for continuous traversal of the list.
Doubly Circular Linked List
Each node in a doubly circular linked list contains two pointers, one pointing to the node
before it and the other pointing to the node after it, creating a circular link between the
nodes. In a double circular linked list, the last node points to the first node, making it
circular. Additionally, the first node's previous pointer also points to the last node,
making it doubly linked. This means that elements can be traversed both forward and
backward.

===============================================================

Linked List Operations


A data structure known as a linked list is made up of a series of elements, each of which
points to the element next in the list. The following are the various linked list operations:

 Insertion: To add an element to a linked list, create a new node with the specified value,
then place it at the proper location on the list.
 Deletion : Remove the node from the list that has the specified value to delete the
element from the linked list.
 Traversal : Traversing a linked list requires iterating over each element in the list and
performing some operation on each element.
 Searching : A linked list can be searched by locating the node that holds the specified
value, returning that value if it exists, or returning null if it does not.
 Concatenation : Concatenation is the process of joining the two linked lists to form a
single, longer linked list.
 Splitting : Splitting involves separating a linked list into two separate lists at a specific
position.
 Reversal : Reversing a linked list means changing the order of its elements so that
the last element becomes the first and the first becomes the last.
 Sorting : Sorting a linked list means placing the elements in the list in some given order,
such as ascending or descending order.
 Merging : Merging two sorted linked lists means combining the two lists into a single,
sorted linked list.
 Counting : Counting the number of elements in a linked list involves iterating over each
element in the list and incrementing a counter variable for each element found.

Insertion in Linked List


Here are three cases of new node insertion in a linked list:

1. Insertion at the beginning of the linked list: In this case, the new node is
inserted at the beginning of the linked list, which becomes the new head node.
This can be done by setting the next pointer of the new node to point to the
current head node and updating the head pointer to point to the new node.

2. Insertion at the end of the linked list: In this case, the new node is inserted at
the end of the linked list, which becomes the new tail node. This can be done by
traversing the linked list till the last node and setting the next pointer of the last
node to point to the new node.

3. Insertion at a specific position in the linked list: In this case, the new node is
inserted at a specific position in the linked list, which involves traversing the
linked list till the desired position and inserting the new node there. This requires
keeping track of the previous node and updating its next pointer to point to the
new node, while setting the next pointer of the new node to point to the next
node.

Insertion at the beginning of the linked list


To insert a new element at the beginning of a linked list, we need to create a new node,
set its value to the new element, and then update the head of the linked list to point to
the new node.

Algorithm to Insert New Node at Beginning in Linked List


Here ITEM is the value we want to insert in the node.

 Step 1: IF NEWNODE= NULL

Write OVERFLOW
Go to Step 5
[END OF IF]

 Step 2: SET NEWNODE → DATA = ITEM


 Step 3: SET NEWNODE → NEXT = HEAD
 Step 4: SET HEAD = NEWNODE
 Step 5: EXIT
Here is a program in C to insert a single node in the Linked List at the beginning

#include<stdio.h>
#include<stdlib.h>
struct node
{
int data;
struct node *next;
};
int main ()
{
struct node *head = NULL, *newnode;
int item;
newnode = (struct node *) malloc(sizeof(struct node));
if(newnode == NULL)
{
printf("OVERFLOW");
}
else
{
printf("Enter New Node value");
scanf("%d",&item);
newnode->data = item;
newnode->next = head;
head = newnode;
printf("New Node inserted");
}

// Print the linked list


struct node *currentNode=head;
while (currentNode != NULL)
{
printf("%d ", currentNode->data);
currentNode = currentNode->next;
}

return 0;
}
Insertion at the end of the linked list
Inserting a new node at the end of a linked list involves creating a new node with the
given data and setting the next pointer of the current last node to point to the new node.

Algorithm to Insert New Node at last in the Linked List

 Step 1: IF NEWNODE = NULL Write OVERFLOW


Go to Step 12
[END OF IF]
 Step 2: IF HEAD==NULL Process step 3 to 5 ( Case 1 New Node is the First &
Last Node)
 Step 3: Set NEWNODE- > DATA = VAL
 Step 4: Set NEWNODE- > NEXT = NULL
 Step 5: Set HEAD=NEWNODE
 Step 6: Else
 step 7: Set NEWNODE- > DATA = VAL
 Step 8: Set TEMP=HEAD ( Case 2 Traverse all node and insert New Node at Last)
 Step 9: Repeat Step 10 while TEMP - > NEXT != NULL
 Step 10: SET TEMP = TEMP - > NEXT
 [END OF LOOP]
 Step 11: SET TEMP - > NEXT = NEWNODE
 Step 12: EXIT

#include <stdio.h>
#include <stdlib.h>

// Define a node structure


struct Node {
int data;
struct Node *next;
};

int main()
{
// Initialize an empty linked list
struct Node *head = NULL,*temp;
int n;
printf("Enter Number of Nodes: ");
scanf("%d",&n);
int arr[n];
for(int i=0;i<n;i++)
{
printf("Enter the Node %d Value: ",i+1);
scanf("%d",&arr[i]);
}

// Insert the New Node at last in the Linked List


for (int i = 0; i < n; i++)
{
struct Node *newNode = (struct Node*) malloc(sizeof(struct Node));

if(newNode == NULL)
{
printf("OVERFLOW");
}
else
{

newNode->data = arr[i];

// Case 1: list is empty, so the new node is the first (and last) node


if(head == NULL)
{
newNode -> next = NULL;
head = newNode;

}
else
{
// Traverse all nodes and find the Last Node
temp = head;
while (temp -> next != NULL)
{
temp = temp -> next;
}
temp->next = newNode;
newNode->next = NULL;

}
}
}

// Print the linked list


struct Node *currentNode = head;
while (currentNode != NULL)
{
printf("%d->", currentNode->data);
currentNode = currentNode->next;

if(currentNode == NULL)
{
printf("NULL");
}
}

return 0;
}

Insertion at a specific position in the linked list


To insert a node at a specific position in a linked list, we need to perform the following
steps:

1. Traverse the linked list until we reach the node before the desired position.
2. Create a new node with the value to be inserted.
3. Set the next reference of the new node to the node that is currently at the desired
position.
4. Set the next reference of the node before the desired position to the new node.

Program in C to insert new node at given position in the Linked List

#include <stdio.h>
#include <stdlib.h>

// Define a struct for a linked list node


struct Node {
int data;
struct Node* next;
};

// Main function
int main() {
// Create a linked list with three nodes
struct Node* head = NULL;
struct Node* node1 = (struct Node*) malloc(sizeof(struct Node));
struct Node* node2 = (struct Node*) malloc(sizeof(struct Node));
struct Node* node3 = (struct Node*) malloc(sizeof(struct Node));
node1->data = 1;
node1->next = node2;
node2->data = 2;
node2->next = node3;
node3->data = 3;
node3->next = NULL;
head = node1;

// Print the original linked list


printf("Original linked list: ");
struct Node* current = head;
while (current != NULL) {
printf("%d ", current->data);
current = current->next;
}

// Insert a new node with the data 4 at position 2


struct Node* newNode = (struct Node*) malloc(sizeof(struct Node));
newNode->data = 4;
int position = 2;
current = head;
for (int i = 0; i < position - 1; i++)
{
if (current == NULL) {
printf("Invalid position!");
return 0;
}
current = current->next;
}
newNode->next = current->next;
current->next = newNode;

// Print the updated linked list


printf("Updated linked list: ");
current = head;
while (current != NULL) {
printf("%d ", current->data);
current = current->next;
}

return 0;
}

=========================================================

Stack Data Structure


In this tutorial, you will learn about:

 What is Stack Data Structure?


 Memory Representation of Stack using Arrays and Linked List
 Working of Stack
 Basic Operations in Stack
 Algorithm and Program of PUSH operation in Stack
 Algorithm and Program of POP operation in Stack

What is Stack Data Structure?


A stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle,
where the element that is inserted last will be the first to be removed.

A stack has one end, typically called the top, and it can be visualized as a collection of
objects stacked one on top of another. The top of the stack is the element that was most
recently added, and the bottom of the stack is the element that was added first or the
earliest.

The top of the stack is represented using a pointer, called the top pointer, which points
to the last element inserted. When a new element is added to the stack, it is placed on
the top of the stack, and the top pointer is updated to point to the new element.

In a stack, elements are added and removed from the top of the stack using push and
pop operations, respectively. The top pointer points to the last element inserted, and
when an element is added to the stack, the top pointer is updated to point to the new
element. When an element is removed from the stack, the top pointer is moved to the
next element below the top, which becomes the new top of the stack.

Memory Representation of Stack


In computer science, a stack is an abstract data type that represents a collection of
elements with a Last-In-First-Out (LIFO) ordering. A stack can be implemented using an
array or a linked list. In both cases, memory is used to store the stack elements and
manage the stack operations.

Here's a brief overview of the memory representation of stacks:

1. Memory Representation of Stack using Arrays: In an array-based stack, the stack
elements are stored in a contiguous block of memory, typically an array. The stack
pointer points to the topmost element of the stack, and the stack operations modify the
stack pointer and the elements in the array. When the stack size is fixed, the array-
based stack can overflow or underflow, causing memory access errors.

Memory Representation of Stack using Arrays

2. Memory Representation of Stack using Linked List: In a linked list-based stack,
the stack elements are stored in individual nodes, each with a pointer to the next node.
The stack pointer points to the head node of the linked list, and the stack operations
modify the stack pointer and the links between the nodes. When the stack size is
dynamic, the linked list-based stack can allocate or deallocate memory as needed,
avoiding overflow or underflow errors.

Working of Stack
A stack is a data structure that stores elements with a Last-In-First-Out (LIFO) ordering.
It works by using a stack pointer to manage the insertion and removal of elements from
the top of the stack.

Pushing 10, 20, and 30 onto the stack will result in the stack [10, 20, 30] with the top
pointer pointing to the element 30. Performing pop operations on the stack will remove
elements from the top of the stack and decrement the top pointer accordingly,
eventually leaving an empty stack with a null top pointer.
Basic Operations in Stack
The basic operations of a stack are given below:

1. Push: The push operation adds an element to the top of the stack. For example,
if we have a stack containing [10, 20], and we perform the push operation with
the value 30, the resulting stack will be [10, 20, 30], with the top element being
30.

2. Pop: The pop operation removes the topmost element from the stack. For
example, if we have a stack containing [10, 20, 30], and we perform the pop
operation, the resulting stack will be [10, 20], with the top element being 20.

3. Peek: The peek operation allows you to view the topmost element of the stack
without removing it. For example, if we have a stack containing [10, 20, 30], and
we perform the peek operation, the top element will be returned, which is 30, but
the stack will remain unchanged.

4. isEmpty: The isEmpty operation checks whether the stack is empty or not. It
returns true if the stack is empty and false if it contains one or more elements.
For example, if we have an empty stack, and we perform the isEmpty operation,
it will return true.

5. Size: The size operation returns the number of elements in the stack. For
example, if we have a stack containing [10, 20, 30], and we perform the size
operation, it will return 3.

6. Search: The search operation allows you to search for a specific element in the
stack and returns its position (counting from the top of the stack). If the element
is not found in the stack, it returns -1. For example, if we have a stack containing
[10, 20, 30], and we perform the search operation with the value 20, it will return
2 (since 20 is the second element counting from the top of the stack).

7. Updation: The update operation allows you to modify the value of an existing
element in the stack. This operation involves first searching for the element in the
stack, and then replacing its value with a new value.
8. Clear: The clear operation removes all elements from the stack, leaving it empty.
For example, if we have a stack containing [10, 20, 30], and we perform the clear
operation, the resulting stack will be empty, and the top pointer will be null or -1.
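The peek, isEmpty, and size operations can be sketched on an array-based stack like the one used later in this tutorial; the function names are illustrative, not from the original:

```c
#define MAX_SIZE 100
int stack[MAX_SIZE];
int top = -1;   // -1 means the stack is empty

// isEmpty: the stack is empty exactly when top is -1.
int isEmpty(void) {
    return top == -1;
}

// size: number of elements currently on the stack.
int size(void) {
    return top + 1;
}

// peek: return the top element without removing it (caller must check isEmpty first).
int peek(void) {
    return stack[top];
}

// push: add an element on top, guarding against overflow.
void push(int x) {
    if (top < MAX_SIZE - 1)
        stack[++top] = x;
}
```

Note that peek, unlike pop, leaves top unchanged, so the stack contents are not modified.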

Algorithm of PUSH operation in Stack


The push operation is an operation in the stack data structure that adds a new element
to the top of the stack. When a new element is pushed onto the stack, it becomes the
new top element, and the size of the stack is increased by 1. Try to understand the
following points for PUSH operation.

1. Check if the stack is full (if the stack has a fixed size and is already at its
maximum capacity). If the stack is full, the push operation cannot be performed.

2. Increment the top pointer of the stack to move it to the next empty position (i.e.,
the position above the current top element).

3. Add the new element to the position pointed by the top pointer.

4. Update the size of the stack to reflect the addition of the new element.

PUSH Operation in Stack

Algorithm of PUSH Operation in Stack

PUSH(S,TOP,N,X)

Here S is Stack, TOP is Stack Pointer, N is the Maximum Size of the Stack, X is
new element to be inserted

Step 1: If TOP==N

Write "OVERFLOW"

Return

( End of IF Statement)

Step 2: TOP=TOP +1 (Increment the TOP)


Step 3: S[TOP] = X (Insert New Element)

Step 4 END

Program in C to perform PUSH Operation in Stack using Array.

#include <stdio.h>
#define MAX_SIZE 100
int stack[MAX_SIZE];
int top = -1;
void push(int);
int main()
{
push(10);
push(20);
push(30);
push(40);
push(50);
printf("Elements Available in the Stack");
for(int i=top; i>=0;i--)
{
printf("|%d|",stack[i]);
}
return 0;
}

void push(int element)


{
if (top >= MAX_SIZE - 1)
{
printf("Stack Overflow");
}
else
{
top=top+1;
stack[top] = element;
printf("%d pushed to stack", element);
}
}

Algorithm of POP Operation in Stack


The "POP" operation in a stack is used to remove the topmost element from the stack.
When an element is popped from the stack, the stack pointer is decremented, and the
element at the top of the stack is removed from the stack.

The POP operation in a stack is typically used in conjunction with the "PUSH" operation,
which adds an element to the top of the stack. Together, these operations allow you to
implement a Last-In-First-Out (LIFO) data structure.

In summary, the POP operation in a stack:

1. Check if the stack is empty. If the stack is empty, return an error , as popping from an
empty stack is not allowed.
2. Otherwise, retrieve the topmost element from the stack.
3. Decrement the stack pointer to point to the next element in the stack.
4. Remove the topmost element from the stack.
5. Return the removed element if necessary.
Algorithm of POP Operation in Stack

POP(S,TOP,N,X)

Here S is Stack, TOP is Stack Pointer, N is the Maximum Size of the Stack, X is
element to be deleted

Step 1: If TOP==-1 (Stack Empty)

Write "Underflow"

Return

( End of IF Statement)

Step 2: X= S[TOP] (Delete Top Element)

Step 3: TOP=TOP - 1 (Decrement the Top)

Step 4 END

Program in C to perform POP operation in Stack using Array

#include <stdio.h>
#define MAX_SIZE 100
int stack[MAX_SIZE];
int top = -1;
void push(int);
void pop(int);
int main()
{
push(10);
push(20);
push(30);
push(40);
push(50);
printf("Elements Available in the Stack");

for(int i=top; i>=0;i--)


{
printf("|%d|",stack[i]);
}

pop(50);
pop(40);
printf("Stack After Pop Operation");

for(int i=top; i>=0;i--)


{
printf("|%d|",stack[i]);
}

return 0;
}

void push(int element)


{
if (top >= MAX_SIZE - 1)
{
printf("Stack Overflow");
}
else
{
top=top+1;
stack[top] = element;
printf("%d pushed to stack", element);
}
}

void pop(int element)


{
if (top == - 1)
{
printf("Stack Underflow");
}
else
{
element= stack[top];
top=top-1;
printf("%d element deleted from Stack", element);
}
}

===============================================================

Practical Applications of Stack Data Structure
A stack is a collection of elements where only two operations are allowed - push (add
element) and pop (remove element). It follows the principle of Last-In-First-Out (LIFO),
where the last element added to the stack is the first element to be removed. Stacks are
widely used in computer science for their simplicity and efficiency in certain applications.
In this tutorial we will learn about the practical uses of stack in the real world.

Here are some common applications of stacks along with examples:

1. Parenthesis Matching: In programming languages, parentheses are used to
group expressions and indicate the order of evaluation. Parenthesis matching is
a common problem where we need to check if a given expression contains
matching opening and closing parentheses. This can be easily solved using a
stack. For example, X = a + (b * c) is a valid expression: it has the same number
of opening and closing brackets. If an expression does not have the same
number of opening and closing brackets, it is invalid.

2. Arithmetic Expression Evaluation: Stacks are often used in programming
languages to evaluate arithmetic expressions. Operators and operands are
added to the stack and the expression is evaluated according to the rules of
precedence. For example, consider the expression 3 * (5 + 2). In this case the
output will be 21.

3. Function Call/Return: Stacks are used by programming languages to keep
track of function calls and returns. When a function is called, its return address
and local variables are pushed onto the stack. When the function returns, these
values are popped off the stack. For example, consider a function that calls
another function. The calling function's return address and local variables are
pushed onto the stack, the called function is executed, and then the return
address and variables are popped off the stack.

4. Undo/Redo Operations: Stacks are commonly used in software applications to
keep track of undo/redo operations. Each time an action is performed, it is added
to the stack. When undo is requested, the most recent action is popped off the
stack and reversed. When redo is requested, the previously undone action is
popped off the redo stack and re-executed.

5. Browser History: Stacks are used in web browsers to keep track of visited web
pages. Each time a new page is visited, its URL is added to the stack. When the
back button is clicked, the most recent URL is popped off the stack and the
browser navigates to the previous page.

6. Compiler Design: Stacks are used in compiler design to check the validity of a
program's syntax. The stack is used to keep track of opening and closing
brackets, braces, and parentheses. For example, if a program contains an
opening brace {, it is pushed onto the stack. When a closing brace } is
encountered, the stack is popped to check if the opening brace matches the
closing brace.

7. Depth First Search: Stacks are used in graph algorithms such as Depth First
Search (DFS). The DFS algorithm visits all vertices in a graph by exploring as far
as possible along each branch before backtracking. The stack is used to keep
track of vertices that have been visited but not fully explored. Each time a vertex
is visited, it is pushed onto the stack. When all adjacent vertices have been
visited, the vertex is popped off the stack and the algorithm backtracks.

Parenthesis Matching or Balanced Expression


In programming languages, parentheses are used to group expressions and indicate
the order of evaluation. Parenthesis matching is a common problem where we need to
check if a given expression contains matching opening and closing parentheses. This
can be easily solved using a stack. For Example

 X = a + (b * c) // Balanced / Valid Expression: brackets open and close properly.
 X = a + ( b * c // Unbalanced / Invalid Expression: brackets do not open and close properly.

Try to understand this with the help of the examples given below:

Example 1: Given Expression is X = A + (B * C) – (D * 2)

When scanning of the expression is completed and the STACK is empty, the
expression is considered valid.

Example 2: Given Expression is X = A + (B * C) – D * 2)

In Step 3, if the stack is empty, then the right parenthesis does not have a
matching left parenthesis in the STACK. This shows that this expression is
invalid.

Algorithm for Matching Parenthesis using a stack is as follows:

Step 1: Scan the given expression from left to right and repeat Steps 2 and 3
for each parenthesis encountered.

Step 2: If element = "(" then (Left Parenthesis)

PUSH element on the STACK.

End of If Statement

Step 3: if element = ")" then (Right Parenthesis)

a) If the stack is empty, then the right parenthesis does not have a
matching left parenthesis; stop scanning, print an error, and go to Step 5

b) If the stack is not empty, then pop the STACK

Step 4: At the end of the Expression

If STACK is empty then

print " VALID EXPRESSION"

else

print "INVALID EXPRESSION"

End of If Statement

Step 5: EXIT
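The algorithm above can be sketched in C with an array-based character stack; the function name isBalanced is illustrative, not from this tutorial:

```c
// Return 1 if every ')' in expr has a matching '(', 0 otherwise.
int isBalanced(const char *expr) {
    char stack[100];
    int top = -1;
    for (int i = 0; expr[i] != '\0'; i++) {
        if (expr[i] == '(') {
            stack[++top] = '(';      // Step 2: push a left parenthesis
        } else if (expr[i] == ')') {
            if (top == -1)
                return 0;            // Step 3a: no matching left parenthesis
            top--;                   // Step 3b: pop the matching '('
        }
    }
    return top == -1;                // Step 4: empty stack means valid
}
```

For example, isBalanced("a + (b * c)") returns 1, while isBalanced("a + ( b * c") returns 0.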


Arithmetic Expression Evaluation


Arithmetic expression evaluation using a stack is a popular algorithm used to calculate
mathematical expressions. The basic idea behind this algorithm is to use a stack to
store the operands and operators of the expression and to evaluate the expression
based on the order of operations. First check the operators precedence as given in the
table below:

Operators                          Associativity    Precedence

^ (exponentiation)                 Right to left    Highest
* (multiplication), / (division)   Left to right    Next highest
+ (addition), - (subtraction)      Left to right    Lowest

Notations for Arithmetic Expression in Stack


Arithmetic expressions can be written in three different notations:

 Infix notation
 Prefix notation (also known as Polish notation)
 Postfix notation (also known as Reverse Polish notation)

1. Infix notation: In infix notation, the operator is placed between the operands. For
example, 2 + 3. In this notation, the order of operations is determined by the use of
parentheses, as well as by the precedence rules of the operators.

Example: (2 + 3) * 4 - 5 / 6

2. Prefix notation: In prefix notation, the operator is placed before the operands. For
example, + 2 3. In this notation, the order of operations is determined by the order of the
operands and the operator.

Example: - * + 2 3 4 / 5 6

3. Postfix notation: In postfix notation, the operator is placed after the operands. For
example, 2 3 +. In this notation, the order of operations is determined by the order of the
operands and the operator.

Example: 2 3 + 4 * 5 6 / -

Let's evaluate the expression "3 + 4 * 5" in all three notations:

1. Infix notation: 3 + 4 * 5 = 23

2. Prefix notation: + 3 * 4 5 = 23

3. Postfix notation: 3 4 5 * + = 23

As you can see, the order of the operands and operators differs in each notation, but
the result of the expression is the same.

In general, infix notation is the most common notation used in mathematics and is the
notation most commonly used by humans to write arithmetic expressions. However,
prefix and postfix notations are more commonly used in computer programming, as they
can be more easily evaluated using a stack-based algorithm.

Convert Infix Notation into Postfix (Reverse Polish Notation)
Infix notation is a way of writing mathematical expressions in which operators are
written between the operands. For example, the infix expression 3 + 4 * 5 is written with
the addition operator between the operands 3 and 4, and the multiplication operator
between the operands 4 and 5.

Postfix notation, also known as Reverse Polish Notation, is a way of writing
mathematical expressions in which operators are written after the operands. For
example, the postfix expression corresponding to the infix expression 3 + 4 * 5 is
3 4 5 * +.

Infix to Postfix conversion is the process of transforming an infix expression into an
equivalent postfix expression. This conversion is useful because it eliminates the need
for parentheses and operator precedence rules, making it easier to evaluate the
expression using a stack-based algorithm.

Algorithm to Convert Infix Notation into Postfix Notation


To convert an infix expression to postfix, we use a stack and the following algorithm:

Step 1: Push "(" onto STACK and add ")" to the end of E.

Step 2: Scan E from left to right and repeat step 3 to 6 for each element of E until
the STACK is empty

Step 3: If element = Operand then

Add the element to P

End of If Statement

Step 4: If element = "(" [left parenthesis] then

PUSH element on the STACK

End of If Statement

Step 5: If element = operator then

a) Repeatedly POP from STACK and add to P each operator that has the
same or higher precedence than the current operator.

b) PUSH operator on the STACK

End of If Statement

Step 6: if element = ")" [Right Parenthesis] then

a) Repeatedly POP from the STACK and add to P each element until a left
parenthesis "(" is encountered.

b) POP "(" From the STACK

End of If Statement

End of Step 2 Loop

Step 7: Exit

Example 1: Convert the Infix Notation (A-B*C) into Postfix Notation

Solution: By following the above algorithm we can convert the infix notation (A-B*C)
into postfix notation.

Element | Description                                                      | STACK | Postfix Expression (P)
(       | ( is pushed onto the stack                                       | (     |
A       | A is an operand; add it to the postfix expression                | (     | A
-       | no operator on the stack has same or higher precedence; push -   | (-    | A
B       | B is an operand; add it to the postfix expression                | (-    | AB
*       | * has higher precedence than -, so push it onto the stack        | (-*   | AB
C       | C is an operand; add it to the postfix expression                | (-*   | ABC
)       | ) is encountered; pop operators from the stack and add them to P until the opening parenthesis is reached | | ABC*-

Here the result is ABC*- (postfix notation).

Example 2: Convert the Infix Notation X+Y*Z^P-(X/Y+Z) into Postfix Notation

Solution: Add "(" and ")" to the expression: (X+Y*Z^P-(X/Y+Z))

Element | STACK | Postfix Expression (P) | Description
(       | (     |                        |
X       | (     | X                      |
+       | (+    | X                      |
Y       | (+    | XY                     |
*       | (+*   | XY                     |
Z       | (+*   | XYZ                    |
^       | (+*^  | XYZ                    |
P       | (+*^  | XYZP                   |
-       | (-    | XYZP^*+                | - has lower precedence than ^ and * and the same precedence as +, so pop all operators of higher or equal precedence, then PUSH -
(       | (-(   | XYZP^*+                |
X       | (-(   | XYZP^*+X               |
/       | (-(/  | XYZP^*+X               |
Y       | (-(/  | XYZP^*+XY              |
+       | (-(+  | XYZP^*+XY/             |
Z       | (-(+  | XYZP^*+XY/Z            |
)       | (-    | XYZP^*+XY/Z+           | pop all operators and add them to P until "(" is encountered
)       |       | XYZP^*+XY/Z+-          | pop the remaining operators until "(" is encountered

Method of Evaluating Postfix Expression


Here is a step-by-step procedure for evaluating a postfix expression using this
algorithm:

1. Create an empty stack.


2. Convert the postfix expression into a list of tokens.
3. For each element in the list:
o If the element is an operand, push it onto the stack.
o If the element is an operator, pop the top two operands from the stack,
apply the operator to them, and push the result back onto the stack.
4. After all tokens have been processed, the value left on the stack is the result of
the expression.

Example 1: Evaluate the postfix expression E = 8, 7, 20, *, +, 15, 3, /, -

Solution:

Element | Description                                            | Stack
8       | PUSH(8)                                                | 8
7       | PUSH(7)                                                | 8 7
20      | PUSH(20)                                               | 8 7 20
*       | POP(20), POP(7); multiply: 7 * 20 = 140; PUSH(140)     | 8 140
+       | POP(140), POP(8); add: 8 + 140 = 148; PUSH(148)        | 148
15      | PUSH(15)                                               | 148 15
3       | PUSH(3)                                                | 148 15 3
/       | POP(3), POP(15); divide: 15 / 3 = 5; PUSH(5)           | 148 5
-       | POP(5), POP(148); subtract: 148 - 5 = 143; PUSH(143)   | 143

Here the result of the expression = 143.

Convert Infix Notation into Prefix Notation (Polish Notation)
Prefix notation is a way of writing mathematical expressions in which the operator
comes before the operands. In other words, the operator is placed at the beginning of
the expression, followed by its operands.

For example, the prefix notation for the expression "5 + 6" would be "+ 5 6". In this
notation, the plus sign (+) comes before the operands 5 and 6.

Here's another example: the prefix notation for the expression "3 * (4 + 5)" would be "* 3
+ 4 5". In this case, the multiplication operator (*) comes before the operand 3, and the
addition operator (+) comes before the operands 4 and 5.

Prefix notation is also known as Polish notation, after the Polish logician Jan
Łukasiewicz who introduced it in the 1920s. It is often used in computer programming
and in some types of calculators.

Examples of infix to prefix conversion:

1. Infix expression: A + B * C Prefix expression: + A * B C

2. Infix expression: (A + B) / (C - D) Prefix expression: / + A B - C D


3. Infix expression: A * (B + C) / D Prefix expression: / * A + B C D

Algorithm to Convert Infix Notation into Prefix Notation


To convert an infix expression to prefix, we use a stack and the following algorithm:

Step 1: Reverse the given infix expression (E1)

Step 2: Scan E1 from left to right and repeat step 3 to 7 for each element of E1
until the STACK is empty.

Step 3: if element = Operand then

Add the element to P

End of If Statement

Step 4: if element = ")" then

PUSH element on the STACK

End of If Statement

Step 5: If element = Operator then

a) Repeatedly pop from STACK and add to P each operator that has
higher precedence than the current operator.

b) PUSH operator on the STACK

End of If Statement

Step 6: If element = "(" then

a) Repeatedly pop from STACK and add to P each element until a right
parenthesis ")" is encountered.

b) Remove ")" [Do not add ")" to P]

End of If Statement

Step 7: If there is no more element in E1, POP from STACK the remaining
elements and add them to P

End of Step 2 Loop

Step 8: Reverse the output expression P to get the desired result.

Step 9: Exit

Example 2: Convert the Infix Notation E = (A/B+C) - (D+E*F) into Prefix Notation

Solution: Reverse the above infix notation first: E1 = )F*E+D(-)C+B/A( . Now
convert the reversed expression into postfix, then reverse the final postfix
expression.

Element | STACK | Postfix Expression (P) | Description
)       | )     |                        |
F       | )     | F                      |
*       | )*    | F                      |
E       | )*    | FE                     |
+       | )+    | FE*                    | POP(*) (higher precedence than +) and add to P
D       | )+    | FE*D                   |
(       |       | FE*D+                  | pop all operators from the stack until ")" is encountered and remove the brackets
-       | -     | FE*D+                  |
)       | -)    | FE*D+                  |
C       | -)    | FE*D+C                 |
+       | -)+   | FE*D+C                 |
B       | -)+   | FE*D+CB                |
/       | -)+/  | FE*D+CB                |
A       | -)+/  | FE*D+CBA               |
(       | -     | FE*D+CBA/+             | pop all operators until ")" is encountered and remove it
(end)   |       | FE*D+CBA/+-            | expression completed; POP the remaining elements from STACK and add to P

Now reverse the postfix expression FE*D+CBA/+- . After reversal we get the
expression -+/ABC+D*EF. This is our final prefix notation result.

===============================================================

Introduction to Queues
A queue is a data structure in computer science that stores elements in a linear
sequence, similar to a stack. However, unlike a stack, elements in a queue are added to
the back and removed from the front. This ordering principle is often referred to as "first-
in, first-out" (FIFO).
Examples of Queue

FIFO stands for "first-in, first-out", which is a principle used in various areas of
computer science and everyday life. The principle is simple - the first item that enters a
queue is the first one to be removed from it. It means that items in a queue are
processed or served in the order that they arrive.

Memory Representation of Queue
Array and linked list representations of a queue:

1. Array representation of a queue: In this representation, a queue is implemented


using a fixed-size array. The front and rear of the queue are represented by two
pointers, front and rear respectively. When a new element is added to the queue, it is
inserted at the rear end, and the rear pointer is incremented. Similarly, when an element
is removed from the queue, it is removed from the front end, and the front pointer is
incremented.

The array representation of a queue is simple and efficient, but it has a fixed size and
may require resizing if the queue exceeds its capacity.
2. Linked list representation of a queue: In this representation, a queue is
implemented using a linked list. Each node in the linked list contains an element and a
pointer to the next node. The front and rear of the queue are represented by two
pointers, front and rear respectively, which point to the first and last nodes of the linked
list. When a new element is added to the queue, it is added to the rear end of the linked
list, and the rear pointer is updated to point to the new node.

Similarly, when an element is removed from the queue, it is removed from the front end
of the linked list, and the front pointer is updated to point to the next node. The linked list
representation of a queue is flexible and dynamic, but it requires more memory
allocation and deallocation than the array representation.

Overall, both array and linked list representations of a queue have their advantages and
disadvantages. The choice between them depends on the specific requirements of the
application, such as memory usage, speed, and flexibility.

Basic Operations on Queue


 Traversal : Traversal is the process of visiting and processing each element of a
data structure. In the context of queues, traversal means visiting and processing
each element in the queue.
 Insertion or Enqueue : To add a new element or node to the queue.
 Deletion or Dequeue : To delete an element or node from the queue.
 Searching : To search for an element(s) by value in the queue.
 Updating : To update a particular value in the queue.
 Sorting: To arrange the values in the queue in ascending or descending order.
 Merging: To merge two Queues into one.
Algorithm to Insert a New Element in a Queue
(ENQUEUE Operation)

Here QUEUE is the name of the queue, MAX is the maximum size of the queue, FRONT
and REAR are pointer variables, and ITEM is the element to be inserted.

Step 1: If REAR = MAX-1 then

Write "OVERFLOW" and Exit

End of If Statement

Step 2: IF FRONT = -1 (IS QUEUE EMPTY)

Set FRONT = 0

End of If Statement

Step 3: REAR = REAR +1 (Increment REAR)

Step 4: QUEUE[REAR] = ITEM

Step 5: END
Program in C to Perform Enqueue Operation using Arrays
#include <stdio.h>
#define MAX_SIZE 10

void enqueue(int);

int queue[MAX_SIZE];
int front = -1, rear = -1;

int main()
{
    enqueue(10);
    enqueue(20);
    enqueue(30);
    printf("Printing QUEUE Elements");
    for (int i = front; i <= rear; i++)
    {
        printf("|%d|", queue[i]);
    }
    return 0;
}

void enqueue(int value)
{
    if (rear == MAX_SIZE - 1)
    {
        printf("Queue overflow!");
        return;
    }
    if (front == -1)
    {
        front = 0;
    }
    rear = rear + 1;
    queue[rear] = value;
    printf("%d enqueued to queue.", value);
}

Algorithm to Delete an Element from a Queue (DEQUEUE Operation)

Here QUEUE is the name of the queue, MAX is the maximum size of the queue, FRONT
and REAR are pointer variables, and ITEM stores the deleted element. If FRONT = -1,
the queue is empty.

Step 1: If FRONT = -1 then

Write "UNDERFLOW" and Exit

End of If Statement

Step 2: ITEM = QUEUE[FRONT]

Step 3: IF FRONT = REAR (QUEUE has only one element)

Set FRONT = -1 and REAR = -1

ELSE

Step 4: FRONT = FRONT + 1 (Increment FRONT)

End of If Statement
Step 5: END

Program in C to Perform Dequeue Operation using Arrays
#include <stdio.h>
#define MAX_SIZE 10

void enqueue(int);
void dequeue();

int queue[MAX_SIZE];
int front = -1, rear = -1;

int main()
{
    enqueue(10);
    enqueue(20);
    enqueue(30);
    dequeue();
    dequeue();
    dequeue();
    dequeue();
    return 0;
}

void enqueue(int value)
{
    if (rear == MAX_SIZE - 1)
    {
        printf("Queue overflow!");
        return;
    }
    if (front == -1)
    {
        front = 0;
    }
    rear++;
    queue[rear] = value;
    printf("%d enqueued to queue.", value);
}

void dequeue()
{
    if (front == -1)
    {
        printf("Queue underflow!");
        return;
    }
    printf("%d dequeued from queue.", queue[front]);
    if (front == rear)   /* queue had only one element */
    {
        front = -1;
        rear = -1;
    }
    else
    {
        front++;
    }
}

Applications of Queue
Some of the applications of queues include:
1. Operating systems: Operating systems use queues to manage resources such
as CPU time, memory, and input/output devices. For example, when multiple
processes are competing for CPU time, the operating system uses a queue to
schedule each process in a fair and efficient manner.
2. Network routing: In computer networks, packets of data are sent from one
device to another. Queues are used to manage the flow of these packets,
ensuring that they are delivered in the correct order and without delay. For
example, when packets arrive at a router, they are placed in a queue and
processed in the order they were received.
3. Customer service: Many businesses use queues to manage customer service
requests. For example, when customers call a support hotline, their requests are
placed in a queue and processed in the order they were received. This ensures
that all customers are served fairly and efficiently.
4. Print spooling: When multiple users want to print a document, they are placed
in a queue and processed in the order they were received. This ensures that
each user's document is printed in turn and that no user monopolizes the printer.
5. Call centers: When customers call a call center, their calls are placed in a queue
and handled by an available agent. This ensures that all customers are served in
the order they called and that no calls are lost or ignored.
6. Task scheduling: In software development, queues are used to schedule tasks
such as database updates, report generation, and file processing. For example, a
web application might use a queue to handle requests for generating reports.
Each request is placed in the queue and processed in the order it was received.

=============================================================

Circular Queue
In this tutorial, you will learn the following:

 What is the need for a Circular Queue?


 What is Circular Queue?
 Basic Operations of Circular Queue
 Algorithm to insert new element in a Circular Queue
 Algorithm to delete element from a Circular Queue

What is the need for a Circular Queue?


To understand the need for a circular queue, first consider the limitation of a simple
queue. Suppose the size of the queue is four elements and initially the queue holds
three elements A, B and C. We delete A and B and then try to insert D and E. An
overflow occurs on trying to insert E even though the first two locations are empty,
because REAR has already reached the end of the array. This is the main limitation
of a simple queue.
What is Circular Queue?
A circular queue is a data structure that stores a queue of elements in a circular
manner: insertions happen at the rear and deletions at the front, with the indices
wrapping around to the beginning of the storage. It is also known as a ring buffer.
In a circular queue the elements are stored so that the first and last positions are
logically adjacent to each other.

The key advantage of a circular queue over a regular queue is that it can reuse the
empty space created by removing elements from the front of the queue.The circular
queue can be implemented using an array or a linked list data structure, with a front and
rear pointer to keep track of the start and end of the queue.
Basic Operations of Circular Queue
 Enqueue: Adding an element to the back of the circular queue. This operation is
also called "Insert" or "Push".
 Dequeue: Removing an element from the front of the circular queue. This
operation is also called "Delete" or "Pop".
 Front: Return the element at the front of the circular queue without removing it.
 Rear: Return the element at the back of the circular queue without removing it.
 isEmpty: Check if the circular queue is empty or not.
 isFull: Check if the circular queue is full or not.
 Size: Return the number of elements present in the circular queue.
 Clear: Empty the entire circular queue by removing all the elements.

Algorithm to Insert a New Element in a Circular Queue

Here QUEUE is the name of the queue, MAX is the maximum size of the queue, ITEM is
the element for insertion, and FRONT and REAR are pointer variables.

Step 1 : If (FRONT == 0 and REAR == MAX - 1) OR (FRONT == REAR + 1) then (Is Queue Full)

Write "Overflow" and Exit

End of If Statement
Step 2 : If FRONT == -1 then (Is Queue Empty)

Set FRONT and REAR = 0

End of If Statement

Step 3 : If REAR == MAX -1

Set REAR = 0

Else

Set REAR = REAR + 1

Step 4 : QUEUE[REAR] = ITEM

Step 5 : END

Program in C to Perform Insertion in a Circular Queue


#include <stdio.h>
#define MAX_SIZE 10

void enqueue(int);

int queue[MAX_SIZE];
int front = -1, rear = -1;

int main()
{
    enqueue(5);
    enqueue(10);
    enqueue(15);
    enqueue(20);
    enqueue(25);
    return 0;
}

void enqueue(int item)
{
    if ((front == 0 && rear == MAX_SIZE - 1) || (front == rear + 1))
    {
        printf("Queue is full. Cannot insert element.");
        return;
    }
    if (front == -1)
    {
        front = 0;
        rear = 0;
    }
    else if (rear == MAX_SIZE - 1)
    {
        rear = 0;
    }
    else
    {
        rear++;
    }
    queue[rear] = item;
    printf("Inserted element: %d", item);
}

Algorithm to Delete an Element from a Circular Queue


Here QUEUE is the name of the queue, MAX is the maximum size of the queue, ITEM is
the variable that stores the deleted element, and FRONT and REAR are pointer variables.

Step 1 : If FRONT == -1 then (Is QUEUE Empty)

Write "UNDERFLOW" and Exit

End of If Statement

Step 2 : ITEM = QUEUE[FRONT]

Step 3 : If FRONT == REAR then (Queue has only one element)

Set FRONT = -1 and REAR = -1

Step 4 : Else If FRONT == MAX - 1 then

Set FRONT = 0

Else

Set FRONT = FRONT + 1

End of If Statement

Step 5 : End

Program in C to Perform Deletion in Circular Queue


#include <stdio.h>
#define MAX_SIZE 10

void enqueue(int);
void dequeue();
void display();

int queue[MAX_SIZE];
int front = -1, rear = -1;

int main()
{
    enqueue(5);
    enqueue(10);
    enqueue(15);
    enqueue(20);
    enqueue(25);
    dequeue();
    dequeue();
    display();
    return 0;
}

void enqueue(int item)
{
    if ((front == 0 && rear == MAX_SIZE - 1) || (front == rear + 1))
    {
        printf("Queue is full. Cannot insert element.");
        return;
    }
    if (front == -1)
    {
        front = 0;
        rear = 0;
    }
    else if (rear == MAX_SIZE - 1)
    {
        rear = 0;
    }
    else
    {
        rear++;
    }
    queue[rear] = item;
    printf("Inserted element: %d", item);
}

void dequeue()
{
    if (front == -1)
    {
        printf("Queue is empty");
        return;
    }
    int data = queue[front];
    if (front == rear)              /* last element removed: reset the queue */
    {
        front = rear = -1;
    }
    else if (front == MAX_SIZE - 1) /* wrap FRONT around to 0 */
    {
        front = 0;
    }
    else
    {
        front++;
    }
    printf("%d removed from the queue.", data);
}

void display()
{
    if (front == -1)
    {
        printf("Queue is empty");
        return;
    }
    printf("Queue elements are: ");
    if (rear >= front)
    {
        for (int i = front; i <= rear; i++)
        {
            printf("%d ", queue[i]);
        }
    }
    else
    {
        for (int i = front; i < MAX_SIZE; i++)
        {
            printf("%d ", queue[i]);
        }
        for (int i = 0; i <= rear; i++)
        {
            printf("%d ", queue[i]);
        }
    }
}

======================================================

Basic Concepts of Tree in Data Structure


In this tutorial, you will learn the following:

 What is Tree Data Structure?


 Basic Terminology of Tree Data Structure
 Memory Representation of Tree using Array and Linked List

What is Tree Data Structure ?


A tree is a non-linear data structure consisting of nodes connected by edges. It is used
to represent hierarchical structures, such as file systems, computer directories, and
organization charts. Trees are a fundamental data structure used in computer science,
and they have many applications, such as in search algorithms, decision-making
processes, and data compression.
A tree is composed of nodes, which are connected by edges. The topmost node in a
tree is called the root node, which has no parent nodes. Each node in a tree can have
one or more child nodes, except for the leaf nodes, which have no children. Nodes that
share the same parent node are called siblings.
There are different types of trees, and they have different characteristics and properties.
Some of the common types of trees are binary trees, binary search trees, balanced
trees, AVL trees, and Red-Black trees.

Basic Terminology of Tree Data Structure


The terminology used in trees includes the following:

1. Root: The topmost node in a tree, which has no parent nodes.

2. Parent: A node that has one or more child nodes.

3. Child: A node that has a parent node.

4. Siblings: Nodes that have the same parent node.

5. Leaf: A node with no children.

6. Depth or Level: The depth (or level) of a node is the number of edges from the
tree's root node to that node.

7. Height: The height of the node is the length of the path from that node to the
deepest node in the tree.

8. Subtree: A tree that consists of a node and all its descendants.

The depth and height of a tree are two important concepts in the study of trees.

The depth of a node is the number of edges from the root node to that node. The
depth of the root node is 0.

The height of a node is the number of edges from that node to the furthest leaf node. In
other words, it is the length of the longest path from the node to a leaf node. The height
of a leaf node is 0.

Memory Representation of Tree Data Structure


A tree is a data structure that consists of nodes connected by edges. Each node in a
tree has a parent node and zero or more child nodes.
There are two common ways to represent a tree in computer memory: using an array or
using a linked list.

Memory Representation of Tree Using Array


In an array representation of a tree, the nodes are stored in an array. The root node is
stored at index 0, and the child nodes of each node are stored at indices that can be
calculated based on the index of the parent node. Specifically, if a node is stored
at index i, its left child is stored at index 2i+1 and its right child is stored at index 2i+2.

For example, consider the following tree:


In the array representation, the tree would be stored as:

The root node 'A' is stored at index 0, its left child 'B' is stored at index 1 (using 2i+1) ,
and its right child 'C' is stored at index 2 (using 2i+2). The left child of 'B', 'D', is stored
at index 3, and the right child of 'B', 'E', is stored at index 4. The left child of 'C', None,
indicates that it has no left child, and its right child 'F' is stored at index 6.

Memory Representation of Tree Using Linked List


In a linked list representation of a tree, each node is stored in a struct or class that
contains a value and two pointers to its left and right child nodes. The root node is a
pointer to the first node in the list.

For example, Consider the following linked list representation of Tree.


The linked list representation of a tree can be implemented in different ways. One
common way is to use a Node class or struct to represent each node in the tree. Each
Node object contains the value of the node and two pointers to its left and right child
nodes.

==============================================================

Binary Tree
In this tutorial, you will learn the following:

 What is Binary Tree in Data Structure?


 Types of Binary Tree
 Characteristics of Binary Tree
 Binary Tree Traversal
What is Binary Tree in Data Structure?
A binary tree is a type of data structure used in computer science for organizing and
storing data. It consists of nodes, where each node has a value and at most two
children, usually called the left child and the right child.

In the binary search tree variant, the left child's value is less than or equal to its
parent's value, while the right child's value is greater than or equal to its parent's
value. This ordering property makes binary search trees useful for searching and
sorting data efficiently.

To satisfy the definition of a binary tree, a tree must meet the following conditions:

1. Each node can have at most two children: A binary tree is a tree data structure in
which each node has at most two child nodes. This means that a node can have
either zero, one, or two children.

2. The left child is less than the parent, and the right child is greater than the parent:
In a binary search tree, the values of the nodes in the left subtree are less than
the value of the parent node, while the values of the nodes in the right subtree
are greater than the value of the parent node.

3. The tree is a connected graph: A binary tree is a connected graph, which means
that there is a path from any node to any other node in the tree.

4. The tree has a unique root node: A binary tree has a unique root node, which is
the topmost node in the tree.

5. Each child node has its own subtree: Each node in a binary tree has its own
subtree, which is a subtree that includes all the nodes that are descendants of
the node.

6. The tree is acyclic: A binary tree is acyclic, which means that there are no cycles
in the tree.

7. The tree may be empty: A binary tree may be empty, which means that it has no
nodes.
In the above examples, Tree-1, Tree-2 and Tree-3 are binary trees.

Not a Binary Tree

In the above example, the tree is not a binary tree because its root node has three
children.

Binary trees can be implemented in many different ways, including arrays and linked
lists. One common way to implement a binary tree is to use a linked list, where each
node in the tree points to its left and right child nodes.

Types of Binary Tree


There are several types of binary trees, some of the common types are:
1. Complete Binary Tree: A complete binary tree is a binary tree in which every
level, except possibly the last, is completely filled, and all nodes are as far left as
possible.

2. Full Binary Tree: A full binary tree is a binary tree in which every node has
either 0 or 2 children. In other words, every node in the tree has either two child
nodes or no child nodes.

3. Perfect Binary Tree: A perfect binary tree is a type of binary tree in which all the
leaf nodes are at the same level, and all the internal nodes have exactly two child
nodes. This means that each level of the tree is completely filled, and the total
number of nodes in the tree is a power of two.

Characteristics of Binary Tree


Binary trees have several important characteristics and properties. For example:

1. Height: The height of a binary tree is the number of edges from the root node to
the deepest leaf node.

2. Depth: The depth of a node is the number of edges from the root node to that
node.

3. Balanced vs. Unbalanced: A binary tree is said to be balanced if the heights of
its left and right subtrees differ by at most one. If the heights of the subtrees
differ by more than one, the tree is said to be unbalanced.

4. Traversals: Binary trees can be traversed in several different ways, including
inorder, preorder, and postorder. Inorder traversal visits the left subtree, then the
root, and then the right subtree. Preorder traversal visits the root, then the left
subtree, and then the right subtree. Postorder traversal visits the left subtree,
then the right subtree, and then the root.
Binary Tree Traversal
Binary tree traversal is the process of visiting each node in a binary tree exactly once in
a systematic order. There are three main ways to traverse a binary tree: inorder
traversal, preorder traversal, and postorder traversal.

1. Inorder traversal: In inorder traversal, we first visit the left subtree, then the root
node, and finally the right subtree. This traversal results in the nodes being
visited in ascending order.

2. Preorder traversal: In preorder traversal, we first visit the root node, then the left
subtree, and finally the right subtree.

3. Postorder traversal: In postorder traversal, we first visit the left subtree, then
the right subtree, and finally the root node.

Traversal is a fundamental operation in working with binary trees, as it allows us to


process all the nodes in the tree in a specific order. Binary tree traversal algorithms are
widely used in many applications, including search algorithms, sorting algorithms, and
tree-based data structures.

Inorder Traversal
Inorder traversal is a method of visiting every node in a binary tree in a specific order. In
inorder traversal, we first visit the left subtree, then the root node, and finally the right
subtree. This traversal results in the nodes being visited in ascending order.

 Left Subtree
 Root
 Right Subtree

Here's an example:
In inorder traversal, we visit the nodes in the order: 2, 4, 5, 7, 8, 9, 10.

In this example, we first visit the left subtree of node 7, which consists of the nodes 2, 4,
and 5. We then visit the root node 7, followed by the right subtree, which consists of the
nodes 8, 9, and 10.

Algorithm for Inorder Traversal using Recursion


Here R is the Pointer variable gives the address of root node.

INORDER(R)

Step 1: If R == NULL (Is Tree Empty)

Exit

Step 2 : If R-> LEFT != NULL then ( Traverse the Left Subtree)

Call INORDER(R-> LEFT)

Step 3 : Write R -> INFO

Step 4 : If R -> RIGHT != NULL then ( Traverse the Right Subtree)

Call INORDER(R -> RIGHT)


Step 5 : Exit

Preorder Traversal
Preorder traversal is a method of visiting every node in a binary tree in a specific order.
In preorder traversal, we first visit the root node, then the left subtree, and finally the
right subtree.

 Root Node
 left Subtree
 Right Subtree

Here's an example:

In preorder traversal, we visit the nodes in the order: 7, 4, 2, 5, 9, 8, 10.

In this example, we first visit the root node 7, followed by the left subtree of node 7,
which consists of the nodes 4, 2, and 5. We then visit the right subtree of node 7, which
consists of the nodes 9, 8, and 10.

Algorithm for Preorder Traversal using Recursion

PREORDER(R)

Step 1: If R == NULL (Is Tree Empty)


Exit

Step 2 : Write R -> INFO

Step 3 : If R-> LEFT != NULL then ( Traverse the Left Subtree)

Call PREORDER(R-> LEFT)

Step 4 : If R -> RIGHT != NULL then ( Traverse the Right Subtree)

Call PREORDER(R -> RIGHT)

Step 5 : Exit

Postorder Traversal
Postorder traversal is a method of visiting every node in a binary tree in a specific order.
In postorder traversal, we first visit the left subtree, then the right subtree, and finally the
root node.

 Left Subtree (Visit the Left Child First)


 Right Subtree
 Root Node

Here's an example:
In postorder traversal, we visit the nodes in the order: 2, 5, 4, 8, 10, 9, 7.

In this example, we first visit the left subtree of node 7, which yields the nodes 2, 5,
and 4. We then visit the right subtree of node 7, which yields the nodes 8, 10, and 9,
before finally visiting the root node 7.

Postorder traversal is often used to delete a binary tree, as it visits the children before
the parent nodes. It is also a fundamental operation in working with binary trees, as it
allows us to process all the nodes in the tree in a specific order.

Algorithm for Postorder Traversal using Recursion


Here R is a pointer variable that holds the address of the root node.

POSTORDER(R)

Step 1: If R == NULL (Is Tree Empty)

Exit

Step 2 : If R-> LEFT != NULL then ( Traverse the Left Subtree)

Call POSTORDER(R-> LEFT)

Step 3 : If R -> RIGHT != NULL then ( Traverse the Right Subtree)

Call POSTORDER(R -> RIGHT)

Step 4 : Write R -> INFO

Step 5 : Exit

Program in C to Perform Traversal of Tree in Inorder, Preorder and Postorder
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

// Function to create a new node
struct Node* newNode(int data) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = data;
    node->left = NULL;
    node->right = NULL;
    return node;
}

// Function to perform inorder traversal
void inorder(struct Node* node) {
    if (node != NULL) {
        inorder(node->left);
        printf("%d ", node->data);
        inorder(node->right);
    }
}

// Function to perform preorder traversal
void preorder(struct Node* node) {
    if (node != NULL) {
        printf("%d ", node->data);
        preorder(node->left);
        preorder(node->right);
    }
}

// Function to perform postorder traversal
void postorder(struct Node* node) {
    if (node != NULL) {
        postorder(node->left);
        postorder(node->right);
        printf("%d ", node->data);
    }
}

int main() {
    // Create a sample binary tree
    struct Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);

    printf("Inorder traversal: ");
    inorder(root);

    printf("\nPreorder traversal: ");
    preorder(root);

    printf("\nPostorder traversal: ");
    postorder(root);
    printf("\n");

    return 0;
}

================================================================

Binary Search Tree


In this tutorial, you will learn the following:

 What is Binary Search Tree?
 Properties of Binary Search Tree
 Advantages of Binary Search Tree
 Basic operations on a Binary Search Tree
 Searching in Binary Search Tree
 Insertion in Binary Search Tree
 Deletion in Binary Search Tree
What is Binary Search Tree?
A binary search tree (BST) is a data structure used for efficiently storing and searching
for values in sorted order. It is a type of binary tree where every node has at most two
children, the values stored in the left subtree of a node are smaller than the value
stored in the node, and the values stored in the right subtree of a node are greater than
the value stored in the node.

BSTs are commonly used in computer science for applications such as searching and
sorting algorithms, and are also used in databases for indexing and retrieving data.
Here's an example of a simple binary search tree:
In this example, the root node is 8. The left subtree of the root node contains nodes with
values less than 8, and the right subtree contains nodes with values greater than 8. The
left subtree has a depth of 2, while the right subtree has a depth of 3.

To perform a search in a BST, we start at the root node and compare the target value
with the value stored in the node. If the target value is less than the value stored in the
node, we move to the left child node and repeat the process. If the target value is
greater than the value stored in the node, we move to the right child node and repeat
the process. We continue this process until we find the target value or reach a null
node, indicating that the target value is not in the tree.

For example, if we wanted to search for the value 7 in the tree above, we would start at
the root node 8. Since 7 is less than 8, we move to the left child node 3. Since 7 is
greater than 3, we move to the right child node 6. Since 7 is greater than 6, we move to
the right child node 7, which is the target value.

Properties of Binary Search Tree


Here are some of the important properties of BSTs:

1. Unique values: A BST can only have unique values in each node. This is
because if there were two nodes with the same value, it would violate the binary
search property.

2. Sorted order: The values in a BST are stored in a sorted order, making it easy
to search for specific values. In-order traversal of the tree will visit the nodes in
ascending order.

3. Balanced tree: A balanced tree is one in which the height of the left and right
subtrees of every node differs by at most one. A balanced BST ensures that
search, insertion, and deletion operations are performed in O(log n) time
complexity on average.

4. Maximum and minimum values: The maximum value in a BST is the rightmost
node, and the minimum value is the leftmost node.

5. Recursive structure: A BST is a recursive data structure, in which each subtree
is itself a BST.

Advantages of Binary Search Tree


Binary search trees (BSTs) offer several advantages over other data structures. Here
are some of the main advantages of BSTs:

1. Efficient searching: BSTs allow for efficient searching of data. Due to the sorted
order of the tree, search operations can be performed in O(log n) time complexity
on average. This makes BSTs an excellent choice for applications that require
frequent searching, such as database indexing and retrieval.

2. Efficient insertion and deletion: BSTs also allow for efficient insertion and
deletion of data. Adding or removing a node only touches a single root-to-leaf
path, so these operations run in O(log n) time on average in a reasonably
balanced tree. Self-balancing variants, such as AVL and red-black trees,
rebalance automatically after each insertion or deletion to guarantee this bound.
3. Memory efficiency: BSTs can be implemented using pointers or references,
so nodes are allocated on demand and the structure can grow and shrink
without reallocating or shifting existing data. Each node does carry pointer
overhead compared to a plain array, but no space is wasted on unused
pre-allocated capacity.

4. Sorted order: The sorted order of a BST makes it easy to perform operations
that require the data to be sorted, such as finding the maximum or minimum
value, or performing in-order traversal.

5. Flexibility: BSTs are highly flexible and can be adapted to suit a wide range of
applications. For example, a BST can be used to implement a priority queue, a
symbol table, or a balanced binary tree.

In summary, BSTs offer several advantages over other data structures, including
efficient searching, efficient insertion and deletion, memory efficiency, sorted order, and
flexibility. These properties make BSTs a popular choice for a wide range of
applications in computer science and beyond.

Basic Operations on Binary Search Tree


Binary search trees (BSTs) support several fundamental operations that allow for
efficient searching, insertion, deletion, and traversal of data. Here are some of the main
operations of BSTs:

1. Search: Searching for a value in a BST involves comparing the value with the
value at the root node. If the value is less than the root, the search continues in
the left subtree. If the value is greater than the root, the search continues in the
right subtree. The search continues recursively until the value is found or it is
determined that the value does not exist in the tree. The time complexity of a
search operation in a balanced BST is O(log n), where n is the number of nodes
in the tree.

2. Insertion: Inserting a value into a BST involves finding the correct position for
the value in the tree. The value is inserted as a new leaf node in the correct
position according to the binary search property. The time complexity of an
insertion operation in a balanced BST is also O(log n).

3. Deletion: Deleting a value from a BST involves first searching for the value in the
tree. If the value is found, there are three cases to consider:

a. If the node has no children, it is simply removed from the tree.

b. If the node has one child, the child takes the place of the deleted node.

c. If the node has two children, find the inorder successor of the node to be
deleted (the smallest value in its right subtree), copy the successor's value into
the node, and then delete the successor from the right subtree.

4. Traversal: Traversing a BST involves visiting all the nodes in the tree in a
specific order. There are three main methods of traversing a BST:

a. In-order traversal: In-order traversal visits the left subtree, then the root node,
then the right subtree. In a BST, this results in the nodes being visited in
ascending order of value.
b. Pre-order traversal: Pre-order traversal visits the root node, then the left
subtree, then the right subtree.

c. Post-order traversal: Post-order traversal visits the left subtree, then the right
subtree, then the root node.

Searching in Binary Search Tree


Searching for a value in a BST involves comparing the value with the value at the root
node. If the value is less than the root, the search continues in the left subtree. If the
value is greater than the root, the search continues in the right subtree. The search
continues recursively until the value is found or it is determined that the value does not
exist in the tree. The time complexity of a search operation in a balanced BST is O(log
n), where n is the number of nodes in the tree.

Algorithm to Search for a Node with Given Value in a Binary Search Tree
Step 1: Start at the root node of the BST.

Step 2: Compare the value of the root node with the value being searched for.

Step 3: If the value being searched for is equal to the value of the root node,
return the root node as the node being searched for.

Step 4 : If the value being searched for is less than the value of the root node,
search the left subtree of the root node.

Step 5 : If the value being searched for is greater than the value of the root node,
search the right subtree of the root node.

Step 6 : Repeat steps 2-5 recursively until the value being searched for is found
or it is determined that the value does not exist in the BST.

Step 7 : If the value being searched for is not found in the BST, return null to
indicate that the node does not exist in the tree.

For Example, Consider the Binary Search Tree given below and Search for 5 in the Tree.

Insertion of New Node in Binary Search Tree
Inserting a value into a BST involves finding the correct position for the value in the tree.
The value is inserted as a new leaf node in the correct position according to the binary
search property. The time complexity of an insertion operation in a balanced BST is also
O(log n).

Algorithm to Insert New Node with Given Value into Binary Search Tree
Step 1 : Start at the root node of the BST.

Step 2: Compare the value of the new node with the value of the root node.

Step 3 : If the value of the new node is less than the value of the root node, go to
the left subtree of the root node.

Step 4 : If the value of the new node is greater than the value of the root node, go
to the right subtree of the root node.

Step 5 : Repeat steps 2-4 recursively until a leaf node is reached.

Step 6 : Insert the new node as a leaf node at the appropriate position in the BST.

For Example, Insert 11 as a New Node in the Binary Search Tree Given Below.

Deletion in Binary Search Tree
Deleting a value from a BST involves first searching for the value in the tree. If the value
is found, there are three cases to consider:

a. If the node has no children, it is simply removed from the tree.

b. If the node has one child, the child takes the place of the deleted node.

c. If the node has two children, find the inorder successor of the node to be deleted
(the smallest value in its right subtree), copy the successor's value into the node, and
then delete the successor from the right subtree.

Algorithm to Delete Node From Binary Search Tree


The following algorithm deletes a node with a given value from a binary search tree (BST):

Step 1 : Start at the root node of the BST.

Step 2 : Compare the value of the node to be deleted with the value of the root node.

Step 3 : If the value of the node to be deleted is less than the value of the root node, go
to the left subtree of the root node.

Step 4 : If the value of the node to be deleted is greater than the value of the root node,
go to the right subtree of the root node.

Step 5 : If the value of the node to be deleted is equal to the value of the root node,
perform the deletion operation.

Step 6 : If the node to be deleted is a leaf node or has only one child, perform the
deletion operation directly.

Step 7 : If the node to be deleted has two children, find the minimum value in the right
subtree to replace it with and perform the deletion operation.

Step 8 : In self-balancing variants (such as AVL trees), rebalance the tree if necessary
to maintain the height-balance property.

For Example, Consider the following cases

Case 1 : If the node has no children, it is simply removed from the tree.

Case 2 : If the node has one child, the child takes the place of the deleted node.

Case 3 : If the node has two children, find the inorder successor of the node to be
deleted, replace the deleted node with the inorder successor, and delete the successor
from its original position. Follow these steps to delete a node having two children:

 Visit the right subtree of the node being deleted.
 Take the smallest-value node in that subtree, called the in-order successor.
 Replace the node being deleted with its in-order successor.
Program in C to Perform Operations on Binary Search Tree
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};

struct Node* createNode(int value) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

struct Node* insert(struct Node* root, int value) {
    if (root == NULL) {
        return createNode(value);
    }
    if (value < root->data) {
        root->left = insert(root->left, value);
    } else if (value > root->data) {
        root->right = insert(root->right, value);
    }
    return root;
}

struct Node* search(struct Node* root, int value) {
    if (root == NULL || root->data == value) {
        return root;
    }
    if (value < root->data) {
        return search(root->left, value);
    }
    return search(root->right, value);
}

struct Node* delete(struct Node* root, int value) {
    if (root == NULL) {
        return root;
    }
    if (value < root->data) {
        root->left = delete(root->left, value);
    } else if (value > root->data) {
        root->right = delete(root->right, value);
    } else {
        // Node with no child or only one child
        if (root->left == NULL) {
            struct Node* temp = root->right;
            free(root);
            return temp;
        } else if (root->right == NULL) {
            struct Node* temp = root->left;
            free(root);
            return temp;
        }
        // Node with two children: find the inorder successor
        // (the smallest node in the right subtree)
        struct Node* temp = root->right;
        while (temp->left != NULL) {
            temp = temp->left;
        }
        root->data = temp->data;
        root->right = delete(root->right, temp->data);
    }
    return root;
}

void inorderTraversal(struct Node* root) {
    if (root == NULL) {
        return;
    }
    inorderTraversal(root->left);
    printf("%d ", root->data);
    inorderTraversal(root->right);
}

int main() {
    struct Node* root = NULL;
    root = insert(root, 50);
    root = insert(root, 30);
    root = insert(root, 20);
    root = insert(root, 40);
    root = insert(root, 70);
    root = insert(root, 60);
    root = insert(root, 80);

    printf("Inorder traversal of the BST: ");
    inorderTraversal(root);

    struct Node* searchResult = search(root, 60);
    if (searchResult != NULL) {
        printf("\nElement found: %d", searchResult->data);
    } else {
        printf("\nElement not found.");
    }

    root = delete(root, 20);
    root = delete(root, 30);
    root = delete(root, 50);

    printf("\nInorder traversal after deletion of 20, 30, and 50: ");
    inorderTraversal(root);
    printf("\n");

    return 0;
}
