Trees - Graphs
Trees
A tree represents nodes connected by edges. A tree is also one of the data structures used to represent hierarchical data.
Suppose we want to show employees and their positions in hierarchical form; this can be represented as shown below:
General Tree:
The general tree is one of the types of tree data structure. In a general tree, a node can have either 0 or at most n child nodes; there is no restriction imposed on the degree of a node (the number of children a node can have). The topmost node in a general tree is known as the root node, and the subtrees rooted at the children of a node are known as its subtrees.
There can be n number of subtrees in a general tree. In a general tree, the subtrees are unordered, as the nodes in the subtrees cannot be ordered.
Every non-empty tree has downward edges, and these edges connect to the nodes known as child nodes. The root node is labelled with level 0. Nodes that have the same parent are known as siblings.
Tree Terminologies:
In a tree data structure, if we have N nodes then we can have a maximum of N-1 links (edges).
Example
Terminology
In a tree data structure, we use the following terminology...
Root
In a tree data structure, the first node is called the Root node. Every tree must have a root node; we can say that the root node is the origin of the tree data structure. In any tree, there must be only one root node; a tree never has multiple root nodes.
Edge
In a tree data structure, the connecting link between any two nodes is called an Edge. In a tree with 'N' nodes there will be a maximum of 'N-1' edges.
Parent
In a tree data structure, the node which is a predecessor of any node is called the Parent node. In simple words, the node which has a branch to any other node is called a parent node. A parent node can also be defined as "the node which has a child / children".
Child
In a tree data structure, the node which is a descendant of any node is called a Child node. In simple words, a node which has a link from its parent node is called a child node. In a tree, any parent node can have any number of child nodes, and all the nodes except the root are child nodes.
Siblings
In a tree data structure, nodes which belong to the same parent are called Siblings. In simple words, the nodes with the same parent are called sibling nodes.
Leaf
In a tree data structure, a node which does not have a child is called a Leaf node. In simple words, a leaf is a node with no child. In a tree data structure, leaf nodes are also called External nodes or 'Terminal' nodes.
Internal Nodes
In a tree data structure, a node which has at least one child is called an Internal node. In other words, the nodes other than the leaf nodes are internal nodes. The root node is also said to be an internal node if the tree has more than one node. Internal nodes are also called 'Non-Terminal' nodes.
Degree
In a tree data structure, the total number of children of a node is called the Degree of that node. In simple words, the degree of a node is the total number of children it has. The highest degree of a node among all the nodes in a tree is called the 'Degree of the Tree'.
Level
In a tree data structure, the root node is said to be at Level 0, the children of the root node are at Level 1, the children of the nodes at Level 1 are at Level 2, and so on. In simple words, each step from top to bottom is called a Level; the level count starts with 0 and is incremented by one at each step.
Height
In a tree data structure, the total number of edges on the longest path from a leaf node to a particular node is called the Height of that node. The height of the root node is said to be the height of the tree. The height of every leaf node is 0.
Depth
In a tree data structure, the total number of edges from the root node to a particular node is called the Depth of that node. The total number of edges from the root node to a leaf node on the longest path is said to be the Depth of the tree; in simple words, the largest depth of any leaf node in a tree is the depth of that tree. The depth of the root node is 0.
Path
In a tree data structure, the sequence of nodes and edges from one node to another node is called the Path between those two nodes. The length of a path is the total number of nodes in that path; for example, the path A - B - E - J has length 4.
Sub Tree
In a tree data structure, each child of a node is the root of a subtree, and this applies recursively: every child node forms a subtree of its parent node.
Binary Tree:
A binary tree is a tree in which each node can have at most two children. The name 'binary' itself suggests 'two'; therefore, each node can have either 0, 1 or 2 children.
The above tree is a complete binary tree because all the levels are completely filled, and the nodes in the last level are added from the left first.
Heap:
A heap is a special case of a balanced binary tree data structure in which the key of the root node is compared with those of its children and arranged accordingly. If α has a child node β, then
key(α) ≥ key(β)
As the value of the parent is greater than that of the child, this property generates a Max Heap. Based on this criterion, a heap can be of two types −
For input → 35 33 42 10 14 19 27 44 26 31
Min-Heap − where the value of the root node is less than or equal to either of its children.
Max-Heap − where the value of the root node is greater than or equal to either of its children.
Sequential Representation:
Let us consider a tree T, and let T be a complete binary tree. Then there is an efficient way of representing T in memory, called the sequential (or array) representation of T. This representation uses only a linear array TREE, as follows:
1. The root N of T is stored in TREE[1].
2. If a node occupies TREE[k], then its left child is stored in TREE[2 * k] and its right child is stored in TREE[2 * k + 1].
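For instance, these index rules can be sketched in C as follows; this is only a sketch, and the array size and the stored values are hypothetical (index 0 is simply left unused so that the root sits at index 1):

#include <stdio.h>

#define MAXSIZE 16          /* assumed capacity; index 0 is left unused */

int main(void)
{
    /* TREE[1] is the root; TREE[2*k] and TREE[2*k+1] are the children of TREE[k].
       A value of 0 marks an empty slot (an assumption for this sketch). */
    int TREE[MAXSIZE] = {0};

    TREE[1] = 45;           /* root              */
    TREE[2] = 22;           /* left child of 1   */
    TREE[3] = 77;           /* right child of 1  */
    TREE[4] = 11;           /* left child of 2   */
    TREE[5] = 30;           /* right child of 2  */

    int k = 2;
    printf("Node %d: left child = %d, right child = %d\n",
           TREE[k], TREE[2 * k], TREE[2 * k + 1]);
    return 0;
}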
For example, consider the following binary tree:
Linked Representation:
In this representation of a binary tree, ROOT contains the location of the root R of T. If any one of the subtrees is empty, then the corresponding pointer contains the null value; if the tree T itself is empty, ROOT contains the null value.
Disadvantages of the linked representation:
I. It is difficult to understand.
II. Additional memory is needed for storing pointers.
III. Accessing a particular node is not easy.
Binary Search Tree:
1. A Binary Search Tree (BST) can be defined as a class of binary trees in which the nodes are arranged in a specific order. It is also called an ordered binary tree.
2. In a binary search tree, the value of every node in the left sub-tree is less than the value of the root.
3. Similarly, the value of every node in the right sub-tree is greater than or equal to the value of the root.
4. This rule is applied recursively to all the left and right sub-trees of the root.
A binary search tree is shown in the figure above. Because of the constraint applied on the BST, the left sub-tree of the root node 30 contains no value greater than or equal to 30, and its right sub-tree contains no value less than 30.
Create the binary search tree using the following data elements.
43, 10, 79, 90, 12, 54, 11, 9, 50
1. Insert 43 into the tree as the root of the tree.
2. Read the next element; if it is less than the root node element, insert it as the root of the left sub-tree.
3. Otherwise, insert it as the root of the right sub-tree.
The process of creating the BST using the given elements is shown in the image below.
Basic Operations:
Following are the basic operations of a tree −
Search − Searches an element in a tree.
Insert − Inserts an element in a tree.
Pre-order Traversal − Traverses a tree in a pre-order manner.
In-order Traversal − Traverses a tree in an in-order manner.
Post-order Traversal − Traverses a tree in a post-order manner.
Search Operation:
Whenever an element is to be searched, start searching from the root node. If the data is less than the key value, search for the element in the left subtree; otherwise, search for the element in the right subtree. Follow the same algorithm for each node.
Algorithm:
This function determines the position of the node to be added as per its value, and the new node is added into the binary tree. The function is explained in the steps below, and the code snippet lines are mapped to the explanation steps.
1. [Lines 13-19] Check first if the tree is empty; if so, insert the node as the root.
2. [Line 21] Check if the node value to be inserted is less than the root node value; if so,
a. [Line 22] call insert() recursively while there is a non-NULL left node;
b. [Lines 13-19] when the leftmost position reached is NULL, insert the new node there.
3. [Line 23] Check if the node value to be inserted is greater than the root node value; if so,
a. [Line 24] call insert() recursively while there is a non-NULL right node;
b. [Lines 13-19] when the rightmost position reached is NULL, insert the new node there.
Algorithm:
55 search(&((*tree)->right), val);
56 }
57 }
This search function checks whether a node with the given value already exists in the binary tree. If it is found, the matching node is returned; otherwise NULL (i.e. no node) is returned. The function is explained in the steps below, and the code snippet lines are mapped to the explanation steps.
1. [Lines 47-49] Check first if the tree is empty; if so, return NULL.
2. [Lines 50-51] Check if the node value to be searched is equal to the root node value; if so, return the node.
3. [Lines 52-53] Check if the node value to be searched is less than the root node value; if so, call search() recursively with the left node.
4. [Lines 54-55] Check if the node value to be searched is greater than the root node value; if so, call search() recursively with the right node.
5. Repeat steps 2, 3 and 4 for each recursive call of this search function until the node to be searched is found.
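Reconstructed from these steps, the complete search function is likely along the following lines. This is only a sketch (the original listing, of which only lines 55-57 survive above, may differ in small details); it uses the same node type as the working program later in this section.

node* search(node ** tree, int val)
{
    if(!(*tree))                          /* tree (or subtree) is empty  */
    {
        return NULL;
    }
    if(val == (*tree)->data)              /* value found at this node    */
    {
        return *tree;
    }
    else if(val < (*tree)->data)          /* search in the left subtree  */
    {
        return search(&((*tree)->left), val);
    }
    else                                  /* search in the right subtree */
    {
        return search(&((*tree)->right), val);
    }
}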
This function deletes all the nodes of the binary tree in the order left node, right node, root node. The function is explained in the steps below, and the code snippet lines are mapped to the explanation steps.
1. [Line 39] Check first if the root node is non-NULL; if so,
a. [Line 40] call deltree() recursively on the left node,
b. [Line 41] call deltree() recursively on the right node,
c. [Line 42] delete the node itself.
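A sketch of such a deltree() function, consistent with the steps above (it uses the node type from the working program below; free() comes from stdlib.h, which that program already includes):

void deltree(node * tree)
{
    if (tree)                      /* [Line 39] non-NULL check           */
    {
        deltree(tree->left);       /* [Line 40] delete the left subtree  */
        deltree(tree->right);      /* [Line 41] delete the right subtree */
        free(tree);                /* [Line 42] delete the node itself   */
    }
}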
Preorder traversal:
To traverse a binary tree in preorder, the following operations are carried out:
1. Visit the root.
2. Traverse the left sub tree of root.
3. Traverse the right sub tree of root.
Example:
Therefore, the preorder traversal of the above tree will be: 7,1,0,3,2,5,4,6,9,8,10
Inorder traversal:
To traverse a binary tree in inorder, the following operations are carried out:
1. Traverse the left sub tree of the root.
2. Visit the root.
3. Traverse the right sub tree of the root.
Example:
Postorder traversal:
To traverse a binary tree in postorder, the following operations are carried out:
1. Traverse the left sub tree of root.
2. Traverse the right sub tree of root.
3. Visit the root.
Note: Postorder traversal is also known as LRN traversal.
Algorithm:
Algorithm postorder(t)
{
    if (t != NULL)
    {
        postorder(t->left);
        postorder(t->right);
        visit(t);
    }
}
Therefore the postorder traversal of the above tree will be: 0,2,4,6,5,3,1,8,10,9,7
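For reference, the three traversals can be written in C as short recursive functions over the same node type used in the working program that follows (a sketch; the print format is an assumption):

void preorder(node * tree)               /* Root, Left, Right (NLR) */
{
    if (tree) {
        printf("%d ", tree->data);
        preorder(tree->left);
        preorder(tree->right);
    }
}

void inorder(node * tree)                /* Left, Root, Right (LNR) */
{
    if (tree) {
        inorder(tree->left);
        printf("%d ", tree->data);
        inorder(tree->right);
    }
}

void postorder(node * tree)              /* Left, Right, Root (LRN) */
{
    if (tree) {
        postorder(tree->left);
        postorder(tree->right);
        printf("%d ", tree->data);
    }
}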
Working Program:
#include<stdlib.h>
#include<stdio.h>
struct bin_tree {
int data;
struct bin_tree * right, * left;
};
typedef struct bin_tree node;
void insert(node ** tree, int val)
{
    node *temp = NULL;
    if(!(*tree))
    {
        temp = (node *)malloc(sizeof(node));
        temp->left = temp->right = NULL;
        temp->data = val;
        *tree = temp;
        return;
    }
    /* the rest of insert, reconstructed from the step description above:
       go left for smaller values, otherwise go right */
    if(val < (*tree)->data)
        insert(&((*tree)->left), val);
    else
        insert(&((*tree)->right), val);
}

/* preorder print: root, then left subtree, then right subtree */
void print_preorder(node * tree)
{
    if (tree)
    {
        printf("%d\n", tree->data);
        print_preorder(tree->left);
        print_preorder(tree->right);
    }
}

int main(void)
{
    node *root;
    root = NULL;
    /* Inserting nodes into tree */
    insert(&root, 9);
    insert(&root, 4);
    insert(&root, 15);
    insert(&root, 6);
    insert(&root, 12);
    insert(&root, 17);
    insert(&root, 2);
    /* Printing nodes of tree */
    printf("Pre Order Display\n");
    print_preorder(root);
    return 0;
}
Output:
AVL Tree:
The AVL tree was invented by G.M. Adelson-Velsky and E.M. Landis in 1962; the tree is named AVL in honour of its inventors.
An AVL tree can be defined as a height-balanced binary search tree in which each node is associated with a balance factor, calculated by subtracting the height of its right sub-tree from that of its left sub-tree.
struct AVLNode
{
    int data;
    struct AVLNode *left, *right;
    int balfactor;
};
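As a sketch, the balance factor of a node can be computed on demand from subtree heights. The height() helper below is an assumption for illustration; a real implementation would usually keep the stored balfactor (or a height field) updated during insertion instead of recomputing it each time:

#include <stddef.h>

/* height of a subtree: an empty subtree has height -1, a leaf has height 0 */
int height(struct AVLNode *t)
{
    if (t == NULL)
        return -1;
    int lh = height(t->left);
    int rh = height(t->right);
    return (lh > rh ? lh : rh) + 1;
}

/* balance factor = height(left subtree) - height(right subtree) */
int balanceFactor(struct AVLNode *t)
{
    if (t == NULL)
        return 0;
    return height(t->left) - height(t->right);
}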
Because an AVL tree is also a binary search tree, all the operations are performed in the same way as in a binary search tree. Searching and traversing do not lead to a violation of the AVL property. However, insertion and deletion can violate this property and therefore need to be revisited.
1. Insertion: Insertion in an AVL tree is performed in the same way as in a binary search tree. However, it may lead to a violation of the AVL property, and the tree may then need balancing. The tree can be balanced by applying rotations.
2. Deletion: Deletion is also performed in the same way as in a binary search tree. Deletion may also disturb the balance of the tree, so various types of rotations are used to rebalance it.
Step 1: First, insert the new element into the tree using the BST (Binary Search Tree) insertion logic.
Step 2: After inserting the element, check the Balance Factor of each node.
Step 3: If the Balance Factor of every node is 0, 1 or -1, the algorithm proceeds to the next operation.
Step 4: If the balance factor of any node is other than these three values, the tree is said to be imbalanced. Perform a suitable rotation to make it balanced, and then the algorithm proceeds to the next operation.
AVL Rotations
We perform a rotation in an AVL tree only when the Balance Factor of a node is other than -1, 0, and 1. There are basically four types of rotations, which are as follows,
where node A is the node whose balance factor is other than -1, 0, or 1.
1. RR Rotation:
When the BST becomes unbalanced because a node is inserted into the right subtree of the right subtree of A, we perform an RR rotation. RR rotation is an anticlockwise rotation, which is applied on the edge below a node having balance factor -2.
In the above example, node A has balance factor -2 because a node C is inserted in the right subtree of A's right subtree. We perform the RR rotation on the edge below A.
2.LL Rotation
When the BST becomes unbalanced because a node is inserted into the left subtree of the left subtree of C, we perform an LL rotation. LL rotation is a clockwise rotation, which is applied on the edge below a node having balance factor 2.
In the above example, node C has balance factor 2 because a node A is inserted in the left subtree of C's left subtree. We perform the LL rotation on the edge below C.
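The two single rotations can be sketched as pointer rearrangements on the AVLNode structure above. This is only a sketch: after a rotation, the balance factors (or heights) of the affected nodes still have to be updated.

/* RR case: rotate the subtree rooted at 'root' to the left and
   return the new subtree root */
struct AVLNode* rotateLeft(struct AVLNode *root)
{
    struct AVLNode *newRoot = root->right;
    root->right = newRoot->left;      /* adopt the middle subtree      */
    newRoot->left = root;             /* old root becomes left child   */
    return newRoot;
}

/* LL case: rotate the subtree rooted at 'root' to the right and
   return the new subtree root */
struct AVLNode* rotateRight(struct AVLNode *root)
{
    struct AVLNode *newRoot = root->left;
    root->left = newRoot->right;      /* adopt the middle subtree      */
    newRoot->right = root;            /* old root becomes right child  */
    return newRoot;
}

The double rotations described next simply combine these two helpers.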
3. LR Rotation
Double rotations are a bit tougher than the single rotations explained above. LR rotation = RR rotation + LL rotation, i.e., first an RR rotation is performed on a subtree and then an LL rotation is performed on the full tree; by full tree we mean the first node on the path from the inserted node whose balance factor is other than -1, 0, or 1.
A node B has been inserted into the right subtree of A, the left subtree of C, because of which C has become an unbalanced node having balance factor 2. This case is LR rotation, where the inserted node is in the right subtree of the left subtree of C.
4.RL Rotation:
As already discussed, double rotations are a bit tougher than the single rotations explained above. RL rotation = LL rotation + RR rotation, i.e., first an LL rotation is performed on a subtree and then an RR rotation is performed on the full tree; by full tree we mean the first node on the path from the inserted node whose balance factor is other than -1, 0, or 1.
A node B has been inserted into the left subtree of C, the right subtree of A, because of which A has become an unbalanced node having balance factor -2. This case is RL rotation, where the inserted node is in the left subtree of the right subtree of A.
1. Insert H, I, J
On inserting the above elements, especially in the case of H, the BST becomes unbalanced as the Balance Factor of H is -2. Since the BST is right-skewed, we will perform an RR Rotation on node H.
The resultant balanced tree is:
2. Insert B, A
On inserting the above elements, especially in the case of A, the BST becomes unbalanced as the Balance Factor of H and I is 2. We consider the first unbalanced node on the path from the last inserted node, i.e. H. Since the BST from H is left-skewed, we will perform an LL Rotation on node H.
3. Insert E
4. Insert C, F, D
On inserting C, F and D, the BST becomes unbalanced as the Balance Factor of B and H is -2. If we travel from D to B, we find that it is inserted in the right subtree of the left subtree of B, so we will perform an RL Rotation on node I. RL = LL + RR rotation.
5. Insert G
6. Insert K
On inserting K, the BST becomes unbalanced as the Balance Factor of I is -2. Since the BST is right-skewed from I to K, we will perform an RR Rotation on node I.
The resultant balanced tree after the RR rotation is:
7. Insert L
On inserting L, the tree remains balanced, as the Balance Factor of each node is now either -1, 0 or +1. Hence the tree is a balanced AVL tree.
Max Heap Construction:
We are going to derive an algorithm for max heap insertion by inserting one element at a time. At any point of time, the heap must maintain its property. During insertion, we also assume that we are inserting a node into an already heapified tree.
Note − In the Min Heap construction algorithm, we expect the value of the parent node to be less than that of the child node.
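A minimal sketch of this insertion (sift-up) step for an array-based max heap; 0-based indexing is used here to match the heap sort program later in this section, and the function name and capacity handling are illustrative assumptions:

/* insert 'val' into a max heap stored in a[0..(*n)-1];
   the parent of index i is (i - 1) / 2; capacity is assumed sufficient */
void maxHeapInsert(int a[], int *n, int val)
{
    int i = (*n)++;                     /* place the new element at the end */
    a[i] = val;

    /* sift the new element up while it is larger than its parent */
    while (i > 0 && a[(i - 1) / 2] < a[i])
    {
        int parent = (i - 1) / 2;
        int temp = a[parent];
        a[parent] = a[i];
        a[i] = temp;
        i = parent;
    }
}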
Let's understand Max Heap construction with an illustration. We consider the same input sample that we used earlier.
Heap Sort:
First, we have to construct a heap from the given array and convert it into a max heap.
After converting the given heap into max heap, the array elements are -
Next, we have to delete the root element (89) from the max heap. To delete this node, we
have to swap it with the last node, i.e. (11). After deleting the root element, we again have
to heapify it to convert it into max heap.
After swapping the array element 89 with 11, and converting the heap into max-heap, the
elements of array are -
In the next step, again, we have to delete the root element (81) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (54). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 81 with 54 and converting the heap into max-heap, the
elements of array are -
In the next step, we have to delete the root element (76) from the max heap again. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 76 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (54) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (14). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 54 with 14 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (22) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (11). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 22 with 11 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (14) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 14 with 9 and converting the heap into max-heap, the
elements of array are -
In the next step, again we have to delete the root element (11) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.
After swapping the array element 11 with 9, the elements of array are -
Now the heap has only one element left. After deleting it, the heap will be empty.
#include <stdio.h>

/* function to heapify a subtree. Here 'i' is the
   index of the root node in array a[], and 'n' is the size of the heap. */
void heapify(int a[], int n, int i)
{
    int largest = i;          // Initialize largest as root
    int left = 2 * i + 1;     // left child
    int right = 2 * i + 2;    // right child

    // If left child is larger than root
    if (left < n && a[left] > a[largest])
        largest = left;

    // If right child is larger than root
    if (right < n && a[right] > a[largest])
        largest = right;

    // If root is not largest
    if (largest != i) {
        // swap a[i] with a[largest]
        int temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;
        heapify(a, n, largest);
    }
}

/* Function to implement the heap sort */
void heapSort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);

    // One by one extract an element from the heap
    for (int i = n - 1; i >= 0; i--) {
        /* Move current root element to end */
        // swap a[0] with a[i]
        int temp = a[0];
        a[0] = a[i];
        a[i] = temp;
        heapify(a, i, 0);
    }
}

/* function to print the array elements */
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
    {
        printf("%d", arr[i]);
        printf(" ");
    }
}

int main()
{
    int a[] = {48, 10, 23, 43, 28, 26, 1};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    heapSort(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output:
Before sorting array elements are -
48 10 23 43 28 26 1
After sorting array elements are -
1 10 23 26 28 43 48
Trie:
Let us consider a trie which stores strings over the English alphabet; in this case we have a total of 26 possible characters. In the case of binary tries, we would have only 2 characters.
In the usual implementation, we have a blank root node which points to the starting characters of the set of strings, and these in turn point to the corresponding next characters.
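A sketch of what such a trie node and an insert operation might look like in C, assuming lowercase English letters only (the names newTrieNode, trieInsert and isEndOfWord are illustrative, not taken from the notes):

#include <stdlib.h>

#define ALPHABET 26

struct TrieNode {
    struct TrieNode *child[ALPHABET];   /* one pointer per letter 'a'..'z' */
    int isEndOfWord;                    /* 1 if a stored string ends here  */
};

struct TrieNode* newTrieNode(void)
{
    /* calloc zeroes child[] and isEndOfWord */
    return (struct TrieNode *)calloc(1, sizeof(struct TrieNode));
}

void trieInsert(struct TrieNode *root, const char *key)
{
    struct TrieNode *cur = root;
    for (int i = 0; key[i] != '\0'; i++)
    {
        int idx = key[i] - 'a';         /* assumes lowercase input */
        if (cur->child[idx] == NULL)
            cur->child[idx] = newTrieNode();
        cur = cur->child[idx];
    }
    cur->isEndOfWord = 1;
}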
Advantages of Tries:
Faster search
Less Space
Longest Prefix Matching
B Tree:
A B Tree is a specialized m-way tree that is widely used for disk access. A B-Tree of order m can have at most m-1 keys and m children. One of the main reasons for using a B tree is its capability to store a large number of keys in a single node and large key values while keeping the height of the tree relatively small.
A B tree of order m has all the properties of an m-way tree. In addition, it has the following properties:
1. Every node in a B-Tree contains at most m children.
2. Every node, except the root node and the leaf nodes, contains at least m/2 children.
3. The root node must have at least 2 children.
4. All leaf nodes must be at the same level.
Operations
Searching:
Searching in a B Tree is similar to searching in a binary search tree. For example, suppose we search for item 49 in the following B Tree. The process will be something like the following:
1. Compare item 49 with the root node 78. Since 49 < 78, move to its left sub-tree.
2. Since 40 < 49 < 56, traverse the right sub-tree of 40.
3. Since 49 > 45, move to the right and compare; a match for 49 is found.
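A sketch of this search in C, assuming a node layout with a key count, a sorted key array and child pointers (the order M and all field names below are assumptions for illustration):

#include <stddef.h>

#define M 5                               /* assumed order of the B tree */

struct BTreeNode {
    int n;                                /* number of keys currently stored */
    int keys[M - 1];                      /* keys in increasing order        */
    struct BTreeNode *child[M];           /* child pointers (NULL in a leaf) */
    int leaf;                             /* 1 if this node is a leaf        */
};

/* returns the node containing 'key', or NULL if the key is absent */
struct BTreeNode* btreeSearch(struct BTreeNode *node, int key)
{
    if (node == NULL)
        return NULL;

    int i = 0;
    while (i < node->n && key > node->keys[i])   /* find the first key >= key */
        i++;

    if (i < node->n && node->keys[i] == key)     /* found in this node */
        return node;

    if (node->leaf)                              /* nowhere left to descend */
        return NULL;

    return btreeSearch(node->child[i], key);     /* descend into child i */
}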
Inserting:
Insertions are done at the leaf node level. The following algorithm needs to be followed in order to insert an item into a B Tree:
1. Traverse the B Tree to find the appropriate leaf node at which the new element can be inserted.
2. If the leaf node contains fewer than m-1 keys, insert the element in increasing order of the keys.
3. Else, if the leaf node already contains m-1 keys, then follow these steps:
o Insert the new element in the increasing order of elements.
o Split the node into two nodes at the median.
o Push the median element up to its parent node.
o If the parent node also contains m-1 keys, split it too by following the same steps.
Example:
Insert the node 8 into the B Tree of order 5 shown in the following image.
The node now contains 5 keys, which is greater than (5 - 1 = 4) keys. Therefore, split the node at the median, i.e. 8, and push it up to its parent node, as shown below.
Deletion
Deletion is also performed at the leaf nodes. The node to be deleted can either be a leaf node or an internal node. The following algorithm needs to be followed in order to delete a node from a B tree.
Example 1
Delete the node 53 from the B Tree of order 5 shown in the following figure.
Now 57 is the only element left in the node, while the minimum number of elements that must be present in a B tree of order 5 is 2. Since the node has fewer elements than that, and the elements in its left and right sub-trees are also not sufficient, we merge it with the left sibling and the intervening element of the parent, i.e. 49.
Application of B tree:
A B tree is used to index data and provides fast access to the actual data stored on disk, since accessing a value stored in a large database that resides on disk is a very time-consuming process.
Searching an un-indexed and unsorted database containing n key values requires O(n) running time in the worst case. However, if we use a B Tree to index this database, it can be searched in O(log n) time in the worst case.
Graphs
A graph can be defined as a group of vertices and edges that are used to connect these vertices. Unlike a tree, a graph may contain cycles, and the vertices (nodes) can maintain any complex relationship among them instead of having a parent-child relationship.
A graph is a pictorial representation of a set of objects where some pairs of objects are connected by links. The interconnected objects are represented by points termed vertices, and the links that connect the vertices are called edges.
Formally, a graph is a pair of sets (V, E), where V is the set of vertices and E is the set of
edges, connecting the pairs of vertices. Take a look at the following graph −
Types of Graphs:
Null Graph, Trivial Graph, Non-directed Graph, Directed Graph, Connected Graph,
Disconnected Graph, Regular Graph, Complete Graph, Cycle Graph, Cyclic Graph, Acyclic
Graph, Finite Graph, Infinite Graph, Bipartite Graph, Planar Graph, Simple Graph, Multi
Graph, Pseudo Graph, Euler Graph & Hamiltonian Graph
Null Graph:
A graph whose edge set is empty is called a null graph. In other words, a null graph does not contain any edges.
Example:
Here, this graph consists only of vertices and there are no edges in it. Since the edge set is empty, it is a null graph.
Connected Graph:
A graph in which we can reach any vertex from any other vertex is called a connected graph. In a connected graph, at least one path exists between every pair of vertices.
Example:
Here, in this graph we can reach any vertex from any other vertex, and there exists at least one path between every pair of vertices. Therefore, it is a connected graph.
Disconnected Graph:
A graph in which there does not exist any path between at least one pair of vertices is called a disconnected graph.
Example:
Here, this graph consists of two independent components which are disconnected from each other. It is not possible to travel from a vertex of one component to a vertex of the other component. Therefore, it is a disconnected graph.
Complete Graph:
A graph in which exactly one edge is present between every pair of vertices is called a complete graph.
A complete graph of 'n' vertices contains exactly nC2 = n(n-1)/2 edges.
A complete graph of 'n' vertices is represented as Kn.
Examples:
In these graphs,
Each vertex is connected with all the remaining vertices through exactly one edge.
Therefore, they are complete graphs.
Cycle Graph:
A simple graph with 'n' vertices (n >= 3) and n edges forming a cycle of length 'n' is called a cycle graph.
In a cycle graph, all the vertices are of degree 2.
Examples
In these graphs,
Each vertex has degree 2.
Therefore, they are cycle graphs.
Cyclic Graph:
A graph containing at least one cycle is called a cyclic graph.
Example:
Here,
This graph contains two cycles in it.
Therefore, it is a cyclic graph.
Acyclic Graph:
A graph not containing any cycle is called an acyclic graph.
Example:
Here,
This graph does not contain any cycle.
Therefore, it is an acyclic graph.
Weighted Graph:
A weighted graph is a graph in which each edge is given a numerical weight. A weighted graph is therefore a special type of labelled graph in which the labels are numbers.
Path:
A path is a sequence of vertices connected by edges; in a directed graph, a directed path additionally requires the edges to be all directed in the same direction. Paths are fundamental concepts of graph theory, described in the introductory sections of most graph theory texts.
Length of a Path:
The length of a path is the number of edges it contains. For a simple graph, a path is
equivalent to a trail and is completely specified by an ordered sequence of vertices.
Simple Path:
A path is said to be a simple path if all the edges in the path are distinct.
Sequential Representation (Adjacency Matrix):
In the above figure, the mapping among the vertices (A, B, C, D, E) is represented using the adjacency matrix, which is also shown in the figure.
There exist different adjacency matrices for directed and undirected graphs. In a directed graph, an entry Aij will be 1 only when there is an edge directed from Vi to Vj.
A directed graph and its adjacency matrix representation is shown in the following figure.
The representation of a weighted directed graph is different: instead of filling the entries with 1, the non-zero entries of the adjacency matrix hold the weights of the respective edges.
The weighted directed graph along with the adjacency matrix representation is shown in the
following figure.
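As a sketch, a weighted directed graph can be stored in a two-dimensional array like this (the number of vertices and the edges here are made up for illustration):

#include <stdio.h>

#define V 4                     /* number of vertices (assumed) */

int main(void)
{
    /* adj[i][j] holds the weight of the edge from vertex i to vertex j,
       and 0 means "no edge" (assumes there are no zero-weight edges) */
    int adj[V][V] = {0};

    adj[0][1] = 3;              /* edge 0 -> 1 with weight 3 */
    adj[0][2] = 5;
    adj[1][3] = 2;
    adj[2][3] = 7;

    /* print the adjacency matrix row by row */
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++)
            printf("%d ", adj[i][j]);
        printf("\n");
    }
    return 0;
}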
Linked Representation:
In the linked representation, an adjacency list is used to store the Graph into the computer's
memory.
Two lists are maintained:
o Node list: each element corresponds to a node of graph G and has three fields: NODE (the value of the node), NEXT (a pointer to the next element in the node list) and ADJ (a pointer to the first element of that node's edge list).
o Edge list: each element corresponds to an edge of graph G and has two fields: DEST (the destination node of the edge) and LINK (a pointer to the next element in the edge list).
Consider the undirected graph shown in the following figure and check the adjacency list
representation.
An adjacency list is maintained for each node present in the graph, storing the node value and a pointer to the next adjacent node. Once all the adjacent nodes have been listed, NULL is stored in the pointer field of the last node of the list. In an undirected graph, the sum of the lengths of the adjacency lists is equal to twice the number of edges.
Consider the directed graph shown in the following figure and check the adjacency list
representation of the graph.
In a directed graph, the sum of lengths of all the adjacency lists is equal to the number of
edges present in the graph.
In the case of a weighted directed graph, each list node contains an extra field that stores the weight of the corresponding edge. The adjacency list representation of a weighted directed graph is shown in the following figure.
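A sketch of one way to lay out such an adjacency list in C, including the extra weight field mentioned above (all names are illustrative):

#include <stdlib.h>

/* one element of an adjacency list: destination vertex, weight, next pointer */
struct AdjNode {
    int dest;
    int weight;
    struct AdjNode *next;
};

/* the graph itself: one list head per vertex */
struct Graph {
    int V;
    struct AdjNode **head;
};

struct Graph* createGraph(int V)
{
    struct Graph *g = (struct Graph *)malloc(sizeof(struct Graph));
    g->V = V;
    g->head = (struct AdjNode **)calloc(V, sizeof(struct AdjNode *));
    return g;
}

/* add a directed, weighted edge src -> dest at the front of src's list */
void addEdge(struct Graph *g, int src, int dest, int weight)
{
    struct AdjNode *n = (struct AdjNode *)malloc(sizeof(struct AdjNode));
    n->dest = dest;
    n->weight = weight;
    n->next = g->head[src];
    g->head[src] = n;
}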
In this part, we will discuss the techniques by which we can traverse all the vertices of a graph.
Traversing the graph means examining all the vertices and edges of the graph. There are two standard methods by which we can traverse a graph. Let's discuss each one of them in detail.
o Breadth First Search
o Depth First Search
Breadth First Search (BFS) Algorithm:
Breadth first search is a graph traversal algorithm that starts traversing the graph from a root node and explores all the neighbouring nodes. Then it selects the nearest node and explores all the unexplored nodes. The algorithm follows the same process for each of the nearest nodes until it finds the goal.
The algorithm of breadth first search is given below. The algorithm starts by examining node A and all of its neighbours. In the next step, the neighbours of the nearest node of A are explored, and the process continues in further steps. The algorithm explores all neighbours of all the nodes and ensures that each node is visited exactly once and no node is visited twice.
Algorithm:
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
o Step 3: Repeat Steps 4 and 5 until QUEUE is empty
o Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state)
o Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
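A sketch of this algorithm in C, using an adjacency matrix, an array-based queue and a STATUS array as in the steps above (the graph built in main() is an assumption for illustration, not the example graph discussed below):

#include <stdio.h>

#define V 6                       /* number of vertices (assumed) */
#define READY 1                   /* STATUS values from the algorithm above */
#define WAITING 2
#define PROCESSED 3

void bfs(int adj[V][V], int start)
{
    int status[V], queue[V], front = 0, rear = 0;

    for (int i = 0; i < V; i++)   /* Step 1: mark every node ready */
        status[i] = READY;

    queue[rear++] = start;        /* Step 2: enqueue the starting node */
    status[start] = WAITING;

    while (front < rear)          /* Step 3: repeat until the queue is empty */
    {
        int n = queue[front++];   /* Step 4: dequeue and process node N */
        status[n] = PROCESSED;
        printf("%d ", n);

        for (int j = 0; j < V; j++)        /* Step 5: enqueue ready neighbours */
            if (adj[n][j] && status[j] == READY)
            {
                queue[rear++] = j;
                status[j] = WAITING;
            }
    }
}

int main(void)
{
    /* a small undirected example graph (assumed) */
    int adj[V][V] = {0};
    int edges[][2] = { {0,1}, {0,2}, {1,3}, {2,4}, {3,5}, {4,5} };
    int m = (int)(sizeof(edges) / sizeof(edges[0]));
    for (int e = 0; e < m; e++)
    {
        adj[edges[e][0]][edges[e][1]] = 1;
        adj[edges[e][1]][edges[e][0]] = 1;
    }
    bfs(adj, 0);
    printf("\n");
    return 0;
}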
Example:
Consider the graph G shown in the following image and calculate the minimum path P from node A to node E, given that each edge has a length of 1.
Solution:
The minimum path P can be found by applying the breadth first search algorithm, beginning at node A and ending at E. The algorithm uses two queues, namely QUEUE1 and QUEUE2. QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that have been processed and deleted from QUEUE1.
Let's start examining the graph from node A.
1. Add A to QUEUE1 and NULL to QUEUE2.
1. QUEUE1 = {A}
2. QUEUE2 = {NULL}
2. Delete the Node A from QUEUE1 and insert all its neighbours. Insert Node A into QUEUE2
1. QUEUE1 = {B, D}
2. QUEUE2 = {A}
3. Delete the node B from QUEUE1 and insert all its neighbours. Insert node B into QUEUE2.
1. QUEUE1 = {D, C, F}
2. QUEUE2 = {A, B}
4. Delete the node D from QUEUE1 and insert all its neighbours. Since its only neighbour F has already been inserted, we will not insert it again. Insert node D into QUEUE2.
1. QUEUE1 = {C, F}
2. QUEUE2 = { A, B, D}
5. Delete the node C from QUEUE1 and insert all its neighbours. Add node C to QUEUE2.
1. QUEUE1 = {F, E, G}
2. QUEUE2 = {A, B, D, C}
6. Remove F from QUEUE1 and add all its neighbours. Since all of its neighbours have already been added, we will not add them again. Add node F to QUEUE2.
1. QUEUE1 = {E, G}
2. QUEUE2 = {A, B, D, C, F}
7. Remove E from QUEUE1. All of E's neighbours have already been added to QUEUE1, so we will not add them again. All the nodes are visited, and the target node E has been encountered and placed into QUEUE2.
1. QUEUE1 = {G}
2. QUEUE2 = {A, B, D, C, F, E}
Backtracking from E through the order in which the nodes were discovered gives the minimum path P as A → B → C → E, which has length 3.
Depth First Search (DFS) Algorithm:
Depth first search traverses the graph in the same spirit as BFS, but it uses a stack instead of a queue, so it explores as far as possible along one branch before backtracking.
Algorithm:
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
o Step 3: Repeat Steps 4 and 5 until STACK is empty
o Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
o Step 5: Push on the stack all the neighbours of N that are in the ready state (whose
STATUS = 1) and set their
STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
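Similarly, a sketch of DFS in C, where the queue of the BFS sketch is replaced by an explicit stack (same STATUS convention; the graph data would be supplied exactly as in the BFS sketch above, e.g. by the same main()):

#include <stdio.h>

#define V 6                       /* number of vertices (assumed) */
#define READY 1
#define WAITING 2
#define PROCESSED 3

void dfs(int adj[V][V], int start)
{
    int status[V], stack[V], top = -1;

    for (int i = 0; i < V; i++)   /* Step 1: mark every node ready */
        status[i] = READY;

    stack[++top] = start;         /* Step 2: push the starting node */
    status[start] = WAITING;

    while (top >= 0)              /* Step 3: repeat until the stack is empty */
    {
        int n = stack[top--];     /* Step 4: pop and process node N */
        status[n] = PROCESSED;
        printf("%d ", n);

        for (int j = 0; j < V; j++)        /* Step 5: push ready neighbours */
            if (adj[n][j] && status[j] == READY)
            {
                stack[++top] = j;
                status[j] = WAITING;
            }
    }
}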
Example:
Consider the graph G along with its adjacency list given in the figure below. Calculate the order in which the nodes of the graph are printed, starting from node H, using the depth first search (DFS) algorithm.
Solution:
Push H onto the stack
1. STACK : H
Pop the top element of the stack, i.e. H, print it and push all the neighbours of H onto the stack that are in the ready state.
1. Print H
2. STACK : A
Pop the top element of the stack i.e. A, print it and push all the neighbours of A onto the
stack that are in ready state.
1. Print A
2. Stack : B, D
Pop the top element of the stack i.e. D, print it and push all the neighbours of D onto the
stack that are in ready state.
1. Print D
2. Stack : B, F
Pop the top element of the stack i.e. F, print it and push all the neighbours of F onto the
stack that are in ready state.
1. Print F
2. Stack : B
Pop the top of the stack, i.e. B, and push all its neighbours that are in the ready state.
1. Print B
2. Stack : C
Pop the top of the stack i.e. C and push all the neighbours.
1. Print C
2. Stack : E, G
Pop the top of the stack i.e. G and push all its neighbours.
1. Print G
2. Stack : E
Pop the top of the stack i.e. E and push all its neighbours.
1. Print E
2. Stack :
Hence, the stack now becomes empty and all the nodes of the graph have been traversed. The printing sequence of the graph is:
1. H → A → D → F → B → C → G → E
Comparison of BFS and DFS:
1. Definition: BFS stands for Breadth First Search; DFS stands for Depth First Search.
2. Data structure: BFS uses a Queue to find the shortest path; DFS uses a Stack to find the shortest path.
3. Source: BFS is better when the target is closer to the source; DFS is better when the target is far from the source.
4. Suitability for decision trees: As BFS considers all neighbours, it is not suitable for the decision trees used in puzzle games; DFS is more suitable for decision trees, since with one decision we need to traverse further to augment the decision, and if we reach the conclusion, we win.