Unit-4 ECE 3-2 Notes

Unit-4
The Network Layer Design Issues
Store-and-Forward Packet Switching:
 Store-and-forward switching is a method of packet switching in which the
switching device receives the complete data frame and checks it for errors
before forwarding it.
 It supports the efficient transmission of non-corrupted frames and is widely
used in telecommunication networks.
 In store-and-forward switching, the switching device waits to receive the
entire frame and then stores the frame in its buffer memory.
 The frame is then checked for errors using a CRC (Cyclic Redundancy Check).
If an error is found the frame is discarded; otherwise it is forwarded to the
next device.
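The buffer-then-verify behaviour described above can be sketched in a few lines. This is an illustrative sketch only, using Python's zlib.crc32 to stand in for the link-layer CRC; the frame layout (payload followed by a 4-byte CRC trailer) is an assumption of the example:

```python
import zlib

def store_and_forward(frame: bytes):
    """Buffer the whole frame, verify its trailing CRC-32, then forward it."""
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != received_crc:
        return None          # corrupted frame is discarded
    return payload           # non-corrupted frame is forwarded

# Build a frame with a valid CRC trailer, then corrupt one bit of a copy.
data = b"hello network layer"
frame = data + zlib.crc32(data).to_bytes(4, "big")
assert store_and_forward(frame) == data          # clean frame forwarded
bad = bytearray(frame)
bad[0] ^= 1                                       # single-bit error
assert store_and_forward(bytes(bad)) is None      # corrupted frame dropped
```

A CRC-32 is guaranteed to detect any single-bit error, which is why flipping one bit in the example always causes the frame to be discarded.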

Services provided to the Transport Layer


The services provided to the transport layer are as follows:
 Logical Addressing − The network layer adds a header to each incoming
packet that includes logical addresses identifying the sender and receiver.
 Routing − The mechanism the network layer provides for routing packets to
their final destination by the fastest and most efficient path.

 Flow control − If too many packets are present in the network at the same
time, this layer can route packets along alternative paths, preventing
bottlenecks and congestion.
 Breaks Large Packets − Breaks larger packets into smaller packets
(fragments).
 Connection Oriented service − It is a network communication mode, where a
communication session is established before any useful data can be transferred
and where a stream of data is delivered in the same order as it was sent.
 Connectionless Service − It is a data transmission method used in packet
switching networks by which each data unit is individually addressed and routed
based on information carried in each unit, rather than in the setup information of a
prearranged, fixed data channel as in connection-oriented communication.
 Datagram − A datagram is a basic transfer unit associated with a packet-
switched network. The delivery, arrival time and order of arrival need not be
guaranteed by the network.
 A virtual circuit − It is a means of transporting data over a packet switched
computer network in such a way that it appears as though there is a dedicated
physical layer link between the source and destination end system of this data.
Connectionless Service
 Connectionless service is similar to a postal system, in which each letter may
take a different route from the source to the destination address.
 Connectionless service is used to transfer data from one end of the network to
the other without creating any connection.
 So it does not require establishing a connection before sending data from the
sender to the receiver.
 It is not a reliable network service because it does not guarantee the transfer of
data packets to the receiver, and packets can arrive at the receiver in any order.
 Therefore, we can say that the data packets do not follow a defined path.
 In connectionless service, a transmitted data packet may not reach the receiver
because of network congestion, and the data may be lost.

Connection-Oriented Service
 A connection-oriented service is a network service that was designed and developed
after the telephone system.
 A connection-oriented service is used to create an end to end connection between the
sender and the receiver before transmitting the data over the same or different
networks.
 In connection-oriented service, packets are transmitted to the receiver in the same
order the sender has sent them.
 It uses a handshake method that creates a connection between the sender and
receiver for transmitting the data over the network.
 Hence it is also known as a reliable network service.
 Suppose a sender wants to send data to the receiver. First, the sender sends a
request packet to the receiver in the form of a SYN packet.
 After that, the receiver responds to the sender's request with a SYN-ACK
signal/packet.

 The SYN-ACK represents the receiver's confirmation to start communication
between the sender and the receiver. Now the sender can send the message or
data to the receiver.

Comparison of Virtual Circuit and Datagram Networks:


Virtual Circuits

 Connection-oriented switching is another name for virtual circuits. Before
messages are sent, virtual circuit switching sets up a predetermined route. This
route is referred to as a virtual circuit because it gives the user the impression
that a dedicated physical circuit exists.

 The call request and call accept packets are used to establish the connection between
the sender and the recipient.

 The term "virtual circuit" refers to a logical link between two network nodes,
typically in a communications network. The path consists of many network parts
that are connected by switches.

 In the diagram, the transmitter and receiver are A and B. The sender and
receiver are linked using the call request and call accept packets. Once a path
has been established, data is transferred.

 The receiver transmits an acknowledgement signal after receiving the data to


confirm receipt of the message. If a user wants to break the connection, a clear
signal is sent.

Datagram Networks

 It is a method of switching packets in which every packet, or "datagram," is
treated as a distinct entity.
 The switch uses the destination information contained in each packet to direct
it to the intended location.
 Since no specific channel is reserved for a connection session, there is no need
to reserve resources.

 As a result, packets have a header with all of the information about the destination.

 A packet's header is examined by the intermediate nodes, which then select an
appropriate link to another node that is closer to the destination.
 Datagram networks assign resources according to the First-Come-First-Serve
(FCFS) principle. Regardless of its source or destination, if another packet is
being processed when a packet arrives at a router, it must wait.

 Datagram packets transmitted between hosts H1 and H2 are shown in the
diagram above. The four datagram packets labelled A, B, C, and D carry the
same message, and each is sent over a separate path.
 The packets of the message may reach their destination out of order. It is H2's
responsibility to rearrange the packets in order to recover the original message.
Routing algorithm
o In order to transfer the packets from source to the destination, the network layer
must determine the best route through which packets can be transmitted.

o Whether the network layer provides datagram service or virtual circuit service,
the main job of the network layer is to provide the best route. The routing
protocol provides this job.
o The routing protocol is a routing algorithm that provides the best path from the
source to the destination. The best path is the path that has the "least-cost path"
from source to the destination.
o Routing is the process of forwarding the packets from source to the destination
but the best route to send the packets is determined by the routing algorithm.
Classification of a Routing algorithm
The Routing algorithm is divided into two categories:
o Adaptive Routing algorithm
o Non-adaptive Routing algorithm

Adaptive Routing algorithm


o An adaptive routing algorithm is also known as dynamic routing algorithm.
o This algorithm makes the routing decisions based on the topology and network
traffic.
o The main parameters related to this algorithm are hop count, distance and
estimated transit time.
An adaptive routing algorithm can be classified into three parts:

o Centralized algorithm: It is also known as the global routing algorithm, as it
computes the least-cost path between source and destination by using complete
and global knowledge about the network.
o Isolation algorithm: It is an algorithm that obtains the routing information by
using local information rather than gathering information from other nodes.
o Distributed algorithm: It is also known as decentralized algorithm as it
computes the least-cost path between source and destination in an iterative and
distributed manner. In the decentralized algorithm, no node has the knowledge
about the cost of all the network links.

Non-Adaptive Routing algorithm


o A non-adaptive routing algorithm is also known as a static routing algorithm.
o The routing information is stored in the routers when the network is booted.
o Non-adaptive routing algorithms do not take routing decisions based on the
network topology or network traffic.
The Non-Adaptive Routing algorithm is of two types:
Flooding: In flooding, every incoming packet is sent out on all outgoing links
except the one on which it arrived. The disadvantage of flooding is that a node
may receive several copies of a particular packet.
Random walks: In a random walk, a packet is sent by the node to one of its
neighbours chosen at random. An advantage of random walks is that they use
alternative routes very efficiently.
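Flooding as described above can be sketched as a simple graph traversal. This is a toy model, not a real router implementation: the graph is an adjacency dictionary, and the hop counter that stops copies circulating forever is an assumption of the sketch:

```python
def flood(graph, src, hop_limit=4):
    """Send a packet out on every link except the one it arrived on.

    A hop counter limits how far copies travel, and a (node, arrived-from)
    set suppresses exact duplicates on the same link."""
    seen = set()                        # (to, from) link copies already sent
    reached = set()
    frontier = [(src, None, hop_limit)]
    while frontier:
        node, came_from, hops = frontier.pop()
        reached.add(node)
        if hops == 0:
            continue
        for neighbor in graph[node]:
            if neighbor != came_from and (neighbor, node) not in seen:
                seen.add((neighbor, node))
                frontier.append((neighbor, node, hops - 1))
    return reached

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
assert flood(net, "A") == {"A", "B", "C", "D"}   # every node gets a copy
```

Note that even in this tiny network each node receives the packet on more than one link, which is exactly the duplicate-copy disadvantage the text mentions.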

Optimality Principle
 It states that if router J is on the optimal path from router I to router K, then the
optimal path from J to K also falls along the same route. Call the part of the
route from I to J r1 and the rest of the route r2. If a route better than r2 existed
from J to K, it could be concatenated with r1 to improve the route from I to K,
contradicting our statement that r1r2 is optimal.
Sink Tree for routers:
We can see, as a direct consequence of the optimality principle, that the set
of optimal routes from all sources to a given destination forms a tree rooted at
the destination. This tree is called a sink tree and is illustrated in fig(1).
In the given figure the distance metric is the number of hops. The goal of
all routing algorithms is to discover and use the sink trees for all routers.
The sink tree is not unique; other trees with the same path lengths may
exist. If we allow all of the possible paths to be chosen, the tree becomes a more
general structure called a DAG (Directed Acyclic Graph). DAGs have no loops.
We will use sink trees as convenient shorthand for both cases. For both cases
we make the technical assumption that paths do not interfere with each other so,
for example, a traffic jam on one path will not cause another path to divert.
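A sink tree can be computed by running a shortest-path algorithm from the destination outward. The sketch below uses Dijkstra's algorithm over an undirected weighted graph (the weights are an assumption; with all weights equal to 1 the metric reduces to the hop count used in the figure). The predecessor map it returns is the sink tree: for each node it gives the next hop toward the destination.

```python
import heapq

def sink_tree(graph, dest):
    """Dijkstra from the destination; parent[v] is v's next hop toward dest,
    so the parent map encodes the sink tree rooted at dest."""
    dist = {dest: 0}
    parent = {}
    pq = [(0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u             # next hop from v toward dest
                heapq.heappush(pq, (nd, v))
    return parent, dist

g = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 1}, "C": {"A": 4, "B": 1}}
parent, dist = sink_tree(g, "C")
assert parent["A"] == "B" and dist["A"] == 2   # A reaches C via B, cost 2
```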

General Principles of Congestion Control


 Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it
happens. The congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy:
It is the policy under which retransmission of packets is taken care of. If the
sender feels that a sent packet is lost or corrupted, the packet needs to be
retransmitted. This retransmission may increase the congestion in the network.
2. Window Policy:
The type of window used at the sender's side may also affect congestion. With a
Go-back-N window, several packets are re-sent even though some of them may
have been received successfully at the receiver side.
3. Discarding Policy:
A good discarding policy lets routers prevent congestion by discarding corrupted
or less sensitive packets while still maintaining the quality of the message.
4. Acknowledgment Policy:
Since acknowledgements are also the part of the load in the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to acknowledgment.
5. Admission Policy:
In an admission policy, a mechanism should be used to prevent congestion.
Switches in a flow should first check the resource requirement of a network flow
before transmitting it further.
Closed Loop Congestion Control
 Closed loop congestion control techniques are used to treat or alleviate congestion
after it happens. Several techniques are used by different protocols; some of them
are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets
from its upstream node. This may cause the upstream node or nodes to become
congested and in turn reject data from their own upstream nodes. Backpressure is
a node-to-node congestion control technique that propagates in the direction
opposite to the data flow.

2. Choke Packet Technique:


The choke packet technique is applicable to both virtual circuit and datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization at each of its
output lines.

3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the
source. The source guesses that there is congestion in a network.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion it can explicitly send a
packet to the source or destination to inform it about the congestion. The
difference from the choke packet technique is that here the signal is included in
the packets that carry data rather than in a separate packet created as in the
choke packet technique.

Approaches to Congestion Control


There are some approaches for congestion control over a network which are usually
applied on different time scales to either prevent congestion or react to it once it has
occurred.

Step 1 − The basic way to avoid congestion is to build a network that is well
matched to the traffic it carries. If more traffic is directed onto a link than its
bandwidth allows, congestion occurs.

Step 2 − Sometimes resources such as spare routers and extra links can be added
when there is serious congestion. This is called provisioning, and it happens on a
timescale of months, driven by long-term traffic trends.
Step 3 − To make the most of existing network capacity, routes can be tailored to
traffic patterns; for example, because network users in different time zones are
active at different times of day, traffic can be shifted accordingly.
Step 4 − Some of local radio stations have helicopters flying around their cities to
report on road congestion to make it possible for their mobile listeners to route their
packets (cars) around hotspots. This is called traffic aware routing.
Step 5 − Sometimes it is not possible to increase capacity. The only way to reduce the
congestion is to decrease the load. In a virtual circuit network, new connections can be
refused if they would cause the network to become congested. This is called admission
control.
Step 6 − Routers can monitor the average load, queueing delay, or packet loss; in
each case a rising number indicates growing congestion. When the network is
forced to discard packets that it cannot deliver, the general name for this is load
shedding. A good technique for choosing which packets to discard can help to
prevent congestion collapse.

Traffic Aware Routing


 Traffic awareness is one of the approaches for congestion control over the network.
The basic way to avoid congestion is to build a network that is well matched to the
traffic that it carries.

 If more traffic is directed but a low-bandwidth link is available, congestion occurs.

 The main goal of traffic aware routing is to identify the best routes by considering
the load, set the link weight to be a function of fixed link bandwidth and propagation
delay and the variable measured load or average queuing delay.

 Least-weight paths will then favour paths that are more lightly loaded, all else
being equal.

Step 1 − Consider a network which is divided into two parts, East and West both are
connected by links CF and EI.

Step 2 − Suppose most of the traffic in between East and West is using link CF, and as
a result CF link is heavily loaded with long delays. Including queueing delay in the
weight which is used for shortest path calculation will make EI more attractive.

Step 3 − After installing the new routing tables, most of East-West traffic will now go
over the EI link. As a result in the next update CF link will appear to be the shortest
path.

Step 4 − As a result the routing tables may oscillate widely, leading to erratic routing
and many potential problems.

Step 5 − If we consider only bandwidth and propagation delay, ignoring the load,
this problem does not occur.

Step 6 − Two techniques can contribute for successful solution, which are as follows −

 Multipath routing
 The routing scheme to shift traffic across routes.

Admission control approach


 The presence of congestion means the load is greater than the resources available
over a network to handle.

 Generally, we will get an idea to reduce the congestion by trying to increase the
resources or decrease the load, but it is not that much of a good idea. There are some
approaches for congestion control over a network which are usually applied on
different time scales to either prevent congestion or react to it once it has occurred.

Admission Control
 It is one of the techniques widely used in virtual-circuit networks to keep
congestion at bay. The idea is simple: do not set up a new virtual circuit unless
the network can carry the added traffic without becoming congested.
 Admission control can also be combined with traffic aware routing by considering
routes around traffic hotspots as part of the setup procedure.
 Take two networks: (a) a congested network and (b) the portion of the network
that is not congested. A virtual circuit from A to B is also shown below −

Step 1 − Suppose a host attached to router A wants to set up a connection to a host


attached to router B. Normally this connection passes through one of the congested
routers.

Step 2 − To avoid this situation, we can redraw the network as shown in figure (b),
removing the congested routers and all of their lines.

Step 3 − The dashed line indicates a possible route for the virtual circuit that avoids the
congested routers.

Traffic Throttling
 Traffic throttling is one of the approaches to congestion control. In the Internet
and other computer networks, senders try to adjust their transmissions to send
as much traffic as the network can readily deliver. In this setting the network's
aim is to operate just before the onset of congestion.

 There are some approaches to throttling traffic that can be used in both datagram
and virtual-circuit networks.

Each approach has to solve two problems −

First
Routers have to determine when congestion is approaching, ideally before it
arrives. To do so, each router can continuously monitor the resources it is using.

There are three possibilities, which are as follows −

 Utilisation of the output links.
 Buffering of queued packets inside the router.
 Number of packets lost due to insufficient buffering.
Second
Averages of utilization do not directly account for the burstiness of most traffic,
whereas queueing delay inside routers directly captures any congestion
experienced by packets. To maintain a good estimate of the queueing delay d, a
sample of the instantaneous queue length s can be taken periodically and d
updated according to

dnew = α·dold + (1 − α)·s

where the constant α determines how fast the router forgets recent history. This is
called an EWMA (Exponentially Weighted Moving Average).

It smooths out fluctuations and is equivalent to a low-pass filter. Whenever d
moves above the threshold, the router notes the onset of congestion.
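The update rule dnew = α·dold + (1 − α)·s can be checked numerically. The sample values and the α below are arbitrary choices for illustration:

```python
def ewma_update(d_old, sample, alpha=0.9):
    """One EWMA step: dnew = alpha*dold + (1 - alpha)*sample.
    A larger alpha makes the router forget recent history more slowly."""
    return alpha * d_old + (1 - alpha) * sample

# Feed a constant queue-length sample of 10; the estimate converges toward it.
d = 0.0
for s in [10, 10, 10, 10]:
    d = ewma_update(d, s, alpha=0.5)
assert abs(d - 9.375) < 1e-9     # 5 → 7.5 → 8.75 → 9.375
```

Because each step only halves the remaining gap (with α = 0.5), a sudden one-sample spike barely moves the estimate; this is the low-pass filtering behaviour the text refers to.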

Routers must deliver timely feedback to the senders that are causing the congestion.
Routers must also identify the appropriate senders. It must then warn carefully, without
sending many more packets into an already congested network.

There are many feedback mechanisms one of them is as follows −


Explicit Congestion Notification (ECN)
The Explicit Congestion Notification (ECN) is diagrammatically represented as follows

Step 1 − Instead of generating additional packets to warn of congestion, a router
can tag any packet it forwards by setting a bit in the packet header to signal that
it is experiencing congestion.
Step 2 − When the network delivers the packet, the destination can note that there is
congestion and inform the sender when it sends a reply packet.

Step 3 − The sender can then throttle its transmissions as before.

Step 4 − This design is called explicit congestion notification and is mostly used on the
Internet.
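The mark-and-echo loop in the steps above can be sketched with a toy packet model. Plain dictionaries stand in for real headers here; the field names "ece" and "echo_congestion" are assumptions of the sketch, not the actual IP ECN codepoints or TCP flags:

```python
def router_forward(packet, congested):
    """Step 1: a congested router tags the packet it forwards instead of
    generating an extra warning packet."""
    if congested:
        packet["ece"] = True         # congestion-experienced mark
    return packet

def receiver_reply(packet):
    """Step 2: the destination notes the mark and echoes it back to the
    sender in its reply."""
    return {"echo_congestion": packet.get("ece", False)}

pkt = router_forward({"payload": "data"}, congested=True)
reply = receiver_reply(pkt)
assert reply["echo_congestion"] is True   # step 3: sender now throttles
```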

Load shedding
 The presence of congestion means the load is greater than the resources available
over a network to handle.

 Generally we will get an idea to reduce the congestion by trying to increase the
resources or decrease the load, but it is not that much of a good idea.

 There are some approaches for congestion control over a network which are usually
applied on different time scales to either prevent congestion or react to it once it has
occurred.

Load Shedding

 It is one of the approaches to congestion control. A router contains a buffer to
store packets before routing them to their destination. When the buffer is full,
it simply discards some packets.
 It chooses the packets to be discarded based on the strategy implemented.
This is called load shedding.
 Which packets to drop depends on the application: for a file transfer an old
packet is worth more than a new one, while for real-time media a new packet is
worth more than an old one. For compressed video, dropping a packet that
carries a difference frame is preferable to dropping one that carries a full
frame, because future packets depend on the full frame.
 To implement an intelligent discard policy, applications must mark their
packets to indicate to the network how important they are.
 When packets have to be discarded, routers can first drop packets from the
least important class, then the next most important class, and so on.

Traffic Control Algorithms - Leaky Bucket & Token Bucket


Congestion control algorithms
 Congestion Control is a mechanism that controls the entry of data packets into
the network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as
the mechanism to avoid congestive collapse in a network.
 There are two congestion control algorithms, which are as follows:
Leaky Bucket Algorithm
 The leaky bucket algorithm finds its use in network traffic shaping and
rate-limiting.
 Leaky bucket and token bucket implementations are the predominant traffic
shaping algorithms.

 This algorithm is used to control the rate at which traffic is sent to the network
and to shape bursty traffic into a steady traffic stream.
 The disadvantage of the leaky bucket algorithm is the inefficient use of
available network resources.
 Large amounts of network resources such as bandwidth can go unused even
when they are available.
Imagine a bucket with a small hole in the bottom. No matter at what rate water
enters the bucket, the outflow is at a constant rate. When the bucket is full,
additional water entering it spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
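The four steps above can be sketched as a finite queue drained at a constant rate. The capacity and rate values are arbitrary choices for illustration, and "tick" stands in for one interval of the router's clock:

```python
from collections import deque

class LeakyBucket:
    """Finite queue (the bucket) that outputs at a constant rate."""
    def __init__(self, capacity, rate):
        self.queue = deque()
        self.capacity = capacity     # bucket size in packets
        self.rate = rate             # packets transmitted per tick

    def arrive(self, packet):
        """Step 1: a packet is thrown into the bucket (or spills over)."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                 # bucket full: packet is lost

    def tick(self):
        """Step 2: the bucket leaks at a constant rate."""
        return [self.queue.popleft()
                for _ in range(min(self.rate, len(self.queue)))]

b = LeakyBucket(capacity=3, rate=1)
accepted = [b.arrive(p) for p in ("p1", "p2", "p3", "p4")]  # burst of 4
assert accepted == [True, True, True, False]                # p4 spills over
assert b.tick() == ["p1"] and b.tick() == ["p2"]            # steady output
```

The burst of four arrivals leaves the bucket as one packet per tick, which is the bursty-to-uniform conversion of step 3.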
Token bucket Algorithm
 The leaky bucket algorithm has a rigid output design: an average rate that is
independent of how bursty the traffic is.
 In some applications, when large bursts arrive, the output should be allowed to
speed up. This calls for a more flexible algorithm, preferably one that never
loses information. Therefore, the token bucket algorithm finds its use in
network traffic shaping and rate-limiting.
 It is a control algorithm that indicates when traffic may be sent, based on the
presence of tokens in the bucket.
 The bucket contains tokens. Each token permits a packet of a predetermined
size to be sent, and tokens are removed from the bucket as packets are sent.
 When tokens are present, a flow is allowed to transmit traffic.
 No token means the flow cannot send its packets. Hence, a flow can transmit
traffic up to its peak burst rate only while there are enough tokens in the
bucket.
Need of token bucket Algorithm:-
The leaky bucket algorithm enforces output pattern at the average rate, no matter how
bursty the traffic is. So in order to deal with the bursty traffic we need a flexible
algorithm so that the data is not lost. One such algorithm is token bucket algorithm.
Steps of this algorithm can be described as follows:
1. In regular intervals tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
4. If there is no token in the bucket, the packet cannot be sent.
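The four steps above can be sketched as follows. The token counts and fill rate are arbitrary illustration values, and each token permits one packet:

```python
class TokenBucket:
    """Tokens accumulate at a fixed rate up to a maximum; each packet sent
    consumes one token, so saved-up tokens permit a burst."""
    def __init__(self, max_tokens, fill_rate):
        self.tokens = max_tokens          # start with a full bucket
        self.max_tokens = max_tokens      # step 2: maximum capacity
        self.fill_rate = fill_rate        # tokens added per interval

    def tick(self):
        """Step 1: tokens are thrown into the bucket at regular intervals."""
        self.tokens = min(self.max_tokens, self.tokens + self.fill_rate)

    def try_send(self):
        """Steps 3-4: remove a token and send, or refuse if none remain."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(max_tokens=2, fill_rate=1)
burst = [tb.try_send() for _ in range(3)]
assert burst == [True, True, False]   # burst limited by saved-up tokens
tb.tick()
assert tb.try_send() is True          # one new token per interval afterwards
```

Unlike the leaky bucket, an idle flow accumulates tokens, so it can later send a burst of up to max_tokens packets at once.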
Let’s understand with an example.

Formula: M · s = C + ρ · s, so s = C / (M − ρ)

where
s – burst length (time taken)
M – maximum output rate
ρ – token arrival rate
C – capacity of the token bucket in bytes

Internetworking:
Tunneling:
 Tunneling is an internetworking technique used when the source and
destination networks are of the same type but are connected through a network
of a different type. Tunneling uses a layered protocol model such as the OSI or
TCP/IP protocol suite.
 In other words, as data moves from host A to host B it passes through the
different levels of the specified protocol stack (OSI, TCP/IP, etc.); the data
conversion (encapsulation) performed to suit the interfaces of each layer is
called tunneling.

Tunneling
The task is to send an IP packet from host A on Ethernet-1 to host B on
Ethernet-2 via a WAN.
Steps
 Host A constructs a packet that contains the IP address of Host B.
 It then inserts this IP packet into an Ethernet frame and this frame is addressed
to the multiprotocol router M1
 Host A then puts this frame on Ethernet.
 When M1 receives this frame, it removes the IP packet, inserts it in the
payload field of a WAN network layer packet, and addresses the WAN packet
to M2. The multiprotocol router M2 removes the IP packet and sends it to
host B in an Ethernet frame.
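The encapsulation steps above can be sketched with a toy frame model. Dictionaries stand in for real frame and packet formats here; the field names are assumptions of the sketch:

```python
def ethernet_frame(dst, packet):
    """Wrap a network layer packet in a (toy) Ethernet frame."""
    return {"link": "ethernet", "dst": dst, "payload": packet}

def wan_packet(dst, packet):
    """Wrap a network layer packet in a (toy) WAN network layer packet."""
    return {"link": "wan", "dst": dst, "payload": packet}

# Host A builds an IP packet for host B and frames it for router M1.
ip_packet = {"proto": "IPv4", "dst": "B", "data": "hello"}
frame = ethernet_frame("M1", ip_packet)

# M1 strips the frame and tunnels the IP packet inside a WAN packet to M2.
tunneled = wan_packet("M2", frame["payload"])

# M2 strips the WAN packet and delivers the IP packet to B in a new frame.
delivered = ethernet_frame("B", tunneled["payload"])
assert delivered["payload"] is ip_packet   # IP packet crossed the WAN intact
```

The key point the assertion checks: the inner IP packet is carried unchanged end to end; only the outer wrappers change at each hop.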

Fragmentation
 Fragmentation is an important function of the network layer. It is a technique
in which gateways break up or divide larger packets into smaller ones called
fragments. Each fragment is then sent as a separate internal packet. Each
fragment has its own separate header and trailer.
 Sometimes, a fragmented datagram can also get fragmented further when it
encounters a network that handles smaller fragments. Thus, a datagram can be
fragmented several times before it reaches final destination.
 Reverse process of the fragmentation is difficult. Reassembling of fragments is
usually done by the destination host because each fragment has become an
independent datagram.
 There are two different strategies for the recombination or we can say
reassembly of fragments : Transparent Fragmentation, and Non-Transparent
Fragmentation.
1. Transparent Fragmentation:
Fragmentation done by one network is made transparent to all subsequent
networks through which the packet will pass. Whenever a large packet arrives at
a gateway, the gateway breaks the packet into smaller fragments, as shown in the
following figure, i.e. the gateway G1 breaks a packet into smaller fragments.

After this, each fragment is addressed to the same exit gateway. The exit
gateway of a network reassembles or recombines all fragments, as shown in the
figure: the exit gateway G2 of network 1 recombines all fragments created by G1
before passing them to network 2. Thus, subsequent networks are not aware that
fragmentation has occurred. This type of strategy is used by ATM networks.
These networks use special hardware that provides transparent fragmentation of
packets.

2. Non-Transparent Fragmentation:
Fragmentation done by one network is non-transparent to the subsequent
networks through which the packet passes. A packet fragmented by a gateway of
a network is not recombined by the exit gateway of the same network, as shown
in the figure below.

Once a packet is fragmented, each fragment is treated as an original packet. All
fragments of a packet pass through the exit gateway, and recombination of the
fragments is done only at the destination host.
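Fragmentation itself can be sketched as follows. The sketch follows the IPv4 convention (covered in the next section) that fragment data lengths are multiples of 8 bytes and offsets are counted in 8-byte units; the dictionary fields are simplified stand-ins for real header fields:

```python
def fragment(payload: bytes, mtu: int):
    """Split a datagram payload into fragments whose data length is a
    multiple of 8 bytes (except possibly the last one)."""
    chunk = (mtu // 8) * 8               # round data size down to 8-byte units
    frags = []
    offset = 0
    while offset < len(payload):
        data = payload[offset:offset + chunk]
        more = offset + chunk < len(payload)   # "more fragments" flag
        frags.append({"offset": offset // 8, "mf": more, "data": data})
        offset += chunk
    return frags

frags = fragment(b"x" * 100, mtu=40)
assert [f["offset"] for f in frags] == [0, 5, 10]     # in 8-byte units
assert [f["mf"] for f in frags] == [True, True, False]
assert b"".join(f["data"] for f in frags) == b"x" * 100  # reassembly works
```

The final assertion is exactly the destination host's reassembly step: sort by offset, concatenate, and stop at the fragment whose "more fragments" flag is clear.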

IP version 4 protocol:
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was
the first version deployed for production use, in the ARPANET in 1983.
IPv4 addresses are 32-bit integers, which are usually expressed in dotted
decimal notation.
Example: 192.0.2.126 is an IPv4 address.
Parts of IPv4
 Network part:
The network part is the unique number assigned to the network. It also
identifies the class of the network.

 Host Part:
The host part uniquely identifies a machine on the network. This part of the
IPv4 address is assigned to every host.
For each host on the network, the network part is the same, but the host part
must differ.
 Subnet number:
This is an optional part of IPv4 addressing. Local networks that have large
numbers of hosts can be divided into subnets, and subnet numbers are
assigned to them.
IPv4 Header Format:
 IPv4 is a connectionless protocol used for packet-switched networks. It operates on
a best effort delivery model, in which neither delivery is guaranteed, nor is proper
sequencing or avoidance of duplicate delivery assured.
 Internet Protocol Version 4 (IPv4) is the fourth revision of the Internet Protocol
and a widely used protocol in data communication over different kinds of
networks.
 IPv4 is a connectionless protocol used in packet-switched layer networks, such as
Ethernet. It provides a logical connection between network devices by providing
identification for each device.
 There are many ways to configure IPv4 with all kinds of devices – including
manual and automatic configurations – depending on the network type.
 IPv4 is defined and specified in IETF publication RFC 791.
IPv4 uses 32-bit addresses in five classes: A, B, C, D and E.
 Classes A, B and C use different splits between the network and host parts of
the address. Class D addresses are reserved for multicasting, while class E
addresses are reserved for future use.
 IPv4 uses 32-bit (4 byte) addressing, which gives 2^32 addresses. IPv4
addresses are written in dot-decimal notation, which comprises the four octets
of the address expressed individually in decimal and separated by periods, for
instance, 192.168.1.5.
IPv4 Datagram Header
The size of the header is 20 to 60 bytes.
VERSION: Version of the IP protocol (4 bits); the value is 4 for IPv4.
HLEN: IP header length (4 bits), the number of 32-bit words in the header.
The minimum value for this field is 5 and the maximum is 15.
Type of service: Low Delay, High Throughput, Reliability (8 bits)
Total Length: Length of header + Data (16 bits), which has a minimum value 20
bytes and the maximum is 65,535 bytes.
Identification: Unique Packet Id for identifying the group of fragments of a single IP
datagram (16 bits)
Flags: 3 flags of 1 bit each: reserved bit (must be zero), do-not-fragment
flag, more-fragments flag (in that order)
Fragment Offset: Represents the number of data bytes ahead of this fragment
in the original datagram. It is specified in units of 8 bytes, so the maximum
offset it can express is 65,528 bytes.
Time to live: Datagram’s lifetime (8 bits). It prevents the datagram from
looping through the network by restricting the number of hops a packet can
take before it is delivered to the destination.
Protocol: Name of the protocol to which the data is to be passed (8 bits)
Header Checksum: 16 bits header checksum for checking errors in the datagram
header
Source IP address: 32 bits IP address of the sender
Destination IP address: 32 bits IP address of the receiver
Option: Optional information such as source route and record route, used by
the network administrator to check whether a path is working or not.
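The fixed part of this header can be decoded with a few lines of Python. The sketch below (an illustration added to these notes, not part of any standard library for IP) unpacks a hand-built 20-byte header using the field layout described above:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte part of an IPv4 header (simplified sketch)."""
    # !BBHHHBBH4s4s = version/IHL, type of service, total length,
    # identification, flags+fragment offset, TTL, protocol, checksum, src, dst
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_words": ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, IHL 5, TTL 64, protocol 6 (TCP)
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 168, 1, 5]), bytes([8, 8, 8, 8]))
hdr = parse_ipv4_header(sample)
print(hdr["version"], hdr["src"], hdr["dst"])  # 4 192.168.1.5 8.8.8.8
```

Note that version and HLEN share one byte, which is why the first field is split with shift and mask operations.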

CIDR:
 CIDR stands for Classless Inter-Domain Routing. It is an IP address
assignment method that improves the efficiency of address distribution. It is
also known as supernetting, and it replaces the older system based on
Class A, B, and C networks. A single CIDR prefix can designate many unique
IP addresses. A CIDR address looks like a normal IP address except that it
ends with a slash followed by a number, for example 172.200.0.0/16; the /16
part is called the IP network prefix.
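Python's standard ipaddress module can show concretely what a prefix such as 172.200.0.0/16 designates. This is a small demonstration added for illustration:

```python
import ipaddress

# The /16 prefix means the first 16 bits identify the network.
net = ipaddress.ip_network("172.200.0.0/16")

print(net.netmask)            # 255.255.0.0
print(net.num_addresses)      # 65536 addresses in this block (2^(32-16))
print(net.broadcast_address)  # 172.200.255.255

# Membership test: is a given address inside the CIDR block?
print(ipaddress.ip_address("172.200.3.7") in net)  # True
```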

NAT:
 To access the Internet, one public IP address is needed, but we can use a private IP
address in our private network. The idea of NAT is to allow multiple devices to
access the Internet through a single public address. To achieve this, the translation
of a private IP address to a public IP address is required.
 Network Address Translation (NAT) is a process in which one or more local
IP addresses are translated into one or more global IP addresses, and vice
versa, in order to provide Internet access to the local hosts.
 NAT inside and outside addresses –
Inside refers to the addresses which must be translated. Outside refers to
the addresses which are not under the control of the organization; these are
the network addresses into which the inside addresses will be translated.

 Inside local address – An IP address that is assigned to a host on the
inside (local) network. The address is usually not an IP address assigned by
the service provider, i.e., these are private IP addresses. This is the
inside host as seen from the inside network.
 Inside global address – IP address that represents one or more inside local IP
addresses to the outside world. This is the inside host as seen from the outside
network.
 Outside local address – This is the actual IP address of the destination host in
the local network after translation.
 Outside global address – This is the outside host as seen from the outside
network. It is the IP address of the outside destination host before translation.
Network Address Translation (NAT) Types –
There are 3 ways to configure NAT:
1. Static NAT – In this, a single unregistered (private) IP address is mapped
to a legally registered (public) IP address, i.e., a one-to-one mapping
between local and global addresses. This is generally used for Web hosting.
It is not used inside organizations, because many devices need Internet
access and each would then require its own public IP address.
2. Dynamic NAT – In this type of NAT, an unregistered (private) IP address is
translated into a registered (public) IP address from a pool of public IP
addresses. If no address in the pool is free, the packet will be dropped, as
only a fixed number of private IP addresses can be translated at a time.
3. Port Address Translation (PAT) – This is also known as NAT overload. In
this, many local (private) IP addresses can be translated to a single registered IP
address. Port numbers are used to distinguish the traffic i.e., which traffic belongs
to which IP address. This is most frequently used as it is cost-effective as
thousands of users can be connected to the Internet by using only one real global
(public) IP address.
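The PAT idea can be sketched as a small translation table: many private (address, port) pairs share one public IP, and the router-assigned public port tells the return traffic apart. The class below is a hypothetical illustration (the public address 203.0.113.10 and the starting port 40000 are assumed for the example), not a real NAT implementation:

```python
class PatTable:
    """Toy PAT (NAT overload) table: private (ip, port) <-> public port."""

    def __init__(self, public_ip: str, first_port: int = 40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip: str, private_port: int):
        """Outbound packet: map the private pair to the shared public IP."""
        key = (private_ip, private_port)
        if key not in self.out:          # allocate a fresh public port once
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out[key])

    def translate_in(self, public_port: int):
        """Inbound packet: the public port identifies the private host."""
        return self.back.get(public_port)

pat = PatTable("203.0.113.10")
print(pat.translate_out("192.168.1.2", 5000))  # ('203.0.113.10', 40000)
print(pat.translate_out("192.168.1.3", 5000))  # ('203.0.113.10', 40001)
print(pat.translate_in(40001))                 # ('192.168.1.3', 5000)
```

Two hosts using the same private port 5000 still get distinct public ports, which is exactly how one public address serves many users.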

Internet Protocol version 6 (IPv6)


IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with
the problem of IPv4 address exhaustion. An IPv6 address is 128 bits long,
giving an address space of 2^128, which is far larger than that of IPv4. IPv6
addresses are written in hexadecimal, with groups separated by colons (:).
Components of the address format:
1. There are 8 groups, and each group represents 2 bytes (16 bits).
2. Each hex digit is 4 bits (1 nibble).
3. Delimiter used – colon (:)
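These formatting rules can be checked with Python's ipaddress module; the example below is purely illustrative:

```python
import ipaddress

# Full form: 8 groups of 4 hex digits, separated by colons.
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")

# Leading zeros can be dropped and one run of zero groups collapsed to "::".
print(addr.compressed)   # 2001:db8::1
print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001
print(len(addr.packed))  # 16 bytes = 128 bits
```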
Need for IPv6:


1. Large address space
An IPv6 address is 128 bits long. Compared with the 32-bit address of IPv4,
this is a huge (2^96 times) increase in the address space.

2. Better header format


IPv6 uses a new header format in which options are separated from the base
header and inserted, when needed, between the base header and the upper-layer
data.

3. New options
IPv6 has new options to allow for additional functionalities.

4. Allowance for extension


IPv6 is designed to allow the extension of the protocol if required by new technologies
or applications.

5. Support for resource allocation


In IPv6, the type-of-service field has been removed, but two new fields,
traffic class and flow label, have been added to enable the source to request
special handling of the packet.

6. Support for more security


The encryption and authentication options in IPv6 provide confidentiality and
integrity of the packet.
Internet Protocol version 6 (IPv6) Header


IP version 6 is the new version of the Internet Protocol, which improves on
IP version 4 in complexity and efficiency. Let's look at the header of IP
version 6 and see how it differs from the IPv4 header.

IP version 6 Header Format:

Version (4-bits): Indicates version of Internet Protocol which contains bit sequence
0110.
Traffic Class (8-bits): The Traffic Class field indicates class or priority of IPv6
packet which is similar to Service Field in IPv4 packet. It helps routers to handle the
traffic based on the priority of the packet. If congestion occurs on the router then
packets with the least priority will be discarded.
As of now, only 4-bits are being used in which 0 to 7 are assigned to Congestion
controlled traffic and 8 to 15 are assigned to Uncontrolled traffic.
Flow Label (20-bits): Flow Label field is used by a source to label the packets
belonging to the same flow in order to request special handling by intermediate IPv6
routers, such as non-default quality of service or real-time service. To
distinguish the flow, an intermediate router can use the source address, the
destination address, and the flow label of the packets.
Payload Length (16-bits): It is a 16-bit (unsigned integer) field, indicates the total
size of the payload which tells routers about the amount of information a particular
packet contains in its payload.
Next Header (8-bits): Next Header indicates the type of extension header(if present)
immediately following the IPv6 header. Whereas In some cases it indicates the
protocols contained within upper-layer packets, such as TCP, UDP.
Hop Limit (8-bits): The Hop Limit field is the same as TTL in IPv4 packets.
It indicates the maximum number of intermediate nodes an IPv6 packet is
allowed to traverse. Its value is decremented by one by each node that
forwards the packet, and the packet is discarded if the value reaches 0. This
is used to discard packets that are stuck in an infinite loop because of some
routing error.
Source Address (128-bits): Source Address is the 128-bit IPv6 address of the original
source of the packet.
Destination Address (128-bits): The destination Address field indicates the IPv6
address of the final destination(in most cases). All the intermediate nodes can use this
information in order to correctly route the packet.
Extension Headers: In order to rectify the limitations of the IPv4 Option Field,
Extension Headers are introduced in IP version 6. The extension header mechanism is
a very important part of the IPv6 architecture. The Next Header field of the
IPv6 fixed header points to the first extension header, this first extension
header points to the second extension header, and so on.
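The 40-byte fixed header described above can be packed with Python's struct module. This is a minimal sketch (no extension headers), with the field widths taken from the list above; the sample addresses are illustrative documentation addresses:

```python
import struct
import ipaddress

def build_ipv6_header(payload_len: int, next_header: int, hop_limit: int,
                      src: str, dst: str,
                      traffic_class: int = 0, flow_label: int = 0) -> bytes:
    """Pack the 40-byte IPv6 fixed header (sketch, no extension headers)."""
    # First 32 bits: version (4) | traffic class (8) | flow label (20)
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return (struct.pack("!IHBB", first_word, payload_len,
                        next_header, hop_limit)
            + ipaddress.ip_address(src).packed     # 128-bit source
            + ipaddress.ip_address(dst).packed)    # 128-bit destination

# Next Header 6 = TCP carried directly after the fixed header.
hdr = build_ipv6_header(20, 6, 64, "2001:db8::1", "2001:db8::2")
print(len(hdr))      # 40 (fixed header size in bytes)
print(hdr[0] >> 4)   # 6  (version field)
```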
Transition from IPv4 to IPv6 address


 A host with an IPv4 address cannot directly send a request to a host with
an IPv6 address, because the two protocols are not compatible with each
other.
 As a solution to this problem, we use several transition technologies:
Dual-Stack Routers, Tunneling, and NAT Protocol Translation. These are
explained below.
1. Dual-Stack Routers:
A dual-stack router has both IPv4 and IPv6 addresses configured on its
interfaces, and these are used in order to transition from IPv4 to IPv6.

In the above diagram, a server with both IPv4 and IPv6 addresses configured
can communicate with all IPv4 and IPv6 hosts via the dual-stack router (DSR).
The dual-stack router gives all the hosts a path to communicate with the
server without changing their IP addresses.

2. Tunneling:
Tunneling is used as a medium that lets networks of one IP version
communicate across a transit network of a different IP version.
In the above diagram, both IP versions, IPv4 and IPv6, are present. The IPv4
networks can communicate across the transit or intermediate IPv6 network with
the help of a tunnel. Likewise, an IPv6 network can communicate with IPv4
networks with the help of a tunnel.

3. NAT Protocol Translation:


With the help of the NAT Protocol Translation technique, IPv4 and IPv6
networks that do not understand each other's addresses can also communicate
with each other.

In the above diagram, an IPv4 host communicates with an IPv6 host via a
NAT-PT device. The IPv6 host sees the request as if it came from the same IP
version (IPv6) and responds to it.
Differences between IPv4 and IPv6

Address length: IPv4 is a 32-bit address. IPv6 is a 128-bit address.

Fields: IPv4 is a numeric address that consists of 4 fields separated by
dots (.). IPv6 is an alphanumeric address that consists of 8 fields separated
by colons (:).

Classes: IPv4 has 5 classes of IP address (Class A, B, C, D, and E). IPv6
does not contain classes of IP addresses.

Number of IP addresses: IPv4 has a limited number of IP addresses. IPv6 has a
very large number of IP addresses.

VLSM: IPv4 supports VLSM (Variable Length Subnet Mask), i.e., IPv4 networks
can be divided into subnets of different sizes. IPv6 does not support VLSM.

Address configuration: IPv4 supports manual and DHCP configuration. IPv6
supports manual, DHCP, auto-configuration, and renumbering.

Address space: IPv4 provides about 4 billion unique addresses. IPv6 provides
about 340 undecillion unique addresses.

End-to-end connection integrity: In IPv4, end-to-end connection integrity is
unachievable. In IPv6, end-to-end connection integrity is achievable.

Security features: In IPv4, security depends on the application; the protocol
was not designed with security in mind. In IPv6, IPSec was developed for
security purposes.

Address representation: In IPv4, the IP address is represented in decimal. In
IPv6, the IP address is represented in hexadecimal.

Fragmentation: In IPv4, fragmentation is done by the senders and the
forwarding routers. In IPv6, fragmentation is done by the sender only.

Packet flow identification: IPv4 does not provide any mechanism for packet
flow identification. IPv6 uses the flow label field in the header for packet
flow identification.

Checksum field: The checksum field is present in IPv4 but not in IPv6.

Transmission scheme: IPv4 uses broadcasting. IPv6 uses multicasting, which
provides more efficient network operations.

Encryption and authentication: IPv4 does not provide encryption and
authentication. IPv6 provides encryption and authentication.

Number of octets: IPv4 consists of 4 octets. IPv6 consists of 8 fields, each
containing 2 octets, for a total of 16 octets.
ICMP Protocol
The ICMP stands for Internet Control Message Protocol. It is a network layer protocol.
It is used for error handling in the network layer, and it is primarily used on network
devices such as routers. As different types of errors can exist in the network layer, so
ICMP can be used to report these errors and to debug those errors.
The IP protocol does not have any error-reporting or error-correcting
mechanism of its own, so it uses ICMP messages to convey this information.
Position of ICMP in the network layer

Messages
The ICMP messages are usually divided into two categories:

o Error-reporting messages
An error-reporting message is sent when a router encounters a problem while
processing an IP packet; the router reports the error back to the source.
o Query messages
The query messages are those messages that help the host to get the specific
information of another host. For example, suppose there are a client and a server, and
the client wants to know whether the server is live or not, then it sends the ICMP
message to the server.
ICMP Message Format
The message format contains a category that tells us which type of message it
is. If the message is of error type, it contains a type and a code: the type
defines the kind of message, while the code defines its subtype.
The ICMP message contains the following fields:

o Type: It is an 8-bit field that defines the ICMP message type. In ICMPv6,
the values from 0 to 127 are error messages, and the values from 128 to 255
are informational messages.
o Code: It is an 8-bit field that defines the subtype of the ICMP message
o Checksum: It is a 16-bit field to detect whether the error exists in the message or
not.
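The checksum used here is the standard 16-bit Internet checksum: the one's-complement of the one's-complement sum of the message taken as 16-bit words. A minimal sketch in Python, applied to a hand-built echo-request message (the identifier and sequence values are arbitrary examples):

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit Internet checksum used by ICMP (and the IPv4 header)."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Echo request: type 8, code 0, checksum placeholder 0, identifier 1, seq 1
msg = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
csum = internet_checksum(msg)
print(hex(csum))  # 0xf7fd
```

A useful property for verification: once the computed checksum is written back into the message, recomputing the checksum over the whole message yields 0, which is exactly the receiver's validity test.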
Types of Error Reporting messages
The error reporting messages are broadly classified into the following categories:
o Destination unreachable
The destination unreachable error occurs when the packet does not reach the
destination. Suppose the sender sends the message, but the message does not reach the
destination, then the intermediate router reports to the sender that the destination is
unreachable.

The above diagram shows the message format of the destination unreachable message.
In the message format:
Type: It defines the type of message. The number 3 specifies that the destination is
unreachable.
Code (0 to 15): A value that identifies the specific reason for the failure
and whether the message comes from an intermediate router or from the
destination itself.
Sometimes the destination does not want to process the request, so it sends the
destination unreachable message to the source. A router does not detect all the problems
that prevent the delivery of a packet.
o Source quench
There is no flow control or congestion control mechanism in the network layer
or the IP protocol. The sender is concerned only with sending packets; it
does not consider whether the receiver is ready to receive them or whether
congestion has occurred in the network. The source quench message compensates
for this by asking the sender to slow down its rate of transmission.
o Time exceeded
Sometimes many routers exist between the sender and the receiver, and a
packet may get caught in a routing loop. The time-exceeded message is based
on the time-to-live (TTL) value: each router the packet traverses decreases
the TTL by one, and whenever a router decrements a datagram's TTL to zero, it
discards the datagram and sends a time-exceeded message to the original
source.

Parameter problems
The router and the destination host can send a parameter problem message. This
message conveys that some parameters are not properly set.

The above diagram shows the message format of the parameter problem. The type of
message is 12, and the code can be 0 or 1.
Redirection

As packets are sent, hosts' routing tables are gradually augmented and
updated; the tool used to achieve this is the redirection message. For
example, A wants to send a packet to B, and two routers exist between A and
B. First, A sends the data to router 1. Router 1 forwards the IP packet to
router 2 and sends a redirection message to A so that A can update its
routing table.

Address Resolution Protocol (ARP) and its types


Address Resolution Protocol (ARP) is a communication protocol used to find the MAC
(Media Access Control) address of a device from its IP address. This protocol is used
when a device wants to communicate with another device on a Local Area Network or
Ethernet.
Types of ARP
There are four types of Address Resolution Protocol, which is given below:
o Proxy ARP
o Gratuitous ARP
o Reverse ARP (RARP)
o Inverse ARP
Proxy ARP - Proxy ARP is a method through which a Layer 3 device may respond
to ARP requests for a target that is in a different network from the sender.
The router configured for proxy ARP answers the request with its own MAC
address mapped to the target IP address, so the sender believes it has
reached its destination. At the back end, the proxy router forwards the
packets to the appropriate destination, because the packets contain the
necessary information.
Gratuitous ARP - A gratuitous ARP is a broadcast ARP request that a host
sends for its own IP address, which helps to detect duplicate IP addresses.
If a switch or router sends an ARP request for its own IP address and no ARP
response is received, no other node is using the IP address allocated to it.
If, however, it receives an ARP response, another node is already using the
IP address allocated to that switch or router.
Reverse ARP (RARP) - It is a networking protocol used by a client machine in
a local area network (LAN) to request its IPv4 address from a RARP server's
table. The network administrator creates a table in the gateway router that
maps MAC addresses to their corresponding IP addresses.
When a new system is set up, or a machine has no memory in which to store its
IP address, it has to discover its IP address. The device sends a RARP
broadcast packet, including its own MAC address in the address fields of both
the sender and the receiver hardware. A host installed inside the local
network, called the RARP server, is prepared to respond to such broadcast
packets. The RARP server then tries to locate a matching entry in its
MAC-to-IP address mapping table. If an entry matches, the RARP server sends
the response packet, along with the IP address, to the requesting computer.
Inverse ARP (InARP) - Inverse ARP is the inverse of ARP: it is used to find
the IP addresses of nodes from their data link layer addresses. It is mainly
used for Frame Relay and ATM networks, where Layer 2 virtual circuit
addresses are often acquired from Layer 2 signaling; when using these virtual
circuits, the corresponding Layer 3 addresses must still be discovered.
ARP converts Layer 3 addresses to Layer 2 addresses; the opposite translation
is performed by InARP. InARP has a packet format similar to ARP, but the
operation codes are different.
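The cache-and-resolve behaviour at the heart of ARP can be sketched in a few lines. Here the LAN and its ARP replies are simulated with a dictionary (the addresses are made up for the example), so this is purely illustrative:

```python
# Assumed LAN: which MAC answers an ARP "who-has" for each IP.
network_hosts = {
    "192.168.1.1": "aa:bb:cc:00:00:01",
    "192.168.1.2": "aa:bb:cc:00:00:02",
}

arp_cache = {}  # learned IP -> MAC mappings

def resolve(ip: str):
    """Return the MAC for an IP, consulting the cache before 'broadcasting'."""
    if ip in arp_cache:            # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = network_hosts.get(ip)    # stand-in for broadcast request + reply
    if mac is not None:
        arp_cache[ip] = mac        # learn the mapping for next time
    return mac

print(resolve("192.168.1.2"))      # aa:bb:cc:00:00:02 (learned via lookup)
print("192.168.1.2" in arp_cache)  # True: second resolve would be a cache hit
```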

Dynamic Host Configuration Protocol (DHCP)


DHCP stands for Dynamic Host Configuration Protocol. It is a critical service
on which the users of an enterprise network depend. DHCP helps enterprises
smoothly manage the allocation of IP addresses to end-user client devices
such as desktops, laptops, cellphones, etc. It is an application layer
protocol that is used to provide:
Subnet Mask (Option 1 - e.g., 255.255.255.0)
Router Address (Option 3 - e.g., 192.168.1.1)
DNS Address (Option 6 - e.g., 8.8.8.8)
Vendor Class Identifier (Option 43 - e.g., 'unifi' = 192.168.1.9, where
unifi = controller)
Components of DHCP
 DHCP Server: DHCP Server is basically a server that holds IP Addresses and
other information related to configuration.
 DHCP Client: It is basically a device that receives configuration information
from the server. It can be a mobile, laptop, computer, or any other electronic device
that requires a connection.
 DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
 IP Address Pool: It is the pool or container of IP Addresses possessed by the
DHCP Server. It has a range of addresses that can be allocated to devices.
 Subnets: Subnets are smaller portions of the IP network partitioned to keep
networks under control.
 Lease: The length of time for which the configuration information received
from the server is valid; when the lease expires, the client must renew it.
 DNS Servers: DHCP servers can also provide DNS (Domain Name System)
server information to DHCP clients, allowing them to resolve domain names to IP
addresses.
 Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the destination is
outside the local network.
 Options: DHCP servers can provide additional configuration options to clients,
such as the subnet mask, domain name, and time server information.
 Renewal: DHCP clients can request to renew their lease before it expires to
ensure that they continue to have a valid IP address and configuration information.
Working of DHCP
DHCP works at the application layer of the TCP/IP protocol suite. The main
task of DHCP is to dynamically assign IP addresses to clients and to allocate
TCP/IP configuration information to them.
The DHCP port number for the server is 67 and for the client is 68. It is a
client-server protocol that uses UDP services. An IP address is assigned from
a pool of addresses. In DHCP, the client and the server exchange mainly 4
DHCP messages in order to make a connection, also called the DORA process,
but there are 8 DHCP messages in the process.
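The DORA exchange (Discover, Offer, Request, ACK) can be sketched as a toy client-server dialogue. The message dictionaries, pool, and lease time below are illustrative stand-ins, not the real DHCP wire format:

```python
pool = ["192.168.1.10", "192.168.1.11"]  # assumed address pool
leases = {}                              # MAC -> leased IP

def server_handle(msg: dict) -> dict:
    """Toy DHCP server: answer DISCOVER with OFFER and REQUEST with ACK."""
    if msg["type"] == "DISCOVER":
        # Offer the first free address without committing it yet.
        return {"type": "OFFER", "ip": pool[0], "lease": 3600}
    if msg["type"] == "REQUEST":
        # Commit the offered address and return full configuration.
        ip = pool.pop(0)
        leases[msg["mac"]] = ip
        return {"type": "ACK", "ip": ip, "lease": 3600,
                "subnet_mask": "255.255.255.0", "router": "192.168.1.1"}
    raise ValueError("unhandled message type")

mac = "aa:bb:cc:dd:ee:ff"
offer = server_handle({"type": "DISCOVER", "mac": mac})                # D -> O
ack = server_handle({"type": "REQUEST", "mac": mac, "ip": offer["ip"]})  # R -> A
print(ack["type"], ack["ip"])  # ACK 192.168.1.10
```

In the real protocol all four messages are UDP datagrams (server port 67, client port 68), and the client would later repeat the REQUEST/ACK step to renew its lease.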
