
Chapter No. 5

Network Layer

 Network Layer:-
 The network layer is responsible for the source-to-destination delivery of a
packet, possibly across multiple networks (links).
 The network layer ensures that each packet gets from its point of origin to its
final destination. The need for a network layer arises when the two end systems
are attached to different networks.
 The network layer adds a header that includes the logical addresses of the sender
and receiver to the packet received from the upper layer.
 When two or more networks are connected to each other to create an internetwork,
routers or switches route packets to their final destination.
 One of the functions of the network layer is to provide a routing mechanism.

 Network layer Design issues:-


The design issues of the network layer are as follows:-
1. Services provided to the transport layer.
2. Routing.
3. Congestion control.
4. Internal organization of the network layer.

1. Services provided to the Transport layer:-


The network layer provides services to the transport layer at the network
layer/transport layer interface.
The network layer provides connection-oriented or connectionless services to
the transport layer. The network layer services have been designed with the
following goals:-
a. The services should be independent of the subnet technology.
b. The transport layer should be shielded from the number, type and topology
of the subnets present.
c. The network addresses should use a uniform numbering plan.



A. Need for connectionless service:-
i. No connection is established between the end systems before
transmission.
ii. The subnet is considered to be unreliable, i.e., packets can get lost or
damaged.
iii. The hosts should perform error control & flow control.
iv. Each packet contains the full source & destination addresses.
v. Each packet is sent independently of all other packets.

B. Need for connection oriented service:-


i. The job of the subnet is to provide a reliable connection-oriented
service.
ii. A virtual circuit is established between the source & destination
machines.
iii. Packets should be delivered in sequence & without errors.

2. Routing:-
The real function of the network layer is to route packets from the source
to the destination machine. The network layer has to perform routing to deliver
packets from the source to the destination machine in an internetwork.
The network layer uses various algorithms for route selection. Routers use
routing algorithms to build and maintain routing databases or tables and to
determine which route to use when forwarding packets.

3. Internal organization (design) of the network layer:-


There are basically two different philosophies for organizing the subnet:
one uses a connection-oriented mechanism and the other works connectionless.
Two mechanisms are used in the network layer:-
a. Datagram packet switching
b. Virtual-circuit packet switching

a. Datagram packet switching:-


i. It is a connectionless service.
ii. No connection is established between the end systems before transmission.
iii. Each packet is sent independently of all other packets.
iv. Each packet contains the full source & destination addresses.
v. Packets are called datagrams and the subnet is called a datagram subnet.



b. Virtual circuit packet switching:-
i. It is a connection-oriented service.
ii. A virtual circuit is established between the source and destination machines.
The route is also decided at this time.
iii. All packets from the source to the destination follow the predefined route
(virtual circuit).
iv. Since each packet follows the same route, the packets need not carry the
complete address. Only a small VCI (virtual circuit identifier) is needed.
v. A subnet with this design is called a virtual-circuit subnet.

4. Congestion Control:-
 An important design issue of a network is the prevention & control of
congestion.
 When too many packets are present in a part of the subnet, performance
degrades. This situation is called congestion.
 Congestion is a situation in which the number of packets in a network is
greater than the carrying capacity of the network. This means that the load on
the network exceeds the network capacity. When congestion occurs in a subnet or
a part of it, packets will be discarded due to the non-availability of buffers. The
sender will retransmit, thereby adding to the congestion.
 Congestion control refers to techniques and mechanisms that can either
prevent congestion before it happens or remove it after it has happened.

Fig.:- Effect of Congestion



 Causes of congestion:-
1. Insufficient memory to hold packets within an intermediate processor will
cause packets to be discarded. The sender will retransmit, thereby increasing the
number of packets.
2. Slow processors can cause congestion. If the intermediate processors process
packets very slowly or do tasks like route calculation & table updating slowly,
queues will build up.
3. Low-bandwidth lines allow fewer packets to move per unit time, thereby
increasing the delay for the others.
4. Failed routers or processors put an excessive load on the other
routers in the network.
5. In some cases, there is excess traffic to a specific host of the network. In
such cases, there will be congestion at that part of the subnet.

 Congestion control Algorithms:-

A. General principles of congestion control:-


Congestion control refers to the techniques and mechanisms that can
either prevent congestion before it happens or remove it after it has happened.
Congestion control mechanisms can be divided into two broad
categories:-
1. Open loop congestion control (Prevention)
2. Closed loop congestion control (removal)

1. Open loop congestion control:-


These strategies are based on a good design to make sure that
congestion does not occur in the first place.
The features of an open-loop system are:-
a. Congestion control is handled by either the source or the destination.
b. Have a good transmission policy with well-chosen timers, so that efficiency is
optimum.
c. Choose a good sliding window protocol, so that retransmissions
are fewer & many packets can be sent. Example:- selective repeat
is a good protocol.
d. Use piggybacked acknowledgements, so that excess traffic is minimized.
e. Have a good discarding policy to decide when to discard packets & which
packet to discard.



2. Closed loop congestion control:-
These mechanisms try to reduce or remove congestion once it has
occurred. They are based on the concept of a feedback loop which essentially
monitors the network & takes corrective action.
The mechanism has three parts when applied to congestion control (a sketch
follows the list):-
a. Monitor the subnet to detect when and where congestion occurs. A variety
of metrics can be used to monitor the subnet for congestion: average
delays, percentage of packets discarded due to lack of buffers, number of
retransmissions, average queue lengths, etc.
b. Transfer information about the congestion from the points
where it is detected to points where action can be taken. A congested
router can inform the previous router to reduce the rate of outgoing
packets. This information can be passed all the way to the source.
c. Adjust the system operation to correct the problem. This can be done by
decreasing the load, denying service to some users, temporarily stopping
senders from transmitting, etc.
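
A minimal sketch of such a feedback loop, assuming a single router that watches
its average queue length and notifies the upstream sender when a threshold is
crossed. The metric, the threshold and the notification callback are illustrative
assumptions, not a specific protocol:

from collections import deque

QUEUE_THRESHOLD = 8          # average queue length above which congestion is signalled

class MonitoredRouter:
    def __init__(self):
        self.queue = deque()
        self.samples = []    # recent queue-length samples (the monitored metric)

    def enqueue(self, packet, notify_upstream):
        self.queue.append(packet)
        self.samples = (self.samples + [len(self.queue)])[-10:]   # keep a short history
        avg_len = sum(self.samples) / len(self.samples)           # part (a): monitor
        if avg_len > QUEUE_THRESHOLD:
            notify_upstream("congestion detected")                # part (b): transfer the information

def upstream_action(message):
    print(message, "- reducing the sending rate")                 # part (c): adjust the operation

router = MonitoredRouter()
for i in range(20):
    router.enqueue(f"pkt{i}", upstream_action)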

B. Congestion Prevention Policies:-


In an open-loop system, the focus is on building policies to avoid
congestion entirely rather than taking action after congestion has occurred.
Every layer in the network model has certain policies which can affect
congestion:-
1. Data Link layer policies:-
a. Retransmission policy:-
The retransmission policy is concerned with how fast a sender times
out and what it transmits upon timeout. A jumpy sender that times out
quickly and retransmits all outstanding packets using ‘go back n’ will put a
heavier load on the system than a sender that uses selective repeat.
b. Out-of-Order caching policy/Buffering policy:-
This determines whether or not out-of-order packets should be
buffered. If receivers routinely discard all out-of-order packets, these
packets will have to be transmitted again later, which creates extra load. With
respect to congestion control, ‘selective repeat’ is clearly better than ‘go
back n’.
c. Acknowledgement policy:-
The acknowledgement policy also affects congestion. If separate
acknowledgements are sent, traffic increases. To avoid this, use
piggybacking to send acknowledgements.



d. Flow control policy:-
A good flow control policy ensures proper coordination between the
sender and receiver and prevents congestion.

2. Network layer policies:-


a. Datagram subnet versus virtual circuit subnet:-
Many congestion control algorithms work only on virtual-circuit
subnets. Hence, the choice of subnet can affect congestion control.

b. Packet queuing policy:-


The packet queuing and service policy relates to whether routers have
one queue per input line, one queue per output line, or both. The order in
which packets are processed is also an important factor, e.g. round robin or
priority based.

c. Packet discard policy:-


Packet discard policy tells when a packet is to be discarded & which
one to be discarded. This decision is crucial for congestion control.
d. Packet life time management:-
Lost packets or duplicates may move around the network for a long
time, thereby increasing traffic. The packet lifetime decides how long a packet
can live before being discarded. If the lifetime is too high, traffic will increase.
If it is too short, there will be many retransmissions. (A sketch follows.)
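
A minimal sketch of lifetime management using a hop-count field; the packet
structure and the ttl field name are illustrative, not tied to any particular
protocol implementation:

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    ttl: int            # remaining hops the packet may live
    payload: bytes

def forward(packet: Packet) -> bool:
    """Decrement the lifetime at each hop; discard the packet when it reaches zero."""
    packet.ttl -= 1
    if packet.ttl <= 0:
        return False    # discard: the packet has lived too long
    return True         # pass the packet on to the next hop

p = Packet(src="A", dst="F", ttl=3, payload=b"data")
while forward(p):
    print(f"forwarded, {p.ttl} hops remaining")
print("packet discarded")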

3. Transport layer policies:-


The transport layer takes care of process-to-process delivery, whereas
the data link layer deals with node-to-node delivery. In the transport layer, the
same issues occur as in the data link layer. The additional policy is the
timeout determination policy. The transport layer policies are:
a. Retransmission policy
b. Out-of-order caching policy
c. Acknowledgement policy
d. Flow control policy
e. Timeout determination



C. Congestion Control in a datagram subnet:-
Each router can easily monitor the utilization of its output lines and
other resources. Each newly arrived packet is checked to see if its output line is
in the warning state. If it is in the warning state, some action is taken. The action
can be one of several alternatives, as follows (a sketch follows the list):-
1. The warning bit:-
a. In older network architectures, the congested router signals (informs) the
source station about the warning state by setting a special bit in the packet
header.
b. When the packet arrived at its destination, the transport entity copied the bit
into the next acknowledgement sent back to the source.
c. After receiving the acknowledgement, the source decreases its traffic.
d. As long as the router was in the warning state, it continued to set the
warning bit, which means that the source continued to get
acknowledgements with the warning bit set.
e. The source monitors the fraction of acknowledgements with the bit set and
adjusts its transmission rate accordingly.
f. As long as warning bits continued to arrive, the source continued to
decrease its transmission rate.
g. Traffic increased only when no router was in trouble.
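
A minimal sketch of how a source might adjust its rate from the fraction of
marked acknowledgements. The additive-increase / multiplicative-decrease rule
and all constants are illustrative assumptions; the text only says the rate is
adjusted accordingly:

def adjust_rate(rate, acks_with_warning, total_acks,
                increase=1.0, decrease_factor=0.5, threshold=0.5):
    """Return the new sending rate (packets/s) after one monitoring interval."""
    if total_acks == 0:
        return rate
    marked_fraction = acks_with_warning / total_acks
    if marked_fraction > threshold:       # many acknowledgements carry the warning bit
        return rate * decrease_factor     # cut the rate sharply
    return rate + increase                # no trouble reported: probe for more bandwidth

rate = 100.0
rate = adjust_rate(rate, acks_with_warning=8, total_acks=10)   # -> 50.0
rate = adjust_rate(rate, acks_with_warning=0, total_acks=10)   # -> 51.0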

2. Choke packets:-
a. A choke packet is a packet sent by a node to the source to inform it of
congestion.
b. In the choke packet method, the warning is sent directly from the router that
has encountered congestion to the source station.
c. The intermediate nodes through which the packet has traveled are not
warned.
d. In the choke packet method, the router monitors the utilization of each output
line. Whenever a set limit is crossed (congestion has occurred), the output line
enters a ‘warning state’.
e. The congested router sends a choke packet back to the source.
f. When the source receives the choke packet, it reduces the traffic to the
specified destination.
g. Since other packets to the same destination may already be under
way & will generate yet more choke packets, the host should
ignore choke packets referring to that destination for a fixed time interval.
h. If no choke packet arrives during the listening period, the host may increase
the flow again.
i. Hosts can reduce traffic by adjusting their policy parameters, for example,
their window size. (A sketch follows.)
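
A minimal sketch of the choke-packet idea, assuming the router keeps a smoothed
(exponentially weighted) utilization estimate per output line; the smoothing
constant, threshold and names are illustrative:

ALPHA = 0.3          # weight given to the previous utilization estimate (smoothing)
THRESHOLD = 0.8      # utilization above which the output line enters the warning state

class Router:
    def __init__(self):
        self.utilization = {}            # output line -> smoothed utilization estimate

    def sample_line(self, line, instantaneous_u, source):
        """Update the estimate for one output line and warn the source if it is congested."""
        u_old = self.utilization.get(line, 0.0)
        u_new = ALPHA * u_old + (1 - ALPHA) * instantaneous_u
        self.utilization[line] = u_new
        if u_new > THRESHOLD:
            self.send_choke_packet(source, line)

    def send_choke_packet(self, source, line):
        print(f"choke packet -> {source}: output line {line} is congested")

r = Router()
for sample in (0.5, 0.9, 0.95, 0.99):
    r.sample_line("out-1", sample, source="host-A")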
3. Hop-by-hop choke packet:-
a. At high speeds or over long distances, sending a choke packet to the
source host does not work well because the reaction is so slow.
b. An alternative approach is to have the choke packet take effect at every
hop it passes through; this is called a hop-by-hop choke packet.
c. In the following figure, as soon as the choke packet reaches F, F is required
to reduce the flow to D. This gives D immediate relief. In the next step the
choke packet reaches E, which tells E to reduce the flow to F. This action puts
a greater demand on E’s buffers but gives F immediate relief. Finally, the
choke packet reaches A and the flow genuinely slows down.

Fig. (a):- A choke packet that affects only the source.
Fig. (b):- A choke packet that affects each hop it passes through.



D. Congestion control in a virtual circuit subnet:-
When congestion occurs within a virtual circuit subnet, we need
some mechanism to reduce or remove the congestion. The following techniques
are used:-
1. Admission control:-
This technique is widely used to keep congestion that has already
started from getting worse.
a. Once congestion has been signaled, no more virtual circuits are set up
until the problem has gone away.
b. An alternative approach is to allow new virtual circuits but to carefully
route them around problem areas.
c. To avoid congestion, when a new circuit is set up, the route omits the
congested routers and all of their lines.

Fig. (a):- A congested subnet.


Fig. (b):- A redrawn subnet that eliminates the congestion. A virtual circuit
from A to B is also shown.

2. The agreement between host and subnet:-


a. In this technique, an agreement is negotiated between the
host and the subnet when a virtual circuit is set up.
b. This agreement normally specifies the volume and shape of the traffic,
the quality of service required and other parameters.



c. To keep its part of the agreement, the subnet will typically reserve
resources along the path when the circuit is set up.
d. These resources can include table and buffer space in the routers and
bandwidth on the lines.
e. In this way, congestion is unlikely to occur on the new virtual circuit
because all the necessary resources are guaranteed to be available
(reserved).



 Routing:-
a. The network layer has to perform routing to deliver packets from the source
machine to the destination machine in an internetwork.
b. Routing means selecting the best path for transmitting packets.
c. The network layer design has to consider various algorithms for route
calculation as well as the data structures used by these algorithms. A routing
protocol specifies the network layer functionality. It allows routers to do the
following:-
1. Learn routes.
2. Maintain route information.
3. Alert other routers of failed or congested routes.
4. Advertise the path costs for each route.
Routing protocols use routing algorithms to build and maintain
routing databases or tables and to determine which route to use when
forwarding packets.

 Routing Algorithm:-
1. The routing algorithm is a part of the network layer software. It is responsible
for deciding the output line over which a packet is to be sent.
2. Such a decision depends on whether the subnet uses virtual circuits or
datagram switching.

 Optimality principle:-
The optimality principle states that if router J is on the optimal path from
router I to router K, then the optimal path from J to K also falls along the
same route.
To see this, call the part of the route from I to J r1 and the rest of
the route r2. If a route better than r2 existed from J to K, it could be
concatenated with r1 to improve the route from I to K, contradicting our
statement that r1r2 is optimal.
As a consequence of this principle, we can see that the set of optimal
routes from all sources to a given destination forms a tree rooted at the
destination. Such a tree is called a sink tree.



Fig. (a):- A subnet.
Fig. (b):- A sink tree for router B.

 Desirable characteristics of routing algorithms / Properties of routing algorithms:-
1. Correctness:-
To forward packets from node to node, the subnet must determine
the correct path from the source to the destination host.

2. Simplicity:-
The routing algorithm should not be very complex and should be simple
in terms of the algorithm and data structures used.

3. Robustness:-
The routing algorithm must be able to cope with and survive hardware
failures, changes in topology and traffic so that the network can run
continuously for many years.

4. Stability:-
The routing algorithm should give consistently correct results. Stable
algorithms should converge to a correct answer and stay at that equilibrium
point. This means that the algorithm must react quickly to good news (some router
starting to work, new paths being set up) as well as bad news (breaks in
cables, hosts going down, etc.).

5. Fairness:-
The routing algorithm must ensure that individual hosts are treated in a
fair manner, i.e. they are given a fair chance to transmit data to any host they
want to.



6. Optimality:-
The algorithm must try to make optimum use of the paths so as to
maximize the total network throughput. This means that the router should try
to utilize the maximum channel bandwidth.

 Types of routing Algorithms:-


Routing algorithms can be classified into two types depending upon the
way routing decisions are made:-
1. Non-Adaptive routing algorithms.
2. Adaptive Routing algorithms.

1. Non-Adaptive routing algorithms:-


a. These are also called static routing algorithms.
b. These algorithms do not calculate routes on the basis of current network
traffic or topology.
c. The routing decisions are made in advance and are not changed to reflect
the changes in network status.
d. Examples:- Shortest path algorithm, Flooding algorithm.

2. Adaptive Routing algorithms:-


a. These are also called dynamic routing algorithms.
b. Modern computer networks use dynamic routing algorithms.
c. These algorithms take into account current network traffic and changes in
topology to make routing decisions.
d. Hence, they adapt according to network conditions.
e. Examples:- Distance Vector Routing, Link State Routing.

A) Distance Vector Routing Algorithm:-


1. In distance vector routing, the least-cost (“distance”) route between any
two nodes is the route with minimum distance.
2. In this protocol, as the name implies, each node maintains a vector (table)
of minimum distances to every node.
3. The table at each node also guides the packets to the desired node by
showing the next stop in the route (next-hop routing).
4. The following figure shows a system of five nodes with their corresponding
tables.



Fig.:- Distance Vector routing table.

a. Initialization:-
Initially, each node can know only the distance between itself and its
immediate neighbors, those directly connected to it.
Each node can send a message to the immediate neighbors and find the
distance between itself and these neighbors.
The distance for any entry that is not a neighbor is marked as infinite
(unreachable).

Fig.:- Initialization of tables in Distance Vector Routing.



b. Sharing:-
The whole idea of distance vector routing is the sharing of information
between neighbors.
Although node A does not know about node E, node C does. So, if node
C shares its routing table with A, node A can also know how to reach node E.
In distance vector routing, each node shares its routing table with its
immediate neighbors periodically and whenever there is a change. Sharing here
means sharing only the first two columns of the table (destination and cost).

c. Updating:-
When a node receives a two-column table from a neighbor, it needs to
update its routing table. Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending
node to each value in the second column. If node C claims that its distance
to a destination is x, and the distance between A and C is y, then the
distance between A and that destination, via C, is x + y.

Fig.:- Updating in distance vector routing.

2. The receiving node needs to add the name of the sending node to each row
as the third column if the receiving node uses information from any row.
The sending node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the
corresponding row of the modified version of the received table.



i. If the next-node entry is different, the receiving node chooses the row
with the smaller cost. If there is a tie, the old one is kept.
ii. If the next-node entry is the same, the receiving node chooses the new
row. For example, suppose node C has previously advertised a route to
node X with some finite distance. Suppose that now there is no path
between C and X; node C now advertises this route with a distance of
infinity. Node A must not ignore this value even though its old entry is
smaller. The old route does not exist anymore. The new route has a
distance of infinity.

After every node has exchanged a few updates with its directly
connected neighbors, all nodes know the least-cost path to all the other nodes.
A sketch of one such update step is given below.
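
A minimal sketch of a single distance-vector update, assuming tables that map
each destination to a (cost, next hop) pair; the node names and link costs are
illustrative, not taken from the figures above:

INF = float("inf")

def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Merge the two-column table received from `neighbor` into `my_table`.
    Tables map destination -> (cost, next hop)."""
    for dest, (cost_via_neighbor, _) in neighbor_table.items():
        new_cost = link_cost + cost_via_neighbor          # step 1: add the cost to the sender
        old_cost, old_next = my_table.get(dest, (INF, None))
        if old_next == neighbor:
            # step 3(ii): same next hop -> always take the fresh value,
            # even if it is worse (the old route may no longer exist)
            my_table[dest] = (new_cost, neighbor)
        elif new_cost < old_cost:
            # steps 2 and 3(i): smaller cost via the sender -> record it as the next hop
            my_table[dest] = (new_cost, neighbor)
    return my_table

# Node A learns about D and E through C (the A-C link cost is 2):
table_A = {"B": (5, "B"), "C": (2, "C")}
table_from_C = {"B": (4, None), "D": (3, None), "E": (4, None)}
print(dv_update(table_A, "C", table_from_C, link_cost=2))
# -> {'B': (5, 'B'), 'C': (2, 'C'), 'D': (5, 'C'), 'E': (6, 'C')}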

B) Link State Routing:-


1. The link state protocol is performed by every router (switching node) in the
network.
2. The basic concept of link state routing is that every node constructs a map
of the connectivity of the network, in the form of a graph, showing which
nodes are connected to which other nodes.
3. Each node then independently calculates the best logical path from itself
to every possible destination in the network.

Fig.:- Concept of Link State Routing.

4. The collection of best paths will then form the node's routing table.



5. In link state routing, each node in the domain has the entire topology of
the domain: the list of nodes and links, how they are connected, including
the type, cost (metric), and condition of the links (up or down).
6. Each node can then use Dijkstra's algorithm to build a routing table.

Each node uses the same topology to create a routing table, but the
routing table for each node is unique because the calculations are based on
different interpretations of the topology.

 Building Routing Tables:-


In link state routing, four sets of actions are required to ensure that
each node has the routing table showing the least-cost node to every other
node.
1. Creation of the states of the links by each node, called the link state packet
(LSP).
2. Distribution of LSP to every other router, called flooding, in an efficient
and reliable way.
3. Formation of a shortest path tree for each node.
4. Calculation of a routing table based on the shortest path tree.

1. Creation of Link State Packet (LSP):-


A link state packet contains information such as the identity of the sender
(node), a sequence number, an age, and a list of neighbors; this information
is needed to build the topology.
The sequence number facilitates flooding and distinguishes new
LSPs from old ones. The age prevents old LSPs from remaining in the domain
for a long time.
LSPs are generated on two occasions (a sketch of such a packet follows):-
a. When there is a change in the topology of the domain.
b. On a periodic basis.
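
A minimal sketch of an LSP as a data structure; the field names and values are
illustrative, not taken from any specific protocol:

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LSP:
    sender: str                  # identity of the node that created the LSP
    seq: int                     # sequence number: distinguishes new LSPs from old ones
    age: int                     # age (e.g. in seconds): old LSPs are eventually purged
    neighbors: Dict[str, int] = field(default_factory=dict)   # neighbor -> link cost

lsp_from_A = LSP(sender="A", seq=7, age=60, neighbors={"B": 5, "C": 2})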



2. Flooding of LSPs:-
After a node has prepared an LSP, it must be distributed to all other
nodes, not only to its neighbors. The process is called flooding and is based
on the following (a sketch follows the list):-
a. The creating node sends a copy of the LSP out of each interface.
b. A node that receives an LSP compares it with the copy it may already
have. If the newly arrived LSP is older than the one it has (found by
checking the sequence number), it discards the LSP. If it is newer, the
node does the following:
i. It discards the old LSP and keeps the new one.
ii. It sends a copy of it out of each interface except the one from
which the packet arrived. This guarantees that flooding stops
somewhere in the domain (where a node has only one interface).
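
A minimal sketch of LSP flooding, assuming a node keeps only the newest sequence
number per sender and that interface names equal neighbor names; the three-node
topology is an illustrative assumption:

class Node:
    def __init__(self, name):
        self.name = name
        self.newest_seq = {}    # LSP sender -> newest sequence number seen so far
        self.links = {}         # interface name (here: neighbor name) -> neighboring Node

    def receive_lsp(self, sender, seq, arrival_iface=None):
        if seq <= self.newest_seq.get(sender, -1):
            return                                  # old or duplicate LSP: discard it
        self.newest_seq[sender] = seq               # keep the new LSP
        for iface, neighbor in self.links.items():
            if iface != arrival_iface:              # never send it back where it came from
                neighbor.receive_lsp(sender, seq, arrival_iface=self.name)

a, b, c = Node("A"), Node("B"), Node("C")
a.links = {"B": b, "C": c}
b.links = {"A": a, "C": c}
c.links = {"A": a, "B": b}
a.receive_lsp("A", seq=1)           # node A floods its own LSP
print(b.newest_seq, c.newest_seq)   # both neighbors now know A's LSP with seq 1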

3. Formation of Shortest Path Tree: Dijkstra Algorithm


After receiving all LSPs, each node will have a copy of the whole
topology. However, the topology is not sufficient to find the shortest path to
every other node; a shortest path tree is needed.
The Dijkstra algorithm is used to create the shortest path tree. The
algorithm can be run locally at each node to construct the shortest paths to all
possible destinations. The result of this algorithm can be installed in the
routing table, and normal operation resumed. (A sketch follows the figure.)

Fig.:- Formation of shortest path tree (for A node).
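
A minimal Dijkstra sketch over an illustrative topology (the graph and link costs
below are assumptions, not the figure above). For every destination it returns the
least cost and the first hop that would go into the routing table:

import heapq

def shortest_path_tree(graph, root):
    """graph: node -> {neighbor: link cost}. Returns dest -> (least cost, first hop)."""
    dist = {root: 0}
    first_hop = {}
    pq = [(0, root, None)]                      # (cost so far, node, first hop used)
    visited = set()
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            first_hop[node] = hop
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # when leaving the root, the first hop is the neighbor itself
                heapq.heappush(pq, (new_cost, neighbor, hop if hop else neighbor))
    return {d: (dist[d], first_hop[d]) for d in dist if d != root}

topology = {                                    # illustrative link costs
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "C": 3, "D": 4},
    "C": {"A": 2, "B": 3, "E": 4},
    "D": {"B": 4, "E": 1},
    "E": {"C": 4, "D": 1},
}
print(shortest_path_tree(topology, "A"))
# -> {'B': (5, 'B'), 'C': (2, 'C'), 'E': (6, 'C'), 'D': (7, 'C')}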



4. Calculation of the routing table from the shortest path tree:-
Each node uses its shortest path tree to construct its routing table.
The routing table shows the cost of reaching each node from the root.
The following table shows the routing table for node A.

Fig.:- Routing table for node A

C) Broadcast routing:-
In some applications, hosts need to send messages to many or all other
hosts. Sending a packet to all destinations simultaneously is called
broadcasting. A broadcast message is destined to all network devices.
Broadcast routing can be done in two ways (algorithms):
 A router creates a data packet and then sends it to each host one by one.
In this case, the router creates multiple copies of a single data packet with
different destination addresses. All packets are sent as unicasts, but because
they are sent to all hosts, the effect is the same as broadcasting.
This method consumes a lot of bandwidth, and the router must know the
destination address of each node.
 Secondly, when a router receives a packet that is to be broadcast, it
simply floods the packet out of all interfaces. All routers are configured in
the same way.



This method is easy on the router's CPU but may cause the problem of
duplicate packets being received from peer routers.
Reverse path forwarding is a technique in which the router knows in
advance the predecessor (interface) from which it should receive a broadcast
from a given source. This technique is used to detect and discard duplicates.
(A sketch follows.)
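
A minimal sketch of a reverse path forwarding check: a broadcast packet is
accepted and re-flooded only if it arrives on the interface the router would
itself use to reach the packet's source; otherwise it is treated as a likely
duplicate. The unicast table and interface names are illustrative assumptions:

def rpf_accept(unicast_table, source, arrival_interface):
    """unicast_table: node -> interface this router would use to reach that node."""
    expected_interface = unicast_table.get(source)
    return expected_interface == arrival_interface     # accept only on the expected interface

unicast_table = {"S": "if0", "B": "if1", "C": "if2"}
print(rpf_accept(unicast_table, source="S", arrival_interface="if0"))   # True: flood it onward
print(rpf_accept(unicast_table, source="S", arrival_interface="if2"))   # False: likely duplicate, discard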

D) Multicast Routing:-
Multicast routing is a special case of broadcast routing with significant
differences and challenges. In broadcast routing, packets are sent to all nodes
even if they do not want them. But in multicast routing, the data is sent only to
the nodes that want to receive the packets.

The router must know that there are nodes that wish to receive the
multicast packets (or stream); only then should it forward them. Multicast
routing uses a spanning tree to avoid looping.



Multicast routing also uses the reverse path forwarding technique to detect
and discard duplicates and loops.

