CN Module 3 Network Layer Notes 2024

Computer Networks

MODULE 3
THE NETWORK LAYER
Introduction
The network layer is responsible for getting packets from the source all the way to the
destination. This often involves multiple hops through intermediate routers, making the network
layer the lowest layer that deals with end-to-end transmission.
To accomplish its tasks, the network layer must understand the network's topology, select
appropriate paths, and manage traffic to prevent congestion. It also addresses the challenges that
arise when the source and destination are in different networks. This chapter examines these
issues, with a primary focus on the Internet and its network layer protocol, IP.

I. NETWORK LAYER DESIGN ISSUES


1. Store-and-Forward Packet Switching
In the context of network layer protocols, it's essential to understand the components of the
network. In Figure 5-1, we see two key elements: ISP equipment (routers connected by
transmission lines) within the shaded oval and customers' equipment outside the oval. Host H1 is
directly connected to an ISP router, while H2 is on a LAN, connected to a customer-owned router.
Although routers like F are outside the ISP's ownership, they are considered part of the ISP network
for the purpose of this chapter, as they run similar algorithms. This distinction is relevant because
our focus here is on these algorithms.

When a host has a packet to send, it sends it to the nearest router, either on its own LAN or via a
point-to-point link to the ISP. The router stores the packet until it's fully received and processed
(including checksum verification), after which it forwards the packet to the next router along the
path. This process continues until the packet reaches its destination host, where it is delivered. This
mechanism is known as store-and-forward packet switching.
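The receive-verify-forward cycle described above can be sketched in a few lines. This is a minimal illustration, not a real router: the frame format (a 4-byte CRC32 followed by the payload) and the function names are assumptions made for the example.

```python
import zlib

# Hypothetical frame format: 4-byte CRC32 checksum followed by the payload.
def receive_and_forward(frame: bytes, send_next) -> bool:
    """Store the whole frame, verify its checksum, then hand it on."""
    stored = bytes(frame)                      # store: buffer the complete packet
    checksum = int.from_bytes(stored[:4], "big")
    payload = stored[4:]
    if zlib.crc32(payload) != checksum:        # verify before forwarding
        return False                           # corrupted packets are dropped
    send_next(payload)                         # forward toward the next router
    return True

# Usage: build a frame and pass it through one "router".
payload = b"hello, H2"
frame = zlib.crc32(payload).to_bytes(4, "big") + payload
received = []
assert receive_and_forward(frame, received.append)
assert received == [payload]
```

The essential point is that the packet is held in its entirety, and checked, before any bits leave on the outgoing line.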


2. Services Provided to the Transport Layer


The network layer provides services to the transport layer, which must be carefully designed with
specific goals in mind:
1. Services should be router technology independent.
2. The transport layer should remain unaware of the number, type, and topology of routers.
3. Network addresses available to the transport layer should follow a uniform numbering plan,
even across LANs and WANs.
The primary debate in network layer design centers on whether it should offer a connection-
oriented or connectionless service. One perspective, advocated by the Internet community,
suggests that the network's inherent unreliability necessitates hosts handling error control and flow
control independently. Thus, a connectionless service with minimal primitives like SEND
PACKET and RECEIVE PACKET is preferable.
In contrast, another camp, represented by telephone companies, argues for a reliable, connection-
oriented service. They draw from a century of experience with the telephone system, emphasizing
quality of service, especially for real-time traffic like voice and video.
This ongoing controversy has persisted for decades. While early networks were often connection-
oriented, connectionless network layers, like IP, have gained popularity, guided by the end-to-end
argument. Still, even within the Internet, connection-oriented features are evolving as quality-of-
service gains importance, as seen in technologies like MPLS and VLANs.
3. Implementation of Connectionless Service
In implementing a connectionless service at the network layer, two distinct approaches are
employed, depending on the type of service being offered.
a. Connectionless Service (Datagram Networks):
• In a connectionless service, packets, often referred to as datagrams, are injected into the
network independently and are routed without any prior setup.
• Each packet is treated as a standalone unit and is forwarded based on its own routing
information.
• No advance path establishment is required for transmitting individual packets.

In contrast, if a connection-oriented service is employed:


b. Connection-Oriented Service (Virtual-Circuit Networks):
• Before transmitting data packets, a path from the source router to the destination router,
known as a Virtual Circuit (VC), must be established.
• This setup process involves configuring a specific path that the data packets will follow.
• The network is referred to as a virtual-circuit network, drawing an analogy to the physical
circuits established in the traditional telephone system.

In a datagram network, let's consider the process of transmitting a long message from process P1
on host H1 to process P2 on host H2. Here's how it works:

1. Process P1 hands the message to the transport layer, which adds a transport header to the
message. This transport layer code runs on H1.


2. The resulting data is then passed to the network layer, typically another procedure within
the operating system.
3. Since the message is longer than the maximum packet size, it's divided into four packets:
1, 2, 3, and 4. Each packet is sent one after the other to router A using a point-to-point
protocol like PPP.
4. Within the ISP's network, each router has a routing table indicating where to send packets
for each possible destination. These tables contain pairs of destinations and the outgoing
lines to use.

5. Router A initially has a routing table as shown in the figure. Packets 1, 2, and 3 are briefly
stored and checked for checksum on arrival. Then they are forwarded according to A's
table, reaching H2 via routers E and F.
6. However, something different happens to packet 4. Router A sends it to router B instead of
following the same route as the first three packets. This change in routing might occur if A
learns of a traffic jam along the initial route and updates its routing table.
7. The algorithm responsible for managing routing tables and making routing decisions is
called the routing algorithm, a central topic in this chapter. Different types of routing
algorithms exist.
8. IP (Internet Protocol), the foundation of the Internet, is a prime example of a connectionless
network service. Each packet carries a destination IP address, allowing routers to forward
packets individually. IPv4 packets have 32-bit addresses, while IPv6 packets have 128-bit
addresses.
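Steps 4-6 can be sketched as a plain destination-to-outgoing-line mapping. The table entries below are hypothetical (the figure's actual entries are not reproduced); they only illustrate that each datagram is forwarded independently and that a table update redirects later packets.

```python
# Hypothetical routing table for router A: destination -> outgoing line.
routing_table_A = {"B": "B", "D": "B", "E": "C", "F": "C", "H2": "C"}

def forward(table: dict, destination: str) -> str:
    """Each datagram is forwarded using only its own destination address."""
    return table[destination]

assert forward(routing_table_A, "H2") == "C"   # packets 1-3 leave on one line

# A learns of congestion and updates its table; packet 4 takes another route.
routing_table_A["H2"] = "B"
assert forward(routing_table_A, "H2") == "B"
```

Because no per-connection state exists, nothing prevents consecutive packets of the same message from taking different routes.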


4. Implementation of Connection-Oriented Service

In a connection-oriented service, a virtual-circuit network is employed to streamline packet
routing. Here's how it works:

1. Instead of choosing a new route for every packet as in connectionless service, a route from
the source to the destination is selected and stored in tables within the routers when a
connection is established. This chosen route is then used for all data traffic over that
connection, similar to how the telephone system operates.
2. When the connection is terminated, the virtual circuit is also released. Each packet in a
connection-oriented service carries an identifier indicating which virtual circuit it belongs
to.

3. For example, if host H1 establishes connection 1 with host H2, this connection is recorded
as the first entry in the routing tables of each router along the route. Packets from H1 to H2
are then directed through this established route.
4. If another host, say H3, wants to establish a connection to H2, it chooses connection
identifier 1 and requests the network to establish a new virtual circuit. This leads to the
creation of a second entry in the routing tables.
5. To avoid conflicts, routers need the ability to replace connection identifiers in outgoing
packets. This process is sometimes called label switching. An example of a connection-
oriented network service is MPLS (Multi-Protocol Label Switching), used within ISP
networks. MPLS uses a 20-bit connection identifier or label to route traffic efficiently.
6. MPLS is often employed within ISP networks to establish long-term connections for large
traffic volumes. It helps ensure quality of service and assists with various traffic
management tasks, even though it's often transparent to customers.
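The label-switching idea in step 5 can be sketched as a per-router table keyed by (incoming line, incoming label). The lines and label values below are invented for illustration; real MPLS labels are 20-bit values carried in a shim header.

```python
# Hypothetical virtual-circuit table for one router: (incoming line, incoming
# label) -> (outgoing line, outgoing label). Rewriting the label on the way
# out ("label switching") lets two circuits reuse label 1 without conflict.
vc_table = {
    ("H1", 1): ("C", 1),   # H1's connection 1 toward H2
    ("H3", 1): ("C", 2),   # H3 also chose label 1; it leaves relabeled as 2
}

def switch(line_in: str, label_in: int) -> tuple:
    """Forward a packet along an established virtual circuit."""
    return vc_table[(line_in, label_in)]

assert switch("H1", 1) == ("C", 1)
assert switch("H3", 1) == ("C", 2)   # no label clash on the outgoing line
```

Note that forwarding needs only the short label, not a full destination address lookup.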


5. Comparison of Virtual-Circuit and Datagram Networks

Inside a network, there are trade-offs between virtual circuits and datagrams:

1. Setup Time vs. Address Parsing Time:


• Virtual circuits require a setup phase, which consumes time and resources but
simplifies packet routing. Datagram networks don't need setup but require more
complex address lookup.
2. Address Length:
• Datagram networks use longer destination addresses with global meaning, which
can result in significant overhead for short packets, wasting bandwidth.
3. Table Space in Router Memory:
• Datagram networks need entries for every possible destination, whereas virtual-
circuit networks only need entries for each virtual circuit. However, connection
setup packets in virtual-circuit networks also use destination addresses.
4. Quality of Service and Congestion Management:
• Virtual circuits offer advantages in guaranteeing quality of service and managing
congestion by reserving resources in advance during connection setup. Datagram
networks face more challenges in congestion avoidance.
5. Use Case Dependence:


• For transaction processing systems, setting up and clearing virtual circuits may be
too costly compared to their use. However, for long-running applications like
VPNs, permanent virtual circuits can be valuable.
6. Vulnerability:
• Virtual circuits are vulnerable to disruptions if a router crashes and loses memory,
requiring all circuits to be aborted. Datagram networks are more resilient in this
regard.
7. Network Load Balancing:
• Datagram networks allow routers to balance traffic by changing routes during a
sequence of packet transmissions, providing flexibility compared to fixed virtual
circuits.

II. ROUTING ALGORITHMS


The network layer's primary function is routing packets from the source machine to the destination
machine, often requiring multiple hops in most networks. Routing algorithms and data structures
are central to network layer design.

• Routing Algorithm: It decides which output line an incoming packet should be sent on. For
datagram networks, this decision is made for every data packet, while virtual-circuit networks
determine routes during setup and reuse them. This process can be divided into routing (route
selection) and forwarding (packet arrival handling).
• Desirable Properties: Routing algorithms should be correct, simple, robust (able to handle
failures and topology changes), stable (converging to fixed paths), fair, and efficient.
• Trade-offs: Efficiency and fairness can sometimes conflict in routing decisions, requiring a
balance. Networks often optimize goals like minimizing packet delay, maximizing total
throughput, or reducing the number of hops a packet travels.
• Routing Algorithm Classes: Routing algorithms are categorized as nonadaptive (pre-
computed routes) and adaptive (responding to changes in topology and traffic). Adaptive
algorithms can differ in information sources, frequency of route changes, and optimization
metrics.

1. The Optimality Principle


Before diving into specific routing algorithms, it's important to understand the optimality principle,
which holds true regardless of network topology or traffic patterns. This principle, introduced by
Bellman in 1957, states that if router J lies on the optimal path from router I to router K, then the
optimal path from J to K also follows the same route. This can be explained by considering routes
from I to J (r1) and from J to K (r2). If a better route than r2 existed from J to K, it could be
combined with r1 to create a superior route from I to K, contradicting the optimality of r1r2.


• Sink Tree: The sink tree is the tree formed by the optimal routes from all sources to a
specific destination, adhering to the optimality principle (fig. (b)). Sink trees play a central
role in routing algorithms, helping to route packets efficiently.
• Directed Acyclic Graphs (DAGs): Sink trees can extend to become DAGs if all possible
paths are considered, allowing for more flexibility in route selection. However, the
fundamental structure of a sink tree is retained in DAGs.
• Network Dynamics: In practice, network dynamics, such as link and router failures, can
affect the stability and accuracy of sink trees. Routers may have varying views of the
current topology, leading to dynamic adjustments in the sink tree.

2. Shortest Path Algorithm


The Shortest Path Algorithm is a fundamental routing technique that computes the optimal paths
within a network when provided with a complete network overview. These are the paths we aim
for a distributed routing algorithm to discover, even if not all routers possess comprehensive
network details.
Here's how it works: We construct a network graph, where each node represents a router, and each
edge signifies a communication link. When determining the route between a specific pair of
routers, the algorithm simply seeks the shortest path within this graph.
The concept of a "shortest path" may be interpreted in various ways. One approach measures path
length in terms of hops, making paths ABC and ABE in Fig. 5-7 equal in length. Another metric
could be geographical distance in kilometers, where ABC would be considerably longer than ABE
if the figure is drawn to scale. Beyond hops and physical distance, numerous other metrics are
feasible. For instance, edges could be labeled with the mean delay of a standard test packet, making
the shortest path the fastest path, regardless of the number of edges or kilometers.
In the broader context, edge labels could be computed based on various factors such as distance,
bandwidth, average traffic, communication cost, measured delay, and more. By adjusting the
weighting function, the algorithm can compute the "shortest" path according to different criteria
or a combination thereof.

Several algorithms are available for calculating the shortest path between two nodes in a graph.
The one we'll discuss is attributed to Dijkstra (1959) and is utilized to find the shortest paths from
a source to all destinations within the network. Each node is assigned a label (in parentheses)
indicating its distance from the source node along the best-known path. Distances must always be
non-negative, particularly when they are based on real factors like bandwidth and delay.
Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm progresses
and identifies paths, labels may change to reflect better routes. Labels can be either tentative or
permanent, with all labels beginning as tentative. Once it's determined that a label represents the
shortest possible path from the source to a particular node, it becomes permanent and remains
unchanged thereafter.

In this algorithm illustration, we'll demonstrate the process using a weighted, undirected graph
(Fig. 5-7a), where the weights represent parameters like distance. Our objective is to discover the
shortest path from A to D. Here's how it works:
1. We begin by marking node A as permanent (indicated by a filled-in circle).
2. Next, we examine each node adjacent to A, one by one (the working node). We relabel each
of these nodes with the distance to A. Additionally, we record the node from which the
probe was made to reconstruct the final path later. If multiple shortest paths exist from A
to D, we must remember all the probe nodes that could reach a node with the same distance.


3. Having examined all nodes adjacent to A, we then inspect all tentatively labeled nodes
across the entire graph. We make the one with the smallest label permanent (as shown in
Fig. 5-7b), and this node becomes the new working node.
4. We repeat the process, starting from node B and examining all nodes adjacent to it. If the
sum of the label on B and the distance from B to the considered node is less than the label
on that node, we have a shorter path, so the node is relabeled.
5. After inspecting all nodes adjacent to the working node and updating tentative labels if
necessary, we search the entire graph for the tentatively labeled node with the smallest
value. This node becomes permanent and serves as the working node for the next round.
Fig. 5-7 illustrates the first six steps of the algorithm.
To understand why this algorithm works, consider Fig. 5-7c. At this point, we've made E
permanent. Now, suppose there were a shorter path than ABE, let's say AXYZE (with X and Y as
intermediate nodes). There are two possibilities: Either node Z has already been made permanent,
or it hasn't. If it has, then E has already been probed in a previous round when Z became permanent,
so the AXYZE path has been considered, and it cannot be shorter.
Now, let's look at the case where Z is still tentatively labeled. If the label at Z is greater than or
equal to that at E, then the AXYZE path cannot be shorter than ABE. If the label is less than that
of E, then Z will become permanent before E, allowing E to be probed from Z.
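The tentative/permanent labeling procedure above can be sketched with a priority queue: a node's label is tentative while it sits in the queue and becomes permanent the first time it is popped. The graph and its weights are illustrative only, since Fig. 5-7 is not reproduced here.

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest distances from source to every node, non-negative weights."""
    dist = {source: 0}
    permanent = set()
    heap = [(0, source)]                        # tentative labels
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue                            # stale tentative label
        permanent.add(node)                     # label is now permanent
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd             # found a better tentative label
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical weighted, undirected graph (weights stand in for distance).
graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "E": 2, "C": 7},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "G": 1, "F": 2},
    "F": {"E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"G": 4, "F": 2, "D": 2},
}
print(dijkstra(graph, "A"))   # e.g. the distance from A to D is 10 via B, E, F, H
```

Relabeling happens in the `nd < dist.get(...)` test: a node may be pushed several times, but only its smallest label ever becomes permanent.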

[Worked example (figures): Dijkstra's algorithm traced step by step with node 1 as the source.]

3. Flooding
Routing algorithms often rely on local knowledge rather than a complete network view. One basic
local technique is "flooding," where every incoming packet is sent out on all outgoing lines except
the one it arrived on.
To control the potential chaos of flooding, a hop counter is added to each packet's header,
decrementing with each hop until the packet is discarded when the counter hits zero. Ideally, the
hop counter should be initialized to the estimated path length from source to destination.
To further manage flooding, routers keep track of flooded packets to avoid duplication. This can
be achieved by having source routers include sequence numbers in their packets and maintaining
lists of seen sequence numbers for each source.
While not practical for most packets, flooding has its uses. It ensures delivery to every node in the
network, making it useful for broadcasting information and maintaining robustness, even in
challenging conditions. Flooding's minimal setup requirements also make it a foundational element
for more efficient routing algorithms and a valuable metric for performance comparisons. Flooding
inherently chooses the shortest path, as it explores all possible paths simultaneously.
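The two damping mechanisms above, a hop counter and per-source sequence numbers, can be sketched together. The topology and parameter names are invented for the example.

```python
def flood(network: dict, source: str, seqno: int, max_hops: int) -> set:
    """Flood one packet; return the set of routers it reaches.
    Each router forwards on all lines except the one the packet arrived on,
    drops duplicates of (source, seqno), and drops packets whose hop count
    has been decremented to zero."""
    seen = {router: set() for router in network}   # per-router duplicate lists
    delivered = set()
    frontier = [(source, None, max_hops)]
    while frontier:
        nxt = []
        for router, came_from, hops in frontier:
            if (source, seqno) in seen[router]:
                continue                           # already flooded this packet
            seen[router].add((source, seqno))
            delivered.add(router)
            if hops == 0:
                continue                           # hop counter exhausted
            for line in network[router]:
                if line != came_from:              # every line but the arrival line
                    nxt.append((line, router, hops - 1))
        frontier = nxt
    return delivered

network = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
assert flood(network, "A", seqno=1, max_hops=3) == {"A", "B", "C", "D"}
```

With `max_hops` at least the network diameter, every router is reached; a smaller value truncates the flood, which is why the counter should be initialized to the estimated path length.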
4. Distance Vector Routing

Distance Vector Routing is a common dynamic routing algorithm used in computer networks. It
operates by having each router maintain a table, known as a vector, which contains information
about the best-known distance to each destination and the appropriate link to reach it. These tables
are continually updated through communication with neighboring routers, allowing each router to
determine the optimal path to every destination.
In distance vector routing, each router's routing table includes entries for every router in the
network. Each entry consists of the preferred outgoing line for that destination and an estimate of
the distance to reach it. The distance can be measured in various ways, such as the number of hops
or other metrics like propagation delay, which can be determined using specialized ECHO packets.
For instance, let's consider using delay as a metric, and assume that each router knows the delay
to its neighboring routers. Periodically, let's say every T milliseconds, each router shares a list of
its estimated delays to reach various destinations with its neighbors. In return, it receives similar
lists from its neighbors.


For instance, if a router receives a table from neighbor X, where X_i is X's estimated delay to
router i, and it already knows that its own delay to X is m milliseconds, then it can reach router
i via X in X_i + m milliseconds. By performing this calculation for each neighbor's estimates,
the router can determine the best estimate and corresponding link to update its routing
table. Notably, the old routing table is not used in this calculation.
This updating process is demonstrated in Figure 5-9. Part (a) depicts a network, and part (b)
illustrates delay vectors received from router J's neighbors. Each neighbor claims different delay
values to various destinations. Assuming J has its own delay estimates to its neighbors A, I, H, and
K as 8, 10, 12, and 6 milliseconds, respectively, it calculates its new route to router G. For example,
J knows it can reach A in 8 milliseconds, and A claims an 18-millisecond delay to G. Thus, J can
expect a 26-millisecond delay to G if it forwards packets through A. It performs similar
calculations for other neighbors and destinations, updating its routing table accordingly.

Bellman-Ford Routing
The update is typically done using the Bellman-Ford equation, where the new distance to a
destination is calculated as the sum of the cost to the neighbor and the neighbor's distance to the
destination.
The process of exchanging and updating continues until the routing tables converge, meaning that
no further changes are needed.
To formalize:

First fix the destination node d.

• D_j = current estimated cost from node j to the destination.

• C_i,j = link cost from node i to node j.

• C_i,i = 0 (the link cost from a node to itself).

• C_i,k = ∞ if node i is not directly connected to node k.


Algorithm:
Step 1: Initialization
• D_d = 0; D_i = ∞ for all i ≠ d.
Step 2: Updating.
Find the minimum distance to the destination via each neighbor:
• D_i = min over neighbors j of { C_i,j + D_j }, for all i ≠ d.

Repeat Step 2 until the estimates converge.
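The two steps above can be run as a synchronous iteration. The topology below is hypothetical (the figure's link costs are not reproduced); only directly connected pairs appear in the cost table, so missing entries play the role of C_i,k = ∞.

```python
INF = float("inf")

def bellman_ford_to(dest, cost: dict) -> dict:
    """Iterate D_i = min_j (C_ij + D_j) until no estimate changes (Step 2)."""
    D = {node: (0 if node == dest else INF) for node in cost}   # Step 1
    changed = True
    while changed:
        changed = False
        for i in cost:
            if i == dest:
                continue
            best = min(cost[i][j] + D[j] for j in cost[i])      # neighbors only
            if best < D[i]:
                D[i], changed = best, True
    return D

# Hypothetical link costs C_i,j; only directly connected pairs are listed.
cost = {
    1: {2: 2, 3: 5},
    2: {1: 2, 3: 1, 4: 4},
    3: {1: 5, 2: 1, 4: 2, 6: 8},
    4: {2: 4, 3: 2, 5: 3, 6: 4},
    5: {4: 3, 6: 1},
    6: {3: 8, 4: 4, 5: 1},
}
print(bellman_ford_to(6, cost))   # minimum cost from every node to node 6
```

Each pass of the `while` loop corresponds to one round of vector exchanges; the loop exits exactly when the tables have converged.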

Problem: With node 6 as the destination, calculate the minimum distance from all nodes.


The Count-to-Infinity Problem


Convergence refers to the process of routers settling on the best paths within a network. Distance
vector routing, while a straightforward method for routers to collectively calculate shortest paths,
suffers from a significant drawback: it can be slow to converge. It quickly responds to positive
changes but sluggishly adapts to negative ones.
To illustrate the speed at which good news spreads, let's examine a simple five-node network (Fig.
5-10) with hop count as the metric. Initially, router A is down, and all other routers register A's
delay as infinity. When A comes back up, the routers learn about it through vector exchanges.
Imagine there's a signal (like a gong) that triggers simultaneous vector exchanges across all routers.
During the first exchange, B learns that its left neighbor has zero delay to A and updates its routing
table accordingly. The others still think A is down. The good news about A's revival propagates at
one hop per exchange. In a network with the longest path of N hops, it takes N exchanges for
everyone to know about revived links or routers.

Now consider the scenario in Fig. 5-10(b), where all links and routers are initially operational.
Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops, respectively. Suddenly, either A
goes down or the A-B link is severed (effectively the same from B's perspective).
In the first exchange, B doesn't receive any news from A. Fortunately, C reports having a 2-hop
path to A. However, B doesn't know that C's path runs through B itself, so it believes the path to
A via C is 3 hops long. D and E don't update their entries for A in the first exchange.
In the second exchange, C realizes that its neighbors all claim to have 3-hop paths to A, so it
randomly picks one and updates its distance to A as 4 hops. Subsequent exchanges follow the
history shown in the figure.
This scenario illustrates why bad news travels slowly: no router's value exceeds the minimum of
its neighbors' values by more than one. Gradually, all routers increase their distance to infinity, but
the number of exchanges required depends on the value set for infinity. Therefore, setting infinity
to the longest path plus 1 is a prudent choice.
This problem is known as the count-to-infinity problem, and various attempts have been made to
solve it, such as the split horizon with poisoned reverse heuristic discussed in RFC 1058.
However, none of these heuristics work well in practice due to the fundamental challenge:
routers can't determine if they are on a path when another router advertises it.
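The bad-news scenario of Fig. 5-10(b) can be simulated directly: a linear network B-C-D-E with A at the left end, initial distances 1, 2, 3, 4, and then A fails. The cap of 16 is an assumption borrowed from classic RIP, which uses 16 as "infinity".

```python
INF = 16   # "infinity": comfortably above the longest path, as RIP chooses

def exchange(dist: dict) -> dict:
    """One synchronous vector exchange on the line B-C-D-E after A fails.
    Each router's new distance to A is 1 + the minimum its neighbors claim."""
    neighbors = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}
    return {r: min(INF, 1 + min(dist[n] for n in neighbors[r])) for r in dist}

# Before the failure: B, C, D, E are 1, 2, 3, 4 hops from A.
dist = {"B": 1, "C": 2, "D": 3, "E": 4}
for step in range(1, 6):
    dist = exchange(dist)
    print(step, dist)   # the distances creep upward roughly one unit per exchange
```

Running it reproduces the slow count upward: B first believes C's stale 2-hop claim, C then believes B's, and so on until every router finally reaches the chosen "infinity".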
5. Link State Routing
Link State Routing replaced Distance Vector Routing in the ARPANET in 1979 due to the count-
to-infinity problem, which caused slow convergence after network topology changes. This
innovative algorithm, now known as Link State Routing, forms the basis for widely used routing
protocols like IS-IS and OSPF. The Link State Routing concept comprises five key steps for routers
to function effectively:

1. Neighbor Discovery: Routers identify their neighboring devices and learn their network
addresses.
2. Distance Metric Assignment: Each router assigns distance or cost metrics to its
neighboring routers.
3. Packet Construction: Routers construct packets containing the information they've
gathered.
4. Packet Exchange: These packets are sent to and received from all other routers in the
network.
5. Shortest Path Computation: Routers calculate the shortest path to reach every other
router in the network.

Learning about the Neighbors: When a router boots up, it identifies its neighbors by sending
HELLO packets over point-to-point connections, receiving replies with unique neighbor names.
For broadcast links (e.g., Ethernet), the setup is more complex. Instead of modeling each link
separately, a LAN is treated as a single node. This node, with a designated router, connects routers
like A, C, and F. This simplifies the topology and reduces message overhead. Connectivity between
routers, like A and C, is represented as paths within this LAN node, such as ANC.

Setting Link Costs: In the link state routing algorithm, each link must have a distance or cost
metric for calculating the shortest paths. These metrics can be automatically determined or
configured by the network operator. Typically, cost is inversely proportional to link bandwidth,
making higher-capacity links preferable. For geographically dispersed networks, link delay can be
factored into the cost, measured using Round-Trip Time with special ECHO packets, ensuring
shorter links are favored in path selection.
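One common convention for "cost inversely proportional to bandwidth" is OSPF's reference-bandwidth rule: cost = reference bandwidth / link bandwidth, with 100 Mbps as the usual default reference. A minimal sketch under that convention:

```python
REFERENCE_BW = 100_000_000   # 100 Mbps reference bandwidth (a common default)

def link_cost(bandwidth_bps: int) -> int:
    """Cost inversely proportional to bandwidth: faster links look cheaper."""
    return max(1, REFERENCE_BW // bandwidth_bps)

assert link_cost(10_000_000) == 10     # 10 Mbps Ethernet
assert link_cost(100_000_000) == 1     # 100 Mbps link
assert link_cost(1_000_000_000) == 1   # cost never drops below 1
```

The floor of 1 is why very fast links become indistinguishable unless the operator raises the reference bandwidth.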


Building Link State Packets: In the link state routing algorithm, each router creates a packet
containing sender identity, sequence number, age, and neighbor list with associated costs. Packet
creation is straightforward. The challenge lies in determining when to build these packets. They
can be generated periodically or triggered by significant events, such as a line or neighbor status
change (Fig. 5-11).
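The packet contents listed above (sender identity, sequence number, age, neighbor list with costs) can be sketched as a simple record; the field and function names here are assumptions, not a wire format.

```python
from dataclasses import dataclass, field

@dataclass
class LinkStatePacket:
    sender: str                 # identity of the router that built the packet
    seq: int                    # incremented for each new packet from this sender
    age: int                    # decremented in transit; 0 means "discard"
    neighbors: dict = field(default_factory=dict)   # neighbor -> link cost

def build_lsp(router: str, seq: int, links: dict, max_age: int = 60) -> LinkStatePacket:
    """Construct this router's link state packet from its local neighbor table."""
    return LinkStatePacket(router, seq, max_age, dict(links))

lsp = build_lsp("A", seq=21, links={"B": 4, "E": 5})
assert lsp.sender == "A" and lsp.neighbors["B"] == 4
```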

Distributing the Link State Packets: Distributing link state packets is a crucial aspect of the
algorithm. To ensure all routers quickly and reliably receive these packets, a flooding approach is
used. Each packet contains a sequence number that is incremented with each new transmission,
preventing duplicates. However, to address potential issues, packet age is also included and
decremented periodically. Once the age reaches zero, the information is discarded. This technique
ensures timely updates and prevents obsolete data. Additionally, some refinements include a
holding area for incoming packets and the comparison of sequence numbers to handle duplicates
and errors, with all link state packets acknowledged to enhance robustness.
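The sequence-number-and-age check each router applies to an arriving packet can be sketched as follows; the database shape (sender mapped to the highest sequence number seen) is an illustrative simplification.

```python
def accept_lsp(db: dict, sender: str, seq: int, age: int) -> bool:
    """Decide whether an arriving link state packet carries new information.
    db maps sender -> (highest sequence number seen, remaining age)."""
    if age <= 0:
        return False                     # expired: discard the information
    known = db.get(sender)
    if known is not None and seq <= known[0]:
        return False                     # duplicate or obsolete: drop, don't flood
    db[sender] = (seq, age)              # new: record it and flood it onward
    return True

db = {}
assert accept_lsp(db, "E", seq=5, age=59) is True    # first packet from E
assert accept_lsp(db, "E", seq=5, age=58) is False   # duplicate via another path
assert accept_lsp(db, "E", seq=4, age=60) is False   # obsolete
assert accept_lsp(db, "E", seq=6, age=60) is True    # genuinely newer packet
```

Only packets that pass this test are flooded further and used in route computation, which is what keeps duplicates and stale data from circulating forever.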

• Router B uses a data structure, as shown in Fig. 5-13, to manage link state packets. Each
row in this structure represents an incoming packet with information about its origin,
sequence number, age, and data. It also includes flags for sending and acknowledging
packets on B's three links (to A, C, and F).
• For example, when a packet arrives directly from A, it's sent to C and F and acknowledged
to A as indicated by the flag bits. Similarly, a packet from F is forwarded to A and C and
acknowledged to F.
• However, if a packet, like the one from E, arrives through multiple paths (EAB and EFB),
it's sent only to C but acknowledged to both A and F, reflecting the bit settings.

• In cases of duplicate packets arriving while the original is still in the buffer, the flag bits
are adjusted. For instance, if a copy of C's state arrives from F before the original is
forwarded, the bits are changed to indicate acknowledgment to F but no sending.

Computing the New Routes: In link state routing, once a router has collected all link state packets
representing the entire network graph, it can construct a comprehensive view of the network,
considering links in both directions with potential different costs. Dijkstra's algorithm is then
employed locally to compute the shortest paths to all destinations, informing the router of the
preferred link for each destination, which is added to the routing tables.

Compared to distance vector routing, link state routing demands more memory and computation,
with memory needs proportional to k*n (n = number of routers, k = average neighbors), potentially
exceeding the size of a routing table. However, link state routing offers faster convergence, making
it practical for many scenarios. Actual networks often use link state routing protocols like IS-IS
(Intermediate System-Intermediate System) and OSPF (Open Shortest Path First).

6. Hierarchical Routing
Challenges with Growing Networks:
• As networks increase in size, router routing tables also grow proportionally.
• This leads to increased memory consumption, higher CPU usage, and greater bandwidth
requirements for status reports.
Feasibility Issues:
• Eventually, the network may reach a point where it becomes impractical for each router to have
an entry for every other router.
Hierarchical Routing Solution:
• Hierarchical routing becomes necessary, similar to the structure in the telephone network.
• Routers are divided into regions, and each router has knowledge about routing within its own
region.
Interconnected Networks:
• When different networks are interconnected, each network is treated as a separate region.
• This approach frees routers from having to know the internal structure of other interconnected
networks.
Multilevel Hierarchy:
• For huge networks, a two-level hierarchy might be insufficient.
• Regions can be grouped into clusters, clusters into zones, and so on, creating a multilevel
hierarchy.
Example of Two-Level Hierarchy:
• Figure 5-14 illustrates a quantitative example of routing in a two-level hierarchy with five
regions.

Routing Table Comparison:


• Full routing table for router 1A has 17 entries, as shown in Fig. 5-14(b).
• Hierarchical routing, as in Fig. 5-14(c), condenses all other regions into a single router,
reducing the table from 17 to 7 entries.
Space Savings and Trade-offs:
• Hierarchical routing reduces the routing table size, especially as the ratio of regions to routers
per region increases.
• Savings in table space come at the cost of increased path length.
Path Length Increase:
• The example highlights that the best route from 1A to 5C is via region 2, but hierarchical
routing sends all traffic to region 5 via region 3 for most destinations.
Determining Hierarchy Levels:
• The question of how many levels the hierarchy should have is raised for large networks.
• For instance, a network with 720 routers can be partitioned into 24 regions of 30 routers each,
reducing the entries per router from 720 to 53.
Optimal Number of Levels:
• Kamoun and Kleinrock (1979) discovered that the optimal number of hierarchy levels for an
N router network is ln N.
• This results in a total of e ln N entries per router.
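
The numbers in this example can be checked directly. The sketch below is plain arithmetic comparing a flat routing table, the two-level partition from the text, and the Kamoun-Kleinrock optimum:

```python
import math

N = 720                        # routers in the network
routers_per_region, regions = 30, 24   # two-level partition from the text

flat_entries = N                                    # one entry per router
# entries for all routers in the local region, plus one per other region
two_level_entries = routers_per_region + (regions - 1)

optimal_levels = math.log(N)                        # ln N (Kamoun & Kleinrock, 1979)
optimal_entries = math.e * math.log(N)              # about e * ln N entries per router

print(flat_entries, two_level_entries, round(optimal_entries, 1))
```

The two-level scheme cuts the table from 720 entries to 53, and the theoretical optimum (about 18 entries here) shows how much further a deeper hierarchy could go.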

7. Broadcast Routing.

• In some applications, hosts need to send messages to many or all other hosts, known as
broadcasting.
• Examples include services distributing weather reports, stock market updates, or live radio
programs.
Basic Broadcasting Method:
• A simple method is for the source to send a distinct packet to each destination.
• This method is inefficient, slow, and requires the source to have a complete list of all
destinations.
Multi-destination Routing:
• Improvement over basic broadcasting.
• Each packet contains a list of destinations or a bitmap indicating desired destinations.
• Routers determine the set of output lines needed and generate new copies of the packet for
each line.
• Effectively partitions the destination set among output lines, making network bandwidth usage
more efficient.
• Still requires the source to know all destinations, and routers face similar workload as with
distinct packets.
Flooding as a Broadcast Routing Technique:
• Flooding is a better broadcast routing technique.
• Uses a sequence number per source and efficiently utilizes links.
• Simple decision rules at routers make flooding suitable for broadcasting.
• Although not ideal for point-to-point communication, it is effective for broadcasting.
Reverse Path Forwarding (RPF):
• A more advanced broadcast routing technique introduced by Dalal and Metcalfe in 1978.
• When a broadcast packet arrives at a router, the router checks if it arrived on the link normally
used for sending packets toward the source.
• If yes, the packet likely followed the best route and is forwarded to all links except the
incoming one.
• If the packet arrived on a different link, it is discarded as a likely duplicate.
• This method efficiently leverages already computed shortest path routes for regular packets.

• RPF provides an elegant and remarkably simple approach for broadcast routing.
• It efficiently utilizes network resources and simplifies decision-making at routers, making it a
preferable choice for broadcasting.

1. Packet Arrival:
• When a broadcast packet arrives at a router, the router needs to examine the incoming
interface of the packet.
2. Check Preferred Outgoing Interface:
• Determine the outgoing interface that is normally used for sending packets toward the
source of the broadcast. This is often the shortest or preferred route.
3. RPF Check:
• Compare the incoming interface of the broadcast packet with the preferred outgoing
interface.
• If the incoming interface matches the preferred outgoing interface:
1. This suggests that the broadcast packet likely followed the best route from the
source.
2. Proceed to the next step for forwarding copies.
4. Forwarding Copies:
• Forward copies of the broadcast packet onto all interfaces except the one it arrived on.
1. This ensures the packet reaches all possible destinations in the network.
5. Duplicate Check:
• If the incoming interface does not match the preferred outgoing interface:
1. Discard the packet as a likely duplicate.
2. This is based on the assumption that the packet arriving on a different interface
might be a duplicate or has taken a less optimal route.
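
The five steps above reduce to one small decision function. This is a minimal sketch in which the interface names are purely illustrative:

```python
def rpf_forward(arrival_iface, iface_toward_source, all_ifaces):
    """Reverse path forwarding check: copy the broadcast packet onto all
    other interfaces only if it arrived on the interface this router
    would itself use to send packets toward the source; otherwise drop
    it as a likely duplicate."""
    if arrival_iface != iface_toward_source:
        return []                                   # step 5: discard
    # step 4: forward copies on every interface except the incoming one
    return [i for i in all_ifaces if i != arrival_iface]
```

For example, a packet arriving on the interface used to reach the source is copied to all other interfaces, while one arriving on any other interface produces no copies at all.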

Reverse Path Forwarding Example 1 (Fig. 5-15):


• Part (a) depicts a network, part (b) illustrates a sink tree for router I, and part (c) shows the
reverse path algorithm in action.
• On the first hop, router I sends packets to F, H, J, and N along the preferred path.
• Second hop generates eight packets, of which five arrive along the preferred line.
• After five hops and 24 packets, broadcasting terminates, compared to four hops and 14
packets with the sink tree.

Reverse Path Forwarding Example 2

This example consists of seven routers, A, B, C, D, E, F, and G, with A as the root. A
connects to B and C; B connects to D; C connects to E and F; and D connects to E and G, each
link counting as one hop. Node A forwards the broadcast packet to its neighbors B and C, and
node C forwards copies to its neighbors F, E, and B. Under the reverse path forwarding rule,
however, B discards the copy arriving from C, because it did not come in on the link B
normally uses to send packets toward A; likewise, C discards the copy it receives from B. The
same check is applied at every router.

Spanning Tree Utilization:


• A spanning tree is a subset of the network that includes all routers but contains no loops.
• Sink trees are considered spanning trees.
• If each router knows which of its lines belong to the spanning tree, it can efficiently copy an
incoming broadcast packet onto all spanning tree lines except the one it arrived on.
• This method optimally utilizes bandwidth by generating the absolute minimum number of
packets necessary to broadcast the information.

8. Multicast Routing.
Multicasting and Multicast Routing:
• Some applications, like multiplayer games or live sports event streaming, send packets to
multiple receivers.
• Sending distinct packets to each receiver is expensive for large groups, and broadcasting is
wasteful if many recipients are not interested.
• Multicasting: Sending messages to well-defined groups that are numerically large
compared to individual machines but small compared to the entire network.
• Multicast Routing: The routing algorithm used for multicasting.
• Group Management: All multicasting schemes require mechanisms to create, destroy,
and identify groups.
Multicast Routing Schemes:
• The spanning tree depends on whether the group is dense (spread over most of the network)
or sparse (occupying only specific parts).
• Efficiently sending packets along spanning trees to deliver them to group members while
optimizing bandwidth usage.
• Pruning the Spanning Tree: To eliminate wasteful broadcast to non-group members,
prune the broadcast spanning tree by removing links that do not lead to group members.
• Sparse Group Multicast Routing: Optimizing the spanning tree becomes crucial to avoid
unnecessary utilization of resources.

• Network Configuration (Fig. 5-16a):


• Two multicast groups, Group 1 and Group 2, in the network.
• Routers are connected to hosts belonging to one or both groups, as indicated.
• Initial Spanning Tree (Fig. 5-16b):


• A spanning tree for the leftmost router is shown. Suitable for broadcast but not optimal
for multicast.
• Pruned Spanning Trees for Multicast (Fig. 5-16c and 5-16d):
• Group 1 (Fig. 5-16c):
• Links not leading to hosts in Group 1 are removed.
• Resulting multicast spanning tree is more efficient with 7 links instead of 10.
• Group 2 (Fig. 5-16d):
• Similar pruning for Group 2 results in an efficient multicast spanning tree with
5 links.
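
Pruning can be sketched as a recursive walk over the broadcast spanning tree that keeps a branch only if it leads to at least one group member. The tree encoding (a dict mapping each node to its children) is an assumption for illustration:

```python
def prune(tree, node, members):
    """Prune a broadcast spanning tree for multicast: return the subtree
    rooted at 'node' with all branches removed that do not lead to a
    group member, or None if nothing below (or at) this node is a member."""
    kept = []
    for child in tree.get(node, []):
        sub = prune(tree, child, members)
        if sub is not None:            # this branch reaches a group member
            kept.append(sub)
    if kept or node in members:
        return (node, kept)
    return None                        # dead branch: no members down here
```

Applied to the broadcast spanning tree of Fig. 5-16(b) with the appropriate member sets, this is the operation that yields the smaller trees of Fig. 5-16(c) and (d).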

9. Anycast Routing.
• Unicast: Source sends to a single destination.
• Broadcast: Source sends to all destinations.
• Multicast: Source sends to a group of destinations.
• Anycast: Packet delivered to the nearest member of a group.
Anycast Usage:
• Anycast is used when the specific identity of the destination node is not crucial, and any
available node offering a particular service will suffice.
• Commonly employed in services like DNS (Domain Name System) in the Internet.
• Routing for Anycast: No need for new routing schemes; existing distance vector and link state
routing can produce anycast routes.

Routing Scheme for Anycast (Fig. 5-18a):


• Members of group 1 have the same address '1.'

• Distance vector routing results in nodes sending to the nearest instance of destination '1.'
• Routing protocol doesn't recognize multiple instances, treating them as the same node.
Topology Consideration (Fig. 5-18b):
• The routing protocol believes that all instances of node 1 are the same node, as shown in the
simplified topology.
• The procedure works because the protocol doesn't realize the existence of multiple instances.

10. Routing for Mobile Hosts.


• Mobile hosts refer to devices used in truly mobile situations (e.g., in moving cars) or
nomadic situations (e.g., laptops used in various locations).
• The term encompasses users who desire connectivity regardless of their location.
Routing Challenge for Mobile Hosts:
1. Routing a packet to a mobile host requires the network to locate the host's current position.
2. The challenge is to efficiently route packets to mobile hosts using their fixed home
addresses while accommodating their dynamic movements.
Model of the Mobile Hosts:
1. Hosts have permanent home locations that do not change.
2. The routing goal is to send packets to mobile hosts using their fixed home addresses and
efficiently reach them wherever they are located.
3. The challenge lies in finding the mobile hosts as they move.
4. Mobile hosts inform a host at their home location about their current location.
5. The host at the home location, acting as a representative for the mobile host, is called the
home agent.
6. Once the home agent knows the mobile host's current location, it can forward packets to
ensure delivery.
7. Packets destined for the mobile host are sent to the home agent, which forwards them to
the current location of the mobile host.

1. Mobile Host Registration:


1. Mobile host sends a registration message to the home agent (step 1).
2. Registration includes the care-of address, indicating the mobile host's current location.
3. Dashed line indicates a control message, not a data message.
2. Data Packet Sent to Home Location:
1. Sender sends a data packet to the mobile host using its permanent address (step 2).
2. Packet routed to the host's home location in New York.

3. Tunneling and Encapsulation:

1. Home agent intercepts the packet, encapsulates it with a new header, and sends it to the
care-of address (step 3).
2. Tunneling: Wrapping the packet in a new header to facilitate routing to the mobile host's
current location.
4. Packet Delivery to Mobile Host:
1. Encapsulated packet arrives at the care-of address.
2. Mobile host unwraps the packet and retrieves the original data from the sender (step 4).
3. Mobile host sends a reply packet directly to the sender.
5. Triangle Routing:
1. Overall route may be circuitous if the remote location is far from the home location.
2. Sender may learn the current care-of address during step 4.
3. Subsequent packets can be routed directly to the mobile host by tunneling them to the
care-of address (step 5).
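
Tunneling in step 3 amounts to wrapping the original packet in a new outer header. The toy model below uses dictionaries standing in for IP headers; all names and addresses are illustrative:

```python
def encapsulate(packet, home_agent, care_of_addr):
    """Home agent wraps the intercepted packet in a new outer header
    addressed to the mobile host's current care-of address (step 3)."""
    return {"src": home_agent, "dst": care_of_addr, "payload": packet}

def decapsulate(tunneled):
    """Mobile host strips the outer header to recover the original
    packet from the sender, untouched (step 4)."""
    return tunneled["payload"]
```

The key point is that the inner packet, including the sender's address, survives the tunnel unchanged, which is what lets the mobile host reply directly.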

Routing in Ad Hoc Networks


Ad Hoc Networks are decentralized wireless networks where devices communicate directly with
each other without the need for a centralized infrastructure such as routers or access points. In
these networks, devices collaborate to form a temporary and dynamic communication network,
allowing them to establish connections and share information on-the-fly. Examples include
emergency workers at an earthquake site, military vehicles on a battlefield, a fleet of ships at sea,
or a group of people with laptops in an area lacking 802.11.

Key features of Ad Hoc Networks include:


1. Decentralized Architecture: Ad Hoc Networks operate without a fixed infrastructure.
Devices in the network act both as hosts and routers, forwarding data for other devices.
2. Dynamic Topology: The network topology in Ad Hoc Networks can change rapidly as
devices move in and out of the network or as connections are established and broken.
• Ad Hoc Networks differ from wired networks in that their topology is highly dynamic.
• Nodes can join, leave, or relocate frequently, leading to constant changes in the network
structure.
• Unlike wired networks where valid paths remain stable, in ad hoc networks, the
desirability and validity of paths can change spontaneously without warning.

3. Routing Challenges:
1. Routing in ad hoc networks is more challenging compared to fixed counterparts
due to the dynamic and unpredictable nature of the topology.
2. The topology may change frequently, making it essential for routing algorithms to
adapt to these changes.
3. This dynamic environment poses challenges to the desirability and validity of
routes, which can change suddenly.
4. Self-Organization: Devices in an Ad Hoc Network can self-organize and configure
themselves to form a network without requiring manual intervention.
5. Wireless Communication: Ad Hoc Networks typically rely on wireless communication
technologies, such as Wi-Fi, Bluetooth, or other wireless standards.
6. Mobile Nodes: Ad Hoc Networks are well-suited for environments where devices are
mobile, such as in conferences, disaster recovery scenarios, military operations, or sensor
networks.
Ad Hoc Networks can be categorized into various types based on their application and structure,
including mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs). MANETs
are formed by mobile devices (e.g., smartphones, laptops) that communicate with each other, while
VANETs involve communication between vehicles on the road.
Ad Hoc Networks, or Mobile Ad hoc NETworks (MANETs), involve nodes that are both hosts
and routers. These networks can form spontaneously among nodes in proximity, without a fixed
infrastructure.
AODV (Ad hoc On-demand Distance Vector):
• One popular routing algorithm for ad hoc networks is AODV (Ad hoc On-demand
Distance Vector).
• AODV is a variant of the distance vector algorithm adapted for the mobile environment,
where nodes often have limited bandwidth and battery lifetimes.
• The algorithm focuses on on-demand route discovery and maintenance, allowing nodes
to find routes as needed in response to communication requirements.
Ad Hoc Networks have both advantages and challenges. They are flexible, self-configuring, and
can be quickly deployed in situations where a fixed infrastructure is unavailable. However, they
also face issues such as limited bandwidth, security concerns, and the need for efficient routing
protocols to handle the dynamic nature of the network.

Route Discovery in AODV:


• Routes in AODV are discovered on-demand, triggered only when a packet needs to be sent to
a specific destination. This approach saves resources by avoiding unnecessary route discovery
when the topology might change before the route is utilized.
• The topology of an ad hoc network is represented by a graph of connected nodes. Nodes are
considered connected if they can communicate directly using their radios.
• The basic model assumes that each node can communicate with all others within its coverage
circle. Real networks involve obstacles such as buildings and hills, but for simplicity all
connections are assumed to be symmetric.
• In the figure, if node A can communicate with B, it is assumed that B can also communicate
with A. AODV maintains a distance vector table at each node, keyed by destination, giving the
neighbor to which packets should be sent to reach that destination.
• When a node, for example, node A, wants to send a packet to node I and does not find an entry
for I in its table, it initiates route discovery.
1. A ROUTE REQUEST packet is constructed and broadcasted using flooding across the
network.
2. The ROUTE REQUEST packet is transmitted from source node A to neighboring nodes
B and D.
3. Each receiving node rebroadcasts the request, extending the reach of the broadcast to
nodes F, G, C, H, E, and I.
• Sequence numbers are set at the source to prevent duplicate requests during the flood. Nodes
discard duplicate transmissions, ensuring efficient routing information dissemination.
4. Once the ROUTE REQUEST reaches the destination node I, a ROUTE REPLY packet
is unicast back to the sender along the reverse path followed by the request.

• Intermediate nodes remember the sender of the request, facilitating the construction of the
reverse path. Intermediate nodes increase a hop count as they forward the reply, indicating their
distance from the destination.
• The replies guide intermediate nodes on the best route to the destination, updating their routing
tables.
• In large networks, the algorithm generates many broadcasts, even for nearby destinations. To
reduce overhead, the scope of broadcasts is limited using the Time to Live (TTL) field in the
IP packet. The TTL field is decremented on each hop, and if it reaches 0, the packet is
discarded. The route discovery process uses a progressively increasing TTL to search locally
first and then in widening rings for the destination.
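
The flood-and-reply discovery can be modeled as a breadth-first search in which every node remembers the neighbor it first heard the ROUTE REQUEST from; retracing those records from the destination is the reverse path the ROUTE REPLY follows. This is a simplified sketch (the graph and node names are illustrative, and unreachable destinations are not handled):

```python
from collections import deque

def route_discovery(graph, src, dst):
    """Model AODV route discovery: flood a ROUTE REQUEST (as BFS), with
    each node recording the neighbor it first received the request from,
    then unicast the ROUTE REPLY back along that reverse path."""
    heard_from = {src: None}           # doubles as the duplicate filter
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            break                      # request reached the destination
        for nb in graph[node]:
            if nb not in heard_from:   # duplicates are discarded
                heard_from[nb] = node
                q.append(nb)
    # build the reply path by walking the reverse pointers back to src
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = heard_from[node]
    return path[::-1]
```

In the real protocol the duplicate filter is the per-source sequence number, and the expanding-ring TTL limits how far each flood spreads.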

Route Maintenance in AODV:

Nodes in an ad hoc network can move or be turned off, leading to spontaneous changes in the
network topology.
1. For example, if node G is switched off in Fig. 5-20, node A may not realize that the route
it was using to reach I (ADGI) is no longer valid.
2. Periodically, each node broadcasts a Hello message, expecting responses from its
neighbors.
3. Lack of response indicates that the neighbor has moved out of range or failed, helping
identify changes in connectivity. Unresponsive neighbors are considered disconnected.
4. Each node keeps track of active neighbors for each destination, recording those that have
communicated in the last ΔT seconds. When a neighbor becomes unreachable, the node
checks its routing table to identify routes via the now-unavailable neighbor.
5. Active neighbors are informed that routes through the unavailable node are invalid and
must be purged from their routing tables.
6. Routes include a sequence number controlled by the destination, functioning like a logical
clock.
7. The destination increments the sequence number with each fresh ROUTE REPLY it sends.
8. Senders request fresh routes by including the destination's sequence number in the ROUTE
REQUEST, enabling rapid convergence after a topology change.
Intermediate Node Handling:
1. Intermediate nodes store routes with higher sequence numbers or the fewest hops for
the current sequence number.
2. Only routes in use are stored, and other route information learned during broadcasts is
timed out after a short delay.
3. This on-demand approach helps conserve bandwidth and battery life compared to
periodic updates in a standard distance vector protocol.
Shared Route Discovery and Maintenance:
1. To optimize resource usage, route discovery and maintenance are shared when routes
overlap.
2. If multiple nodes, such as B, also want to send packets to I, overlapping with the
existing route, they can leverage the information without redundant work.
3. In such cases, the request may reach an intermediate node (e.g., D), which already has
a route to I. Node D can reply to the requesting node (e.g., B) without additional route
discovery efforts.
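
The freshness rule used in route maintenance, prefer a higher destination sequence number and break ties on hop count, can be sketched as a comparison function. The dictionary encoding of a route is an assumption for illustration:

```python
def better_route(current, candidate):
    """AODV-style route selection: a candidate route replaces the stored
    one if it carries a higher destination sequence number (fresher
    information), or the same sequence number with fewer hops."""
    if candidate["seq"] > current["seq"]:
        return candidate               # fresher route always wins
    if candidate["seq"] == current["seq"] and candidate["hops"] < current["hops"]:
        return candidate               # same freshness, shorter path
    return current
```

Because the destination increments its sequence number with each fresh ROUTE REPLY, this rule lets stale routes through a failed node be displaced quickly after a topology change.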

CONGESTION CONTROL ALGORITHMS


Congestion occurs when there are too many packets in a part of the network, causing delays and
packet loss, ultimately degrading performance. Both the network and transport layers share the
responsibility for handling congestion. Effective congestion control involves collaboration
between the network and transport layers to reduce the load placed on the network by the transport
layer.

Figure 5-21
The onset of congestion is depicted in Figure 5-21, where the number of packets sent into the
network affects the number of packets delivered. When the offered load approaches the network's
carrying capacity, occasional bursts of traffic may fill up buffers inside routers, resulting in packet
loss and decreased delivered packets. This signifies congestion, and without proper design, it may
lead to a congestion collapse, causing a plummet in performance as the offered load surpasses the
network's capacity.
Congestion Collapse Factors:
• Delayed Packets: Packets may experience delays within the network, making them no
longer useful upon leaving, leading to congestion collapse.
• Retransmission Issues: Senders may retransmit delayed packets, causing duplicate
deliveries and wasting network capacity.
To address these complexities, the concept of goodput, representing the rate at which useful
packets are delivered by the network, is introduced in Figure 5-21. The goal is to design networks
that minimize congestion where possible and can recover gracefully without collapsing under
heavy loads.
Challenges in Avoiding Congestion:
• Inevitability: Congestion cannot be wholly avoided due to sudden bursts overwhelming
the network, leading to queue buildup and potential packet loss.
• Memory Expansion: Infinite memory in routers may worsen congestion as timed-out
packets and duplicates contribute to the problem (Nagle, 1987).

Congestion on Low-Bandwidth Links:


• Low-bandwidth links or slow routers may experience congestion.
• Diverting traffic away from bottlenecks can alleviate the situation temporarily, but
eventually, the entire network may become congested.
Options in Congested Situations:
• Traffic Redirection: Directing traffic away from congested points can be a temporary
solution.
• Load Shedding or Network Upgrade: In severe congestion, load shedding (dropping
excess packets) or upgrading to a faster network are potential options.
It's essential to differentiate between congestion control and flow control. Congestion control
focuses on ensuring the network can handle the overall traffic, involving the behavior of all hosts
and routers. In contrast, flow control pertains to managing the traffic between a specific sender
and receiver, preventing a fast sender from overwhelming a slower receiver. The subtle relationship
between these two concepts becomes evident in scenarios where a network may not experience
congestion but requires flow control to match the capabilities of individual sender-receiver pairs.
Example Differentiating Congestion and Flow Control:
• Network Scenario: A network with 100-Gbps links where a supercomputer is sending
data to a personal computer capable of handling only 1 Gbps.
• Congestion vs. Flow Control: No congestion in the network, but flow control is
necessary to regulate data flow between the fast sender and slow receiver.
Another Example Scenario:
• Network Scenario: A network with 1-Mbps lines and 1000 large computers
transferring files, half at 100 kbps and half at 1 Mbps.
• Issue: The total offered traffic exceeds what the network can handle, leading to
congestion.

Approaches to Congestion Control


Nature of Congestion: Congestion occurs when the load temporarily exceeds what the resources
in a part of the network can handle. As shown in Fig. 5-22, solutions are usually applied on
different time scales to either prevent congestion or react to it once it has occurred.

Two Fundamental Solutions:


1. Increase Resources:
• Dynamically adding resources during serious congestion, such as activating spare
routers, enabling backup lines, or purchasing additional bandwidth.
• Regularly upgrading heavily utilized links and routers, known as provisioning,
driven by long-term traffic trends over months.
2. Decrease Load:
• Refusing new connections in a virtual-circuit network if they would cause
congestion (admission control).
• Providing feedback to sources of traffic flows responsible for congestion, either
requesting them to throttle traffic or slowing down the traffic itself.

Tailoring Routes to Traffic Patterns:


• Adapting routes based on changing traffic patterns during the day, optimizing network
capacity utilization.
• Examples include traffic-aware routing, where routes are adjusted to avoid congestion
hotspots, and splitting traffic across multiple paths.
Identifying Onset of Congestion:
• Monitoring average load, queueing delay, or packet loss to identify growing congestion.
• Rising numbers in these metrics signal the potential onset of congestion.
Feedback Loop for Congestion Control:
• Establishing a feedback loop between routers and traffic sources.
• Adjusting the time scale carefully to avoid oscillations or sluggish reactions.
• Router participation in the feedback loop involves timely communication with sources
about congestion.
Load Shedding as a Last Resort:
• When all else fails, the network may discard packets it cannot deliver to prevent congestion
collapse.
• Load shedding involves a policy for selecting which packets to discard, aiming to prioritize
critical data and prevent network degradation.
Challenges and Considerations:
• Timely Feedback: Ensuring timely feedback to traffic sources is crucial for effective
congestion control.
• Message Overhead: Avoiding the generation of excessive messages by routers, especially
during congestion, is a concern.
• Policies for Discarding Packets: Implementing effective policies for choosing which
packets to discard during load shedding can impact congestion control success.

Traffic-Aware Routing
Objective: Adapt routing schemes to changes in load, in addition to changes in topology.
Goal: Shift traffic away from hotspots to prevent congestion in the network.

Link Weight Considerations:


• Fixed Link Weights: Traditional routing schemes use fixed link weights that adapt to
changes in topology but not to changes in load.
• Variable Link Weights: The link weight is set as a function of the fixed link bandwidth,
propagation delay, and the variable measured load or average queuing delay.
Routing Calculation with Load:
• Least-Weight Paths: Paths with lower load, in addition to bandwidth and propagation
delay, are favored in the routing calculations.
• Avoiding Congestion: The objective is to route traffic away from heavily loaded paths
that might lead to congestion.
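
One plausible way to realize such a variable link weight is to add the measured queueing delay to the link's fixed transmission and propagation delays. The formula below is a sketch under that assumption, not a standard from any particular protocol (units assumed: seconds and bits per second; the nominal packet size is illustrative):

```python
def link_weight(prop_delay, bandwidth, measured_queue_delay, packet_bits=8000):
    """Variable link weight = fixed part (propagation delay plus the
    transmission time of a nominal packet) + variable part (measured
    queueing delay). Heavily loaded links get larger weights, so
    least-weight paths steer traffic away from them."""
    return prop_delay + packet_bits / bandwidth + measured_queue_delay
```

For a 1-Mbps link with 10 ms propagation delay, an idle link weighs 0.018 s, while 50 ms of measured queueing delay raises the weight to 0.068 s, making the loaded link less attractive.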

Challenges and Solutions:


• Ignoring Load: If load is ignored and weights reflect only bandwidth and propagation
delay, routes are stable but cannot steer traffic around hotspots; including load can make
routes oscillate as traffic shifts back and forth between paths.
• Narrow Weight Range: Changing weights within only a narrow range slows down these
routing oscillations but does not eliminate them.
• Multipath Routing: Introducing multiple paths from a source to a destination to spread
traffic and avoid concentration on a single link.
• Slow Traffic Shifting: Shifting traffic across routes slowly to allow convergence,
preventing rapid oscillations (Gallagher, 1977).
• Traffic Engineering: In the Internet, routing protocols typically do not dynamically
adjust routes based on load.
• External Adjustments: Adjustments are made outside the routing protocol by slowly
changing its inputs, a practice known as traffic engineering.

Admission Control
In virtual-circuit networks, a widely used technique to prevent congestion is admission control.
The fundamental idea is straightforward: do not establish a new virtual circuit unless the network
can handle the additional traffic without becoming congested. This preventive measure helps avoid
worsening the situation by admitting more connections when the network is busy, analogous to a
telephone switch not providing dial tones during overload.
Traffic Characterization for Admission Control: Determining when a new virtual circuit might
lead to congestion requires characterizing the traffic it will generate. In the telephone network,
with fixed call bandwidths, this is straightforward, but computer networks host virtual circuits of
various shapes and sizes. Traffic characterization often involves rate and shape descriptions,
considering bursts, and using descriptors like the leaky bucket or token bucket, which capture
bursty traffic effects. The leaky bucket, a commonly used descriptor for quality of service, has

parameters limiting the average rate and instantaneous burst size of traffic. It plays a crucial role
in admission control.
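
A token bucket of this kind, with one parameter bounding the average rate and one bounding the burst size, can be sketched as follows. The units and the conformance-test interface are illustrative assumptions:

```python
class TokenBucket:
    """Traffic descriptor: 'rate' bounds the long-term average rate and
    'burst' bounds the instantaneous burst size (e.g., bytes/sec and
    bytes). Admission control can sum such descriptors to estimate how
    many circuits fit on a link."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0     # bucket starts full

    def conforms(self, now, size):
        """True if a packet of 'size' sent at time 'now' fits the descriptor."""
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

A source conforming to this descriptor can send a burst of up to `burst` at once, but over any long interval its average rate cannot exceed `rate`.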

Network Decision Making: The network can decide whether to admit the new virtual circuit.
Reserving sufficient capacity along the paths of each virtual circuit can prevent congestion,
forming a service agreement that guarantees network performance. Alternatively, even without
guarantees, traffic descriptions aid admission control by estimating how many circuits can fit
within the network's capacity without congestion.
Optimizing Admissions: Real networks use measurements of past behavior to estimate how many
circuits to admit, balancing better performance against acceptable risk. For instance, if 10
circuits may each transmit at rates up to 10 Mbps through a 100-Mbps link, reserving the full
peak rate for every circuit is wasteful because the circuits rarely all burst at once;
statistical data lets the network admit more circuits based on actual behavior.
Integration with Traffic-Aware Routing: Admission control can be integrated with traffic-aware
routing to enhance its effectiveness. By considering routes around traffic hotspots during the setup
procedure, the network can optimize connections. Load-sensitive routing designs, as presented by
Shaikh et al. (1999), showcase how admission control and traffic-aware routing can be combined
to enhance network efficiency and prevent congestion in scenarios like avoiding congested routers.

Traffic Throttling
Sender Adjustments: In computer networks, senders dynamically adjust their transmissions to
match the network's capacity, aiming to operate just before the onset of congestion.
Network Operation: The network's goal is to avoid congestion and, when pending, instruct
senders to throttle back their transmissions.
Congestion Avoidance: The term "congestion avoidance" describes the network's operating point
just before congestion, contrasting with the situation when the network becomes overly congested.
Approaches to Throttling Traffic: Throttling traffic can be applied in both datagram networks
and virtual-circuit networks.

Two Key Problems to Solve:


1. Congestion Detection
2. Timely Feedback Delivery
Congestion Detection:
• Routers must identify when congestion is approaching, preferably before it occurs.
• Monitoring possibilities include:
• Utilization of output links.
• Buffering of queued packets inside the router.
• Counting packets lost due to insufficient buffering.
Monitoring queueing delay inside routers is the most useful signal: it directly captures the
congestion experienced by packets, whereas utilization averages fail to account for traffic
burstiness and packet loss counts arrive only after congestion has set in.
Estimation Approach: Maintain an estimate of the queueing delay (d) using an Exponentially
Weighted Moving Average (EWMA) formula:

𝑑new = 𝛼𝑑old + (1 − 𝛼)𝑠


Where α determines how fast the router forgets recent history, smoothing fluctuations.
• 𝑑new is the new value.
• 𝑑old is the old or existing value.
• 𝛼 is a weight or coefficient (with 0≤α≤1). It determines the influence of the old value versus
the new value. If α is close to 1, more weight is given to the old value; if α is close to 0,
more weight is given to the new value.
• s is the new information or update being incorporated.
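
The EWMA update is one line of code; the short run below shows how it smooths a sudden jump in the sampled queueing delay (the choice α = 0.9 is illustrative):

```python
def ewma(d_old, sample, alpha=0.9):
    """Exponentially weighted moving average:
    d_new = alpha * d_old + (1 - alpha) * s.
    The estimate moves only a fraction (1 - alpha) of the way toward
    each new sample, smoothing out short bursts."""
    return alpha * d_old + (1 - alpha) * sample

d = 0.0
for s in [10.0, 10.0, 10.0]:   # sampled queueing delay jumps from 0 to 10
    d = ewma(d, s)
# d rises gradually toward 10: roughly 1.0, then 1.9, then 2.71
```

With α near 1 the router forgets history slowly and ignores brief spikes; with α near 0 it tracks the latest sample almost directly.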

Timely Feedback Delivery:


• Routers must deliver timely feedback to the senders causing congestion.
• Identifying Congestion: Routers need to identify appropriate senders contributing to
congestion.
• Careful Warning: Routers must warn senders without exacerbating congestion.

Choke Packets:
• Direct Congestion Notification: The primary method of informing a sender about congestion is the choke packet: the router picks a packet from the congested queue and sends a choke packet back to that packet's source host, specifying the destination found in the original packet.


• Tagging Original Packet: The original packet may be tagged, turning on a header bit to
prevent it from generating more choke packets along the path, and then forwarded as usual.
• Controlled Choke Packet Rate: To avoid increasing network load during congestion,
routers may send choke packets at a low rate.
Source Host Reaction to Choke Packet:
• Traffic Reduction Requirement: Upon receiving the choke packet, the source host is
obligated to reduce the traffic sent to the specified destination, often by a predetermined
percentage (e.g., 50%).
• Choosing Whom to Choke: In datagram networks, selecting packets at random during congestion tends to direct choke packets at the fastest senders, since they have the most packets queued. This feedback helps relieve congestion without needlessly throttling senders unless they cause trouble.
• Early Internet Usage: An example of a choke packet in the early Internet is the SOURCE-
QUENCH message (Postel, 1981).

Explicit Congestion Notification (ECN): Instead of generating additional packets to indicate congestion, a router can tag any packet it forwards by setting a bit in the packet's header. When the network delivers the packet to the destination, the recipient detects the congestion indication and notifies the sender when sending a reply packet. This feedback enables the sender to adjust its transmission rate, similar to the response triggered by traditional congestion signaling methods.

As illustrated in Fig. 5-25, packets start out unmarked. If any router along the path experiences congestion, it marks the packet accordingly while forwarding it. The destination then echoes these marks back to the sender in its reply packet, denoted by a dashed line in the figure, at a level above IP (e.g., in TCP). Upon receiving the explicit congestion signal, the sender must throttle its transmissions, akin to the response triggered by traditional choke packets. The ECN mechanism provides an efficient and less intrusive method for handling congestion in network communication.
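A toy end-to-end sketch of this idea (all names here are illustrative assumptions, not a real protocol implementation) might look like:

```python
# Illustrative sketch of the ECN loop: a congested router sets a bit in the
# packet header instead of generating a separate choke packet; the receiver
# echoes the mark back, and the sender throttles its rate.

def forward(packet, router_congested):
    """Router marks the packet if it is experiencing congestion."""
    if router_congested:
        packet["ce"] = True            # "Congestion Experienced" bit
    return packet

def receive_and_reply(packet):
    """Destination echoes the congestion mark in its reply."""
    return {"echo_ce": packet.get("ce", False)}

def sender_adjust(rate, reply, factor=0.5):
    """Sender throttles when the reply carries a congestion echo."""
    return rate * factor if reply["echo_ce"] else rate

rate = 100.0
pkt = forward({"payload": "data"}, router_congested=True)
reply = receive_and_reply(pkt)
rate = sender_adjust(rate, reply)      # sender halves its rate on ECN echo
```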
Hop-by-Hop Backpressure:
An alternative approach is hop-by-hop backpressure, where the choke packet takes effect at every
hop it passes through, providing immediate relief at the point of congestion.
• Sequence Illustration: Fig. 5-26(b) shows the sequence where each hop (F, E, A)
immediately reacts to the choke packet, requiring each intermediate router to reduce
the flow to the next hop.
• Benefits: The hop-by-hop scheme prevents packet loss and efficiently addresses
congestion, as discussed in detail by Mishra et al. (1996).

Random Early Detection (RED)

• The RED algorithm is employed to drop packets strategically before congestion becomes
severe, allowing time for the source to adjust.
• Routers maintain a running average of queue lengths; when the average exceeds a threshold, indicating congestion, a small fraction of packets is dropped at random, as shown in the figure.

Figure: Random Early Detection (RED)


• Random selection increases the likelihood that fast senders will experience packet drops,
prompting the sender to slow down.
• This implicit message delivery serves the same purpose as a choke packet, without an explicit
signal from the router.
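A minimal sketch of the RED drop decision (threshold values and names are assumptions; a real implementation also maintains the running average of queue length, e.g., with an EWMA):

```python
# RED sketch: drop an arriving packet with a probability that grows linearly
# as the average queue length moves between a minimum and maximum threshold.
import random

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.1,
                    rng=random.random):
    if avg_queue < min_th:
        return False                   # light load: never drop
    if avg_queue >= max_th:
        return True                    # severe congestion: always drop
    # in between: drop probability ramps from 0 up to max_p
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return rng() < p
```

Because fast senders contribute more packets, they are more likely to see one of these random drops and slow down first.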

Performance Enhancement with RED:


• RED routers offer improved performance compared to routers that only drop packets
when buffers are full.
• Tuning may be required, and the ideal number of packets to drop depends on the
number of senders needing congestion notification.
• Preference for ECN:
• While RED is effective when explicit signals cannot be received by hosts, ECN is the
preferred option if available.
• ECN delivers a congestion signal explicitly, functioning similarly to RED but providing
a clear notification.

QUALITY OF SERVICE (QoS)


Network techniques aim to reduce congestion and enhance performance, but certain applications
demand stronger performance guarantees. Multimedia applications require maximum throughput
and minimum latency for optimal functioning. The focus now shifts to exploring ways to provide
Quality of Service (QoS) tailored to application needs, indicating an ongoing upgrade in the
Internet.

Overprovisioning as a Solution:
Overprovisioning involves building a network with ample capacity to handle anticipated traffic,
ensuring minimal loss and low-latency packet delivery. While effective, overprovisioning is
expensive, relying on significant financial investment to meet performance demands.

Quality of Service Mechanisms:


Quality of Service (QoS) mechanisms offer an alternative, allowing networks with less capacity to
meet application requirements efficiently and cost-effectively. These mechanisms ensure the
network can uphold performance guarantees even during traffic spikes, though some requests may
need to be declined.

Four Key Issues in Ensuring Quality of Service:


1. Application Requirements: Understanding the specific needs of applications within the
network.
2. Traffic Regulation: Implementing measures to control and regulate incoming network traffic.


3. Resource Reservation: Allocating and reserving resources at routers to guarantee performance.
4. Network Acceptance of Traffic: Determining the network's capacity to safely handle additional traffic.

Diverse Techniques for Quality of Service:


No single technique efficiently addresses all quality-of-service issues. Multiple techniques have
been developed, operating at the network and transport layers. Practical QoS solutions often
involve the combination of various techniques to effectively manage performance demands. Two
prominent versions of Quality of Service for the Internet are Integrated Services and
Differentiated Services. These versions represent distinct approaches to implementing QoS, each
with its own set of principles and mechanisms.

Application Requirements:
The network requirements of applications depend on their characteristics and functionality.

1. Parameters Characterizing Flow Needs:


• Bandwidth: The amount of data that can be transmitted in a given time.
• Delay: The time it takes for packets to travel from the source to the destination.
• Jitter: The variation in delay or packet arrival times.
• Loss: The percentage of packets that do not reach the destination.
2. QoS Requirements:
• Different applications have different QoS requirements based on their characteristics.
• QoS is a measure of the quality of service that a flow requires, determined by the
combination of bandwidth, delay, jitter, and loss.
3. Application Specifics:
• File transfer applications (e.g., email and video downloads) are not delay-sensitive.
• Interactive applications (e.g., Web surfing and remote login) are more delay-sensitive.
• Real-time applications (e.g., telephony and videoconferencing) have strict delay
requirements.
4. Bandwidth Needs:
• Email, audio, and remote login require less bandwidth.
• File sharing and video require a significant amount of bandwidth.
5. Delay Sensitivity:
• File transfer applications are not sensitive to delays.
• Interactive applications are somewhat sensitive to delays.
• Real-time applications have strict delay requirements.
6. Jitter Sensitivity:
• Jitter is the variation in the delay of packet arrival times.
• Video and audio applications are extremely sensitive to jitter.
7. Network vs. Application Capabilities:
• Networks do not need to be lossless for reliable file transfer.
• Some applications can improve the service provided by the network (e.g.,
retransmissions for packet loss).

Traffic Shaping/ Traffic Regulation:

Traffic shaping is a technique used to regulate the flow of data into the network, and traffic policing
ensures that users adhere to the agreed-upon traffic patterns, especially critical for applications
with stringent QoS requirements. These mechanisms contribute to reducing congestion and
ensuring a more predictable and reliable network performance.

Bursty Nature of Data Network Traffic: Unlike telephone networks with constant-rate traffic,
data networks experience bursty traffic due to varying rates, user interactions, and task-switching
by computers. Bursty traffic can be challenging to handle, potentially leading to buffer overflows
and packet loss.
Traffic Shaping: Traffic shaping is a technique used to regulate the average rate and burstiness of
data flows entering the network. The objective is to allow applications to transmit a variety of
traffic patterns while providing a simple and useful way to describe these patterns to the network.
User-Provider Agreement on Traffic Patterns: When setting up a flow, the user and the network
provider agree on a specific traffic pattern or shape for that flow. This agreement, sometimes
referred to as a Service Level Agreement (SLA), outlines the expected transmission pattern for the
flow. An SLA may cover aggregate flows and extended periods (e.g., all traffic for a specific
customer). As long as the customer adheres to the agreed-upon contract, the provider promises to
deliver the packets in a timely fashion.
Traffic Policing: To enforce the SLA, the network provider needs to monitor traffic flows. Traffic
policing involves examining packets to ensure they conform to the agreed-upon traffic pattern.
Excess packets beyond the agreed pattern may be dropped or marked with lower priority. Shaping
and policing are crucial for real-time data, such as audio and video connections, which have
stringent quality-of-service requirements. While less critical for peer-to-peer transfers that
consume all available bandwidth, they are essential for maintaining QoS in time-sensitive
applications.

Leaky and Token Buckets


These algorithms are valuable in maintaining network performance by preventing bursts of traffic
that could overwhelm the network and lead to congestion. They find applications in both host-
level traffic shaping and network-level traffic policing, contributing to the effective management
of data flows within a network.

Leaky Bucket Algorithm:


1. Conceptualized as a bucket with a small hole at the bottom.
2. Water (representing packets) is continuously added to the bucket.
3. The outflow rate is constant at R as long as there is water in the bucket.
4. Once the bucket reaches its capacity (B), any additional water spills over and is lost.
5. Used to shape or police packets entering the network.
6. Each host is connected to the network through an interface containing a leaky bucket.


7. If a packet arrives when the bucket is full, it either waits until enough water leaks out
to accommodate it or is discarded.
8. Useful for shaping traffic at the host level or policing at the network interface.
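The behavior described above can be sketched in code (class and parameter names are illustrative assumptions):

```python
# Leaky-bucket sketch: the bucket drains at a constant rate R; an arriving
# packet is admitted only if it fits within the remaining capacity B,
# otherwise it is discarded (the "spilled water").

class LeakyBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # R: constant outflow, units per tick
        self.capacity = capacity    # B: bucket size
        self.level = 0.0            # current water level

    def tick(self, dt=1.0):
        """Water leaks out at the constant rate while time passes."""
        self.level = max(0.0, self.level - self.rate * dt)

    def offer(self, size):
        """Try to admit a packet of the given size."""
        if self.level + size <= self.capacity:
            self.level += size      # packet accepted into the bucket
            return True
        return False                # bucket full: packet dropped (policing)

b = LeakyBucket(rate=1.0, capacity=3.0)
accepted = [b.offer(2.0), b.offer(2.0)]   # second packet overflows
b.tick()                                  # one unit leaks out
accepted.append(b.offer(2.0))             # now it fits
```

In a shaping (rather than policing) variant, the rejected packet would be queued until enough water has leaked out instead of being discarded.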

Token Bucket Algorithm:


1. Conceptualized as a bucket being filled with tokens at a constant rate (R).
2. The bucket has a capacity of B tokens.
3. To send a packet, a fixed number of tokens (e.g., one token per packet, or one per byte) must be taken out of the bucket.
4. If the bucket is empty, the sender must wait until more tokens accumulate before sending another packet.
5. Equivalent to the leaky bucket algorithm but with a different conceptualization.
6. The tap (token generator) controls the rate at which tokens are added to the bucket.
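A corresponding token-bucket sketch (again, names are illustrative assumptions) highlights the key difference in behavior: an idle flow saves up tokens and can later send a burst of up to B.

```python
# Token-bucket sketch: tokens accumulate at rate R up to capacity B;
# sending a packet consumes tokens, so saved-up credit allows bursts.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # R: tokens added per tick
        self.capacity = capacity    # B: maximum stored tokens
        self.tokens = capacity      # start with a full bucket

    def tick(self, dt=1.0):
        """The tap adds tokens while time passes, capped at B."""
        self.tokens = min(self.capacity, self.tokens + self.rate * dt)

    def try_send(self, cost=1.0):
        """Send a packet if enough tokens are available."""
        if self.tokens >= cost:
            self.tokens -= cost     # spend tokens for this packet
            return True
        return False                # must wait for more tokens

tb = TokenBucket(rate=1.0, capacity=2.0)
burst = [tb.try_send(), tb.try_send(), tb.try_send()]  # burst of B, then blocked
tb.tick()
burst.append(tb.try_send())                            # a new token has accrued
```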

Packet Scheduling
Packet scheduling is significant because it ensures a fair distribution of resources and prevents degradation of Quality of Service in computer networks. Packet scheduling involves allocating router resources among the packets of a flow and between competing flows. It also ensures that sufficient resources are reserved along the route so that packets can follow a specific path, similar to a virtual circuit. Various algorithms have been proposed to address the challenges posed by different traffic patterns and user behaviors.
Three types of resources that can be reserved for different flows are
1. Bandwidth
2. Buffer space
3. CPU cycles.

• Bandwidth reservation prevents oversubscription of output lines.


• Buffer space absorbs small bursts of traffic, preventing packet loss.
• CPU cycles ensure timely processing of packets, particularly those requiring more processing.

1. FIFO Scheduling Algorithm:


FIFO (First-In First-Out) scheduling is a straightforward algorithm where packets are sent in
the order they arrived. Tail drop occurs when newly arriving packets are dropped if the queue
is full. FIFO scheduling is simple but not well-suited for providing good QoS, especially in the
presence of multiple flows. Aggressive flows can impact the performance of other flows,
leading to reduced QoS.

2. Fair Queueing Algorithm:


Fair queueing algorithm, proposed by Nagle, uses separate queues for each flow.
When the line becomes idle, the router scans the queues round-robin, allowing each flow to
send one packet in turn. Aims to provide fairness by ensuring that all flows get to send packets
at the same rate.


The fair queueing algorithm treats all hosts equally, providing them with the same priority. In
practical scenarios, there is often a need to prioritize certain hosts, such as video servers over
file servers.

3. Weighted Fair Queueing (WFQ):


WFQ extends fair queueing by allowing hosts to have different weights, reflecting their priority.
Video servers, for example, may be assigned a higher weight, giving them more bandwidth
compared to other servers.
The weight (W) of a flow is the number of bytes per round allocated to that flow.

The finish time (Fᵢ) for packet i of a flow is computed using the formula:
Fᵢ = max(Aᵢ, Fᵢ₋₁) + Lᵢ / W
where Aᵢ is the arrival time of packet i, Fᵢ₋₁ is the finish time of the previous packet of the same flow, Lᵢ is the length of packet i, and W is the weight of the flow.
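The finish-time bookkeeping for a single flow can be sketched as follows (function and variable names are illustrative; it uses the standard relation Fᵢ = max(Aᵢ, Fᵢ₋₁) + Lᵢ/W):

```python
# WFQ finish-time sketch: compute virtual finish times for one flow.
# Packets across all flows are then transmitted in order of increasing
# finish time, so a heavier weight W drains a flow's bytes faster.

def finish_times(packets, weight, prev_finish=0.0):
    """packets: list of (arrival_time, length) tuples for one flow."""
    out = []
    f = prev_finish
    for arrival, length in packets:
        f = max(arrival, f) + length / weight  # F_i = max(A_i, F_{i-1}) + L_i/W
        out.append(f)
    return out

# A flow with weight 2 finishes each packet "twice as fast" in virtual time
fast = finish_times([(0, 100), (0, 100)], weight=2.0)   # [50.0, 100.0]
slow = finish_times([(0, 100), (0, 100)], weight=1.0)   # [100.0, 200.0]
```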

• WFQ requires packets to be inserted into a sorted queue based on their finish times.
• This is at best an O(log N) operation per packet with N flows, which can be challenging for high-speed routers.
• An approximation called deficit round robin, described by Shreedhar and Varghese, offers
efficient implementation with O(1) operations per packet.
• WFQ can often provide a better alternative to priority scheduling.


• By assigning a large weight to the high-priority queue, high-priority packets can still go
through a short line, but low-priority packets continue to be sent during high-priority traffic.

Question Bank (Network Layer)


1. What are the primary design goals and responsibilities of the network layer?
2. Explain the concept of routing and its significance in network layer design.
3. What is store-and-forward packet switching, and how does it differ from other switching
techniques?
4. Describe the advantages and disadvantages of store-and-forward packet switching.
5. Explain the term "packet switching delay" in the context of store-and-forward switching.
6. What services does the network layer provide to the transport layer, and why are these
services important?
7. Compare and contrast the services offered by the network layer with those of the data link
layer.
8. How does the network layer handle routing and forwarding of packets to their destinations?
9. What is a connectionless service in the network layer, and when is it typically used?
10. Describe the steps involved in implementing a connectionless service in a network.
11. Discuss the advantages and disadvantages of connectionless service in comparison to
connection-oriented service.
12. What is a connection-oriented service, and in what scenarios is it beneficial?
13. Explain the steps involved in establishing and releasing a connection in a connection-
oriented network.
14. How does connection-oriented service handle packet delivery and reordering?
15. Define virtual-circuit and datagram networks and highlight their key characteristics.
16. Compare the advantages and disadvantages of virtual-circuit and datagram network
architectures.
17. Explain the concept of the Optimality Principle in routing. How does it influence routing
decisions?
18. Describe Dijkstra's algorithm for finding the shortest path in a network. What are its
limitations?
19. How does the Bellman-Ford algorithm work, and how does it handle negative-weight
edges?
20. What is flooding in the context of routing? When is it used, and what are its advantages
and disadvantages?
21. How can network loops be prevented when using flooding as a routing mechanism?
22. Explain the principles behind distance vector routing algorithms. Provide examples of such
algorithms.
23. What is the Bellman-Ford equation, and how is it used in distance vector routing?

24. Describe the fundamentals of link state routing. How do routers exchange link state
information?
25. What is the SPF (Shortest Path First) algorithm, and how is it used in link state routing?
26. What is hierarchical routing, and why is it important in large-scale networks?
27. Discuss the advantages of using hierarchical addressing and routing.
28. What is the basic broadcasting method, and why is it considered inefficient for sending
messages to many or all other hosts?
29. How does multi-destination routing improve upon basic broadcasting, and what role do
routers play in this approach?
30. What is flooding as a broadcast routing technique, and why is it considered a better
approach?
31. What is Reverse Path Forwarding (RPF), and how does it address the limitations of basic
broadcasting and multi-destination routing?
32. Walk through the steps of the Reverse Path Forwarding (RPF) process when a broadcast
packet arrives at a router.
33. Explain the concept of multicast routing. How is it different from unicast and broadcast
routing?
34. What are the challenges in multicast routing, especially in terms of building efficient
multicast trees?
35. What is multicasting, and how does it address the inefficiencies of sending distinct packets
to each receiver or using broadcasting for large groups?
36. Define Multicast Routing. How does it differ from unicast and broadcast routing in network
communication?
37. Discuss the importance of group management in multicast communication. What
mechanisms are necessary for creating, destroying, and identifying multicast groups?
38. Explain the concept of a spanning tree in the context of multicast routing. How does the
structure of the spanning tree depend on whether the multicast group is dense or sparse?
39. In what scenarios is pruning the spanning tree essential in multicast routing? What purpose
does it serve, and how does it contribute to efficient network utilization?
40. Describe the challenges and considerations involved in optimizing the spanning tree for
sparse group multicast routing. Why is this optimization crucial?
41. Examine the network configuration in Figure 5-16a. How are routers connected to hosts,
and what is the significance of multicast groups, Group 1 and Group 2, in this context?
42. Analyze the initial spanning tree in Figure 5-16b. Why is this tree suitable for broadcast
but not optimal for multicast?
43. Explain the process of pruning the spanning trees for multicast, as illustrated in Figures 5-
16c and 5-16d. What links are removed, and how does it enhance the efficiency of the
multicast spanning tree for Group 1 and Group 2?
44. What are the benefits of the pruned multicast spanning trees for Group 1 and Group 2 in
terms of the number of links?
45. Discuss the role of network configuration in influencing the efficiency of multicast routing.
How does the arrangement of routers and hosts impact the optimization of multicast
spanning trees?


46. How does multicast routing contribute to optimizing bandwidth usage while delivering
packets to multiple group members? Compare this to other routing methods in the context
of group communication.
47. What is anycast routing, and when is it used in network design?
48. Describe the process of routing packets to the nearest or best anycast node.
49. What is the primary routing challenge for mobile hosts, and why is it essential for the
network to locate the host's current position?
50. Explain the model of mobile hosts, highlighting the concept of permanent home locations.
How does the routing goal differ for mobile hosts compared to stationary hosts?
51. Discuss the key components of the model of mobile hosts. What role does the home agent
play in facilitating communication with mobile hosts?
52. Describe the responsibilities of the home agent when it comes to forwarding packets for
mobile hosts. How does the home agent ensure the delivery of packets to the current
location of the mobile host?
53. Examine the scenario where packets destined for a mobile host are sent to the home agent.
Why is this step necessary, and how does it contribute to successful packet delivery to a
mobile host?
54. How do mobile IP protocols enable routing for hosts that change their network attachment
points?
55. What are the key components of mobile IP, and how do they work together?
56. How do mobile IP protocols enable routing for hosts that change their network attachment
points?
57. What are the key components of mobile IP, and how do they work together?
58. What is congestion control in computer networks, and why is it important?
59. Explain the key approaches to congestion control in computer networks.
60. Compare and contrast open-loop and closed-loop congestion control algorithms.
61. Describe the basic principles behind TCP congestion control algorithms, such as TCP Reno
or TCP Vegas.
62. What is traffic-aware routing, and how does it contribute to congestion control?
63. Explain how traffic-aware routing algorithms can dynamically adapt to network conditions.
64. What is admission control in the context of network management, and why is it necessary?
65. Discuss the role of admission control in preventing network congestion and maintaining
QoS.
66. How does traffic throttling work, and what is its purpose in congestion control?
67. Provide examples of situations where traffic throttling might be applied.
68. Describe load shedding as a congestion control mechanism. When is it typically used?
69. What factors are considered when deciding which traffic to shed during congestion?
70. Explain the concept of Quality of Service (QoS) in computer networks.
71. What are the key metrics used to measure QoS, and how are they defined?
72. How does QoS benefit applications with different requirements in a network?
73. Discuss how the specific requirements of applications (e.g., voice, video, data) impact QoS
design.


74. Provide examples of applications that demand low latency, high bandwidth, or other QoS
parameters.
75. Describe the concept of packet scheduling in QoS management.
76. How do different packet scheduling algorithms, such as Weighted Fair Queuing (WFQ)
and Priority Queuing, work?
77. What is Integrated Services (IntServ) in the context of QoS provisioning?
78. How does RSVP (Resource Reservation Protocol) contribute to IntServ?
79. Explain the principles of Differentiated Services (DiffServ) in QoS management.
80. How are Differentiated Services Code Points (DSCPs) used to mark and classify packets?
