CN Module 3 Network Layer Notes 2024
MODULE 3
THE NETWORK LAYER
Introduction
The network layer is responsible for ensuring packets travel from the source to the destination.
This often involves multiple hops through intermediate routers. Therefore, the network layer
handles end-to-end transmission.
To accomplish its tasks, the network layer must understand the network's topology, select
appropriate paths, and manage traffic to prevent congestion. It also addresses challenges when the
source and destination are in different networks. This chapter focuses primarily on the Internet
and its network layer protocol, IP.
When a host has a packet to send, it sends it to the nearest router, either on its own LAN or via a
point-to-point link to the ISP. The router stores the packet until it's fully received and processed
(including checksum verification), after which it forwards the packet to the next router along the
path. This process continues until the packet reaches its destination host, where it is delivered. This
mechanism is known as store-and-forward packet switching.
Computer Networks
In a datagram network, let's consider the process of transmitting a long message from process P1
on host H1 to process P2 on host H2. Here's how it works:
1. Process P1 hands the message to the transport layer, which adds a transport header to the
message. This transport layer code runs on H1.
2. The resulting data is then passed to the network layer, typically another procedure within
the operating system.
3. Since the message is longer than the maximum packet size, it's divided into four packets:
1, 2, 3, and 4. Each packet is sent one after the other to router A using a point-to-point
protocol like PPP.
4. Within the ISP's network, each router has a routing table indicating where to send packets
for each possible destination. These tables contain pairs of destinations and the outgoing
lines to use.
5. Router A initially has a routing table as shown in the figure. Packets 1, 2, and 3 are briefly
stored on arrival and their checksums are verified. Then they are forwarded according to A's
table, reaching H2 via routers E and F.
6. However, something different happens to packet 4. Router A sends it to router B instead of
following the same route as the first three packets. This change in routing might occur if A
learns of a traffic jam along the initial route and updates its routing table.
7. The algorithm responsible for managing routing tables and making routing decisions is
called the routing algorithm, a central topic in this chapter. Different types of routing
algorithms exist.
8. IP (Internet Protocol), the foundation of the Internet, is a prime example of a connectionless
network service. Each packet carries a destination IP address, allowing routers to forward
packets individually. IPv4 packets have 32-bit addresses, while IPv6 packets have 128-bit
addresses.
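The per-packet forwarding described above can be sketched as a simple table lookup. This is only an illustrative sketch; the table contents, router names, and dictionary layout are assumptions, not a real router implementation.

```python
# Sketch of per-packet forwarding in a datagram network.
# The table mirrors the router-A example above; names are illustrative.

routing_table_A = {      # destination -> outgoing line
    "H1": "local",
    "H2": "E",           # initial route: forward via router E
    "B":  "B",
    "C":  "C",
}

def forward(table, packet):
    """Each packet is looked up independently by its destination."""
    return table[packet["dst"]]

# Because every packet is routed on its own, if A learns of congestion
# and updates its table, later packets (like packet 4) take a new route.
routing_table_A["H2"] = "B"
```

This per-packet independence is exactly what lets packet 4 diverge from packets 1 to 3 in the example.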
1. In a connection-oriented (virtual-circuit) service, instead of choosing a new route for every
packet as in connectionless service, a route from the source to the destination is selected
and stored in tables within the routers when a connection is established. This chosen route
is then used for all data traffic over that connection, similar to how the telephone system
operates.
2. When the connection is terminated, the virtual circuit is also released. Each packet in a
connection-oriented service carries an identifier indicating which virtual circuit it belongs
to.
3. For example, if host H1 establishes connection 1 with host H2, this connection is recorded
as the first entry in the routing tables of each router along the route. Packets from H1 to H2
are then directed through this established route.
4. If another host, say H3, wants to establish a connection to H2, it chooses connection
identifier 1 and requests the network to establish a new virtual circuit. This leads to the
creation of a second entry in the routing tables.
5. To avoid conflicts, routers need the ability to replace connection identifiers in outgoing
packets. This process is sometimes called label switching. An example of a connection-
oriented network service is MPLS (Multi-Protocol Label Switching), used within ISP
networks. MPLS uses a 20-bit connection identifier or label to route traffic efficiently.
6. MPLS is often employed within ISP networks to establish long-term connections for large
traffic volumes. It helps ensure quality of service and assists with various traffic
management tasks, even though it's often transparent to customers.
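The label-switching idea in steps 4 and 5 above can be sketched as a table keyed by (incoming line, incoming label). The line names and label values below are illustrative assumptions, not MPLS specifics.

```python
# Minimal sketch of label switching at one router. Two circuits that
# both chose connection identifier 1 (as H1 and H3 do above) are told
# apart by their incoming line and relabeled on the way out.

vc_table = {
    ("from_H1", 1): ("to_C", 1),   # H1's connection keeps label 1
    ("from_H3", 1): ("to_C", 2),   # H3 also picked 1; relabel to 2
}

def switch(table, in_line, in_label):
    """Replace the connection identifier in an outgoing packet."""
    out_line, out_label = table[(in_line, in_label)]
    return out_line, out_label
```

Relabeling at each hop is what avoids the conflict when two hosts independently pick the same identifier.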
Inside a network, there are trade-offs between virtual circuits and datagrams:
• For transaction processing systems, setting up and clearing virtual circuits may be
too costly compared to their use. However, for long-running applications like
VPNs, permanent virtual circuits can be valuable.
6. Vulnerability:
• Virtual circuits are vulnerable to disruptions if a router crashes and loses memory,
requiring all circuits to be aborted. Datagram networks are more resilient in this
regard.
7. Network Load Balancing:
• Datagram networks allow routers to balance traffic by changing routes during a
sequence of packet transmissions, providing flexibility compared to fixed virtual
circuits.
• Routing Algorithm: It decides which output line an incoming packet should be sent on. For
datagram networks, this decision is made for every data packet, while virtual-circuit networks
determine routes during setup and reuse them. This process can be divided into routing (route
selection) and forwarding (packet arrival handling).
• Desirable Properties: Routing algorithms should be correct, simple, robust (able to handle
failures and topology changes), stable (converging to fixed paths), fair, and efficient.
• Trade-offs: Efficiency and fairness can sometimes conflict in routing decisions, requiring a
balance. Networks often optimize goals like minimizing packet delay, maximizing total
throughput, or reducing the number of hops a packet travels.
• Routing Algorithm Classes: Routing algorithms are categorized as nonadaptive (pre-
computed routes) and adaptive (responding to changes in topology and traffic). Adaptive
algorithms can differ in information sources, frequency of route changes, and optimization
metrics.
• Sink Tree: The sink tree is a tree-like structure formed by the optimal routes from all sources
to a specific destination, in accordance with the optimality principle. Sink trees play a central
role in routing algorithms, helping route packets efficiently (see Fig. (b)).
• Directed Acyclic Graphs (DAGs): Sink trees can extend to become DAGs if all possible
paths are considered, allowing for more flexibility in route selection. However, the
fundamental structure of a sink tree is retained in DAGs.
• Network Dynamics: In practice, network dynamics, such as link and router failures, can
affect the stability and accuracy of sink trees. Routers may have varying views of the
current topology, leading to dynamic adjustments in the sink tree.
Several algorithms are available for calculating the shortest path between two nodes in a graph.
The one we'll discuss is attributed to Dijkstra (1959) and is utilized to find the shortest paths from
a source to all destinations within the network. Each node is assigned a label (in parentheses)
indicating its distance from the source node along the best-known path. Distances must always be
non-negative, particularly when they are based on real factors like bandwidth and delay.
Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm progresses
and identifies paths, labels may change to reflect better routes. Labels can be either tentative or
permanent, with all labels beginning as tentative. Once it's determined that a label represents the
shortest possible path from the source to a particular node, it becomes permanent and remains
unchanged thereafter.
In this algorithm illustration, we'll demonstrate the process using a weighted, undirected graph
(Fig. 5-7a), where the weights represent parameters like distance. Our objective is to discover the
shortest path from A to D. Here's how it works:
1. We begin by marking node A as permanent (indicated by a filled-in circle).
2. Next, we examine each node adjacent to A, one by one (the working node). We relabel each
of these nodes with the distance to A. Additionally, we record the node from which the
probe was made to reconstruct the final path later. If multiple shortest paths exist from A
to D, we must remember all the probe nodes that could reach a node with the same distance.
3. Having examined all nodes adjacent to A, we then inspect all tentatively labeled nodes
across the entire graph. We make the one with the smallest label permanent (as shown in
Fig. 5-7b), and this node becomes the new working node.
4. We repeat the process, starting from node B and examining all nodes adjacent to it. If the
sum of the label on B and the distance from B to the considered node is less than the label
on that node, we have a shorter path, so the node is relabeled.
5. After inspecting all nodes adjacent to the working node and updating tentative labels if
necessary, we search the entire graph for the tentatively labeled node with the smallest
value. This node becomes permanent and serves as the working node for the next round.
Fig. 5-7 illustrates the first six steps of the algorithm.
To understand why this algorithm works, consider Fig. 5-7c. At this point, we've made E
permanent. Now, suppose there were a shorter path than ABE, let's say AXYZE (with X and Y as
intermediate nodes). There are two possibilities: Either node Z has already been made permanent,
or it hasn't. If it has, then E has already been probed in a previous round when Z became permanent,
so the AXYZE path has been considered, and it cannot be shorter.
Now, let's look at the case where Z is still tentatively labeled. If the label at Z is greater than or
equal to that at E, then the AXYZE path cannot be shorter than ABE. If the label is less than that
of E, then Z will become permanent before E, allowing E to be probed from Z.
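The tentative/permanent labeling procedure above can be sketched in code. The graph and its weights are illustrative, not the exact values of Fig. 5-7.

```python
import heapq

def dijkstra(graph, source):
    """Shortest paths from source, in the style described above: each
    round, the smallest tentative label is made permanent.
    graph: {node: {neighbor: weight}} with non-negative weights."""
    dist = {n: float("inf") for n in graph}   # all labels start tentative
    prev = {n: None for n in graph}           # probe node, to rebuild paths
    dist[source] = 0
    heap = [(0, source)]
    permanent = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)                      # label becomes permanent
        for v, w in graph[u].items():
            if d + w < dist[v]:               # shorter path found: relabel
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Toy weighted, undirected graph (weights are illustrative):
g = {"A": {"B": 2, "G": 6}, "B": {"A": 2, "E": 2}, "G": {"A": 6, "E": 1},
     "E": {"B": 2, "G": 1, "D": 2}, "D": {"E": 2}}
dist, prev = dijkstra(g, "A")   # dist["D"] == 6 along A-B-E-D
```

Following the `prev` pointers back from D reconstructs the path, just as the recorded probe nodes do in the text.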
(Worked example figure: shortest-path computation with source node 1.)
3. Flooding
Routing algorithms often rely on local knowledge rather than a complete network view. One basic
local technique is "flooding," where every incoming packet is sent out on all outgoing lines except
the one it arrived on.
To control the potential chaos of flooding, a hop counter is added to each packet's header,
decrementing with each hop until the packet is discarded when the counter hits zero. Ideally, the
hop counter should be initialized to the estimated path length from source to destination.
To further manage flooding, routers keep track of flooded packets to avoid duplication. This can
be achieved by having source routers include sequence numbers in their packets and maintaining
lists of seen sequence numbers for each source.
While not practical for most packets, flooding has its uses. It ensures delivery to every node in the
network, making it useful for broadcasting information and maintaining robustness, even in
challenging conditions. Flooding's minimal setup requirements also make it a foundational element
for more efficient routing algorithms and a valuable metric for performance comparisons. Flooding
inherently chooses the shortest path, as it explores all possible paths simultaneously.
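The two damping mechanisms above, a hop counter and per-source sequence numbers, can be sketched for a single router. The packet fields and line names are illustrative assumptions.

```python
# Sketch of controlled flooding at one router: decrement the hop
# counter, drop duplicates via per-source sequence numbers, and copy
# the packet onto every line except the one it arrived on.

seen = {}   # source -> set of sequence numbers already flooded

def flood(packet, arrival_line, all_lines):
    """Return the lines the packet should be copied onto (maybe none)."""
    packet["hops"] -= 1
    if packet["hops"] <= 0:                      # counter hit zero: discard
        return []
    nums = seen.setdefault(packet["src"], set())
    if packet["seq"] in nums:                    # already flooded: discard
        return []
    nums.add(packet["seq"])
    return [l for l in all_lines if l != arrival_line]
```

Ideally `hops` starts at the estimated source-to-destination path length, so copies die out soon after delivery.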
4. Distance Vector Routing
Distance Vector Routing is a common dynamic routing algorithm used in computer networks. It
operates by having each router maintain a table, known as a vector, which contains information
about the best-known distance to each destination and the appropriate link to reach it. These tables
are continually updated through communication with neighboring routers, allowing each router to
determine the optimal path to every destination.
In distance vector routing, each router's routing table includes entries for every router in the
network. Each entry consists of the preferred outgoing line for that destination and an estimate of
the distance to reach it. The distance can be measured in various ways, such as the number of hops
or other metrics like propagation delay, which can be determined using specialized ECHO packets.
For instance, let's consider using delay as a metric, and assume that each router knows the delay
to its neighboring routers. Periodically, let's say every T milliseconds, each router shares a list of
its estimated delays to reach various destinations with its neighbors. In return, it receives similar
lists from its neighbors.
For instance, if a router receives a table from neighbor X with Xi representing X's delay estimates
to different routers, and it already knows the delay to X is m milliseconds, it can calculate that it
can reach router i via X in Xi + m milliseconds. By performing this calculation for each neighbor's
estimates, the router can determine the best estimate and corresponding link to update its routing
table. Notably, the old routing table is not used in this calculation.
This updating process is demonstrated in Figure 5-9. Part (a) depicts a network, and part (b)
illustrates delay vectors received from router J's neighbors. Each neighbor claims different delay
values to various destinations. Assuming J has its own delay estimates to its neighbors A, I, H, and
K as 8, 10, 12, and 6 milliseconds, respectively, it calculates its new route to router G. For example,
J knows it can reach A in 8 milliseconds, and A claims an 18-millisecond delay to G. Thus, J can
expect a 26-millisecond delay to G if it forwards packets through A. It performs similar
calculations for other neighbors and destinations, updating its routing table accordingly.
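Router J's calculation can be sketched as below. Only J's neighbor delays (8, 10, 12, 6 ms) and A's 18 ms claim to G come from the text; the other claimed delays are made-up placeholders for illustration.

```python
# Sketch of router J's distance-vector update for destination G.

delay_to_neighbor = {"A": 8, "I": 10, "H": 12, "K": 6}   # J's own measurements
neighbor_vectors = {           # neighbor -> {destination: claimed delay}
    "A": {"G": 18},            # from the text: via A costs 8 + 18 = 26 ms
    "I": {"G": 31},            # placeholder values
    "H": {"G": 6},
    "K": {"G": 31},
}

def best_route(dest):
    """New estimate: min over neighbors of (delay to neighbor +
    neighbor's claimed delay). The old routing table is not consulted."""
    candidates = {n: delay_to_neighbor[n] + vec[dest]
                  for n, vec in neighbor_vectors.items() if dest in vec}
    via = min(candidates, key=candidates.get)
    return via, candidates[via]
```

With these placeholder vectors the minimum is via H, and the (neighbor, delay) pair becomes J's new table entry for G.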
Bellman-Ford Routing
The update is typically done using the Bellman-Ford equation, where the new distance to a
destination is calculated as the sum of the cost to the neighbor and the neighbor's distance to the
destination.
The process of exchanging and updating continues until the routing tables converge, meaning that
no further changes are needed.
To formalize, let c(x, v) be the cost from router x to its neighbor v, and let d_v(y) be v's current
estimate of its distance to destination y. The Bellman-Ford equation gives x's new estimate as

    d_x(y) = min over all neighbors v of [ c(x, v) + d_v(y) ]

(Fig. 5-10(b) shows the exchange history for the same line of routers after the A–B link breaks.)
Now consider the scenario in Fig. 5-10(b), where all links and routers are initially operational.
Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops, respectively. Suddenly, either A
goes down or the A-B link is severed (effectively the same from B's perspective).
In the first exchange, B doesn't receive any news from A. Unfortunately, C reports having a 2-hop
path to A. However, B doesn't know that C's path includes itself, so it believes the path to A via C
is 3 hops long. D and E don't update their entries for A in the first exchange.
In the second exchange, C realizes that its neighbors all claim to have 3-hop paths to A, so it
randomly picks one and updates its distance to A as 4 hops. Subsequent exchanges follow the
history shown in the figure.
This scenario illustrates why bad news travels slowly: no router's value exceeds the minimum of
its neighbors' values by more than one. Gradually, all routers increase their distance to infinity, but
the number of exchanges required depends on the value set for infinity. Therefore, setting infinity
to the longest path plus 1 is a prudent choice.
This problem is known as the count-to-infinity problem, and various attempts have been made to
solve it, like using heuristics such as the split horizon with poisoned reverse rule discussed in RFC
1058. However, none of these heuristics work well in practice due to the fundamental challenge:
routers can't determine if they are on a path when another router advertises it.
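The slow climb described above can be reproduced with a tiny simulation of the B-C-D-E line after A goes down. The synchronous-update model is a simplifying assumption.

```python
# Count-to-infinity on the line B-C-D-E measuring hop counts to A,
# after A goes down. Each exchange, every router synchronously sets its
# distance to 1 + min(neighbors' distances), capped at INFINITY.

INFINITY = 5                                 # longest path (4) plus 1
dist = {"B": 1, "C": 2, "D": 3, "E": 4}      # hops to A before the break
neighbours = {"B": ["C"], "C": ["B", "D"], "D": ["C", "E"], "E": ["D"]}

exchanges = 0
while any(d < INFINITY for d in dist.values()):
    dist = {r: min(INFINITY, 1 + min(dist[n] for n in neighbours[r]))
            for r in dist}
    exchanges += 1
# No router's value ever exceeds the minimum of its neighbours' values
# by more than one, so everyone counts up to INFINITY a step at a time.
```

Setting INFINITY to the longest path plus 1 keeps the number of wasted exchanges small, which is exactly the prudent choice the text recommends.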
5. Link State Routing
Link State Routing replaced Distance Vector Routing in the ARPANET in 1979 because the count-
to-infinity problem caused slow convergence after network topology changes. Link state routing
forms the basis of widely used routing protocols such as IS-IS and OSPF. The concept comprises
five key steps for routers to function effectively:
1. Neighbor Discovery: Routers identify their neighboring devices and learn their network
addresses.
2. Distance Metric Assignment: Each router assigns distance or cost metrics to its
neighboring routers.
3. Packet Construction: Routers construct packets containing the information they've
gathered.
4. Packet Exchange: These packets are sent to and received from all other routers in the
network.
5. Shortest Path Computation: Routers calculate the shortest path to reach every other
router in the network.
Learning about the Neighbors: When a router boots up, it identifies its neighbors by sending
HELLO packets over point-to-point connections, receiving replies with unique neighbor names.
For broadcast links (e.g., Ethernet), the setup is more complex. Instead of modeling each link
separately, a LAN is treated as a single node. This node, with a designated router, connects routers
like A, C, and F. This simplifies the topology and reduces message overhead. Connectivity between
routers, like A and C, is represented as a path through the LAN node N, such as A–N–C.
Setting Link Costs: In the link state routing algorithm, each link must have a distance or cost
metric for calculating the shortest paths. These metrics can be automatically determined or
configured by the network operator. Typically, cost is inversely proportional to link bandwidth,
making higher-capacity links preferable. For geographically dispersed networks, link delay can be
factored into the cost, measured using Round-Trip Time with special ECHO packets, ensuring
shorter links are favored in path selection.
Building Link State Packets: In the link state routing algorithm, each router creates a packet
containing sender identity, sequence number, age, and neighbor list with associated costs. Packet
creation is straightforward; the challenge lies in determining when to build these packets. They
can be generated periodically or triggered by significant events such as a line or neighbor status
change (Fig. 5-11).
Distributing the Link State Packets: Distributing link state packets is a crucial aspect of the
algorithm. To ensure all routers quickly and reliably receive these packets, a flooding approach is
used. Each packet contains a sequence number that is incremented with each new transmission,
preventing duplicates. However, to address potential issues, packet age is also included and
decremented periodically. Once the age reaches zero, the information is discarded. This technique
ensures timely updates and prevents obsolete data. Additionally, some refinements include a
holding area for incoming packets and the comparison of sequence numbers to handle duplicates
and errors, with all link state packets acknowledged to enhance robustness.
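The sequence-number and age check a router applies to an arriving link state packet can be sketched as below. The database layout and timeout value are illustrative assumptions.

```python
import time

# Sketch of the accept/discard decision for an arriving link state
# packet, before flooding it onward.

lsp_db = {}   # source router -> (highest sequence number seen, expiry time)

def accept_lsp(src, seq, max_age=60.0):
    """Flood only packets newer than what we already have; age out
    stale entries so obsolete data is eventually discarded."""
    now = time.time()
    entry = lsp_db.get(src)
    if entry and now < entry[1] and seq <= entry[0]:
        return False                   # duplicate or older: discard
    lsp_db[src] = (seq, now + max_age)
    return True
```

The age field is what lets the network recover if a router reboots and restarts its sequence numbers.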
• Router B uses a data structure, as shown in Fig. 5-13, to manage link state packets. Each
row in this structure represents an incoming packet with information about its origin,
sequence number, age, and data. It also includes flags for sending and acknowledging
packets on B's three links (to A, C, and F).
• For example, when a packet arrives directly from A, it's sent to C and F and acknowledged
to A as indicated by the flag bits. Similarly, a packet from F is forwarded to A and C and
acknowledged to F.
• However, if a packet, like the one from E, arrives through multiple paths (EAB and EFB),
it's sent only to C but acknowledged to both A and F, reflecting the bit settings.
• In cases of duplicate packets arriving while the original is still in the buffer, the flag bits
are adjusted. For instance, if a copy of C's state arrives from F before the original is
forwarded, the bits are changed to indicate acknowledgment to F but no sending.
Computing the New Routes: In link state routing, once a router has collected all link state packets
representing the entire network graph, it can construct a comprehensive view of the network,
considering links in both directions with potential different costs. Dijkstra's algorithm is then
employed locally to compute the shortest paths to all destinations, informing the router of the
preferred link for each destination, which is added to the routing tables.
Compared to distance vector routing, link state routing demands more memory and computation,
with memory needs proportional to k*n (n = number of routers, k = average neighbors), potentially
exceeding the size of a routing table. However, link state routing offers faster convergence, making
it practical for many scenarios. Actual networks often use link state routing protocols like IS-IS
(Intermediate System-Intermediate System) and OSPF (Open Shortest Path First).
6. Hierarchical Routing
Challenges with Growing Networks:
• As networks increase in size, router routing tables also grow proportionally.
• This leads to increased memory consumption, higher CPU usage, and greater bandwidth
requirements for status reports.
Feasibility Issues:
• Eventually, the network may reach a point where it becomes impractical for each router to have
an entry for every other router.
Hierarchical Routing Solution:
• Hierarchical routing becomes necessary, similar to the structure in the telephone network.
• Routers are divided into regions, and each router has knowledge about routing within its own
region.
Interconnected Networks:
• When different networks are interconnected, each network is treated as a separate region.
• This approach frees routers from having to know the internal structure of other interconnected
networks.
Multilevel Hierarchy:
• For huge networks, a two-level hierarchy might be insufficient.
• Regions can be grouped into clusters, clusters into zones, and so on, creating a multilevel
hierarchy.
Example of Two-Level Hierarchy:
• Figure 5-14 illustrates a quantitative example of routing in a two-level hierarchy with five
regions.
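The table-size saving from a two-level hierarchy can be sketched with back-of-the-envelope arithmetic: each router keeps one entry per router in its own region plus one entry per other region. The 720-router, 24-region numbers are illustrative, not taken from Fig. 5-14.

```python
# Flat routing vs two-level hierarchical routing: rough table sizes.

def table_entries(total_routers, regions):
    routers_per_region = total_routers // regions
    flat = total_routers                          # one entry per router
    hierarchical = routers_per_region + (regions - 1)
    return flat, hierarchical

flat, hier = table_entries(720, 24)   # e.g. 720 routers in 24 regions
# flat = 720 entries, hierarchical = 30 + 23 = 53 entries per router
```

The trade-off is that routes chosen through a region summary may be slightly longer than the true shortest paths.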
7. Broadcast Routing.
• In some applications, hosts need to send messages to many or all other hosts, known as
broadcasting.
• Examples include services distributing weather reports, stock market updates, or live radio
programs.
Basic Broadcasting Method:
• A simple method is for the source to send a distinct packet to each destination.
• This method is inefficient, slow, and requires the source to have a complete list of all
destinations.
Multi-destination Routing:
• Improvement over basic broadcasting.
• Each packet contains a list of destinations or a bitmap indicating desired destinations.
• Routers determine the set of output lines needed and generate new copies of the packet for
each line.
• Effectively partitions the destination set among output lines, making network bandwidth usage
more efficient.
• Still requires the source to know all destinations, and routers face similar workload as with
distinct packets.
Flooding as a Broadcast Routing Technique:
• Flooding is a better broadcast routing technique.
• Uses a sequence number per source and efficiently utilizes links.
• Simple decision rules at routers make flooding suitable for broadcasting.
• Although not ideal for point-to-point communication, it is effective for broadcasting.
Reverse Path Forwarding (RPF):
• A more advanced broadcast routing technique introduced by Dalal and Metcalfe in 1978.
• When a broadcast packet arrives at a router, the router checks if it arrived on the link normally
used for sending packets toward the source.
• If yes, the packet likely followed the best route and is forwarded to all links except the
incoming one.
• If the packet arrived on a different link, it is discarded as a likely duplicate.
• This method efficiently leverages already computed shortest path routes for regular packets.
• RPF provides an elegant and remarkably simple approach for broadcast routing.
• It efficiently utilizes network resources and simplifies decision-making at routers, making it a
preferable choice for broadcasting.
1. Packet Arrival:
• When a broadcast packet arrives at a router, the router needs to examine the incoming
interface of the packet.
2. Check Preferred Outgoing Interface:
• Determine the outgoing interface that is normally used for sending packets toward the
source of the broadcast. This is often the shortest or preferred route.
3. RPF Check:
• Compare the incoming interface of the broadcast packet with the preferred outgoing
interface.
• If the incoming interface matches the preferred outgoing interface:
1. This suggests that the broadcast packet likely followed the best route from the
source.
2. Proceed to the next step for forwarding copies.
4. Forwarding Copies:
• Forward copies of the broadcast packet onto all interfaces except the one it arrived on.
1. This ensures the packet reaches all possible destinations in the network.
5. Duplicate Check:
• If the incoming interface does not match the preferred outgoing interface:
1. Discard the packet as a likely duplicate.
2. This is based on the assumption that the packet arriving on a different interface
might be a duplicate or has taken a less optimal route.
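The five RPF steps above reduce to one comparison per packet. As a sketch under assumed names, `unicast_route` stands for the router's existing shortest-path table toward each source.

```python
# Sketch of the reverse path forwarding (RPF) check at one router.

unicast_route = {"S": "link1"}     # link this router uses to reach source S

def rpf_forward(src, arrival_link, all_links):
    """Forward on all other links only if the packet arrived on the
    link we would use to send toward its source; else drop it."""
    if unicast_route.get(src) != arrival_link:
        return []                                  # likely duplicate
    return [l for l in all_links if l != arrival_link]
```

Nothing new is computed: the check reuses the shortest-path routes already maintained for regular unicast traffic.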
8. Multicast Routing.
Multicasting and Multicast Routing:
• Some applications, like multiplayer games or live sports event streaming, send packets to
multiple receivers.
• Sending distinct packets to each receiver is expensive for large groups, and broadcasting is
wasteful if many recipients are not interested.
• Multicasting: Sending messages to well-defined groups that are numerically large
compared to individual machines but small compared to the entire network.
• Multicast Routing: The routing algorithm used for multicasting.
• Group Management: All multicasting schemes require mechanisms to create, destroy,
and identify groups.
Multicast Routing Schemes:
• The spanning tree depends on whether the group is dense (spread over most of the network)
or sparse (occupying only specific parts).
• Efficiently sending packets along spanning trees to deliver them to group members while
optimizing bandwidth usage.
• Pruning the Spanning Tree: To eliminate wasteful broadcast to non-group members,
prune the broadcast spanning tree by removing links that do not lead to group members.
• Sparse Group Multicast Routing: Optimizing the spanning tree becomes crucial to avoid
unnecessary utilization of resources.
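Pruning the broadcast spanning tree can be sketched as a recursive cut of dead branches. The tree shape and group membership below are illustrative assumptions.

```python
# Sketch of pruning a broadcast spanning tree for multicast: remove
# every branch that leads to no group member.

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
        "D": [], "E": [], "F": []}
members = {"D"}          # only D belongs to the multicast group

def prune(node):
    """Return the pruned subtree rooted at node, or None if no group
    member lies at or below this node."""
    kept = [prune(c) for c in tree[node]]
    kept = [c for c in kept if c is not None]
    if not kept and node not in members:
        return None                    # dead branch: cut it off
    return (node, kept)

pruned = prune("A")   # keeps only the spine A -> B -> D
```

Multicast packets then follow only the surviving edges, which is the bandwidth saving the text describes.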
9. Anycast Routing.
• Unicast: Source sends to a single destination.
• Broadcast: Source sends to all destinations.
• Multicast: Source sends to a group of destinations.
• Anycast: Packet delivered to the nearest member of a group.
Anycast Usage:
• Anycast is used when the specific identity of the destination node is not crucial, and any
available node offering a particular service will suffice.
• Commonly employed in services like DNS (Domain Name System) in the Internet.
• Routing for Anycast: No need for new routing schemes; existing distance vector and link state
routing can produce anycast routes.
• Distance vector routing results in nodes sending to the nearest instance of destination '1.'
• Routing protocol doesn't recognize multiple instances, treating them as the same node.
Topology Consideration (Fig. 5-18b):
• The routing protocol believes that all instances of node 1 are the same node, as shown in the
simplified topology.
• The procedure works because the protocol doesn't realize the existence of multiple instances.
3. Routing Challenges:
• The basic model assumes that each node can communicate with all others within its coverage
circle. Real networks involve obstacles such as buildings and hills, but for simplicity
connections are assumed to be symmetric.
• In the figure, if node A can communicate with B, it is assumed that B can also communicate with
A. AODV maintains a distance vector table at each node, keyed by destination, providing
information about the destination and the neighbor to which packets should be sent to reach it.
• When a node, for example, node A, wants to send a packet to node I and does not find an entry
for I in its table, it initiates route discovery.
1. A ROUTE REQUEST packet is constructed and broadcasted using flooding across the
network.
2. The ROUTE REQUEST packet is transmitted from source node A to neighboring nodes
B and D.
3. Each receiving node rebroadcasts the request, extending the reach of the broadcast to
nodes F, G, C, H, E, and I.
• Sequence numbers are set at the source to prevent duplicate requests during the flood. Nodes
discard duplicate transmissions, ensuring efficient routing information dissemination.
4. Once the ROUTE REQUEST reaches the destination node I, a ROUTE REPLY packet
is unicast back to the sender along the reverse path followed by the request.
• Intermediate nodes remember the sender of the request, facilitating the construction of the
reverse path. Intermediate nodes increase a hop count as they forward the reply, indicating their
distance from the destination.
• The replies guide intermediate nodes on the best route to the destination, updating their routing
tables.
• In large networks, the algorithm generates many broadcasts, even for nearby destinations. To
reduce overhead, the scope of broadcasts is limited using the Time to Live (TTL) field in the
IP packet. The TTL field is decremented on each hop, and if it reaches 0, the packet is
discarded. The route discovery process uses a progressively increasing TTL to search locally
first and then in widening rings for the destination.
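The expanding-ring search above can be sketched as a TTL-limited flood. The topology and TTL schedule below are illustrative; this is a simplified model of AODV route discovery, not the protocol itself.

```python
# Sketch of AODV-style route discovery with an expanding-ring search:
# flood a ROUTE REQUEST with a small TTL first, widening until the
# destination is reached.

graph = {"A": ["B", "D"], "B": ["A", "C"], "D": ["A", "G"],
         "G": ["D", "I"], "C": ["B"], "I": ["G"]}

def route_request(src, dst, ttl):
    """Breadth-first flood limited to ttl hops. Each node remembers who
    it first heard the request from, giving the reverse path the
    ROUTE REPLY retraces. Returns the path, or None if dst not reached."""
    frontier, came_from = [src], {src: None}
    for _ in range(ttl):
        nxt = []
        for node in frontier:
            for nb in graph[node]:
                if nb not in came_from:          # drop duplicate requests
                    came_from[nb] = node
                    nxt.append(nb)
        frontier = nxt
    if dst not in came_from:
        return None
    path, n = [], dst
    while n is not None:                         # retrace the reverse path
        path.append(n)
        n = came_from[n]
    return path[::-1]

for ttl in (1, 2, 3):                            # search in widening rings
    path = route_request("A", "I", ttl)
    if path:
        break
# path == ["A", "D", "G", "I"], first found with ttl == 3
```

Nearby destinations are found by the cheap small-TTL floods, so the expensive network-wide broadcast is a last resort.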
Nodes in an ad hoc network can move or be turned off, leading to spontaneous changes in the
network topology.
1. For example, if node G is switched off in Fig. 5-20, node A may not realize that the route
it was using to reach I (ADGI) is no longer valid.
2. Periodically, each node broadcasts a Hello message, expecting responses from its
neighbors.
3. Lack of response indicates that the neighbor has moved out of range or failed, helping
identify changes in connectivity. Unresponsive neighbors are considered disconnected.
4. Each node keeps track of active neighbors for each destination, recording those that have
communicated in the last ΔT seconds. When a neighbor becomes unreachable, the node
checks its routing table to identify routes via the now-unavailable neighbor.
5. Active neighbors are informed that routes through the unavailable node are invalid and
must be purged from their routing tables.
6. Routes include a sequence number controlled by the destination, functioning like a logical
clock.
7. The destination increments the sequence number with each fresh ROUTE REPLY it sends.
8. Senders request fresh routes by including the destination's sequence number in the ROUTE
REQUEST, enabling rapid convergence after a topology change.
Intermediate Node Handling:
1. Intermediate nodes store routes with higher sequence numbers or the fewest hops for
the current sequence number.
2. Only routes in use are stored, and other route information learned during broadcasts is
timed out after a short delay.
3. This on-demand approach helps conserve bandwidth and battery life compared to
periodic updates in a standard distance vector protocol.
Shared Route Discovery and Maintenance:
4. To optimize resource usage, route discovery and maintenance are shared when routes
overlap.
5. If multiple nodes, such as B, also want to send packets to I, overlapping with the
existing route, they can leverage the information without redundant work.
6. In such cases, the request may reach an intermediate node (e.g., D), which already has
a route to I. Node D can reply to the requesting node (e.g., B) without additional route
discovery efforts.
Figure 5-21
The onset of congestion is depicted in Figure 5-21, where the number of packets sent into the
network affects the number of packets delivered. When the offered load approaches the network's
carrying capacity, occasional bursts of traffic may fill up buffers inside routers, resulting in packet
loss and decreased delivered packets. This signifies congestion, and without proper design, it may
lead to a congestion collapse, causing a plummet in performance as the offered load surpasses the
network's capacity.
Congestion Collapse Factors:
• Delayed Packets: Packets may be delayed inside the network so long that they are no
longer useful by the time they leave, contributing to congestion collapse.
• Retransmission Issues: Senders may retransmit delayed packets, causing duplicate
deliveries and wasting network capacity.
To address these complexities, the concept of goodput, representing the rate at which useful
packets are delivered by the network, is introduced in Figure 5-21. The goal is to design networks
that minimize congestion where possible and can recover gracefully without collapsing under
heavy loads.
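The distinction between raw throughput and goodput can be made concrete with a tiny sketch: if retransmissions cause some packets to be delivered twice, the duplicates count toward throughput but not goodput (the packet identifiers below are illustrative).

```python
def goodput(delivered_packets):
    """Goodput counts each useful packet once; duplicate deliveries caused by
    spurious retransmissions consume capacity but add nothing useful."""
    return len(set(delivered_packets))

# Eight deliveries arrive, but packets 3 and 5 were retransmitted and delivered twice:
delivered = [1, 2, 3, 3, 4, 5, 5, 6]
assert len(delivered) == 8       # throughput: 8 packets crossed the network
assert goodput(delivered) == 6   # goodput: only 6 distinct useful packets
```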
Challenges in Avoiding Congestion:
• Inevitability: Congestion cannot be wholly avoided due to sudden bursts overwhelming
the network, leading to queue buildup and potential packet loss.
• Memory Expansion: Infinite memory in routers may worsen congestion as timed-out
packets and duplicates contribute to the problem (Nagle, 1987).
Traffic-Aware Routing
Objective: Adapt routing schemes to changes in load, in addition to changes in topology.
Goal: Shift traffic away from hotspots to prevent congestion in the network.
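One way to make routing load-sensitive is to fold a load-dependent term into each link's weight. The sketch below assumes an M/M/1-style queueing term and a mixing parameter alpha; both are illustrative choices, and weighting toward the fixed part damps the route oscillations that purely load-based costs can cause.

```python
def link_cost(base_delay, utilization, alpha=0.5):
    """Link weight mixing a fixed part (propagation/bandwidth) with a
    load-dependent queueing part. alpha = 1 ignores load entirely;
    alpha = 0 makes the cost purely load-based (and oscillation-prone)."""
    if utilization >= 1.0:
        queueing = float("inf")   # a saturated link is effectively unusable
    else:
        queueing = utilization / (1.0 - utilization)  # grows sharply near capacity
    return alpha * base_delay + (1 - alpha) * queueing
```

A shortest-path computation over these weights then naturally shifts traffic away from hotspots: a lightly loaded link gets a much smaller cost than the same link at 90% utilization.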
Admission Control
In virtual-circuit networks, a widely used technique to prevent congestion is admission control.
The fundamental idea is straightforward: do not establish a new virtual circuit unless the network
can handle the additional traffic without becoming congested. This preventive measure helps avoid
worsening the situation by admitting more connections when the network is busy, analogous to a
telephone switch not providing dial tones during overload.
Traffic Characterization for Admission Control: Determining when a new virtual circuit might
lead to congestion requires characterizing the traffic it will generate. In the telephone network,
with fixed call bandwidths, this is straightforward, but computer networks host virtual circuits of
various shapes and sizes. Traffic characterization often involves rate and shape descriptions,
considering bursts, and using descriptors like the leaky bucket or token bucket, which capture
bursty traffic effects. The leaky bucket, a commonly used descriptor for quality of service, has
parameters limiting the average rate and instantaneous burst size of traffic. It plays a crucial role
in admission control.
Network Decision Making: The network can decide whether to admit the new virtual circuit.
Reserving sufficient capacity along the paths of each virtual circuit can prevent congestion,
forming a service agreement that guarantees network performance. Alternatively, even without
guarantees, traffic descriptions aid admission control by estimating how many circuits can fit
within the network's capacity without congestion.
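A minimal admission-control check can be sketched as follows. The function and parameter names are assumptions for illustration; the `overbook` factor models the measurement-based statistical multiplexing discussed next.

```python
def admit(new_rate, admitted_rates, capacity_mbps, overbook=1.0):
    """Admit a new virtual circuit only if the sum of committed rates stays
    within the link's capacity. overbook > 1 models statistical multiplexing:
    measurements of past behavior may justify admitting more than the
    worst case, accepting a small risk of congestion."""
    return sum(admitted_rates) + new_rate <= capacity_mbps * overbook

assert admit(10, [10] * 9, 100)                  # a 10th 10-Mbps circuit fits a 100-Mbps link
assert not admit(10, [10] * 10, 100)             # an 11th does not, under worst-case reservation
assert admit(10, [10] * 10, 100, overbook=1.5)   # measurement-based overbooking admits it
```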
Optimizing Admissions: Real networks use measurements of past behavior to estimate how
many circuits to admit, trading better performance against an acceptable level of risk. For
instance, if circuits through a 100-Mbps link may each transmit at rates up to 10 Mbps,
reserving worst-case bandwidth admits only 10 circuits and may be wasteful, since circuits
rarely all transmit at their peak rates simultaneously; statistical data about actual network
behavior lets the network safely admit more circuits.
Integration with Traffic-Aware Routing: Admission control can be integrated with traffic-aware
routing to enhance its effectiveness. By considering routes around traffic hotspots during the setup
procedure, the network can optimize connections. Load-sensitive routing designs, as presented by
Shaikh et al. (1999), showcase how admission control and traffic-aware routing can be combined
to enhance network efficiency and prevent congestion in scenarios like avoiding congested routers.
Traffic Throttling
Sender Adjustments: In computer networks, senders dynamically adjust their transmissions to
match the network's capacity, aiming to operate just before the onset of congestion.
Network Operation: The network's goal is to avoid congestion and, when congestion is
impending, to instruct senders to throttle back their transmissions.
Congestion Avoidance: The term "congestion avoidance" describes the network's operating point
just before congestion, contrasting with the situation when the network becomes overly congested.
Approaches to Throttling Traffic: Throttling traffic can be applied in both datagram networks
and virtual-circuit networks.
Choke Packets:
• Direct Congestion Notification: The primary method to inform a sender about congestion
is through choke packets, where a router selects a congested packet and sends a choke
packet back to the source host, specifying the destination found in the original packet.
• Tagging Original Packet: The original packet may be tagged, turning on a header bit to
prevent it from generating more choke packets along the path, and then forwarded as usual.
• Controlled Choke Packet Rate: To avoid increasing network load during congestion,
routers may send choke packets at a low rate.
Source Host Reaction to Choke Packet:
• Traffic Reduction Requirement: Upon receiving the choke packet, the source host is
obligated to reduce the traffic sent to the specified destination, often by a predetermined
percentage (e.g., 50%).
• Avoiding Random Choking: In datagram networks, randomly selecting packets during
congestion may lead to choke packets being sent to fast senders with larger queues. The
protocol's feedback helps prevent congestion without needlessly throttling senders unless
they cause trouble.
• Early Internet Usage: An example of a choke packet in the early Internet is the SOURCE-
QUENCH message (Postel, 1981).
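The source host's reaction can be sketched as a multiplicative decrease followed by slow recovery. The 50% cut matches the example percentage in the text; the recovery step and class shape are assumptions.

```python
class Sender:
    """Sketch of a source host reacting to choke packets for one destination."""
    def __init__(self, rate_kbps):
        self.rate = rate_kbps

    def on_choke_packet(self):
        self.rate *= 0.5          # cut traffic to the destination by the agreed percentage

    def on_quiet_interval(self, increment=10):
        self.rate += increment    # no choke packets for a while: cautiously probe for capacity
```

Repeated choke packets halve the rate again and again, so a troublesome fast sender backs off quickly, while quiet intervals let it creep back up.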
Explicit Congestion Notification (ECN), illustrated in Figure 5-25, marks packets instead of
dropping them. Packets start unmarked, and if any router on their path experiences congestion,
that router marks the packet during forwarding. The destination then echoes the marks back to
the sender in its reply packet, denoted by a dashed line in the figure; this echo occurs above
the IP level (e.g., in TCP). Upon receiving an explicit congestion signal, the sender must
throttle its transmissions, akin to the response triggered by traditional choke packets. The
ECN mechanism provides an efficient and less intrusive method for handling congestion in
network communication.
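The three roles in the ECN exchange (marking router, echoing destination, throttling sender) can be sketched as below. The dictionary-based packet format and field names are illustrative, not the actual IP/TCP encodings.

```python
def forward(packet, router_congested):
    """A congested router marks the packet instead of dropping it."""
    if router_congested:
        packet["ce"] = True       # Congestion Experienced mark
    return packet

def build_reply(received):
    """The destination echoes any mark back to the sender above the IP level."""
    return {"ece": received.get("ce", False)}   # echoed congestion signal

def sender_reacts(reply, rate):
    """On an echoed congestion signal the sender throttles, as with choke packets."""
    return rate * 0.5 if reply["ece"] else rate
```

Unlike choke packets, no extra packet is injected into an already congested network: the signal rides on packets that were being sent anyway.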
Hop-by-Hop Backpressure:
An alternative approach is hop-by-hop backpressure, where the choke packet takes effect at every
hop it passes through, providing immediate relief at the point of congestion.
• Sequence Illustration: Fig. 5-26(b) shows the sequence where each hop (F, E, A)
immediately reacts to the choke packet, requiring each intermediate router to reduce
the flow to the next hop.
• Benefits: The hop-by-hop scheme prevents packet loss and efficiently addresses
congestion, as discussed in detail by Mishra et al. (1996).
Random Early Detection (RED):
• The RED algorithm drops packets strategically before congestion becomes severe, allowing
time for the source to adjust.
• Routers maintain a running average of queue lengths, and when the average exceeds a
threshold, indicating congestion, a small fraction of packets is randomly dropped as shown in
figure.
Figure RED
• Random selection increases the likelihood that fast senders will experience packet drops,
prompting the sender to slow down.
• This implicit message delivery serves the same purpose as a choke packet, without an explicit
signal from the router.
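The RED bullets above can be condensed into a small sketch: keep an exponentially weighted moving average of the queue length, and drop arrivals with a probability that rises between two thresholds. The threshold values, weight, and maximum drop probability below are illustrative assumptions.

```python
import random

class RedQueue:
    """Minimal RED sketch: random early drops once the averaged queue
    length crosses min_th, becoming certain at max_th."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.q, self.avg = [], 0.0
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, weight

    def enqueue(self, packet, rng=random.random):
        # Running average smooths out short bursts so they are not punished.
        self.avg = (1 - self.w) * self.avg + self.w * len(self.q)
        if self.avg < self.min_th:
            drop_p = 0.0
        elif self.avg >= self.max_th:
            drop_p = 1.0
        else:
            drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if rng() < drop_p:
            return False          # early drop: the implicit congestion signal
        self.q.append(packet)
        return True
```

Because drops are random, a sender contributing more packets is proportionally more likely to lose one, which is exactly the fairness property the text describes.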
Overprovisioning as a Solution:
Overprovisioning involves building a network with ample capacity to handle anticipated traffic,
ensuring minimal loss and low-latency packet delivery. While effective, overprovisioning is
expensive, relying on significant financial investment to meet performance demands.
Application Requirements:
Applications place different requirements on the network depending on their characteristics
and functionality.
Traffic shaping is a technique used to regulate the flow of data into the network, and traffic policing
ensures that users adhere to the agreed-upon traffic patterns, especially critical for applications
with stringent QoS requirements. These mechanisms contribute to reducing congestion and
ensuring a more predictable and reliable network performance.
Bursty Nature of Data Network Traffic: Unlike telephone networks with constant-rate traffic,
data networks experience bursty traffic due to varying rates, user interactions, and task-switching
by computers. Bursty traffic can be challenging to handle, potentially leading to buffer overflows
and packet loss.
Traffic Shaping: Traffic shaping is a technique used to regulate the average rate and burstiness of
data flows entering the network. The objective is to allow applications to transmit a variety of
traffic patterns while providing a simple and useful way to describe these patterns to the network.
User-Provider Agreement on Traffic Patterns: When setting up a flow, the user and the network
provider agree on a specific traffic pattern or shape for that flow. This agreement, sometimes
referred to as a Service Level Agreement (SLA), outlines the expected transmission pattern for the
flow. An SLA may cover aggregate flows and extended periods (e.g., all traffic for a specific
customer). As long as the customer adheres to the agreed-upon contract, the provider promises to
deliver the packets in a timely fashion.
Traffic Policing: To enforce the SLA, the network provider needs to monitor traffic flows. Traffic
policing involves examining packets to ensure they conform to the agreed-upon traffic pattern.
Excess packets beyond the agreed pattern may be dropped or marked with lower priority. Shaping
and policing are crucial for real-time data, such as audio and video connections, which have
stringent quality-of-service requirements. While less critical for peer-to-peer transfers that
consume all available bandwidth, they are essential for maintaining QoS in time-sensitive
applications.
7. If a packet arrives when the bucket is full, it either waits until enough water leaks out
to accommodate it or is discarded.
8. Useful for shaping traffic at the host level or policing at the network interface.
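Steps 7 and 8 can be sketched as a finite queue drained at a constant rate. The capacity and leak rate below are illustrative; this variant discards packets that arrive to a full bucket rather than making them wait.

```python
from collections import deque

class LeakyBucket:
    """Leaky-bucket shaper sketch: arrivals join a finite queue (the bucket);
    packets leak out at a constant rate regardless of how bursty arrivals are."""
    def __init__(self, capacity=4, leak_per_tick=1):
        self.bucket = deque()
        self.capacity = capacity
        self.leak_per_tick = leak_per_tick

    def arrive(self, packet):
        if len(self.bucket) >= self.capacity:
            return False          # bucket full: discard (step 7)
        self.bucket.append(packet)
        return True

    def tick(self):
        """One clock tick: release packets at the fixed leak rate."""
        out = []
        for _ in range(min(self.leak_per_tick, len(self.bucket))):
            out.append(self.bucket.popleft())
        return out
```

Used at the host, `arrive` shapes the application's bursts into a smooth outflow; used at the network interface, the same check polices a flow against its agreed pattern.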
Packet Scheduling
Packet scheduling plays a significant role in ensuring a fair distribution of resources and
preventing degradation of Quality of Service in computer networks. Packet scheduling allocates
router resources among the packets of a flow and between competing flows. It also ensures that
sufficient resources are reserved along the route so that packets can follow a specific path,
similar to a virtual circuit. Various algorithms have been proposed to address the challenges
posed by different traffic patterns and user behaviors.
Three types of resources that can be reserved for different flows are
1. Bandwidth
2. Buffer space
3. CPU cycles.
The fair queueing algorithm treats all hosts equally, providing them with the same priority. In
practical scenarios, there is often a need to prioritize certain hosts, such as video servers over
file servers.
The finish time (Fᵢ) for a packet in a flow is computed using the formula:
Fᵢ = max(Aᵢ, Fᵢ₋₁) + Lᵢ/W
where Aᵢ is the arrival time of packet i, Fᵢ₋₁ is the finish time of the previous packet of
the same flow, Lᵢ is the length of packet i, and W is the weight of the flow.
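The formula can be applied directly per flow; packets from all flows are then transmitted in increasing order of finish time. The arrival times and lengths below are in abstract units chosen for illustration.

```python
def finish_times(packets, weight):
    """Compute WFQ finish times F_i = max(A_i, F_{i-1}) + L_i / W for one flow,
    given (arrival_time, length) pairs in arrival order."""
    f_prev = 0.0
    out = []
    for arrival, length in packets:
        f = max(arrival, f_prev) + length / weight
        out.append(f)
        f_prev = f
    return out

# Two back-to-back packets, then one arriving after an idle period:
assert finish_times([(0, 8), (0, 6), (20, 4)], weight=2) == [4.0, 7.0, 22.0]
```

The `max` term handles idle periods: the third packet's finish time restarts from its arrival at 20 rather than the stale previous finish time, and doubling the weight W halves every finish time, so a heavier flow is served proportionally more often.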
• WFQ requires packets to be inserted into a sorted queue based on their finish times.
• This is at best an O(log N) operation per packet with N flows, which can be
challenging for high-speed routers.
• An approximation called deficit round robin, described by Shreedhar and Varghese (1995),
offers an efficient implementation with O(1) operations per packet.
• WFQ can often provide a better alternative to priority scheduling.
• By assigning a large weight to the high-priority queue, high-priority packets still pass
through a short line, while low-priority packets are not starved and continue to be sent even
during high-priority traffic.
24. Describe the fundamentals of link state routing. How do routers exchange link state
information?
25. What is the SPF (Shortest Path First) algorithm, and how is it used in link state routing?
26. What is hierarchical routing, and why is it important in large-scale networks?
27. Discuss the advantages of using hierarchical addressing and routing.
28. What is the basic broadcasting method, and why is it considered inefficient for sending
messages to many or all other hosts?
29. How does multi-destination routing improve upon basic broadcasting, and what role do
routers play in this approach?
30. What is flooding as a broadcast routing technique, and why is it considered a better
approach?
31. What is Reverse Path Forwarding (RPF), and how does it address the limitations of basic
broadcasting and multi-destination routing?
32. Walk through the steps of the Reverse Path Forwarding (RPF) process when a broadcast
packet arrives at a router.
33. Explain the concept of multicast routing. How is it different from unicast and broadcast
routing?
34. What are the challenges in multicast routing, especially in terms of building efficient
multicast trees?
35. What is multicasting, and how does it address the inefficiencies of sending distinct packets
to each receiver or using broadcasting for large groups?
36. Define Multicast Routing. How does it differ from unicast and broadcast routing in network
communication?
37. Discuss the importance of group management in multicast communication. What
mechanisms are necessary for creating, destroying, and identifying multicast groups?
38. Explain the concept of a spanning tree in the context of multicast routing. How does the
structure of the spanning tree depend on whether the multicast group is dense or sparse?
39. In what scenarios is pruning the spanning tree essential in multicast routing? What purpose
does it serve, and how does it contribute to efficient network utilization?
40. Describe the challenges and considerations involved in optimizing the spanning tree for
sparse group multicast routing. Why is this optimization crucial?
41. Examine the network configuration in Figure 5-16a. How are routers connected to hosts,
and what is the significance of multicast groups, Group 1 and Group 2, in this context?
42. Analyze the initial spanning tree in Figure 5-16b. Why is this tree suitable for broadcast
but not optimal for multicast?
43. Explain the process of pruning the spanning trees for multicast, as illustrated in Figures 5-
16c and 5-16d. What links are removed, and how does it enhance the efficiency of the
multicast spanning tree for Group 1 and Group 2?
44. What are the benefits of the pruned multicast spanning trees for Group 1 and Group 2 in
terms of the number of links?
45. Discuss the role of network configuration in influencing the efficiency of multicast routing.
How does the arrangement of routers and hosts impact the optimization of multicast
spanning trees?
46. How does multicast routing contribute to optimizing bandwidth usage while delivering
packets to multiple group members? Compare this to other routing methods in the context
of group communication.
47. What is anycast routing, and when is it used in network design?
48. Describe the process of routing packets to the nearest or best anycast node.
49. What is the primary routing challenge for mobile hosts, and why is it essential for the
network to locate the host's current position?
50. Explain the model of mobile hosts, highlighting the concept of permanent home locations.
How does the routing goal differ for mobile hosts compared to stationary hosts?
51. Discuss the key components of the model of mobile hosts. What role does the home agent
play in facilitating communication with mobile hosts?
52. Describe the responsibilities of the home agent when it comes to forwarding packets for
mobile hosts. How does the home agent ensure the delivery of packets to the current
location of the mobile host?
53. Examine the scenario where packets destined for a mobile host are sent to the home agent.
Why is this step necessary, and how does it contribute to successful packet delivery to a
mobile host?
54. How do mobile IP protocols enable routing for hosts that change their network attachment
points?
55. What are the key components of mobile IP, and how do they work together?
56. What is congestion control in computer networks, and why is it important?
57. Explain the key approaches to congestion control in computer networks.
58. Compare and contrast open-loop and closed-loop congestion control algorithms.
59. Describe the basic principles behind TCP congestion control algorithms, such as TCP Reno
or TCP Vegas.
60. What is traffic-aware routing, and how does it contribute to congestion control?
61. Explain how traffic-aware routing algorithms can dynamically adapt to network conditions.
62. What is admission control in the context of network management, and why is it necessary?
63. Discuss the role of admission control in preventing network congestion and maintaining
QoS.
64. How does traffic throttling work, and what is its purpose in congestion control?
65. Provide examples of situations where traffic throttling might be applied.
66. Describe load shedding as a congestion control mechanism. When is it typically used?
67. What factors are considered when deciding which traffic to shed during congestion?
68. Explain the concept of Quality of Service (QoS) in computer networks.
69. What are the key metrics used to measure QoS, and how are they defined?
70. How does QoS benefit applications with different requirements in a network?
71. Discuss how the specific requirements of applications (e.g., voice, video, data) impact
QoS design.
72. Provide examples of applications that demand low latency, high bandwidth, or other QoS
parameters.
73. Describe the concept of packet scheduling in QoS management.
74. How do different packet scheduling algorithms, such as Weighted Fair Queuing (WFQ)
and Priority Queuing, work?
75. What is Integrated Services (IntServ) in the context of QoS provisioning?
76. How does RSVP (Resource Reservation Protocol) contribute to IntServ?
77. Explain the principles of Differentiated Services (DiffServ) in QoS management.
78. How are Differentiated Services Code Points (DSCPs) used to mark and classify packets?