Computer Networks - Module 2 Notes
Transport Layer
Introduction and Transport-Layer Services
The Transport Layer is a crucial component in the hierarchical structure of
computer networks (like the TCP/IP model or the OSI model).
It sits above the Network Layer and below the Application Layer.
While the Network Layer (e.g., IP) is responsible for delivering data between hosts
(computers), the Transport Layer provides end-to-end communication between
specific processes or applications running on those hosts.
Think of it this way:
Network Layer (IP): Like the postal service delivering a letter to your house. It
ensures the letter reaches the correct building (your computer).
Transport Layer (TCP/UDP): Like your mailroom or a person in your house who
then takes the letter and delivers it to the specific person or department
(application) within the house.
This "process-to-process" delivery is achieved using port numbers. Each
application running on a computer is assigned a unique port number. When data
arrives at a host, the Transport Layer looks at the destination port number in the
packet header to determine which application should receive the data.
Introduction to Transport Layer Protocols: TCP and UDP
The two most prominent protocols at the Transport Layer are:
1. TCP (Transmission Control Protocol):
Connection-Oriented: Before data transfer begins, TCP establishes a
logical connection (a "three-way handshake") between the sender and
receiver. This connection is maintained throughout the communication.
Reliable: TCP guarantees that data will arrive at the destination in order,
without errors, and without duplication. If packets are lost or corrupted,
TCP will retransmit them.
Flow Control: Prevents a fast sender from overwhelming a slow receiver.
Congestion Control: Prevents the network itself from becoming
overwhelmed with too much traffic.
Example Applications: Web browsing (HTTP), email (SMTP, POP3, IMAP),
file transfer (FTP), secure shell (SSH).
2. UDP (User Datagram Protocol):
Connectionless: UDP sends data without establishing a prior
connection. It simply sends packets (called datagrams) and
hopes they reach the destination.
Unreliable (Best-Effort Delivery): UDP does not guarantee
delivery, order, or error-free transmission. It doesn't retransmit
lost packets.
Minimal Overhead: Due to its simplicity, UDP is much faster
and has less overhead than TCP.
Example Applications: Online gaming, voice over IP (VoIP), video
streaming, DNS lookups. These applications prioritize speed over
absolute reliability, as a small amount of data loss is often
acceptable (e.g., a dropped frame in a video stream is less disruptive
than a frozen stream while waiting for retransmission).
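As a rough sketch of the practical difference, the snippet below (using Python's standard socket module) contrasts a TCP client, which must connect before sending, with a UDP client, which simply addresses each datagram. The host "example.com" and UDP port 9999 are placeholders, not real services.

```python
import socket

# TCP: connection-oriented - connect() triggers the three-way handshake,
# and the OS then provides reliability, ordering, and retransmission.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_sock.connect(("example.com", 80))          # handshake happens here
tcp_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
reply = tcp_sock.recv(4096)                    # bytes arrive in order, error-checked
tcp_sock.close()

# UDP: connectionless - no handshake; each datagram is sent independently
# and may be lost, duplicated, or reordered without notice.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.sendto(b"ping", ("example.com", 9999))  # placeholder port, best-effort delivery
udp_sock.close()
```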
Transport-Layer Services
The Transport Layer provides several critical services to the applications above it.
Process-to-Process Delivery (Multiplexing and Demultiplexing):
Concept: This is the core function. The Network Layer delivers segments to a
host, but the Transport Layer is responsible for delivering them to the correct
application process within that host. It uses port numbers for this.
Multiplexing (Sender Side): Imagine your computer has multiple applications
(web browser, email client, online game) all trying to send data over the internet
simultaneously. The Transport Layer at your computer takes data from these
different applications, adds source and destination port numbers to create
"segments," and passes them down to the Network Layer. It's like combining
several smaller streams into one larger pipe.
Example: You are browsing a website (destination port 80), sending an email (port 25), and
playing an online game (port 7777). The Transport Layer takes data from each of
these applications, adds their respective port numbers to the data segments, and
hands them to the next layer.
Demultiplexing (Receiver Side):
When segments arrive at the destination host, the Transport Layer
examines the destination port number in each segment's header.
It then delivers the data payload to the correct application
process listening on that port. It's like splitting the larger pipe back
into its individual streams.
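A toy model of demultiplexing (not how an operating system actually implements it) is a lookup table from destination port to the process listening on that port. The port numbers and handler functions below are invented for the illustration.

```python
# Toy model of demultiplexing: map destination port -> listening application.
# Ports and handlers are illustrative only.
listeners = {
    80:   lambda payload: print("web server got", payload),
    25:   lambda payload: print("mail server got", payload),
    7777: lambda payload: print("game server got", payload),
}

def demultiplex(segment):
    """Deliver a segment's payload to whichever 'process' listens on its destination port."""
    dest_port, payload = segment
    handler = listeners.get(dest_port)
    if handler is None:
        print(f"no process on port {dest_port}; segment dropped")
    else:
        handler(payload)

demultiplex((80, b"GET /index.html"))   # -> web server
demultiplex((7777, b"player moved"))    # -> game server
demultiplex((5000, b"???"))             # -> no listener
```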
The relationship between the Transport Layer and the Network Layer is
fundamental to how computer networks function, particularly in the widely used
TCP/IP model.
They work in tandem but have distinct responsibilities, creating a powerful layered
approach to network communication.
Network Layer: Host-to-Host Delivery
The Network Layer (e.g., IP) is responsible for getting packets from the source host to the
destination host. Its main responsibilities include:
1. Logical Addressing: Assigning IP addresses that identify the source and destination hosts.
2. Routing: Determining a path for packets to travel across interconnected networks.
3. Packet Forwarding: Moving individual data packets from one hop (router) to the
next along the determined path.
Key takeaway for Network Layer: It gets the data to the correct computer (host). It
doesn't care which specific application on that computer should receive the data,
nor does it guarantee reliability, order, or flow control. It's a "best-effort" delivery
service.
Transport Layer: Process-to-Process Delivery
The Transport Layer (with protocols like TCP and UDP) builds upon the
services of the Network Layer to provide process-to-process (or
application-to-application) delivery.
Its main responsibilities include:
1. Port Addressing (Port Numbers): Assigning unique port numbers to
different application processes running on a host. This allows the
Transport Layer to identify which application should receive incoming
data.
Example: A web server listens on port 80, a mail server accepts outgoing
mail on port 25 (SMTP) and serves mail retrieval on port 110 or 995
(POP3/POP3S), and an online game might use a specific high-numbered port.
2. Multiplexing and Demultiplexing:
Multiplexing (Sender): Taking data from multiple application
processes on the sending host, adding port numbers, and
encapsulating them into segments (for TCP) or datagrams (for
UDP), then passing them down to the Network Layer.
Demultiplexing (Receiver): Examining the destination port number in
each arriving segment/datagram and delivering its payload to the
application process listening on that port.
Key takeaway for Transport Layer: It gets the data to the correct
application on the destination host, and depending on the
protocol (TCP), it can add crucial reliability and control features.
The Interplay: How They Work Together
1. Dependency:
The Transport Layer relies on the services of the Network Layer.
It assumes that the Network Layer will handle the hop-by-hop delivery of data
packets between hosts.
The Transport Layer doesn't care how the packet gets from one host to
another, only that it does get there.
2. Encapsulation:
At the sender: The Application Layer passes data to the Transport Layer.
The Transport Layer adds its header (containing port numbers, sequence
numbers, etc.) to form a segment (TCP) or datagram (UDP).
This segment/datagram is then passed to the Network Layer.
The Network Layer adds its header (containing IP addresses) to form an IP
packet.
This IP packet is then passed down to lower layers for physical
transmission.
3. Decapsulation:
At the receiver: The Network Layer receives the IP packet, verifies the
destination IP address, removes its header, and passes the contained
segment/datagram up to the Transport Layer.
The Transport Layer then examines the port number in the segment/datagram
header, removes its header, and delivers the application data to the correct
process.
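To make encapsulation and decapsulation concrete, here is a minimal sketch that wraps an application payload in a simplified transport header (just the two port numbers) and then in a simplified network header (just the two IP addresses). Real TCP/IP headers carry many more fields; this only illustrates the layering.

```python
import socket
import struct

def encapsulate(payload: bytes, src_port: int, dst_port: int,
                src_ip: str, dst_ip: str) -> bytes:
    # Transport layer: prepend a simplified header with the two port numbers.
    segment = struct.pack("!HH", src_port, dst_port) + payload
    # Network layer: prepend a simplified header with the two IP addresses.
    packet = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + segment
    return packet

def decapsulate(packet: bytes):
    # Network layer: strip and read the IP addresses.
    src_ip = socket.inet_ntoa(packet[0:4])
    dst_ip = socket.inet_ntoa(packet[4:8])
    segment = packet[8:]
    # Transport layer: strip and read the port numbers, hand the payload up.
    src_port, dst_port = struct.unpack("!HH", segment[:4])
    payload = segment[4:]
    return src_ip, dst_ip, src_port, dst_port, payload

pkt = encapsulate(b"hello", 49152, 80, "192.0.2.1", "198.51.100.7")
print(decapsulate(pkt))  # ('192.0.2.1', '198.51.100.7', 49152, 80, b'hello')
```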
4. Scope of Responsibility:
Network Layer: Responsible for host-to-host delivery across potentially
diverse and large-scale networks. It's like the global postal service ensuring a
letter reaches the right house.
Transport Layer: Responsible for process-to-process delivery within those
hosts. It's like the internal mailroom or a person sorting letters to specific
individuals/departments within the house.
Why are they separate layers?
The separation of concerns between the Transport and Network Layers is a
cornerstone of network architecture design, offering several benefits:
1. Modularity and Independence:
Each layer can be developed and updated independently without affecting the
others, as long as the interfaces between them remain consistent.
This simplifies design, implementation, and maintenance.
2. Abstraction: Each layer abstracts away the complexities of the layers below it. An
application programmer using TCP doesn't need to know how IP routing works, and an
IP router doesn't need to understand the specifics of TCP's flow control.
In essence, the Network Layer provides the basic reachability to a destination host,
while the Transport Layer provides the specific communication services (reliability,
ordering, flow control, process addressing) required by applications running on those
hosts. They are two distinct, yet highly complementary, pieces of the networking puzzle.
Multiplexing and Demultiplexing
Multiplexing (at the Sender)
Concept: Multiplexing is the process of taking multiple data streams from different sources
(e.g., different applications on a single computer) and combining them into a single,
aggregated stream for transmission over a shared network medium.
How it works in Networks (Transport Layer):
Application Data: Various applications (web browser, email client, chat app, streaming
video) generate data.
Port Numbers: Each application is assigned a unique port number on the sending host
(e.g., the web browser might use a random ephemeral port, while the email client might use 50000).
Encapsulation: The Transport Layer takes a chunk of data from each application, adds a
header that includes both the source port number (identifying the sending application)
and the destination port number (identifying the intended receiving application on the
other host), along with other information (like sequence numbers for TCP). This combined
unit is called a segment (for TCP) or a datagram (for UDP).
Passing to Network Layer: These segments/datagrams are then passed down to the
Network Layer. The Network Layer will then add its own header (including source and
destination IP addresses) to create a packet, and send it out onto the network.
Demultiplexing (at the Receiver)
Concept: Demultiplexing is the reverse process of multiplexing. It involves taking the
single, aggregated data stream received from the network and separating it back into
its original, individual data streams, delivering each stream to its intended
destination application process.
How it works in Networks (Transport Layer):
Receive Packet: The Network Layer on the receiving host receives an IP packet.
After checking the destination IP address, it removes its header and passes the
contained Transport Layer segment/datagram up to the Transport Layer.
Inspect Header: The Transport Layer at the receiver examines the destination
port number in the segment's/datagram's header.
Direct to Application: Based on this port number, the Transport Layer identifies
which application process on that host is "listening" for data on that specific port.
It then delivers the data payload of the segment/datagram to that correct
application.
Why are Multiplexing and Demultiplexing Important?
1. Resource Sharing: They allow multiple applications and processes to share a
single physical network connection (like your Ethernet cable or Wi-Fi antenna)
simultaneously. Without them, you'd need a separate physical connection for every
application or connection, which is impractical and expensive.
2. Efficiency: They maximize the utilization of available bandwidth. Instead of idle
periods on a connection while one application finishes its data transfer, others can
utilize the same medium.
3. Process-to-Process Delivery: They extend the host-to-host delivery service of the
Network Layer to the vital process-to-process delivery that applications require.
4. Simplicity for Applications: Applications don't need to worry about the underlying
network infrastructure; they just send and receive data to/from specific port
numbers, and the Transport Layer handles the rest.
In essence, multiplexing and demultiplexing are the mechanisms that allow the
internet to function as a multi-tasking, multi-user environment, ensuring that data
packets reach not just the right computer, but the right program on that computer.
Connectionless Transport: UDP, UDP Segment Structure, UDP Checksum
Concept: In connectionless transport, data is sent from a source to a destination without first
establishing a dedicated, continuous communication path or "connection." Each data unit
(often called a packet or datagram) is treated independently, and no prior handshaking or
agreement is required between the sender and receiver before transmission.
Key Characteristics of Connectionless Transport:
No Handshake: There's no connection setup phase (like TCP's three-way handshake) or
teardown phase. Data transfer begins immediately.
Independent Data Units: Each packet is self-contained and carries all the necessary
addressing information (source and destination port numbers).
No Guaranteed Delivery: The protocol does not guarantee that data will reach its
destination, arrive in order, or be free of errors. Lost, duplicated, or out-of-order packets
are possible.
Minimal Overhead: The lack of connection management and reliability mechanisms
means less overhead in terms of header size and control messages. This makes it faster
and more efficient for certain types of applications.
Stateless: The communicating endpoints don't maintain a "state" about the ongoing
conversation (e.g., what packets have been sent/received, what sequence numbers are
next).
UDP (User Datagram Protocol)
UDP is the prime example of a connectionless transport-layer protocol.
It's often referred to as "Unreliable Datagram Protocol" because it
doesn't offer the reliability guarantees of TCP.
However, its simplicity and speed make it ideal for specific applications.
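Regarding the UDP segment structure and checksum named in this topic: the UDP header is 8 bytes (source port, destination port, length, checksum), and the checksum is the 16-bit one's-complement of the one's-complement sum of the 16-bit words being protected. The sketch below packs such a header and computes that sum; for brevity it omits the pseudo-header (IP addresses, protocol, UDP length) that real UDP also includes in the checksum.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry, then complemented."""
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def build_udp_segment(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)                      # 8-byte header + data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum field = 0 first
    checksum = internet_checksum(header + payload) # real UDP also sums a pseudo-header
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

seg = build_udp_segment(49152, 53, b"dns query bytes")
print(seg[:8].hex())                               # the 8-byte UDP header
```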
Building Blocks of Reliable Data Transfer
Alongside checksums, acknowledgements (ACKs/NAKs), and retransmission, reliable data
transfer protocols rely on two further mechanisms:
Sequence Numbers:
• Principle: Each packet (or segment) is assigned a unique, sequential number.
• Purpose:
Ordering: Allows the receiver to correctly reassemble packets that might arrive out of order.
Duplicate Detection: Helps the receiver identify and discard duplicate copies of packets that
might have been retransmitted unnecessarily (e.g., if an ACK was lost, leading to a
retransmission).
Tracking: Enables the sender to know which specific packet is being acknowledged.
Timers:
Principle: The sender starts a timer when it transmits a packet.
Purpose: If the timer expires before an ACK for that packet is received, the sender assumes the
packet (or its ACK) has been lost and triggers a retransmission. This is crucial for handling silent
packet loss.
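A toy stop-and-wait sender illustrating how sequence numbers and retransmission on timeout work together. The "channel", loss rate, and timer handling are simulated and simplified; only the alternating-bit/retransmit logic reflects the mechanism described above.

```python
import random

LOSS_RATE = 0.3                           # simulated channel loss probability

def lossy_send(packet):
    """Simulated unreliable channel: returns the ACK, or None if packet/ACK was lost."""
    if random.random() < LOSS_RATE:
        return None                       # packet (or its ACK) silently lost
    return packet["seq"]                  # receiver ACKs the sequence number it received

def stop_and_wait_send(data_items):
    seq = 0
    for data in data_items:
        packet = {"seq": seq, "data": data}
        while True:
            ack = lossy_send(packet)      # conceptually, a timer starts on each send
            if ack == seq:                # correct ACK: safe to move on
                print(f"packet {seq} acknowledged")
                break
            print(f"timeout for packet {seq}, retransmitting")   # timer "expired"
        seq ^= 1                          # alternating-bit sequence number (0/1)

stop_and_wait_send([b"a", b"b", b"c"])
```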
Pipelined Reliable Data Transfer Protocols, Go-Back-N, Selective Repeat, Connection-Oriented Transport
Pipelined Reliable Data Transfer Protocols
In computer networking, reliable data transfer protocols are essential for ensuring that
data is delivered from a sender to a receiver without errors, loss, or duplication.
A simple approach, like the Stop-and-Wait protocol, is highly inefficient, especially on
links with a high bandwidth-delay product (high speed and long distance).
This is because the sender must transmit a packet and then wait for an
acknowledgment (ACK) before sending the next one, leaving the network channel idle
for long periods.
Pipelined reliable data transfer protocols overcome this limitation by allowing the
sender to transmit multiple packets without waiting for an ACK for each one. This
technique, known as pipelining, significantly improves network utilization and
throughput.
To achieve this, pipelined protocols require a larger sequence number space and the
ability for both the sender and receiver to buffer multiple packets.
There are two primary approaches to pipelined error recovery: Go-Back-N and Selective
Repeat.
Go-Back-N (GBN) Protocol
The Go-Back-N protocol is a simple but effective pipelined protocol.
Sender: The sender maintains a send window of size N, which represents the
maximum number of unacknowledged packets that can be in flight at any
given time.
The sender keeps a single timer for the oldest unacknowledged packet.
When a packet is sent, its sequence number is marked.
If the timer for the oldest unacknowledged packet expires, the sender
assumes that the packet (and potentially subsequent packets) has been
lost.
It then retransmits that packet and all subsequent packets in its window,
even if they were correctly received by the receiver.
This is where the name "Go-Back-N" comes from—the sender goes back to
the unacknowledged packet and retransmits everything from that point.
Receiver: The receiver's window size is always 1.
It only accepts packets that arrive in order and with the correct sequence
number.
If a packet with an out-of-order sequence number arrives, the receiver
discards it and sends an ACK for the last correctly received packet.
This simplifies the receiver's logic, as it doesn't need to buffer out-of-order
packets.
The acknowledgments are cumulative, meaning an ACK for packet k implies
that all packets up to and including k have been received correctly.
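A compact sketch of the Go-Back-N sender's window logic. The class name, window size, and transmit callback are invented for the example, but the behaviour matches the description above: cumulative ACKs slide the window, and a timeout retransmits everything from the oldest unacknowledged packet onward.

```python
class GoBackNSender:
    def __init__(self, window_size, send_fn):
        self.N = window_size
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0        # next sequence number to use
        self.buffer = {}         # seq -> data, kept until acknowledged
        self.send_fn = send_fn   # callback that actually transmits a packet

    def send(self, data):
        if self.next_seq >= self.base + self.N:
            return False                          # window full, caller must wait
        self.buffer[self.next_seq] = data
        self.send_fn(self.next_seq, data)         # a single timer runs for self.base
        self.next_seq += 1
        return True

    def on_ack(self, ack_seq):
        # Cumulative ACK: everything up to and including ack_seq is delivered.
        for seq in range(self.base, ack_seq + 1):
            self.buffer.pop(seq, None)
        self.base = max(self.base, ack_seq + 1)

    def on_timeout(self):
        # Go back N: resend the oldest unACKed packet and everything after it.
        for seq in range(self.base, self.next_seq):
            self.send_fn(seq, self.buffer[seq])

sender = GoBackNSender(window_size=4, send_fn=lambda s, d: print("tx", s, d))
for chunk in [b"p0", b"p1", b"p2"]:
    sender.send(chunk)
sender.on_ack(1)       # packets 0 and 1 acknowledged cumulatively
sender.on_timeout()    # retransmits packet 2 onward
```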
Round-Trip Time Estimation and Timeout
TCP sets its retransmission timeout (RTO) from measured round-trip time (RTT) samples:
EstimatedRTT = (1 - alpha) * EstimatedRTT + alpha * SampleRTT (typically alpha = 0.125)
DevRTT = (1 - beta) * DevRTT + beta * |SampleRTT - EstimatedRTT| (typically beta = 0.25)
TimeoutInterval (RTO) = EstimatedRTT + 4 * DevRTT
The inclusion of DevRTT makes the timeout value larger when there is a high
variance in RTT, preventing unnecessary retransmissions on a congested network.
If a timeout occurs, the RTO is doubled for each subsequent retransmission of the
same segment, a strategy known as exponential backoff, to avoid further
congesting an already slow network.
A key rule, known as Karn's Algorithm, is that RTT samples from retransmitted
segments are not used in the RTT estimation to avoid ambiguity caused by
retransmissions.
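A small sketch of this timeout computation (an exponentially weighted average of RTT samples plus a deviation term, as standardized in RFC 6298), together with exponential backoff and Karn's rule. The class name and initial RTO value are illustrative; the alpha/beta constants are the commonly used ones.

```python
class RtoEstimator:
    ALPHA = 0.125   # weight for EstimatedRTT update
    BETA = 0.25     # weight for DevRTT update

    def __init__(self):
        self.estimated_rtt = None
        self.dev_rtt = 0.0
        self.rto = 1.0                              # illustrative initial RTO (seconds)

    def on_rtt_sample(self, sample_rtt, was_retransmitted=False):
        # Karn's algorithm: ignore samples from retransmitted segments,
        # because the ACK is ambiguous (for the original or the retransmission?).
        if was_retransmitted:
            return
        if self.estimated_rtt is None:              # first valid sample
            self.estimated_rtt = sample_rtt
            self.dev_rtt = sample_rtt / 2
        else:
            self.estimated_rtt = ((1 - self.ALPHA) * self.estimated_rtt
                                  + self.ALPHA * sample_rtt)
            self.dev_rtt = ((1 - self.BETA) * self.dev_rtt
                            + self.BETA * abs(sample_rtt - self.estimated_rtt))
        self.rto = self.estimated_rtt + 4 * self.dev_rtt

    def on_timeout(self):
        self.rto *= 2                               # exponential backoff after a timeout

est = RtoEstimator()
est.on_rtt_sample(0.100)
est.on_rtt_sample(0.300)
print(round(est.rto, 3))                            # timeout grows with RTT variance
est.on_timeout()
print(round(est.rto, 3))                            # doubled after a timeout
```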
Reliable Data Transfer, Flow Control, TCP Connection Management
1. Reliable Data Transfer (RDT)
RDT protocols ensure that data is delivered correctly and in the right order, even over unreliable
networks. This is achieved through a combination of mechanisms including checksums, sequence
numbers, acknowledgements (ACKs), and timers.
RDT 2.0 (ARQ): Introduces error detection with checksums and uses acknowledgements (ACK) and
negative acknowledgements (NAK) to signal if a packet was received correctly or not.
RDT 3.0 (Stop-and-Wait): Adds a timer to RDT 2.0. If an ACK isn't received within a specific time, the
sender assumes the packet is lost and retransmits it. It also uses sequence numbers to identify
duplicate packets.
Pipelined Protocols: To increase efficiency, these protocols allow the sender to transmit multiple
packets without waiting for an acknowledgement for each one.
Go-Back-N ARQ: The sender can transmit up to N packets. If a packet is lost, the receiver
discards all subsequent packets. The sender, upon receiving a NAK or timing out, retransmits the
lost packet and all the packets that followed it.
Selective Repeat ARQ: A more efficient approach where the receiver accepts out-of-order
packets and buffers them. It sends a selective ACK for each correctly received packet. If a packet
is lost, the sender only retransmits that specific lost packet, not the entire window.
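A small sketch of the Selective Repeat receiver side: packets inside the window are accepted and individually ACKed even when they arrive out of order, and buffered data is delivered as soon as the gap is filled. The class name, window size, and delivery callback are invented for the illustration.

```python
class SelectiveRepeatReceiver:
    def __init__(self, window_size, deliver_fn):
        self.N = window_size
        self.base = 0            # smallest sequence number not yet delivered
        self.buffer = {}         # out-of-order packets waiting for the gap to fill
        self.deliver_fn = deliver_fn

    def on_packet(self, seq, data):
        if not (self.base <= seq < self.base + self.N):
            return ("ACK", seq)              # old/duplicate packet: re-ACK, don't buffer
        self.buffer[seq] = data              # accept even if out of order
        # Slide the window: deliver any now-contiguous packets in order.
        while self.base in self.buffer:
            self.deliver_fn(self.buffer.pop(self.base))
            self.base += 1
        return ("ACK", seq)                  # selective ACK for exactly this packet

recv = SelectiveRepeatReceiver(window_size=4, deliver_fn=lambda d: print("deliver", d))
recv.on_packet(1, b"p1")   # buffered, gap at sequence number 0
recv.on_packet(0, b"p0")   # fills the gap -> delivers p0, then p1
```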
2. Flow Control
TCP uses a sliding window protocol for flow control. The receiver advertises a
"receive window" size to the sender, which is the amount of available buffer space it
has.
The sender is only allowed to send data up to the size of the receiver's advertised
window.
The window "slides" as the receiver processes data and sends ACKs, allowing the
sender to transmit more data.
If the receiver's buffer is full, it advertises a window size of zero, and the sender
stops transmitting until the window opens up again.
This mechanism prevents a fast sender from overwhelming a slow receiver.
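A toy sketch of how the advertised receive window constrains the sender: it tracks how many unacknowledged bytes are in flight and refuses to send more than the receiver's advertised window. The class name, buffer sizes, and message sizes are invented for the example.

```python
class FlowControlledSender:
    def __init__(self):
        self.rwnd = 0          # receiver's advertised window (bytes)
        self.in_flight = 0     # bytes sent but not yet acknowledged

    def on_window_update(self, advertised_window):
        self.rwnd = advertised_window

    def send(self, data):
        # The sender may have at most rwnd unacknowledged bytes outstanding.
        if self.in_flight + len(data) > self.rwnd:
            print("window full - waiting for the receiver to free buffer space")
            return False
        self.in_flight += len(data)
        print(f"sent {len(data)} bytes, {self.rwnd - self.in_flight} bytes of window left")
        return True

    def on_ack(self, acked_bytes, advertised_window):
        self.in_flight -= acked_bytes      # the window slides forward
        self.rwnd = advertised_window      # receiver re-advertises its free buffer space

s = FlowControlledSender()
s.on_window_update(4096)       # receiver advertises 4 KB of buffer
s.send(b"x" * 3000)            # fits in the window
s.send(b"x" * 3000)            # would exceed rwnd -> blocked
s.on_ack(3000, 4096)           # ACK frees the window
s.send(b"x" * 3000)            # now fits again
```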
Congestion control, while related, is a different mechanism where the sender
limits its transmission rate to avoid overwhelming the network itself, not just the
receiver. This is achieved through algorithms like Slow Start and Congestion
Avoidance.
3. TCP Connection Management
TCP connection management is the process of establishing and terminating a
connection between a sender and receiver. This is handled through a series of
"handshakes":
• Three-way handshake (connection establishment):
1. The client sends a SYN (synchronize) packet to the server.
2. The server responds with a SYN-ACK (synchronize-acknowledge) packet.
3. The client sends a final ACK (acknowledge) packet, and the connection is established.
• Four-way handshake (connection termination):
1. One side sends a FIN (finish) packet to the other.
2. The other side responds with an ACK.
3. The second side then sends its own FIN packet.
4. The first side responds with a final ACK, and the connection is terminated.
This process allows each side to independently close its end of the connection.
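In practice, both handshakes are carried out by the operating system's TCP implementation; application code only calls connect() and close(). A minimal sketch with Python sockets, where the loopback address and port 5000 are chosen arbitrarily:

```python
import socket
import threading
import time

def server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5000))        # arbitrary loopback port for the example
    srv.listen(1)
    ready.set()                          # signal that the server is now listening
    conn, addr = srv.accept()            # accept() completes the three-way handshake
    print("server: connection from", addr)
    conn.close()                         # close() starts the FIN/ACK teardown
    srv.close()

ready = threading.Event()
threading.Thread(target=server, args=(ready,), daemon=True).start()
ready.wait()                             # don't connect before the server is listening

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))         # SYN -> SYN-ACK -> ACK handled by the OS
print("client: connected")
cli.close()                              # client's FIN/ACK half of the termination
time.sleep(0.1)                          # give the server thread time to print
```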
Sample Questions
1. Explain the key services provided by the Transport Layer in computer networks.
2. Describe the relationship between the Transport Layer and the Network Layer with
suitable examples.
3. Give an overview of the Transport Layer in the Internet and list its main protocols.
4. What is multiplexing and demultiplexing in the Transport Layer? Explain their role in
communication.
5. Describe the structure of a UDP segment and explain how the UDP checksum is
calculated.
6. Discuss the principles of reliable data transfer and the steps involved in building a reliable
data transfer protocol.
7. Differentiate between Go-Back-N and Selective Repeat pipelined protocols with
diagrams.
8. Explain the TCP connection establishment and termination process.
9. Describe the TCP segment structure and the method for estimating round-trip time and
timeout values.
10. What is flow control in TCP? How is it implemented using the sliding window
mechanism?