Computer Networks - Module 2 Notes

MODULE 2 (9 Hours)

Transport Layer
Introduction and Transport-Layer Services
The Transport Layer is a crucial component in the hierarchical structure of
computer networks (like the TCP/IP model or the OSI model).
 It sits above the Network Layer and below the Application Layer.
 While the Network Layer (e.g., IP) is responsible for delivering data between hosts
(computers), the Transport Layer provides end-to-end communication between
specific processes or applications running on those hosts.
Think of it this way:
 Network Layer (IP): Like the postal service delivering a letter to your house. It
ensures the letter reaches the correct building (your computer).
 Transport Layer (TCP/UDP): Like your mailroom or a person in your house who
then takes the letter and delivers it to the specific person or department
(application) within the house.
 This "process-to-process" delivery is achieved using port numbers. Each
application running on a computer is assigned a unique port number. When data
arrives at a host, the Transport Layer looks at the destination port number in the
packet header to determine which application should receive the data.
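As a rough illustration, here is how an application claims a port through the operating system's socket API (a minimal Python sketch; the loopback address and "port 0, let the OS pick" are choices made for this example, not part of the notes):

```python
import socket

# A process claims a port by binding a socket to it; from then on,
# the transport layer delivers any segment addressed to that port
# number to this socket. Binding to port 0 asks the OS to assign
# any free ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))

host, port = sock.getsockname()
print(f"listening for datagrams on {host}:{port}")
sock.close()
```

Two applications cannot normally bind the same (address, port) pair, which is exactly what makes the port a unique identifier for demultiplexing.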
Introduction to Transport Layer Protocols: TCP and UDP
The two most prominent protocols at the Transport Layer are:
1.TCP (Transmission Control Protocol):
 Connection-Oriented: Before data transfer begins, TCP establishes a
logical connection (a "three-way handshake") between the sender and
receiver. This connection is maintained throughout the communication.
 Reliable: TCP guarantees that data will arrive at the destination in order,
without errors, and without duplication. If packets are lost or corrupted,
TCP will retransmit them.
 Flow Control: Prevents a fast sender from overwhelming a slow receiver.
 Congestion Control: Prevents the network itself from becoming
overwhelmed with too much traffic.
 Example Applications: web browsing (HTTP), email (SMTP, POP3, IMAP),
file transfer (FTP), secure shell (SSH).
2. UDP (User Datagram Protocol):
Connectionless: UDP sends data without establishing a prior
connection. It simply sends packets (called datagrams) and
hopes they reach the destination.
Unreliable (Best-Effort Delivery): UDP does not guarantee
delivery, order, or error-free transmission. It doesn't retransmit
lost packets.
Minimal Overhead: Due to its simplicity, UDP is much faster
and has less overhead than TCP.
Example Applications: Online gaming, voice over IP (VoIP), video
streaming, DNS lookups. These applications prioritize speed over
absolute reliability, as a small amount of data loss is often
acceptable (e.g., a dropped frame in a video stream is less disruptive
than a frozen stream while waiting for retransmission).
Transport-Layer Services
The Transport Layer provides several critical services to the applications above it.
Process-to-Process Delivery (Multiplexing and Demultiplexing):
Concept: This is the core function. The Network Layer delivers segments to a
host, but the Transport Layer is responsible for delivering them to the correct
application process within that host. It uses port numbers for this.
Multiplexing (Sender Side): Imagine your computer has multiple applications
(web browser, email client, online game) all trying to send data over the internet
simultaneously. The Transport Layer at your computer takes data from these
different applications, adds source and destination port numbers to create
"segments," and passes them down to the Network Layer. It's like combining
several smaller streams into one larger pipe.
Example: You are browsing a website on port 80, sending an email on port 25, and
playing an online game on port 7777. The Transport Layer takes data from each of
these applications, adds their respective port numbers to the data segments, and
hands them to the next layer.
Demultiplexing (Receiver Side):
 When segments arrive at the destination host, the Transport Layer
examines the destination port number in each segment's header.
 It then delivers the data payload to the correct application
process listening on that port. It's like splitting the larger pipe back
into its individual streams.

Example: A web server (listening on port 80) receives an HTTP request.
The Transport Layer at the server sees destination port 80 and
passes the request to the web server application. Simultaneously, an
email server on the same machine (listening on port 25) receives an
email; the Transport Layer directs that data to the email server
application.
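The port-to-process lookup above can be sketched as a simple dispatch table (an illustrative Python model, not real kernel code; the ports and handler functions are made-up examples):

```python
# Toy demultiplexer: map destination port -> handler, mirroring how
# the transport layer hands each payload to the process listening on
# that port. Ports and handlers here are hypothetical.
handlers = {
    80: lambda data: f"web server got {data!r}",
    25: lambda data: f"mail server got {data!r}",
}

def demultiplex(dest_port, payload):
    handler = handlers.get(dest_port)
    if handler is None:
        # No process listening: a real host would discard the segment
        # (and, for UDP, may send an ICMP "port unreachable").
        return "no listener: segment dropped"
    return handler(payload)

print(demultiplex(80, "GET /"))    # delivered to the web server
print(demultiplex(9999, "noise"))  # no listener on that port
```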
Reliable Data Transfer (primarily TCP):
Concept: Guarantees that data sent by the application layer at the sender will be delivered
correctly and in order to the application layer at the receiver. This is crucial over an
unreliable network layer (like IP, which doesn't guarantee delivery).
Mechanisms:
 Error Detection: Uses checksums to detect if any bits in a segment have been
corrupted during transit.
 Acknowledgments (ACKs): The receiver sends an ACK message to the sender to
confirm successful receipt of data.
 Sequence Numbers: Each segment is assigned a sequence number, allowing the
receiver to reorder out-of-order segments and detect missing segments.
 Timers and Retransmission: The sender sets a timer after sending a segment. If an
ACK isn't received before the timer expires, the sender assumes the segment was lost
and retransmits it.
 Example: When you download a file (using FTP or HTTP, which ride on TCP), you expect
the file to be identical to the original. If a packet of the file is lost or corrupted in transit,
TCP's reliable data transfer mechanisms ensure it's retransmitted until the entire file
arrives perfectly. Without it, your downloaded file would likely be corrupted and unusable.
Flow Control (primarily TCP):
Concept: Prevents the sender from sending data too quickly,
overwhelming the receiver's buffer capacity. It's about matching the
sender's transmission rate to the receiver's consumption rate.
Mechanism: The receiver advertises its "receive window" to the sender,
indicating how much buffer space is currently available. The sender will
not send more data than what the receiver's window allows.
Example: Imagine your computer is downloading a large video file from a
super-fast server. If your computer's buffer for incoming data is small, and
the server keeps blasting data at full speed, your buffer would quickly
overflow, and data would be lost. TCP's flow control mechanism ensures the
server slows down its transmission rate to match your computer's
processing and buffering capacity, preventing data loss due to an
overwhelmed receiver.
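The window mechanism can be modeled in a few lines (a deliberately simplified Python sketch; the byte counts are invented, and the model assumes the receiver drains its buffer and ACKs everything each round, which real TCP does not guarantee):

```python
# Toy flow-control loop: the sender may never have more unacknowledged
# bytes in flight than the receiver's advertised window allows.
def send_with_flow_control(total_bytes, recv_window):
    sent = acked = rounds = 0
    while acked < total_bytes:
        in_flight = sent - acked
        allowed = recv_window - in_flight        # room left in the window
        chunk = min(allowed, total_bytes - sent)
        sent += chunk
        acked = sent   # assumption: receiver drains and ACKs each round
        rounds += 1
    return rounds

# 10,000 bytes through a 2,000-byte window takes 5 round trips.
print(send_with_flow_control(10_000, 2_000))   # 5
```

The point of the sketch: the receive window caps how much the sender can push per round trip, which is exactly how a slow receiver throttles a fast sender.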
Congestion Control (primarily TCP):
Concept: Prevents the network itself from becoming congested (overloaded)
with too much traffic, which can lead to increased delays and packet loss for
all traffic.
Mechanism: TCP uses various algorithms (like "slow start," "congestion
avoidance," and "fast retransmit/recovery") to infer network congestion.
When congestion is detected (e.g., through packet loss or increased round-
trip times), TCP drastically reduces its sending rate to ease the load on the
network.
Example: If many users simultaneously start downloading large files, the
shared network links and routers could become overwhelmed. TCP's
congestion control acts like a responsible driver in traffic: if everyone tries to
speed up during congestion, it gets worse. TCP protocols detect the "traffic
jam" and encourage senders to slow down, allowing the network to clear up.
This ensures a fairer distribution of bandwidth among users and prevents a
"congestion collapse."
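The slow-down-on-loss idea is often summarized as AIMD (additive increase, multiplicative decrease). The toy trace below illustrates only that sawtooth pattern; real TCP congestion control (slow start, fast retransmit, etc.) is considerably more involved:

```python
# Toy AIMD trace: grow the congestion window by one segment per RTT,
# halve it when a loss signals congestion.
def aimd(loss_events):
    cwnd = 1.0                           # congestion window, in segments
    trace = []
    for loss in loss_events:             # one entry per round-trip time
        if loss:
            cwnd = max(1.0, cwnd / 2)    # multiplicative decrease on loss
        else:
            cwnd += 1.0                  # additive increase otherwise
        trace.append(cwnd)
    return trace

print(aimd([False, False, False, True, False]))   # [2.0, 3.0, 4.0, 2.0, 3.0]
```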
Connection Establishment and Termination (TCP only):
Concept: TCP explicitly sets up and tears down a connection before and after data transfer,
respectively. This involves a handshake process.
Mechanism:
 Three-Way Handshake (Establishment):
 Client sends a SYN (synchronize) segment to the server.
 Server sends a SYN-ACK (synchronize-acknowledgment) segment to the client.
 Client sends an ACK (acknowledgment) segment to the server. At this point, the
connection is established.
 Four-Way Handshake (Termination):
 One side sends a FIN (finish) segment.
 The other side acknowledges the FIN.
 The other side then sends its own FIN.
 The first side acknowledges the second FIN.
Example: When your web browser initiates a connection to a website's server, it first performs
the TCP three-way handshake. Only after this handshake is complete does the browser send the
HTTP GET request for the web page. Similarly, when you close your browser tab, a termination
handshake occurs to gracefully close the connection. UDP, being connectionless, does not
perform any of these handshakes.
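From a programmer's point of view, the three-way handshake happens inside a single `connect()` call. A minimal self-contained demonstration (the loopback listener here merely stands in for a real server):

```python
import socket
import threading

# A throwaway loopback listener plays the server role.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Accept one connection in the background, then close it.
t = threading.Thread(target=lambda: server.accept()[0].close())
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN, SYN-ACK, ACK happen in here
print("connection established")
client.close()                       # FIN exchange begins here
t.join()
server.close()
```

By contrast, a UDP socket needs no such call before `sendto()`, which is the connectionless behavior described above.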
Relationship Between Transport and Network Layers
Overview of the Transport Layer in the Internet

The relationship between the Transport Layer and the Network Layer is
fundamental to how computer networks function, particularly in the widely used
TCP/IP model.
They work in tandem but have distinct responsibilities, creating a powerful layered
approach to network communication.

Network Layer: Host-to-Host Delivery


The Network Layer (often associated with the Internet Protocol - IP) is concerned
with host-to-host delivery.
Its primary responsibilities include:
1.Logical Addressing (IP Addresses): Assigning unique IP addresses to
devices to identify them on a network and across different networks.
 Example: When your computer sends data, the Network Layer adds your
computer's IP address (source) and the destination computer's IP address to
the data packet.
2. Routing: Determining the best path for data packets to travel from the source host
to the destination host, potentially across multiple interconnected networks and
through various routers. Routers operate at this layer.
Example: If you send an email to someone across the globe, the Network Layer
protocols (like OSPF or BGP used by routers) will figure out the sequence of
routers your data packet needs to traverse to reach the destination network.

3. Packet Forwarding: Moving individual data packets from one hop (router) to the
next along the determined path.

Key takeaway for Network Layer: It gets the data to the correct computer (host). It
doesn't care which specific application on that computer should receive the data,
nor does it guarantee reliability, order, or flow control. It's a "best-effort" delivery
service.
Transport Layer: Process-to-Process Delivery
The Transport Layer (with protocols like TCP and UDP) builds upon the
services of the Network Layer to provide process-to-process (or
application-to-application) delivery.
Its main responsibilities include:
1.Port Addressing (Port Numbers): Assigning unique port numbers to
different application processes running on a host. This allows the
Transport Layer to identify which application should receive incoming
data.
 Example: A web server listens on port 80, an email client might use
port 25 (for sending) or 110/995 (for receiving), and an online game
might use a specific high-numbered port.
2. Multiplexing and Demultiplexing:
 Multiplexing (Sender): Taking data from multiple application
processes on the sending host, adding port numbers, and
encapsulating them into segments (for TCP) or datagrams (for
UDP), then passing them down to the Network Layer.

 Demultiplexing (Receiver): Receiving packets from the Network Layer,
inspecting the destination port number, and delivering the data to the
correct application process on the receiving host.
3. Reliable Data Transfer (TCP): Ensuring that data arrives at the destination
application completely, in order, and without errors. This involves:
 Sequence numbers to order segments.
 Acknowledgments (ACKs) to confirm receipt.
 Retransmission of lost or corrupted segments using timers.
 Checksums for error detection.
Example: When you download a document via HTTP (which uses TCP), the
Transport Layer ensures that every byte of the file arrives correctly and in the
original order, even if some packets were lost or duplicated on the network.

4. Flow Control (TCP): Preventing a fast sender from overwhelming a slower
receiver by managing the data transmission rate based on the receiver's available
buffer space.
Example: A high-speed server won't flood a slower client with data; the Transport
Layer ensures the data flow is adjusted to the client's capacity.
5. Congestion Control (TCP): Adjusting the sending rate to
alleviate or prevent network congestion, benefiting the overall
network performance.
Example: If the internet backbone is becoming overloaded, TCP
will temporarily reduce its sending window to ease the burden,
preventing a complete network slowdown.
6. Connection Management (TCP): Establishing and terminating
logical connections between applications (three-way handshake
for setup, four-way handshake for teardown).

Key takeaway for Transport Layer: It gets the data to the correct
application on the destination host, and depending on the
protocol (TCP), it can add crucial reliability and control features.
The Interplay: How They Work Together
1.Dependency:
 The Transport Layer relies on the services of the Network Layer.
 It assumes that the Network Layer will handle the hop-by-hop delivery of data
packets between hosts.
 The Transport Layer doesn't care how the packet gets from one host to
another, only that it does get there.

2.Encapsulation:
 At the sender: The Application Layer passes data to the Transport Layer.
The Transport Layer adds its header (containing port numbers, sequence
numbers, etc.) to form a segment (TCP) or datagram (UDP).
 This segment/datagram is then passed to the Network Layer.
 The Network Layer adds its header (containing IP addresses) to form an IP
packet.
 This IP packet is then passed down to lower layers for physical
transmission.
3. Decapsulation:
 At the receiver: The Network Layer receives the IP packet, verifies the
destination IP address, removes its header, and passes the contained
segment/datagram up to the Transport Layer.
 The Transport Layer then examines the port number in the segment/datagram
header, removes its header, and delivers the application data to the correct
process.

4. Scope of Responsibility:
 Network Layer: Responsible for host-to-host delivery across potentially
diverse and large-scale networks. It's like the global postal service ensuring a
letter reaches the right house.
 Transport Layer: Responsible for process-to-process delivery within those
hosts. It's like the internal mailroom or a person sorting letters to specific
individuals/departments within the house.
Why are they separate layers?
The separation of concerns between the Transport and Network Layers is a
cornerstone of network architecture design, offering several benefits:
1.Modularity and Independence:
 Each layer can be developed and updated independently without affecting the
others, as long as the interfaces between them remain consistent.
 This simplifies design, implementation, and maintenance.

2.Flexibility and Protocol Variety:


 The Network Layer can use different routing protocols (e.g., OSPF (Open
Shortest Path First), BGP (Border Gateway Protocol)) without affecting how TCP
or UDP work.
 The Transport Layer can offer different types of services (reliable TCP vs.
unreliable UDP) over the same underlying Network Layer (IP). This allows
applications to choose the appropriate service for their needs.
3. Scalability: The layered approach helps manage the complexity of large networks like
the internet. IP (Network Layer) provides the universal addressing and routing
foundation, while TCP/UDP (Transport Layer) adds the necessary application-specific
services on top.

4. Abstraction: Each layer abstracts away the complexities of the layers below it. An
application programmer using TCP doesn't need to know how IP routing works, and an
IP router doesn't need to understand the specifics of TCP's flow control.

In essence, the Network Layer provides the basic reachability to a destination host,
while the Transport Layer provides the specific communication services (reliability,
ordering, flow control, process addressing) required by applications running on those
hosts. They are two distinct, yet highly complementary, pieces of the networking puzzle.
Multiplexing and Demultiplexing
Multiplexing (at the Sender)
Concept: Multiplexing is the process of taking multiple data streams from different sources
(e.g., different applications on a single computer) and combining them into a single,
aggregated stream for transmission over a shared network medium.
How it works in Networks (Transport Layer):
 Application Data: Various applications (web browser, email client, chat app, streaming
video) generate data.
 Port Numbers: Each application is assigned a unique port number on the sending host
(e.g., web browser might use a random ephemeral port, email client might use 50000).
 Encapsulation: The Transport Layer takes a chunk of data from each application, adds a
header that includes both the source port number (identifying the sending application)
and the destination port number (identifying the intended receiving application on the
other host), along with other information (like sequence numbers for TCP). This combined
unit is called a segment (for TCP) or a datagram (for UDP).
 Passing to Network Layer: These segments/datagrams are then passed down to the
Network Layer. The Network Layer will then add its own header (including source and
destination IP addresses) to create a packet, and send it out onto the network.
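The encapsulation step above can be made concrete by packing a UDP-style header in front of a payload (a sketch using Python's `struct` module; the port numbers are invented, and the checksum is left at zero, meaning "not computed"):

```python
import struct

def encapsulate(src_port, dst_port, payload: bytes) -> bytes:
    # UDP-style 8-byte header: source port, destination port,
    # total length (header + data), checksum. "!HHHH" packs four
    # 16-bit fields in network (big-endian) byte order.
    length = 8 + len(payload)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

segment = encapsulate(49152, 53, b"dns-query")
src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
print(src, dst, length)   # 49152 53 17
```

The receiver reverses this: unpack the header, read the destination port, strip the 8 bytes, and hand the payload to the matching process.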
Demultiplexing (at the Receiver)
Concept: Demultiplexing is the reverse process of multiplexing. It involves taking the
single, aggregated data stream received from the network and separating it back into
its original, individual data streams, delivering each stream to its intended
destination application process.
How it works in Networks (Transport Layer):
 Receive Packet: The Network Layer on the receiving host receives an IP packet.
After checking the destination IP address, it removes its header and passes the
contained Transport Layer segment/datagram up to the Transport Layer.
 Inspect Header: The Transport Layer at the receiver examines the destination
port number in the segment's/datagram's header.
 Direct to Application: Based on this port number, the Transport Layer identifies
which application process on that host is "listening" for data on that specific port.
It then delivers the data payload of the segment/datagram to that correct
application.
Why are Multiplexing and Demultiplexing Important?
1.Resource Sharing: They allow multiple applications and processes to share a
single physical network connection (like your Ethernet cable or Wi-Fi antenna)
simultaneously. Without them, you'd need a separate physical connection for every
application or connection, which is impractical and expensive.
2.Efficiency: They maximize the utilization of available bandwidth. Instead of idle
periods on a connection while one application finishes its data transfer, others can
utilize the same medium.
3.Process-to-Process Delivery: They extend the host-to-host delivery service of the
Network Layer to the vital process-to-process delivery that applications require.
4.Simplicity for Applications: Applications don't need to worry about the underlying
network infrastructure; they just send and receive data to/from specific port
numbers, and the Transport Layer handles the rest.
In essence, multiplexing and demultiplexing are the mechanisms that allow the
internet to function as a multi-tasking, multi-user environment, ensuring that data
packets reach not just the right computer, but the right program on that computer.
Connectionless Transport: UDP, UDP Segment Structure, UDP Checksum
Concept: In connectionless transport, data is sent from a source to a destination without first
establishing a dedicated, continuous communication path or "connection." Each data unit
(often called a packet or datagram) is treated independently, and no prior handshaking or
agreement is required between the sender and receiver before transmission.
Key Characteristics of Connectionless Transport:
 No Handshake: There's no connection setup phase (like TCP's three-way handshake) or
teardown phase. Data transfer begins immediately.
 Independent Data Units: Each packet is self-contained and carries all the necessary
addressing information (source and destination port numbers).
 No Guaranteed Delivery: The protocol does not guarantee that data will reach its
destination, arrive in order, or be free of errors. Lost, duplicated, or out-of-order packets
are possible.
 Minimal Overhead: The lack of connection management and reliability mechanisms
means less overhead in terms of header size and control messages. This makes it faster
and more efficient for certain types of applications.
 Stateless: The communicating endpoints don't maintain a "state" about the ongoing
conversation (e.g., what packets have been sent/received, what sequence numbers are
next).
UDP (User Datagram Protocol)
UDP is the prime example of a connectionless transport-layer protocol.
It's often referred to as "Unreliable Datagram Protocol" because it
doesn't offer the reliability guarantees of TCP.
However, its simplicity and speed make it ideal for specific applications.

Why use UDP if it's unreliable?


For many applications, the overhead of establishing and maintaining a
reliable connection, and dealing with retransmissions, is unacceptable
due to latency requirements.
These applications can tolerate some data loss or handle reliability at
the application layer.
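The "no handshake" property is visible directly in code: a UDP sender can fire a datagram at any address immediately (a self-contained loopback demo; in real use the receiver would be on another host):

```python
import socket

# Receiver: bind a datagram socket to an OS-chosen loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: no connect(), no handshake -- just send.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)   # b'hello'
send_sock.close()
recv_sock.close()
```

Had the datagram been lost, neither side would have noticed: there is no ACK, no timer, and no retransmission unless the application adds them itself.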
UDP Checksum
The UDP checksum is a mechanism for error detection,
not error correction or guaranteed delivery.
It helps the receiver determine if any bits in the UDP
segment (header and data) have been accidentally altered
during transmission.
In IPv4, computing the checksum is optional: a sender that does
not calculate it transmits the field as all zeros, and a
computed value that happens to be zero is sent as all ones
instead.
 In IPv6, the checksum is mandatory, because the IPv6 header
carries no checksum of its own.
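The checksum itself is the standard Internet checksum: one's-complement sum of 16-bit words, then complemented. A sketch of the word-summing core (real UDP additionally covers a pseudo-header containing the IP addresses, which is omitted here):

```python
def internet_checksum(data: bytes) -> int:
    # One's-complement sum of 16-bit big-endian words, wrapping
    # any carry out of bit 16 back into the low bits, then
    # complementing the result.
    if len(data) % 2:
        data += b"\x00"                  # pad odd-length data with zero
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

segment = b"\x00\x35\xc0\x01\x00\x0a\x00\x00"
print(hex(internet_checksum(segment)))   # 0x3fbf
```

A handy property for the receiver: summing the data together with its checksum yields all ones, so the check reduces to "does the total complement to zero?".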
Principles of Reliable Data Transfer: Building a Reliable Data Transfer Protocol

Why Do We Need Reliable Data Transfer?


The Network Layer, particularly IP, provides a "best-effort" delivery service. This
means:
 Packet Loss: Packets can be dropped due to network congestion, router buffer
overflows, or physical link failures.
 Bit Errors/Corruption: Bits within a packet can be flipped due to noise on the
transmission medium.
 Out-of-Order Delivery: Packets can take different paths through the network,
leading to them arriving at the destination in a different order than they were sent.
 Duplication: Due to retransmissions (which we'll discuss), a receiver might
sometimes get multiple copies of the same packet.
The goal of a Reliable Data Transfer protocol at the Transport Layer (like TCP) is to
abstract away these network imperfections and provide a seemingly perfect, error-
free, and ordered stream of data to the application layer.
Fundamental Principles of building Reliable Data Transfer
To achieve reliability over an unreliable channel, RDT protocols typically employ a combination of
these core mechanisms:
1. Error Detection (Checksums):
 Principle: The sender computes a checksum (a small value derived from the data) and
includes it in the packet. The receiver performs the same calculation on the received data.
 Purpose: If the receiver's checksum doesn't match the sender's, it indicates that bits have
been corrupted during transit.
 Example: UDP and TCP both use checksums to detect corrupted segments. If corruption is
detected, TCP will discard the segment and eventually retransmit it.
2. Feedback (Acknowledgments - ACKs / Negative Acknowledgments - NAKs):
 Principle: The receiver sends control messages back to the sender to indicate whether a
packet was received correctly or incorrectly.
 ACK (Acknowledgment): A message from the receiver confirming that a specific packet (or
range of packets) has been received correctly.
 NAK (Negative Acknowledgment): A message from the receiver indicating that a specific
packet was received but was corrupted, or is missing. NAKs explicitly tell the sender what
needs to be retransmitted. (Note: TCP primarily uses ACKs and duplicate ACKs, not NAKs.)
 Purpose: Provides the sender with information about the status of the transmitted data,
allowing it to decide whether retransmission is needed.
3. Retransmission:
 Principle: If the sender does not receive a positive acknowledgment (ACK) for a packet within a
certain time, or receives a negative acknowledgment (NAK), it re-sends a copy of that packet.
 Purpose: To recover from packet loss or corruption.
4. Sequence Numbers:
 Principle: Each packet (or segment) is assigned a unique, sequential number.
 Purpose:
 Ordering: Allows the receiver to correctly reassemble packets that might arrive out of order.
 Duplicate Detection: Helps the receiver identify and discard duplicate copies of packets that
might have been retransmitted unnecessarily (e.g., if an ACK was lost, leading to a
retransmission).
 Tracking: Enables the sender to know which specific packet is being acknowledged.
5. Timers:
 Principle: The sender starts a timer when it transmits a packet.
 Purpose: If the timer expires before an ACK for that packet is received, the sender assumes the
packet (or its ACK) has been lost and triggers a retransmission. This is crucial for handling silent
packet loss.
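These mechanisms combine in the simplest reliable protocol, stop-and-wait: send one packet, arm a timer, retransmit until the ACK arrives. A toy Python simulation of the sender side (the loss probability and seed are made-up parameters; a "timeout" is modeled as one loop iteration):

```python
import random

# Stop-and-wait sender over a simulated lossy channel: each packet is
# (re)transmitted until its ACK gets through. A coin flip stands in
# for "packet or ACK lost in the network".
def rdt_send(packets, loss_prob=0.3, seed=42):
    rng = random.Random(seed)
    transmissions = 0
    for seq, pkt in enumerate(packets):
        while True:
            transmissions += 1               # (re)transmit packet seq
            if rng.random() >= loss_prob:    # packet and its ACK survived
                break                        # ACK received: next packet
            # otherwise: timer expires, loop around and retransmit
    return transmissions

sent = rdt_send(["a", "b", "c", "d"])
print(f"4 packets delivered in {sent} transmissions")
```

With `loss_prob=0`, exactly one transmission per packet suffices; as losses rise, the retransmission count grows, which is precisely the inefficiency that pipelined protocols (next section) attack.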
Pipelined Reliable Data Transfer Protocols, Go-Back-N, Selective Repeat, Connection-Oriented Transport
Pipelined Reliable Data Transfer Protocols
 In computer networking, reliable data transfer protocols are essential for ensuring that
data is delivered from a sender to a receiver without errors, loss, or duplication.
 A simple approach, like the Stop-and-Wait protocol, is highly inefficient, especially on
links with a high bandwidth-delay product (high speed and long distance).
 This is because the sender must transmit a packet and then wait for an
acknowledgment (ACK) before sending the next one, leaving the network channel idle
for long periods.
 Pipelined reliable data transfer protocols overcome this limitation by allowing the
sender to transmit multiple packets without waiting for an ACK for each one. This
technique, known as pipelining, significantly improves network utilization and
throughput.
 To achieve this, pipelined protocols require a larger sequence number space and the
ability for both the sender and receiver to buffer multiple packets.

There are two primary approaches to pipelined error recovery: Go-Back-N and Selective
Repeat.
Go-Back-N (GBN) Protocol
The Go-Back-N protocol is a simple but effective pipelined protocol.
Sender: The sender maintains a send window of size N, which represents the
maximum number of unacknowledged packets that can be in flight at any
given time.
The sender keeps a single timer for the oldest unacknowledged packet.
When a packet is sent, its sequence number is marked.
If the timer for the oldest unacknowledged packet expires, the sender
assumes that the packet (and potentially subsequent packets) has been
lost.
It then retransmits that packet and all subsequent packets in its window,
even if they were correctly received by the receiver.
This is where the name "Go-Back-N" comes from—the sender goes back to
the unacknowledged packet and retransmits everything from that point.
Receiver: The receiver's window size is always 1.
It only accepts packets that arrive in order and with the correct sequence
number.
If a packet with an out-of-order sequence number arrives, the receiver
discards it and sends an ACK for the last correctly received packet.
This simplifies the receiver's logic, as it doesn't need to buffer out-of-order
packets.
The acknowledgments are cumulative, meaning an ACK for packet k implies
that all packets up to and including k have been received correctly.

Key characteristics of Go-Back-N:


Simple receiver implementation.
Can lead to unnecessary retransmissions of packets that were correctly
received but followed a lost packet. This can waste bandwidth, especially on
links with a high error rate.
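The resend-everything behavior can be traced with a toy simulation (an illustrative Python model, not a real implementation; the window size and the single lost transmission are invented parameters):

```python
# Toy Go-Back-N trace: when the oldest unACKed packet is lost, the
# sender resends it and everything after it in the window, even
# packets the receiver already got correctly.
def go_back_n(num_packets, window, drop_first_tx_of):
    sends, delivered = [], []
    next_expected = 0        # receiver accepts only the in-order packet
    attempted = set()        # packets transmitted at least once
    base = 0                 # oldest unacknowledged packet
    while base < num_packets:
        for seq in range(base, min(base + window, num_packets)):
            sends.append(seq)
            lost = seq in drop_first_tx_of and seq not in attempted
            attempted.add(seq)
            if not lost and seq == next_expected:
                delivered.append(seq)
                next_expected += 1
            # out-of-order arrivals are discarded by the GBN receiver
        base = next_expected   # cumulative ACK advances the base
    return sends, delivered

sends, delivered = go_back_n(4, window=4, drop_first_tx_of={2})
print(sends)   # [0, 1, 2, 3, 2, 3]: packet 3 is resent although it arrived
```

Losing only packet 2 forces packet 3 to be retransmitted as well: the wasted bandwidth that Selective Repeat is designed to avoid.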
Selective Repeat (SR) Protocol
The Selective Repeat protocol is a more refined pipelined protocol that addresses the
inefficiency of Go-Back-N.
 Sender: The sender also maintains a send window of size N. However, unlike Go-Back-N, the
sender sets a separate timer for each unacknowledged packet. If a timer expires, the sender
retransmits only the single packet for which the timer expired.
 Receiver: The receiver's window size is also N (or at least greater than 1). The receiver can
accept and buffer packets that arrive out of order, as long as they fall within its window. When
a packet is received, the receiver sends a separate, selective acknowledgment for that
specific packet. Once a complete sequence of in-order packets has been received and
buffered, the receiver delivers them to the application layer.

Key characteristics of Selective Repeat:


 Efficient use of bandwidth, as it only retransmits lost packets.
 More complex implementation for both the sender (managing multiple timers) and the
receiver (buffering and reordering packets).
 The size of the send and receive windows must be chosen carefully to avoid ambiguity in
handling duplicate packets, typically satisfying the condition N ≤ 2^m / 2 = 2^(m-1), where m is
the number of bits in the sequence-number field.
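The window-size bound can be checked concretely (an illustrative Python sketch of the ambiguity argument, using made-up values m = 3 and window 5):

```python
def max_sr_window(m_bits):
    # Largest safe Selective Repeat window: half the sequence-number
    # space, i.e. 2^(m-1).
    return 2 ** m_bits // 2

m = 3                                  # sequence numbers 0..7
print(max_sr_window(m))                # 4

# With window 5 (> 4): suppose packets 0..4 were all delivered but every
# ACK was lost. The sender may retransmit any of 0..4, while the receiver
# now expects the *new* numbers 5, 6, 7, 0, 1 -- so numbers 0 and 1 could
# be either stale retransmissions or fresh data.
old_retransmissions = set(range(0, 5))
new_expected = {(5 + i) % 2 ** m for i in range(5)}
print(sorted(old_retransmissions & new_expected))   # [0, 1]
```

With a window of at most 2^(m-1), the two sets can never overlap, so every received sequence number has an unambiguous meaning.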
Connection-Oriented Transport
Connection-oriented transport is a communication paradigm
where a logical connection or session is established between two
communicating applications before data transfer begins.
This is in contrast to connectionless transport, where data is sent
as independent packets without any prior setup.
The most prominent example of a connection-oriented transport
protocol is the Transmission Control Protocol (TCP), which is the
foundation of the internet's most widely used applications, such as
the World Wide Web, email, and file transfer.
The process of a connection-oriented transport service typically involves three phases:
1. Connection Establishment: The two endpoints perform a "handshake" to agree on the
parameters of the connection. For TCP, this is a three-way handshake. The client sends a SYN
(synchronize) segment, the server replies with a SYN-ACK (synchronize-acknowledge), and the client
completes the handshake with an ACK. This process establishes a shared state and initial sequence
numbers for the data transfer.
2. Data Transfer: Once the connection is established, data can be reliably and efficiently
exchanged. Key features of connection-oriented protocols during this phase include:
 Reliable Data Transfer: Protocols like TCP use pipelining (similar to Go-Back-N and Selective
Repeat), sequence numbers, acknowledgments, and timers to ensure that all data is
delivered without loss or corruption.
 In-Order Delivery: The receiver uses sequence numbers to reconstruct the data stream in the
correct order, even if packets arrive out of sequence.
 Flow Control: The protocol prevents a fast sender from overwhelming a slow receiver by using
a receive window, which tells the sender how much buffer space is available.
 Congestion Control: The protocol dynamically adjusts the sending rate to prevent network
congestion.
3. Connection Termination: After the data transfer is complete, the connection is closed. For TCP,
this is a four-way handshake (two pairs of FIN-ACK exchanges) to ensure both sides have finished
sending and receiving data.
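The three phases map directly onto the Berkeley sockets API. A minimal loopback sketch (illustrative only; the echo server and the OS-chosen port are assumptions of this example, not part of the protocol itself):

```python
import socket
import threading

# Phase 1 (server side): bind/listen; accept() completes the server's half
# of the three-way handshake.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))  # Phase 2: echo the data back
    conn.close()                   # Phase 3: server sends its FIN

threading.Thread(target=serve, daemon=True).start()

# Phase 1 (client side): connect() triggers the SYN, SYN-ACK, ACK exchange.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")              # Phase 2: reliable, in-order byte stream
reply = cli.recv(1024)
cli.close()                        # Phase 3: client's FIN/ACK exchange
srv.close()
print(reply)  # -> b'hello'
```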
| Feature | Pipelining | Go-Back-N | Selective Repeat | Connection-Oriented Transport (TCP) |
|---|---|---|---|---|
| Concept | Sends multiple packets without waiting for an ACK. | A pipelined protocol where the sender retransmits a lost packet and all subsequent packets. | A pipelined protocol where the sender retransmits only the lost packet. | A communication model that establishes a session before data transfer. |
| Sender Window | Size N > 1 | Size N > 1 | Size N > 1 | Sliding window (managed by flow and congestion control). |
| Receiver Window | Size N ≥ 1 | Size 1 | Size N > 1 | Sliding window (managed by flow control). |
| Retransmission | Based on timeouts. | Retransmits the lost packet and all subsequent packets. | Retransmits only the lost packet. | Similar to Selective Repeat, but with cumulative ACKs and out-of-order buffering. |
| ACKs | Positive ACKs. | Cumulative ACKs. | Individual (selective) ACKs. | Cumulative ACKs with selective acknowledgments (SACKs) as an option. |
| Complexity | More complex than Stop-and-Wait. | Simple receiver, complex sender. | Complex sender and receiver. | Highly complex and robust. |
| Bandwidth | High utilization. | Less efficient on noisy channels due to retransmissions. | Efficient on noisy channels, minimal retransmissions. | High utilization and efficient bandwidth usage. |
TCP: The TCP Connection, TCP Segment Structure, Round-Trip Time Estimation and Timeout
The TCP Connection
TCP (Transmission Control Protocol) is a connection-oriented protocol, meaning it establishes a logical
connection between a sender and a receiver before any data is transferred. This process is crucial for
ensuring reliable and ordered data delivery.
Connection Establishment: The Three-Way Handshake
A TCP connection is established using a three-way handshake:
1. SYN (Synchronize): The client initiates the connection by sending a TCP segment with the SYN flag set.
This segment contains a randomly chosen initial sequence number (ISN), let's call it A. The client is
essentially saying, "I want to start a connection and my initial sequence number is A."
2. SYN-ACK (Synchronize-Acknowledge): The server, upon receiving the SYN segment, responds with a
segment that has both the SYN and ACK flags set. The server's segment contains its own random ISN, let's
call it B. The acknowledgment number is set to A+1, which acknowledges the client's SYN and indicates the
next sequence number it expects from the client. The server is saying, "I received your request, my
sequence number is B, and I'm expecting your next packet to have a sequence number of A+1."
3. ACK (Acknowledge): The client completes the handshake by sending a segment with the ACK flag set.
The sequence number is A+1 and the acknowledgment number is B+1, confirming that it received the
server's SYN-ACK. The client can now begin sending data.
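The sequence and acknowledgment numbers exchanged above can be traced with a toy helper (the ISNs A = 100 and B = 300 are arbitrary illustrative values, not anything mandated by TCP):

```python
def three_way_handshake(client_isn, server_isn):
    """Return the three handshake segments as (flags, seq, ack) tuples;
    ack=None means the ACK flag is not set in that segment."""
    syn = ("SYN", client_isn, None)                     # client -> server
    syn_ack = ("SYN-ACK", server_isn, client_isn + 1)   # server -> client
    ack = ("ACK", client_isn + 1, server_isn + 1)       # client -> server
    return [syn, syn_ack, ack]

for segment in three_way_handshake(100, 300):
    print(segment)
# -> ('SYN', 100, None)
# -> ('SYN-ACK', 300, 101)
# -> ('ACK', 101, 301)
```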
Connection Termination: The Four-Way Handshake
Closing a TCP connection is a two-way process, where each side
independently terminates its end of the connection. This typically
uses a four-way handshake:
1. FIN (Finish): The host that wants to close the connection sends a
segment with the FIN flag set, indicating it has no more data to send.
2. ACK: The receiving host acknowledges the FIN with an ACK. At this
point, the connection is "half-closed." The first host can no longer
send data, but it can still receive data from the other side.
3. FIN: Once the second host is finished sending data, it also sends a
segment with the FIN flag set.
4. ACK: The first host acknowledges the second host's FIN with a
final ACK, and the connection is fully terminated.
TCP Segment Structure
A TCP segment is the basic unit of data transfer in TCP. It consists of a header
and a data payload. The header contains crucial control information and is
typically a minimum of 20 bytes long.
Key Fields in the TCP Header:
Source Port (16 bits): Identifies the port number of the sending application.
Destination Port (16 bits): Identifies the port number of the receiving
application.
Sequence Number (32 bits): The sequence number of the first byte of data
in the current segment. It ensures in-order delivery.
Acknowledgment Number (32 bits): If the ACK flag is set, this number
contains the sequence number of the next byte of data the sender is
expecting from the receiver. It provides a cumulative acknowledgment.
Data Offset (4 bits): Also known as Header Length, this field indicates the
length of the TCP header in 32-bit words. Its value ranges from 5 (20 bytes) to
15 (60 bytes).
Flags (9 bits): Control bits that manage the state of the connection. The six original flags are:
SYN: Used to initiate a connection.
ACK: Indicates that the acknowledgment number field is valid.
FIN: Used to terminate a connection.
RST: Resets the connection, often due to an error.
PSH: Requests that the receiver "push" the data to the application immediately.
URG: Indicates that the urgent pointer field is valid.
Window Size (16 bits): The number of bytes the receiver is willing to accept,
starting from the byte specified in the acknowledgment number. This is used for
flow control.
Checksum (16 bits): A field used for error detection to ensure the integrity of the
header and data.
Options (variable): An optional field that can be used for various purposes, such
as specifying the maximum segment size (MSS), window scaling, or timestamps.
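The fixed 20-byte header layout described above can be packed and unpacked with Python's struct module. A minimal parsing sketch (the field values in the example are illustrative, not from a real capture):

```python
import struct

# '!' = network byte order. Fixed 20-byte TCP header layout:
# H source port, H destination port, I sequence number, I ack number,
# H data offset + flags, H window size, H checksum, H urgent pointer.
TCP_HEADER = struct.Struct("!HHIIHHHH")

def parse_tcp_header(data):
    sport, dport, seq, ack, off_flags, window, checksum, urg = \
        TCP_HEADER.unpack(data[:20])
    data_offset = (off_flags >> 12) & 0xF   # header length in 32-bit words
    flags = off_flags & 0x1FF               # low 9 bits of the field
    return {
        "sport": sport, "dport": dport, "seq": seq, "ack": ack,
        "data_offset": data_offset, "window": window,
        "FIN": bool(flags & 0x01), "SYN": bool(flags & 0x02),
        "ACK": bool(flags & 0x10),
    }

# Build a SYN segment header: data offset = 5 words (20 bytes), SYN flag set.
raw = TCP_HEADER.pack(12345, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["dport"], hdr["data_offset"], hdr["SYN"])  # -> 80 5 True
```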
Round-Trip Time Estimation and Timeout
For TCP to be reliable, it must be able to detect lost segments and retransmit them. This
requires an accurate estimation of the Round-Trip Time (RTT) and a dynamic setting of the
retransmission timeout (RTO).
Round-Trip Time (RTT) Estimation
 TCP's RTT is the time it takes for a segment to travel to the destination and for its
acknowledgment to return. A key challenge is that network conditions can change, so
RTT is not a fixed value. TCP uses an adaptive algorithm to estimate the RTT.
 The most common algorithm is based on the work of Van Jacobson. It calculates a
"smoothed" RTT (EstimatedRTT) and a deviation in the RTT (DevRTT) to account for
network fluctuations.
 EstimatedRTT: This is a weighted average of the previous EstimatedRTT and the most
recent SampleRTT (the time measured for a specific segment):
EstimatedRTT = (1 − α) · EstimatedRTT + α · SampleRTT, where α is typically 0.125.
 DevRTT: This measures the deviation of the SampleRTT from the estimated average, which
helps in setting a more robust timeout:
DevRTT = (1 − β) · DevRTT + β · |SampleRTT − EstimatedRTT|, where β is typically 0.25.
Timeout (RTO)
 The Retransmission Timeout (RTO) is the duration TCP waits for an
acknowledgment before retransmitting a segment.
 The RTO is calculated from the EstimatedRTT and DevRTT, typically as
RTO = EstimatedRTT + 4 · DevRTT, to make it responsive to current network conditions.
 The inclusion of DevRTT makes the timeout value larger when there is a high
variance in RTT, preventing unnecessary retransmissions on a congested network.
 If a timeout occurs, the RTO is doubled for each subsequent retransmission of the
same segment, a strategy known as exponential backoff, to avoid further
congesting an already slow network.
 A key rule, known as Karn's Algorithm, is that RTT samples from retransmitted
segments are not used in the RTT estimation to avoid ambiguity caused by
retransmissions.
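These rules can be sketched as a small estimator. The constants α = 0.125 and β = 0.25 and the EstimatedRTT + 4·DevRTT timeout follow the standard Jacobson/Karels formulas; the class itself is an illustrative simplification, not a real TCP implementation:

```python
class RttEstimator:
    """Jacobson/Karels-style RTT estimation with exponential backoff."""
    ALPHA, BETA = 0.125, 0.25

    def __init__(self, first_sample):
        # Common initialization from the first RTT measurement.
        self.estimated_rtt = first_sample
        self.dev_rtt = first_sample / 2
        self.backoff = 1  # doubled on every timeout, reset on a fresh sample

    def sample(self, sample_rtt):
        """Fold in a new measurement. Per Karn's algorithm, never call this
        with a sample taken from a retransmitted segment."""
        self.dev_rtt = ((1 - self.BETA) * self.dev_rtt
                        + self.BETA * abs(sample_rtt - self.estimated_rtt))
        self.estimated_rtt = ((1 - self.ALPHA) * self.estimated_rtt
                              + self.ALPHA * sample_rtt)
        self.backoff = 1

    def on_timeout(self):
        self.backoff *= 2  # exponential backoff after a retransmission

    @property
    def rto(self):
        return (self.estimated_rtt + 4 * self.dev_rtt) * self.backoff

est = RttEstimator(0.100)   # first SampleRTT = 100 ms
print(round(est.rto, 3))    # -> 0.3  (0.100 + 4 * 0.050)
est.sample(0.120)           # new measurement refines the estimate
est.on_timeout()            # a timeout doubles the effective RTO
```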
Reliable Data Transfer, Flow Control, TCP Connection Management
Reliable Data Transfer (RDT)
 RDT protocols ensure that data is delivered correctly and in the right order, even over unreliable
networks. This is achieved through a combination of mechanisms including checksums, sequence
numbers, acknowledgements (ACKs), and timers.
 RDT 2.0 (ARQ): Introduces error detection with checksums and uses acknowledgements (ACK) and
negative acknowledgements (NAK) to signal if a packet was received correctly or not.
 RDT 3.0 (Stop-and-Wait): Adds a timer to RDT 2.0. If an ACK isn't received within a specific time, the
sender assumes the packet is lost and retransmits it. It also uses sequence numbers to identify
duplicate packets.
 Pipelined Protocols: To increase efficiency, these protocols allow the sender to transmit multiple
packets without waiting for an acknowledgement for each one.
 Go-Back-N ARQ: The sender can transmit up to N packets. If a packet is lost, the receiver
discards all subsequent packets. The sender, upon receiving a NAK or timing out, retransmits the
lost packet and all the packets that followed it.
 Selective Repeat ARQ: A more efficient approach where the receiver accepts out-of-order
packets and buffers them. It sends a selective ACK for each correctly received packet. If a packet
is lost, the sender only retransmits that specific lost packet, not the entire window.
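The Go-Back-N behaviour above can be illustrated with a minimal sender simulation (bookkeeping only, no real network I/O; the names `base` and `next_seq` follow the usual textbook variables):

```python
class GoBackNSender:
    """Minimal Go-Back-N sender bookkeeping for a window of size N."""
    def __init__(self, n):
        self.n = n
        self.base = 0       # oldest unacknowledged sequence number
        self.next_seq = 0   # next sequence number to use
        self.sent = []      # log of every transmission, for illustration

    def can_send(self):
        return self.next_seq < self.base + self.n

    def send(self):
        assert self.can_send()
        self.sent.append(self.next_seq)
        self.next_seq += 1

    def on_ack(self, ack):
        # Cumulative ACK: everything up to and including `ack` is confirmed.
        self.base = max(self.base, ack + 1)

    def on_timeout(self):
        # Go back: retransmit every unacknowledged packet in the window.
        for seq in range(self.base, self.next_seq):
            self.sent.append(seq)

s = GoBackNSender(n=4)
for _ in range(4):
    s.send()                 # transmit packets 0..3
assert not s.can_send()      # window is full
s.on_ack(1)                  # packets 0 and 1 acknowledged; window slides
s.send()                     # packet 4 now fits in the window
s.on_timeout()               # ACK for 2 never arrives: resend 2, 3 and 4
print(s.sent)  # -> [0, 1, 2, 3, 4, 2, 3, 4]
```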
Flow Control
TCP uses a sliding window protocol for flow control. The receiver advertises a
"receive window" size to the sender, which is the amount of available buffer space it
has.
The sender is only allowed to send data up to the size of the receiver's advertised
window.
The window "slides" as the receiver processes data and sends ACKs, allowing the
sender to transmit more data.
If the receiver's buffer is full, it advertises a window size of zero, and the sender
stops transmitting until the window opens up again.
This mechanism prevents a fast sender from overwhelming a slow receiver.
Congestion control, while related, is a different mechanism where the sender
limits its transmission rate to avoid overwhelming the network itself, not just the
receiver. This is achieved through algorithms like Slow Start and Congestion
Avoidance.
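The sender-side arithmetic implied by the sliding window can be sketched as follows (the LastByteSent/LastByteAcked naming follows the usual textbook formulation; the byte counts are illustrative):

```python
def usable_window(last_byte_sent, last_byte_acked, rcv_window):
    """Bytes the sender may still transmit without overflowing the
    receiver's advertised buffer."""
    in_flight = last_byte_sent - last_byte_acked  # unacknowledged bytes
    return max(0, rcv_window - in_flight)

# Receiver advertises 4096 bytes; 1000 bytes are currently in flight.
print(usable_window(5000, 4000, 4096))  # -> 3096
# A zero window stalls the sender entirely until the window reopens.
print(usable_window(5000, 4000, 0))     # -> 0
```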
TCP Connection Management
TCP connection management is the process of establishing and terminating a
connection between a sender and receiver. This is handled through a series of
"handshakes":
• Three-way handshake (connection establishment):
1. The client sends a SYN (synchronize) packet to the server.
2. The server responds with a SYN-ACK (synchronize-acknowledge) packet.
3. The client sends a final ACK (acknowledge) packet, and the connection is
established.
• Four-way handshake (connection termination):
1. One side sends a FIN (finish) packet to the other.
2. The other side responds with an ACK.
3. The second side then sends its own FIN packet.
4. The first side responds with a final ACK, and the connection is terminated. This
process allows each side to independently close its end of the connection.
Sample Questions
1. Explain the key services provided by the Transport Layer in computer networks.
2. Describe the relationship between the Transport Layer and the Network Layer with
suitable examples.
3. Give an overview of the Transport Layer in the Internet and list its main protocols.
4. What is multiplexing and demultiplexing in the Transport Layer? Explain their role in
communication.
5. Describe the structure of a UDP segment and explain how the UDP checksum is
calculated.
6. Discuss the principles of reliable data transfer and the steps involved in building a reliable
data transfer protocol.
7. Differentiate between Go-Back-N and Selective Repeat pipelined protocols with
diagrams.
8. Explain the TCP connection establishment and termination process.
9. Describe the TCP segment structure and the method for estimating round-trip time and
timeout values.
10. What is flow control in TCP? How is it implemented using the sliding window
mechanism?