Transport Layer
The services provided by the transport layer protocols can be divided into five categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:
The transport layer carries the entire message from a source process to the destination process; it therefore ensures end-to-end delivery of the complete message.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.
o Error control
o Sequence control
o Loss control
o Duplication control
Error Control
o The primary role of reliability is error control. In reality, no transmission is 100 percent error-free. Therefore, transport layer protocols are designed to provide error-free delivery.
o The data link layer also provides an error-handling mechanism, but it ensures only node-to-node error-free delivery, and node-to-node reliability does not guarantee end-to-end reliability.
o The data link layer checks for errors on each individual link. If an error is introduced inside one of the routers, the data link layer will not catch it, because it only detects errors introduced between the two ends of a single link. Therefore, the transport layer performs end-to-end error checking to ensure that each packet has arrived correctly.
Sequence Control
o The second aspect of the reliability is sequence control which is implemented at the
transport layer.
o On the sending end, the transport layer is responsible for dividing the data received from the upper layers into segments the lower layers can use. On the receiving end, it ensures that the various segments of a transmission are correctly reassembled.
Loss Control
Loss Control is the third aspect of reliability. The transport layer ensures that all fragments of a transmission arrive at the destination, not just some of them. On the sending end, all fragments of a transmission are given sequence numbers by the transport layer. These sequence numbers allow the receiver's transport layer to identify any missing segment.
Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrive at the destination. Just as sequence numbers are used to identify lost packets, they also allow the receiver to identify and discard duplicate segments.
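Both loss control and duplication control rest on the same sequence numbers. A minimal Python sketch (the function name and segment representation are illustrative, not part of any transport standard):

```python
def analyze_segments(received, first_seq, last_seq):
    """Given the sequence numbers of received segments, report which
    segments were lost and which arrived more than once."""
    expected = set(range(first_seq, last_seq + 1))
    seen = set()
    duplicates = []
    for seq in received:
        if seq in seen:
            duplicates.append(seq)      # duplicate: receiver discards it
        seen.add(seq)
    missing = sorted(expected - seen)   # lost: candidates for retransmission
    return missing, duplicates

# Segments 0..5 were sent; segment 2 never arrived, segment 4 arrived twice.
missing, dups = analyze_segments([0, 1, 3, 4, 4, 5], 0, 5)
# missing == [2], dups == [4]
```

The same bookkeeping serves both goals: a gap in the sequence triggers retransmission, a repeat triggers a discard.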
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much data, it discards packets and asks for their retransmission. This increases network congestion and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding window protocol, which makes data transmission more efficient and controls the flow of data so that the receiver does not become overwhelmed. TCP's sliding window is byte-oriented rather than frame-oriented.
Multiplexing
The transport layer performs multiplexing so that several application processes on the same host can share the network service; each process is distinguished by its port number.
Addressing
o According to the layered model, the transport layer interacts with the functions of the
session layer. Many protocols combine session, presentation, and application layer
protocols into a single layer known as the application layer. In these cases, delivery to the
session layer means the delivery to the application layer. Data generated by an application
on one machine must be transmitted to the correct application on another machine. In
this case, addressing is provided by the transport layer.
o The transport layer provides the user's address, specified as a station and a port. The port variable represents a particular transport-service (TS) user of a specified station, known as a Transport Service Access Point (TSAP). Each station has only one transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.
Process-to-Process Delivery
Process-to-Process Delivery: A transport-layer protocol's first task is to perform process-to-process delivery. A process is an application-layer entity that uses the services of the transport layer. Two processes typically communicate in a client/server relationship.
Client/Server Paradigm
There are many ways to achieve process-to-process communication, and the most common is the client/server paradigm. A process on the local host, called the client, requests a service from a process on the remote host, called the server. Both processes (client and server) carry the same name as the service. The combination of an IP address and a port number is called a socket address; together they define a host and a process on that host.
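In Python's standard socket API a socket address is literally this (IP address, port) tuple. A minimal UDP echo sketch on the loopback interface (port 0 lets the OS pick a free port, so nothing here is a well-known service):

```python
import socket

# Server side: bind a socket address (IP + port).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(5)
server_addr = server.getsockname()         # the server's (IP, port) socket address

# Client side: the client's own (IP, port) identifies the sending process.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"hello", server_addr)

data, client_addr = server.recvfrom(1024)  # client_addr = client's socket address
server.sendto(data.upper(), client_addr)   # reply goes back to that exact process

reply, _ = client.recvfrom(1024)           # reply == b"HELLO"
client.close()
server.close()
```

Note how delivery to the right *process* (not just the right host) is entirely determined by the port half of each socket address.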
UDP (User Datagram Protocol)
The fields of the UDP header are described below:
o Source port address: It defines the address of the application process that has delivered a
message. The source port address is of 16 bits address.
o Destination port address: It defines the address of the application process that will
receive the message. The destination port address is of a 16-bit address.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
o UDP provides basic functions needed for the end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not specify the
damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has
been lost as it does not contain an ID or sequencing number of a particular data segment.
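The checksum used by UDP (and TCP) is the 16-bit ones'-complement of the ones'-complement sum of the data taken as 16-bit words. A sketch that omits the UDP pseudo-header for brevity:

```python
def ones_complement_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum as used by UDP/TCP.
    (Real UDP also covers a pseudo-header; omitted for brevity.)"""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

csum = ones_complement_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")
# The receiver verifies by checksumming data plus checksum word: result is 0.
```

The receiver repeats the same sum over the datagram including the checksum field; a result of all ones (i.e., a final checksum of 0) means no detected error.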
TCP
o Stream data transfer: TCP transfers data as a contiguous stream of bytes. TCP itself segments this stream, grouping the bytes into TCP segments, and passes them to the IP layer for transmission to the destination.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If ACK is not received within a timeout
interval, then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive
out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating the number of bytes it can receive without overflowing its internal buffer. This is conveyed in the ACK as the highest sequence number it can accept without any problem. The mechanism is referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications and sending it over the same network connection; at the receiving end, delivering the data to the correct application is known as demultiplexing. TCP delivers each packet to the correct application by using logical channels known as ports.
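Demultiplexing can be pictured as a lookup from destination port to the application's receive queue. A toy model (the port numbers and queue structure are illustrative, not a real TCP implementation):

```python
# Toy demultiplexer: destination port -> per-application message queue.
app_queues = {80: [], 25: [], 53: []}     # illustrative well-known ports

def demultiplex(segment):
    """Deliver a (dest_port, payload) segment to the right application."""
    port, payload = segment
    if port in app_queues:
        app_queues[port].append(payload)  # hand the payload up to that process
    # else: no process bound to this port (a real stack would signal an error)

for seg in [(80, "GET /"), (25, "MAIL FROM"), (80, "GET /img")]:
    demultiplex(seg)
# app_queues[80] == ["GET /", "GET /img"]
```

The port field is the only thing the transport layer needs to route an incoming segment to the correct process.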
o Logical Connections: The combination of sockets, sequence numbers, and window sizes,
is called a logical connection. Each connection is identified by the pair of sockets used by
sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data can flow in both directions at the same time. To achieve this, each TCP endpoint has a sending and a receiving buffer so that segments can flow in both directions. TCP is a connection-oriented protocol: before a process A can exchange data with a process B, a connection must first be established between the two sides.
TCP Header
The fields of the TCP header are described below:
o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program in a
destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-
bit sequence number field represents the position of the data in an original data stream.
o Acknowledgement number: The 32-bit acknowledgement number acknowledges data from the other communicating device. When the ACK flag is set to 1, this field contains the sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of the
TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A control
bit defines the use of a segment or serves as a validity check for other fields.
o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH field is used to inform the sender that higher throughput is needed so if
possible, data must be pushed with higher throughput.
o RST: The reset bit is used to reset the TCP connection when confusion arises in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has finished
sending data. It is used in connection termination in three types of segments: termination
request, termination confirmation, and acknowledgement of termination confirmation.
o Window Size: This 16-bit field defines the size of the receive window, i.e., the number of bytes the sender of the segment is currently willing to accept.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, this 16-bit field is an offset from the sequence number that points to the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
Differences b/w TCP & UDP
o TCP is connection-oriented; UDP is connectionless.
o TCP provides reliable, ordered delivery using sequence numbers, acknowledgements, and retransmission; UDP provides best-effort delivery with no sequencing or retransmission.
o TCP performs flow control and congestion control; UDP performs neither.
o The TCP header is 20 to 60 bytes; the UDP header is a fixed 8 bytes.
o UDP's lower overhead makes it faster, which suits applications such as DNS queries and real-time audio/video.
What is Multiplexing?
Multiplexing is a technique used to combine and send multiple data streams over a single medium. The process of combining the data streams is known as multiplexing, and the hardware used for it is known as a multiplexer.
Multiplexing is achieved by using a device called Multiplexer (MUX) that combines n input lines to
generate a single output line. Multiplexing follows many-to-one, i.e., n input lines and one output
line.
Why Multiplexing?
o The transmission medium is used to send a signal from sender to receiver, but the medium can carry only one signal at a time.
o If multiple signals must share one medium, the medium has to be divided so that each signal gets some portion of the available bandwidth. For example, if there are 10 signals and the bandwidth of the medium is 100 units, then each signal gets 10 units.
o When multiple signals share a common medium, there is a possibility of collision; multiplexing is used to avoid such collisions.
o Transmission services are expensive, so letting many signals share a single link reduces cost.
Concept of Multiplexing
o The 'n' input lines are transmitted through a multiplexer and multiplexer combines the
signals to form a composite signal.
o The composite signal is passed through a Demultiplexer and demultiplexer separates a
signal to component signals and transfers them to their respective destinations.
Advantages of Multiplexing:
o More than one signal can be sent over a single medium.
o The bandwidth of the medium is utilized effectively.
Multiplexing Techniques
Multiplexing techniques can be classified as:
Frequency-division Multiplexing (FDM)
o It is an analog technique.
o Frequency Division Multiplexing is a technique in which the available bandwidth of a
single transmission medium is subdivided into several channels.
o A single transmission medium is subdivided into several frequency channels, and each channel is assigned to a different device; for example, device 1 might be given the channel covering frequencies 1 to 5.
o The input signals are translated into frequency bands by using modulation techniques,
and they are combined by a multiplexer to form a composite signal.
o The main aim of the FDM is to subdivide the available bandwidth into different frequency
channels and allocate them to different devices.
o Using the modulation technique, the input signals are transmitted into frequency bands
and then combined to form a composite signal.
o The carriers which are used for modulating the signals are known as sub-carriers. They
are represented as f1,f2..fn.
o FDM is mainly used in radio broadcasts and TV networks.
Advantages Of FDM:
o A large number of signals (channels) can be transmitted simultaneously.
o It does not require synchronization between the sender and the receiver.
Disadvantages Of FDM:
o It is suitable only when the signals are analog and of low speed.
o Crosstalk can occur between adjacent channels, and the guard bands needed to prevent it waste some bandwidth.
Applications Of FDM:
o FDM is used in AM and FM radio broadcasting and in television broadcasting.
Time-division Multiplexing (TDM)
o It is a digital technique.
o In the Frequency Division Multiplexing technique, all signals operate at the same time at different frequencies, whereas in the Time Division Multiplexing technique, all signals operate at the same frequency in different time slots.
o In the Time Division Multiplexing technique, the total time available on the channel is distributed among the different users. Each user is therefore allocated a different time interval, known as a time slot, during which that user's data is transmitted.
o A user takes control of the channel for a fixed amount of time.
o In Time Division Multiplexing technique, data is not transmitted simultaneously rather the
data is transmitted one-by-one.
o In TDM, the signal is transmitted in the form of frames. Each frame contains a cycle of time slots, with one or more slots dedicated to each user.
o It can be used to multiplex both digital and analog signals but mainly used to multiplex
digital signals.
o Synchronous TDM
o Asynchronous TDM
Synchronous TDM
o In Synchronous TDM, each device is given a time slot in every frame whether or not it has data to send. The capacity of the channel is therefore not fully utilized, because empty slots containing no data are still transmitted: a first frame may be completely filled while later frames contain empty slots, so the channel capacity is used inefficiently.
o The speed of the transmission medium should be greater than the total speed of the input
lines. An alternative approach to the Synchronous TDM is Asynchronous Time Division
Multiplexing.
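The fixed-slot behavior of Synchronous TDM can be modeled in a few lines: each frame carries exactly one slot per input line, and a line with no data still consumes its slot (the data values below are illustrative):

```python
def synchronous_tdm(inputs, n_frames):
    """Build synchronous TDM frames: one slot per input line per frame.
    Empty slots (None) are transmitted even when a line has no data."""
    frames = []
    for f in range(n_frames):
        frame = []
        for line in inputs:
            frame.append(line[f] if f < len(line) else None)  # empty slot if idle
        frames.append(frame)
    return frames

# Three input lines with unequal amounts of data.
frames = synchronous_tdm([["A1", "A2"], ["B1"], ["C1", "C2"]], 2)
# frames == [['A1', 'B1', 'C1'], ['A2', None, 'C2']]
```

The `None` in the second frame is exactly the wasted capacity that Asynchronous TDM eliminates.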
Asynchronous TDM
o The difference between Asynchronous TDM and Synchronous TDM is that many slots in
Synchronous TDM are unutilized, but in Asynchronous TDM, slots are fully utilized. This
leads to the smaller transmission time and efficient utilization of the capacity of the
channel.
o In Synchronous TDM, if there are n sending devices, then there are n time slots. In
Asynchronous TDM, if there are n sending devices, then there are m time slots where m is
less than n (m<n).
o The number of slots in a frame depends on the statistical analysis of the number of input
lines.
For example, if there are 4 devices but only two of them, say A and C, are sending data, then only the data of A and C is transmitted on the line. Because slots are no longer dedicated to particular devices, the data part of each slot carries an address that identifies the source of the data.
Connection Management :-
Connection Establishment:
Three-Way Handshake (TCP): In TCP, connection establishment involves a three-way
handshake process. The initiating host (client) sends a SYN (synchronize) packet to the
receiving host (server). The server responds with a SYN-ACK (synchronize-
acknowledgment) packet to acknowledge the SYN and indicate its readiness to establish a
connection. Finally, the client sends an ACK (acknowledgment) packet to acknowledge the
server's SYN-ACK, completing the handshake and establishing the connection.
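The three-way handshake can be sketched as a toy trace; the initial sequence numbers (100, 300) are illustrative small integers, not real randomized ISNs:

```python
def three_way_handshake(client_isn=100, server_isn=300):
    """Toy trace of the TCP three-way handshake as (flags, seq, ack) tuples."""
    log = []
    # 1. Client -> Server: SYN, seq = client ISN
    log.append(("SYN", client_isn, None))
    # 2. Server -> Client: SYN-ACK, seq = server ISN, ack = client ISN + 1
    log.append(("SYN-ACK", server_isn, client_isn + 1))
    # 3. Client -> Server: ACK, seq = client ISN + 1, ack = server ISN + 1
    log.append(("ACK", client_isn + 1, server_isn + 1))
    return log   # connection is now ESTABLISHED on both sides

log = three_way_handshake()
```

Note that each side acknowledges the other's initial sequence number plus one: the SYN flag itself consumes one sequence number.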
Connectionless Protocols (UDP): Connectionless protocols like UDP do not require a
connection establishment phase. Communication sessions are established implicitly when
data is sent from one host to another. However, UDP does not provide mechanisms for
ensuring reliable delivery or maintaining connection state.
Connection Maintenance:
Keep-Alive Mechanisms: TCP includes keep-alive mechanisms to maintain the
connection state by periodically exchanging small packets (keep-alive probes) between
the client and server. This helps detect inactive or idle connections and ensures they are
not prematurely terminated by intermediate network devices (e.g., routers, firewalls).
Acknowledgment and Retransmission (TCP): During data transfer, TCP uses
acknowledgment and retransmission mechanisms to ensure reliable delivery of data. The
sender waits for acknowledgment packets from the receiver and retransmits any
unacknowledged data after a timeout period.
Connection Termination:
Four-Way Handshake (TCP): In TCP, connection termination involves a four-way
handshake process. The initiating host (client) sends a FIN (finish) packet to indicate its
desire to terminate the connection. The receiving host (server) acknowledges the FIN with
an ACK packet. The server then sends its own FIN packet to initiate the termination of its
end of the connection. Finally, the client acknowledges the server's FIN with an ACK
packet, completing the termination process.
Connectionless Protocols (UDP): Since UDP does not maintain connection state, there is
no explicit connection termination phase. Communication sessions are terminated
implicitly when data exchange is complete, and hosts stop sending or receiving data.
Resource Management:
Connection Tracking: Network devices such as routers, firewalls, and load balancers may
maintain connection state information to facilitate packet forwarding, filtering, or load
distribution.
Connection Limits: Some network devices enforce limits on the number of concurrent
connections to prevent resource exhaustion or denial-of-service (DoS) attacks.
Overall, connection management plays a critical role in ensuring reliable, orderly, and efficient
communication between network hosts, enabling various applications and services to interact
seamlessly across the network.
Flow Control :-
Flow control in the transport layer is a mechanism used to manage the rate of data
transmission between two communicating hosts to ensure that the sender does not
overwhelm the receiver with data. It is essential for preventing packet loss, buffer
overflow, and congestion in the network. Flow control is primarily implemented in
protocols like TCP (Transmission Control Protocol) within the TCP/IP suite. Here's
how flow control works in the transport layer:
Receiver Window: In TCP, flow control is achieved through the use of a sliding
window mechanism. The receiver maintains a buffer called the receiver window,
which represents the amount of data it is willing to accept from the sender at any
given time. The size of the receiver window is dynamically adjusted based on factors
such as available buffer space and the receiver's processing capacity.
Advertised Window: The receiver informs the sender about the size of its receiver
window by including this information in TCP segments sent back to the sender. This
value is known as the advertised window or receive window size.
Sender Behavior:
The sender keeps track of the receiver's advertised window size.
The sender limits the amount of data it sends to the receiver based on the size of the
receiver window.
The sender adjusts its transmission rate dynamically to match the receiver's ability to process
incoming data.
If the sender's data transmission rate exceeds the receiver's advertised window size, the
sender must pause or slow down transmission until the receiver advertises a larger window
size.
Window Scaling: To support high-speed networks and large buffer sizes, TCP
includes an option called window scaling, which allows the receiver to advertise
larger window sizes by scaling up the value in the TCP header.
Congestion Avoidance: Flow control mechanisms work in conjunction with
congestion control mechanisms to ensure efficient and fair utilization of network
resources. While flow control regulates the rate of data transmission between sender
and receiver, congestion control regulates the overall rate of data transmission in the
network to prevent congestion and ensure network stability.
Overall, flow control in the transport layer ensures smooth and efficient data transfer
between communicating hosts by preventing the sender from overwhelming the
receiver with data, thereby preventing packet loss and network congestion. It plays a
crucial role in achieving reliable and efficient communication in TCP/IP networks.
Retransmission :-
Acknowledgment and Timeout: When the sender sends data packets to the receiver,
it expects to receive acknowledgment (ACK) packets from the receiver indicating
successful receipt of the data. If the sender does not receive an ACK for a certain
period (known as the retransmission timeout), it assumes that the packet was lost or
damaged in transit.
Retransmission Timer: The sender maintains a retransmission timer for each data
packet it sends. If an ACK is not received within the timeout period, the sender
retransmits the packet.
Selective Retransmission: TCP uses selective retransmission, meaning that only the
lost or damaged packets are retransmitted, rather than resending the entire data
stream. This approach minimizes unnecessary retransmissions and reduces network
congestion.
Fast Retransmit: In addition to waiting for the retransmission timer to expire, TCP
also employs a fast retransmit mechanism. If the sender receives duplicate ACKs
(indicating that a packet was received out of order), it assumes that the next packet in
sequence was lost and immediately retransmits it without waiting for the
retransmission timer to expire.
Congestion Control: TCP's congestion control mechanisms work in conjunction with
retransmission to adapt the transmission rate based on network conditions. When
packet loss is detected, TCP reduces its transmission rate to alleviate network
congestion and minimize the likelihood of further packet loss.
Exponential Backoff: To avoid congestion collapse and further network congestion,
TCP employs an exponential backoff algorithm when retransmitting packets. After a
certain number of retransmissions without success, the sender doubles the
retransmission timeout period, exponentially increasing the time between
retransmissions.
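The timeout-and-backoff loop can be sketched as follows (the RTO values and the give-up policy are illustrative, not TCP's actual timer algorithm):

```python
def retransmit_with_backoff(initial_rto, deliveries):
    """Toy retransmission loop: `deliveries` says whether each
    transmission attempt's ACK arrives within the RTO. On each
    timeout the RTO doubles (exponential backoff)."""
    rto = initial_rto
    attempts = []                    # RTO in effect for each attempt
    for acked in deliveries:
        attempts.append(rto)
        if acked:
            return attempts, True    # ACK received within the RTO
        rto *= 2                     # back off before retransmitting
    return attempts, False           # all attempts timed out: give up

# First two attempts time out, the third succeeds.
attempts, ok = retransmit_with_backoff(1.0, [False, False, True])
# attempts == [1.0, 2.0, 4.0], ok == True
```

Doubling the RTO after each failure spaces retransmissions further and further apart, which is what prevents a congested network from being flooded with retries.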
Overall, retransmission in the transport layer, particularly in TCP, plays a crucial role
in ensuring reliable data delivery in the face of network errors and congestion. By
retransmitting lost or damaged packets and adapting transmission rates based on
network conditions, TCP provides a robust mechanism for end-to-end data
communication in computer networks.
Window Management :-
Window management in the transport layer, especially in protocols like TCP
(Transmission Control Protocol), involves the management of sliding windows used
for flow control and congestion control. Here's an explanation of window
management in the transport layer:
Sliding Window Protocol:
TCP uses a sliding window protocol to manage the flow of data between the sender and
receiver efficiently.
The sliding window represents the range of sequence numbers of the data that the sender
can transmit and the receiver can accept.
Sender's Window:
The sender maintains a sending window that specifies the sequence numbers of the packets
it can transmit without waiting for acknowledgment.
As the sender sends data packets, it advances the sending window based on the
acknowledgment received from the receiver.
Receiver's Window:
The receiver maintains a receiving window that specifies the sequence numbers of the
packets it can accept.
As the receiver receives data packets in order, it advances the receiving window and
acknowledges the receipt of the packets.
Window Size:
The window size determines the number of packets that can be sent or received without
acknowledgment.
It is dynamically adjusted based on factors such as available buffer space, network
conditions, and congestion level.
Flow Control:
Window management is essential for flow control, ensuring that the sender does not
overwhelm the receiver with data.
The receiver advertises its window size to the sender through TCP segments, indicating the
amount of buffer space available for incoming data.
The sender adjusts its transmission rate based on the receiver's advertised window size,
ensuring that it does not exceed the receiver's capacity.
Congestion Control:
Window management also plays a role in congestion control, helping to prevent network
congestion and packet loss.
TCP's congestion control mechanisms adjust the window size dynamically based on network
conditions, such as packet loss and round-trip time.
If congestion is detected, TCP reduces the window size to alleviate congestion and prevent
further packet loss.
Window Scaling:
To support high-speed networks and large window sizes, TCP includes a window scaling
option that allows the sender and receiver to scale up the window size beyond the
limitations of the 16-bit window field in the TCP header.
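The scaling itself is just a left shift of the 16-bit window field by the shift count negotiated at connection setup (at most 14, per RFC 7323):

```python
def effective_window(window_field, scale):
    """Window scaling: the 16-bit window field is left-shifted by the
    scale factor negotiated in the SYN segments (RFC 7323, max 14)."""
    assert 0 <= window_field <= 0xFFFF
    assert 0 <= scale <= 14
    return window_field << scale

# Without scaling the window tops out at 65535 bytes;
# with a scale factor of 7, the same field can advertise ~8 MB.
w = effective_window(0xFFFF, 7)   # 65535 << 7 == 8388480 bytes
```

The shift count is exchanged only in the SYN segments; afterwards both sides silently multiply every advertised window by 2^scale.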
Overall, window management in the transport layer, particularly in TCP, is crucial for
efficient and reliable data transmission, enabling optimal flow control and congestion
control in computer networks.
Slow Start Phase
Exponential increment: In this phase, after every RTT the congestion window size increases exponentially.
Example:- If the initial congestion window size is 1 segment, and the first segment
is successfully acknowledged, the congestion window size becomes 2 segments. If
the next transmission is also acknowledged, the congestion window size doubles to
4 segments. This exponential growth continues as long as all segments are
successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
Additive increment: This phase starts once the congestion window reaches the slow-start threshold, denoted ssthresh. The size of cwnd (the congestion window) now increases additively: after each RTT, cwnd = cwnd + 1.
Example:- if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be
increased to 21 segments in the next RTT. If all 21 segments are again successfully
acknowledged, the congestion window size would be increased to 22 segments, and
so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
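The two phases can be simulated together; here ssthresh and the number of RTTs are arbitrary example values:

```python
def cwnd_growth(rtts, ssthresh, initial_cwnd=1):
    """Toy TCP congestion-window growth (in segments): exponential
    while below ssthresh (slow start), then additive (congestion
    avoidance). Loss events are not modeled."""
    cwnd = initial_cwnd
    history = [cwnd]
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: double per RTT
        else:
            cwnd += 1                       # congestion avoidance: +1 per RTT
        history.append(cwnd)
    return history

# ssthresh = 8: cwnd doubles 1 -> 2 -> 4 -> 8, then grows by 1 per RTT.
history = cwnd_growth(6, ssthresh=8)
# history == [1, 2, 4, 8, 9, 10, 11]
```

The kink at ssthresh is the handover from the exponential-increment phase to the additive-increment phase described above.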
Quality of Service (QoS) :-
Key Aspects:
Traffic Classification: QoS mechanisms classify traffic into different classes or
priority levels based on predefined criteria such as application type, destination, or
service level agreements (SLAs). For example, voice and video traffic may be
classified as high priority, while bulk data transfer may be classified as low priority.
Traffic Policing and Shaping: QoS mechanisms enforce traffic policies to control
the flow of data and ensure that traffic conforms to predefined QoS parameters.
Traffic policing involves dropping or marking packets that exceed specified rate
limits, while traffic shaping involves buffering and delaying packets to smooth out
traffic flows and ensure compliance with QoS requirements.
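A classic mechanism behind policing is the token bucket; this sketch drops non-conforming packets (policing) rather than queueing and delaying them (shaping). The rate and sizes are illustrative:

```python
class TokenBucket:
    """Toy token-bucket policer: packets conforming to the configured
    rate pass; excess packets are dropped (or would be marked)."""
    def __init__(self, rate, burst):
        self.rate = rate          # tokens (bytes) added per second
        self.tokens = burst       # bucket starts full
        self.burst = burst        # bucket capacity (max burst size)

    def allow(self, packet_size, elapsed):
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.burst, self.tokens + self.rate * elapsed)
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True           # conforming: forward the packet
        return False              # non-conforming: drop

tb = TokenBucket(rate=100, burst=200)            # 100 B/s, 200 B burst
results = [tb.allow(150, 0), tb.allow(150, 0), tb.allow(150, 1.5)]
# results == [True, False, True]: the back-to-back second packet
# exceeds the burst allowance; after 1.5 s the bucket has refilled.
```

A shaper would use the same bucket but hold the second packet in a queue until enough tokens accumulated, smoothing the flow instead of dropping it.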
Traffic Prioritization: QoS mechanisms prioritize traffic based on its class or
priority level to ensure that high-priority traffic receives preferential treatment over
lower-priority traffic. This may involve giving high-priority traffic access to network
resources during times of congestion or limiting the impact of lower-priority traffic on
network performance.
Resource Reservation: QoS mechanisms can allocate and reserve network resources
such as bandwidth, buffer space, and processing capacity for specific traffic classes or
flows. This ensures that critical applications receive the necessary resources to meet
their performance requirements, even during periods of network congestion.
Traffic Queuing and Scheduling: QoS mechanisms use queuing and scheduling
algorithms to manage the order in which packets are transmitted from the network
buffers. Priority queuing, weighted fair queuing, and class-based queuing are
examples of queuing algorithms used to prioritize traffic based on QoS parameters.
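Strict priority queuing, the simplest of these algorithms, can be sketched with a heap; the flow names and priority values below are illustrative:

```python
import heapq

def priority_schedule(packets):
    """Toy strict-priority scheduler: lower number = higher priority.
    Queued packets are transmitted highest-priority first; ties are
    served in arrival (FIFO) order."""
    heap = []
    for order, (prio, name) in enumerate(packets):
        heapq.heappush(heap, (prio, order, name))  # arrival order breaks ties
    out = []
    while heap:
        _, _, name = heapq.heappop(heap)
        out.append(name)
    return out

# Voice (priority 0) jumps ahead of bulk FTP (priority 2) despite arriving later.
order = priority_schedule(
    [(2, "ftp1"), (0, "voice1"), (1, "web1"), (2, "ftp2"), (0, "voice2")])
# order == ['voice1', 'voice2', 'web1', 'ftp1', 'ftp2']
```

Strict priority can starve low-priority traffic under sustained high-priority load, which is why weighted fair queuing and class-based queuing exist as alternatives.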
Congestion Avoidance and Management: QoS mechanisms implement congestion
avoidance and management techniques to prevent network congestion and mitigate its
effects. This may include dynamically adjusting traffic rates, implementing flow
control mechanisms, and signaling congestion to network devices and endpoints.
End-to-End QoS Guarantees: QoS mechanisms provide end-to-end QoS guarantees
by coordinating QoS policies and mechanisms across multiple network segments and
devices. This ensures that QoS requirements are met consistently throughout the
network path, from source to destination.
Overall, QoS in the transport layer plays a critical role in ensuring that network
resources are allocated efficiently and fairly, and that applications and services can
meet their performance requirements in diverse and dynamic network environments.