
Transport Layer

o The transport layer is the fourth layer from the top of the OSI model (layer 4).


o The main role of the transport layer is to provide the communication services directly to
the application processes running on different hosts.
o The transport layer provides a logical communication between application processes
running on different hosts. Although the application processes on different hosts are not
physically connected, application processes use the logical communication provided by
the transport layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the network
routers.
o A computer network provides more than one protocol to the network applications. For
example, TCP and UDP are two transport layer protocols that provide a different set of
services to the application layer.
o All transport layer protocols provide a multiplexing/demultiplexing service. A transport
protocol may also provide other services such as reliable data transfer, bandwidth
guarantees, and delay guarantees.
o Each of the applications in the application layer has the ability to send a message by using
TCP or UDP. The application communicates by using either of these two protocols. Both
TCP and UDP will then communicate with the internet protocol in the internet layer. The
applications can read and write to the transport layer. Therefore, we can say that
communication is a two-way process.

Services provided by the Transport Layer


The services provided by the transport layer are similar to those of the data link layer. The data link
layer provides the services within a single network while the transport layer provides the services
across an internetwork made up of many networks. The data link layer controls the physical layer
while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into five categories:

o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:

The transport layer transmits the entire message to the destination. Therefore, it ensures the end-
to-end delivery of an entire message from a source to the destination.

Reliable delivery:

The transport layer provides reliability services by retransmitting the lost and damaged packets.

Reliable delivery has four aspects:

o Error control
o Sequence control
o Loss control
o Duplication control
Error Control

o The primary role of reliability is error control. In reality, no transmission is 100
percent error-free. Therefore, transport layer protocols are designed to detect and
recover from errors so that delivery appears error-free.
o The data link layer also provides the error handling mechanism, but it ensures only node-
to-node error-free delivery. However, node-to-node reliability does not ensure the end-
to-end reliability.
o The data link layer checks for errors on each individual link. If an error is introduced
inside one of the routers, it will not be caught by the data link layer, which only
detects errors introduced between the two ends of a single link. Therefore, the transport
layer checks for errors end-to-end to ensure that the packet has arrived correctly.
Sequence Control

o The second aspect of the reliability is sequence control which is implemented at the
transport layer.
o On the sending end, the transport layer is responsible for ensuring that the units of data
received from the upper layers are usable by the lower layers. On the receiving end, it
ensures that the various segments of a transmission can be correctly reassembled.

Loss Control

Loss Control is the third aspect of reliability. The transport layer ensures that all the fragments of a
transmission arrive at the destination, not just some of them. On the sending end, all the fragments of
a transmission are given sequence numbers by the transport layer. These sequence numbers allow the
receiver's transport layer to identify the missing segments.

Duplication Control

Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no
duplicate data arrive at the destination. Just as sequence numbers are used to identify lost packets,
they also allow the receiver to identify and discard duplicate segments.
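These four aspects can be illustrated with a small sketch (the segment numbering and payloads are made up; this is not a real TCP implementation): sequence numbers let the receiver reorder segments, spot a missing one, and discard duplicates.

```python
def receive(segments):
    """Reassemble segments given as (sequence_number, data) pairs.

    Returns the in-order data (sequence control), the sequence
    numbers that never arrived (loss control), and those that
    arrived more than once (duplication control).
    """
    seen = {}
    duplicates = []
    for seq, data in segments:
        if seq in seen:
            duplicates.append(seq)   # duplicate segment: discard it
        else:
            seen[seq] = data         # keep for reordering
    expected = range(min(seen), max(seen) + 1)
    missing = [s for s in expected if s not in seen]
    ordered = [seen[s] for s in sorted(seen)]
    return ordered, missing, duplicates

# Segments arrive out of order, one is missing (2), one duplicated (3).
data, missing, dups = receive([(1, "A"), (3, "C"), (3, "C"), (4, "D")])
```

Here the receiver would deliver A, C, D in order, request retransmission of segment 2, and silently drop the second copy of segment 3.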

Flow Control

Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is
overloaded with too much data, it discards packets and asks for their retransmission. This
increases network congestion and thus reduces system performance. The transport layer is
responsible for flow control. It uses the sliding window protocol, which makes data transmission
more efficient and controls the flow of data so that the receiver does not become overwhelmed.
The sliding window protocol is byte-oriented rather than frame-oriented.

Multiplexing

The transport layer uses multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:

o Upward multiplexing: Upward multiplexing means multiple transport layer connections
use the same network connection. To make transmission more cost-effective, the transport
layer sends several transmissions bound for the same destination along the same path;
this is achieved through upward multiplexing.
o Downward multiplexing: Downward multiplexing means one transport layer connection
uses multiple network connections. Downward multiplexing allows the transport layer
to split a connection among several paths to improve the throughput. This type of
multiplexing is used when the available networks have low or slow capacity.

Addressing
o According to the layered model, the transport layer interacts with the functions of the
session layer. Many protocols combine session, presentation, and application layer
protocols into a single layer known as the application layer. In these cases, delivery to the
session layer means the delivery to the application layer. Data generated by an application
on one machine must be transmitted to the correct application on another machine. In
this case, addressing is provided by the transport layer.
o The transport layer provides the user address which is specified as a station or port. The
port variable represents a particular TS user of a specified station known as a Transport
Service access point (TSAP). Each station has only one transport entity.
o The transport layer protocols need to know which upper-layer protocols are
communicating.

Process-to-Process Delivery
Process-to-Process Delivery: A transport-layer protocol's first task is to perform process-to-process
delivery. A process is an entity of the application layer that uses the services of the
transport layer. Two processes typically communicate in a client/server relationship.
Client/Server Paradigm
There are many ways to achieve process-to-process communication, and the most
common is the client/server paradigm. A process on the local host that requests a service is
called a client; a process on the remote host that provides that service is called a server.
The combination of an IP address and a port number is called a socket address, and that
address identifies both a host and a process on it.
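As a concrete sketch, a socket address is just an (IP address, port) pair; the addresses below are illustrative documentation addresses, not real hosts:

```python
# Illustrative, non-routable documentation addresses (RFC 5737 range).
client = ("192.0.2.10", 52000)   # client host, ephemeral port
server = ("192.0.2.99", 80)      # server host, well-known HTTP port

# Each (IP address, port) pair is one socket address: it pins down a
# single process on a single host.
client_socket_address = client
server_socket_address = server

# A connection is fully defined by the pair of socket addresses.
connection = (client_socket_address, server_socket_address)
```

The IP address selects the host, and the 16-bit port number selects the process on that host.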

Transport Layer protocols


o The transport layer is represented by two protocols: TCP and UDP.
o The IP protocol in the network layer delivers a datagram from a source host to the
destination host.
o Nowadays, operating systems support multiuser and multiprocessing environments; an
executing program is called a process. When a host sends a message to another host, the
source process is actually sending a message to a destination process. The transport
layer protocols assign connections to individual ports known as protocol ports.
o IP is a host-to-host protocol used to deliver a packet from a source host to a
destination host, while transport layer protocols are port-to-port protocols that work on
top of IP to deliver the packet from the originating port to the destination port.
o Each port is identified by a positive integer address, which is 16 bits long.
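Port-to-port delivery can be sketched as a lookup from destination port to the listening process (the port-to-process table here is hypothetical):

```python
# Hypothetical table of listening processes, keyed by local port.
listeners = {80: "web_server", 53: "dns_server", 25: "mail_server"}

def demultiplex(destination_port):
    """Hand an arriving segment to the process bound to its port;
    segments for ports nobody listens on are discarded."""
    return listeners.get(destination_port, "no listener: discard")

handler = demultiplex(53)        # delivered to the DNS process
dropped = demultiplex(9999)      # no process bound: discarded
```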
UDP

o UDP stands for User Datagram Protocol.


o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format

The user datagram has an 8-byte header, which is shown below:

Where,

o Source port address: It defines the address of the application process that has sent the
message. The source port address is a 16-bit field.
o Destination port address: It defines the address of the application process that will
receive the message. The destination port address is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field used in error detection.
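The four 16-bit fields can be packed into the 8-byte header with Python's struct module (the port numbers and payload are illustrative):

```python
import struct

payload = b"hello"
source_port, dest_port = 5000, 53       # illustrative port numbers
length = 8 + len(payload)               # header (8 bytes) + data
checksum = 0                            # 0 means "checksum not computed"

# Four 16-bit fields, packed big-endian (network byte order).
header = struct.pack("!HHHH", source_port, dest_port, length, checksum)
datagram = header + payload

# Parsing the first 8 bytes recovers the same fields.
sp, dp, ln, ck = struct.unpack("!HHHH", datagram[:8])
```

Note that for UDP over IPv4 a checksum value of zero means the sender did not compute one.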

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not specify the
damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has
been lost as it does not contain an ID or sequencing number of a particular data segment.
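The checksum that lets UDP detect (but not repair) corruption is the standard Internet checksum: a 16-bit one's-complement sum, folded and complemented. A sketch in the style of RFC 1071, over an arbitrary byte string:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement 16-bit checksum (RFC 1071 style)."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:           # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A datagram carrying this value in its checksum field sums to zero
# on arrival; a nonzero result tells the receiver it was corrupted.
segment = bytes(range(8))                      # illustrative bytes
check = internet_checksum(segment)
verified = internet_checksum(segment + check.to_bytes(2, "big")) == 0
```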

TCP

o TCP stands for Transmission Control Protocol.


o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning a connection is established between both
ends before transmission. To create the connection, TCP generates a virtual circuit
between sender and receiver for the duration of a transmission.

Features Of TCP protocol

o Stream data transfer: TCP transfers data in the form of a contiguous stream of
bytes. TCP groups the bytes into TCP segments and then passes them to the IP layer
for transmission to the destination. TCP itself segments the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If ACK is not received within a timeout
interval, then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive
out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer. The
number of bytes is sent in the ACK in the form of the highest sequence number that it can
receive without any problem. This mechanism is also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications
and forwarding it to applications on different computers. At the receiving end,
the data is forwarded to the correct application; this reverse process is known as
demultiplexing. TCP delivers a segment to the correct application by using logical
channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window sizes,
is called a logical connection. Each connection is identified by the pair of sockets used by
sending and receiving processes.
o Full Duplex: TCP provides full duplex service, i.e., data flows in both directions at
the same time. To achieve full duplex service, each TCP endpoint must have sending and
receiving buffers so that segments can flow in both directions. TCP is a
connection-oriented protocol. Suppose process A wants to send and receive data
from process B. The following steps occur:

o Establish a connection between two TCPs.


o Data is exchanged in both the directions.
o The Connection is terminated.
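The stream-transfer feature described above can be sketched as chopping an application byte stream into segments of at most MSS bytes, each tagged with the sequence number of its first byte (the MSS value is illustrative):

```python
def segment(stream: bytes, mss: int):
    """Split an application byte stream into TCP-style segments,
    each tagged with the sequence number of its first byte."""
    return [(i, stream[i:i + mss]) for i in range(0, len(stream), mss)]

segments = segment(b"abcdefghij", mss=4)
# [(0, b"abcd"), (4, b"efgh"), (8, b"ij")]
```

The per-byte sequence numbering is what lets the receiver reassemble the stream even if segments arrive out of order.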
TCP Segment Format

Where,

o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program in a
destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit
sequence number field gives the position of the segment's first data byte in the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number field acknowledges data
received from the other communicating device. If the ACK flag is set to 1, this field
specifies the sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of the
TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A control
bit defines the use of a segment or serves as a validity check for other fields.

In total there are six flags in the control field:

o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) field asks the receiving TCP to deliver the data to the application
promptly rather than waiting to fill its buffer.
o RST: The reset bit is used to reset the TCP connection whenever confusion arises in the
sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has finished
sending data. It is used in connection termination in three types of segments: termination
request, termination confirmation, and acknowledgement of termination confirmation.

o Window Size: This 16-bit field defines the size of the receive window, i.e., the number of
bytes the sender of this segment is currently willing to accept.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, this 16-bit field is an offset from the
sequence number pointing to the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
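As a sketch of this layout, the fixed 20-byte header can be packed and unpacked with Python's struct module (the port, sequence, and window values are illustrative; options and checksum computation are omitted):

```python
import struct

# The six control flags occupy the low bits of byte 13.
FIN, SYN, RST, PSH, ACK, URG = 1, 2, 4, 8, 16, 32

hlen_and_reserved = 5 << 4          # HLEN = 5 words (no options)
flags = SYN | ACK                   # e.g. the handshake's second segment

# 20-byte fixed header, big-endian: ports, seq, ack, HLEN/flags,
# window, checksum, urgent pointer (checksum left as 0 here).
header = struct.pack("!HHLLBBHHH",
                     80, 52000,     # source and destination ports
                     1000, 2001,    # sequence and acknowledgement numbers
                     hlen_and_reserved, flags,
                     65535, 0, 0)   # window size, checksum, urgent pointer

sport, dport, seq, ack, hl, fl, win, ck, urg = struct.unpack(
    "!HHLLBBHHH", header)
header_bytes = (hl >> 4) * 4        # HLEN is in 32-bit words
```

With HLEN = 5 the header is 5 × 4 = 20 bytes, the minimum size given in the text.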
Differences b/w TCP & UDP

o Definition: TCP establishes a virtual circuit before transmitting the data; UDP transmits
the data directly to the destination computer without verifying whether the receiver is
ready to receive it or not.
o Connection type: TCP is a connection-oriented protocol; UDP is a connectionless protocol.
o Speed: TCP is slow; UDP is fast.
o Reliability: TCP is a reliable protocol; UDP is an unreliable protocol.
o Header size: The TCP header is 20 bytes; the UDP header is 8 bytes.
o Acknowledgement: TCP waits for acknowledgement of data and can resend lost packets;
UDP neither takes acknowledgement nor retransmits a damaged frame.

What is Multiplexing?
Multiplexing is a technique used to combine and send multiple data streams over a single
medium. The process of combining the data streams is known as multiplexing, and the hardware
used for multiplexing is known as a multiplexer.

Multiplexing is achieved by using a device called Multiplexer (MUX) that combines n input lines to
generate a single output line. Multiplexing follows many-to-one, i.e., n input lines and one output
line.

Demultiplexing is achieved by using a device called a demultiplexer (DEMUX), available at the
receiving end. DEMUX separates a signal into its component signals (one input and n outputs).
Therefore, we can say that demultiplexing follows the one-to-many approach.

Why Multiplexing?

o The transmission medium is used to send the signal from sender to receiver. Without
multiplexing, the medium can carry only one signal at a time.
o If multiple signals are to share one medium, the medium must be divided in such a way
that each signal is given some portion of the available bandwidth. For example, if there
are 10 signals and the bandwidth of the medium is 100 units, then each signal gets 10
units.
o When multiple signals share a common medium, there is a possibility of collision. The
multiplexing concept is used to avoid such collisions.
o Transmission services are very expensive, so sharing a link among many signals reduces cost.

History of Multiplexing

o The multiplexing technique is widely used in telecommunications, in which several telephone
calls are carried through a single wire.
o Multiplexing originated in telegraphy in the early 1870s and is now widely used in
communication.
o George Owen Squier developed the telephone carrier multiplexing in 1910.

Concept of Multiplexing

o The 'n' input lines are fed into a multiplexer, and the multiplexer combines the
signals to form a composite signal.
o The composite signal is passed through a demultiplexer, which separates the signal
into its component signals and transfers them to their respective destinations.

Advantages of Multiplexing:

o More than one signal can be sent over a single medium.


o The bandwidth of a medium can be utilized effectively.

Multiplexing Techniques
Multiplexing techniques can be classified as:
Frequency-division Multiplexing (FDM)

o It is an analog technique.
o Frequency Division Multiplexing is a technique in which the available bandwidth of a
single transmission medium is subdivided into several channels.

o In the above diagram, a single transmission medium is subdivided into several frequency
channels, and each frequency channel is given to different devices. Device 1 has a
frequency channel of range from 1 to 5.
o The input signals are translated into frequency bands by using modulation techniques,
and they are combined by a multiplexer to form a composite signal.
o The main aim of FDM is to subdivide the available bandwidth into different frequency
channels and allocate them to different devices.
o The carriers used for modulating the signals are known as sub-carriers. They are
represented as f1, f2, ..., fn.
o FDM is mainly used in radio broadcasts and TV networks.
Advantages Of FDM:

o FDM is used for analog signals.


o The FDM process is simple, and its modulation is easy.
o A large number of signals can be sent through FDM simultaneously.
o It does not require any synchronization between sender and receiver.

Disadvantages Of FDM:

o FDM technique is used only when low-speed channels are required.


o It suffers from the problem of crosstalk.
o A large number of modulators are required.
o It requires a high bandwidth channel.

Applications Of FDM:

o FDM is commonly used in TV networks.


o It is used in FM and AM broadcasting. Each FM radio station has different frequencies, and
they are multiplexed to form a composite signal. The multiplexed signal is transmitted in
the air.

Wavelength Division Multiplexing (WDM)


o Wavelength Division Multiplexing is the same as FDM except that the signals are optical
and are transmitted through fibre optic cable.
o WDM is used on fibre optics to increase the capacity of a single fibre.
o It is used to utilize the high data rate capability of fibre optic cable.
o It is an analog multiplexing technique.
o Optical signals from different sources are combined to form a wider band of light with the
help of a multiplexer.
o At the receiving end, demultiplexer separates the signals to transmit them to their
respective destinations.
o Multiplexing and Demultiplexing can be achieved by using a prism.
o A prism can perform the role of a multiplexer by combining the various optical signals into
a composite signal, and the composite signal is transmitted through a fibre optic cable.
o A prism can also perform the reverse operation, i.e., demultiplexing the signal.

Time Division Multiplexing

o It is a digital technique.
o In the Frequency Division Multiplexing technique, all signals operate at the same time on
different frequencies, but in the Time Division Multiplexing technique, all signals operate
at the same frequency at different times.
o In the Time Division Multiplexing technique, the total time available on the channel is
distributed among the different users. Each user is therefore allocated a different time
interval, known as a time slot, during which it transmits its data.
o A user takes control of the channel for a fixed amount of time.
o In the Time Division Multiplexing technique, data is not transmitted simultaneously;
rather, it is transmitted one by one.
o In TDM, the signal is transmitted in the form of frames. Frames contain a cycle of time
slots in which each frame contains one or more slots dedicated to each user.
o It can be used to multiplex both digital and analog signals but is mainly used to multiplex
digital signals.

There are two types of TDM:

o Synchronous TDM
o Asynchronous TDM

Synchronous TDM

o Synchronous TDM is a technique in which a time slot is preassigned to every device.


o In Synchronous TDM, each device is given its time slot whether or not it has data to
send.
o If the device does not have any data, then its slot is transmitted empty.
o In Synchronous TDM, signals are sent in the form of frames. Time slots are organized in
the form of frames. If a device does not have data for a particular time slot, then the
empty slot will be transmitted.
o The most popular Synchronous TDM applications are T-1 multiplexing, ISDN multiplexing,
and SONET multiplexing.
o If there are n devices, then there are n slots.

Concept Of Synchronous TDM


In the above figure, the Synchronous TDM technique is implemented. Each device is allocated a
time slot. The time slots are transmitted whether or not the sender has data to send.

Disadvantages Of Synchronous TDM:

o The capacity of the channel is not fully utilized because empty slots carrying no data are
also transmitted. In the above figure, the first frame is completely filled, but in the
last two frames some slots are empty. Therefore, we can say that the capacity of the
channel is not utilized efficiently.
o The speed of the transmission medium should be greater than the total speed of the input
lines. An alternative approach to the Synchronous TDM is Asynchronous Time Division
Multiplexing.

Asynchronous TDM

o An asynchronous TDM is also known as Statistical TDM.


o Asynchronous TDM is a technique in which time slots are not fixed, as they are in
Synchronous TDM. Time slots are allocated only to those devices which have data to
send. Therefore, we can say that an asynchronous time division multiplexer transmits only
the data from active workstations.
o An asynchronous TDM technique dynamically allocates the time slots to the devices.
o In Asynchronous TDM, total speed of the input lines can be greater than the capacity of
the channel.
o An asynchronous time division multiplexer accepts the incoming data streams and creates a
frame that contains only data, with no empty slots.
o In Asynchronous TDM, each slot contains an address part that identifies the source of the
data.

o The difference between Asynchronous TDM and Synchronous TDM is that many slots in
Synchronous TDM go unused, whereas in Asynchronous TDM the slots are fully utilized. This
leads to a smaller transmission time and efficient utilization of the capacity of the
channel.
o In Synchronous TDM, if there are n sending devices, then there are n time slots. In
Asynchronous TDM, if there are n sending devices, then there are m time slots where m is
less than n (m<n).
o The number of slots in a frame depends on the statistical analysis of the number of input
lines.

Concept Of Asynchronous TDM

In the above diagram, there are 4 devices, but only two devices are sending the data, i.e., A and C.
Therefore, the data of A and C are only transmitted through the transmission line.

The frame of the above diagram can be represented as:

The above figure shows that the data part contains the address to determine the source of the
data.
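The contrast between the two techniques can be sketched by building one frame each way for the same devices (the device names and data are illustrative, matching the idea that only A and C are active):

```python
devices = {"A": "a1", "B": "", "C": "c1", "D": ""}   # "" = nothing to send

# Synchronous TDM: n devices -> n slots, empty slots transmitted too.
sync_frame = [devices[d] for d in sorted(devices)]
# ['a1', '', 'c1', '']

# Statistical (asynchronous) TDM: only active devices get a slot, so
# each slot must also carry the source address of its data.
stat_frame = [(d, data) for d, data in sorted(devices.items()) if data]
# [('A', 'a1'), ('C', 'c1')]
```

The statistical frame is half the size here, which is exactly the efficiency gain described above, at the cost of carrying an address in every slot.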

Connection Management

Connection management in computer networks involves establishing, maintaining, and terminating
communication sessions between network hosts. This process is essential for enabling reliable and
efficient data exchange across the network. Connection management is typically implemented at
the transport layer of the OSI (Open Systems Interconnection) model or the TCP/IP (Transmission
Control Protocol/Internet Protocol) suite. Here's an overview of connection management in
computer networks:

1. Connection Establishment:
 Three-Way Handshake (TCP): In TCP, connection establishment involves a three-way
handshake process. The initiating host (client) sends a SYN (synchronize) packet to the
receiving host (server). The server responds with a SYN-ACK (synchronize-
acknowledgment) packet to acknowledge the SYN and indicate its readiness to establish a
connection. Finally, the client sends an ACK (acknowledgment) packet to acknowledge the
server's SYN-ACK, completing the handshake and establishing the connection.
 Connectionless Protocols (UDP): Connectionless protocols like UDP do not require a
connection establishment phase. Communication sessions are established implicitly when
data is sent from one host to another. However, UDP does not provide mechanisms for
ensuring reliable delivery or maintaining connection state.
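The three-way handshake can be sketched as an exchange of (flags, sequence, acknowledgement) tuples (the initial sequence numbers are illustrative; real TCP chooses them randomly):

```python
# Client and server pick initial sequence numbers (normally random).
client_isn, server_isn = 100, 300

handshake = [
    ("SYN",     client_isn,     None),            # client -> server
    ("SYN-ACK", server_isn,     client_isn + 1),  # server -> client
    ("ACK",     client_isn + 1, server_isn + 1),  # client -> server
]

# Each side acknowledges the other's sequence number plus one.
ack_of_syn = handshake[1][2]        # what the server acknowledges
ack_of_syn_ack = handshake[2][2]    # what the client acknowledges
```

After the third segment both sides agree on each other's sequence numbers and the connection is established.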
2. Connection Maintenance:
 Keep-Alive Mechanisms: TCP includes keep-alive mechanisms to maintain the
connection state by periodically exchanging small packets (keep-alive probes) between
the client and server. This helps detect inactive or idle connections and ensures they are
not prematurely terminated by intermediate network devices (e.g., routers, firewalls).
 Acknowledgment and Retransmission (TCP): During data transfer, TCP uses
acknowledgment and retransmission mechanisms to ensure reliable delivery of data. The
sender waits for acknowledgment packets from the receiver and retransmits any
unacknowledged data after a timeout period.
3. Connection Termination:
 Four-Way Handshake (TCP): In TCP, connection termination involves a four-way
handshake process. The initiating host (client) sends a FIN (finish) packet to indicate its
desire to terminate the connection. The receiving host (server) acknowledges the FIN with
an ACK packet. The server then sends its own FIN packet to initiate the termination of its
end of the connection. Finally, the client acknowledges the server's FIN with an ACK
packet, completing the termination process.
 Connectionless Protocols (UDP): Since UDP does not maintain connection state, there is
no explicit connection termination phase. Communication sessions are terminated
implicitly when data exchange is complete, and hosts stop sending or receiving data.
4. Resource Management:
 Connection Tracking: Network devices such as routers, firewalls, and load balancers may
maintain connection state information to facilitate packet forwarding, filtering, or load
distribution.
 Connection Limits: Some network devices enforce limits on the number of concurrent
connections to prevent resource exhaustion or denial-of-service (DoS) attacks.

Overall, connection management plays a critical role in ensuring reliable, orderly, and efficient
communication between network hosts, enabling various applications and services to interact
seamlessly across the network.
Flow Control

Flow control in the transport layer is a mechanism used to manage the rate of data
transmission between two communicating hosts to ensure that the sender does not
overwhelm the receiver with data. It is essential for preventing packet loss, buffer
overflow, and congestion in the network. Flow control is primarily implemented in
protocols like TCP (Transmission Control Protocol) within the TCP/IP suite. Here's
how flow control works in the transport layer:

1. Receiver Window: In TCP, flow control is achieved through the use of a sliding
window mechanism. The receiver maintains a buffer called the receiver window,
which represents the amount of data it is willing to accept from the sender at any
given time. The size of the receiver window is dynamically adjusted based on factors
such as available buffer space and the receiver's processing capacity.
2. Advertised Window: The receiver informs the sender about the size of its receiver
window by including this information in TCP segments sent back to the sender. This
value is known as the advertised window or receive window size.
3. Sender Behavior:
 The sender keeps track of the receiver's advertised window size.
 The sender limits the amount of data it sends to the receiver based on the size of the
receiver window.
 The sender adjusts its transmission rate dynamically to match the receiver's ability to process
incoming data.
 If the sender's data transmission rate exceeds the receiver's advertised window size, the
sender must pause or slow down transmission until the receiver advertises a larger window
size.
4. Window Scaling: To support high-speed networks and large buffer sizes, TCP
includes an option called window scaling, which allows the receiver to advertise
larger window sizes by scaling up the value in the TCP header.
5. Congestion Avoidance: Flow control mechanisms work in conjunction with
congestion control mechanisms to ensure efficient and fair utilization of network
resources. While flow control regulates the rate of data transmission between sender
and receiver, congestion control regulates the overall rate of data transmission in the
network to prevent congestion and ensure network stability.
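The receiver-window mechanism described above can be sketched as follows (the window sizes and scale factor are illustrative):

```python
def send_allowed(in_flight: int, advertised_window: int) -> int:
    """Bytes the sender may still transmit without overflowing the
    receiver's buffer (the usable window)."""
    return max(0, advertised_window - in_flight)

# Receiver advertises a 4096-byte window; 3000 bytes are unacknowledged.
usable = send_allowed(in_flight=3000, advertised_window=4096)

# If the window shrinks below the data in flight, the sender must pause.
paused = send_allowed(in_flight=3000, advertised_window=2048) == 0

# Window scaling: a shift count (here 7, illustrative) multiplies the
# 16-bit advertised value to allow windows larger than 65535 bytes.
effective_window = 65535 << 7
```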

Overall, flow control in the transport layer ensures smooth and efficient data transfer
between communicating hosts by preventing the sender from overwhelming the
receiver with data, thereby preventing packet loss and network congestion. It plays a
crucial role in achieving reliable and efficient communication in TCP/IP networks.
Retransmission

Retransmission in the transport layer, particularly in protocols like TCP
(Transmission Control Protocol), is a mechanism used to ensure reliable data delivery
in the presence of packet loss, network congestion, or other transmission errors.
Here's how retransmission works in the transport layer:

.
Acknowledgment and Timeout: When the sender sends data packets to the receiver,
it expects to receive acknowledgment (ACK) packets from the receiver indicating
successful receipt of the data. If the sender does not receive an ACK for a certain
period (known as the retransmission timeout), it assumes that the packet was lost or
damaged in transit.
Retransmission Timer: The sender maintains a retransmission timer for each data
packet it sends. If an ACK is not received within the timeout period, the sender
retransmits the packet.
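The per-segment timer can be modeled as a deadline attached to each unacknowledged segment. The sketch below is illustrative only (the class and function names are invented for this example); passing the clock in explicitly keeps it easy to test.

```python
class PendingSegment:
    """Minimal bookkeeping for one unacknowledged segment (illustrative)."""
    def __init__(self, seq: int, payload: bytes, sent_at: float, rto: float):
        self.seq = seq
        self.payload = payload
        self.deadline = sent_at + rto   # retransmit if no ACK arrives by this time

def segments_to_retransmit(pending, now: float):
    """Return every pending segment whose retransmission timer has expired."""
    return [seg for seg in pending if now >= seg.deadline]

pending = [PendingSegment(0, b"a", sent_at=0.0, rto=1.0),
           PendingSegment(1, b"b", sent_at=0.5, rto=1.0)]
print([s.seq for s in segments_to_retransmit(pending, now=1.2)])  # [0]
print([s.seq for s in segments_to_retransmit(pending, now=2.0)])  # [0, 1]
```

In a real stack an ACK would remove the segment from `pending`, and the RTO itself is computed from smoothed round-trip-time measurements (RFC 6298) rather than being a fixed constant.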
Selective Retransmission: With the selective acknowledgment (SACK) option, TCP can retransmit only the lost or damaged segments rather than resending the entire data stream. This approach minimizes unnecessary retransmissions and reduces network congestion.
Fast Retransmit: In addition to waiting for the retransmission timer to expire, TCP also employs a fast retransmit mechanism. If the sender receives three duplicate ACKs (indicating that segments are arriving out of order at the receiver), it assumes that the next segment in sequence was lost and immediately retransmits it without waiting for the retransmission timer to expire.
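The duplicate-ACK counting behind fast retransmit can be sketched as a small detector (the class name is invented for this example; a real TCP also enters fast recovery at this point):

```python
DUP_ACK_THRESHOLD = 3  # three duplicate ACKs trigger fast retransmit

class FastRetransmitDetector:
    """Counts duplicate cumulative ACKs; signals an immediate retransmission."""
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no: int) -> bool:
        """Return True when the segment starting at ack_no should be fast-retransmitted."""
        if ack_no == self.last_ack:
            self.dup_count += 1
        else:
            self.last_ack, self.dup_count = ack_no, 0
        return self.dup_count == DUP_ACK_THRESHOLD

d = FastRetransmitDetector()
acks = [100, 200, 200, 200, 200]      # original ACK 200 plus three duplicates
print([d.on_ack(a) for a in acks])    # [False, False, False, False, True]
```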
Congestion Control: TCP's congestion control mechanisms work in conjunction with
retransmission to adapt the transmission rate based on network conditions. When
packet loss is detected, TCP reduces its transmission rate to alleviate network
congestion and minimize the likelihood of further packet loss.
Exponential Backoff: To avoid congestion collapse and further network congestion, TCP employs an exponential backoff algorithm when retransmitting packets. After each retransmission that goes unacknowledged, the sender doubles the retransmission timeout period, exponentially increasing the time between successive retransmissions.
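The doubling can be written as a one-line helper. The 60-second cap below is an illustrative assumption (implementations impose some upper bound, but its value varies):

```python
def backed_off_rto(base_rto: float, retries: int, max_rto: float = 60.0) -> float:
    """RTO after `retries` unsuccessful retransmissions: doubled each time, capped."""
    return min(base_rto * (2 ** retries), max_rto)

print([backed_off_rto(1.0, n) for n in range(7)])
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]  -> 64 s would exceed the assumed cap
```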
Overall, retransmission in the transport layer, particularly in TCP, plays a crucial role
in ensuring reliable data delivery in the face of network errors and congestion. By
retransmitting lost or damaged packets and adapting transmission rates based on
network conditions, TCP provides a robust mechanism for end-to-end data
communication in computer networks.
Window Management :-
Window management in the transport layer, especially in protocols like TCP
(Transmission Control Protocol), involves the management of sliding windows used
for flow control and congestion control. Here's an explanation of window
management in the transport layer:
Sliding Window Protocol:
 TCP uses a sliding window protocol to manage the flow of data between the sender and
receiver efficiently.
 The sliding window represents the range of sequence numbers of the data that the sender
can transmit and the receiver can accept.
Sender's Window:
 The sender maintains a sending window that specifies the sequence numbers of the packets
it can transmit without waiting for acknowledgment.
 As the sender sends data packets, it advances the sending window based on the
acknowledgment received from the receiver.
Receiver's Window:
 The receiver maintains a receiving window that specifies the sequence numbers of the
packets it can accept.
 As the receiver receives data packets in order, it advances the receiving window and
acknowledges the receipt of the packets.
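The receiver-side sliding can be sketched as a toy model (segment-numbered rather than byte-numbered, names invented for this example): each arriving segment is buffered, and the window slides past any contiguous run of received segments, producing a cumulative ACK.

```python
class ReceiveWindow:
    """Toy receive window: buffers segments, slides past in-order data."""
    def __init__(self, size: int):
        self.next_expected = 0     # lowest sequence number not yet received
        self.size = size
        self.buffered = set()      # out-of-order segments held inside the window

    def receive(self, seq: int) -> int:
        """Accept segment `seq` if it falls in the window; return the cumulative ACK."""
        if self.next_expected <= seq < self.next_expected + self.size:
            self.buffered.add(seq)
            while self.next_expected in self.buffered:   # slide past contiguous data
                self.buffered.remove(self.next_expected)
                self.next_expected += 1
        return self.next_expected

rw = ReceiveWindow(size=4)
# Segment 2 arrives out of order, so the ACK stays at 1 until segment 1 fills the gap.
print([rw.receive(s) for s in [0, 2, 1, 3]])  # [1, 1, 3, 4]
```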
Window Size:
 The window size determines the number of packets that can be sent or received without
acknowledgment.
 It is dynamically adjusted based on factors such as available buffer space, network
conditions, and congestion level.
Flow Control:
 Window management is essential for flow control, ensuring that the sender does not
overwhelm the receiver with data.
 The receiver advertises its window size to the sender through TCP segments, indicating the
amount of buffer space available for incoming data.
 The sender adjusts its transmission rate based on the receiver's advertised window size,
ensuring that it does not exceed the receiver's capacity.
Congestion Control:
 Window management also plays a role in congestion control, helping to prevent network
congestion and packet loss.
 TCP's congestion control mechanisms adjust the window size dynamically based on network
conditions, such as packet loss and round-trip time.
 If congestion is detected, TCP reduces the window size to alleviate congestion and prevent
further packet loss.
Window Scaling:
 To support high-speed networks and large window sizes, TCP includes a window scaling
option that allows the sender and receiver to scale up the window size beyond the
limitations of the 16-bit window field in the TCP header.
Overall, window management in the transport layer, particularly in TCP, is crucial for
efficient and reliable data transmission, enabling optimal flow control and congestion
control in computer networks.
TCP Congestion Control :-
TCP congestion control is a method used by the TCP protocol to manage data flow over a network and prevent congestion. TCP maintains a congestion window and follows a congestion policy that avoids congestion. Previously, we assumed that only the receiver could dictate the sender's window size. We ignored another entity here: the network. If the network cannot deliver the data as fast as the sender creates it, it must tell the sender to slow down. In other words, in addition to the receiver, the network is a second entity that determines the size of the sender's window.
Congestion Policy in TCP
1. Slow Start Phase: starts slowly; the congestion window grows exponentially up to the threshold.
2. Congestion Avoidance Phase: after the threshold is reached, the window increases by 1 per round trip.
3. Congestion Detection Phase: on detecting congestion, the sender falls back to the slow start phase or the congestion avoidance phase.
Slow Start Phase

Exponential increment: In this phase, the congestion window size doubles after every RTT, i.e. it grows exponentially.
Example:- If the initial congestion window size is 1 segment, and the first segment
is successfully acknowledged, the congestion window size becomes 2 segments. If
the next transmission is also acknowledged, the congestion window size doubles to
4 segments. This exponential growth continues as long as all segments are
successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
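The doubling above can be expressed directly (an illustrative helper, counting cwnd in whole segments; real TCP counts bytes and caps growth at ssthresh):

```python
def slow_start_cwnd(initial: int, rtts: int) -> int:
    """Congestion window after `rtts` round trips of slow start, all segments ACKed."""
    return initial * (2 ** rtts)

print([slow_start_cwnd(1, n) for n in range(4)])  # [1, 2, 4, 8]
```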
Congestion Avoidance Phase

Additive increment: This phase starts after the congestion window reaches the threshold value, also denoted ssthresh. The size of cwnd (congestion window) increases additively: after each RTT, cwnd = cwnd + 1.
Example:- if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be
increased to 21 segments in the next RTT. If all 21 segments are again successfully
acknowledged, the congestion window size would be increased to 22 segments, and
so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
Congestion Detection Phase

Multiplicative decrement: If congestion occurs, the congestion window size is decreased. The only way a sender can guess that congestion has happened is the need to retransmit a segment. Retransmission is needed to recover a missing packet that is assumed to have been dropped by a router due to congestion. Retransmission can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs are received.
Case 1: Retransmission due to Timeout – In this case, the congestion possibility is
high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with the slow start phase again.
Case 2: Retransmission due to 3 Duplicate Acknowledgements – The congestion possibility is lower.
(a) The ssthresh value is reduced to half of the current window size.
(b) Set cwnd = ssthresh.
(c) Continue with the congestion avoidance phase.
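The three phases and the two reactions above can be combined into a toy state machine (a sketch in whole segments; class and method names are invented for this illustration, and real TCP also includes fast recovery):

```python
class TcpCongestion:
    """Toy model of slow start, congestion avoidance, and congestion detection."""
    def __init__(self, ssthresh: int = 32):
        self.cwnd = 1
        self.ssthresh = ssthresh

    def on_rtt_all_acked(self):
        if self.cwnd < self.ssthresh:
            self.cwnd = min(self.cwnd * 2, self.ssthresh)  # slow start: double
        else:
            self.cwnd += 1                                  # avoidance: add 1

    def on_timeout(self):                                   # Case 1
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1                                       # restart slow start

    def on_triple_dup_ack(self):                            # Case 2
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = self.ssthresh                           # skip slow start

tcp = TcpCongestion(ssthresh=8)
trace = []
for _ in range(5):
    tcp.on_rtt_all_acked()
    trace.append(tcp.cwnd)
tcp.on_timeout()
trace.append(tcp.cwnd)
print(trace)  # [2, 4, 8, 9, 10, 1] -> doubles to the threshold, adds 1, resets on timeout
```

A model like this can also be used to generate the cwnd-versus-round trajectory asked for in the example below.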
Example
Assume a TCP connection exhibiting slow start behavior. At the 5th transmission round it reaches the threshold (ssthresh) value of 32, enters the congestion avoidance phase, and continues in it until the 10th transmission round. At the 10th transmission round, three duplicate ACKs are received by the sender, and TCP enters additive increase mode (cwnd = ssthresh). A timeout occurs at the 16th transmission round. Plot the transmission round (time) versus the congestion window size of the TCP segments.
Quality of Service :-
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate performance based on application or network-operator requirements or provide predictable or guaranteed performance to applications, sessions, or traffic aggregates. QoS is commonly characterized in terms of packet delay and losses of various kinds.
Need for QoS –
 Video and audio conferencing require bounded delay and loss rate.
 Video and audio streaming requires a bounded packet loss rate but may not be so sensitive to delay.
 Time-critical applications (real-time control) in which bounded delay is considered
to be an important factor.
 Valuable applications should be provided better services than less valuable
applications.
QoS Specification –
QoS requirements can be specified as:
1. Delay
2. Delay Variation (Jitter)
3. Throughput
4. Error Rate
There are two types of QoS Solutions:
1. Stateless Solutions –
Routers maintain no fine-grained state about traffic. This makes the approach scalable and robust, but the service guarantees are weak: there is no assurance about the delay or performance a particular application will encounter.
2. Stateful Solutions –
Routers maintain per-flow state, and this flow awareness enables powerful services such as guaranteed service, high resource utilization, and traffic protection. The trade-off is that stateful solutions are much less scalable and robust.
Integrated Services(IntServ) –
1. An architecture for providing QoS guarantees in IP networks for individual
application sessions.
2. Relies on resource reservation, and routers need to maintain state information of
allocated resources and respond to new call setup requests.
3. Network decides whether to admit or deny a new call setup request.
Key Aspects :-
Traffic Classification: QoS mechanisms classify traffic into different classes or
priority levels based on predefined criteria such as application type, destination, or
service level agreements (SLAs). For example, voice and video traffic may be
classified as high priority, while bulk data transfer may be classified as low priority.
Traffic Policing and Shaping: QoS mechanisms enforce traffic policies to control
the flow of data and ensure that traffic conforms to predefined QoS parameters.
Traffic policing involves dropping or marking packets that exceed specified rate
limits, while traffic shaping involves buffering and delaying packets to smooth out
traffic flows and ensure compliance with QoS requirements.
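A common policing mechanism is the token bucket: tokens accumulate at the contracted rate up to a burst limit, conforming packets consume tokens, and packets that arrive when the bucket is empty are dropped (or marked). The sketch below is illustrative, with the clock passed in explicitly:

```python
class TokenBucket:
    """Toy token-bucket policer: rate in bytes/second, burst is the bucket depth."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst          # bucket starts full
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True              # conforming packet
        return False                 # police: drop or mark the packet

tb = TokenBucket(rate=100.0, burst=200.0)   # 100 B/s sustained, 200 B burst
print([tb.allow(150, t) for t in (0.0, 0.1, 2.0)])  # [True, False, True]
```

The middle packet is rejected because only 60 tokens remain after 0.1 s of refill; by t = 2.0 s the bucket has refilled enough to admit traffic again. Shaping uses the same bucket but queues non-conforming packets instead of dropping them.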
Traffic Prioritization: QoS mechanisms prioritize traffic based on its class or
priority level to ensure that high-priority traffic receives preferential treatment over
lower-priority traffic. This may involve giving high-priority traffic access to network
resources during times of congestion or limiting the impact of lower-priority traffic on
network performance.
Resource Reservation: QoS mechanisms can allocate and reserve network resources
such as bandwidth, buffer space, and processing capacity for specific traffic classes or
flows. This ensures that critical applications receive the necessary resources to meet
their performance requirements, even during periods of network congestion.
Traffic Queuing and Scheduling: QoS mechanisms use queuing and scheduling
algorithms to manage the order in which packets are transmitted from the network
buffers. Priority queuing, weighted fair queuing, and class-based queuing are
examples of queuing algorithms used to prioritize traffic based on QoS parameters.
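Strict priority queuing, the simplest of the disciplines listed, can be sketched with a heap: the scheduler always transmits the highest-priority queued packet, and a sequence counter preserves FIFO order within a class (illustrative names; real schedulers also guard against starvation of low-priority traffic):

```python
import heapq

class PriorityQueueScheduler:
    """Toy strict-priority scheduler: lower priority number = transmitted first."""
    def __init__(self):
        self._heap = []
        self._seq = 0              # tie-breaker keeps FIFO order within a class

    def enqueue(self, priority: int, packet: str):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

q = PriorityQueueScheduler()
q.enqueue(2, "bulk-1"); q.enqueue(0, "voice-1"); q.enqueue(1, "video-1"); q.enqueue(0, "voice-2")
print([q.dequeue() for _ in range(4)])  # ['voice-1', 'voice-2', 'video-1', 'bulk-1']
```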
Congestion Avoidance and Management: QoS mechanisms implement congestion
avoidance and management techniques to prevent network congestion and mitigate its
effects. This may include dynamically adjusting traffic rates, implementing flow
control mechanisms, and signaling congestion to network devices and endpoints.
End-to-End QoS Guarantees: QoS mechanisms provide end-to-end QoS guarantees
by coordinating QoS policies and mechanisms across multiple network segments and
devices. This ensures that QoS requirements are met consistently throughout the
network path, from source to destination.
Overall, QoS in the transport layer plays a critical role in ensuring that network
resources are allocated efficiently and fairly, and that applications and services can
meet their performance requirements in diverse and dynamic network environments.