Module-4
Transport layer
What is Transport Layer?
Transport Layer is the fourth layer from the top in the OSI model. It provides communication services to the application processes running on different hosts.
The transport layer provides services to the session layer and receives services from the network layer.
The services provided by the transport layer include error control as well as segmenting data before it is sent on the network and reassembling it at the receiver.
The transport layer also provides flow control and ensures that segmented data is delivered across the network in the right sequence.
Note: The main duty of the transport layer is to provide process-to-process communication.
Services provided by Transport Layer
1. Process to Process Communication
The transport layer is responsible for delivering a message to the appropriate process. It uses a port number to deliver the segmented data to the correct process among the multiple processes running on a particular host. A port number is a 16-bit address used by the transport layer to identify a client or server program.
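As a minimal sketch of port-based process-to-process delivery (assuming a Python environment; the actual port number is assigned by the operating system and is not from the original text), two UDP sockets on the same host can exchange a datagram, with the receiving process identified purely by its port number:

```python
import socket

# Two "processes" on the same host, each identified by a port number.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: OS assigns a free 16-bit port
port = receiver.getsockname()[1]         # the port that identifies this process

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # addressed by host + port

data, addr = receiver.recvfrom(1024)     # delivered to the right process
print(data)                              # b'hello'
sender.close()
receiver.close()
```

The host address alone identifies the machine; it is the 16-bit port that selects which of the machine's processes receives the data.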
2. Multiplexing and Demultiplexing
The transport layer provides a multiplexing service to improve transmission efficiency in data communication. At the receiver side, demultiplexing is required to deliver the data to the correct processes. The transport layer provides upward and downward multiplexing:
Upward multiplexing means that multiple transport layer connections share the same network connection. The transport layer sends several transmissions bound for the same destination along the same path in the network.
Downward multiplexing means that one transport layer connection uses multiple network connections. This allows the transport layer to split a connection among several paths to improve throughput in the network.
3. Flow Control
Flow control makes sure that data is transmitted at a rate that is acceptable for both sender and receiver by managing the data flow.
The transport layer provides end-to-end flow control between the sending and receiving hosts. It uses the sliding window protocol to provide flow control.
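The sliding-window idea can be sketched as a minimal sender-side model (an illustrative sketch assuming cumulative acknowledgements; the class and method names are invented here, not from any standard API):

```python
class SlidingWindowSender:
    """Minimal model of sender-side sliding-window flow control."""
    def __init__(self, window_size):
        self.window = window_size
        self.base = 0        # oldest unacknowledged sequence number
        self.next_seq = 0    # next sequence number to send

    def can_send(self):
        # Sending is allowed only while the window is not exhausted.
        return self.next_seq < self.base + self.window

    def send(self):
        assert self.can_send(), "window full: sender must wait for an ACK"
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq):
        # A cumulative ACK for seq slides the window forward.
        self.base = max(self.base, seq + 1)

s = SlidingWindowSender(window_size=3)
sent = [s.send() for _ in range(3)]   # segments 0, 1, 2 fill the window
print(s.can_send())                   # False: window of 3 is full
s.ack(0)                              # ACK for segment 0 slides the window
print(s.can_send())                   # True: segment 3 may now be sent
```

The window caps how much unacknowledged data may be in flight, which is exactly how the receiver throttles a faster sender.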
4. Data integrity
Transport Layer provides data integrity by:
Detecting and discarding corrupted packets.
Tracking lost and discarded packets and retransmitting them.
Recognizing duplicate packets and discarding them.
Buffering out-of-order packets until the missing packets arrive.
5. Congestion avoidance
If the load on the network is greater than the network's load capacity, congestion may occur.
Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.
The transport layer recognizes overloaded nodes and reduced flow rates and takes proper steps to overcome them.
Transport Layer protocols
o The transport layer is represented by two protocols: TCP and UDP.
o The IP protocol in the network layer delivers a datagram from a source host to the
destination host.
o Modern operating systems support multiuser and multiprocessing environments; an executing program is called a process. When a host sends a message to another host, a source process is sending a message to a destination process. The transport layer protocols assign connections to individual ports known as protocol ports.
o The IP protocol is a host-to-host protocol used to deliver a packet from the source host to the destination host, while transport layer protocols are port-to-port protocols that work on top of IP to deliver the packet from the originating port to the destination port.
o Each port is defined by a 16-bit positive integer address.
UDP
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
User Datagram Format
The user datagram has a fixed 8-byte header, which consists of the following four fields:
Where,
o Source port address: It defines the address of the application process that has sent the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
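The four 16-bit fields above can be packed and checked in a few lines. The sketch below (assuming Python; the helper name internet_checksum is illustrative, and the real UDP checksum additionally covers an IP pseudo-header, omitted here for brevity) builds a header with struct and computes the standard one's-complement checksum; a receiver recomputing the sum over a correct header obtains zero:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of all 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Source port, destination port, total length, checksum (all 16 bits).
src, dst, length = 5353, 5353, 8
header = struct.pack("!HHHH", src, dst, length, 0)   # checksum field zeroed
csum = internet_checksum(header)
header = struct.pack("!HHHH", src, dst, length, csum)

# A receiver summing the whole header (checksum included) gets 0.
print(internet_checksum(header))  # 0
```

The "verify by summing to zero" property is what makes the one's-complement checksum cheap to check on receipt.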
Disadvantages of UDP protocol
o UDP provides only the basic functions needed for end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and cannot specify the damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has been lost, as it does not contain an ID or sequence number for a particular data segment.
TCP
o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established between both ends before transmission. To create the connection, TCP generates a virtual circuit between sender and receiver for the duration of the transmission.
Features Of TCP protocol
o Stream data transfer: TCP transfers data as a contiguous stream of bytes. TCP groups the bytes into TCP segments and then passes them to the IP layer for transmission to the destination. TCP itself segments the data and forwards it to IP.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If ACK is not received within a
timeout interval, then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive
out of order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating the number of bytes it can receive without overflowing its internal buffer. The number of bytes is sent in the ACK as the highest sequence number that it can receive without any problem. This mechanism is also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications and forwarding it to applications on different computers. At the receiving end, the data is forwarded to the correct application; this process is known as demultiplexing. TCP delivers the packet to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window
sizes, is called a logical connection. Each connection is identified by the pair of sockets
used by sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data can flow in both directions at the same time. To achieve this, each TCP endpoint has sending and receiving buffers so that segments can flow in both directions. TCP is a connection-oriented protocol. Suppose process A wants to send data to and receive data from process B. The following steps occur:
o Establish a connection between two TCPs.
o Data is exchanged in both the directions.
o The Connection is terminated.
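The three steps above can be sketched with ordinary sockets (a minimal localhost example, assuming Python; port 0 simply lets the OS pick a free port):

```python
import socket
import threading

# Step 1: establish a connection between two TCPs.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def peer_b():
    conn, _ = server.accept()
    msg = conn.recv(1024)            # receive from A
    conn.sendall(b"reply:" + msg)    # full duplex: send back to A
    conn.close()                     # Step 3: terminate the connection

t = threading.Thread(target=peer_b)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

# Step 2: data is exchanged in both directions.
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                         # b'reply:hello'

client.close()
t.join()
server.close()
```

connect() triggers the connection establishment, sendall()/recv() carry data both ways over the one connection, and close() on each side tears it down.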
TCP Segment Format
Where,
o Source port address: It is used to define the address of the application program in a
source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program in
a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit sequence number field represents the position of the segment's data in the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number acknowledges data from the other communicating device. If the ACK flag is set to 1, this field specifies the sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of the
TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of the control field functions individually and independently. A control bit defines the use of a segment or serves as a validity check for other fields. There are six flags in the control field:
o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH field asks the receiving TCP to deliver the data to the application as soon as it arrives rather than buffering it.
o RST: The reset bit is used to reset the TCP connection when any confusion occurs in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
o Window Size: The window size is a 16-bit field that defines the number of bytes the receiver is willing to accept.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
o Options and padding: It defines optional fields that convey additional information to the receiver.

Comparison of TCP and UDP

Definition: TCP establishes a virtual circuit before transmitting the data, whereas UDP transmits the data directly to the destination computer without verifying whether the receiver is ready to receive it.
Connection type: TCP is a connection-oriented protocol; UDP is a connectionless protocol.
Speed: TCP is slow; UDP is fast.
Reliability: TCP is a reliable protocol; UDP is an unreliable protocol.
Header size: TCP has a 20-byte header (minimum); UDP has an 8-byte header.
Acknowledgement: TCP waits for acknowledgement of data and can resend lost packets; UDP neither takes acknowledgement nor retransmits a damaged frame.
SCTP stands for Stream Control Transmission Protocol.
It is a connection-oriented protocol in computer networks which provides a full-duplex association, i.e., it transmits multiple streams of data at the same time between two endpoints that have established a connection. It is sometimes referred to as next-generation TCP (TCPng). SCTP makes it easier to support telephone conversations on the Internet: a telephone conversation requires transmitting voice along with other data at the same time on both ends, and SCTP makes it easier to establish a reliable connection for this.
SCTP is also intended to make it easier to establish connections over wireless networks and to manage the transmission of multimedia data. SCTP is a standard protocol (RFC 2960) developed by the Internet Engineering Task Force (IETF).
Characteristics of SCTP
1. Unicast with Multiple properties –
It is a point-to-point protocol which can use different paths to reach end host.
2. Reliable Transmission –
It uses SACK and checksums to detect damaged, corrupted, discarded,
duplicate and reordered data. It is similar to TCP but SCTP is more efficient
when it comes to reordering of data.
3. Message oriented –
Each message can be framed, so the application can keep track of message boundaries and the structure of the data stream. In TCP, a separate abstraction layer is needed for this.
4. Multi-homing –
It can establish multiple connection paths between two end points and does
not need to rely on IP layer for resilience.
5. Security –
Another characteristic of SCTP is security. In SCTP, resource allocation for association establishment only takes place after a cookie exchange verifies the client's identity (INIT ACK). Man-in-the-middle and denial-of-service attacks are therefore less likely. Furthermore, SCTP doesn't allow half-open connections, making it more resistant to network floods and masquerade attacks.
Advantages of SCTP:
1. It is a full-duplex connection, i.e., users can send and receive data simultaneously.
2. It allows half-closed connections.
3. Message boundaries are maintained, so the application doesn't have to split messages itself.
4. It has properties of both the TCP and UDP protocols.
5. It doesn't rely on the IP layer for resilience of paths.
Disadvantages of SCTP:
1. One of the key challenges is that it requires changes in the transport stack on the node.
2. Applications need to be modified to use SCTP instead of TCP/UDP.
3. Applications need to be modified to handle multiple simultaneous streams.
Congestion Control in Computer Networks
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down network response time.
Effects of Congestion
As delay increases, performance decreases.
If delay increases, retransmission occurs, making the situation worse.
What is Quality of Service (QoS)?
Quality of Service (QoS) determines a network's capability to support predictable service over various technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Ethernet, SONET, and IP-routed networks. The networks can use any or all of these frameworks.
QoS also ensures that supporting priority for one or more flows does not make other flows fail. A flow can be a combination of source and destination addresses, source and destination socket numbers, a session identifier, or packets from a specific application or an incoming interface.
QoS is primarily used to control resources like bandwidth, equipment, and wide-area facilities. It can provide more efficient use of network resources, tailored services, and coexistence of mission-critical applications.
QoS Concepts
The QoS concepts are explained below:
Congestion Management
The bursty nature of data traffic sometimes causes traffic to exceed the speed of a connection. QoS allows a router to put packets into different service-specific queues, served according to priority, rather than buffering all traffic in a single queue and letting the first packet in be the first packet out.
Queue Management
The queues in a buffer can fill and overflow. If a queue is full, any newly arriving packet is dropped, and the router cannot prevent it from being dropped even if it is a high-priority packet. This is referred to as tail drop.
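Tail drop can be sketched as a bounded queue that simply discards arrivals once the buffer is full (an illustrative Python model, not router firmware; the class name is invented here):

```python
from collections import deque

class TailDropQueue:
    """Bounded FIFO buffer: when full, new arrivals are dropped."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # tail drop: even priority packets go
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = TailDropQueue(capacity=3)
results = [q.enqueue(p) for p in ["p1", "p2", "p3", "p4", "p5"]]
print(results)    # [True, True, True, False, False]
print(q.dropped)  # 2
```

The drop decision looks only at occupancy, never at the packet itself, which is exactly why tail drop cannot spare a high-priority packet.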
Link Efficiency
Low-speed links are bottlenecks for small packets. The serialization delay caused by large packets forces smaller packets to wait longer. The serialization delay is the time taken to put a packet on the link.
Elimination of overhead bits
It can also increase efficiency by removing too many overhead bits.
Traffic shaping and policing
Shaping can prevent buffer overflow by limiting the full bandwidth potential of an application's packets. In many network topologies, a high-bandwidth link connected to a low-bandwidth link at a remote site can overflow the low-bandwidth connection.
Therefore, shaping is used to bring the traffic flow from the high-bandwidth link closer to the rate of the low-bandwidth link to avoid overflowing it. Policing discards traffic that exceeds the configured rate, whereas shaping buffers it.
Techniques to Improve QoS
There are several techniques that can be used to improve the quality of service. The four common methods are scheduling, traffic shaping, admission control, and resource reservation.
a. Scheduling
Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service. We
discuss three of them here: FIFO queuing, priority queuing, and weighted fair
queuing.
i. FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than
the average processing rate, the queue will fill up and new packets will be
discarded. A FIFO queue is familiar to those who have had to wait for a bus at a
bus stop.
ii. Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class
has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest- priority queue are processed last. Note that the system does
not stop serving a queue until it is empty. Figure 4.32 shows priority queuing with
two priority levels (for simplicity).
A priority queue can provide better QoS than the FIFO queue because higher
priority traffic, such as multimedia, can reach the destination with less delay.
However, there is a potential drawback. If there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.
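Two-level priority queuing can be sketched with a heap (an illustrative Python model; a real scheduler would also guard against the starvation just described):

```python
import heapq

# (priority, arrival order, packet): a lower priority number is served first.
pq = []
arrival = 0
for priority, packet in [(1, "web"), (0, "voice1"), (1, "mail"), (0, "voice2")]:
    heapq.heappush(pq, (priority, arrival, packet))
    arrival += 1

# The higher-priority class (priority 0) drains completely before priority 1;
# within a class, packets stay in arrival order.
order = [heapq.heappop(pq)[2] for _ in range(len(pq))]
print(order)   # ['voice1', 'voice2', 'web', 'mail']
```

If priority-0 packets kept arriving, the priority-1 packets would wait forever, which is the starvation condition.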
iii. Weighted Fair Queuing
A better scheduling method is weighted fair queuing. In this technique, the packets
are still assigned to different classes and admitted to different queues. The queues,
however, are weighted based on the priority of the queues; higher priority means a
higher weight. The system processes packets in each queue in a round-robin
fashion with the number of packets selected from each queue based on the
corresponding weight. For example, if the weights are 3, 2, and 1, three packets are
processed from the first queue, two from the second queue, and one from the third
queue. If the system does not impose priority on the classes, all weights can be
equal. In this way, we have fair queuing with priority. Figure 4.33 shows the
technique with three classes.
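The weighted round-robin behaviour in the example (weights 3, 2, and 1) can be sketched directly (an illustrative Python model; the function name is invented here):

```python
from collections import deque

def weighted_fair_round(queues, weights):
    """One round-robin pass: take up to `weight` packets from each queue."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                served.append(q.popleft())
    return served

q1 = deque(["a1", "a2", "a3", "a4"])
q2 = deque(["b1", "b2", "b3"])
q3 = deque(["c1", "c2"])

# With weights 3, 2 and 1: three packets from the first queue,
# two from the second, and one from the third per round.
first_round = weighted_fair_round([q1, q2, q3], [3, 2, 1])
print(first_round)   # ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```

Because every queue is visited each round, no class can starve; the weights only change each class's share of the service.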
b. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token bucket
Leaky bucket algorithm
Step1: Consider a bucket with a small hole at the bottom into which water is poured at a
variable pace, but which leaks at a constant rate.
Step2: So (as long as there is water in the bucket), the rate at which water leaks is
unaffected by the pace at which water is poured into the bucket.
Step3: If the bucket is full, any more water that enters will pour over the edges and be
lost.
Step4: The same technique was apply to network packets. Consider the fact that data is
arriving at varying speeds from the source. Thus, Assume a source transmits data at 10
Mbps for 4 seconds. Moreover, For the next three seconds, there is no data. For 2
seconds, the source transfers data at an 8 Mbps pace. Thus, 68 Mb of data sent in less
than 8 seconds.
As a result, if you employ a leaky bucket technique, the data flow will be 8 Mbps for 9
seconds. As a result, the steady flow is maintained.
Features
1. Firstly, each host is connected to the network via a leaky bucket interface, which is a finite internal queue.
2. When space in the queue becomes available, a packet is sent from the application's stored backlog.
3. A new packet sent from an application is discarded when the queue is full.
4. The host operating system creates or simulates this hardware configuration.
5. Packets are queued and released at regular intervals and in the same amount, reducing the likelihood of congestion.
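The smoothing described above can be checked with a small per-second simulation (an illustrative Python sketch; rates are in Mbps, one step per second, and the function name is invented here):

```python
def leaky_bucket(arrivals_mbps, out_rate):
    """Simulate a leaky bucket: variable input, constant maximum output."""
    backlog = 0.0
    output = []
    while arrivals_mbps or backlog > 0:
        backlog += arrivals_mbps.pop(0) if arrivals_mbps else 0
        sent = min(backlog, out_rate)   # leak at a constant rate at most
        backlog -= sent
        output.append(sent)
    return output

# 10 Mbps for 4 s, silence for 3 s, 8 Mbps for 2 s -> 56 Mb in total.
out = leaky_bucket([10, 10, 10, 10, 0, 0, 0, 8, 8], out_rate=8)
print(sum(out))   # 56.0: nothing is lost
print(max(out))   # 8.0: the burst never exceeds the leak rate
```

During the 10 Mbps burst the excess 2 Mbps per second accumulates in the bucket and drains out during the following idle seconds, which is how the constant leak rate absorbs the burst.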
Token Bucket Algorithm
The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is. To deal with bursty traffic without losing data, we need a more flexible algorithm. One such approach is the token bucket algorithm.
Let us understand this algorithm step by step:
Step 1 − At regular intervals, tokens are thrown into the bucket.
Step 2 − The bucket has a finite maximum capacity; tokens that arrive when the bucket is full are discarded.
Step 3 − If a packet is ready, a token is removed from the bucket, and the packet is sent.
Step 4 − If there is no token in the bucket, the packet cannot be sent.
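The four steps can be sketched as a small class (an illustrative Python model; tokens are added by an explicit tick() call rather than by a real timer, and the class name is invented here):

```python
class TokenBucket:
    """Token bucket: tokens accrue at a fixed rate up to a capacity."""
    def __init__(self, rate, capacity):
        self.rate = rate               # tokens added per time unit
        self.capacity = capacity       # Step 2: maximum bucket size
        self.tokens = capacity         # start with a full bucket

    def tick(self):
        # Step 1: at regular intervals, tokens are thrown into the bucket;
        # any overflow beyond the capacity is discarded (Step 2).
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        # Step 3: a ready packet consumes one token and is sent.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                   # Step 4: no token, packet must wait

bucket = TokenBucket(rate=1, capacity=3)
burst = [bucket.try_send() for _ in range(4)]
print(burst)              # [True, True, True, False]: burst capped at capacity
bucket.tick()             # one interval passes, one token arrives
print(bucket.try_send())  # True
```

Unlike the leaky bucket, saved-up tokens let an idle source later send a burst up to the bucket capacity, which is the flexibility the text describes.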