Chapter 5
Transport Layer
Prepared By: Asst. Prof. Sanjivan Satyal
Table of Contents
• Functions of Transport Layer
• Connection Management: TCP, UDP
• Port Addressing: Ports and Sockets
• Connection Establishment and Release
• Flow Control, Buffering
• Congestion Control: Token Bucket, Leaky Bucket
• IP Remapping: NAT
Transport Layer
• Offers peer-to-peer and end-to-end connection between two
processes on remote hosts.
• Process to process delivery.
• Takes data from the upper layer (i.e. the Application layer), breaks it into smaller segments, numbers each byte, and hands the segments over to the lower layer (Network Layer) for delivery.
Process to Process communication
• A process is an application-layer entity that uses the
services of the transport layer.
• The network layer is responsible for communication at the computer level.
• The network layer can deliver a message only to the destination computer.
• The transport-layer protocol is responsible for delivery of
the message to the appropriate process.
Functions
• Breaks the data supplied by the Application layer into smaller units called segments.
• Numbers every byte in the segments and keeps account of them.
• Ensures that data is received in the same sequence in which it was sent.
• Provides end-to-end delivery of data between hosts, which may or may not belong to the same subnet.
• All server processes that intend to communicate over the network are identified by port numbers.
Logical connection at the transport layer
1. Connection Establishment and Release
Transport Layer
• Transport Layer provides two types of services:
Connection-Oriented Transmission:
In this type of transmission the receiving device sends an acknowledgment back to the source after a packet or group of packets is received. It is the slower transmission method.
Connectionless Transmission:
In this type of transmission the receiving device does not send an acknowledgment back to the source. It is the faster transmission method.
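A minimal Python sketch of the two service types, using the standard socket module: a SOCK_STREAM socket gives the connection-oriented (TCP) service, while a SOCK_DGRAM socket gives the connectionless (UDP) service.

import socket

# Connection-oriented service: TCP (SOCK_STREAM).
# A connection must be established (connect/accept) before data is sent,
# and every segment is acknowledged by the receiver.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Connectionless service: UDP (SOCK_DGRAM).
# sendto() can transmit immediately; no handshake and no acknowledgments.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp_sock.close()
udp_sock.close()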
Transmission Control Protocol
• The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet Protocol suite.
• It is the most widely used protocol for data transmission in communication networks such as the Internet.
• Well Known TCP Ports
• FTP- 20(data), 21(control)
• TELNET- 23
• SMTP-25
• DNS-53
• HTTP-80
• RPC-111
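As an illustration, Python's socket.getservbyname() can look up these well-known port numbers from the local services database (a sketch; which service names are available depends on the system's services file).

import socket

for service in ("ftp", "telnet", "smtp", "domain", "http"):
    print(service, socket.getservbyname(service, "tcp"))
# Typical output: ftp 21, telnet 23, smtp 25, domain 53, http 80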
Definitions
• The domain name system (i.e., “DNS”) is responsible for
translating domain names into a specific IP address so that
the initiating client can load the requested Internet resources.
The domain name system works much like a phone book where
users can search for a requested person and retrieve their phone
number.
• Telnet : Port 23 is typically used by the Telnet protocol. Telnet
commonly provides remote access to a variety of communications
systems. Telnet is also often used for remote maintenance of
many networking communications devices including routers and
switches.
• RPC: Port 111 is generally regarded as insecure (a common security vulnerability) because it gives direct and easy access to the RPC services. Remote Procedure Call (RPC) is a software communication protocol that one program can use to request a service from a program located on another computer on a network, without having to understand the network's details.
Features
• TCP is a reliable protocol, i.e. it keeps track of whether each data packet has reached the destination and resends it if needed.
• TCP ensures that the data reaches the intended destination in the same order it was sent.
• TCP is connection oriented: a connection between the two remote endpoints must be established before actual data is sent.
• TCP provides error-checking and recovery mechanisms.
• TCP provides end-to-end communication.
• TCP provides flow control and quality of service.
• TCP operates in client/server, point-to-point mode.
• TCP provides a full-duplex service, i.e. each endpoint can act as both sender and receiver.
TCP Header
3-Way Handshaking
• TCP is connection oriented, which means a connection must be established before data transfer
• Steps in TCP communication:
1. Connection establishment
2. Data transfer
3. Connection termination
• The connection is established with the help of a 3-way handshaking protocol
• Connection termination is also done with a 3-way handshaking protocol
Connection Establishment (Handshaking)
• SYN-Synchronization message used for connection establishment
• ACK- Acknowledgement
Steps
1. First, the client sends a SYN message to the server
2. The server responds with an ACK to the client's SYN and sends its own SYN to the client
3. The client responds with an ACK to the server's SYN, and the connection is established
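A minimal sketch of the handshake as seen from an application: the kernel performs the SYN, SYN+ACK, ACK exchange when the client calls connect() and the server's accept() returns. The loopback address and port 5000 are arbitrary example values.

import socket, threading

ready = threading.Event()

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 5000))
    srv.listen(1)
    ready.set()                    # listening socket is ready
    conn, addr = srv.accept()      # returns once the 3-way handshake completes
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))   # kernel sends SYN, receives SYN+ACK, sends ACK
cli.close()                        # close() later triggers the FIN exchange
t.join()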
Connection Establishment (Handshaking)
Data Transfer (ACK Number + SEQ Number)
PSH: Request for push
Connection Termination (Handshaking)
• FIN (Finish) – Connection termination
• ACK- Acknowledgement
Steps
1. First, the client sends a FIN message to the server
2. The server responds with an ACK to the client's FIN and sends its own FIN to the client
3. The client responds with an ACK to the server's FIN, and the connection is terminated
Connection Termination (Handshaking)
User Datagram Protocol
• The simplest transport-layer communication protocol in the TCP/IP protocol suite.
• Involves a minimal amount of communication mechanism.
• An unreliable transport protocol, but it uses IP services, which provide a best-effort delivery mechanism.
• The receiver does not generate an acknowledgement of a packet received and, in turn, the sender does not wait for any acknowledgement of a packet sent. This shortcoming makes the protocol unreliable, but easier on processing.
Features
• UDP is used when acknowledgement of data does not hold any
significance.
• UDP is good protocol for data flowing in one direction.
• UDP is simple and suitable for query based communications.
• UDP is not connection oriented.
• UDP does not provide congestion control mechanism.
• UDP does not guarantee ordered delivery of data.
• UDP is suitable protocol for streaming applications such as VoIP,
multimedia streaming.
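A small sketch of a query-based exchange over UDP on the loopback interface (port 6000 is an arbitrary example value): each message is a single datagram, with no connection and no acknowledgments.

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 6000))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"time?", ("127.0.0.1", 6000))   # query: one datagram out

query, addr = server.recvfrom(1024)            # server reads the query
server.sendto(b"12:00", addr)                  # reply: one datagram back

reply, _ = client.recvfrom(1024)
print(reply)                                   # b'12:00'

client.close()
server.close()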
UDP Header
• Source port number (16 bits): the port number used by the process running on the source host. Range 0-65535.
• Destination port number (16 bits): the port number used by the process running on the destination host. Range 0-65535.
• Length (16 bits): defines the total length of the user datagram, header plus data. Length ranges between 8 and 65,535 bytes.
• Checksum (16 bits): for error checking.
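The 8-byte header can be illustrated with Python's struct module, packing the four 16-bit fields in network byte order (the port numbers and payload below are made-up examples; a checksum of 0 means "not computed", which IPv4 permits).

import struct

src_port, dst_port, payload = 53000, 53, b"example query"
length = 8 + len(payload)      # header (8 bytes) + data
checksum = 0                   # 0 = checksum not computed (allowed over IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(struct.unpack("!HHHH", header))   # (53000, 53, 21, 0)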
Reliable vs. Unreliable
Port and Socket
PORT
• At the transport layer, we need a transport layer address, called a port number, to
choose among multiple processes running on the destination host.
• The destination port number is needed for delivery; the source port number is
needed for the reply.
• In the Internet model, the port numbers are 16-bit integers between 0 and
65,535.
• The client program defines itself with a port number, chosen randomly by the
transport layer software running on the client host. This is the ephemeral port
number.
• The server process must also define itself with a port number, but this port is a well-known port number.
IANA Ranges
• The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges: well known, registered, and dynamic (or private)
• Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA. These are the well-known ports.
• Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA. They can only be registered with IANA to prevent duplication.
• Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor registered. They can be used by any process. These are the ephemeral ports.
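A small helper (a sketch, not part of any standard library) that classifies a port number into the three IANA ranges described above:

def port_range(port: int) -> str:
    """Classify a port number according to the IANA ranges."""
    if 0 <= port <= 1023:
        return "well-known"
    if 1024 <= port <= 49151:
        return "registered"
    if 49152 <= port <= 65535:
        return "dynamic (ephemeral)"
    raise ValueError("port numbers are 16-bit integers: 0-65535")

print(port_range(80))      # well-known (HTTP)
print(port_range(8080))    # registered
print(port_range(51000))   # dynamic (ephemeral)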
Socket Address
• Process-to-process delivery needs two identifiers, IP address and the
port number, at each end to make a connection.
• The combination of an IP address and a port number is called a socket
address.
• The client socket address defines the client process uniquely just as
the server socket address defines the server process uniquely
• A transport layer protocol needs a pair of socket addresses: the client
socket address and the server socket address.
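In code, a socket address is simply the (IP address, port) pair; the addresses below are illustrative documentation addresses, and the ephemeral port is made up.

# Server socket address: server IP + well-known port
server_socket_address = ("203.0.113.10", 80)

# Client socket address: client IP + ephemeral port chosen by the client host
client_socket_address = ("198.51.100.7", 52010)

# The pair of socket addresses identifies the connection uniquely
connection = (client_socket_address, server_socket_address)
print(connection)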
Traffic Shaping Algorithms
• Traffic shaping is a mechanism to control the amount and the rate of the
traffic sent to the network.
• A burst transmission (or data burst) is the transmission of a relatively high-bandwidth flow over a short period.
• Two techniques can shape traffic:
• Leaky bucket and
• Token bucket.
The Leaky Bucket Algorithm
• If a bucket has a small hole at the bottom, the water leaks
from the bucket at a constant rate as long as there is
water in the bucket. The rate at which the water leaks
does not depend on the rate at which the water is input to
the bucket unless the bucket is empty
• NB: In a network, the bucket is the router and the water is the data packets
• In the figure, we assume that the network has committed a
bandwidth of 3 Mbps for a host. The use of the leaky bucket
shapes the input traffic to make it conform to this commitment.
The host sends a burst of data at a rate of 12 Mbps for 2 s, for
a total of 24 Mbits of data. The host is silent for 5 s and then
sends data at a rate of 2Mbps for 3 s, for a total of 6 Mbits of
data. In all, the host has sent 30 Mbits of data in 10 s
• The input rate can vary, but the output rate remains constant
• Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic: bursty chunks are stored in the router and sent out at an average rate
• Packets may also be dropped if the bucket is full
Implementation:
a. Process removes a fixed number of packets from the queue at
each tick of the clock
b. If the traffic consists of variable-length packets, the fixed
output rate must be based on the number of bytes or bits.
c. The following is an algorithm for variable-length packets:
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet
and decrement the counter by the packet size. Repeat this step
until n is smaller than the packet size.
3. Reset the counter and go to step 1
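A sketch of the variable-length-packet algorithm above in Python (the queue contents and the per-tick byte budget n are example values):

from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """One clock tick: send at most n bytes worth of packets from the queue."""
    counter = n                                # step 1: initialize counter to n
    sent = []
    while queue and counter >= len(queue[0]):  # step 2: send while counter covers the packet
        pkt = queue.popleft()
        counter -= len(pkt)
        sent.append(pkt)
    return sent                                # step 3: stop until the next tick

q = deque([b"x" * 400, b"y" * 300, b"z" * 600])
print([len(p) for p in leaky_bucket_tick(q, 1000)])   # [400, 300] sent this tick
print([len(p) for p in leaky_bucket_tick(q, 1000)])   # [600] sent on the next tick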
Token Bucket Algorithm
• In contrast to the LB, the Token Bucket (TB) algorithm allows the output rate to vary, depending on the size of the burst.
• In the TB algorithm, the bucket holds tokens. To
transmit a packet, the host must capture and destroy
one token.
• Tokens are generated by a clock at the rate of one
token every t sec.
• Idle hosts can capture and save up tokens (up to the
max. size of the bucket) in order to send larger bursts
later.
• The host can send bursty data as long as the bucket is not
empty
• Token bucket algorithm allows idle hosts to accumulate
credit for the future in the form of tokens
• For each tick of the clock, the system sends n tokens to
the bucket
• The system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick by sending 10,000 cells, or the host can take 1,000 ticks sending 10 cells per tick.
Implementation:
• The token bucket can easily be implemented. The bucket holds tokens; to transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every ∆t seconds.
1. A token is added at every ∆t time.
2. The bucket can hold at most b tokens. If a token arrives when the bucket is full, it is discarded.
3. When a packet of m bytes arrives, m tokens are removed from the bucket and the packet is sent to the network.
4. If fewer than m tokens are available, no tokens are removed from the bucket and the packet is considered to be non-conformant.
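A sketch of the same idea in Python; the capacity (b), the per-tick token rate, and the packet sizes are example values.

class TokenBucket:
    """The bucket holds at most `capacity` tokens; `rate` tokens are added per
    tick; a packet of m bytes consumes m tokens or is non-conformant."""

    def __init__(self, capacity: int, rate: int):
        self.capacity = capacity
        self.rate = rate
        self.tokens = 0

    def tick(self):
        # Tokens arriving when the bucket is full are discarded.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, m: int) -> bool:
        if self.tokens >= m:
            self.tokens -= m        # m tokens removed, packet sent
            return True
        return False                # non-conformant: not enough tokens

tb = TokenBucket(capacity=10_000, rate=100)
for _ in range(50):                 # host stays idle for 50 ticks
    tb.tick()
print(tb.send(4000))                # True: 5,000 saved tokens allow the burst
print(tb.send(2000))                # False: only 1,000 tokens remain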
Congestion
• Load of network: the number of packets sent to the network
• Capacity of network: the number of packets the network can handle
• Congestion occurs when the load on the network is greater than the capacity of the network
• Congestion occurs because of the following factors:
1. Processing capacity of routers
2. Number of packets at the input and output interfaces
Congestion Control
• Congestion control refers to techniques and mechanisms that
can either prevent congestion, before it happens, or remove
congestion, after it has happened
• Congestion control mechanisms fall into two broad categories:
1. Open-loop congestion control (prevention)
2. Closed-loop congestion control (removal)
Open Loop Congestion
• Congestion Prevention mechanism
• Policies are applied to prevent congestion before it happens
• Congestion control is handled by either the source or the
destination
1. Retransmission policy
2. Windowing policy
3. Acknowledge policy
4. Discard policy
5. Admission policy
• Retransmission Policy: Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. The retransmission policy and the retransmission timers must be designed to prevent congestion while remaining efficient.
• Window Policy: When the timer for a packet times out, several packets may be resent, although some may have arrived safe and sound at the receiver. This duplication may make the congestion worse, so the choice of window type (e.g. Selective Repeat rather than Go-Back-N) also matters for congestion.
• Acknowledgment Policy: The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Discarding Policy: A good discarding policy by the
routers may prevent congestion and at the same time
may not harm the integrity of the transmission.
• Admission Policy: A router can deny establishing a virtual
circuit connection if there is congestion in the network or
if there is a possibility of future congestion.
Closed-Loop Congestion Control
• Closed-loop congestion control mechanisms try to alleviate congestion
after it happens
• Several mechanisms have been used by different protocols, They are
1. Back pressure
2. Choke packet
3. Implicit signalling
4. Explicit signalling
1. Back Pressure
• Backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes.
• Backpressure is a node-to-node congestion control technique.
• It starts from the congested node and propagates, in the opposite direction of the data flow, toward the source.
• The backpressure technique can be applied only to virtual-circuit networks.
2. Choke Packet :
A choke packet is a packet sent by a node to the source to
inform it of congestion
• The warning goes from the congested router directly to the source station.
• Intermediate nodes do not learn about the congestion.
3. Implicit Signalling
• There is no communication between the congested nodes and the source.
• The source guesses that there is congestion somewhere in the network from other symptoms.
• Signals such as delayed acknowledgements are used.
• If an ACK is delayed, the source assumes there is congestion in the network and slows down the data transfer.
• Used mainly in TCP networks.
4. Explicit Signalling
• The node that experiences congestion can explicitly send a signal to
the source or destination
• The only difference from the choke-packet method is that no separate packet is used: the signal is carried inside packets that carry data, whereas the choke-packet method uses a separate packet.
• It is used in Frame Relay.
Congestion Control in TCP
• TCP uses congestion control to avoid congestion or alleviate
congestion in the network
1. Congestion Window
2. Congestion Policy
a. Slow start
b. Congestion Avoidance
c. Congestion Detection
1. Congestion Window
• The sender can send only a certain amount of data before waiting for an acknowledgment; this amount of data is termed the window size
• The sender has two pieces of information:
1. Receiver window size: the maximum available buffer size of the receiver
2. Congestion window size: determined by the congestion policy
• Actual window size = minimum of (receiver window size, congestion window size)
2. Congestion policy
Handling of congestion is done in 3 phases
a. Slow start: the size of the congestion window increases exponentially until it reaches a threshold
b. Congestion Avoidance:
• Uses additive increase
• The congestion window increases additively until congestion is detected
• The slow-start phase stops and the additive phase begins once the threshold is reached
c. Congestion Detection
• If congestion occurs, the congestion window size must be
decreased
• The only way the sender can guess that congestion has
occurred is by the need to retransmit a segment
• However, retransmission can occur in one of two cases:
1. when a timer times out
2. when three duplicate ACKs are received
• In both cases, the threshold is dropped to one-half of the current window size, a multiplicative decrease
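A toy trace of this policy (a simplification: it models only the timeout reaction, with cwnd in MSS units and a loss injected at an arbitrary round):

def simulate_cwnd(rounds: int, ssthresh: int = 16, loss_rounds=frozenset({12})):
    """Slow start doubles cwnd each RTT, congestion avoidance adds 1 per RTT,
    and a detected loss halves the threshold and restarts slow start."""
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in loss_rounds:                   # congestion detected (e.g. timeout)
            ssthresh = max(cwnd // 2, 1)         # multiplicative decrease of the threshold
            cwnd = 1                             # restart from slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)       # slow start: exponential growth up to the threshold
        else:
            cwnd += 1                            # congestion avoidance: additive increase
    return trace

print(simulate_cwnd(20))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 21, 22, 23, 24, 1, 2, 4, 8, 12, 13, 14]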
Congestion Control in TCP
• Congestion example
THANK YOU