Process-to-Process Delivery:
UDP and TCP
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
23-1 PROCESS-TO-PROCESS DELIVERY
The transport layer is responsible for process-to-process
delivery—the delivery of a packet, part of a message, from
one process to another. Two processes communicate in a
client/server relationship, as we will see later.
Topics discussed in this section:
Client/Server Paradigm
Multiplexing and Demultiplexing
Connectionless Versus Connection-Oriented Service
Reliable Versus Unreliable
Three Protocols
Figure 23.1 Types of data deliveries
Connectionless vs. Connection-Oriented Service
A transport-layer protocol can be either:
• Connectionless: UDP
• Packets are sent from one party to another with no need for
connection establishment or connection release. The packets are
not numbered; they may be delayed or lost, or may arrive out of
sequence.
• Connection-oriented: TCP
• A connection is first established between the sender and receiver
before data transfer. At the end, the connection is released.
Reliable vs. Unreliable
• UDP: connectionless & unreliable
• TCP: connection-oriented & reliable
Figure 23.7 Error control
Three Protocols
Figure 23.8 Position of UDP, TCP, and SCTP in TCP/IP suite
23-2 USER DATAGRAM PROTOCOL (UDP)
The User Datagram Protocol (UDP) is called a
connectionless, unreliable transport protocol. It does
not add anything to the services of IP except to provide
process-to-process communication instead of host-to-
host communication.
Topics discussed in this section:
Well-Known Ports for UDP
User Datagram
Checksum
UDP Operation
Use of UDP
Well-known Ports for UDP
Table 23.1 Well-known ports used with UDP
User Datagram
A UDP packet is called a user datagram; it has a fixed-size header of 8 bytes.
Figure 23.9 User datagram format
Source port number: the port number used by the process running on the source host (an ephemeral port number if the host is the client; a well-known port number if it is the server).
Destination port number: the port number used by the process running on the destination host.
Total length: the length of the user datagram, header plus data.
UDP length = IP length − IP header length
Checksum: field used to detect errors.
Checksum
Figure 23.10 Pseudoheader for checksum calculation
• The UDP checksum calculation covers 3 sections: the pseudoheader, the
UDP header, and the data (from the application layer).
• Pseudoheader: ensures that if the IP header is corrupted, the user
datagram is not delivered to the wrong host.
• Protocol field: ensures that the packet belongs to UDP and not to another
transport-layer protocol (the protocol field value for UDP is 17).
Example 23.2
Figure 23.11 shows the checksum calculation for a very small user
datagram with only 7 bytes of data. Because the number of bytes of
data is odd, padding is added for checksum calculation. The
pseudoheader as well as the padding will be dropped when the user
datagram is delivered to IP.
Figure 23.11 Checksum calculation of a simple UDP user datagram
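The three-section calculation can be sketched in Python. This is a sketch, not the exact values of Figure 23.11; the addresses and ports below are illustrative only.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words; odd-length data is zero-padded."""
    if len(data) % 2:
        data += b"\x00"                            # padding, as in Example 23.2
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes,
                 src_port: int, dst_port: int, payload: bytes) -> int:
    """Checksum over pseudoheader + UDP header + data (checksum field as 0)."""
    udp_len = 8 + len(payload)                     # fixed 8-byte UDP header
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, udp_len)  # 17 = UDP
    header = struct.pack("!HHHH", src_port, dst_port, udp_len, 0)
    return internet_checksum(pseudo + header + payload)

print(hex(udp_checksum(bytes([153, 18, 8, 105]), bytes([171, 2, 14, 10]),
                       1087, 13, b"TESTING")))
```

The pseudoheader is used only for the calculation and is never transmitted; a receiver that sums all the words (including the received checksum) over the same three sections should obtain zero.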
UDP Operation
1. When the client process starts:
• It requests a port number.
• Incoming and outgoing queues are created (some implementations
create only an incoming queue).
2. The client process sends messages to the outgoing queue.
3. UDP removes the messages one by one:
• Adds the UDP header
• Delivers them to IP
4. At the server side, once its process starts running:
• Incoming and outgoing queues are created.
5. When a message arrives:
• UDP sends it to the incoming queue of the port number specified.
Figure 23.12 Queues in UDP
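The queue behavior maps directly onto the socket API. A minimal sketch in Python; the loopback address and message contents are made up for illustration:

```python
import socket

# Server side: its process starts, requests a port, and waits on the
# incoming queue of that port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))                 # port 0 lets the OS pick one
port = server.getsockname()[1]

# Client side: an ephemeral port is assigned automatically on first send.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"request", ("127.0.0.1", port))   # UDP adds its 8-byte header

# Server takes the message from the incoming queue of the specified port.
data, addr = server.recvfrom(1024)
server.sendto(b"reply:" + data, addr)

reply, _ = client.recvfrom(1024)
print(reply)                                  # b'reply:request'
```

Note that no connection is established or released: each `sendto` is an independent user datagram, which is exactly the connectionless service described in 23-1.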
Use of UDP
1. Suitable for a process that requires simple request-response
communication with little concern for flow and error control
2. Suitable for a process with internal flow and error control
e.g., the Trivial File Transfer Protocol (TFTP) process has its own flow and error control
3. Suitable as a transport protocol for multicasting
4. Used for management processes such as SNMP
5. Used for some route updating protocols, e.g., RIP
23-3 TCP
TCP is a connection-oriented protocol; it creates a
virtual connection between two TCPs to send data. In
addition, TCP uses flow and error control mechanisms
at the transport level.
Topics discussed in this section:
TCP Services
TCP Features
Segment
A TCP Connection
Flow Control
Error Control
TCP Services
Table 23.2 Well-known ports used by TCP
Stream Delivery Services
• In TCP, the sending process delivers data as a stream of bytes, and the
receiving process receives data as a stream of bytes
• TCP creates an environment in which the two processes seem to
be connected by an imaginary “tube” that carries data across the
Internet
Figure 23.13 Stream delivery
Sending and receiving buffers
• The sending and receiving processes may not write/read at the same speed,
so TCP needs buffers for storage
• Each buffer is a circular array of 1-byte locations
1. At the sending site, the buffer has 3 types of chambers:
White: empty, can be filled by the sending process
Pink: contains bytes to be sent
Gray: holds bytes that have been sent but not yet acknowledged
2. At the receiving site, the buffer is divided into 2 chambers:
White: empty, to be filled by bytes received from the network
Pink: holds bytes to be read by the process
Figure 23.14 Sending and receiving buffers
Segments
• The IP layer, as the service provider for TCP, needs to send data in
packets, not as a stream of bytes.
1. TCP groups a number of bytes into a packet called a segment, then
adds a header and hands it to IP
2. Segments are encapsulated in IP datagrams and transmitted
• Segments are not necessarily the same size
• TCP offers full-duplex service and provides sending and receiving
buffers to allow segments to move in both directions
Figure 23.15 TCP segments
TCP Features
The bytes of data being transferred in each connection are numbered
by TCP. The numbering starts with a randomly generated number.
The value in the sequence number field of a segment defines the
number of the first data byte contained in that segment.
The value of the acknowledgment field in a segment defines
the number of the next byte a party expects to receive.
The acknowledgment number is cumulative.
Example 23.3
The following shows the sequence number for each
segment:
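The example's table is not reproduced here, but the rule (a segment's sequence number is the number of its first data byte) can be sketched directly. The file size, starting byte number, and segment size below are assumed for illustration:

```python
def segment_sequence_numbers(first_byte, total_bytes, seg_size):
    """Return (sequence number, last byte number) for each segment.
    The sequence number of a segment is the number of its first data byte."""
    segments = []
    byte = first_byte
    end = first_byte + total_bytes
    while byte < end:
        last = min(byte + seg_size, end) - 1
        segments.append((byte, last))
        byte = last + 1
    return segments

# e.g., 5000 bytes, first byte numbered 10001, sent in 1000-byte segments
for seq, last in segment_sequence_numbers(10001, 5000, 1000):
    print(f"sequence number: {seq}  (bytes {seq} to {last})")
```

With these assumed numbers the five segments carry bytes 10001–11000, 11001–12000, and so on up to 14001–15000.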
Segment
In TCP, a packet is called a segment; its header is between 20 and 60 bytes.
Source port address: the port number used by the process running on the source host (ephemeral if the host is the client; well-known if it is the server).
Destination port address: the port number used by the process running on the destination host.
Sequence number: identifies the number of the first data byte in the segment.
Acknowledgment number: defines the byte number that the party expects to receive next.
HLEN (header length): the field value ranges from 5 (5 × 4 = 20 bytes, no options) to 15 (15 × 4 = 60 bytes, maximum options).
Reserved: reserved for future use.
Control field: defines 6 different control bits/flags.
Window size: the window size, determined by the receiver, that the sender must maintain.
Checksum: used for error checking (the protocol field value for TCP is 6); the same calculation as in UDP, but mandatory in TCP.
Urgent pointer: used when the segment contains urgent data (valid only if the URG flag is set).
Options/Padding: up to 40 bytes of optional information.
Table 23.3 Description of flags in the control field
URG flag - The sending application program tells the sending TCP that a
piece of data is urgent. The sending TCP creates a segment and inserts
the urgent data at the beginning of the segment.
PSH (push) flag - informs the receiving TCP to deliver data to the process as
soon as they are received and not to wait for more data to come.
A TCP Connection
Each party must initialize communication and get approval from the other party before any data are transferred. A TCP connection has 3 phases.
1st phase: Connection Establishment
a. The server requests a passive open: the server tells its TCP that it is ready to accept a connection.
b. The client program requests an active open: the client tells its TCP that it needs to be connected to a server.
c. Three-way handshaking starts:
• SYN segment
A SYN segment cannot carry data, but it consumes one sequence number.
• SYN + ACK segment
A SYN + ACK segment cannot carry data, but does consume one sequence number.
• ACK segment
An ACK segment, if carrying no data, consumes no sequence number.
Figure 23.18 Connection establishment using three-way handshaking
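In the socket API, the passive and active opens correspond to listen() and connect(); the three-way handshake itself is carried out by the TCP layer inside connect()/accept(). A sketch (the loopback address is illustrative):

```python
import socket
import threading

# Passive open: the server tells its TCP it is ready to accept a connection.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

accepted = []

def serve():
    conn, addr = listener.accept()   # returns once the handshake completes
    accepted.append(addr)
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Active open: connect() sends SYN, waits for SYN + ACK, then sends ACK.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.close()
t.join()
listener.close()
```

The application never sees the SYN and ACK segments; by the time connect() returns, all three handshake segments have been exchanged.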
2nd phase: Data Transfer
Eg:
Client sends 2000 bytes in 2 segments
Server then sends 2000 bytes in 1
segment
Figure 23.19 Data transfer
3rd phase: Connection Termination
3 way handshaking
• FIN segment
The FIN segment consumes one
sequence number if it does
not carry data.
• FIN + ACK segment
The FIN + ACK segment consumes
one sequence number if it
does not carry data.
• ACK segment
Figure 23.20 Connection termination using three-way handshaking
Flow Control
A sliding window is used to make transmission more efficient as well as
to control the flow of data so that the destination does not become
overwhelmed with data. TCP sliding windows are byte-oriented.
3 activities of the window:
Open: allows new bytes in the buffer to become eligible for sending
Close: some bytes have been acknowledged
Shrink: revokes the eligibility of some bytes for sending (not encouraged)
The window size is determined by the lesser of 2 values:
a. rwnd (receiver window)
• value advertised by the opposite end in its acknowledgments
• tells how many bytes it can accept before its buffer overflows
b. cwnd (congestion window)
• value determined by the network to avoid congestion
Example 23.4
What is the value of the receiver window (rwnd) for host
A if the receiver, host B, has a buffer size of 5000 bytes
and 1000 bytes of received and unprocessed data?
Solution
• The value of rwnd = 5000 − 1000 = 4000.
• Host B can receive only 4000 bytes of data before
overflowing its buffer.
• Host B advertises this value in its next segment to A.
Example 23.5
What is the size of the window for host A if the value of
rwnd is 3000 bytes and the value of cwnd is 3500 bytes?
Solution
The size of the window is the minimum of rwnd and
cwnd, which is 3000 bytes.
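Examples 23.4 and 23.5 reduce to two small formulas, sketched below:

```python
def advertised_rwnd(buffer_size: int, unprocessed_bytes: int) -> int:
    """rwnd = free space remaining in the receiver's buffer (Example 23.4)."""
    return buffer_size - unprocessed_bytes

def sender_window(rwnd: int, cwnd: int) -> int:
    """The sender window is the minimum of rwnd and cwnd (Example 23.5)."""
    return min(rwnd, cwnd)

print(advertised_rwnd(5000, 1000))   # 4000
print(sender_window(3000, 3500))     # 3000
```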
Example 23.6
An unrealistic example of a sliding window
• The sender has sent bytes up to 202 (bytes 200 to 202 are sent,
but not acknowledged)
• Assume that cwnd is 20
• The receiver has sent an acknowledgment number of 200 with an
rwnd of 9 bytes.
• The size of the sender window is the minimum of rwnd and cwnd,
or 9 bytes.
• Bytes 203 to 208 can be sent without worrying about
acknowledgment.
• Bytes 209 and above cannot be sent.
Note
Some points about TCP sliding windows:
❏ The size of the window is the lesser of rwnd and
cwnd.
❏ The source does not have to send a full window’s
worth of data.
❏ The window can be opened or closed by the
receiver, but should not be shrunk.
❏ The destination can send an acknowledgment at
any time as long as it does not result in a shrinking
window.
❏ The receiver can temporarily shut down the
window; the sender, however, can always send a
segment of 1 byte after the window is shut down.
Error Control
Error detection & correction in TCP is achieved through
a. Checksum
b. Acknowledgment
c. Time out/ Retransmission
In modern implementations, a retransmission occurs if the retransmission timer
expires or 3 duplicate ACK segments have arrived.
1. Retransmission time-out (RTO): an RTO timer is maintained for all
outstanding segments.
No retransmission timer is set for an ACK segment.
2. On 3 duplicate ACK segments, the missing segment is retransmitted
immediately (fast retransmission).
Data may arrive out of order and be temporarily stored by the receiving TCP, but
TCP guarantees that no out-of-order segment is delivered to the process.
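The rule that out-of-order data are stored but never delivered out of order can be sketched as a small receive buffer. The byte numbers below are illustrative, echoing the lost-segment scenario of Figure 23.25:

```python
def receive(expected: int, arriving_segments):
    """Receiver-side sketch: buffer out-of-order segments, deliver only
    contiguous bytes to the process. Each segment is (first_byte, data)."""
    buffered = {}        # first byte number -> data, held until the gap fills
    delivered = []
    for first, data in arriving_segments:
        buffered[first] = data
        # deliver everything that now continues the in-order stream
        while expected in buffered:
            chunk = buffered.pop(expected)
            delivered.append(chunk)
            expected += len(chunk)
    return delivered, expected      # `expected` = next cumulative ACK number

# Bytes 701-800 are delayed; 801-900 arrive early and are held back.
delivered, ack = receive(701, [(801, b"B" * 100), (701, b"A" * 100)])
print(ack)   # 901: both segments delivered once the gap was filled
```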
Scenarios
Normal operation
e.g., a bidirectional data transfer
between 2 systems
• An ACK is delayed 500 ms to
see if any more segments
arrive
• When the timer matures,
the ACK is triggered
Figure 23.24 Normal operation
Lost segment
The receiver TCP delivers only ordered data to the process.
Note that the receiver stores bytes 801
to 900, but never delivers these bytes
to the application until the gap is filled.
Figure 23.25 Lost segment
Fast retransmission
Although the timer for segment 3 has
not yet matured, fast retransmission
requires that segment 3 be resent
immediately after 3 duplicate ACK
segments are received.
Figure 23.26 Fast retransmission
Multiple Access
Multiple Access Protocols
When nodes or stations are connected to or use a common
link, called a multipoint or broadcast link, we need a multiple
access protocol to coordinate access to the link.
Many formal protocols have been devised to handle access
to the shared link. We categorize them into three groups.
Figure 12.2 Taxonomy of multiple-access protocols discussed in this chapter
12-1 RANDOM ACCESS
In random access or contention methods:
No station is superior to another, and none is assigned control
over another.
No station permits, or does not permit, another station to send.
At each instance, a station has the right to the medium without
being controlled by any other station.
However, if more than one station tries to send, there is an
access conflict (collision) and the frames will be either
destroyed or modified.
• To avoid access conflict or to resolve it when it happens, each
station follows a procedure that answers the following questions:
• When can the station access the medium?
• What can the station do if the medium is busy?
• How can the station determine the success or failure of
the transmission?
• What can the station do if there is an access conflict?
• Different protocols answer these questions differently.
Protocols to be discussed in this section:
• ALOHA
• Carrier Sense Multiple Access (CSMA)
• Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
• Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
The random access methods have evolved from a very interesting
protocol known as ALOHA, which used a very simple procedure called
multiple access (MA).
The method was improved with the addition of a procedure that forces
the station to sense the medium before transmitting. This was called
carrier sense multiple access (CSMA).
This method later evolved into two parallel methods: carrier sense
multiple access with collision detection (CSMA/CD) and carrier sense
multiple access with collision avoidance (CSMA/CA).
CSMA/CD tells the station what to do when a collision is detected.
CSMA/CA tries to avoid the collision.
Evolution of random-access methods
Multiple access (MA)-ALOHA Protocol
Multiple access. A station sends a frame whenever it has a frame to
send.
Acknowledgment. After sending the frame, the station waits for
an acknowledgment (explicit or implicit).
If the acknowledgment does not arrive after a time-out period, the
station assumes that the frame is lost; it tries sending again after a
random amount of time (called the back-off time). The randomness
helps avoid further collisions.
To prevent congesting the channel, after a maximum number of
retransmission attempts K_max, a station must give up and try again later.
The protocol flowchart is shown in Figure 12.4.
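The procedure can be sketched as a loop. The attempt limit and the back-off formula (a random multiple of the maximum propagation delay, with the random range doubling per attempt) follow the flowchart's usual form; treat the specific constants as assumptions:

```python
import random

K_MAX = 15                  # maximum retransmission attempts (assumed value)

def aloha_send(transmit, max_prop_delay: float) -> bool:
    """Sketch of the ALOHA procedure. `transmit` sends one frame and
    returns True if an acknowledgment arrived before the time-out."""
    for k in range(1, K_MAX + 1):
        if transmit():
            return True                        # frame acknowledged: success
        # back-off: wait a random multiple of the maximum propagation delay
        r = random.randint(0, 2 ** min(k, 10) - 1)
        backoff = r * max_prop_delay           # time.sleep(backoff) in reality
    return False                               # give up after K_MAX attempts
```

The random back-off is what prevents two colliding stations from simply colliding again on their next attempt.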
Procedure for the ALOHA protocol
Carrier sense multiple access (CSMA)
To minimize the chance of collision and, therefore, increase the
performance, the CSMA method was developed.
The chance of collision can be reduced if a station senses the
medium before trying to use it.
Carrier sense multiple access (CSMA) requires that each station
first listen to the medium (or check the state of the medium) before
sending.
CSMA can reduce the possibility of collision, but it cannot
eliminate it.
The possibility of collision still exists because of the
propagation delay; when a station sends a frame, it takes a
while (although very short) for the first bit to reach every
station and for every station to sense it.
12.44
12-1 RANDOM ACCESS
Carrier sense multiple access (CSMA)
In other words, a station may sense the medium and find it idle
only because the signal from another station has not yet reached it.
Collision in CSMA
Persistence strategy
• What should a station do if the channel is busy?
• The persistence strategy defines the procedures for a station that
senses a busy medium.
• Two substrategies have been devised: nonpersistent and persistent.
• In a nonpersistent strategy, a station that has a frame to send
senses the line. If the line is idle, the station sends immediately.
• If the line is not idle, the station waits a random period of time
and then senses the line again.
• The nonpersistent approach reduces the chance of collision
because it is unlikely that two or more stations wait the same
amount of time and retry again simultaneously.
• However, the nonpersistent strategy reduces the efficiency of
the network if the medium stays idle while there are stations
that have frames to send.
Non-persistent strategy
In a persistent strategy, a station senses the line. If the line is
idle, the station sends a frame. This method has two variations:
1-persistent and p-persistent.
In the 1-persistent method, if the station finds the line idle, the
station sends its frame immediately (with a probability of 1).
This method increases the chance of collision because two or
more stations may send their frames after finding the line idle.
1-persistent method
The p-persistent method is used if the channel has time slots. In this
method, if the station finds the line idle, it may or may not send.
It sends with probability p and refrains from sending with
probability 1 − p. The p-persistent strategy combines the advantages
of the other two strategies: it reduces the chance of collision and
improves efficiency.
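The decision each persistence method makes at one sensing instant can be sketched in a single function. This is a simplification (one call models one sensing instant or time slot, not a full MAC layer):

```python
import random

def persistence_decision(method: str, line_idle: bool, p: float = 0.5) -> str:
    """What a station does at one sensing instant under each strategy."""
    if not line_idle:
        if method == "nonpersistent":
            return "wait random time, then sense again"
        return "keep sensing continuously"     # 1-persistent and p-persistent
    if method == "1-persistent":
        return "send"                          # transmit with probability 1
    if method == "p-persistent":
        # slotted channel: send with probability p, defer with probability 1-p
        return "send" if random.random() < p else "wait next slot"
    return "send"                              # nonpersistent: idle -> send
```

Setting p = 1 makes the p-persistent case behave like 1-persistent, which is why intermediate values of p trade collision probability against idle time.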
Figure 12.10 Behavior of three persistence methods
Figure 12.11 Flow diagram for three persistence methods
Carrier sense multiple access with collision detection
(CSMA/CD)
The CSMA method does not specify the procedure following a collision.
That is the reason CSMA was never implemented.
Carrier sense multiple access with collision detection (CSMA/CD)
adds a procedure to handle a collision.
In this method, any station can send a frame after executing its
persistence procedure. The station then monitors the medium to see if
transmission was successful. If so, the station is finished.
If, however, there was a collision, the frame needs to be sent again. To
reduce the probability of another collision, the station waits: it backs
off for some time before resending.
How long should it back off? It is reasonable that the station waits a little
the first time, more if a collision occurs again, much more if it happens a
third time, and so on.
CSMA/CD is used in traditional Ethernet.
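The "wait a little, then more, then much more" rule is classically realized as binary exponential back-off. A sketch; the cap of 10 doublings and the limit of about 15 attempts follow traditional Ethernet practice and are stated here as assumptions:

```python
import random

def backoff_slots(attempt: int, max_attempts: int = 15) -> int:
    """After the k-th collision, wait R slot times, with R chosen uniformly
    from [0, 2**min(k, 10) - 1]: the range doubles with each collision."""
    if attempt > max_attempts:
        raise RuntimeError("too many collisions; abort the transmission")
    return random.randint(0, 2 ** min(attempt, 10) - 1)

# The expected wait grows with every collision:
# attempt 1 -> R in {0, 1}; attempt 2 -> {0..3}; attempt 3 -> {0..7}; ...
```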
Figure 12.14 Flow diagram for CSMA/CD
Carrier sense multiple access with collision avoidance (CSMA/CA)
CSMA/CA is used in wireless LANs.
The CSMA/CA procedure differs from the previous procedures in that
there should be no access conflict after data is sent.
The procedure avoids collision by:
First, checking the channel using one of the persistence strategies.
After it finds the line idle, the station waits an IFG (Interframe gap)
amount of time.
Even though the channel may appear idle when it is
sensed, a distant station may have already started
transmitting, so the station does not send immediately; it
waits for a period of time called the IFG.
If the line is still idle, it then waits another random amount
of time before it starts sending.
ARP – Layer 2 Protocol
ARP is a layer 2 protocol (or Data Link Layer Protocol),
because the protocol does not contain
Network Layer Header
Transport Layer Header
Application Layer Header
It only has “two headers” in the frame
The MAC header
The ARP header
What does ARP do?
The main functions of ARP:
Obtaining the MAC address of a destination IP.
Forming the ARP table with lookup entries of "destination IP to MAC address".
ARP is issued (automatically) by a host OS that tries to obtain the MAC address
of a destination IP.
After obtaining the MAC address of the desired destination IP through ARP,
the host uses the information to form an entry in the ARP table (or ARP
cache).
There are two parts to ARP:
ARP request (issued by the source host)
ARP reply (issued by the destination host)
ARP Table
ARP Request
ARP Request:
Argon broadcasts an ARP request to all stations on the
network: “What is the hardware address of Router137?”
Argon Router137
128.143.137.144 128.143.137.1
00:a0:24:71:e4:44 00:e0:f9:23:a8:20
ARP Request:
What is the MAC address
of 128.143.137.1?
ARP Reply
ARP Reply:
Router 137 responds with an ARP Reply which contains the
hardware address
Argon Router137
128.143.137.144 128.143.137.1
00:a0:24:71:e4:44 00:e0:f9:23:a8:20
ARP Reply:
The MAC address of 128.143.137.1
is 00:e0:f9:23:a8:20
ARP Packet Format
Ethernet II header: Destination address (6) | Source address (6) | Type 0x0806 (2) | ARP request or reply (28) | Padding (10) | CRC (4)
ARP message format (28 bytes):
Hardware type (2 bytes) | Protocol type (2 bytes)
Hardware address length (1 byte) | Protocol address length (1 byte)
Operation code (2 bytes)
Source hardware address*
Source protocol address*
Target hardware address*
Target protocol address*
* Note: The length of the address fields is determined by the corresponding address length fields
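The 28-byte message can be packed field by field. A sketch for Ethernet over IPv4; the MAC and IP values are the Argon/Router137 addresses from the earlier example:

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    """Pack the 28-byte ARP request body for Ethernet (type 1) over IPv4."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                 # hardware type: Ethernet
        0x0800,            # protocol type: IPv4
        6, 4,              # hardware / protocol address lengths
        1,                 # operation code: 1 = request, 2 = reply
        src_mac,           # source hardware address
        src_ip,            # source protocol address
        b"\x00" * 6,       # target hardware address: unknown, being asked for
        target_ip,         # target protocol address
    )

pkt = build_arp_request(bytes.fromhex("00a02471e444"),
                        bytes([128, 143, 137, 144]),
                        bytes([128, 143, 137, 1]))
print(len(pkt))   # 28
```

On the wire, this body would be carried in an Ethernet frame whose type field is 0x0806 and whose destination is the broadcast address for a request.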
ARP and RARP
Note:
The Internet is based on IP addresses
Data link protocols (Ethernet, FDDI, ATM) may have different (MAC)
addresses
The ARP and RARP protocols perform the translation between IP
addresses and MAC layer addresses
ARP translates an IP address (32 bit) to an Ethernet MAC address (48 bit);
RARP translates an Ethernet MAC address (48 bit) back to an IP address (32 bit).