Unit 4 Notes
ALOHA, CSMA/CD, CSMA/CA, Ethernet, Token Ring - Flow Control :Stop and Wait, Sliding Window -
Error Control: Stop and Wait ARQ, Sliding Window ARQ - Error Detection : Parity Check, Checksum,
CRC - Error Correction: Hamming codes - Data-Link Layer Protocols : HDLC, PPP.
The MAC sublayer is the bottom part of the data link layer. The protocols used to determine who
goes next on a multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium
Access Control) sublayer. The MAC sublayer is especially important in LANs, particularly wireless ones, because wireless is naturally a broadcast channel. Broadcast channels are sometimes referred to as multiaccess channels or random access channels.
If the spectrum is cut up into N regions and fewer than N users are currently interested in
communicating, a large piece of valuable spectrum will be wasted.
If more than N users want to communicate, some of them will be denied permission for lack of
bandwidth, even if some of the users who have been assigned a frequency band hardly ever transmit or
receive anything.
However, even assuming that the number of users could somehow be held constant at N, dividing the
single available channel into static sub channels is inherently inefficient.
The basic problem is that when some users are quiescent, their bandwidth is simply lost. They are not
using it, and no one else is allowed to use it either.
Furthermore, in most computer systems, data traffic is extremely bursty (peak traffic to mean traffic
ratios of 1000:1 are common). Consequently, most of the channels will be idle most of the time.
The poor performance of static FDM can easily be seen from a simple queuing theory calculation. Let us
start with the mean time delay, T, for a channel of capacity C bps, with an arrival rate of λ frames/sec,
each frame having a length drawn from an exponential probability density function with mean 1/μ
bits/frame. With these parameters the arrival rate is λ frames/sec and the service rate is μ C frames/sec.
From queuing theory it can be shown that for Poisson arrivals and exponential service times,

T = 1 / (μC − λ)
For example, if C is 100 Mbps, the mean frame length, 1/μ, is 10,000 bits, and the frame arrival rate, λ, is 5000 frames/sec, then T = 200 μsec. Note that if we ignored the queuing delay and just asked how long it takes to send a 10,000-bit frame on a 100-Mbps network, we would get the (incorrect) answer of 100 μsec. That result holds only when there is no contention for the channel.
Now let us divide the single channel into N independent sub channels, each with capacity C/N bps. The
mean input rate on each of the sub channels will now be λ/N. Recomputing T we get
T_FDM = 1 / (μ(C/N) − λ/N) = N / (μC − λ) = N · T
The mean delay using FDM is N times worse than if all the frames were somehow magically arranged
orderly in a big central queue.
Precisely the same arguments that apply to FDM also apply to time division multiplexing (TDM). Each
user is statically allocated every Nth time slot. If a user does not use the allocated slot, it just lies fallow.
The same holds if we split up the networks physically. Using our previous example again, if we were to
replace the 100-Mbps network with 10 networks of 10 Mbps each and statically allocate each user to
one of them, the mean delay would jump from 200 μsec to 2 msec.
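The 200 μsec vs. 2 msec comparison can be reproduced with a few lines of Python (the function name mm1_delay is ours; it simply evaluates T = 1/(μC − λ)):

```python
# Mean delay for an M/M/1 channel: T = 1 / (mu*C - lambda),
# where mu*C is the service rate in frames/sec and lambda the arrival rate.

def mm1_delay(capacity_bps, mean_frame_bits, arrivals_per_sec):
    service_rate = capacity_bps / mean_frame_bits   # mu * C, frames/sec
    return 1.0 / (service_rate - arrivals_per_sec)  # seconds

# Single 100-Mbps channel from the text: T = 200 microseconds.
T_single = mm1_delay(100e6, 10_000, 5_000)

# Statically split into N = 10 sub-channels: each gets C/10 and lambda/10,
# and the delay becomes N times worse (2 ms).
T_split = mm1_delay(100e6 / 10, 10_000, 5_000 / 10)

print(T_single, T_split)   # 0.0002  0.002
```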
Since none of the traditional static channel allocation methods work well with bursty traffic, we will
now explore dynamic methods.
In a random-access protocol, a transmitting node always transmits at the full rate of the channel, namely,
R bps.
When there is a collision, each node involved in the collision repeatedly retransmits its frame (that is, packet) until the frame gets through without a collision. But when a node experiences a collision, it doesn’t necessarily retransmit the frame right away. Instead, it waits a random delay before retransmitting the frame.
Each node involved in a collision chooses independent random delays. Because the random delays are
independently chosen, it is possible that one of the nodes will pick a delay that is sufficiently less than
the delays of the other colliding nodes and will therefore be able to sneak its frame into the channel
without a collision.
Among the dynamic methods are contention protocols such as ALOHA and CSMA/CD, collision-free protocols, and limited-contention protocols.
1. ALOHA
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem.
Although Abramson's work, called the ALOHA system, used ground-based radio broadcasting, the
basic idea is applicable to any system in which uncoordinated users are competing for the use of a
single shared channel.
We discuss two versions of ALOHA here: pure and slotted.
They differ with respect to whether time is divided into discrete slots into which all frames must fit.
Pure ALOHA does not require global time synchronization; slotted ALOHA does.
PURE ALOHA
A sketch of frame generation in an ALOHA system is given in Fig. 4-1. We have made the frames all the
same length because the throughput of ALOHA systems is maximized by having a uniform frame size rather
than by allowing variable length frames.
Figure 4-1. In pure ALOHA, frames are transmitted at completely arbitrary times.
To assess Pure ALOHA, we need to predict its throughput, the rate of (successful) transmission of frames.
First, let's make a few simplifying assumptions:
All frames have the same length.
Stations cannot generate a frame while transmitting or trying to transmit.
The population of stations attempts to transmit (both new frames and old frames that collided)
according to a Poisson distribution.
Let "T" refer to the time needed to transmit one frame on the channel, and let's define a "frame-time" as a unit of time equal to T. Let "G" refer to the mean of the Poisson distribution over transmission attempts; that is, on average there are G transmission attempts per frame-time. A frame is successful only if no other transmission begins within its two-frame-time vulnerable period, so the throughput is S = G e^(−2G), which peaks at S = 1/(2e) ≈ 0.184 when G = 0.5.
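As a quick numeric check of the pure ALOHA throughput S = G e^(−2G) (a standard result; the function name is ours):

```python
import math

# Pure ALOHA: a frame succeeds only if no other attempt starts within its
# two-frame-time vulnerable period. With Poisson attempts at rate G per
# frame-time, P(success) = e^(-2G), so throughput S = G * e^(-2G).
def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

# Maximum is at G = 0.5: S = 1/(2e), about 0.184 frames per frame-time.
print(round(pure_aloha_throughput(0.5), 3))  # 0.184
```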
SLOTTED ALOHA
An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete
timeslots and increased the maximum throughput.
A station can send only at the beginning of a timeslot, and thus collisions are reduced. In this case, we only need to worry about the transmission attempts within 1 frame-time and not 2 consecutive frame-times, since collisions can only occur within each timeslot. Thus, the probability of there being zero transmission attempts in a single timeslot is e^(−G), so the throughput is S = G e^(−G), which peaks at S = 1/e ≈ 0.368 when G = 1, twice the maximum of pure ALOHA.
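A quick numeric check of the slotted ALOHA throughput S = G e^(−G) (a standard result; the function name is ours):

```python
import math

# Slotted ALOHA: the vulnerable period shrinks to one slot, so a frame
# succeeds if no other attempt falls in its slot: P(0) = e^(-G) and
# throughput S = G * e^(-G), peaking at G = 1 with S = 1/e, about 0.368.
def slotted_aloha_throughput(G):
    return G * math.exp(-G)

print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```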
Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces,
in subscriber-based satellite communications networks, mobile telephony call setup, and in the
contactless RFID technologies.
Pros
Cons
Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which
a node verifies the absence of other traffic before transmitting on a shared transmission medium, such as an
electrical bus, or a band of the electromagnetic spectrum.
"Carrier Sense" describes the fact that a transmitter uses feedback from a receiver that detects a carrier
wave before trying to send. That is, it tries to detect the presence of an encoded signal from another station
before attempting to transmit. If a carrier is sensed, the station waits for the transmission in progress to finish
before initiating its own transmission.
"Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions
by one node are generally received by all other stations using the medium.
ADVANTAGES
DISADVANTAGES
Cannot recover from a collision (inefficient waste of medium time)
Types of CSMA Protocols:
1. Non-Persistent CSMA
2. 1-Persistent CSMA
3. p-Persistent CSMA
1) Nonpersistent CSMA:
1. If the medium is idle, transmit immediately.
2. If the medium is busy, wait a random amount of time and sense the channel again.
Performance:
1. Random delays reduce the probability of collisions, because two stations with data to transmit will wait for different amounts of time.
2. Bandwidth is wasted if the waiting time (backoff) is large, because the medium will remain idle following the end of a transmission even if one or more stations have frames to send.
2) 1-persistent CSMA:
1. If the medium is idle, transmit immediately.
2. If the medium is busy, continuously listen until the medium becomes idle; then transmit immediately with probability 1.
Performance:
1. If two or more stations become ready at the same time, a collision is guaranteed.
3) p-persistent CSMA: Time is divided into slots, where each time unit (slot) typically equals the maximum propagation delay.
1. If the medium is idle, transmit with probability p, or wait one time unit (slot) with probability (1 − p), then repeat step 1.
2. If the medium is busy, continuously listen until it becomes idle, then go to step 1.
Performance:
1. p-persistent CSMA is a compromise between the two: it reduces collisions like nonpersistent CSMA while reducing idle channel time like 1-persistent CSMA.
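The three persistence strategies above can be sketched as decision functions (a rough sketch: the function names and return labels are illustrative, and channel sensing and timing are abstracted to a boolean):

```python
import random

# Sketch of the three CSMA persistence strategies as decision functions.
# channel_idle is the result of carrier sensing; the returned strings name
# the action the station takes next (labels are ours, not from any standard).

def nonpersistent(channel_idle, p=None):
    """Transmit if idle; if busy, back off a random time and re-sense later."""
    return "transmit" if channel_idle else "wait_random_then_resense"

def one_persistent(channel_idle, p=None):
    """If busy, keep listening; transmit immediately when the channel goes idle."""
    return "transmit" if channel_idle else "listen_until_idle_then_transmit"

def p_persistent(channel_idle, p=0.1):
    """On an idle slot, transmit with probability p, else defer one slot."""
    if not channel_idle:
        return "listen_until_idle"
    return "transmit" if random.random() < p else "defer_one_slot"
```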
Carrier sense multiple access with collision detection (CSMA/CD) is a Media Access Control method in which:
a carrier-sensing scheme is used;
a transmitting data station that detects another signal while transmitting a frame stops transmitting that frame, transmits a jam signal, and then waits for a random time interval before trying to resend the frame.
CSMA/CD is a modification of pure carrier sense multiple access (CSMA). CSMA/CD is used to
improve CSMA performance by terminating transmission as soon as a collision is detected, thus
shortening the time required before a retry can be attempted.
ALGORITHM: The following procedure is used to initiate a transmission. The procedure is complete when the frame is transmitted successfully or a collision is detected during transmission.
When a station wants to send some information, it uses the following algorithm.
Main procedure:
1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes idle.
3. Start transmitting, monitoring the medium for a collision while sending.
4. Did a collision occur? If so, transmit a jam signal, increment the retry counter, and retry after a random backoff (aborting after too many attempts).
5. Otherwise, the frame was transmitted successfully; reset the retransmission counters.
Methods for collision detection are media dependent, but on an electrical bus such as 10BASE-5 or 10BASE-2,
collisions can be detected by comparing transmitted data with received data or by recognizing a higher-than-
normal signal amplitude on the bus.
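After a collision, classic Ethernet retries using truncated binary exponential backoff. A sketch of the backoff rule (the 10-collision cap and 16-attempt limit follow the usual Ethernet parameters; the function name is ours):

```python
import random

# Truncated binary exponential backoff used by CSMA/CD after a collision:
# after the n-th consecutive collision, wait a random number of slot times
# drawn from 0 .. 2^min(n,10) - 1, and abort after 16 failed attempts.
def backoff_slots(collision_count):
    if collision_count > 16:
        raise RuntimeError("too many collisions: abort transmission")
    k = min(collision_count, 10)
    return random.randrange(2 ** k)

# After the 3rd collision the wait is 0..7 slot times.
print(backoff_slots(3) in range(8))  # True
```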
JAM SIGNAL
The jam signal is a signal that carries a 32-bit binary pattern sent by a data station to inform the other
stations that they must not transmit.
ADVANTAGES
DISADVANTAGES
Requires ability to detect collisions
Collision-Free Protocols
Wireless LAN Protocols
Ethernet
Ethernet is a family of computer networking technologies commonly used in local area networks (LAN), metropolitan area networks (MAN) and wide area networks (WAN). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Ethernet has since retained a good deal of backward compatibility and has been refined to support higher bit rates, a greater number of nodes, and longer link distances. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI (Fiber Distributed Data Interface), and ARCNET (Attached Resource Computer NETwork).
The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants
use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data
transfer rates have been increased from the original 2.94 megabits per second (Mbit/s) to the latest 400 gigabits
per second (Gbit/s). The Ethernet standards comprise several wiring and signaling variants of the OSI physical
layer in use with Ethernet.
Switched Ethernet: An Ethernet LAN that uses switches to connect individual hosts or segments. In the case of individual hosts, the switch replaces the repeater and effectively gives the device the full 10 Mbps bandwidth (or 100 Mbps for Fast Ethernet) to the rest of the network.
Gigabit Ethernet : Gigabit Ethernet, a transmission technology based on the Ethernet frame format and
protocol used in local area networks (LANs), provides a data rate of 1 billion bits per second
(one gigabit). Gigabit Ethernet is defined in the IEEE 802.3 standard and is currently being used as the
backbone in many enterprise networks
802.11 Architecture and Protocol Stack: The 802.11 protocol stack spans the physical layer and the MAC sublayer. In the physical layer, 802.11 infrared offers two capacities, 1 Mbps or 2 Mbps, and 802.11 DSSS (Direct Sequence Spread Spectrum) spreads the signal over the entire spectrum using a pseudo-random sequence.
Later 802.11 physical layers use orthogonal frequency-division multiplexing (OFDM), and 802.11n was the first Wi-Fi standard that introduced MIMO (Multiple-Input and Multiple-Output).
The 802.11 MAC Sublayer Protocol: The IEEE 802.11 standard, popularly known as WiFi, lays down the architecture and specifications of wireless LANs (WLANs). The 802.11 MAC sublayer provides an abstraction of the physical layer to the logical link control sublayer and upper layers of the OSI network.
The hidden terminal problem: In wireless networking, the hidden node (or hidden terminal) problem occurs when a node can communicate with a wireless access point (AP) but cannot directly communicate with other nodes that are communicating with that AP. Practical protocol solutions to the hidden node problem exist, such as the RTS/CTS handshake.
The exposed terminal problem: In wireless LANs (wireless local area networks), the exposed terminal problem arises when a station is needlessly prevented from sending frames because it senses a transmission from another station that would not actually interfere at the intended receiver.
Interframe spacing in 802.11: In IEEE 802.11 spacing is used to separate frames. The length of interframe
space determines when the channel can be accessed. Thus, the interframe spacing is used to set prioritized
access to the channel.
Short Interframe Space (SIFS), Extended Interframe Space (EIFS)
802.11 Frame Structure: The IEEE 802.11 standard lays down the architecture and specifications of wireless local area networks (WLANs). WLAN or WiFi uses high-frequency radio waves instead of cables for connecting the devices in a LAN. Users connected by WLANs can move around within the area of network coverage.
What is DLL (Data Link Layer)?
The Data Link Layer is the second layer in the OSI model, above the Physical Layer, and ensures that error-free data is transferred between adjacent nodes in the network. It breaks the datagrams passed down by the layers above into frames ready for transfer. This is called framing.
1. Data Link Control (DLC): It deals with the design and procedures for communication b/w nodes:
node-to-node communication.
(1) FRAMING
A frame typically consists of:
1. Frame header
2. Payload field for holding the packet
3. Frame trailer
Source machine sends independent frames to destination machine having destination machine
acknowledge them
No logical connection
Source and destination machines establish a connection before any data are transferred
3 PHASES
1. Connection established
2. Frames are transmitted
3. Connection released
Figure 1.3 Placement of the data link Protocol
Consider a typical example: a WAN subnet consisting of routers connected by point-to-point leased
telephone lines.
When a frame arrives at a router, the hardware checks it for errors, and then passes the frame to the
data link layer software.
The data link layer software checks to see if this is the frame expected, and if so, gives the packet
contained in the payload field to the routing software.
The routing software then chooses the appropriate outgoing line and passes the packet back down to
the data link layer software, which then transmits it. The flow over two routers is shown in Fig. 1-3.
(1). FRAMING
Breaking the bit stream up into frames is more difficult than it at first appears. One way to achieve
this framing is to insert time gaps between frames, much like the spaces between words in ordinary text.
However, networks rarely make any guarantees about timing, so it is possible these gaps might be squeezed
out or other gaps might be inserted during transmission.
1. Character count.
The first framing method uses a field in the header to specify the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how many characters follow
and hence where the end of the frame is. This technique is shown in Fig. 3-4(a) for four frames of sizes 5, 5,
8, and 8 characters, respectively.
Figure 3-4. A character stream. (a) Without errors. (b) With one error.
The trouble with this algorithm is that the count can be garbled by a transmission error.
For example, if the character count of 5 in the second frame of Fig. 3-4(b) becomes a 7, the destination
will get out of synchronization and will be unable to locate the start of the next frame.
Even if the checksum is incorrect so the destination knows that the frame is bad, it still has no way of
telling where the next frame starts.
Sending a frame back to the source asking for a retransmission does not help either, since the destination
does not know how many characters to skip over to get to the start of the retransmission. For this reason,
the character count method is rarely used anymore.
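A minimal sketch of character-count framing (function names are ours; as in Fig. 3-4, the count includes the count field itself), which also makes clear why a single garbled count desynchronizes everything that follows:

```python
# Character-count framing: each frame starts with a one-byte count that
# includes itself. A single corrupted count desynchronizes every frame
# after it, which is why the method is rarely used on its own.

def frame(payload: bytes) -> bytes:
    return bytes([len(payload) + 1]) + payload

def deframe(stream: bytes):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                 # header: total frame length
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = frame(b"hello") + frame(b"world")
print(deframe(stream))  # [b'hello', b'world']
```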
Disadvantages of the character count method:
1. Even with a checksum, the receiver may know that the frame is bad but has no way to tell where the next frame starts.
2. Asking for retransmission doesn’t help either, because the start of the retransmitted frame is not known.
3. Hence it is no longer used.
2. Flag bytes with byte stuffing:
Advantage: if the receiver ever loses synchronization, it can simply search for the flag byte to find the end of the current frame and the start of the next one.
Disadvantage: it is tied to a fixed 8-bit character size.
Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape character in the
text.
Figure : Byte stuffing and unstuffing
Problem: fixed character size: the method assumes an 8-bit character size and so cannot handle heterogeneous environments well.
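A byte-stuffing sketch using the common FLAG/ESC byte values 0x7E/0x7D (the specific values and function names are illustrative):

```python
# Byte stuffing: precede any FLAG or ESC byte appearing in the payload with
# an ESC byte; the receiver strips each ESC and keeps the byte after it.
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)   # escape special bytes inside the data
        out.append(b)
    return bytes([FLAG]) + bytes(out) + bytes([FLAG])

def unstuff(framed: bytes) -> bytes:
    body, out, i = framed[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1            # skip the escape, keep the escaped byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42])
assert unstuff(stuff(data)) == data
```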
Figure (a)
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that
the receiver does not mistake the pattern 0111110 for a flag.
Figure (b)
Figure (c)
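The bit-stuffing rule can be sketched on strings of '0'/'1' characters (function names are ours):

```python
# Bit stuffing: insert a '0' after every run of five consecutive '1's so
# the payload can never contain the flag pattern 01111110.

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # drop the stuffed '0' after five 1s
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

print(bit_stuff("01111110"))  # 011111010
```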
The last method of framing is only applicable to networks in which the encoding on the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits. Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data but are used for delimiting frames in some protocols.
As a final note on framing, many data link protocols use a combination of a character count with one
of the other methods for extra safety. When a frame arrives, the count field is used to locate the end of the
frame. Only if the appropriate delimiter is present at that position and the checksum is correct is the frame
accepted as valid. Otherwise, the input stream is scanned for the next delimiter.
How do we make sure that all frames are eventually delivered to the network layer at the destination
and in the proper order?
Provide sender with some acknowledgement about what is happening with the receiver
Sender could wait for acknowledgement
Disadvantages
If a frame vanishes, the receiver will not send an acknowledgement; thus, the sender will wait forever.
This is dealt with by timers and sequence numbers, an important part of the data link layer.
Sender transmits a frame, starts a timer.
Timer set to expire after interval long enough for frame to reach destination, be processed, and have
acknowledgement sent to sender
There is a danger of a frame being transmitted several times; however, this is dealt with by assigning sequence numbers to outgoing frames, so that the receiver can distinguish retransmissions from originals.
What do we do when a sender transmits frames faster than the receiver can accept them?
Feedback-based flow control – receiver sends back information to the sender, giving it permission
to send more data or at least telling the sender how the receiver is doing
Rate-based flow control – the protocol has a built-in mechanism that limits the rate at which the
sender may transmit data, using feedback from the receiver.
ERROR DETECTION AND CORRECTION METHODS
Because of attenuation, distortion, noise and interference, errors during transmission are inevitable, leading to corruption of transmitted bits.
The longer the frame size and the higher the probability of single-bit error, the lower is the probability of receiving a frame without error.
ERROR
When data is being transmitted from one machine to another, it is possible that the data becomes corrupted on its way. Some of the bits may be altered, damaged or lost during transmission. Such a condition is known as an error.
TYPES OF ERRORS
Single bit error: Only one bit gets corrupted. Common in Parallel transmission.
Burst error: More than one bit gets corrupted; very common in serial transmission of data; occurs when the duration of noise is longer than the duration of one bit.
The term single-bit error means that only one bit of given data unit (such as a byte, character, or data
unit) is changed from 1 to 0 or from 0 to 1 as shown in Fig. 3.2.1.
Single bit errors are least likely type of errors in serial data transmission.
For example, if 16 wires are used to send all 16 bits of a word at the same time and one of the wires is
noisy, one bit is corrupted in each word.
Burst error:
More than one bit gets corrupted; this is very common in serial transmission of data and occurs when the duration of noise is longer than the duration of one bit.
When noise affects data, it affects a set of bits.
The number of bits affected depends on the data rate and the duration of the noise.
ERROR DETECTION TECHNIQUES
Basic approach used for error detection is the use of redundancy, where additional bits are added to
facilitate detection and correction of errors. Popular techniques are
Redundancy is the method in which some extra bits are added to the data so as to check whether the data
contain error or not.
m - data bits (i.e., message bits)
r - redundant bits (or check bits).
n - total number of bits
n= (m + r).
An n-bit unit containing data and check-bits is often referred to as an n-bit codeword.
SIMPLE PARITY CHECK
The simplest and most popular error detection scheme appends a parity bit to the end of the data.
A parity of 1 is added to the block if it contains an odd number of 1’s (ON bits) and 0 is added if it contains
an even number of 1’s. At the receiving end the parity bit is computed from the received data bits and
compared with the received parity bit.
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
Considering a 4-bit word, different combinations of the data words and the corresponding code words are
given in Table 3.2.1.
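A minimal even-parity sketch (function names are ours):

```python
# Even parity: the parity bit makes the total number of 1s in the codeword even.
def parity_bit(bits: str) -> str:
    return "1" if bits.count("1") % 2 else "0"

def check(codeword: str) -> bool:
    return codeword.count("1") % 2 == 0   # valid iff an even number of 1s

word = "1011"                 # three 1s -> parity bit 1
code = word + parity_bit(word)
print(code, check(code))      # 10111 True
```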
Example:
PERFORMANCE OF SIMPLE PARITY CHECK
Simple parity check can detect all single-bit error
It can also detect burst errors, but only if the number of bits in error is odd.
The technique is not foolproof against burst errors that invert more than one bit. If an even number
of bits is inverted due to error, the error is not detected.
Performance can be improved by using two dimensional parity check, which organizes the block of
bits in the form of table.
Parity check bits are calculated from each row, which is equivalent to a simple parity check.
Parity check bits are also calculated for all columns.
Both are sent along with the data.
At the receiving end these are compared with the parity bits calculated on the received data.
Performance:
If two bits in one data unit are damaged and two bits in exactly the same positions in another data unit are also damaged, the 2-D parity checker will not detect an error.
For example, consider two data units: 11001100 and 10101100.
If the first and the second-from-last bits in each of them are changed, making the data units 01001110 and 00101110, the errors cannot be detected by the 2-D parity check.
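A sketch of the two-dimensional parity computation (function names are ours; each row gets a parity bit, then a parity row is computed over the columns):

```python
# Two-dimensional even parity: a parity bit per row plus a parity row over
# the columns. It catches many errors simple parity misses, but not the
# four-corner pattern described in the text.

def row_parity(row):
    return sum(row) % 2

def encode(block):                     # block: list of equal-length bit rows
    rows = [row + [row_parity(row)] for row in block]
    col_parity = [sum(col) % 2 for col in zip(*rows)]
    return rows + [col_parity]

block = [[1, 1, 0, 0], [1, 0, 1, 0]]
for r in encode(block):
    print(r)
```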
CHECKSUM
In checksum error detection scheme, the data is divided into k segments each of m bits.
In the sender’s end the segments are added using 1’s complement arithmetic to get the sum.
The sum is complemented to get the checksum. The checksum segment is sent along with the data
segments
Example 1: The sender transmits the data segments together with the checksum. The receiver adds all received segments, including the checksum, using 1’s complement arithmetic; if the result is all 1s, the data is accepted as error-free.
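A sketch of the 1’s-complement checksum with 8-bit segments (the function names and sample segment values are ours):

```python
# 1's-complement checksum over k segments of m bits (here m = 8):
# add the segments with end-around carry, then complement the sum.
def checksum(segments, m=8):
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)   # end-around carry
    return (~total) & mask

# Receiver adds everything, including the checksum; all-1s means "no error".
def verify(segments, cksum, m=8):
    mask = (1 << m) - 1
    total = cksum
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)
    return total == mask

data = [0x36, 0xF2, 0x0A]
c = checksum(data)
print(verify(data, c))   # True
```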
CYCLIC REDUNDANCY CHECK (CRC)
• One of the most powerful and commonly used error-detecting codes.
Basic approach:
• Given an m-bit block of bit sequence, the sender generates an n-bit sequence known as the frame check sequence (FCS), so that the resulting frame, consisting of m+n bits, is exactly divisible by some predetermined number.
• The receiver divides the incoming frame by that number and, if there is no remainder, assumes there was no error.
Fig. 3.2.7 illustrates the process by dividing a sample 4-bit number by the coefficients of the generator polynomial x^3 + x + 1, which is 1011, using modulo-2 arithmetic.
Modulo-2 arithmetic is a binary addition process without any carry over, which is just the Exclusive-OR
operation.
Consider the case where k=1101. Hence we have to divide 1101000 (i.e. k appended by 3 zeros) by 1011,
which produces the remainder r=001, so that the bit frame (k+r) =1101001 is actually being transmitted
through the communication channel.
At the receiving end, if the received number, i.e.,1101001 is divided by the same generator polynomial
1011 to get the remainder as 000, it can be assumed that the data is free of errors.
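The modulo-2 division in this example can be reproduced directly (the function name is ours):

```python
# Modulo-2 (XOR) CRC division with generator x^3 + x + 1 -> 1011,
# reproducing the example: data 1101, remainder 001, codeword 1101001.
def crc_remainder(data: str, gen: str) -> str:
    padded = list(data + "0" * (len(gen) - 1))   # append n zero bits
    for i in range(len(data)):
        if padded[i] == "1":                     # XOR the generator in here
            for j, g in enumerate(gen):
                padded[i + j] = str(int(padded[i + j]) ^ int(g))
    return "".join(padded[-(len(gen) - 1):])

r = crc_remainder("1101", "1011")
print(r)                                   # 001
print(crc_remainder("1101" + r, "1011"))   # 000 -> accepted as error-free
```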
Performance of CRC
CRC can detect all single-bit errors.
CRC can detect all double-bit errors, provided the generator polynomial has at least three 1s.
CRC can detect any odd number of errors, provided the generator polynomial contains the factor (x + 1), and all burst errors of length less than the degree of the polynomial.
CRC detects most of the larger burst errors with a high probability.
Concept of error-correction can be easily understood by examining the simplest case of single-bit
errors. As we have already seen that a single-bit error can be detected by addition of a parity bit with the
data, which needed to be send.
A single additional bit can detect error, but it’s not sufficient enough to correct that error too. For
correcting an error one has to know the exact position of error, i.e. exactly which bit is in error (to locate
the invalid bits).
For example, to correct a single-bit error in an ASCII character, the error correction must determine which one of the seven bits is in error. To do this, we have to add some additional redundant bits.
To calculate the numbers of redundant bits (r) required to correct d data bits, let us find out the
relationship between the two. So we have (d+r) as the total number of bits, which are to be transmitted;
then r must be able to indicate at least d+r+1 different values. Of these, one value means no error, and the remaining d+r values indicate the location of an error in each of the d+r bit positions. So, d+r+1 states must be distinguishable by r bits, and r bits can indicate 2^r states. Hence, 2^r must be at least d+r+1:
2^r ≥ d + r + 1
The value of r must be determined by putting in the value of d in the relation. For example, if d is 7,
then the smallest value of r that satisfies the above relation is 4. So the total bits, which are to be
transmitted is 11 bits (d+r = 7+4 =11).
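The smallest r satisfying the relation can be found by direct search (the function name is ours):

```python
# Smallest number of check bits r with 2^r >= d + r + 1 for d data bits.
def check_bits(d):
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

print(check_bits(7))   # 4 -> an 11-bit codeword, as in the text
```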
Now let us examine how we can manipulate these bits to discover which bit is in error. A technique
developed by R.W. Hamming provides a practical solution. The solution or coding scheme he developed
is commonly known as Hamming Code.
Hamming code can be applied to data units of any length and uses the relationship between the data
bits and redundant bits as discussed.
Bit position:  11 10  9  8  7  6  5  4  3  2  1
Bit type:       d  d  d  r  d  d  d  r  d  r  r
(d = data bit, r = redundant/parity bit)
Figure 3.2.8 Positions of redundancy bits in Hamming code
Basic approach for error detection by using Hamming code is as follows:
• To each group of m information bits k parity bits are added to form (m+k) bit code as shown in
Fig. 3.2.8.
• Location of each of the (m+k) digits is assigned a decimal value.
• The k parity bits are placed in positions 1, 2, 4, …, 2^(k−1). k parity checks are performed on selected digits of each codeword.
• At the receiving end the parity bits are recalculated. The decimal value of the k parity bits
provides the bit-position in error, if any.
Figure 3.2.9 Use of Hamming code for error correction for a 4-bit data
Figure 3.2.9 shows how the Hamming code is used for correction of 4-bit numbers (d4d3d2d1) with the help of three redundant bits (r4, r2 and r1, placed at bit positions 4, 2 and 1).
For the example data 1010, first r1 (0) is calculated considering the parity of bit positions 1, 3, 5 and 7. Then the parity bit r2 is calculated considering bit positions 2, 3, 6 and 7.
Finally, the parity bit r4 is calculated considering bit positions 4, 5, 6 and 7 as shown. If any corruption occurs in the transmitted code 1010010, the bit position in error can be found by recalculating the three parity checks at the receiving end.
For example, if the received code word is 1110010, the recalculated parity checks give 110, which indicates that the bit position in error is 6, the decimal value of 110.
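The worked example can be verified in a few lines (a sketch: even parity, bit positions numbered as in Fig. 3.2.9, and the function names are ours):

```python
# Hamming(7,4) sketch matching the worked example: data d4d3d2d1 = 1010,
# even parity bits at positions 1, 2 and 4, codeword written as positions
# 7..1. The recomputed parity checks at the receiver form the syndrome,
# whose decimal value is the position in error (0 = no error).

def encode(d4, d3, d2, d1):
    p = [0] * 8                       # p[1..7], index = bit position
    p[3], p[5], p[6], p[7] = d1, d2, d3, d4
    p[1] = p[3] ^ p[5] ^ p[7]         # covers positions 1, 3, 5, 7
    p[2] = p[3] ^ p[6] ^ p[7]         # covers positions 2, 3, 6, 7
    p[4] = p[5] ^ p[6] ^ p[7]         # covers positions 4, 5, 6, 7
    return "".join(str(p[i]) for i in range(7, 0, -1))

def syndrome(codeword):               # codeword given as positions 7..1
    p = [0] + [int(b) for b in codeword[::-1]]
    c1 = p[1] ^ p[3] ^ p[5] ^ p[7]
    c2 = p[2] ^ p[3] ^ p[6] ^ p[7]
    c4 = p[4] ^ p[5] ^ p[6] ^ p[7]
    return c4 * 4 + c2 * 2 + c1       # decimal position of the bad bit

print(encode(1, 0, 1, 0))    # 1010010
print(syndrome("1110010"))   # 6 -> bit position 6 is in error
```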
DATA LINK LAYER PROTOCOLS
The following assumption has been made for developing the (algorithm) simplex protocol.
The channel is a perfect noiseless channel.
Hence an ideal channel in which no frames are lost, duplicated, or corrupted.
No flow control and error control used.
It is a unidirectional protocol in which data frames are traveling in only one direction- from
the sender to receiver.
Both transmitting and receiving network layer are always ready.
Processing time that is small enough to be negligible.
Infinite buffer space is available.
Figure 3.1 shows the design of the simplest protocol with no flow or error control.
Figure 3.2 below shows an example of communication using this protocol. It is very simple.
The sender sends a sequence of frames without even thinking about the receiver.
To send three frames, three events occur at the sender site and three events at the receiver site.
The following assumption has been made for developing the Stop-and-Wait Protocol
Sending d bits as d/b frames of b bits each takes (d/b) × (b/R) = d/R seconds of pure transmission time on a channel of R bps.
Figure 3.4. Stop-and-Wait protocol flow diagram
Purpose: To ensure a sequence of information packets is delivered in order and without errors or
duplications despite transmission errors & losses.
1. STOP AND WAIT WITH ARQ
Automatic Repeat Request (ARQ), an error control method, is incorporated with stop and wait
flow control protocol
If error is detected by receiver, it discards the frame and send a negative ACK (NAK), causing
sender to re-send the frame.
In case a frame never gets to the receiver, the sender has a timer: each time a frame is sent, the timer is set. If no ACK or NAK is received during the timeout period, the sender re-sends the frame.
The timer introduces a problem: suppose the timeout expires and the sender retransmits a frame, but the receiver actually received the previous transmission; the receiver then has duplicate copies.
To avoid receiving and accepting two copies of the same frame, frames and ACKs are alternately labeled 0 or 1: ACK0 for frame 1, ACK1 for frame 0.
The acknowledgement field contains the number of the last frame received without error. If this
number agrees with the sequence number of the frame the sender is trying to send, the sender knows it is
done with the frame stored in buffer and can fetch the next packet from its network layer. If the sequence
number disagrees, it must continue trying to send the same frame. Whenever a frame is received, a frame is
also sent back.
Assume that computer A is trying to send its frame 0 to computer B and that B is trying to send its
frame 0 to A. Suppose that A sends a frame to B, but A's timeout interval is a little too short. Consequently,
A may time out repeatedly, sending a series of identical frames, all with seq = 0 and ack = 1.
When the first valid frame arrives at computer B, it will be accepted and frame_expected will be set
to 1. All the subsequent frames will be rejected because B is now expecting frames with sequence number 1,
not 0. Furthermore, since all the duplicates have ack = 1 and B is still waiting for an acknowledgement of 0,
B will not fetch a new packet from its network layer. After every rejected duplicate comes in, B sends A a
frame containing seq = 0 and ack = 0. Eventually, one of these arrives correctly at A, causing A to begin
sending the next packet. No combination of lost frames or premature timeouts can cause the protocol to
deliver duplicate packets to either network layer, to skip a packet, or to deadlock.
Fig 2.8: Two scenarios for protocol 4. (a) Normal case. (b) Abnormal case. The notation is (seq, ack,
packet number). An asterisk indicates where a network layer accepts a packet.
In a Go-Back-N (GBN) protocol, the sender is allowed to transmit multiple packets (when available)
without waiting for an acknowledgment, but is constrained to have no more than some maximum allowable
number, N, of unacknowledged packets in the pipeline.
• Invocation from above. When rdt_send() is called from above, the sender first checks to see if the
window is full, i.e., whether there are N outstanding, unacknowledged packets. If the window is not
full, a packet is created and sent, and variables are appropriately updated. If the window is full, the
sender simply returns the data back to the upper layer, an implicit indication that the window is full.
• Receipt of an ACK. In our GBN protocol, an acknowledgement for packet with sequence number n
will be taken to be a cumulative acknowledgement, indicating that all packets with a sequence
number up to and including n have been correctly received at the receiver. We'll come back to this
issue shortly when we examine the receiver side of GBN.
• A timeout event. The protocol's name, "Go-Back-N," is derived from the sender's behavior in the
presence of lost or overly delayed packets. As in the stop-and-wait protocol, a timer will again be
used to recover from lost data or acknowledgement packets. If a timeout occurs, the sender resends
all packets that have been previously sent but that have not yet been acknowledged. If an ACK is
received but there are still additional transmitted-but-yet-to-be-acknowledged packets, the timer is
restarted. If there are no outstanding unacknowledged packets, the timer is stopped.
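The three sender events above can be condensed into a short sketch. The class and method names (`GBNSender`, `ack_received`, `timeout`) are illustrative; only `rdt_send` follows the text, and the transmission itself is left as a comment:

```python
class GBNSender:
    """Minimal Go-Back-N sender sketch; illustrative, not a full implementation."""
    def __init__(self, n):
        self.N = n                  # maximum number of unacked packets
        self.base = 0               # oldest unacknowledged sequence number
        self.nextseqnum = 0         # next sequence number to assign
        self.buffer = {}            # sent-but-unacked packets, keyed by seq
        self.timer_running = False

    def rdt_send(self, data):
        # Invocation from above: refuse the data if the window is full.
        if self.nextseqnum >= self.base + self.N:
            return False
        self.buffer[self.nextseqnum] = data   # (packet would be transmitted here)
        self.timer_running = True
        self.nextseqnum += 1
        return True

    def ack_received(self, n):
        # Cumulative ACK: packets up to and including n are confirmed.
        for seq in range(self.base, n + 1):
            self.buffer.pop(seq, None)
        self.base = n + 1
        # Restart the timer if packets remain outstanding, else stop it.
        self.timer_running = self.base != self.nextseqnum

    def timeout(self):
        # "Go back N": resend every sent-but-unacknowledged packet.
        return [self.buffer[s] for s in range(self.base, self.nextseqnum)]
```

With a window of 4, four sends succeed, a fifth is refused, and an `ack_received(1)` slides the base forward by two, after which a timeout resends everything still outstanding.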
In our GBN protocol, the receiver discards out-of-order packets. While it may seem silly and wasteful
to discard a correctly received (but out-of-order) packet, there is some justification for doing so. Recall that
the receiver must deliver data, in-order, to the upper layer.
The advantage of this approach is the simplicity of receiver buffering - the receiver need not buffer
any out-of-order packets. Thus, while the sender must maintain the upper and lower bounds of its window
and the position of nextseqnum within this window, the only piece of information the receiver need maintain
is the sequence number of the next in-order packet. Of course, the disadvantage of throwing away a correctly
received packet is that the subsequent retransmission of that packet might be lost or garbled and thus even
more retransmissions would be required.
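Under these assumptions the receiver side reduces to a few lines, since its only state is one sequence number. Again a hypothetical sketch with illustrative names:

```python
class GBNReceiver:
    """GBN receiver sketch: the only state is the next expected sequence number."""
    def __init__(self):
        self.expected = 0       # sequence number of the next in-order packet
        self.delivered = []     # data handed up to the upper layer, in order

    def receive(self, seq, data):
        if seq == self.expected:
            self.delivered.append(data)   # in order: deliver upward
            self.expected += 1
        # Out-of-order packets are simply discarded; in every case the
        # cumulative ACK for the last in-order packet is (re)sent
        # (-1 before anything has arrived).
        return self.expected - 1

r = GBNReceiver()
r.receive(0, "p0")          # accepted and delivered
r.receive(2, "p2")          # packet 1 missing: p2 discarded, ACK 0 repeated
assert r.delivered == ["p0"]
```

The repeated ACK 0 after the discarded `p2` is what eventually drives the sender's timeout-and-go-back behavior in the trace described next.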
The figure above shows the operation of the GBN protocol for the case of a window size of four
packets. Because of this window size limitation, the sender sends packets 0 through 3 but then must wait for
one or more of these packets to be acknowledged before proceeding. As each successive ACK (e.g., ACK0
and ACK1) is received, the window slides forwards and the sender can transmit one new packet (pkt4 and
pkt5, respectively).
On the receiver side, packet 2 is lost and thus packets 3, 4, and 5 are found to be out-of-order and are
discarded.
The implementation would also likely be in the form of various procedures that implement the
actions to be taken in response to the various events that can occur. In such event-based programming, the
various procedures are called (invoked) either by other procedures in the protocol stack, or as the result of an
interrupt.
iii) A PROTOCOL USING SELECTIVE REPEAT
Selective Repeat (SR) protocols avoid unnecessary retransmissions by having the sender retransmit
only those packets that it suspects were received in error (i.e., were lost or corrupted) at the receiver. This
individual, as-needed, retransmission will require that the receiver individually acknowledge correctly-
received packets. A window size of N will again be used to limit the number of outstanding,
unacknowledged packets in the pipeline.
The SR receiver will acknowledge a correctly received packet whether or not it is in-order. Out-of-
order packets are buffered until any missing packets (i.e., packets with lower sequence numbers) are
received, at which point a batch of packets can be delivered in-order to the upper layer. The figure
itemizes the various actions taken by the SR receiver.
• Data received from above. When data is received from above, the SR sender checks the next
available sequence number for the packet. If the sequence number is within the sender's window, the
data is packetized and sent; otherwise it is either buffered or returned to the upper layer for later
transmission, as in GBN.
• Timeout. Timers are again used to protect against lost packets. However, each packet must now have
its own logical timer, since only a single packet will be transmitted on timeout. A single hardware
timer can be used to mimic the operation of multiple logical timers.
• ACK received. If an ACK is received, the SR sender marks that packet as having been received,
provided it is in the window. If the packet's sequence number is equal to send_base, the window base
is moved forward to the unacknowledged packet with the smallest sequence number. If the window
moves and there are untransmitted packets with sequence numbers that now fall within the window,
these packets are transmitted.
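The sender-side bookkeeping can be sketched as follows. `SRSender`, `send`, and `ack_received` are hypothetical names, per-packet logical timers are only noted in a comment, and transmission is elided:

```python
class SRSender:
    """Selective Repeat sender sketch; illustrative only."""
    def __init__(self, n):
        self.N = n
        self.send_base = 0     # smallest unacknowledged sequence number
        self.nextseqnum = 0    # next sequence number to assign
        self.unacked = {}      # seq -> data; conceptually one logical timer each

    def send(self, data):
        if self.nextseqnum >= self.send_base + self.N:
            return False       # next seq number outside window: buffer or refuse
        self.unacked[self.nextseqnum] = data   # (packet transmitted here)
        self.nextseqnum += 1
        return True

    def ack_received(self, n):
        # Selective (individual) ACK: mark only packet n as received.
        if self.send_base <= n < self.nextseqnum:
            self.unacked.pop(n, None)
            # Slide send_base up to the smallest still-unacked packet.
            while (self.send_base < self.nextseqnum
                   and self.send_base not in self.unacked):
                self.send_base += 1
```

Note the contrast with GBN: an ACK for packet 1 while packet 0 is still outstanding removes packet 1 from the unacked set but leaves `send_base` at 0; only when the ACK for packet 0 arrives does the base jump past both.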
• Packet with sequence number in [rcvbase, rcvbase+N-1] is correctly received. In this case, the
received packet falls within the receiver's window and a selective ACK packet is returned to the
sender. If the packet was not previously received, it is buffered. If this packet has a sequence number
equal to the base of the receive window, then this packet, and any previously buffered and
consecutively numbered (beginning with rcvbase) packets are delivered to the upper layer. The
receive window is then moved forward by the number of packets delivered to the upper layer.
• Packet with sequence number in [rcvbase-N, rcvbase-1] is received. In this case, an ACK must be
generated, even though this is a packet that the receiver has previously acknowledged.
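Both receiver cases fit in a short sketch. `SRReceiver` and its fields are illustrative names; the return value stands for the selective ACK (or `None` for a packet outside both ranges):

```python
class SRReceiver:
    """Selective Repeat receiver sketch: buffers out-of-order packets."""
    def __init__(self, n):
        self.N = n
        self.rcv_base = 0      # smallest not-yet-delivered sequence number
        self.buffer = {}       # out-of-order packets awaiting delivery
        self.delivered = []    # data handed up, in order

    def receive(self, seq, data):
        if self.rcv_base <= seq < self.rcv_base + self.N:
            self.buffer[seq] = data
            # Deliver any now-consecutive run starting at rcv_base.
            while self.rcv_base in self.buffer:
                self.delivered.append(self.buffer.pop(self.rcv_base))
                self.rcv_base += 1
            return seq         # selective ACK for this packet
        if self.rcv_base - self.N <= seq < self.rcv_base:
            return seq         # re-ACK a previously acknowledged packet
        return None            # otherwise ignore

r = SRReceiver(3)
r.receive(1, "p1")                  # buffered, nothing delivered yet
r.receive(0, "p0")                  # fills the gap: both delivered in order
assert r.delivered == ["p0", "p1"]
```

The second branch is the [rcvbase-N, rcvbase-1] case above: without that re-ACK, a sender whose earlier ACK was lost would retransmit forever.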
The lack of synchronization between sender and receiver windows has important consequences when
we are faced with the reality of a finite range of sequence numbers.
Consider what could happen, for example, with a finite range of four packet sequence numbers,
0,1,2,3 and a window size of three. Suppose packets 0 through 2 are transmitted and correctly received and
acknowledged at the receiver.
At this point, the receiver's window is over the fourth, fifth and sixth packets, which have sequence
numbers 3, 0, and 1, respectively. Now consider two scenarios.
In the first scenario, shown in Figure (a), the ACKs for the first three packets are lost and the sender
retransmits these packets. The receiver thus next receives a packet with sequence number 0 - a copy of the
first packet sent.
In the second scenario, shown in Figure (b), the ACKs for the first three packets are all
delivered correctly. The sender thus moves its window forward and sends the fourth, fifth and sixth packets,
with sequence numbers 3, 0, 1, respectively. The packet with sequence number 3 is lost, but the packet with
sequence number 0 arrives - a packet containing new data.
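The two scenarios are indistinguishable to the receiver: in both, a packet with sequence number 0 falls inside its window. The arithmetic can be checked directly (the closing observation that the window must be at most half the sequence-number space is the standard remedy, stated here as it is not drawn explicitly in the text above):

```python
SEQ_SPACE = 4      # sequence numbers 0, 1, 2, 3
WINDOW = 3

# After packets 0, 1, 2 are ACKed, the receiver's window covers the
# 4th, 5th and 6th packets, i.e. sequence numbers taken mod SEQ_SPACE.
rcv_window = [(3 + i) % SEQ_SPACE for i in range(WINDOW)]
assert rcv_window == [3, 0, 1]

# Scenario (a): all three ACKs are lost, so the sender retransmits OLD packet 0.
# Scenario (b): the ACKs arrive, so the sender sends a NEW packet with seq 0.
# Either way the receiver sees seq 0 inside its window and cannot tell old
# data from new. The standard remedy: require WINDOW <= SEQ_SPACE // 2.
assert not (WINDOW <= SEQ_SPACE // 2)   # 3 > 2, hence the ambiguity
```

With `WINDOW = 2` the retransmitted packet 0 of scenario (a) would fall outside the receive window [2, 3] and be recognized as a duplicate, removing the ambiguity.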