
UNIT- II

Data Link Layer

Noiseless Channels and Noisy Channels

To transmit data from one node to another, the data link layer combines
framing, flow control, and error control.
We divide the protocols into those that can be used for noiseless (error-free)
channels and those that can be used for noisy (error-prone) channels.

Taxonomy of protocols

NOISELESS CHANNELS

Let us first assume we have an ideal channel in which no frames are lost, duplicated, or
damaged. We introduce two protocols for this type of channel:

Simplest Protocol

Stop and wait Protocol


Simplest protocol

● The Simplest Protocol is a unidirectional protocol in which data frames travel in
only one direction, from sender to receiver.
● It assumes that no errors occur in the physical channel.
● The data link layer at the sender takes a packet from the network layer,
adds the header and trailer to create a frame, and passes it to the physical layer.
● The data link layer at the receiver removes the header and trailer from the frame and delivers
the packet to the network layer.
● In this protocol it is assumed that the receiver can never be overwhelmed.

The design of the simplest protocol with no flow or error control

Example

Figure below shows an example of communication using this protocol.

The sender sends a sequence of frames without even thinking about the receiver.

To send three frames, three events occur at the sender site and three events at the
receiver site.

Note that the data frames are shown by tilted boxes.


Stop and wait protocol

● Stop-and-Wait is a protocol in which the sender sends one frame and then waits for
an acknowledgment before proceeding further.
● The advantage of the Stop-and-Wait Protocol is its simplicity: each frame is checked and
acknowledged before the next frame is sent.
● The disadvantage is its inefficiency; Stop-and-Wait is very slow.
● Each frame must travel all the way to the receiver, and an acknowledgment must
travel all the way back, before the next frame can be transmitted.

Design of Stop-and-Wait Protocol


The figure below shows an example of communication using this protocol. The sender sends one
frame and waits for an acknowledgment (ACK) from the receiver. When the ACK arrives, the sender sends
the next frame. Note that sending two frames in this protocol involves the sender in four events
and the receiver in two events.

NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea of how to add flow control, noiseless
channels are nonexistent.

We discuss three protocols in this section that use error control.

1. Stop-and-Wait Automatic Repeat Request

2. Go-Back-N Automatic Repeat Request

3. Selective Repeat Automatic Repeat Request


Stop-and-Wait Automatic Repeat Request (ARQ)

● Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame
and retransmitting the frame when the timer expires.
● In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The
sequence numbers are based on modulo-2 arithmetic, so they alternate between 0 and 1.
● In Stop-and-Wait ARQ, the acknowledgment number always announces the
sequence number of the next frame expected.

Example of Stop-and-Wait ARQ.

● Frame 0 is sent and acknowledged.


● Frame 1 is lost and resent after the time-out. The resent frame 1 is acknowledged
and the timer stops.
● Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has
no idea if the frame or the acknowledgment is lost, so after the time-out, it resends
frame 0, which is acknowledged.
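The three cases above can be mirrored in a short simulation. The sketch below is only an illustration, not the textbook's code: the stop_and_wait_arq function, its parameters, and the particular loss pattern are assumptions chosen to reproduce the lost-frame and lost-ACK cases with modulo-2 sequence numbers.

```python
# A minimal sketch of Stop-and-Wait ARQ: one outstanding frame, modulo-2
# sequence numbers, and retransmission of the kept copy when no ACK arrives.
# Which attempts lose a frame or an ACK is an assumed scenario.

def stop_and_wait_arq(data, lose_frame=None, lose_ack=None):
    lose_frame = set(lose_frame or [])   # transmission attempts whose frame is lost
    lose_ack = set(lose_ack or [])       # transmission attempts whose ACK is lost
    seq = 0                              # sender's modulo-2 sequence number
    expected = 0                         # receiver's next expected sequence number
    delivered = []
    attempt = 0
    for payload in data:
        while True:                      # keep a copy and retry until an ACK arrives
            attempt += 1
            print(f"send frame {seq} ({payload!r})")
            if attempt in lose_frame:
                print("  frame lost -> timer expires, resend")
                continue
            if seq == expected:          # receiver: accept only the expected frame
                delivered.append(payload)
                expected = 1 - expected
            ack = expected               # ACK announces the next frame expected
            if attempt in lose_ack:
                print(f"  ACK {ack} lost -> timer expires, resend")
                continue
            print(f"  ACK {ack} received")
            seq = 1 - seq
            break
    return delivered

# Scenario as in the text: frame 0 acknowledged, frame 1 lost once and resent,
# then frame 0 sent again with its ACK lost, so the duplicate is discarded.
print(stop_and_wait_arq(["A", "B", "C"], lose_frame=[2], lose_ack=[4]))
```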
Go-Back-N Automatic Repeat Request (ARQ)

In this protocol we can send several frames before receiving acknowledgements. We keep a
copy of these frames until the acknowledgements arrive.

Frames from a sending station are numbered sequentially.

The figure below shows the design of the Go-Back-N Protocol. As we can see, multiple frames can
be in transit in the forward direction and multiple acknowledgments in the reverse direction. The idea is similar to
Stop-and-Wait ARQ, but the difference is that the send window allows us to have as many frames in
transit as there are slots in the send window.
The receiver sends a positive ACK if a frame has arrived safely. If a frame is damaged or
received out of order, the receiver sends a NAK frame and discards all subsequent
frames until it receives the one it is expecting.

When the timer expires, the sender resends all outstanding frames. For example, suppose
the sender has already sent frames 0, 1, 2, and 3, but the timer for frame 1 expires. This means
that frame 1 has not been acknowledged: the sender goes back and sends frames 1, 2, and 3 again.
That is why the protocol is called Go-Back-N ARQ.
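The resend rule just described can be sketched in a few lines. This is a simplified illustration under assumed conditions (no real channel or receiver is modelled); the function names are hypothetical, but the scenario reproduces the frames 0-3 / timer-for-frame-1 example from the text.

```python
# A minimal sketch of the Go-Back-N resend rule: the sender keeps copies of all
# outstanding frames and, when the timer expires, resends every one of them.

outstanding = []          # copies of sent-but-unacknowledged frames

def send(frame_no):
    print(f"send frame {frame_no}")
    outstanding.append(frame_no)          # keep a copy until it is acknowledged

def ack_received(ack_no):
    """Cumulative ACK: ack_no announces the next frame the receiver expects."""
    global outstanding
    outstanding = [f for f in outstanding if f >= ack_no]

def timer_expired():
    """Go back N: resend every outstanding frame, starting with the oldest."""
    print(f"timeout -> going back to frame {outstanding[0]}")
    for frame_no in outstanding:
        print(f"resend frame {frame_no}")

# Scenario from the text: frames 0-3 are sent, frame 0 is acknowledged
# (ACK 1 = "I expect frame 1 next"), then the timer for frame 1 expires,
# so frames 1, 2, and 3 are sent again.
for n in range(4):
    send(n)
ack_received(1)
timer_expired()
```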
Selective repeat ARQ

The specific damaged or lost frames are retransmitted in selective repeat ARQ.

The receiver sends a positive ACK if a frame has arrived safe and sound.

If a frame is damaged or received out of order, the receiver sends a negative acknowledgment (NAK) frame,
but it does not discard the correctly received frames that follow; they are buffered until the missing frame is retransmitted.
Design of Selective repeat ARQ

SLIDING WINDOW PROTOCOL

The sliding window is a technique for sending multiple frames at a time. It controls the flow of data
frames between two devices where reliable, in-order delivery is needed.

In this technique, each frame is sent with a sequence number. The sequence numbers are used
to detect missing data at the receiver end and to avoid accepting duplicate frames.

Multiple frames sent by the source are acknowledged by the receiver using an ACK frame.

Sliding window has imaginary boxes that hold the frames on both sender and receiver side.

• It provides an upper limit on the number of frames that can be transmitted before an
acknowledgment is required.
• Frames may be acknowledged by the receiver at any point, even when the window is not full on the receiver
side.

• Frames may be transmitted by the source even when the window is not yet full on the sender side.

● The range that concerns the sender is called the send sliding window.

● The range that concerns the receiver is called the receive sliding window.

Sliding Window on Sender Side

• At the beginning of a transmission, the sender's window contains n−1 frames.

• As frames are sent by the source, the left boundary of the window moves inward, shrinking the
size of the window. This means that if the window size is w and four frames have been sent since the last
acknowledgment, the number of frames left in the window is w−4.

• When the receiver sends an ACK, the source's window expands (i.e., the right boundary moves
outward) to allow in a number of new frames equal to the number of frames acknowledged by that
ACK.

For example, let the window size be 7 (see diagram (a)). If frames 0 through 3 have been sent and
no acknowledgment has been received, then the sender's window contains three frames: 4, 5, and 6.

• Now, if an ACK numbered 3 is received by the source, it means three frames (0, 1, and 2) have been
received by the receiver and are undamaged.

• The sender's window will now expand to include the next three frames in its buffer. At this point
the sender's window will contain six frames (4, 5, 6, 7, 0, 1). (See diagram (b)).
Sliding Window on Receiver Side

At the beginning of transmission, the receiver's window contains n−1 spaces for
frames, but not the frames themselves.
• As new frames come in, the size of the window shrinks.
• Therefore, the receiver window represents not the number of frames received but
the number of frames that may still be received before an ACK must be sent.
• Given a window of size w, if three frames are received without an ACK being
returned, the number of spaces in the window is w−3.
• As soon as an acknowledgment is sent, the window expands to include a number of
frames equal to the number of frames acknowledged.

Note:

• The sliding window of the sender shrinks from the left when data frames are
sent, and expands to the right when acknowledgments are received.

• The sliding window of the receiver shrinks from the left when data frames are
received, and expands to the right when an acknowledgment is sent.
Types of Sliding Window Protocol

1. One bit sliding window


2. Go-Back-N ARQ
3. Selective Repeat ARQ

One bit sliding window

■ In the one-bit sliding window protocol, the size of the window is 1. The sender
transmits a frame, waits for its acknowledgment, and then transmits the next frame.

■ Thus it uses the stop-and-wait concept.

■ This protocol provides full-duplex communication.

■ Hence, the acknowledgment is attached to the next data frame to be sent, i.e.,
piggybacked.

WORKING

■ The one-bit sliding window protocol is used for the delivery of data frames.

1. The sender has a sending window.

2. The receiver has a receiving window.

3. The sending and receiving windows act as buffer storage.

4. The size of each window is 1.

5. The one-bit sliding window protocol uses stop-and-wait.

6. The sender transmits a frame with a sequence number.

7. The sender then waits for an acknowledgment from the receiver.

8. The receiver sends back an acknowledgment with a sequence number.

9. If the sequence number of the acknowledgment matches the sequence number of the frame,

10. the sender transmits the next frame;

11. otherwise the sender retransmits the previous frame.

12. It is a bidirectional protocol.

Sliding window size 1. Sequence nos. 0 to 7.

(a) At start. Receiver waits for 0.


(b) Sender sends 0.
(c) Receiver receives 0. Waits for 1.
(d) Sender got the ACK for 0 but hasn't yet got packet 1 from its network layer.
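A compact sketch of steps 6-11 is shown below. It assumes a perfect in-memory channel and omits piggybacking and the reverse data direction, so it is an illustration of the sequence-number handling rather than a full implementation.

```python
# A minimal sketch of the one-bit sliding window idea: window size 1,
# stop-and-wait behaviour, and a 1-bit sequence number that must match
# before the next frame is sent.

def sender(frames):
    seq = 0
    for payload in frames:
        while True:
            ack = receiver(seq, payload)        # transmit the frame, wait for the ACK
            if ack == seq:                      # ACK sequence number matches
                seq ^= 1                        # slide the 1-bit window
                break                           # send the next frame
            # otherwise: retransmit the previous frame (never reached on
            # this perfect channel; kept to mirror step 11)

expected = 0
def receiver(seq, payload):
    global expected
    if seq == expected:                         # new frame: deliver it
        print("delivered:", payload)
        expected ^= 1
    return seq                                  # acknowledge with the same number

sender(["frame A", "frame B", "frame C"])
```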

Go-Back-N ARQ Sliding window protocol

In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the
size of the sequence number field in bits; the sequence numbers range
from 0 to 2^m − 1.

For example, if m is 4, the sequence numbers are 0 through 15, and then the sequence repeats:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, …

The send sliding window defines an imaginary box of size 2^m − 1 with three
variables: Sf, Sn, and Ssize.

The send window can slide one or more slots when a valid acknowledgment arrives.
The receive window defines an imaginary box of size 1 with a single variable Rn. The
window slides when a correct frame has arrived; sliding occurs one slot at a time.
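The modulo arithmetic can be verified with a few lines of code. The snapshot values for Sf and Sn below are assumed; the point is only the wrap-around of the sequence numbers and the 2^m − 1 limit on outstanding frames.

```python
# A small sketch of modulo-2^m sequence numbering with m = 4, as in the text.

m = 4
modulus = 2 ** m              # 16 sequence numbers: 0 .. 15
max_window = modulus - 1      # Go-Back-N send window size: 2^m - 1 = 15

# Generating frame numbers shows the wrap-around listed above.
sequence = [frame % modulus for frame in range(28)]
print(sequence)               # 0, 1, ..., 15, 0, 1, ..., 11

# Sf (first outstanding) and Sn (next to send) describe the send window.
Sf, Sn = 14, 3                # an assumed snapshot that wraps past 15
outstanding = (Sn - Sf) % modulus
print(f"outstanding frames: {outstanding} (must stay <= {max_window})")
```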

Selective Repeat ARQ Sliding window protocol

The specific damaged or lost frames are retransmitted in Selective Repeat ARQ.

The receiver sends a positive ACK if a frame has arrived safe and sound.

If a frame is damaged or received out of order, the receiver sends a NAK frame and does not
discard the correctly received frames that follow; they are buffered until the missing frame arrives.

The send sliding window is an abstract concept defining an imaginary box of size 2^(m−1) with three
variables: Sf, Sn, and Ssize.

In Selective Repeat ARQ, the size of the sender and receiver windows must be at most one-half
of 2^m.
The receive window is an abstract concept defining an imaginary box of size 2^(m−1) with a single variable
Rn.
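The one-half-of-2^m limit can be illustrated numerically. The sketch below uses m = 2 and an assumed worst case in which every ACK is lost; it shows that a window of 2^(m−1) stays unambiguous while a larger window lets an old frame be mistaken for a new one.

```python
# Why the Selective Repeat window must be at most 2^(m-1): with m = 2 the
# sequence numbers are 0..3. Scenario (assumed): the sender fills its window,
# the receiver accepts everything and slides its window, all ACKs are lost,
# and the sender retransmits old frame 0.

m = 2
modulus = 2 ** m                      # sequence numbers 0..3

def receive_window(start, size):
    """Sequence numbers the receiver is currently willing to accept."""
    return {(start + i) % modulus for i in range(size)}

def ambiguous(window_size):
    new_window = receive_window(window_size % modulus, window_size)
    return 0 in new_window            # old frame 0 mistaken for a new frame?

print("window = 2 (= 2^(m-1)):", "ambiguous" if ambiguous(2) else "safe")
print("window = 3 (> 2^(m-1)):", "ambiguous" if ambiguous(3) else "safe")
```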

Channel Allocation Problem in computer networks

On a network, multiple devices communicate with each other. It is the responsibility of the
data link layer to provide reliable communication by allocating a channel to a device for
communication. Assigning channels to specific devices for communication is known as channel
allocation.

● The data link layer allocates a single broadcast channel between competing devices.
● Depending on the network and geographic region, the channel can be guided media or
unguided media. On the channel, several nodes are connected.
● The purpose of a channel is to connect one device to another device on a network for
communication.

The channel allocation problem plays a major role in the network. There are two types of channel
allocation schemes used on a network. They are as follows:

1. Static Channel Allocation


2. Dynamic Channel Allocation
● Static Channel Allocation: This is the traditional way of allocating a single channel among
multiple users. In static allocation, the channel's bandwidth is split into equal-sized
portions among the users, and each user gets one portion of the bandwidth.
● Dynamic Channel Allocation: In dynamic channel allocation, bandwidth is not allocated
to a user permanently. A frequency band is allocated to a device only when it is needed on the
network, which improves the utilization of the network's resources.

Static Channel Allocation

In static channel allocation, the network bandwidth is divided equally among the devices, and the
allocation is permanent.

● For example, there are 50 users on the network, and the network bandwidth is 100 Hz. The
100 Hz bandwidth would be divided into 50 equally sized parts that are 2 Hz. Each user
will get a 2 Hz portion.
● Each device has a private frequency band, so there is no possibility of interference with
other devices.
● A well-known example of a static channel allocation scheme is FM radio, in which
different fixed frequencies are assigned to different stations.

The figure below shows the static channel allocation scheme.


● As shown in the figure, a total bandwidth of 1000 MHz is available on the channel. The 1000
MHz bandwidth is divided into four equally sized frequency bands of 250 MHz, each
permanently assigned to a specific station.
● In addition, there is a gap between the frequency bands of the stations to avoid
interference between signals.
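The division described above is simple arithmetic; the small sketch below reuses the numbers from the two examples, plus an assumed count of active users, to show the kind of waste discussed next.

```python
# A small arithmetic sketch of static (FDMA-style) channel division.
# The 100 Hz / 50 users and 1000 MHz / 4 stations figures come from the text;
# the "40 active users" case is an assumed example.

total_bandwidth_hz = 100.0
users = 50
per_user = total_bandwidth_hz / users
print(f"each of {users} users gets a fixed {per_user} Hz slice")      # 2.0 Hz

# Second example, from the figure: 1000 MHz split into four 250 MHz bands.
print(f"1000 MHz / 4 stations = {1000 / 4} MHz per station")

# If only 40 of the 50 users are active, the other slices sit idle.
active = 40
wasted = (users - active) * per_user
print(f"with {active} active users, {wasted} Hz of bandwidth is wasted")
```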

Let us see the various disadvantages of the static channel allocation scheme.

● Let’s say the bandwidth is divided for 50 devices, and only 40 devices are active on the
network, then the bulk of the valuable bandwidth will be wasted.
● If there are equal-sized portions of bandwidth for 50 devices and 60 devices want to
communicate, 10 devices will be denied permission due to lack of bandwidth.
● Now, let’s say 50 equally-sized bandwidth portions are assigned to 50 devices. But the
problem is that when some devices are idle, their bandwidth will be lost simply because
they are not using it, and no one is allowed to use it.
● In static channel allocation, most of the channels will be idle most of the time.

Dynamic Channel Allocation

To overcome this problem, we can use dynamic channel allocation, in which the bandwidth is not
allocated to the device permanently.

● In dynamic channel allocation, the bandwidth is not allocated to the device permanently,
the frequency band is allocated to the device whenever required.
● When the device completes its communication on a channel, it is de-allocated, and the
same channel can be assigned to another device.
● In dynamic channel allocation, a channel is dynamically allocated between devices.

Assumptions for dynamic channel allocation

Single Channel: There is a single channel available for communication, and all
devices can send and receive on it.
● Under the single-channel assumption, protocols may be used to give more important
frames priority over less important frames.
● The channel transmits frames from one device to another according to the priority of the
frame.
Observable Collisions: A collision occurs when two frames travel on the shared channel at the same
time and overlap with each other. When a collision occurs, all devices can detect that it
occurred, and the collided frames must be retransmitted later. No errors other than those
generated by collisions occur.

Continuous or Slotted Time: Time may be continuous, in which case frame transmission can
be initiated at any instant, or it may be slotted, i.e., divided into discrete intervals known as slots.
When time is slotted, frame transmission must start at the
beginning of a slot.
● An idle slot carries 0 frames. A successful transmission
occurs if a slot contains exactly one frame, and a collision occurs if the slot contains
more than one frame.

Carrier Sense or No Carrier Sense: With carrier sense, a device checks whether the channel
is in use before transmitting; if the station finds the channel busy, it does not transmit a
frame. Without carrier sense, a station cannot sense the state of the channel
before transmitting, which can lead to frame collisions.

Multiple Access Protocols


When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.

Many protocols have been devised to handle access to a shared link. All of these
protocols belong to a sublayer in the data-link layer called media access control
(MAC). We categorize them into three groups:

1. Random-access protocols: ALOHA, CSMA, CSMA/CD, and CSMA/CA. These protocols are
mostly used in LANs and WANs.

2. Controlled-access protocols: reservation, polling, and token passing. Some of
these protocols are used in LANs, but others are mainly of historical value.

3. Channelization protocols: FDMA, TDMA, and CDMA. These protocols are used in
cellular telephony.

RANDOM ACCESS

In random-access or contention methods, no station is superior to another station and none is
assigned control over another. At each instant, a station that has data to send uses a
procedure defined by the protocol to decide whether or not to send. This decision
depends on the state of the medium (idle or busy). In other words, each station can transmit
when it desires, on the condition that it follows the predefined procedure, including testing the
state of the medium.
Two features give this method its name:

First, there is no scheduled time for a station to transmit. Transmission is random among the
stations. That is why these methods are called random access.

Second, no rules specify which station should send next. Stations compete with one another to
access the medium. That is why these methods are also called contention methods.

In a random-access method, each station has the right to the medium without being controlled by
any other station. However, if more than one station tries to send, there is an access conflict
(a collision) and the frames will be either destroyed or modified.

ALOHA
ALOHA, the earliest random-access method, was developed at the University of Hawaii in early
1970. It was designed for a radio (wireless) LAN, but it can be used on any shared medium.

It is obvious that there are potential collisions in this arrangement. The medium is shared
between the stations. When a station sends data, another station may attempt to do so at the same
time. The data from the two stations collide and become garbled.

Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol. The
idea is that each station sends a frame whenever it has a frame to send (multiple access).
However, since there is only one channel to share, there is the possibility of collision between
frames from different stations. Below figure shows an example of frame collisions in pure
ALOHA.

There are four stations (unrealistic assumption) that contend with one another for access to the
shared channel. The figure shows that each station sends two frames; there are a total of eight
frames on the shared medium. Some of these frames collide because multiple frames are in
contention for the shared channel. Above Figure shows that only two frames survive: one frame
from station 1 and one frame from station 3. We need to mention that even if one bit of a frame
coexists on the channel with one bit from another frame, there is a collision and both will be
destroyed. It is obvious that we need to resend the frames that have been destroyed during
transmission. The pure ALOHA protocol relies on acknowledgments from the receiver. When a
station sends a frame, it expects the receiver to send an acknowledgment. If the acknowledgment
does not arrive after a time-out period, the station assumes that the frame (or the
acknowledgment) has been destroyed and resends the frame. A collision involves two or more
stations. If all these stations try to resend their frames after the time-out, the frames will collide
again. Pure ALOHA dictates that when the time-out period passes, each station waits a random
amount of time before resending its frame. The randomness will help avoid more collisions. We
call this time the backoff time.
Vulnerable time
Let us find the vulnerable time, the length of time in which there is a possibility of collision. We
assume that the stations send fixed-length frames, with each frame taking Tfr seconds to send.
The following figure shows the vulnerable time for station B.

Station B starts to send a frame at time t. Now imagine station A has started to send its frame
after t − Tfr. This leads to a collision between the frames from station B and station A. On the
other hand, suppose that station C starts to send a frame before time t + Tfr. Here, there is also a
collision between frames from station B and station C. Looking at the figure, we see that the

vulnerable time during which a collision may occur in pure ALOHA is 2 times the frame
transmission time.

Pure ALOHA vulnerable time = 2 × Tfr

Slotted ALOHA

Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is no rule that defines
when a station can send. A station may send soon after another station has started or just before
another station has finished. Slotted ALOHA was invented to improve the efficiency of pure
ALOHA. In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to
send only at the beginning of a time slot. The figure below shows an example of frame collisions in
slotted ALOHA.
Because a station is allowed to send only at the beginning of the synchronized time slot, if a
station misses this moment, it must wait until the beginning of the next time slot. This means that
any station which started at the beginning of the previous slot has already finished sending its frame. Of
course, there is still the possibility of collision if two stations try to send at the beginning of the
same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr. The figure below
shows the situation.

Slotted ALOHA vulnerable time = Tfr


Throughput

It can be proven that the average number of successful transmissions for slotted ALOHA is
S = G × e^(−G). The maximum throughput Smax is 0.368, when G = 1. In other words, if one frame is
generated during one frame transmission time, then 36.8 percent of these frames reach their
destination successfully. We expect G = 1 to produce maximum throughput because the
vulnerable time is equal to the frame transmission time. Therefore, if a station generates only one
frame in this vulnerable time (and no other station generates a frame during this time), the frame
will reach its destination successfully.

The throughput for slotted ALOHA is S = G × e^(−G).
The maximum throughput Smax = 0.368 when G = 1.
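The formula can be checked numerically. The sketch below evaluates S = G·e^(−G) and locates its maximum; the pure ALOHA curve S = G·e^(−2G) is included for comparison as the standard result, although it is not derived in the text.

```python
# Numeric check of the slotted ALOHA throughput formula S = G * e^(-G),
# plus the standard pure ALOHA curve S = G * e^(-2G) for comparison.

import math

def slotted_aloha(G):
    return G * math.exp(-G)

def pure_aloha(G):
    return G * math.exp(-2 * G)

for G in (0.25, 0.5, 1.0, 2.0):
    print(f"G = {G}: slotted S = {slotted_aloha(G):.3f}, pure S = {pure_aloha(G):.3f}")

# Coarse numeric search for the maxima.
grid = [i / 1000 for i in range(1, 4001)]
g_slot = max(grid, key=slotted_aloha)
g_pure = max(grid, key=pure_aloha)
print(f"slotted ALOHA peaks near G = {g_slot:.2f} with S = {slotted_aloha(g_slot):.3f}")  # ~0.368
print(f"pure ALOHA peaks near G = {g_pure:.2f} with S = {pure_aloha(g_pure):.3f}")        # ~0.184
```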
CSMA (CARRIER SENSE MULTIPLE ACCESS)

To minimize the chance of collision and, therefore, increase the performance, the CSMA method
was developed. The chance of collision can be reduced if a station senses the medium before
trying to use it. Carrier sense multiple access (CSMA) requires that each station first listen to the
medium (or check the state of the medium) before sending. In other words, CSMA is based on
the principle "sense before transmit" or "listen before talk." CSMA can reduce the possibility of
collision, but it cannot eliminate it. The reason for this is shown in below figure, a space and
time model of a CSMA network. Stations are connected to a shared channel (usually a dedicated
medium). The possibility of collision still exists because of propagation delay; when a station
sends a frame, it still takes time (although very short) for the first bit to reach every station and
for every station to sense it. In other words, a station may sense the medium and find it idle, only
because the first bit sent by another station has not yet been received.

At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1),
station C senses the medium and finds it idle because, at this time, the first bits from station B
have not reached station C. Station C also sends a frame. The two signals collide and both frames
are destroyed.
Vulnerable Time

The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to
propagate from one end of the medium to the other. When a station sends a frame and any other
station tries to send a frame during this time, a collision will result. But if the first bit of the
frame reaches the end of the medium, every station will already have heard the bit and will
refrain from sending. The figure below shows the worst case. The leftmost station, A, sends a frame
at time t1, which reaches the rightmost station, D, at time t1 + Tp. The gray area shows the
vulnerable area in time and space.

Persistence Methods

What should a station do if the channel is busy? What should a station do if the channel is idle?
Three methods have been devised to answer these questions: the 1-persistent method, the
nonpersistent method, and the p-persistent method. The figure below shows the behavior of the three
persistence methods when a station finds a channel busy.
1-Persistent

The 1-persistent method is simple and straightforward. In this method, after the station finds the
line idle, it sends its frame immediately (with probability 1). This method has the highest chance
of collision because two or more stations may find the line idle and send their frames
immediately. We will see later that Ethernet uses this method.

Nonpersistent

In the nonpersistent method, a station that has a frame to send senses the line. If the line is idle, it
sends immediately. If the line is not idle, it waits a random amount of time and then senses the
line again. The nonpersistent approach reduces the chance of collision because it is unlikely that
two or more stations will wait the same amount of time and retry to send simultaneously.
However, this method reduces the efficiency of the network because the medium remains idle
when there may be stations with frames to send.

P-Persistent
The p-persistent method is used if the channel has time slots with a slot duration equal to or
greater than the maximum propagation time. The p-persistent approach combines the advantages
of the other two strategies. It reduces the chance of collision and improves efficiency. In this
method, after the station finds the line idle, it follows these steps:

1. With probability p, the station sends its frame.

2. With probability q = 1 − p, the station waits for the beginning of the next time
slot and checks the line again.

a. If the line is idle, it goes to step 1.

b. If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
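The steps above translate directly into code. The sketch below is a simplified illustration: the probability p, the idle/busy pattern of the channel, and the function names are assumed values chosen only to exercise steps 1, 2a, and 2b.

```python
# A minimal sketch of the p-persistent steps listed above.

import random

def p_persistent(p, slot_is_idle, max_slots=20, seed=42):
    """Return the slot in which the station transmits, or None if it falls
    back to the backoff procedure."""
    rng = random.Random(seed)
    for slot in range(max_slots):
        if rng.random() < p:                     # step 1: send with probability p
            print(f"slot {slot}: station transmits its frame")
            return slot
        # step 2: with probability q = 1 - p, wait for the next slot and sense again
        if slot_is_idle(slot + 1):
            print(f"slot {slot}: deferred; next slot idle -> back to step 1")
            continue                             # step 2a
        print(f"slot {slot}: deferred; next slot busy -> backoff procedure")
        return None                              # step 2b: act as if a collision occurred

# Assumed channel: idle for the first five slots, busy afterwards.
p_persistent(p=0.3, slot_is_idle=lambda s: s < 5)
```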
CSMA/CD (Carrier sense multiple access with collision detection)
The CSMA method does not specify the procedure following a collision. Carrier sense multiple
access with collision detection (CSMA/CD) augments the algorithm to handle the collision. In
this method, a station monitors the medium after it sends a frame to see if the transmission was
successful. If so, the station is finished. If, however, there is a collision, the frame is sent again.

To better understand CSMA/CD, let us look at the first bits transmitted by the two stations
involved in the collision. Although each station continues to send bits in the frame until it detects
the collision, we show what happens as the first bits collide. In below figure, stations A and C
are involved in the collision.

At time t1, station A has executed its persistence procedure and starts sending the bits of its
frame. At time t2, station C has not yet sensed the first bit sent by A. Station C executes its
persistence procedure and starts sending the bits in its frame, which propagate both to the left
and to the right. The collision occurs sometime after time t2. Station C detects a collision at time
t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we
assume immediately) aborts transmission. Station A detects collision at time t4 when it receives
the first bit of C's frame; it also immediately aborts transmission. Looking at the figure, we see
that A transmits for the duration t4 − t1; C transmits for the duration t3 − t2. Now that we know
the time durations for the two transmissions, we can show a more complete graph in below
figure.
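The collision timing can be made concrete with assumed numbers. In the sketch below, the distance, propagation speed, and start times are illustrative values, not taken from the text; it computes t3, t4, and the transmission durations t4 − t1 and t3 − t2 discussed above.

```python
# A small numeric sketch of the CSMA/CD collision timing described above.
# All numbers are assumed example values.

distance_m = 2000.0                 # distance between stations A and C (assumed)
prop_speed = 2e8                    # propagation speed in the cable, m/s (assumed)
t_prop = distance_m / prop_speed    # one-way propagation delay, seconds

t1 = 0.0                            # A starts transmitting
t2 = 0.6 * t_prop                   # C senses an idle channel and starts before A's first bit arrives

assert t2 < t1 + t_prop, "C must start before A's first bit reaches it, or no collision occurs"

t3 = t1 + t_prop                    # C receives A's first bit and detects the collision
t4 = t2 + t_prop                    # A receives C's first bit and detects the collision

print(f"one-way propagation delay Tp = {t_prop*1e6:.1f} us")
print(f"C transmits for t3 - t2 = {(t3 - t2)*1e6:.1f} us before aborting")
print(f"A transmits for t4 - t1 = {(t4 - t1)*1e6:.1f} us before aborting")
# In the worst case (t2 just before t1 + Tp), A keeps transmitting for almost 2 * Tp
# before it can detect the collision.
```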
CSMA/CA

Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for wireless
networks. Collisions are avoided through the use of CSMA/CA's three strategies: the inter frame
space, the contention window, and acknowledgments, as shown in below figure.

Inter frame Space (IFS). First, collisions are avoided by deferring transmission even if the
channel is found idle. When an idle channel is found, the station does not send immediately. It
waits for a period of time called the inter frame space or IFS. Even though the channel may
appear idle when it is sensed, a distant station may have already started transmitting. The distant
station's signal has not yet reached this station. The IFS time allows the front of the transmitted
signal by the distant station to reach this station. After waiting an IFS time, if the channel is still
idle, the station can send, but it still needs to wait a time equal to the contention window
(described next).The IFS variable can also be used to prioritize stations or frame types. For
example, a station that is assigned shorter IFS has a higher priority.

Contention Window. The contention window is an amount of time divided into slots. A
station that is ready to send chooses a random number of slots as its wait time. The number of
slots in the window changes according to the binary exponential back off strategy. This means
that it is set to one slot the first time and then doubles each time the station cannot detect an idle
channel after the IFS time. This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station. One interesting point about
the contention window is that the station needs to sense the channel after each time slot.
However, if the station finds the channel busy, it does not restart the process; it just stops the
timer and restarts it when the channel is sensed as idle. This gives priority to the station with the
longest waiting time. See below figure.
Acknowledgment. With all these precautions, there still may be a collision resulting in
destroyed data. In addition, the data may be corrupted during the transmission. The positive
acknowledgment and the time-out timer can help guarantee that the receiver has received
the frame.
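The contention-window rules above (binary exponential backoff, and pausing rather than restarting the countdown when the channel turns busy) can be sketched as follows. The slot counts and the idle/busy pattern are assumed example values.

```python
# A minimal sketch of the CSMA/CA contention-window behaviour described above.

def backoff_windows(failed_attempts, max_window=32):
    """Contention-window size (in slots) after each failed attempt:
    one slot the first time, then doubling (binary exponential backoff)."""
    return [min(2 ** i, max_window) for i in range(failed_attempts + 1)]

print("window sizes over attempts:", backoff_windows(6))   # 1, 2, 4, 8, 16, 32, 32

def count_down(slots_to_wait, channel_idle_pattern):
    """Count down the chosen slots, sensing the channel after each slot.
    When the channel is busy, the timer is stopped, not restarted."""
    remaining = slots_to_wait
    for t, idle in enumerate(channel_idle_pattern):
        if idle:
            remaining -= 1            # the timer runs only during idle slots
        if remaining == 0:
            return t + 1              # waited long enough: the station may send
    return None                       # still waiting when the trace ends

# Assumed example: the station chose to wait 5 slots, and the channel is busy
# during time slots 2 and 3.
pattern = [True, True, False, False, True, True, True, True]
print("ready to transmit after time slot", count_down(5, pattern))
```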

Controlled Access Protocols


1. Polling:

A primary station (master) controls access by periodically "polling" secondary stations
(slaves) to see if they have data to send.
The primary station sends a "poll" message to each secondary station in a predefined
order.
If a secondary station has data to transmit, it responds with its data; otherwise, it sends
a "not ready" message.
Example: In a network with a central server (primary) and multiple clients (secondary),
the server would poll each client in turn to see if they need to send data.
Advantages: Predictable access patterns, can support Quality of Service (QoS) if
implemented correctly.
Disadvantages: Can be inefficient if the primary station spends a lot of time polling idle
stations.
2. Reservation:

Time is divided into intervals, and stations must make reservations before transmitting
data.
A reservation frame precedes the data frame, allowing stations to reserve a specific time
slot for their transmission.

Example: In a network with a central scheduler, a station would send a reservation
request for a specific time slot, and the scheduler would grant or deny the request.

Advantages: Provides predictable and reliable access to the medium; well suited for
real-time traffic.

Disadvantages: Overhead associated with the reservation process, and potential for delays if
reservation requests are not handled efficiently.

The following figure shows a situation with five stations and a five-slot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the
second interval, only station 1 has made a reservation.

3. Token Passing:

Stations are arranged in a logical ring (not necessarily a physical ring topology).
A special frame called a "token" circulates around the ring.
A station can only transmit data when it possesses the token.
When a station finishes transmitting, it passes the token to the next station in the ring.
Example: In a network using FDDI (Fiber Distributed Data Interface), a token circulates
around a dual ring topology, and only the station holding the token can transmit data.
Advantages: Orderly access to the medium, avoids collisions, good throughput under
heavy load.
Disadvantages: If the token is lost or a station fails, the network can be disrupted.
Variations: Can be implemented in a logical ring or a physical ring topology.

Collision Free Protocols

The bit-map protocol is a collision-free protocol. In the bit-map method, each contention period consists of
exactly N slots, one per station. If a station has a frame to send, it transmits a 1 bit in its corresponding slot. For
example, if station 2 has a frame to send, it transmits a 1 bit in slot 2.

In general, station j announces that it has a frame to send by inserting a 1 bit into slot j. In this
way, every station has complete knowledge of which stations wish to transmit, and the stations then
transmit in numerical order. There will never be any collisions because everyone agrees on who goes next.
Protocols like this, in which the desire to transmit is broadcast before the actual transmission, are called
reservation protocols.
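A minimal sketch of one contention period is shown below; the number of stations and the set of ready stations are assumed example values.

```python
# A minimal sketch of the bit-map reservation protocol described above:
# N reservation slots, station j sets bit j if it has a frame to send, and
# the stations then transmit in numerical order.

N = 8                                   # number of stations (and contention slots)
ready = {1, 3, 7}                       # stations that currently have a frame (assumed)

# Contention period: each station announces itself in its own slot.
bitmap = [1 if station in ready else 0 for station in range(N)]
print("reservation bitmap:", bitmap)

# Every station now knows who wants to transmit; data frames follow in order.
for station, wants_to_send in enumerate(bitmap):
    if wants_to_send:
        print(f"station {station} transmits its frame")
```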

Binary Countdown Protocol

Binary Countdown is a collision-free protocol that operates in the Medium Access Control (MAC)
layer of the OSI model. In computer networks, when more than one station tries to transmit simultaneously
via a shared channel, the transmitted data is garbled; this event is called a collision. Collision-free protocols
resolve channel access while the stations are contending for the shared channel, thus eliminating any
possibility of collisions.

Working Principle of Binary Countdown


In a binary countdown protocol, each station is assigned a binary address. The binary addresses are bit strings
of equal length. When a station wants to transmit, it broadcasts its address to all the stations on the channel,
one bit at a time, starting with the highest-order bit.

In order to decide which station gets channel access, the address bits broadcast by the stations are ORed
together. As soon as a station sees that a high-order bit position that is 0 in its own address has been
overwritten with a 1, it gives up. The highest-numbered station gets channel access.

Example

Suppose that six stations contend for channel access, with the addresses 1011, 0010, 0101, 1100,
1001, and 1101.

The iterative steps are −

● All stations broadcast their most significant bit, i.e. 1, 0, 0, 1, 1, 1. Stations 0010 and 0101 see a 1 bit
from other stations, so they give up competing for the channel.
● Stations 1011, 1100, 1001, and 1101 continue. They broadcast their next bit, i.e. 0, 1, 0, 1. Stations
1011 and 1001 see a 1 bit from other stations, so they give up competing for the channel.
● Stations 1100 and 1101 continue. They broadcast their next bit, i.e. 0, 0. Since both of them have the
same bit value, both broadcast their next bit.
● Stations 1100 and 1101 broadcast their least significant bit, i.e. 0 and 1. Since station 1101 has a 1
while the other has a 0, station 1101 gets access to the channel.
● After station 1101 has completed its frame transmission, or a time-out occurs, the next contention
cycle starts.
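The arbitration above can be reproduced with a short script using the same six addresses; the function name and output format are illustrative only.

```python
# A minimal sketch of binary countdown arbitration: stations broadcast their
# addresses bit by bit (highest-order bit first); the bits are ORed, and a
# station drops out as soon as it sees a 1 where its own bit is 0.

addresses = ["1011", "0010", "0101", "1100", "1001", "1101"]

def binary_countdown(addresses):
    survivors = list(addresses)
    bits = len(addresses[0])
    for position in range(bits):                             # MSB first
        channel = max(int(a[position]) for a in survivors)   # OR of the broadcast bits
        still_in = [a for a in survivors if int(a[position]) == channel]
        dropped = [a for a in survivors if a not in still_in]
        if dropped:
            print(f"bit {position}: OR = {channel}, stations {dropped} give up")
        else:
            print(f"bit {position}: OR = {channel}, no station gives up")
        survivors = still_in
    return survivors[0]

print("winner:", binary_countdown(addresses))                # 1101, as in the text
```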

3. Limited Contention Protocols:


Collision-based protocols (pure and slotted ALOHA, CSMA/CD) are good when the network load is
low.
Collision-free protocols (bit-map, binary countdown) are good when the load is high.

Limited contention protocols combine the advantages of collision-based and collision-free protocols. Under
light load, they behave like the ALOHA scheme; under heavy load, they behave like the bit-map protocols.

4. Adaptive Tree Walk Protocol:

In the adaptive tree walk protocol, the stations (nodes) are arranged in the form of a binary tree, which is used
to limit the contention for each slot.
Under light load, every station can try for each slot, as in ALOHA.
Under heavy load, only a group of stations can try for each slot.

How it works:
1. Treat every station as a leaf of a binary tree.
2. In the first slot (after a successful transmission), all stations can try to acquire the slot (all leaves under the root node).
3. If there is no conflict, fine.
4. Otherwise, in case of conflict, only the stations under one subtree get to try in the next slot (a depth-first search of the tree).

Slot-0: C*, E*, F*, H* (all stations under node 0 that want to send can try), conflict

Slot-1: C* (all stations under node 1 can try), C sends

Slot-2: E*, F*, H* (all stations under node 2 can try), conflict

Slot-3: E*, F* (all stations under node 5 can try), conflict

Slot-4: E* (all stations under E can try), E sends

Slot-5: F* (all stations under F can try), F sends

Slot-6: H* (all stations under node 6 can try), H sends.
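The slot-by-slot walkthrough above can be reproduced by a small recursive search. The tree below (root node 0, internal nodes 1-6, stations A-H as leaves) and the ready set {C, E, F, H} match the example; the code itself is an illustrative sketch, not a standard implementation.

```python
# A minimal sketch of adaptive tree walk contention resolution: after a
# collision at a node, its two subtrees are probed depth-first in later slots.

def leaves_under(node, stations):
    """Return the ready stations (leaves) covered by a node."""
    if isinstance(node, str):              # a leaf is just the station name
        return [node] if node in stations else []
    _, left, right = node                  # internal node: (node_id, left, right)
    return leaves_under(left, stations) + leaves_under(right, stations)

def walk(node, ready, slot=0):
    contenders = leaves_under(node, ready)
    label = node if isinstance(node, str) else f"node {node[0]}"
    if len(contenders) == 0:
        print(f"Slot {slot}: {label}: idle")
        return slot + 1
    if len(contenders) == 1:
        print(f"Slot {slot}: {label}: {contenders[0]} sends")
        return slot + 1
    print(f"Slot {slot}: {label}: {', '.join(contenders)} -> conflict")
    slot += 1
    _, left, right = node
    slot = walk(left, ready, slot)         # resolve the left subtree first
    return walk(right, ready, slot)        # then the right subtree

# Node 3 covers A,B; node 4 covers C,D; node 5 covers E,F; node 6 covers G,H.
tree = (0,
        (1, (3, "A", "B"), (4, "C", "D")),
        (2, (5, "E", "F"), (6, "G", "H")))

walk(tree, ready={"C", "E", "F", "H"})     # reproduces slots 0-6 listed above
```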

Channelization protocols

Channelization is a multiple-access method in which the available bandwidth of a link
is shared in time, frequency, or through code, between different stations. In this section,
we discuss three channelization protocols: FDMA, TDMA, and CDMA.

FDMA (Frequency Division Multiple Access):


Concept: Divides the total available bandwidth into smaller frequency bands, assigning each user
a unique frequency band.
How it works: Each user transmits on their allocated frequency, and the receiver filters out all other
frequencies.
Example: Early cellular systems and satellite communications often used FDMA.
Advantages: Simple to implement, widely used in various applications.
Disadvantages: Can be less spectrum efficient, especially if users don't transmit continuously.

TDMA (Time Division Multiple Access):


Concept: Divides the transmission time into time slots, allocating each user a specific time slot to
transmit.
How it works: Users take turns transmitting during their assigned time slots, sharing the same
frequency band.
Example: GSM (Global System for Mobile communications) uses TDMA.
Advantages: Higher spectral efficiency than FDMA, can be more flexible than FDMA.
Disadvantages: Requires precise time synchronization between users; can be affected by
multipath propagation.

CDMA
Concept:
CDMA allows multiple users to share the same frequency band simultaneously by using unique
codes to distinguish between users.
How it works:
Each user's data is multiplied by a unique code, and the resulting signals are transmitted over the
same frequency band. The receiver uses the corresponding code to decode the desired user's
signal.
Analogy

Let us first give an analogy. CDMA simply means communication with different codes. For example, in a
large room with many people, two people can talk in English if nobody else understands English. Another
two people can talk in Chinese if they are the only ones who understand Chinese, and so on. In other words,
the common channel, the space of the room in this case, can easily allow communication between several
couples, but in different languages (codes).

Idea

Let us assume we have four stations 1, 2, 3, and 4 connected to the same channel. The data from station 1 are
d1, from station 2 are d2, and so on. The code assigned to the first station is c1, to the second is c2, and so on.
We assume that the assigned codes have two properties:

1. If we multiply each code by another, we get 0.

2. If we multiply each code by itself, we get 4 (the number of stations).

With these two properties in mind, let us see how the above four stations can send data
using the same common channel, as shown in the figure. Station 1 multiplies (a special kind of multiplication, as
we will see) its data by its code to get d1 · c1. Station 2 multiplies its data by its code to get d2 · c2, and so on.
The data that go on the channel are the sum of all these terms, as shown in the box. Any
station that wants to receive data from one of the other three multiplies the data on the channel by the code of the
sender. For example, suppose stations 1 and 2 are talking to each other. Station 2 wants to hear what station 1 is
saying. It multiplies the data on the channel by c1, the code of station 1. Because (c1 · c1) is 4, but (c2 · c1), (c3 · c1),
and (c4 · c1) are all 0s, station 2 divides the result by 4 to get the data from station 1.

data · c1 = (d1 · c1 + d2 · c2 + d3 · c3 + d4 · c4) · c1
          = d1 · c1 · c1 + d2 · c2 · c1 + d3 · c3 · c1 + d4 · c4 · c1
          = 4 × d1

(4 × d1) / 4 = d1
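The two code properties and the recovery step can be checked numerically. The 4-chip Walsh codes below are an assumption (the text only requires the two properties), and the data bits are arbitrary example values.

```python
# A minimal numeric sketch of the CDMA idea described above, using 4-chip
# Walsh codes whose pairwise products are 0 and whose self-products are 4.

c1 = [+1, +1, +1, +1]
c2 = [+1, -1, +1, -1]
c3 = [+1, +1, -1, -1]
c4 = [+1, -1, -1, +1]
codes = [c1, c2, c3, c4]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Check the two properties from the text: ci . cj = 0 (i != j), ci . ci = 4.
assert all(dot(codes[i], codes[j]) == (4 if i == j else 0)
           for i in range(4) for j in range(4))

# Each station encodes one data bit as +1 or -1 (assumed example values).
d = [+1, -1, -1, +1]          # data bits d1..d4

# The channel carries the sum d1.c1 + d2.c2 + d3.c3 + d4.c4, chip by chip.
channel = [sum(d[i] * codes[i][k] for i in range(4)) for k in range(4)]

# Station 2 recovers station 1's bit: multiply by c1, then divide by 4.
d1_recovered = dot(channel, c1) // 4
print("signal on the channel:", channel)
print("data bit recovered from station 1:", d1_recovered)    # prints 1 (= d1)
```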

Example Data Link Protocols


HDLC (High-Level Data Link Control)

HDLC (High-Level Data Link Control) is a bit-oriented code-transparent synchronous data link
layer protocol developed by the International Organization for Standardization (ISO). HDLC
provides both connection-oriented and connectionless service.

In HDLC, data is organized into a unit (called a frame) and sent across a network to a destination
that verifies its successful arrival. It supports half-duplex and full-duplex transmission, point-to-point
and multipoint configurations, and switched or non-switched channels.

HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame. The fields of an HDLC frame are −

● Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit pattern
of the flag is 01111110.
● Address − It contains the address of the receiver. If the frame is sent by the primary station, it
contains the address(es) of the secondary station(s). If it is sent by the secondary station, it
contains the address of the primary station. The address field may be from 1 byte to several
bytes.
● Control − It is 1 or 2 bytes containing flow and error control information.
● Payload − This carries the data from the network layer. Its length may vary from one network to
another.
● FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code
used is CRC (cyclic redundancy check).

HDLC frame types:


Three fundamental types of HDLC frames may be distinguished:

∙ Information frames, or I-frames, transport user data from the network layer. They can also include flow
and error control information piggybacked on data.

Supervisory frames, or S-frames, are used for flow and error control whenever
piggybacking is impossible or inappropriate, such as when a station does not have data to
send. S-frames do not have information fields.
Some examples of S-frames are −
1. RR — receive ready
2. RNR — receive not ready
3. REJ — reject on frame N(R)
4. SREJ — selective reject on N(R)

Unnumbered frames, or U-frames, are used for various miscellaneous purposes, including
link management. Some U-frames contain an information field, depending on the type.

Point-to-Point Protocol (PPP)

Point-to-Point Protocol (PPP) is a communication protocol of the data link layer
that is used to transmit multiprotocol data between two directly connected (point-to-
point) computers.

It is a byte-oriented protocol that is widely used in broadband communications
with heavy loads and high speeds.

Since it is a data link layer protocol, data is transmitted in frames. PPP is defined in
RFC 1661.

Services Provided by PPP


The main services provided by the Point-to-Point Protocol are –

1. Defining the frame format of the data to be transmitted.

2. Defining the procedure for establishing a link between two points and
exchanging data.

3. Stating the method of encapsulation of network layer data in the frame.

4. Stating the authentication rules for the communicating devices.

5. Providing addresses for network communication.

6. Providing connections over multiple links.

7. Supporting a variety of network layer protocols by providing a
range of services.

Components of PPP

The Point-to-Point Protocol is a layered protocol having three components –

∙ Encapsulation Component − It encapsulates the datagram so that it can
be transmitted over the specified physical layer.

∙ Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing,
maintaining, and terminating links for transmission. It also provides negotiation for the setup of
options and the use of features by the two endpoints of the link.

∙ Authentication Protocols (AP) − These protocols authenticate the endpoints for the use of services.
The two authentication protocols of PPP are –

o Password Authentication Protocol (PAP)


o Challenge Handshake Authentication Protocol (CHAP)

∙ Network Control Protocols (NCPs) − These protocols are used for negotiating the
parameters and facilities for the network layer. For every higher-layer protocol supported by
PPP, there is one NCP. Some of the NCPs of PPP are –

o Internet Protocol Control Protocol (IPCP)


o OSI Network Layer Control Protocol (OSINLCP)
o Internetwork Packet Exchange Control Protocol (IPXCP)
o DECnet Phase IV Control Protocol (DNCP)
o NetBIOS Frames Control Protocol (NBFCP)
o IPv6 Control Protocol (IPV6CP)

PPP Frame
PPP is a byte-oriented protocol where each field of the frame is composed of
one or more bytes. The fields of a PPP frame are –
∙ Flag − 1 byte that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.

∙ Address − 1 byte which is set to 11111111 in case of broadcast.

∙ Control − 1 byte set to a constant value of 11000000.

∙ Protocol − 1 or 2 bytes that define the type of data contained in the
payload field.

∙ Payload − This carries the data from the network layer. The maximum
length of the payload field is 1500 bytes. However, this may be negotiated
between the endpoints of communication.

∙ FCS − It is a 2-byte or 4-byte frame check sequence for error detection.
The standard code used is CRC (cyclic redundancy check).

LAST TWO TOPICS

1. WIRELESS LAN

2. SWITCHING

REFER PPTS
