
Unit-2

Data Link Layer:

The data link layer in the OSI (Open Systems Interconnection) model lies between the physical layer and the network layer. It converts the raw transmission facility provided by the physical layer into a reliable, error-free link.
The main functions and the design issues of this layer are

● Providing services to the network layer
● Framing
● Error Control
● Flow Control
Services to the Network Layer
In the OSI model, each layer uses the services of the layer below it and provides services to the layer above it. The data link layer uses the services offered by the physical layer. The primary function of this layer is to provide a well-defined service interface to the network layer above it.

The data link layer can provide three types of services −

● Unacknowledged connectionless service
● Acknowledged connectionless service
● Acknowledged connection-oriented service

1. Unacknowledged Connectionless Service


● In this type of service, the source machine sends frames to the destination machine, but the destination machine does not send back any acknowledgement of these frames. Hence it is called an unacknowledged service.

● There is no connection establishment between the source and destination machines before data transfer, nor any release after data transfer. Therefore it is known as a connectionless service.

● There is no error control i.e. if any frame is lost due to noise on the line, no attempt
is made to recover it.

● This type of service is used when the error rate is low. It is suitable for real-time traffic such as speech.

2. Acknowledged Connectionless Service

● In this service, neither the connection is established before the data transfer nor is it
released after the data transfer between source and destination.
● When the sender sends the data frames to destination, destination machine sends
back the acknowledgement of these frames.

● This type of service provides additional reliability because the source machine retransmits a frame if it does not receive its acknowledgement within the specified time.

● This service is useful over unreliable channels, such as wireless systems.

3. Acknowledged Connection - Oriented Service

● This service is the most sophisticated service provided by data link layer to network
layer.

● It is connection-oriented, meaning that a connection is established between the source and destination before any data is transferred.

● In this service, data transfer has three distinct phases: connection establishment, actual data transfer, and connection release.

● Here, each frame being transmitted from source to destination is given a specific
number and is acknowledged by the destination machine.

● All the frames are received by the destination in the same order in which they are sent by the source.
Framing
In the physical layer, data transmission involves synchronized transmission of bits from the
source to the destination. The data link layer packs these bits into frames.
The data link layer takes the packets from the network layer and encapsulates them into frames. If the frame size becomes too large, the packet may be divided into smaller frames. Smaller frames make flow control and error control more efficient.
Then, it sends each frame bit-by-bit on the hardware. At receiver’s end, data link layer picks up
signals from hardware and assembles them into frames.

Parts of a Frame
A frame has the following parts −
● Frame Header − It contains the source and the destination addresses of the frame.
● Payload field − It contains the message to be delivered.
● Trailer − It contains the error detection and error correction bits.
● Flag − It marks the beginning and end of the frame.

Types of Framing
Framing can be of two types, fixed sized framing and variable sized framing.
Fixed-sized Framing
Here the size of the frame is fixed and so the frame length acts as a delimiter of the frame.
Consequently, it does not require additional boundary bits to identify the start and end of the
frame.
Example − ATM cells.
Variable – Sized Framing
Here, the size of each frame to be transmitted may be different. So additional mechanisms are
kept to mark the end of one frame and the beginning of the next frame.
It is used in local area networks.
Two ways to define frame delimiters in variable sized framing are −
● Length Field − Here, a length field is used that determines the size of the frame. It is used
in Ethernet (IEEE 802.3).
● End Delimiter − Here, a pattern is used as a delimiter to determine the size of the frame. It is used in Token Ring. If the pattern occurs in the message, then two approaches are used to avoid the situation −
o Byte Stuffing − A byte is stuffed in the message to differentiate it from the delimiter. This is also called character-oriented framing.
o Bit Stuffing − A pattern of bits of arbitrary length is stuffed in the message to differentiate it from the delimiter. This is also called bit-oriented framing.
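Byte stuffing, as described above, can be sketched in a few lines. Below is a minimal illustration in Python; the FLAG and ESC byte values are assumptions for the example, not taken from any particular protocol.

```python
FLAG = 0x7E  # assumed delimiter marking frame boundaries
ESC = 0x7D   # assumed escape byte stuffed before any FLAG/ESC in the payload

def byte_stuff(payload: bytes) -> bytes:
    """Escape every FLAG or ESC byte inside the payload, then add delimiters."""
    stuffed = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)  # stuff an escape byte first
        stuffed.append(b)
    return bytes([FLAG]) + bytes(stuffed) + bytes([FLAG])

def byte_unstuff(frame: bytes) -> bytes:
    """Reverse the stuffing between the two FLAG delimiters."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            i += 1  # drop the escape byte, keep the next byte literally
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x01, FLAG, 0x02])   # payload that contains the delimiter
assert byte_unstuff(byte_stuff(data)) == data
```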
Error Control
Error control in data link layer is the process of detecting and correcting data frames that have
been corrupted or lost during transmission.
In the case of lost or corrupted frames, the receiver does not receive the correct data frame, and the sender is unaware of the loss. The data link layer follows a technique to detect transit errors and take the necessary action, namely retransmission of the frame whenever an error is detected or a frame is lost. This process is called Automatic Repeat Request (ARQ).
Phases in Error Control
The error control mechanism in data link layer involves the following phases −
● Detection of Error − Transmission error, if any, is detected by either the sender or the
receiver.
● Acknowledgment − An acknowledgment may be positive or negative.
o Positive ACK − On receiving a correct frame, the receiver sends a positive acknowledgment.
o Negative ACK − On receiving a damaged frame or a duplicate frame, the receiver
sends a negative acknowledgment back to the sender.
● Retransmission − The sender maintains a clock and sets a timeout period. If an
acknowledgment of a data-frame previously transmitted does not arrive before the
timeout, or a negative acknowledgment is received, the sender retransmits the frame.
Error Control Techniques
There are three main techniques for error control −
● Stop and Wait ARQ
This protocol involves the following transitions −
o A timeout counter is maintained by the sender, which is started when a frame is
sent.
o If the sender receives acknowledgment of the sent frame within time, the sender is
confirmed about successful delivery of the frame. It then transmits the next frame
in queue.
o If the sender does not receive the acknowledgment within time, the sender
assumes that either the frame or its acknowledgment is lost in transit. It then
retransmits the frame.
o If the sender receives a negative acknowledgment, the sender retransmits the
frame.
● Go-Back-N ARQ
The working principle of this protocol is −
o The sender has a buffer called the sending window.
o The sender sends multiple frames based upon the sending-window size, without
receiving the acknowledgment of the previous ones.
o The receiver receives frames one by one. It keeps track of incoming frame’s
sequence number and sends the corresponding acknowledgment frames.
o After the sender has sent all the frames in window, it checks up to what sequence
number it has received positive acknowledgment.
o If the sender has received positive acknowledgment for all the frames, it sends
next set of frames.
o If the sender receives a NACK, or does not receive an ACK for a particular frame, it retransmits that frame and all the frames sent after it.
● Selective Repeat ARQ
o Both the sender and the receiver have buffers called sending window and receiving
window respectively.
o The sender sends multiple frames based upon the sending-window size, without
receiving the acknowledgment of the previous ones.
o The receiver also receives multiple frames within the receiving window size.
o The receiver keeps track of the incoming frames' sequence numbers and buffers the frames in memory.
o It sends an ACK for every successfully received frame and a NACK only for the frames that are missing or damaged.
o The sender, in this case, retransmits only the frames for which a NACK is received. A receiver-side sketch follows this list.
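The buffering and in-order delivery on the Selective Repeat receiver side can be sketched as follows. This is a minimal illustration in Python under simplifying assumptions (unbounded sequence numbers and buffer, ACK transmission omitted); it is not a full protocol implementation.

```python
class SelectiveRepeatReceiver:
    """Buffers out-of-order frames and delivers them in order."""

    def __init__(self):
        self.expected = 0   # next sequence number to deliver in order
        self.buffer = {}    # out-of-order frames, keyed by sequence number

    def receive(self, seq, frame):
        """Buffer the frame, then deliver everything that is now in order."""
        self.buffer[seq] = frame
        delivered = []
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered    # frames handed up to the network layer

rx = SelectiveRepeatReceiver()
assert rx.receive(1, "B") == []          # out of order: buffered, not delivered
assert rx.receive(0, "A") == ["A", "B"]  # gap filled: both delivered in order
```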
Flow Control
Flow control is a technique that allows two stations working at different speeds to communicate
with each other. It is a set of measures taken to regulate the amount of data that a sender sends so
that a fast sender does not overwhelm a slow receiver. In data link layer, flow control restricts the
number of frames the sender can send before it waits for an acknowledgment from the receiver.
Approaches of Flow Control
Flow control can be broadly classified into two categories −

● Feedback-based Flow Control − In these protocols, the sender sends frames only after it has received acknowledgments from the receiver. This is used in the data link layer.
● Rate-based Flow Control − These protocols have built-in mechanisms to restrict the rate of data transmission without requiring acknowledgments from the receiver. This is used in the network layer and the transport layer.
Flow Control Techniques in Data Link Layer
Data link layer uses feedback based flow control mechanisms. There are two main techniques −

Stop and Wait
This protocol involves the following transitions −
● The sender sends a frame and waits for acknowledgment.
● Once the receiver receives the frame, it sends an acknowledgment frame back to the
sender.
● On receiving the acknowledgment frame, the sender understands that the receiver is ready to accept the next frame, so it sends the next frame in the queue.
Sliding Window
This protocol improves the efficiency of stop and wait protocol by allowing multiple frames to be
transmitted before receiving an acknowledgment.
The working principle of this protocol can be described as follows −
● Both the sender and the receiver have finite-sized buffers called windows. The sender and the receiver agree upon the number of frames to be sent based upon the buffer size.
● The sender sends multiple frames in a sequence, without waiting for acknowledgment.
When its sending window is filled, it waits for acknowledgment. On receiving
acknowledgment, it advances the window and transmits the next frames, according to the
number of acknowledgments received.
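The send-side bookkeeping of a sliding window can be sketched as follows. This is a minimal Python illustration with assumed names (the window size and a cumulative ack() interface); real protocols also number frames modulo a fixed sequence space.

```python
class SlidingWindowSender:
    """Tracks which frames may be sent and slides the window on ACKs."""

    def __init__(self, window_size=4):
        self.window_size = window_size
        self.base = 0       # oldest unacknowledged frame
        self.next_seq = 0   # next frame to send

    def can_send(self):
        return self.next_seq < self.base + self.window_size

    def send(self):
        assert self.can_send(), "window full: must wait for an ACK"
        seq = self.next_seq
        self.next_seq += 1
        return seq          # sequence number of the frame put on the wire

    def ack(self, seq):
        """Cumulative ACK: slides the window past frame `seq`."""
        self.base = max(self.base, seq + 1)

tx = SlidingWindowSender(window_size=2)
tx.send(); tx.send()
assert not tx.can_send()    # window is full, sender must wait
tx.ack(0)
assert tx.can_send()        # window slid forward by one frame
```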

Error & Flow Control Mechanisms

Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control and flow control. Framing is the process of dividing bit streams from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes. Error control mechanisms deal with transmission errors and the retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.
Types of Data Link Protocols
Data link protocols can be broadly divided into two categories, depending on whether the
transmission channel is noiseless or noisy.

Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data transmission over an ideal channel, i.e. a channel through which transmission can never go wrong. It has distinct procedures for the sender and the receiver. The sender simply sends all its data onto the channel as soon as it is available in its buffer. The receiver is assumed to process all incoming data instantly. It is hypothetical since it does not handle flow control or error control.

Stop – and – Wait Protocol

The Stop-and-Wait protocol is also designed for a noiseless channel. It provides unidirectional data transmission without any error control facilities. However, it provides for flow control so that a fast sender does not drown a slow receiver. The receiver has a finite buffer size and finite processing speed. The sender can send a frame only when it has received an indication from the receiver that it is available for further data processing.

Stop – and – Wait ARQ


Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a variation of the above
protocol with added error control mechanisms, appropriate for noisy channels. The sender keeps a
copy of the sent frame. It then waits for a finite time to receive a positive acknowledgement from
receiver. If the timer expires or a negative acknowledgement is received, the frame is
retransmitted. If a positive acknowledgement is received then the next frame is sent.

Go – Back – N ARQ

● The sender maintains a timeout counter. When a frame is sent, the sender starts the timeout counter.
● If the acknowledgement of a frame arrives in time, the sender transmits the next frame in the queue.
● If the acknowledgement does not arrive in time, the sender assumes that either the frame or its acknowledgement was lost in transit. The sender retransmits the frame and restarts the timeout counter.
● If a negative acknowledgement is received, the sender retransmits the frame.
● The sender keeps a copy of each sent frame until its successful delivery is ensured.
● The receiver responds with an ACK when it successfully receives a frame.
● Both data and ACK frames must be numbered.

Selective Repeat ARQ

This protocol also provides for sending multiple frames before receiving the acknowledgement
for the first frame. However, here only the erroneous or lost frames are retransmitted, while the
good frames are received and buffered.
Difference between Go-Back-N ARQ and Selective Repeat ARQ

● In Go-Back-N ARQ, if a frame is corrupted or lost, that frame and all subsequent frames have to be sent again. In Selective Repeat ARQ, only the corrupted or lost frame is sent again.
● Go-Back-N ARQ wastes a lot of bandwidth if the error rate is high. Selective Repeat ARQ wastes comparatively little bandwidth.
● Go-Back-N ARQ is less complex. Selective Repeat ARQ is more complex because it has to do sorting and searching as well, and it also requires more storage.
● Go-Back-N ARQ does not require sorting. In Selective Repeat ARQ, sorting is done to get the frames in the correct order.
● Go-Back-N ARQ does not require searching. Selective Repeat ARQ performs a search operation.
● Go-Back-N ARQ is used more often. Selective Repeat ARQ is used less because it is more complex.

Error Detection & Correction


There are many reasons, such as noise and cross-talk, which may cause data to get corrupted during transmission. The upper layers work on a generalized view of the network architecture and are not aware of actual hardware data processing. Hence, the upper layers expect error-free transmission between systems. Most applications would not function as expected if they received erroneous data. Applications such as voice and video may not be as affected, and may still function well with some errors.
The data link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.

Types of Errors

There may be three types of errors:


● Single bit error
Only one bit in the frame, anywhere, is corrupted.
● Multiple bits error
The frame is received with more than one bit in a corrupted state.
● Burst error
The frame contains more than one consecutive corrupted bit.


Error control mechanism may involve two possible ways:
● Error detection
● Error correction

Error Detection

Errors in the received frames are detected by means of a Parity Check or a Cyclic Redundancy Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that the bits received at the other end are the same as they were sent. If the counter-check at the receiver's end fails, the bits are considered corrupted.

Parity Check

One extra bit is sent along with the original bits to make the number of 1s either even, in the case of even parity, or odd, in the case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even parity is used and the number of 1s is even, then one bit with value 0 is added; this way the number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.

The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even parity is used, the frame is considered not corrupted and is accepted. Likewise, if the count of 1s is odd and odd parity is used, the frame is considered not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But when more than one bit is erroneous (for example, two flipped bits cancel each other out), it is very hard for the receiver to detect the error.
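A minimal sketch of even parity in Python follows; the single-byte frame is an illustrative assumption.

```python
def even_parity_bit(data: bytes) -> int:
    """Return the parity bit that makes the total count of 1s even."""
    ones = sum(bin(b).count("1") for b in data)
    return ones % 2

def check_even_parity(data: bytes, parity: int) -> bool:
    """Accept the frame only if the recomputed parity matches."""
    return even_parity_bit(data) == parity

frame = b"\x0b"                   # 0000 1011 -> three 1s, parity bit is 1
p = even_parity_bit(frame)
assert check_even_parity(frame, p)
corrupted = b"\x0f"               # a single flipped bit: detected
assert not check_even_parity(corrupted, p)
```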

Cyclic Redundancy Check (CRC)

CRC is a different approach to detect if the received frame contains valid data. This technique
involves binary division of the data bits being sent. The divisor is generated using polynomials.
The sender performs a division operation on the bits being sent and calculates the remainder.
Before sending the actual bits, the sender adds the remainder at the end of the actual bits. Actual
data bits plus the remainder is called a codeword. The sender transmits data bits as codewords.
At the other end, the receiver performs the division operation on the codeword using the same CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise, it is assumed that some data corruption occurred in transit.
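The binary (modulo-2) division described above can be sketched as follows. The 4-bit divisor 1011 is an illustrative assumption; real links use standard generator polynomials such as CRC-32.

```python
def crc_remainder(data_bits: str, divisor: str) -> str:
    """Modulo-2 division: append n zero bits, then XOR the divisor through."""
    n = len(divisor) - 1
    bits = list(data_bits + "0" * n)
    for i in range(len(data_bits)):
        if bits[i] == "1":        # XOR the divisor in at this position
            for j in range(len(divisor)):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-n:])     # remainder = last n bits

data = "100100"
rem = crc_remainder(data, "1011")   # remainder appended by the sender
codeword = data + rem
# The receiver divides the codeword; an all-zero remainder means accept.
assert set(crc_remainder(codeword, "1011")) == {"0"}
```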

Error Correction

In the digital world, error correction can be done in two ways:


● Backward Error Correction − When the receiver detects an error in the received data, it requests the sender to retransmit the data unit.
● Forward Error Correction − When the receiver detects an error in the received data, it executes an error-correcting code, which helps it to recover automatically from some kinds of errors.
The first one, Backward Error Correction, is simple and can only be efficiently used where
retransmitting is not expensive. For example, fiber optics. But in case of wireless transmission
retransmitting may cost too much. In the latter case, Forward Error Correction is used.
To correct an error in a data frame, the receiver must know exactly which bit in the frame is corrupted. To locate the bit in error, redundant bits are used as parity bits for error detection. For example, if we take ASCII words (7 data bits), there are 8 kinds of information we need: seven states to tell us which bit is in error, and one more to tell us that there is no error.
For m data bits, r redundant bits are used; r bits can provide 2^r combinations of information. In an (m + r)-bit codeword, there is a possibility that the r bits themselves may get corrupted. So the r bits must be able to identify all m + r bit locations plus the no-error state, i.e. 2^r ≥ m + r + 1.
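The smallest r satisfying this bound can be found with a short loop; a minimal sketch in Python:

```python
def redundant_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1 (the bound derived above)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

# The 7-bit ASCII example above needs 4 redundant bits: 2**4 = 16 >= 7 + 4 + 1.
assert redundant_bits(7) == 4
```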

Multiple access protocol- ALOHA, CSMA, CSMA/CA and CSMA/CD


Data Link Layer

The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided into two sublayers: data link control and multiple access resolution/protocol. The upper sublayer is responsible for flow control and error control in the data link layer, and hence it is termed logical link control. The lower sublayer is used to handle and reduce collisions from multiple access on a channel, and hence it is termed media access control.
Data Link Control

A data link control is a reliable channel for transmitting data over a dedicated link using various
techniques such as framing, error control and flow control of data packets in the computer
network.

What is a multiple access protocol?

When a sender and receiver have a dedicated link to transmit data packets, the data link control is
enough to handle the channel. Suppose there is no dedicated path to communicate or transfer data between two devices. In that case, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.
For example, suppose there is a classroom full of students. When a teacher asks a question, all the students (small channels) in the class start answering at the same time (transferring data simultaneously). Because all the students respond at once, their answers overlap and information is lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Multiple access protocols are subdivided into the following categories:

A. Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on another station, nor does any station control another. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, the data frames may be lost or changed, and hence not received correctly by the receiver.
Following are the different methods of random-access protocols for broadcasting frames on the
channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used in any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data to a channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit at the same time.
4. Acknowledgment of the frames exists in Aloha; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha
Whenever data is available for sending at a station, we use Pure Aloha. In Pure Aloha, each station transmits its data onto the channel without checking whether the channel is idle, so collisions may occur and the data frame can be lost. After transmitting a data frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed and waits for a random amount of time, called the backoff time (Tb), before retransmitting. It retransmits the frame until all the data is successfully transmitted to the receiver.
1. The total vulnerable time of pure Aloha is 2 * Tfr.
2. Maximum throughput occurs when G = 1/2, and is 18.4%.
3. The probability of successful transmission of a data frame is S = G * e^(-2G).

As we can see in the figure above, there are four stations for accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end.
At the same time, the other frames are lost or destroyed. Whenever two frames occupy the shared channel simultaneously, a collision occurs and both are damaged: even if only the first bit of a new frame overlaps the last bit of a frame that is almost finished, both frames are completely destroyed and both stations must retransmit their data frames.
Slotted Aloha
Slotted Aloha is designed to overcome the inefficiency of pure Aloha, which has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame over the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed per slot. If a station is unable to send its data at the beginning of a slot, it has to wait until the beginning of the next slot. However, the possibility of a collision remains if two or more stations try to send a frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted Aloha when G = 1, and is 36.8%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
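The two throughput formulas can be checked numerically; a minimal sketch in Python:

```python
import math

def pure_aloha_S(G: float) -> float:
    return G * math.exp(-2 * G)      # S = G * e^(-2G)

def slotted_aloha_S(G: float) -> float:
    return G * math.exp(-G)          # S = G * e^(-G)

# Pure Aloha peaks at G = 1/2 with S = 1/(2e), about 0.184.
assert abs(pure_aloha_S(0.5) - 1 / (2 * math.e)) < 1e-12
# Slotted Aloha peaks at G = 1 with S = 1/e, about 0.368.
assert abs(slotted_aloha_S(1.0) - 1 / math.e) < 1e-12
```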
CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access is a media access protocol that senses the traffic on a channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data onto the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of collision on the transmission medium.
CSMA Access Modes
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel and, if the channel is idle, immediately sends the data. Otherwise, it keeps sensing the channel continuously and transmits the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: In this mode, each node must sense the channel before transmitting data, and if the channel is inactive, it immediately sends the data. Otherwise, the station waits for a random time (it does not sense continuously), and when the channel is then found to be idle, it transmits the frame.
P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In the p-persistent mode, each node senses the channel, and if the channel is inactive, it sends a frame with probability p. With probability q = 1 - p, it defers and tries again in the next time slot (see the sketch below).
O-Persistent: The O-persistent method defines a priority order among the stations before transmission on the shared channel. If the channel is found to be inactive, each station waits for its turn to transmit the data.
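The p-persistent decision rule can be sketched as follows. This is a minimal Python illustration; the channel model (a callable returning True when the channel is idle) and the slot limit are assumptions of the example.

```python
import random

def p_persistent_attempt(channel_idle, p=0.3, max_slots=1000):
    """Return True in the slot where the station decides to transmit."""
    for _ in range(max_slots):
        if not channel_idle():
            continue                 # channel busy: keep sensing next slot
        if random.random() < p:
            return True              # idle: transmit with probability p
        # with probability q = 1 - p, defer to the next time slot
    return False                     # gave up within the simulated horizon

assert p_persistent_attempt(lambda: True, p=1.0)  # always-idle channel
```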
CSMA/ CD

Carrier sense multiple access with collision detection is a network protocol for transmitting data frames. The CSMA/CD protocol works within the medium access control layer. A station first senses the shared channel before broadcasting a frame; if the channel is idle, it transmits the frame while checking whether the transmission succeeds. If the frame is successfully received, the station sends the next frame. If a collision is detected, the station sends a jam/stop signal onto the shared channel to terminate the data transmission. After that, it waits for a random time before sending the frame again.
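The random wait after a collision can be sketched as follows. Binary exponential backoff is an assumption here (the text above only requires a random delay), though it is the choice classically associated with CSMA/CD.

```python
import random

def backoff_slots(attempt: int, max_exp: int = 10) -> int:
    """Pick a random number of slot times to wait after the nth collision."""
    k = min(attempt, max_exp)        # cap the exponent so waits stay bounded
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision the station waits between 0 and 7 slot times.
assert 0 <= backoff_slots(3) <= 7
```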

CSMA/ CA

Carrier sense multiple access with collision avoidance is a network protocol for carrier transmission of data frames. It works within the medium access control layer. When a data frame is sent onto the channel, the station listens to the channel to check whether the transmission is clear. If the station receives only a single signal (its own), the data frame has been successfully transmitted to the receiver. But if it gets two signals (its own and one more, with which its frame collided), a collision of frames has occurred on the shared channel. The sender thus detects the collision of its frame from the signal it receives back.
Following are the methods used in the CSMA/CA to avoid the collision:
Interframe space: In this method, the station waits for the channel to become idle, and when it finds the channel idle, it does not send the data immediately. Instead, it waits for a period of time called the interframe space (IFS). The IFS time is also used to define the priority of a station.
Contention window: In the contention window method, the total time is divided into slots. When a station is ready to transmit a data frame, it chooses a random number of slots as its wait time. If the channel becomes busy during the countdown, the station does not restart the entire process; it only pauses its timer and resumes it when the channel becomes idle again (see the sketch below).
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame if an acknowledgment is not received before its timer expires.
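The contention-window rule above, where the backoff timer freezes during busy slots instead of restarting, can be sketched as follows. The slot sequence is an assumed input for illustration.

```python
def contention_countdown(slots_idle, backoff):
    """Count the backoff down only in idle slots; return slots elapsed."""
    elapsed = 0
    for idle in slots_idle:
        elapsed += 1
        if idle:
            backoff -= 1      # the timer runs only while the channel is idle
        if backoff == 0:
            return elapsed    # backoff expired: transmit now
    return None               # backoff did not expire in the given slots

# Backoff of 2: the busy middle slot pauses the timer rather than resetting it.
assert contention_countdown([True, False, True], backoff=2) == 3
```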

B. Controlled Access Protocol

It is a method of reducing data frame collisions on a shared channel. In the controlled access method, the stations consult one another, and a particular station may send a data frame only when it is approved by all the other stations. This means a single station cannot send data frames unless it is approved by all the other stations. There are three types of controlled access: Reservation, Polling, and Token Passing.
In controlled access, the stations seek information from one another to find which station has the
right to send. It allows only one node to send at a time, to avoid collision of messages on shared
medium.
The three controlled-access methods are:
1. Reservation
2. Polling
3. Token Passing
Reservation
● In the reservation method, a station needs to make a reservation before sending data.
● The time line has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
● If there are N stations, the reservation interval is divided into N slots, and each station has one slot.
● Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot.
● In general, the ith station may announce that it has a frame to send by inserting a 1 bit into the ith slot. After all N slots have been checked, each station knows which stations wish to transmit.
● The stations which have reserved their slots transfer their frames in that order.
● After data transmission period, next reservation interval begins.
● Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In the
first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station
1 has made a reservation.
Polling
● Polling process is similar to the roll-call performed in class. Just like the teacher, a controller
sends a message to each node in turn.
● In this, one acts as a primary station(controller) and the others are secondary stations. All data
exchanges must be made through the controller.
● The message sent by the controller contains the address of the node being selected for
granting access.
● Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, a “poll reject” (NAK) message is usually sent back.
● Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.

Efficiency
Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt/(Tt + Tpoll)
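A quick numerical check of this formula, with made-up timings:

```python
def polling_efficiency(t_transmit: float, t_poll: float) -> float:
    """Efficiency = Tt / (Tt + Tpoll), as defined above."""
    return t_transmit / (t_transmit + t_poll)

# If transmission takes 8 time units and polling overhead 2, efficiency is 80%.
assert abs(polling_efficiency(8.0, 2.0) - 0.8) < 1e-12
```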
Token Passing
● In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access is governed by tokens.
● A token is a special bit pattern or a small message, which circulates from one station to the next in some predefined order.
● In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order.
● In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token along.
● After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbors and for the other N − 1 stations to send a frame, if they have one.
● Problems such as duplication or loss of the token, and the insertion or removal of a station, need to be tackled for the correct and reliable operation of this scheme.

Performance
The performance of a token ring can be characterized by two parameters:
1. Delay, a measure of the time between when a packet is ready and when it is delivered. The average time (delay) required to send the token to the next station is a/N.
2. Throughput, a measure of the successful traffic:
S = 1/(1 + a/N) for a < 1, and
S = 1/{a(1 + 1/N)} for a > 1,
where N = number of stations and a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay).
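These throughput formulas can be evaluated directly; a minimal sketch in Python with illustrative values:

```python
def token_ring_throughput(a: float, n: int) -> float:
    """Throughput S for a token ring, using the two cases above."""
    if a < 1:
        return 1 / (1 + a / n)
    return 1 / (a * (1 + 1 / n))

# With 10 stations and a = Tp/Tt = 0.2, S = 1 / (1 + 0.02), about 0.98.
assert abs(token_ring_throughput(0.2, 10) - 1 / 1.02) < 1e-12
```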

Wired LANs-Ethernet:
INTRODUCTION
A local area network (LAN) is a computer network that is designed for a limited
geographic area such as a building or a campus. Although a LAN can be used as an
isolated network to connect computers in an organization for the sole purpose of
sharing resources, most LANs today are also linked to a wide area network (WAN) or
the Internet. The LAN market has seen several technologies such as Ethernet, Token
Ring, Token Bus, FDDI, and ATM LAN. Some of these technologies survived for a
while, but Ethernet is by far the dominant technology.
1. IEEE STANDARDS
In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers. Project 802 does not seek to replace any part of the OSI or the Internet
model. Instead, it is a way of specifying functions of the physical layer and the data link
layer of major LAN protocols.

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center
(PARC). Since then, it has gone through four generations:

a. Standard Ethernet (10 Mbps),
b. Fast Ethernet (100 Mbps),
c. Gigabit Ethernet (1 Gbps), and
d. Ten-Gigabit Ethernet (10 Gbps), as shown in Figure 13.1.

Figure 13.1 Ethernet evolution through four generations

Standard Ethernet (IEEE 802.3)


Standard Ethernet also known as IEEE 802.3 was the LAN standard proposed by
IEEE. Data rate for standard Ethernet is 10 Mbps.

MAC Sublayer
In Standard Ethernet, the MAC sublayer governs the operation of the access method.
It also frames data received from the upper layer and passes them to the physical
layer.
● Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of
protocol data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide
any mechanism for acknowledging received frames, making it what is known as an
unreliable medium. Acknowledgments must be implemented at the higher layers. The
format of the MAC frame is shown in Figure 13.2.

Figure 13.2 802.3 MAC frame

i. Preamble. The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating
0s and 1s that alerts the receiving system to the coming frame and enables it to
synchronize its input timing. The pattern provides only an alert and a timing pulse.
The 56-bit pattern allows the stations to miss some bits at the beginning of the
frame. The preamble is actually added at the physical layer and is not (formally) part
of the frame.

ii. Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11, alerting the receiver that the next field is the destination address.

iii. Destination address (DA). The DA field is 6 bytes and contains the physical
address of the destination station or stations to receive the packet.

iv. Source address (SA). The SA field is also 6 bytes and contains the physical address
of the sender of the packet.

v. Length or type. This field is defined as a type field or length field. The original
Ethernet used this field as the type field to define the upper-layer protocol using the
MAC frame. The IEEE standard used it as the length field to define the number of
bytes in the data field. Both uses are common today.

vi. Data. This field carries data encapsulated from the upper-layer protocols. It is a
minimum of 46 and a maximum of 1500 bytes.

vii. CRC. The last field contains error detection information, in this case a CRC-32.
viii. Frame length. Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame, as shown in Figure 13.3.
Figure 13.3 Minimum and maximum lengths

The minimum length restriction is required for the correct operation of CSMA/CD as we
will see shortly. An Ethernet frame needs to have a minimum length of 512 bits or 64
bytes. Part of this length is the header and the trailer. If we count 18 bytes of header
and trailer (6 bytes of source address, 6 bytes of destination address, 2 bytes of length
or type, and 4 bytes of CRC), then the minimum length of data from the upper layer is
64 - 18 = 46 bytes. If the upper-layer packet is less than 46 bytes, padding is added to
make up the difference.

The standard defines the maximum length of a frame (without preamble and SFD field)
as 1518 bytes. If we subtract the 18 bytes of header and trailer, the maximum length of
the payload is 1500 bytes. The maximum length restriction has two historical reasons.
First, memory was very expensive when Ethernet was designed: a maximum length
restriction helped to reduce the size of the buffer. Second, the maximum length
restriction prevents one station from monopolizing the shared medium, blocking other
stations that have data to send.
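The length arithmetic above (18 bytes of header and trailer, padding up to 46 data bytes, payload capped at 1500 bytes) can be sketched as follows:

```python
HEADER_TRAILER = 18       # 6 DA + 6 SA + 2 length/type + 4 CRC bytes
MIN_DATA, MAX_DATA = 46, 1500

def frame_length(payload_len: int) -> int:
    """Total 802.3 frame length (without preamble and SFD) for a payload."""
    if payload_len > MAX_DATA:
        raise ValueError("payload exceeds 1500 bytes and must be fragmented")
    data = max(payload_len, MIN_DATA)   # short payloads are padded to 46
    return data + HEADER_TRAILER

assert frame_length(10) == 64          # minimum frame: 46 + 18 bytes
assert frame_length(1500) == 1518      # maximum frame length
```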

MAC Addressing
Each station on an Ethernet network (such as a PC, workstation, or printer) has its own
network interface card (NIC). The NIC fits inside the station and provides the station
with a 6-byte physical (MAC) address. As shown in Figure 13.4, the Ethernet address is
6 bytes (48 bits), normally written in hexadecimal notation, with a colon between the
bytes.

Figure 13.4 Example of an Ethernet address in hexadecimal notation

Unicast, Multicast, and Broadcast Addresses


Data is transmitted over a network by three methods, namely unicast, broadcast, and multicast (Figure 13.5). The difference between the three is summarized as follows:
● Unicast: from one source to one destination, i.e. one-to-one.
● Broadcast: from one source to all possible destinations, i.e. one-to-all.
● Multicast: from one source to multiple destinations that have stated an interest in receiving the traffic, i.e. one-to-many.

Figure 13.5 Unicasting, Multicasting and Broadcasting

● A source address is always a unicast address as the frame comes from only one
station.
● The destination address, however, can be unicast, multicast, or broadcast.
● Figure 13.6 shows how to distinguish a unicast address from a multicast
address. If the least significant bit of the first byte in a destination address is 0,
the address is unicast; otherwise, it is multicast.

Figure 13.6 Unicast and multicast MAC addresses

● A unicast destination address defines only one recipient; the relationship between the sender and the receiver is one-to-one.
● A multicast address defines a group of destination addresses; the relationship between the sender and the receivers is one-to-many.
● The broadcast address is a special case of the multicast address; the recipients are all the stations on the LAN. A broadcast destination address is forty-eight 1s.
Example 13.1
Define the type of each of the following destination addresses:
a. 4A:30:10:21:10:1A
b. 47:20:1B:2E:08:EE
c. FF:FF:FF:FF:FF:FF

Solution
To find the type of the address, we need to look at the second hexadecimal digit from
the left. If it is even, the address is unicast. If it is odd, the address is multicast. If all
digits are F's, the address is broadcast. Therefore, we have the following:
a. This is a unicast address because A in binary is 1010 (even).
b. This is a multicast address because 7 in binary is 0111 (odd).
c. This is a broadcast address because all digits are F's.
The way the addresses are sent out on line is different from the way they are written in
hexadecimal notation. The transmission is left-to-right, byte by byte; however, for
each byte, the least significant bit is sent first and the most significant bit is sent last.
This means that the bit that defines an address as unicast or multicast arrives first at the
receiver.
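The rule from Example 13.1 is easy to automate; a minimal sketch in Python:

```python
def address_type(mac: str) -> str:
    """Classify a destination MAC address as unicast/multicast/broadcast."""
    if mac.upper() == "FF:FF:FF:FF:FF:FF":
        return "broadcast"               # all forty-eight bits are 1
    first_byte = int(mac.split(":")[0], 16)
    # Least significant bit of the first byte: 0 = unicast, 1 = multicast.
    return "unicast" if first_byte % 2 == 0 else "multicast"

assert address_type("4A:30:10:21:10:1A") == "unicast"    # 0x4A is even
assert address_type("47:20:1B:2E:08:EE") == "multicast"  # 0x47 is odd
assert address_type("FF:FF:FF:FF:FF:FF") == "broadcast"
```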

Categories of Standard Ethernet

The Standard Ethernet defines several physical layer implementations; the four most common are shown in Figure 13.7.

Figure 13.7 Categories of Standard Ethernet


Encoding and Decoding
All standard implementations use digital signaling (baseband) at 10 Mbps. At the
sender, data are converted to a digital signal using the Manchester scheme; at the
receiver, the received signal is interpreted as Manchester and decoded into data.
● 10Base5: Thick Ethernet
The first implementation is called 10Base5, thick Ethernet, or Thicknet. The
nickname derives from the size of the cable, which is roughly the size of a garden hose
and too stiff to bend with your hands. 10Base5 was the first Ethernet specification to
use a bus topology with an external transceiver (transmitter/receiver) connected via a
tap to a thick coaxial cable.

The transceiver is responsible for transmitting, receiving, and detecting collisions. The
transceiver is connected to the station via a transceiver cable that provides separate
paths for sending and receiving. This means that collision can only happen in the
coaxial cable. The maximum length of the coaxial cable must not exceed 500 m; otherwise, there is excessive degradation of the signal. If a length of more than 500 m is needed, up to five segments, each a maximum of 500 meters, can be connected using repeaters.
● 10Base2: Thin Ethernet
The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2
also uses a bus topology, but the cable is much thinner and more flexible. The cable can
be bent to pass very close to the stations. In this case, the transceiver is normally part
of the network interface card (NIC), which is installed inside the station.
Note that the collision here occurs in the thin coaxial cable. This implementation is
more cost effective than 10Base5 because thin coaxial cable is less expensive than
thick coaxial and the tee connections are much cheaper than taps. Installation is simpler
because the thin coaxial cable is very flexible. However, the length of each segment
cannot exceed 185 m (close to 200 m) due to the high level of attenuation in thin
coaxial cable.
● 10Base-T: Twisted-Pair Ethernet
The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a
physical star topology. The stations are connected to a hub via two pairs of twisted
cable. Note that two pairs of twisted cable create two paths (one for sending and one
for receiving) between the station and the hub. Any collision here happens in the hub.
Compared to 10Base5 or 10Base2, we can see that the hub actually replaces the
coaxial cable as far as a collision is concerned. The maximum length of the twisted
cable here is defined as 100 m, to minimize the effect of attenuation in the twisted
cable.
● 10Base-F: Fiber Ethernet
Although there are several types of optical fiber 10-Mbps Ethernet, the most common
is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The
stations are connected to the hub using two fiber-optic cables.
Summary
Table 13.1 shows a summary of Standard Ethernet implementations.
Table 13.1 Summary of Standard Ethernet implementations

Characteristics   10Base5         10Base2        10Base-T     10Base-F
Media             Thick coaxial   Thin coaxial   2 UTP        2 Fiber
Maximum length    500 m           185 m          100 m        2000 m
Line encoding     Manchester      Manchester     Manchester   Manchester

Changes in the Standard


The 10-Mbps Standard Ethernet has gone through several changes before moving to the
higher data rates. These changes actually opened the road to the evolution of the
Ethernet to become compatible with other high-data-rate LANs. We discuss some of
these changes in this section.

Bridged Ethernet
The first step in the Ethernet evolution was the division of a LAN by bridges. A
Bridge is a two port switch used to connect two segments of a LAN. Bridges have two
effects on an Ethernet LAN:
● They raise the bandwidth and
● They separate collision domains.
Raising the Bandwidth
In an unbridged Ethernet network, the total capacity (10 Mbps) is shared among all
stations with a frame to send; the stations share the bandwidth of the network. If only
one station has frames to send, it benefits from the total capacity (10 Mbps). But if
more than one station needs to use the network, the capacity is shared. For example, if
two stations have a lot of frames to send, they probably alternate in usage. When one
station is sending, the other one refrains from sending. We can say that, in this case,
each station on average, sends at a rate of 5 Mbps. Figure 13.8 shows the situation.

Figure 13.8 Sharing bandwidth

A bridge divides the network into two or more networks. Bandwidth-wise, each
network is independent. For example, in Figure 13.9, a network with 12 stations is
divided into two networks, each with 6 stations. Now each network has a capacity of 10
Mbps. The 10-Mbps capacity in each segment is now shared between 6 stations
(actually 7 because the bridge acts as a station in each segment), not 12 stations. In a
network with a heavy load, each station theoretically is offered 10/6 Mbps instead of
10/12 Mbps, assuming that the traffic is not going through the bridge. It is obvious that
if we further divide the network, we can gain more bandwidth for each segment. For
example, if we use a four-port bridge, each station is now offered 10/3 Mbps, which is 4
times more than an unbridged network.

Figure 13.9 A network with and without a bridge


Separating Collision Domains
Another advantage of a bridge is the separation of the collision domain. Figure 13.10
shows the collision domains for an unbridged and a bridged network. You can see that
the collision domain becomes much smaller and the probability of collision is reduced
tremendously. Without bridging, 12 stations contend for access to the medium; with
bridging only 3 stations contend for access to the medium.

Figure 13.10 Collision domains in an unbridged network and a bridged network

Switched Ethernet
The idea of a bridged LAN can be extended to a switched LAN. Instead of having two
to four networks, why not have N networks, where N is the number of stations on the
LAN? In other words, if we can have a multiple-port bridge, why not have an N-port
switch? In this way, the bandwidth is shared only between the station and the switch (5
Mbps each). In addition, the collision domain is divided into N domains.
A layer 2 switch is an N-port bridge with additional sophistication that allows faster
handling of the packets. Evolution from a bridged Ethernet to a switched Ethernet was
a big step that opened the way to an even faster Ethernet, as we will see. Figure 13.11
shows a switched LAN.

Figure 13.11 Switched Ethernet


Full-Duplex Ethernet
One of the limitations of 10Base5 and 10Base2 is that communication is half-duplex
(10Base-T is always full-duplex); a station can either send or receive, but may not do
both at the same time. The next step in the evolution was to move from switched
Ethernet to full- duplex switched Ethernet. The full-duplex mode increases the capacity
of each domain from 10 to 20 Mbps. Figure 13.12 shows a switched Ethernet in
full-duplex mode. Note that instead of using one link between the station and the
switch, the configuration uses two links: one to transmit and one to receive.

Figure 13.12 Full-duplex switched Ethernet

No Need for CSMA/CD

In full-duplex switched Ethernet, there is no need for the CSMA/CD method. In a full
duplex switched Ethernet, each station is connected to the switch via two separate links.
Each station or switch can send and receive independently without worrying about
collision. Each link is a point-to-point dedicated path between the station and the
switch. There is no longer a need for carrier sensing; there is no longer a need for
collision detection. The job of the MAC layer becomes much easier. The carrier sensing
and collision detection functionalities of the MAC sub-layer can be turned off.

Fast Ethernet (IEEE 802.3u)


Fast Ethernet was designed to compete with LAN protocols such as FDDI or Fiber
Channel (or Fibre Channel, as it is sometimes spelled). IEEE created Fast Ethernet
under the name 802.3u. Fast Ethernet is backward-compatible with Standard Ethernet,
but it can transmit data 10 times faster at a rate of 100 Mbps. The goals of Fast
Ethernet can be summarized as follows:
a. Upgrade the data rate to 100 Mbps.
b. Make it compatible with Standard Ethernet.
c. Keep the same 48-bit address.
d. Keep the same frame format.
e. Keep the same minimum and maximum frame lengths.
Topology
Fast Ethernet is designed to connect two or more stations together. If there are only two
stations, they can be connected point-to-point. Three or more stations need to be
connected in a star topology with a hub or a switch at the center, as shown in Figure
13.13.
Figure 13.13 Fast Ethernet topology

Implementation
Fast Ethernet implementation at the physical layer can be categorized as either
two-wire or four-wire. The two-wire implementation can be either category 5 UTP
(100Base-TX) or fiber- optic cable (100Base-FX). The four-wire implementation is

designed only for category 3 UTP (100Base-T4). See Figure 13.14.


Figure 13.14 Fast Ethernet implementations
Table 13.2 Summary of Fast Ethernet implementations

Characteristics   100Base-TX         100Base-FX   100Base-T4
Media             Cat 5 UTP or STP   Fiber        Cat 3 UTP
Number of wires   2                  2            4
Maximum length    100 m              100 m        100 m
Line encoding     MLT-3              NRZ-I        8B/6T

Gigabit Ethernet (IEEE 802.3z)


The need for an even higher data rate resulted in the design of the Gigabit Ethernet
protocol (1000 Mbps). The IEEE committee calls the Standard 802.3z. The goals of
the Gigabit Ethernet design can be summarized as follows:
a. Upgrade the data rate to 1 Gbps.
b. Make it compatible with Standard or Fast Ethernet.
c. Use the same 48-bit address.
d. Use the same frame format.
e. Keep the same minimum and maximum frame lengths.
f. To support autonegotiation as defined in Fast Ethernet.
Gigabit Ethernet has two distinctive approaches for medium access: half-duplex and
full- duplex. Almost all implementations of Gigabit Ethernet follow the full-duplex
approach.
Topology
Gigabit Ethernet is designed to connect two or more stations. If there are only two
stations, they can be connected point-to-point. Three or more stations need to be
connected in a star topology with a hub or a switch at the center. Another possible
configuration is to connect several star topologies or let a star topology be part of

another as shown in Figure 13.15.


Figure 13.15 Topologies of Gigabit Ethernet

Implementation
Gigabit Ethernet can be categorized as either a two-wire or a four-wire implementation.
The two-wire implementations use fiber-optic cable (1000Base-SX, short-wave, or
1000Base-LX, long-wave), or STP (1000Base-CX). The four-wire version uses
category 5 twisted-pair cable (1000Base-T). In other words, we have four
implementations, as shown in Figure 13.16.
Figure 13.16 Gigabit Ethernet implementations

Summary
Table 13.3 is a summary of the Gigabit Ethernet implementations.
Table 13.3 Summary of Gigabit Ethernet implementations

Characteristics   1000Base-SX         1000Base-LX        1000Base-CX   1000Base-T
Media             Fiber, short wave   Fiber, long wave   STP           Cat 5 UTP
Number of wires   2                   2                  2             4
Maximum length    550 m               5000 m             25 m          100 m
Line encoding     NRZ                 NRZ                NRZ           4D-PAM5

Ten-Gigabit Ethernet (IEEE 802.3ae)


The IEEE committee created Ten-Gigabit Ethernet and called it Standard 802.3ae. The
goals of the Ten-Gigabit Ethernet design can be summarized as follows:
a. Upgrade the data rate to 10 Gbps.
b. Make it compatible with Standard, Fast, and Gigabit Ethernet.
c. Use the same 48-bit address.
d. Use the same frame format.
e. Keep the same minimum and maximum frame lengths.
f. Allow the interconnection of existing LANs into a metropolitan area network
(MAN) or a wide area network (WAN).
g. Make Ethernet compatible with technologies such as Frame Relay and ATM.

Ten-Gigabit Ethernet operates only in full duplex mode which means there is no need
for contention; CSMA/CD is not used in Ten-Gigabit Ethernet.
Implementation
Ten-Gigabit Ethernet is designed for using fiber-optic cable over long distances. Three
implementations are the most common: 10GBase-S, 10GBase-L, and 10GBase-E. Table
13.4 shows a summary of the Ten-Gigabit Ethernet implementations:
Table 13.4 Summary of Ten-Gigabit Ethernet implementations

Characteristics   10GBase-S           10GBase-L             10GBase-E
Wave              Short-wave          Long-wave             Extended
Media             850-nm multimode    1310-nm single mode   1550-nm single mode
Maximum length    300 m               10 km                 40 km

Introduction to Wireless LAN


Wireless LAN stands for Wireless Local Area Network. It is also called LAWN (Local Area
Wireless Network). WLAN is one in which a mobile user can connect to a Local Area Network
(LAN) through a wireless connection.
The IEEE 802.11 group of standards defines the technologies for wireless LANs. For path
sharing, 802.11 standard uses the Ethernet protocol and CSMA/CA (carrier sense multiple access
with collision avoidance). It also uses an encryption method, the Wired Equivalent Privacy (WEP) algorithm.
Wireless LANs provide high-speed data communication in small areas such as a building or an office. WLANs allow users to move around within a confined area while remaining connected to the network.
In some instances wireless LAN technology is used to save costs and avoid laying cable, while in other cases it is the only option for providing high-speed internet access to the public. Whatever the reason, wireless solutions are popping up everywhere.
Examples of WLANs that are available today are NCR's waveLAN and Motorola's ALTAIR.
Advantages of WLANs
o Flexibility: Within radio coverage, nodes can communicate without further restriction.
Radio waves can penetrate walls, senders and receivers can be placed anywhere (also
non-visible, e.g., within devices, in walls etc.).
o Planning: Only wireless ad-hoc networks allow for communication without previous planning; any wired network needs wiring plans.
o Design: Wireless networks allow for the design of independent, small devices which can
for example be put into a pocket. Cables not only restrict users but also designers of small
notepads, PDAs, etc.
o Robustness: Wireless networks can handle disasters, e.g., earthquakes, flood etc.
whereas, networks requiring a wired infrastructure will usually break down completely in
disasters.
o Cost: The cost of installing and maintaining a wireless LAN is on average lower than the
cost of installing and maintaining a traditional wired LAN, for two reasons. First, after
providing wireless access to the wireless network via an access point for the first user,
adding additional users to a network will not increase the cost. And second, wireless
LAN eliminates the direct costs of cabling and the labor associated with installing and
repairing it.
o Ease of Use: Wireless LAN is easy to use and the users need very little new information
to take advantage of WLANs.
Disadvantages of WLANs
o Quality of Service: The quality of service of a wireless LAN is typically lower than that of wired networks. The main reasons are the lower bandwidth due to limitations in radio transmission, higher error rates due to interference, and higher delay/delay variation due to extensive error correction and detection mechanisms.
o Proprietary Solutions: Due to slow standardization procedures, many companies have
come up with proprietary solutions offering standardization functionality plus many
enhanced features. Most components today adhere to the basic standards IEEE 802.11a or
802.11b.
o Restrictions: Several govt. and non-govt. institutions world-wide regulate the operation
and restrict frequencies to minimize interference.
o Global operation: Wireless LAN products are sold in all countries so, national and
international frequency regulations have to be considered.
o Low Power: Devices communicating via a wireless LAN typically run on battery power, so power consumption matters. The LAN design should take this into account and implement special power-saving modes and power management functions.
o License free operation: LAN operators don't want to apply for a special license to be
able to use the product. The equipment must operate in a license free band, such as the
2.4 GHz ISM band.
o Robust transmission technology: If a wireless LAN uses radio transmission, many other electrical devices can interfere with it (such as vacuum cleaners, train engines, hair dryers, etc.). Wireless LAN transceivers cannot be adjusted for perfect transmission in a standard office or production environment.
Asynchronous Transfer Mode (ATM) in Computer Network
Why ATM networks?

1. Driven by the integration of services and the performance requirements of both telephony and data networking: the “broadband integrated services vision” (B-ISDN).
2. Telephone networks support a single quality of service and are expensive to boot.

3. Internet supports no quality of service but is flexible and cheap.

4. ATM networks were meant to support a range of service qualities at a reasonable cost-
intended to subsume both the telephone network and the Internet.

Asynchronous Transfer Mode (ATM):


Asynchronous Transfer Mode is an International Telecommunication Union – Telecommunication Standardization Sector (ITU-T) standard for cell relay, in which all information, including multiple service types such as data, video, and voice, is conveyed in small fixed-size packets called cells. Cells are transmitted asynchronously, and the network is connection-oriented.

ATM is a technology that has its roots in the development of broadband ISDN in the 1970s and 1980s, and it can be considered an evolution of packet switching. Each cell is 53 bytes long: a 5-byte header and a 48-byte payload. Making an ATM call requires first sending a message to set up a connection.

Subsequently, all cells follow the same path to the destination. It can handle both constant rate
traffic and variable rate traffic. Thus it can carry multiple types of traffic with end-to-end quality
of service. ATM is independent of a transmission medium, they may be sent on a wire or fiber by
themselves or they may also be packaged inside the payload of other carrier systems. ATM
networks use “Packet” or “cell” Switching with virtual circuits. Its design helps in the
implementation of high-performance multimedia networking.
ATM Cell Format –
Information is transmitted in ATM in the form of fixed-size units called cells. As noted already, each cell is 53 bytes long, consisting of a 5-byte header and a 48-byte payload.

The ATM cell header can be of two format types, as follows:
1. UNI Header: This is used within private networks of ATMs for communication between
ATM endpoints and ATM switches. It includes the Generic Flow Control (GFC) field.

2. NNI Header: This is used for communication between ATM switches. It does not include the Generic Flow Control (GFC) field; instead, the Virtual Path Identifier (VPI) occupies the first 12 bits.
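The 5-byte UNI header can be packed with a few shifts. A minimal sketch in Python, using the standard UNI field widths (GFC 4 bits, VPI 8, VCI 16, payload type 3, CLP 1, HEC 8); the HEC is passed in precomputed here rather than derived from the first four bytes.

```python
def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int, hec: int) -> bytes:
    """Pack the six UNI header fields into the 5-byte (40-bit) cell header."""
    bits = (gfc << 36) | (vpi << 28) | (vci << 12) | (pt << 9) | (clp << 8) | hec
    return bits.to_bytes(5, "big")

header = pack_uni_header(gfc=0, vpi=1, vci=32, pt=0, clp=0, hec=0x55)
assert len(header) == 5   # 5-byte header + 48-byte payload = 53-byte cell
```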

Working of ATM:
The ATM standard uses two types of connections: virtual path connections (VPCs), which consist of virtual channel connections (VCCs) bundled together. A VCC is the basic unit, carrying a single stream of cells from user to user. A virtual path can be created end-to-end across an ATM network, since the network does not route cells individually per virtual circuit. All cells belonging to a particular virtual path are routed the same way through the ATM network, which also helps in faster recovery in case of major failures.
Switches connected to subscribers use both VPIs and VCIs to switch cells. Virtual path and virtual connection switches can have different virtual channel connections between them, serving the purpose of creating a virtual trunk between the switches which can be handled as a single entity. The basic operation is straightforward: the switch looks up the connection value in its local translation table to determine the outgoing port of the connection and the new VPI/VCI value of the connection on that link.
ATM vs DATA Networks (Internet) –
● ATM is a “virtual circuit” based: the path is reserved before transmission. While Internet
Protocol (IP) is connectionless and end-to-end resource reservations are not possible. RSVP
is a new signaling protocol on the internet.

● ATM cells are of a fixed, small size, a tradeoff between voice and data traffic, while IP packets are of variable size.

● Addressing: ATM uses 20-byte global NSAP addresses for signaling and 32-bit locally
assigned labels in cells. While IP uses 32-bit global addresses in all packets.

ATM Layers:

1. ATM Adaption Layer (AAL) –


It is meant for isolating higher-layer protocols from the details of ATM processes. It prepares user data for conversion into cells and segments the data into 48-byte cell payloads. The AAL protocol accepts transmissions from upper-layer services and helps in mapping applications, e.g., voice and data, to ATM cells.
2. Physical Layer –
It manages the medium-dependent transmission and is divided into two parts physical
medium-dependent sublayer and transmission convergence sublayer. The main functions are
as follows:
● It converts cells into a bitstream.
● It controls the transmission and receipt of bits in the physical medium.
● It can track the ATM cell boundaries.
● Look for the packaging of cells into the appropriate type of frames.
3. ATM Layer –
It handles transmission, switching, congestion control, cell header processing, sequential
delivery, etc., and is responsible for simultaneously sharing the virtual circuits over the
physical link known as cell multiplexing and passing cells through an ATM network known
as cell relay making use of the VPI and VCI information in the cell header.

ATM Applications:

1. ATM WANs –
ATM can be used as a WAN to send cells over long distances, with a router serving as an end-point between the ATM network and other networks, running two protocol stacks.

2. Multimedia virtual private networks and managed services –


It helps in managing ATM, LAN, voice, and video services and is capable of full-service
virtual private networking, which includes integrated access to multimedia.

3. Frame relay backbone –


Frame relay services are used as a networking infrastructure for a range of data services and for enabling frame-relay-to-ATM internetworking services.

4. Residential broadband networks –


ATM provides the networking infrastructure of choice for the establishment of residential broadband services in the search for highly scalable solutions.

5. Carrier infrastructure for telephone and private line networks –


To make more effective use of SONET/SDH fiber infrastructures by building the ATM
infrastructure for carrying the telephonic and private-line traffic.
