CN Unit2 Data Link Layer

The document covers the data link layer's design issues, including framing, error detection, and correction techniques, as well as various protocols like HDLC and ATM. It discusses methods for flow control such as Stop-and-Wait and Sliding Window, along with error detection codes like parity, checksums, and CRCs. Additionally, it explains the functionalities of Ethernet devices and the significance of medium access control protocols.

UNIT-2

Data link layer :


Design issues, framing, error detection and correction techniques with numerical,
CRC.
Elementary Protocol :
Stop and wait, Sliding Window, Data link layer in HDLC, ATM. Multiple Access
Protocols, Link Layer Addressing, ARP, DHCP.
Ethernet devices – Hubs, Bridges, and Switches.
Medium Access sub layer :
ALOHA, MAC addresses, CSMA, CSMA/CD. IEEE 802.X Standard Ethernet, wireless LAN.
Design Issues
• The data link layer uses the services
of the physical layer to send and
receive bits over communication
channels. It has a number of
functions, including:
1. Providing a well-defined service
interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that
slow receivers are not swamped by
fast senders.
Services Provided to the Network
Layer
• The data link layer can be designed to
offer various services. The actual
services that are offered vary from
protocol to protocol. Three
reasonable possibilities that we will
consider in turn are:
1. Unacknowledged connectionless
service(Ethernet/802.3)
2. Acknowledged connectionless
service(WIFI/WLAN/802.11)
3. Acknowledged connection-oriented
service.
Framing

• To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer. What the physical layer does is accept a raw
bit stream and attempt to deliver it to the destination. If the channel is noisy,as it
is for most wireless and some wired links, the physical layer will add some
redundancy to its signals to reduce the bit error rate to a tolerable level.
• The usual approach is for the data link layer to break up the bit stream into
discrete frames, compute a short token called a checksum for each frame, and
include the checksum in the frame when it is transmitted.
• When a frame arrives at the destination,the checksum is recomputed. If the
newly computed checksum is different from the one contained in the frame, the
data link layer knows that an error has occurred and takes steps to deal with it
(e.g., discarding the bad frame and possibly also sending back an error report).
We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
Byte Count
• The first framing method uses a field
in the header to specify the number
of bytes in the frame. When the data
link layer at the destination sees the
byte count,it knows how many bytes
follow and hence where the end of
the frame is.
• This technique is shown in Fig. 3-3(a)
for four small example frames of sizes
5, 5, 8,and 8 bytes, respectively.
• The trouble with this algorithm is that
the count can be garbled by a
transmission error. If that happens,
the destination will get out of
synchronization. It will then be
unable to locate the correct start of
the next frame.
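As a rough illustration, byte-count framing can be sketched in Python. The single count byte that includes itself is an assumption chosen to match the example frame sizes above:

```python
def parse_frames(stream: bytes) -> list:
    """Split a byte stream into frames using a byte-count header.

    Assumption for illustration: each frame starts with one count byte
    giving the total frame length, count byte included.
    """
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                       # header byte: frame length
        frames.append(stream[i + 1:i + count])  # the payload bytes
        i += count                              # jump to the next frame
    return frames

# Two frames of total size 5 each: count byte 5 plus 4 payload bytes.
stream = bytes([5, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print(parse_frames(stream))
```

A single corrupted count byte makes every subsequent frame boundary wrong, which is exactly the resynchronization problem described above.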
Flag bytes with byte stuffing
• The second framing method gets
around the problem of
resynchronization after an error by
having each frame start and end with
special bytes. Often the same byte,
called a flag byte, is used as both the
starting and ending delimiter.
• This byte is shown in Fig. 3-4(a) as FLAG.
• Two consecutive flag bytes indicate the
end of one frame and the start of the
next. Thus, if the receiver ever loses
synchronization it can just search for
two flag bytes to find the end of the
current frame and the start of the next
frame.
• There is still a problem we have to solve. It may happen that the flag
byte occurs in the data, especially when binary data such as
photographs or songs are being transmitted. This situation would
interfere with the framing.
• One way to solve this problem is to have the sender’s data link layer
insert a special escape byte (ESC) just before each ‘‘accidental’’ flag
byte in the data. Thus, a framing flag byte can be distinguished from
one in the data by the absence or presence of an escape byte before it.
• The data link layer on the receiving end removes the escape bytes
before giving the data to the network layer. This technique is called
byte stuffing.
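The byte-stuffing idea can be sketched as follows; the FLAG and ESC values are assumptions (HDLC-style 0x7E and 0x7D), not something the text mandates:

```python
FLAG = 0x7E   # frame delimiter (assumed value, HDLC-style)
ESC = 0x7D    # escape byte (assumed value)

def byte_stuff(payload: bytes) -> bytes:
    """Sender side: escape accidental FLAG or ESC bytes in the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)      # insert ESC before the accidental byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Receiver side: drop the delimiters and remove the escape bytes."""
    data = frame[1:-1]           # strip leading and trailing FLAG
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == ESC:
            i += 1               # skip the ESC, keep the escaped byte
        out.append(data[i])
        i += 1
    return bytes(out)
```

Any payload, including one containing FLAG or ESC bytes, survives the round trip unchanged.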
Flag bits with bit stuffing
• The third method of delimiting the bit stream gets around a disadvantage of byte
stuffing, which is that it is tied to the use of 8-bit bytes.
• Framing can also be done at the bit level, so frames can contain an arbitrary
number of bits made up of units of any size.
• It was developed for the once very popular HDLC (High level Data Link Control)
protocol.
• Each frame begins and ends with a special bit pattern, 01111110 or 0x7E in
hexadecimal. This pattern is a flag byte. Whenever the sender’s data link layer
encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the
outgoing bit stream.
• This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into
the outgoing character stream before a flag byte in the data. It also ensures a
minimum density of transitions that help the physical layer maintain synchronization.
• USB (Universal Serial Bus) uses bit stuffing for this reason.
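A minimal sketch of the stuffing rule above, operating on a string of '0'/'1' characters for readability:

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')      # stuffed bit, removed by the receiver
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: remove the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        if run == 5:
            i += 1               # skip the stuffed 0
            run = 0
        i += 1
    return ''.join(out)

print(bit_stuff('011111111111'))
```

Because no run of six 1s can appear in stuffed data, the 01111110 flag pattern is guaranteed to occur only at frame boundaries.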
Physical layer coding violations
• With both bit and byte stuffing, a side effect is that the length of a frame now
depends on the contents of the data it carries.
• We can use some reserved signals to indicate the start and end of frames. In effect,
we are using ‘‘coding violations’’ to delimit frames.
• The beauty of this scheme is that, because they are reserved signals, it is easy to find
the start and end of frames and there is no need to stuff the data.
• Many data link protocols use a combination of these methods for safety. A common
pattern used for Ethernet and 802.11 is to have a frame begin with a well-defined
pattern called a preamble. This pattern might be quite long (72 bits is typical for
802.11) to allow the receiver to prepare for an incoming packet. The preamble is then
followed by a length (i.e., count) field in the header that is used to locate the end of
the frame.
ERROR DETECTION AND
CORRECTION
• We will examine four different error-correcting codes:
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes
• All of these codes add redundancy to the information that is sent. A frame consists of m
data (i.e., message) bits and r redundant (i.e. check) bits.
• In a block code, the r check bits are computed solely as a function of the m data bits with
which they are associated, as though the m bits were looked up in a large table to find
their corresponding r check bits.
• In a systematic code, the m data bits are sent directly, along with the check bits, rather
than being encoded themselves before they are sent.
• In a linear code, the r check bits are computed as a linear function of the m data bits.
Exclusive OR (XOR) or modulo 2 addition is a popular choice. This means that encoding can
be done with operations such as matrix multiplications or simple logic circuits.
• Let the total length of a block be n (i.e., n = m + r). We will describe this as an (n,m) code.
• An n-bit unit containing data and check bits is referred to as an n bit codeword. The code
rate, or simply rate, is the fraction of the codeword that carries information that is not
redundant, or m/n. The rates used in practice vary widely. They might be 1/2 for a noisy
channel, in which case half of the received information is redundant, or close to 1 for a
high-quality channel, with only a small number of check bits added to a large message.
Error-Detecting Codes
• We will examine three different error-detecting codes. They are all
linear,systematic block codes:
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs)
Parity Check-1d and 2d
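The slide carries no worked example, so here is a hedged sketch of 1-D (single parity bit) and 2-D (row, column, and corner bits) even parity; the sample block values are made up for illustration:

```python
def parity_bit(bits) -> int:
    """Even parity: the bit that makes the total number of 1s even."""
    return sum(bits) % 2

def two_d_parity(block):
    """2-D parity: one parity bit per row, per column, plus a corner bit."""
    row_par = [parity_bit(row) for row in block]
    col_par = [parity_bit([row[c] for row in block])
               for c in range(len(block[0]))]
    corner = parity_bit(row_par)   # parity of the row-parity bits
    return row_par, col_par, corner

block = [[1, 0, 1, 1],
         [0, 1, 0, 1]]
print(two_d_parity(block))
```

1-D parity detects any single-bit error; 2-D parity can additionally locate a single-bit error at the intersection of the failing row and column.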
Checksum
Checksum is an error detection method. Error detection using the checksum method involves the following steps:
• Step-01: At the sender side, if an m-bit checksum is used, the data unit to be transmitted is divided into segments of m bits. All the m-bit segments are added, and the result of the sum is complemented using 1's complement arithmetic. The value so obtained is called the checksum.
• Step-02: The data along with the checksum value is transmitted to the receiver.
• Step-03: At the receiver side, if an m-bit checksum is being used, the received data unit is divided into segments of m bits. All the m-bit segments are added along with the checksum value. The value so obtained is complemented and the result is checked.
Then, the following two cases are possible:
• Case-01: Result = 0. The receiver assumes that no error occurred in the data during transmission and accepts the data.
• Case-02: Result ≠ 0. The receiver assumes that an error occurred in the data during transmission, discards the data, and asks the sender for retransmission.
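The sender and receiver steps can be sketched with 16-bit segments; the sample segment values below are arbitrary:

```python
def ones_complement_checksum(segments, m=16) -> int:
    """Add m-bit segments with end-around carry, then take the 1's complement."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)   # wrap the carry back in
    return ~total & mask                        # 1's complement of the sum

def receiver_ok(segments, checksum, m=16) -> bool:
    """Receiver adds all segments plus the checksum; a complemented result
    of 0 (i.e., an all-1s sum) means no error was detected."""
    return ones_complement_checksum(segments + [checksum], m) == 0

data = [0x4500, 0x0030]
c = ones_complement_checksum(data)
print(hex(c), receiver_ok(data, c))
```

Flipping any single bit in the data makes the receiver's complemented result non-zero, triggering Case-02 above.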
CRC
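The CRC slide carries no worked example, so here is a sketch of modulo-2 (XOR) long division. The data bits and the generator x^4 + x + 1 (bit pattern 10011) are the common textbook example and are an assumption here:

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Modulo-2 long division of the data by the generator polynomial.

    The sender appends len(generator)-1 zero bits, divides, and transmits
    the data followed by the remainder; the receiver divides the whole
    codeword and checks for an all-zero remainder.
    """
    r = len(generator) - 1
    bits = list(data_bits + '0' * r)          # append r zero bits
    for i in range(len(data_bits)):
        if bits[i] == '1':                    # divide only when leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return ''.join(bits[-r:])                 # the r-bit remainder (the CRC)

rem = crc_remainder('1101011111', '10011')
codeword = '1101011111' + rem
print(rem, crc_remainder(codeword, '10011'))
```

Because the transmitted codeword is exactly divisible by the generator, the receiver's remainder is all zeros when no error occurred.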
Flow Control
• Flow control is a set of procedures that tells the sender how much data it can transmit before the data overwhelms the receiver.
• The receiving device has limited speed and limited memory to store the data. Therefore, the receiving device must be able to inform the sending device to stop the transmission temporarily before those limits are reached.
• It requires a buffer, a block of memory for storing the information until it is processed.
• Two methods have been developed to control the flow of data:
• Stop-and-wait
• Sliding window
Stop-and-wait
• In the Stop-and-wait method, the sender waits for an acknowledgement after every frame it sends.
• Only when an acknowledgement is received is the next frame sent. The process of alternately sending a frame and waiting continues until the sender transmits the EOT (End of Transmission) frame.
• Advantage of Stop-and-wait
• The Stop-and-wait method is simple, as each frame is checked and acknowledged before the next frame is sent.
• Disadvantage of Stop-and-wait
• The Stop-and-wait technique is inefficient, because each frame must travel all the way to the receiver and an acknowledgement must travel all the way back before the next frame is sent. Each frame sent and received uses the entire time needed to traverse the link.
Sliding Window

• The Sliding Window is a method of flow control in which the sender can transmit several frames before getting an acknowledgement. In sliding window control, multiple frames can be sent one after another, so the capacity of the communication channel can be utilized efficiently.
• A single ACK can acknowledge multiple frames. The sliding window refers to imaginary boxes at both the sender and receiver ends.
• The window can hold frames at either end, and it provides the upper limit on the number of frames that can be transmitted before an acknowledgement.
• Frames can be acknowledged even when the window is not completely filled.
• Frames in the window are numbered modulo n, meaning they are numbered from 0 to n-1. For example, if n = 8, the frames are numbered 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1,...
• The size of the window is n-1. Therefore, a maximum of n-1 frames can be sent before an acknowledgement.
• When the receiver sends an ACK, it includes the number of the next frame it wants to receive. For example, to acknowledge the string of frames ending with frame number 4, the receiver sends an ACK containing the number 5. When the sender sees an ACK with the number 5, it knows that frames 0 through 4 have been received.
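The cumulative-ACK arithmetic in the example above (an ACK of 5 covering frames 0 through 4) can be sketched as:

```python
def frames_acknowledged(first_unacked: int, ack_num: int, n: int = 8) -> int:
    """Number of frames a cumulative ACK covers, sequence numbers mod n.

    The ACK carries the number of the next frame the receiver expects,
    so ACK 5 while frame 0 is the oldest unacknowledged frame covers 0..4.
    """
    return (ack_num - first_unacked) % n

print(frames_acknowledged(0, 5))   # ACK 5, oldest unacked 0: frames 0..4
print(frames_acknowledged(6, 2))   # ACK 2, oldest unacked 6: frames 6,7,0,1
```

The modulo handles the wraparound of sequence numbers from n-1 back to 0.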
Sender Window
• At the beginning of a transmission, the sender window contains n-1 frames. As frames are sent out, the left boundary moves inward, shrinking the window. For example, if the window size is w and three frames have been sent out, then the number of frames left in the sender window is w-3.
• Once an ACK arrives, the sender window expands by the number of frames acknowledged by that ACK.
• For example, suppose the window size is 7 and frames 0 through 4 have been sent with no acknowledgement: the sender window then contains only two frames, 5 and 6. If an ACK numbered 4 now arrives, meaning frames 0 through 3 arrived undamaged, the sender window expands to include the next four frames, so it contains six frames (5, 6, 7, 0, 1, 2).
Receiver Window

• At the beginning of transmission, the receiver window does not contain n frames; it contains n-1 spaces for frames.
• When a new frame arrives, the size of the window shrinks.
• The receiver window does not represent the number of frames received; it represents the number of frames that can be received before an ACK is sent. For example, if the window size is w and three frames are received, then the number of spaces available in the window is w-3.
• Once the acknowledgement is sent, the receiver window expands by the number of frames acknowledged.
• Suppose the window size is 7, so the receiver window contains seven spaces for seven frames. If one frame is received, the receiver window shrinks, moving the boundary from 0 to 1; the window now contains six spaces. If frames 0 through 4 have been received, the window contains two spaces before an acknowledgement is sent.
Error Control (Automatic Repeat
Request)
Stop-and-wait ARQ
• Stop-and-wait ARQ is a technique used to retransmit the data in case
of damaged or lost frames.

• This technique works on the principle that the sender will not
transmit the next frame until it receives the acknowledgement of the
last transmitted frame.
Four features are required for the retransmission:
• The sending device keeps a copy of the last transmitted frame until the acknowledgement is received. Keeping the copy allows the sender to retransmit the data if the frame is not received correctly.
• Both the data frames and the ACK frames are numbered alternately 0 and 1 so that they can be identified individually. An ACK 1 frame acknowledges the data 0 frame, meaning that the data 0 frame arrived correctly and the receiver expects data frame 1.
• If an error occurs in the last transmitted frame, the receiver sends a NAK (Negative Acknowledgement) frame, which is not numbered. On receiving a NAK frame, the sender retransmits the data.
• It works with a timer. If the acknowledgement is not received within the allotted time, the sender assumes that the frame was lost during transmission and retransmits it.
Sliding Window ARQ
• Sliding Window ARQ is a technique used for continuous transmission error control.
Three features used for retransmission:
• The sender keeps copies of all transmitted frames until they have been acknowledged. Suppose frames 0 through 4 have been transmitted and the last acknowledgement was for frame 2; the sender must keep copies of frames 3 and 4 until they are received correctly.
• The receiver can send either a NAK or an ACK, depending on the conditions. A NAK frame tells the sender that the data was received damaged. Since sliding window is a continuous transmission mechanism, both ACK and NAK must be numbered to identify a frame. An ACK frame carries the number of the next frame the receiver expects to receive; a NAK frame carries the number of the damaged frame.
• Sliding window ARQ is equipped with a timer to handle lost acknowledgements. Suppose n-1 frames have been sent before any acknowledgement is received. The sender starts the timer and waits before sending any more frames. If the allotted time runs out, the sender retransmits one or all of the frames, depending on the protocol used.
GO Back N(ARQ)
• In the Go-Back-N ARQ protocol, if
one frame is lost or damaged, the
sender retransmits all frames sent
after the last one for which it
received a positive ACK.
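A toy sketch of the Go-Back-N retransmission rule, ignoring sequence-number wraparound for simplicity:

```python
def go_back_n_retransmit(sent, last_acked: int):
    """Frames the sender must resend after a timeout in Go-Back-N.

    Everything after the last positively acknowledged frame goes again,
    even frames that may have arrived at the receiver correctly.
    """
    return [f for f in sent if f > last_acked]

# Frames 0..4 were sent; the last ACK covered frame 1, so 2, 3, 4 go again.
print(go_back_n_retransmit([0, 1, 2, 3, 4], last_acked=1))
```

This wasteful resending of possibly-correct frames is exactly what Selective Reject (next slide) avoids.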
Selective Reject
• The Selective-Reject ARQ technique is more
efficient than Go-Back-N ARQ.
• In this technique, only those frames are
retransmitted for which a negative
acknowledgement (NAK) has been
received.
• The receiver's storage buffer holds the
frames received after a damaged frame
until the frame in error is correctly received.
• The receiver must have appropriate
logic for reinserting the frames in the correct
order.
• The sender must have a searching
mechanism that selects only the
requested frame for retransmission.
High-level Data Link Control (HDLC)
• High-level Data Link Control (HDLC) is a group of communication
protocols of the data link layer for transmitting data between network
points or nodes. Since it is a data link protocol, data is organized into
frames.
• A frame is transmitted via the network to the destination that verifies
its successful arrival. It is a bit - oriented protocol that is applicable for
both point - to - point and multipoint communications.
• Transfer Modes:
• HDLC supports two types of transfer modes, normal response mode and
asynchronous balanced mode.
Normal Response Mode (NRM) −
• Here, there are two types of
stations: a primary station that
sends commands and secondary
stations that respond to
received commands. It is used
for both point-to-point and
multipoint communications.
Asynchronous Balanced Mode (ABM)
• Here, the configuration is
balanced, i.e., each station can
both send commands and
respond to commands. It is used
only for point-to-point
communications.
HDLC (High-level Data Link Control)Frame
• HDLC is a bit - oriented protocol where each frame
contains up to six fields. The structure varies
according to the type of frame. The fields of a HDLC
frame are −
• Flag − It is an 8-bit sequence that marks the
beginning and the end of the frame. The bit pattern
of the flag is 01111110.
• Address − It contains the address of the receiver. If
the frame is sent by the primary station, it contains
the address(es) of the secondary station(s). If it is
sent by the secondary station, it contains the address
of the primary station. The address field may be from
1 byte to several bytes.
• Control − It is 1 or 2 bytes containing flow and error
control information.
• Payload − This carries the data from the network
layer. Its length may vary from one network to
another.
• FCS (Frame Check Sequence)− It is a 2 byte or 4 bytes
frame check sequence for error detection. The
standard code used is CRC (cyclic redundancy code)
Types of HDLC Frames
• There are three types of HDLC frames. The type of
frame is determined by the control field of the
frame
• I-frame − I-frames or Information frames carry
user data from the network layer. They also
include flow and error control information that is
piggybacked on user data. The first bit of control
field of I-frame is 0.
• S-frame − S-frames or Supervisory frames do not
contain information field. They are used for flow
and error control when piggybacking is not
required. The first two bits of control field of S-
frame is 10.
• U-frame − U-frames or Unnumbered frames are
used for a variety of miscellaneous
functions, such as link management. A U-frame
may contain an information field, if required.
The first two bits of the control field of a
U-frame are 11.
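The control-field test described above can be sketched as follows; treating the control byte MSB-first (so "the first bit" is bit 7) is an assumption about bit ordering made for illustration:

```python
def hdlc_frame_type(control: int) -> str:
    """Classify an HDLC frame from the leading bits of its 8-bit control field.

    Assumption: the control byte is given MSB-first, so 'the first bit'
    examined is bit 7 of the integer value.
    """
    if (control >> 7) == 0:
        return 'I-frame'             # first bit 0: information frame
    if (control >> 6) == 0b10:
        return 'S-frame'             # first two bits 10: supervisory frame
    return 'U-frame'                 # first two bits 11: unnumbered frame

print(hdlc_frame_type(0b01110010))
```

On the wire, HDLC transmits bits LSB-first, so a real implementation must account for the actual transmission order.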
Multiple Access Control
• The data link layer is separated into two sub-layers. The upper sub-layer, responsible for flow control and error control, is called the logical link control (LLC) layer.
• The lower sub-layer, responsible for multiple access resolution, is called the media access control (MAC) layer.
• Media access control protocols fall into three categories: random access, controlled access, and channelization protocols.
Random Access Protocol
• In a random access protocol, all stations are equal: no station controls or depends on another. However, if more than one station attempts to transmit at the same time, there is an access conflict (a collision), and the frames are either lost or changed.
• Random access protocols are divided into the following categories:
• ALOHA
• Pure ALOHA
• Slotted ALOHA
• CSMA (carrier sense multiple access)
• CSMA/CD
• CSMA/CA
Note:
• When a collision is detected, CSMA/CD tells the station what to do. CSMA/CA attempts to prevent a collision from happening in the first place.
Aloha

• ALOHA is a multiple access protocol used for random
access in the network. ALOHA was designed for the wireless local
area network (WLAN), also known as radio broadcast.
In this method, any user can transmit data at any time.
• Types of ALOHA:
• 1. Pure ALOHA
• 2. Slotted ALOHA
Pure Aloha:
• Pure ALOHA is also called the
original ALOHA protocol. It is a
simple but elegant protocol:
whenever the system has a data
frame to send, it transmits the
frame immediately.
• Because of this, the risk of collision
is very high in this ALOHA method.
• Pure ALOHA is used whenever data is available for sending over a
channel at a station. In pure ALOHA, each station transmits data to the
channel without checking whether the channel is idle, so collisions
may occur and data frames can be lost.
• When a station transmits a data frame to the channel, pure
ALOHA waits for the receiver's acknowledgment.
• If the acknowledgment does not arrive within the specified time,
the station assumes the frame has been lost or destroyed and waits
for a random amount of time, called the backoff time (Tb). It then
retransmits the frame until all the data is successfully delivered to
the receiver.
Slotted Aloha
• Slotted ALOHA was developed to
improve the efficiency of pure
ALOHA. In slotted ALOHA, time is
divided into slots so that a station
can send only one frame per slot,
and the frame can only be sent at
the beginning of the slot.
• If a station cannot send a frame at the
beginning of a slot, it has to wait for
the next slot to start. If two stations
try to transmit at the beginning of
the same time slot, a collision still
occurs, but slotted ALOHA is better
than pure ALOHA because it has a
lower chance of collision.
Aloha Rules-Concluded
• Any station can transmit data to the channel at any time.
• It does not require any carrier sensing.
• Collisions can occur, and data frames may be lost, when multiple
stations transmit at once.
• ALOHA relies on acknowledgment of frames; there is no
collision detection.
• It requires retransmission of data after a random amount of time.
CSMA (Carrier Sense Multiple Access)

• CSMA is a media access protocol in which a station senses the
traffic on a channel (idle or busy) before transmitting
data.
• If the channel is idle, the station can send data to the
channel. Otherwise, it must wait until the channel becomes idle.
This reduces the chances of a collision on the transmission
medium.
CSMA Access Modes

• 1-Persistent: Each node first senses the shared channel; if the channel is idle, it immediately
sends the data. Otherwise, it continuously monitors the channel and transmits the frame
unconditionally as soon as the channel becomes idle.
• Non-Persistent: Each node senses the channel before transmitting; if the channel is idle, it
immediately sends the data. Otherwise, the station waits for a random time (rather than
sensing continuously), then senses again and transmits when the channel is found idle.
• P-Persistent: A combination of the 1-persistent and non-persistent modes. Each node senses
the channel, and if the channel is idle, it sends a frame with probability p. With probability
q = 1-p it defers to the next time slot and repeats the process.
• O-Persistent: A transmission order (priority) is assigned to each station before transmission
on the shared channel. When the channel is found idle, each station waits for its turn to
transmit the data.
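A single decision step of the p-persistent mode can be sketched as:

```python
import random

def p_persistent_decision(channel_idle: bool, p: float) -> str:
    """One decision step of p-persistent CSMA.

    When the slot is idle, transmit with probability p; otherwise defer
    to the next slot (probability q = 1 - p). When busy, keep sensing.
    """
    if not channel_idle:
        return 'keep sensing'
    return 'transmit' if random.random() < p else 'wait for next slot'

print(p_persistent_decision(True, 0.5))
```

Lowering p reduces the chance that several waiting stations all transmit in the same slot, at the cost of added delay.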
CSMA/ CD
• CSMA/CD (carrier sense multiple access with collision detection) is a
network protocol for transmitting data frames that works in the
medium access control layer.
• A station first senses the shared channel before broadcasting a
frame; if the channel is idle, it transmits the frame while monitoring
whether the transmission succeeds.
• If the frame is successfully received, the station can send the next frame.
If a collision is detected, the station sends a jam/stop signal to the
shared channel to terminate the data transmission. After
that, it waits for a random time before sending the frame again.
• The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the
channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes
idle.
• If the channel is idle, the station starts transmitting and continually
monitors the channel to detect a collision.
• If a collision is detected, the station starts the collision resolution
algorithm.
• If no collision is detected, the station resets the retransmission
counter and completes frame transmission.
• The algorithm of Collision Resolution is:
• The station continues transmitting the current frame for a specified
time along with a jam signal, to ensure that all the other stations
detect the collision.
• The station increments the retransmission counter.
• If the maximum number of retransmission attempts is reached, the
station aborts transmission.
• Otherwise, the station waits for a backoff period, which is generally a
function of the number of collisions, and restarts the main algorithm.
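The backoff period "as a function of the number of collisions" is classically binary exponential backoff; the cap of 10 doublings below is an assumption matching classic Ethernet practice:

```python
import random

def backoff_slots(collisions: int, max_exp: int = 10) -> int:
    """Binary exponential backoff as used by classic Ethernet CSMA/CD.

    After the n-th collision, wait a random number of slot times drawn
    uniformly from 0 .. 2^min(n, max_exp) - 1.
    """
    k = min(collisions, max_exp)
    return random.randrange(2 ** k)

# After 3 collisions the station waits between 0 and 7 slot times.
print(backoff_slots(3))
```

Doubling the range after each collision spreads contending stations out, so repeated collisions become exponentially less likely.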
CSMA/ CA
• CSMA/CA (carrier sense multiple access with collision avoidance) is a
network protocol for carrier transmission of data frames that works
with the medium access control layer. When a data frame is sent on
a channel, the sender relies on an acknowledgment to learn whether
the channel was clear.
• If the station detects only a single (its own) signal, the data frame has
been successfully transmitted to the receiver. If two signals overlap
(its own and another station's), a collision of frames has occurred on
the shared channel; the sender detects this when the expected
acknowledgment signal does not arrive.
• Algorithm of CSMA/CA is:
• When a frame is ready, the transmitting station checks whether the
channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station waits for an inter-frame gap (IFG)
amount of time and then sends the frame.
• After sending the frame, it sets a timer.
• The station then waits for an acknowledgement from the receiver. If the
acknowledgement arrives before the timer expires, the transmission is
marked successful.
• Otherwise, it waits for a back-off time period and restarts the
algorithm.
Controlled Access Protocol

• In controlled access, the stations consult one another to find which
station has the right to send. Only one node may send at a time, to avoid
collisions of messages on the shared medium. The three controlled-access methods are:
• Reservation
• Polling
• Token Passing
Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• The timeline has two kinds of periods:
• A reservation interval of fixed length.
• A data transmission period of variable-length frames.
• If there are N stations, the reservation interval is divided into N slots, one per station.
• If station 1 has a frame to send, it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into
the i-th slot. After all N slots have been checked, each station knows which stations wish to
transmit.
• The stations that have reserved their slots transfer their frames in that order.
• After the data transmission period, the next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any collisions.
• The following figure shows a situation with five stations and a five-slot
reservation frame. In the first interval, only stations 1, 3, and 4 have
made reservations. In the second interval, only station 1 has made a
reservation.
Polling

• The polling process is similar to the roll call performed in a class: just like the
teacher, a controller sends a message to each node in turn.
• One node acts as the primary station (controller) and the others are
secondary stations. All data exchanges must be made through the
controller.
• The message sent by the controller contains the address of the node
being selected for granting access.
• Although all nodes receive the message, only the addressed one
responds to it and sends data, if any. If there is no data, usually a "poll
reject" (NAK) message is sent back.
• Problems include the high overhead of the polling messages and high
dependence on the reliability of the controller.
Efficiency: Let Tpoll be the time spent polling and Tt be the time
required for transmission of data. Then, Efficiency = Tt/(Tt + Tpoll).
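The efficiency formula above as a one-liner, with arbitrary sample times chosen for illustration:

```python
def polling_efficiency(t_t: float, t_poll: float) -> float:
    """Efficiency = Tt / (Tt + Tpoll): fraction of time spent on data."""
    return t_t / (t_t + t_poll)

# E.g. 8 time units of data transmission per 2 units of polling overhead.
print(polling_efficiency(8.0, 2.0))   # 0.8
```

As the polling overhead Tpoll grows relative to Tt, efficiency falls toward zero, which is the "high overhead" problem noted above.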
Token Passing
• In the token passing scheme, the stations are logically connected to each other in the form of
a ring, and access to the medium is governed by tokens.
• A token is a special bit pattern or a small message that circulates from one station to the
next in some predefined order.
• In a token ring, the token is passed from one station to the adjacent station in the ring,
whereas in a token bus, each station uses the bus to send the token to the next
station in some predefined order.
• In both cases, the token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the token
to the next station. If it has no queued frame, it simply passes the token on.
• After sending a frame, each station must wait for all N stations (including itself) to send the
token to their neighbours and the other N-1 stations to send a frame, if they have one.
• Problems such as duplication or loss of the token, and insertion or removal of a station,
need to be handled for correct and reliable operation of this scheme.
• Performance of a token ring can be characterised by two parameters:
1. Delay, which is a measure of the time between when a packet is ready and
when it is delivered. The average time (delay) required to send a token to the
next station = a/N.
2. Throughput, which is a measure of the successful traffic:
• S = 1/(1 + a/N) for a < 1
• S = 1/{a(1 + 1/N)} for a > 1
• where N = number of stations and a = Tp/Tt
• (Tp = propagation delay and Tt = transmission delay)
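The two throughput cases can be computed with a short Python sketch (the values of a and N below are chosen only for illustration):

```python
# Token ring throughput: S = 1/(1 + a/N) for a < 1,
# and S = 1/{a(1 + 1/N)} for a > 1, with a = Tp/Tt and N stations.

def token_ring_throughput(a, n):
    if a < 1:
        return 1.0 / (1.0 + a / n)
    return 1.0 / (a * (1.0 + 1.0 / n))

print(token_ring_throughput(0.2, 10))  # 1/(1 + 0.02) ≈ 0.980
print(token_ring_throughput(2.0, 10))  # 1/(2 × 1.1) ≈ 0.455
```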
• Transmission delay can be calculated as follows −
• Transmission delay = Packet size / Bandwidth
• Example: we have a 12,000-bit Ethernet packet being sent out on a
100 Mbps = 100 × 10^6 bps link.
• So the transmission delay = 12,000 / (100 × 10^6) = 0.12 milliseconds

• Propagation delay is defined as the flight time of packets over the
transmission link and is limited by the speed of light. For example, if
the source and destination are in the same building at a distance of
200 m, the propagation delay will be ∼1 μsec.
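The two worked examples can be reproduced with a short sketch (a propagation speed of 2 × 10^8 m/s — roughly two-thirds of the speed of light in a typical medium — is assumed, which matches the ∼1 μs figure):

```python
# Transmission delay = packet size / bandwidth;
# propagation delay = distance / propagation speed.

def transmission_delay(bits, bandwidth_bps):
    return bits / bandwidth_bps

def propagation_delay(distance_m, speed_mps=2e8):
    # 2e8 m/s assumed for the medium (not given in the slides)
    return distance_m / speed_mps

print(transmission_delay(12_000, 100e6) * 1e3)  # 0.12 ms
print(propagation_delay(200) * 1e6)             # 1.0 microsecond
```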
Link Layer Addressing : ARP(Address
Resolution Protocol)
• ARP stands for Address Resolution Protocol, which is used to find the
MAC address of the device from its known IP address. This means, the
source device already knows the IP address but not the MAC address
of the destination device.
• The MAC address of the device is required because you cannot
communicate with a device in a local area network (Ethernet) without
knowing its MAC address. So, the Address Resolution Protocol helps to
obtain the MAC address of the destination device.
Purpose of ARP
• The purpose of ARP is to convert the 32-bit logical address (IPv4
address) to the 48-bit physical address (MAC address). This protocol
works between layer 2 and layer 3 of the OSI model.
• The MAC address resides at layer 2, which is also known as the data
link layer and IP address resides at layer 3, this layer is also known as
the network layer.
Example:
•Suppose two devices (device A and device B) want to communicate
with each other. The device A already knows the IP address of the
Device B. But in order to communicate with the device B, device A still
needs the MAC address of the device B.
•The IP address is used to locate a device on a local area network and
the MAC address is used to identify the actual device.
•Device A first looks at its internal list, known as the ARP cache (table),
to check whether the IP address of device B is already mapped to a MAC
address. If the ARP table contains the MAC address of device B, then
device A simply uses that MAC address and starts communication.
• If the table does not contain the MAC address of device B, then
device A sends an ARP broadcast message on the network to ask which
device has that specific IP address and to request the MAC address of
that particular device.
• The device whose IP address matches the requested address then sends
an ARP response message (unicast) that contains the MAC address of
device B.
• When device A obtains the MAC address of the device B, it will store the
information in the ARP cache (table). The ARP cache is used to make the
network more efficient. It stores the IP address of the device along with
its MAC address. The stored information is used when device A wants to
communicate with device B on a network, and it does not need to
broadcast a message on the network again. It will simply check the ARP
cache for the entries and then use it for communication.
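The cache-first lookup described above can be sketched in a few lines of Python (the addresses and the `broadcast_lookup` callback are made up for illustration; this is not a real ARP implementation):

```python
arp_cache = {}  # IP address -> MAC address

def resolve(ip, broadcast_lookup):
    """Return the MAC for `ip`, consulting the ARP cache first."""
    if ip in arp_cache:            # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = broadcast_lookup(ip)     # ARP request (broadcast), reply (unicast)
    arp_cache[ip] = mac            # store the answer for next time
    return mac

# Pretend the network answers our broadcast (made-up addresses):
fake_network = {"192.168.1.7": "aa:bb:cc:dd:ee:ff"}
print(resolve("192.168.1.7", fake_network.get))  # broadcast, then cached
print(resolve("192.168.1.7", fake_network.get))  # served from the cache
```

The second call never touches the network, which is exactly the efficiency gain the ARP cache provides.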
Types of Mapping in ARP
• There are two different ways to map the IP address into the MAC
address, which are given below:

• Static Mapping
• Dynamic Mapping
• Static Mapping - In static mapping, a table contains the logical
address and the corresponding physical address of the destination device.
Here, the IP and MAC addresses of the device are entered manually into
an ARP table. The source device accesses this table first when it wants
to communicate with the destination device.

• Dynamic Mapping - In dynamic mapping, if a device knows the
logical address of another device, then by using the Address
Resolution Protocol it can also find the physical address of that
device. The dynamic entries are created automatically when the
source device sends an ARP broadcast request. These entries are not
permanent and are cleared periodically.
Ethernet Devices
• Ethernet is a type of communication
protocol, created at Xerox PARC in
1973 by Robert Metcalfe and others, which
connects computers on a network over a
wired connection. It is a widely used LAN
protocol, which was originally known as
the Alto Aloha Network. It connects
computers within the local area network
and wide area network. Numerous devices
like printers and laptops can be connected
by LAN and WAN within buildings, homes,
and even small neighborhoods.
• Hub – no intelligence, no routing: it broadcasts every incoming frame
out of all ports (study its features, types, advantages and
disadvantages).
• Switch – has intelligence: no broadcast; it forwards frames only to the
destination port using its MAC address table.
• Router – most efficient: it reads the packet, uses routing information
to choose the best path, and operates at the network layer.
Dynamic Host Configuration
Protocol(DHCP)
• Dynamic Host Configuration Protocol(DHCP) is an application layer protocol which is
used to provide:

• Subnet Mask (Option 1 – e.g., 255.255.255.0)


• Router Address (Option 3 – e.g., 192.168.1.1)
• DNS Address (Option 6 – e.g., 8.8.8.8)
• Vendor Class Identifier (Option 43 – e.g., ‘unifi’ = 192.168.1.9 ##where unifi = controller)
• DHCP is based on a client-server model and on the discover, offer, request, and ACK exchange.
• The DHCP port number for the server is 67 and for the client is 68. It is a client-server protocol
which uses UDP services. The IP address is assigned from a pool of addresses. (In DHCP, the
client and the server exchange mainly 4 DHCP messages in order to make a connection,
also called the DORA (Discover, Offer, Request, Acknowledge) process, although there are 8 DHCP
message types in total.)
DHCP does the following:
• DHCP manages the provision of all the nodes or devices added or
dropped from the network.
• DHCP maintains the unique IP address of the host using a DHCP server.
• A client/node/device that is configured to work with DHCP sends a request to
the DHCP server whenever it connects to a network. The server
acknowledges by providing an IP address to the client/node/device.
• DHCP is also used to configure the proper subnet mask, default gateway
and DNS server information on the node or device.
• Versions of DHCP are available for use with IPv4 (Internet
Protocol version 4) and IPv6 (Internet Protocol version 6).
How DHCP works
• DHCP runs at the application layer of the TCP/IP protocol stack to
dynamically assign IP addresses to DHCP clients/nodes and to
allocate TCP/IP configuration information to the DHCP clients.
Information includes subnet mask information, default gateway,
IP addresses and domain name system addresses.

• DHCP is based on a client-server protocol in which servers manage a
pool of unique IP addresses, as well as information about client
configuration parameters, and assign addresses out of those
address pools.
DHCP lease process
• First of all, a client (network device) must be connected to the internet.
• DHCP clients request an IP address. Typically, client broadcasts a query
for this information.
• DHCP server responds to the client request by providing IP server address
and other configuration information. This configuration information also
includes time period, called a lease, for which the allocation is valid.
• When refreshing an assignment, a DHCP client requests the same
parameters, but the DHCP server may assign a new IP address, based
on the policies set by the administrator.
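The DORA exchange described above can be sketched as a toy message sequence (message names only; real DHCP packets carry many more fields, and the pool of addresses here is made up):

```python
def dora(server_pool):
    """Return the (simplified) message trace and the leased IP."""
    messages = ["DHCPDISCOVER"]                   # client broadcast
    offered_ip = server_pool.pop(0)               # server picks a free address
    messages.append(f"DHCPOFFER {offered_ip}")    # server -> client
    messages.append(f"DHCPREQUEST {offered_ip}")  # client accepts the offer
    messages.append(f"DHCPACK {offered_ip}")      # server confirms the lease
    return messages, offered_ip

msgs, ip = dora(["192.168.1.100", "192.168.1.101"])
print(ip)  # 192.168.1.100
for m in msgs:
    print(m)
```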
Components of DHCP
• DHCP Server: A DHCP server is a networked device running the DHCP service that holds IP addresses and
related configuration information. This is typically a server or a router but could be anything that acts
as a host, such as an SD-WAN appliance.
• DHCP client: DHCP client is the endpoint that receives configuration information from a DHCP server.
This can be any device like computer, laptop, IoT endpoint or anything else that requires connectivity to
the network. Most of the devices are configured to receive DHCP information by default.
• IP address pool: IP address pool is the range of addresses that are available to DHCP clients. IP
addresses are typically handed out sequentially from lowest to the highest.
• Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep networks
manageable.
• Lease: Lease is the length of time for which a DHCP client holds the IP address information. When a
lease expires, the client has to renew it.
• DHCP relay: A host or router that listens for client messages being broadcast on that network and then
forwards them to a configured server. The server then sends responses back to the relay agent that
passes them along to the client. DHCP relay can be used to centralize DHCP servers instead of having a
server on each subnet.
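A minimal sketch (an assumed design, not a real server) of how the IP address pool and lease components listed above might fit together:

```python
import time

class LeasePool:
    """Assumed design: an IP address pool plus a lease table with expiry times."""
    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)        # addresses handed out lowest-first
        self.leases = {}                   # ip -> expiry timestamp
        self.lease_seconds = lease_seconds

    def allocate(self, now=None):
        now = time.time() if now is None else now
        # reclaim expired leases before handing out a new address
        for ip, expiry in list(self.leases.items()):
            if expiry <= now:
                del self.leases[ip]
                self.free.append(ip)
        ip = self.free.pop(0)
        self.leases[ip] = now + self.lease_seconds
        return ip

pool = LeasePool(["10.0.0.2", "10.0.0.3"], lease_seconds=3600)
print(pool.allocate(now=0))   # 10.0.0.2
print(pool.allocate(now=0))   # 10.0.0.3
```

When a lease expires without renewal, the address goes back into the pool — which is why clients must renew before the lease runs out.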
Benefits of DHCP
• Centralized administration of IP configuration: DHCP IP configuration information
can be stored in a single location and enables that administrator to centrally
manage all IP address configuration information.
• Dynamic host configuration: DHCP automates the host configuration process and
eliminates the need to manually configure individual hosts when TCP/IP
(Transmission Control Protocol/Internet Protocol) is first deployed or when IP
infrastructure changes are required.
• Seamless IP host configuration: The use of DHCP ensures that DHCP clients get
accurate and timely IP configuration parameters such as the IP address,
subnet mask, default gateway, IP address of the DNS server and so on without user
intervention.
• Flexibility and scalability: Using DHCP gives the administrator increased flexibility,
allowing the administrator to easily change the IP configuration when the
infrastructure changes.
MAC Sublayer frame of IEEE 802.11
• IEEE 802.11 standard, popularly known as WiFi, lays down the
architecture and specifications of wireless LANs (WLANs). WiFi or
WLAN uses high frequency radio waves instead of cables for
connecting the devices in LAN. Users connected by WLANs can move
around within the area of network coverage.

• The 802.11 MAC sublayer provides an abstraction of the physical
layer to the logical link control sublayer and upper layers of the OSI
network. It is responsible for encapsulating frames and describing
frame formats.
MAC Sublayer frame of IEEE 802.11
• The main fields of a frame of wireless LANs as laid down by IEEE 802.11 are −
• Frame Control − It is a 2-byte starting field composed of 11 subfields. It
contains control information of the frame.
• Duration − It is a 2-byte field that specifies the time period for which the
frame and its acknowledgement occupy the channel.
• Address fields − There are three 6-byte address fields containing addresses of
source, immediate destination and final endpoint respectively.
• Sequence − It is a 2-byte field that stores the frame numbers.
• Data − This is a variable-sized field that carries the data from the upper layers.
The maximum size of the data field is 2312 bytes.
• Check Sequence − It is a 4-byte field containing error detection information.
Avoidance of Collisions by 802.11
MAC Sublayer
• In wireless systems, the method of collision detection does not work. It uses a protocol
called carrier sense multiple access with collision avoidance (CSMA/CA).
• The method of CSMA/CA is −
• When a frame is ready, the transmitting station checks whether the channel is idle or
busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station waits for an Inter-frame gap (IFG) amount of time and
then sends the frame.
• After sending the frame, it sets a timer.
• The station then waits for acknowledgement from the receiver. If it receives the
acknowledgement before expiry of timer, it marks a successful transmission.
• Otherwise, it waits for a back-off time period and restarts the algorithm.
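The steps above can be sketched as a simplified model (channel sensing and the inter-frame gap are not modelled, and the binary-exponential back-off policy is an assumption of this sketch):

```python
import random

def csma_ca_send(ack_received, max_attempts=5):
    """Simplified CSMA/CA sender: returns (attempts_used, total_backoff_slots)."""
    total_backoff = 0
    for attempt in range(1, max_attempts + 1):
        # (channel sensing and the inter-frame gap happen here - not modelled)
        # transmit the frame, start the timer, then wait for the ACK:
        if ack_received():
            return attempt, total_backoff   # ACK before timer expiry: success
        # timer expired: choose a random back-off before the next attempt
        total_backoff += random.randint(0, 2 ** attempt - 1)
    return None, total_backoff              # gave up after max_attempts

# Assumed scenario: the first ACK is lost, the second arrives.
acks = iter([False, True])
attempts, _ = csma_ca_send(lambda: next(acks))
print(attempts)  # 2
```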
Co-ordination Functions in 802.11 MAC Sublayer

• IEEE 802.11 MAC Sublayer uses two co-ordination functions for collision
avoidance before transmission −

• Distributed Coordination Function (DCF) −


• It is a mandatory function used in CSMA/CA.
• It is used in distributed contention-based channel access.
• It is deployed in both Infrastructure BSS (basic service set) as well as Independent BSS.
• Point Coordination Function (PCF) −
• It is an optional function used by 802.11 MAC Sublayer.
• It is used in centralized contention-free channel access.
• It is deployed in Infrastructure BSS only.
Frame Format of Classic Ethernet
• Classic Ethernet is the original form of Ethernet used primarily in
LANs. It provides data rates between 3 and 10 Mbps. It operates both in
the physical layer and in the MAC sublayer of the OSI model. In the
physical layer, the features of the cables and networks are considered.
In the MAC sublayer, the frame formats for the Ethernet data frame are
laid down.
• Classic Ethernet was first standardized in the 1980s as the IEEE 802.3
standard.
Frame Format of Classic Ethernet
• Classic Ethernet frames can be either of Ethernet (DIX) or of IEEE 802.3 standard. The frames of the
two standards are very similar except for one field. The main fields of a frame of classic Ethernet are

• Preamble − It is the starting field that provides alert and timing pulse for transmission. In case of
Ethernet (DIX) it is an 8 byte field and in case of IEEE 802.3 it is of 7 bytes.
• Start of Frame Delimiter (SOF) − It is a 1 byte field in an IEEE 802.3 frame that contains an
alternating pattern of ones and zeros ending with two ones.
• Destination Address − It is a 6 byte field containing physical address of destination stations.
• Source Address − It is a 6 byte field containing the physical address of the sending station.
• Type/Length − This is a 2 byte field. In case of Ethernet (DIX), the field is type that instructs the
receiver which process to give the frame to. In case of IEEE 802.3, the field is length that stores the
number of bytes in the data field.
• Data − This is a variable-sized field that carries the data from the upper layers.
The maximum size of the data field is 1500 bytes.
• Padding − This is added to the data to bring its length to the minimum requirement of 46 bytes.
• CRC − CRC stands for cyclic redundancy check. It contains the error detection information.
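As an illustration of the fixed-size header fields listed above, a short sketch that unpacks the destination address, source address and type of a DIX Ethernet frame (the frame bytes are made up for the example):

```python
import struct

def parse_ethernet_header(frame):
    """Unpack destination (6 B), source (6 B) and type (2 B) fields."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), hex(ethertype)

# Made-up frame: broadcast destination, IPv4 ethertype 0x0800.
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0800") + b"payload"
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', 'aa:bb:cc:dd:ee:ff', '0x800')
```

The `!` in the format string selects network (big-endian) byte order, which is how the Type field is transmitted on the wire.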
