Computer Network (Module-3)
MODULE-3
The Data Link Layer in the OSI model is responsible for node-to-node communication, error
detection, and error correction. It ensures that data is transmitted from one device to another
without errors. However, errors can occur during transmission due to various factors like noise,
interference, or signal degradation.
Framing:
Framing is a technique used in the Data Link Layer to divide a continuous stream of data into
manageable chunks, or "frames," for easier transmission and error detection. This ensures that
the data is transmitted in an organized and recognizable format, allowing the receiver to know
where each frame starts and ends. However, certain challenges like accidental data patterns
resembling frame delimiters require special techniques like character stuffing and bit stuffing.
Types of Framing:
1. Fixed-Size Framing:
i. Frames have a constant, predefined size.
ii. The receiver doesn’t need to identify frame boundaries since every frame has
the same length.
iii. Example: In some low-level protocols, each frame might always be 512 bytes.
2. Variable-Size Framing:
i. Frames vary in size, so special markers are needed to identify the beginning and
end of a frame.
ii. This is where character stuffing and bit stuffing become useful.
Bit Stuffing:
Bit stuffing is used in protocols that work at the bit level. It deals with situations where specific
bit patterns (such as 01111110, used as a frame boundary in HDLC) are used to delimit frames.
If the data contains the same bit pattern as the flag, it could confuse the receiver. Bit stuffing
avoids this by inserting extra bits into the data stream whenever a certain pattern appears.
Example of Bit Stuffing:
• Frame boundary pattern: Assume 01111110 is the flag that denotes the start and end
of a frame.
• Original data: If the data contains a sequence of 11111, which could be mistaken for
part of the boundary, bit stuffing is applied.
• After encountering five consecutive 1s, an extra 0 is inserted to prevent confusion.
• Before bit stuffing: 01111110 11111100
• After bit stuffing: 01111110 111110100 (stuffing a 0 after five consecutive 1s)
The receiver knows to remove the extra 0 to retrieve the original data.
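The stuffing rule described above (insert a 0 after every run of five consecutive 1s, and remove it again at the receiver) can be sketched in Python. This is an illustrative sketch operating on bit strings, not production framing code:

```python
def bit_stuff(data: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the data bits."""
    stuffed, ones = [], 0
    for bit in data:
        stuffed.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            stuffed.append("0")  # stuffed bit, removed by the receiver
            ones = 0
    return "".join(stuffed)

def bit_unstuff(data: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, ones, skip = [], 0, False
    for bit in data:
        if skip:          # this bit is a stuffed 0 -- drop it
            skip = False
            ones = 0
            continue
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)
```

Running `bit_stuff("11111100")` reproduces the example in the text, producing `"111110100"`, and `bit_unstuff` recovers the original data.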
Use of Character and Bit Stuffing:
Both techniques are used to ensure that the data can be reliably framed without confusing the
receiver. They allow for:
• Correct framing: Even if the actual data contains sequences similar to frame boundary
markers, the receiver can correctly identify the start and end of frames.
• Transparency: These methods ensure that the actual data can contain any possible bit
or character sequence without interfering with the framing process.
Difference Between Character and Bit Stuffing:
• Character stuffing works at the byte level and adds an extra escape character whenever
control characters are present in the data.
• Bit stuffing works at the bit level, adding extra bits to ensure that the data doesn’t
accidentally match the frame delimiters.
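Character (byte) stuffing can be sketched the same way. The FLAG and ESC byte values below are assumptions in the style of HDLC/PPP-like framing, chosen only for illustration:

```python
FLAG = 0x7E  # frame delimiter byte (assumed, HDLC/PPP style)
ESC = 0x7D   # escape byte (assumed)

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before any FLAG or ESC byte occurring inside the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # escape the reserved byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Drop each ESC and keep the byte it protects."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

With these helpers, a payload containing the flag byte itself survives framing unchanged after stuffing and unstuffing.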
Flow control:
Flow control is a crucial mechanism in the Data Link and Transport Layers of computer
networks that ensures that the sender and receiver operate at compatible speeds when
transmitting data. Without proper flow control, a fast sender might overwhelm a slow receiver,
leading to data loss, buffer overflow, or the need for retransmission. Flow control helps regulate
the rate at which data is transmitted to ensure that both sender and receiver stay synchronized.
Types of Flow Control:
1. Stop-and-Wait Flow Control:
• In this simple method, the sender sends a single frame of data and then waits
for an acknowledgment (ACK) from the receiver before sending the next frame.
• Once the receiver acknowledges that it has received the frame, the sender can
send the next one.
• Advantages: Easy to implement, avoids overwhelming the receiver.
• Disadvantages: Inefficient for high-latency networks since the sender must
wait for the ACK before sending the next frame, leading to idle time.
Example:
• Sender sends Frame 1 → waits for ACK from receiver.
• Receiver receives Frame 1 → sends ACK → waits for next frame.
2. Sliding Window Flow Control:
• A more efficient method that allows the sender to send multiple frames before
requiring an acknowledgment, up to a specified window size.
• Both sender and receiver maintain a "window" that tracks how many frames can
be sent or received.
• The sender can continue transmitting frames within the window without waiting
for an ACK for each one.
• The window "slides" forward as acknowledgments are received, allowing more
frames to be sent.
• Advantages: Utilizes the network more efficiently by allowing continuous data
transmission and reducing idle time.
• Disadvantages: More complex to implement, but it works well for high-latency
networks.
Example:
• If the window size is 5, the sender can send up to 5 frames before waiting for
an ACK.
• After receiving the ACK for the first frame, the window slides, allowing the
sender to transmit the next frame.
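The window bookkeeping described above can be sketched as a small sender class. The class and method names here are illustrative, not from any standard library:

```python
class SlidingWindowSender:
    """Toy sliding-window sender: tracks which frames may be sent."""

    def __init__(self, window_size: int):
        self.window_size = window_size
        self.base = 0      # oldest unacknowledged frame
        self.next_seq = 0  # sequence number of the next frame to send

    def can_send(self) -> bool:
        """A frame may be sent while fewer than window_size are outstanding."""
        return self.next_seq < self.base + self.window_size

    def send(self) -> int:
        """Transmit the next frame and return its sequence number."""
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        """Cumulative ACK: slides the window forward past frame `seq`."""
        if seq >= self.base:
            self.base = seq + 1
```

With a window of 5, the sender can emit frames 0 through 4, then stalls until an ACK slides the window and frame 5 becomes eligible.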
3. Credit-Based Flow Control:
• In this method, the receiver assigns "credits" (or permission) to the sender for
how many frames it can send before waiting for an acknowledgment.
• This allows dynamic control over the data flow, depending on the receiver's
buffer capacity.
• Advantages: Provides better control over buffer management, avoids overflow.
• Disadvantages: Requires both sender and receiver to manage credits, making
it more complex than simpler methods.
Example:
• Receiver grants the sender 3 credits, allowing the sender to transmit 3 frames
before waiting for more credits.
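The credit mechanism can be sketched as follows; this is a minimal illustration of the idea, with hypothetical names:

```python
class CreditSender:
    """Toy credit-based flow control: the sender may transmit only
    while it holds credits granted by the receiver."""

    def __init__(self):
        self.credits = 0

    def grant(self, n: int) -> None:
        """Receiver grants permission for n more frames."""
        self.credits += n

    def send_frame(self) -> bool:
        """Consume one credit per frame; refuse to send with no credits."""
        if self.credits == 0:
            return False  # must wait for more credits
        self.credits -= 1
        return True
```

After a grant of 3 credits, exactly three frames go out before the sender blocks again, mirroring the example above.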
Protocols:
In the context of reliable data transmission, ARQ (Automatic Repeat Request) protocols are
essential to ensure that data is delivered accurately and in the correct order, even when errors
occur. ARQ protocols use error detection and acknowledgments (ACKs) to handle lost or
corrupted data, and they request retransmission when necessary.
Three important ARQ protocols are:
1. Stop-and-Wait ARQ:
This is the simplest ARQ protocol and an extension of the Stop-and-Wait flow control.
• After sending a frame, the sender stops and waits for an acknowledgment (ACK) from
the receiver. Only when the ACK is received does the sender send the next frame.
• If no ACK is received within a timeout period (indicating the frame was lost or
corrupted), the sender retransmits the frame.
• The protocol ensures reliable transmission by resending lost or corrupted frames, but
it’s inefficient because it only sends one frame at a time and waits for an
acknowledgment before sending the next frame.
Example:
• Sender sends Frame 1 → waits for ACK.
• Receiver receives Frame 1 → sends ACK.
• If ACK is received, sender sends Frame 2; otherwise, sender retransmits Frame 1.
Advantages:
• Simple and easy to implement.
• Works well in low-latency, low-error environments.
Disadvantages:
• Highly inefficient, especially in networks with high latency or large data transfers, as
the sender remains idle while waiting for the ACK.
• Poor utilization of available bandwidth.
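The send / wait-for-ACK / retransmit-on-timeout cycle can be simulated in a few lines. This is a sketch under simplifying assumptions (a fixed loss probability and an ideal timeout), not a real network stack:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=42):
    """Simulate Stop-and-Wait ARQ over a lossy channel.

    Each frame is resent until its ACK arrives. Returns the frames
    delivered in order and the total number of transmissions."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1                 # send (or resend) the frame
            if rng.random() >= loss_rate:      # frame and ACK arrive intact
                delivered.append(frame)
                break
            # otherwise: timeout expires and the loop retransmits
    return delivered, transmissions
```

Because every loss forces a retransmission of a single frame while the sender sits idle, the transmission count grows quickly as the loss rate rises, which is exactly the inefficiency noted above.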
2. Go-Back-N ARQ:
In Go-Back-N ARQ, the sender can send multiple frames (up to a specified window size, N)
without waiting for individual ACKs after each frame. However, if an error occurs or a frame
is lost, all frames from the lost frame onward are retransmitted, even if they were sent correctly.
• The sender maintains a window of frames that can be sent before needing an
acknowledgment.
• The receiver can only accept frames in order. If a frame is received out of order (due to
a loss or error), it discards that frame and all subsequent frames until the missing one is
correctly received.
Example:
• Sender sends frames 1, 2, 3, 4, and 5 without waiting for an ACK after each one.
• If frame 3 is lost, the receiver discards frames 4 and 5 and requests a retransmission
starting from frame 3.
• The sender then "goes back" and retransmits frame 3 and all subsequent frames.
Advantages:
• More efficient than Stop-and-Wait because it allows multiple frames to be sent before
waiting for an ACK.
• Better bandwidth utilization than Stop-and-Wait.
Disadvantages:
• If one frame is lost or corrupted, many frames may need to be retransmitted, even if
they were received correctly, leading to inefficiency.
• High cost in terms of retransmission for large window sizes and high error rates.
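The go-back behaviour, where one loss forces retransmission of the whole remainder of the window, can be sketched with a simple simulation (fixed loss probability, hypothetical parameters):

```python
import random

def go_back_n(num_frames, window=4, loss_rate=0.2, seed=1):
    """Simulate Go-Back-N: on a loss, the sender retransmits the lost
    frame and every frame after it. Returns total transmissions."""
    rng = random.Random(seed)
    base = 0
    transmissions = 0
    while base < num_frames:
        end = min(base + window, num_frames)
        lost_at = None
        for seq in range(base, end):       # send the whole window
            transmissions += 1
            if rng.random() < loss_rate:
                lost_at = seq              # first lost frame in this window
                break                      # receiver discards everything after it
        if lost_at is None:
            base = end                     # entire window acknowledged
        else:
            base = lost_at                 # go back: resend from the lost frame
    return transmissions
```

With a loss rate of zero the sender transmits each frame exactly once; any loss inflates the count because correctly received follow-on frames are sent again.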
3. Selective Repeat ARQ:
In Selective Repeat ARQ, the sender retransmits only the frames that are actually lost or corrupted, instead of the entire window. The receiver accepts and buffers frames that arrive out of order, then delivers them to the upper layer in sequence once the missing frames have been received.
Example:
• Sender sends frames 1, 2, 3, 4, and 5.
• If frame 3 is lost, the receiver buffers frames 4 and 5 and requests retransmission of frame 3 only.
• Once frame 3 arrives, the receiver delivers frames 3, 4, and 5 in order.
Advantages:
• Most efficient in terms of retransmission because only the frames with errors are resent.
• Ideal for networks with high error rates and large window sizes, as it minimizes the
number of retransmitted frames.
• Better utilization of bandwidth compared to Go-Back-N.
Disadvantages:
• More complex to implement because the sender and receiver need to maintain buffers
and manage out-of-order frames.
• Both the sender and receiver need to track multiple sequence numbers for the sliding
window.
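The receiver-side buffering just described, where out-of-order frames are held back and delivered in sequence once gaps are filled, can be sketched as follows (illustrative names only):

```python
def selective_repeat_receiver(arrivals):
    """Buffer out-of-order frames; deliver in sequence once gaps close.

    `arrivals` is the order in which frame sequence numbers actually
    arrive at the receiver."""
    buffer = {}
    expected = 0
    delivered = []
    for seq in arrivals:
        buffer[seq] = f"frame-{seq}"
        while expected in buffer:  # deliver any in-order run now available
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered
```

If frames arrive in the order 0, 2, 3, 1, 4, frames 2 and 3 wait in the buffer until frame 1 arrives, after which 1, 2, and 3 are delivered together.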
HDLC (High-Level Data Link Control):
HDLC is a bit-oriented data link layer protocol standardized by ISO for communication over point-to-point and multipoint links. It delimits frames with the flag sequence 01111110 and uses bit stuffing to keep the data transparent.
Disadvantages of HDLC
• Complexity: The protocol is more complex than simpler protocols, which may
complicate implementation.
• Resource Requirements: Requires more processing power and memory for buffering
and managing frames.
Applications
• Wide Area Networks (WANs): Commonly used for communication over leased lines
and in various WAN protocols.
• X.25 Networks: Forms the basis for X.25, a standard for packet-switched networks.
• Point-to-Point Protocol (PPP): HDLC principles are used in PPP for establishing
internet connections over serial links.
Point-to-Point Protocol (PPP):
PPP is a data link layer protocol used to establish a direct connection between two nodes, typically over serial links such as dial-up or leased lines. It encapsulates network layer packets for transmission over the link.
Advantages of PPP
• Versatility: Supports multiple network layer protocols, making it adaptable for various
applications.
• Error Detection: Includes error detection mechanisms to ensure data integrity.
• Authentication: Provides built-in authentication methods for secure connections.
• Easy Configuration: Simple to set up for point-to-point links, particularly in dial-up
connections.
Disadvantages of PPP
• Overhead: The encapsulation adds some overhead compared to more direct protocols,
which may slightly reduce efficiency.
• Complexity in Configuration: While simple in theory, some PPP configurations
(especially with authentication) can be complex.
Applications
• Dial-Up Internet Access: Commonly used in traditional dial-up connections to ISPs.
• Leased Line Connections: Often used for connecting remote offices over dedicated
lines.
• VPNs (Virtual Private Networks): PPP can be utilized in establishing secure
connections over the internet.
2. Link Control Protocol (LCP)
Link Control Protocol (LCP) is a component of the Point-to-Point Protocol (PPP) used
for establishing, configuring, and testing the data link connection between two nodes.
Key Features:
• Establishing Links: LCP is responsible for initiating and terminating the PPP link.
• Configuration Options: It negotiates various link parameters, such as maximum
transmission unit (MTU), authentication protocols, and error detection methods.
• Testing Links: LCP can perform link quality testing to ensure the connection is stable
and functioning correctly.
• Error Detection: It manages error detection and recovery mechanisms.
Functions:
• Configuration Phase: During this phase, LCP negotiates the options for the link, such
as compression and authentication.
• Open Phase: The link is opened after successful negotiation.
• Close Phase: The link can be gracefully closed by either side.
4. Token Ring
Token Ring is a networking technology that uses a token-passing protocol to manage
access to the network. It was developed by IBM and operates at the data link layer
(Layer 2) of the OSI model.
Key Features:
• Token Passing: A special data packet called a "token" circulates around the network.
Only the device that possesses the token can send data, which helps prevent collisions.
• Star Topology: Physically, Token Ring networks are often configured in a star topology
using a hub or a central device, although the logical topology is a ring.
• Data Rates: Originally designed for 4 Mbps and later upgraded to 16 Mbps.
Operation:
• Token Generation: When a device wants to send data, it waits for the token. When it
receives the token, it can transmit its data.
• Releasing the Token: After sending its data, the device releases the token back onto
the network, allowing the next device to transmit.
Advantages:
• Collision-Free Communication: The token-passing mechanism ensures that only one
device transmits at a time, reducing the chances of collisions.
• Deterministic Access: Provides predictable performance because the token ensures fair
access to the network.
Disadvantages:
• Complexity: The token-passing mechanism adds complexity compared to Ethernet’s
carrier-sense multiple access (CSMA).
• Single Point of Failure: If the token is lost or the ring is broken, the network can
become inoperable.
Pure ALOHA:
In pure ALOHA, a station transmits a frame whenever it has data to send, without first checking whether the shared channel is busy. As we can see in the figure above, four stations access a shared channel and transmit data frames. Several frames collide because multiple stations send their frames at the same time; only two frames, frame 1.1 and frame 2.2, reach the receiver successfully, while the other frames are lost or destroyed. Whenever two frames occupy the shared channel simultaneously, both are damaged. A collision occurs even if only the first bit of a new frame enters the channel before the last bit of an earlier frame has finished; in that case both frames are destroyed, and both stations must retransmit.
1. Maximum throughput occurs in pure Aloha when G = 1/2, which is about 18.4%.
2. The probability of successfully transmitting a data frame in pure Aloha is S = G * e ^ (-2G).
3. The total vulnerable time in pure Aloha is 2 * Tfr.
Advantages:
• Simplicity: Easy to implement and understand.
• Flexibility: Devices can transmit whenever they need to.
Disadvantages:
• Low Efficiency: As traffic increases, collisions become more frequent, reducing
overall throughput.
• Lack of Coordination: No mechanism to manage when devices can transmit, leading
to potential data loss.
Slotted ALOHA:
Slotted Aloha was designed to overcome the inefficiency of pure Aloha, which has a very high probability of frame collisions. In slotted Aloha, time on the shared channel is divided into fixed intervals called slots. A station may begin sending a frame only at the beginning of a slot, and only one frame may be sent in each slot. If a station misses the beginning of a slot, it must wait for the beginning of the next slot. However, a collision can still occur when two or more stations try to send a frame at the beginning of the same slot.
1. Maximum throughput occurs in slotted Aloha when G = 1, which is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e ^ (-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
Advantages:
• Higher Efficiency: Fewer collisions result in better utilization of the channel.
• Synchronization: The use of time slots helps to coordinate access, reducing data loss.
Disadvantages:
• Complexity: Slightly more complex than Pure ALOHA due to the need for
synchronization.
• Slot Timing: Devices must synchronize to the time slots, which may introduce delays.
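The throughput formulas above can be evaluated directly to compare the two schemes:

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """Slotted ALOHA throughput: S = G * e^(-G). Peaks at G = 1."""
    return G * math.exp(-G)

def pure_aloha_throughput(G: float) -> float:
    """Pure ALOHA throughput: S = G * e^(-2G). Peaks at G = 1/2."""
    return G * math.exp(-2 * G)
```

At its optimum, slotted ALOHA reaches 1/e (about 0.368, i.e. 37%), twice the 18.4% maximum of pure ALOHA, because halving the vulnerable time halves the chance of overlap.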
CSMA/CA:
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for the transmission of data frames that works at the medium access control layer. Rather than detecting collisions after they occur, it tries to avoid them: a station senses the shared channel and transmits only when the channel is idle, then waits for an acknowledgment from the receiver. If the acknowledgment arrives, the data frame has been successfully delivered. If no acknowledgment arrives within the expected time, the sender assumes the frame collided or was lost on the shared channel and retransmits it after a backoff period.
Following are the methods used in CSMA/CA to avoid collisions:
a) Interframe space: In this method, the station waits for the channel to become idle, but even when it finds the channel idle, it does not send data immediately. Instead, it waits for a short period called the Interframe Space (IFS). The IFS duration is also used to define station priority: higher-priority stations are assigned shorter IFS values.
b) Contention window: In the contention window method, time is divided into slots. When a station is ready to transmit a data frame, it chooses a random number of slots as its wait time. If the channel becomes busy during the countdown, the station does not restart the entire process; it pauses the timer and resumes the countdown only when the channel becomes idle again.
c) Acknowledgment: In the acknowledgment method, the receiver sends an acknowledgment for every frame it receives correctly. If the sender's timer expires before the acknowledgment arrives, the sender retransmits the data frame.
Traditional Ethernet:
A networking standard that utilizes the CSMA/CD protocol, typically operating at 10 Mbps. It
often uses a bus topology where all devices share the same communication medium.
• How It Works:
a) Carrier Sensing: Devices sense the channel before transmitting. If the channel
is idle, they can send data.
b) Collision Detection: If a collision occurs (two devices transmit at the same
time), devices stop transmitting, send a jamming signal, and wait a random time
before retrying.
• Efficiency: Works well for moderate traffic; performance decreases as the number of
collisions increases with high traffic.
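The "wait a random time before retrying" step in CSMA/CD is conventionally implemented with truncated binary exponential backoff. The sketch below assumes the usual cap of 10 doublings; it computes only the number of slot times to wait:

```python
import random

def backoff_slots(attempt: int, rng=random) -> int:
    """Truncated binary exponential backoff for CSMA/CD.

    After the n-th consecutive collision, wait a random number of
    slot times drawn uniformly from 0 .. 2^min(n, 10) - 1."""
    k = min(attempt, 10)  # the range stops doubling after 10 collisions
    return rng.randrange(2 ** k)
```

Doubling the range after each collision spreads the retries of contending stations further apart, so the probability of a repeat collision drops as the channel gets busier.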
Fast Ethernet:
An upgrade to traditional Ethernet that operates at speeds of 100 Mbps, maintaining
compatibility with older Ethernet technologies.
• How It Works:
a) Retains the same CSMA/CD protocol for managing access to the shared
medium.
b) Typically uses a star topology with switches, which helps reduce collisions
compared to the bus topology of traditional Ethernet.
• Use Cases: Commonly used in local area networks (LANs) to support higher data
transfer rates and increased network performance.