Computer Network (Module-3)

The Data Link Layer in the OSI model manages node-to-node communication, error detection, and correction, addressing issues like single-bit and burst errors caused by noise and interference. It employs techniques such as framing, character stuffing, and bit stuffing to ensure data integrity during transmission, alongside methods for error detection (like parity checks and CRC) and correction (like Hamming codes and Reed-Solomon codes). Flow control mechanisms, including Stop-and-Wait and Go-Back-N ARQ protocols, help synchronize data transmission between sender and receiver to prevent data loss and improve efficiency.

COMPUTER NETWORK

MODULE-3
The Data Link Layer in the OSI model is responsible for node-to-node communication, error
detection, and error correction. It ensures that data is transmitted from one device to another
without errors. However, errors can occur during transmission due to various factors like noise,
interference, or signal degradation.

Types of Errors in the Data Link Layer:


1. Single-bit Error:
a) A single-bit error occurs when one bit of the transmitted data is altered. This
could mean that a 0 becomes 1 or a 1 becomes 0.
b) Example: In a binary sequence of 10010001, a single-bit error might change it
to 10010011.
2. Burst Error:
a) A burst error occurs when multiple bits are altered during transmission. These
errors usually happen in groups and can affect a sequence of bits (not necessarily
consecutive).
b) The length of the burst error is measured from the first changed bit to the last.
c) Example: If 10011001 is transmitted, it might become 11111101 due to a burst
error, affecting multiple bits.
Causes of Errors:
• Noise: Electromagnetic interference can cause random changes in the bits during
transmission.
• Attenuation: Weakening of the signal over long distances.
• Crosstalk: Interference from adjacent cables or other devices.
• Synchronization issues: Misalignment of clocks between sender and receiver.

Framing:
Framing is a technique used in the Data Link Layer to divide a continuous stream of data into
manageable chunks, or "frames," for easier transmission and error detection. This ensures that
the data is transmitted in an organized and recognizable format, allowing the receiver to know
where each frame starts and ends. However, certain challenges like accidental data patterns
resembling frame delimiters require special techniques like character stuffing and bit stuffing.
Types of Framing:
1. Fixed-Size Framing:
i. Frames have a constant, predefined size.
ii. The receiver doesn’t need to identify frame boundaries since every frame has
the same length.
iii. Example: In some low-level protocols, each frame might always be 512 bytes.
2. Variable-Size Framing:
i. Frames vary in size, so special markers are needed to identify the beginning and
end of a frame.
ii. This is where character stuffing and bit stuffing become useful.

Character Stuffing (Byte Stuffing):


Character stuffing is used in protocols that send data in the form of characters (typically 8 bits).
To indicate the start and end of a frame, special control characters are inserted (often called
flags or delimiters). However, if the actual data contains the same control characters by chance,
this could confuse the receiver into thinking it's the end of a frame. Character stuffing solves
this issue by inserting an additional escape character before any control character that appears
in the data.
Example of Character Stuffing:
• Frame boundary marker: Suppose a FLAG character marks the start and end of each frame, and the control character ESC (escape) is inserted before any special character that appears inside the data.
• Original data: A B ESC C ESC D
• Since ESC appears in the data, character stuffing would add another escape character
before each ESC in the data.
• After character stuffing: A B ESC ESC C ESC ESC D
The receiver removes the extra ESC to retrieve the original data.
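The stuffing and unstuffing steps can be sketched in Python. This is an illustrative model, not any particular protocol's implementation; the FLAG and ESC byte values and the function names are arbitrary choices for the example:

```python
FLAG = b"~"  # hypothetical frame delimiter byte
ESC = b"}"   # hypothetical escape byte

def stuff(data: bytes) -> bytes:
    """Insert ESC before any FLAG or ESC byte in the payload, then add delimiters."""
    out = bytearray()
    for byte in data:
        if bytes([byte]) in (FLAG, ESC):
            out += ESC              # escape the special byte
        out.append(byte)
    return FLAG + bytes(out) + FLAG

def unstuff(frame: bytes) -> bytes:
    """Strip the delimiters and drop each escape byte, keeping the byte after it."""
    out = bytearray()
    escaped = False
    for byte in frame[1:-1]:        # skip the opening and closing FLAG
        if not escaped and bytes([byte]) == ESC:
            escaped = True          # next byte is literal payload
            continue
        out.append(byte)
        escaped = False
    return bytes(out)

payload = b"AB}C~D"                 # contains both special bytes
assert unstuff(stuff(payload)) == payload
```

The round trip recovers the original payload even though it contains both special characters, which is exactly the transparency property framing needs.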

Bit Stuffing:
Bit stuffing is used in protocols that work at the bit level. It deals with situations where specific
bit patterns (such as 01111110, used as a frame boundary in HDLC) are used to delimit frames.
If the data contains the same bit pattern as the flag, it could confuse the receiver. Bit stuffing
avoids this by inserting extra bits into the data stream whenever a certain pattern appears.
Example of Bit Stuffing:
• Frame boundary pattern: Assume 01111110 is the flag that denotes the start and end
of a frame.
• Original data: If the data contains a sequence of 11111, which could be mistaken for
part of the boundary, bit stuffing is applied.
• After encountering five consecutive 1s, an extra 0 is inserted to prevent confusion.
• Before bit stuffing (flag followed by data): 01111110 11111100
• After bit stuffing: 01111110 111110100 (a 0 is stuffed after the five consecutive 1s in the data; the flag itself is never stuffed)
The receiver knows to remove the extra 0 to retrieve the original data.
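The rule "stuff a 0 after five consecutive 1s" can be sketched as follows (a toy model operating on bit strings for readability; real hardware works on raw bits):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is a stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True       # the next bit must be a stuffed 0
            run = 0
    return "".join(out)

data = "11111100"
assert bit_stuff(data) == "111110100"        # matches the example above
assert bit_unstuff(bit_stuff(data)) == data
```

Because a stuffed data stream can never contain six 1s in a row, the flag 01111110 can only ever appear at a genuine frame boundary.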
Use of Character and Bit Stuffing:
Both techniques are used to ensure that the data can be reliably framed without confusing the
receiver. They allow for:
• Correct framing: Even if the actual data contains sequences similar to frame boundary
markers, the receiver can correctly identify the start and end of frames.
• Transparency: These methods ensure that the actual data can contain any possible bit
or character sequence without interfering with the framing process.
Difference Between Character and Bit Stuffing:
• Character stuffing works at the byte level and adds an extra escape character whenever
control characters are present in the data.
• Bit stuffing works at the bit level, adding extra bits to ensure that the data doesn’t
accidentally match the frame delimiters.

Error detection and correction methods:


Error detection and correction methods are critical techniques in data transmission to ensure
that any errors introduced during communication (due to noise, interference, or other factors)
are detected and possibly corrected. The Data Link Layer is primarily responsible for these
functions.

Error Detection Methods:


These methods help identify whether an error has occurred during transmission but may not
correct it.
1. Parity Check: Adds an extra bit (parity bit) to the data to make the number of 1s either even
(even parity) or odd (odd parity).
• Even Parity: The parity bit is set to ensure the total number of 1s in the data is even.
• Odd Parity: The parity bit is set to ensure the total number of 1s in the data is odd.
Limitation: Parity checks can only detect single-bit errors and fail with burst errors or multiple
bit changes.
Example:
• Data: 1010101 (has four 1s, even number)
• Even Parity: Add a 0 to maintain even number of 1s → 10101010
• Odd Parity: Add a 1 to make the number of 1s odd → 10101011
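The parity computation above can be expressed directly (a small illustrative helper, not a library function):

```python
def parity_bit(bits: str, even: bool = True) -> str:
    """Return the parity bit that makes the total count of 1s even (or odd)."""
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

data = "1010101"                                          # four 1s, an even count
assert data + parity_bit(data, even=True) == "10101010"   # even parity
assert data + parity_bit(data, even=False) == "10101011"  # odd parity
```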
2. Checksum:
• Data is divided into fixed-size blocks, and the sum of all blocks is calculated. This sum
(checksum) is appended to the data and sent along.
• The receiver recalculates the checksum and compares it with the transmitted checksum.
If they match, no errors are assumed; otherwise, an error is detected.
• Commonly used in network protocols like TCP/IP.
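A simplified sketch of the idea, in the style of the Internet checksum (16-bit one's-complement sum with carry folding); real protocol checksums differ in detail:

```python
def checksum16(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used in Internet-style checksums."""
    if len(data) % 2:
        data += b"\x00"                           # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF                        # one's complement of the sum

sent = b"hello world"
ck = checksum16(sent)
# the receiver recomputes the checksum over the data and compares
assert checksum16(sent) == ck
assert checksum16(b"hellp world") != ck           # a one-byte error is detected
```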
3. Cyclic Redundancy Check (CRC):
• CRC is a robust error detection technique that treats the data as a large binary number
and divides it by a fixed "generator" polynomial. The remainder of this division is
appended to the data.
• The receiver performs the same division and checks whether the remainder matches the
transmitted remainder.
• CRC is highly effective at detecting burst errors and is widely used in Ethernet, Wi-Fi,
and other digital communication protocols.
Example: In Ethernet, CRC-32 is used to detect errors in transmitted frames.
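CRC division can be sketched over bit strings. The generator 10011 (x⁴ + x + 1) and the message below are a common textbook example; production code uses table-driven CRC-32 rather than this bit-at-a-time loop:

```python
def crc_remainder(bits: str, generator: str) -> str:
    """Modulo-2 long-division remainder of the message (with zeros appended)."""
    pad = len(generator) - 1
    work = list(bits + "0" * pad)               # room for the remainder
    for i in range(len(bits)):
        if work[i] == "1":                      # divide only where a 1 leads
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))  # XOR = mod-2 subtract
    return "".join(work[-pad:])

def crc_check(frame: str, generator: str) -> bool:
    """Receiver side: divide data + remainder; a zero remainder means no error."""
    pad = len(generator) - 1
    work = list(frame)
    for i in range(len(frame) - pad):
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "1" not in work[-pad:]

data, gen = "1101011011", "10011"
rem = crc_remainder(data, gen)                  # "1110"
assert crc_check(data + rem, gen)               # clean frame passes
assert not crc_check("1101011010" + rem, gen)   # single-bit error is caught
```

Any generator with at least two terms, like this one, detects all single-bit errors; longer generators such as CRC-32 also catch all burst errors up to the generator's degree.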
4. Hash Functions:
• Used primarily for error detection in data storage or cryptographic systems, a hash
function generates a unique value based on the data's content. If the hash values differ
after transmission, an error is detected.

Error Correction Methods:


These methods not only detect errors but also correct them without needing retransmission.
1. Hamming Code:
• Hamming codes can detect and correct single-bit errors and detect (but not correct) two-
bit errors.
• Redundant bits are added to the data in specific positions so that the receiver can
identify the exact location of a single-bit error and correct it.
• For k data bits, r redundant bits are added, and n = k + r is the total number of bits.
• Hamming codes follow specific bit positions where redundancy checks occur, allowing
the receiver to locate and fix errors.
Example:
• Data: 1001
• With even-parity bits at positions 1, 2, and 4, the transmitted codeword is 0011001.
• Upon receiving, the system recalculates the parity checks; the pattern of failing checks pinpoints the position of any single-bit error, which is then flipped.
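A Hamming(7,4) sketch with even-parity bits at positions 1, 2, and 4 (the conventional layout; other texts order the bits differently, so exact codewords vary by convention):

```python
def hamming74_encode(d: str) -> str:
    """Encode 4 data bits as 7 bits; parity bits sit at positions 1, 2, 4."""
    d1, d2, d3, d4 = (int(b) for b in d)
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return "".join(str(b) for b in (p1, p2, d1, p4, d2, d3, d4))

def hamming74_correct(code: str) -> str:
    """Locate and flip a single-bit error, returning the corrected codeword."""
    c = [int(b) for b in code]
    # recompute each parity check; a failing check contributes its weight
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s4 * 4   # 1-based error position, 0 = no error
    if syndrome:
        c[syndrome - 1] ^= 1
    return "".join(str(b) for b in c)

code = hamming74_encode("1001")            # "0011001" under this convention
corrupted = code[:2] + ("1" if code[2] == "0" else "0") + code[3:]
assert hamming74_correct(corrupted) == code
```

The syndrome is simply the sum of the weights of the failing checks, which by construction equals the position of the flipped bit.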
2. Reed-Solomon Code:
• This is a block error-correcting code used in many applications like CDs, DVDs, QR
codes, and satellite communications.
• Reed-Solomon can detect and correct multiple random errors and burst errors by adding
redundant symbols (or parity data) to the original data.
• Common in systems where data must remain readable despite multiple errors.
Example: Used in digital TV broadcasts to correct errors caused by signal loss.
3. Forward Error Correction (FEC):
• FEC adds redundant data to the message, enabling the receiver to detect and correct
errors without needing retransmission.
• Convolutional codes and Turbo codes are popular FEC methods used in mobile
communications and satellite links.
Example: In mobile networks, convolutional codes protect voice and data transmissions from
errors caused by noise and interference.
4. Low-Density Parity-Check (LDPC):
• LDPC codes are linear block codes that provide excellent error-correcting capabilities
by adding redundancy across long codewords.
• LDPC codes are used in applications requiring highly efficient error correction, such as
5G cellular networks, Wi-Fi (802.11n and later), and satellite communication.
• These codes use a sparse parity-check matrix, which reduces the number of redundant
bits while still enabling robust error correction.

Flow control:
Flow control is a crucial mechanism in the Data Link and Transport Layers of computer
networks that ensures that the sender and receiver operate at compatible speeds when
transmitting data. Without proper flow control, a fast sender might overwhelm a slow receiver,
leading to data loss, buffer overflow, or the need for retransmission. Flow control helps regulate
the rate at which data is transmitted to ensure that both sender and receiver stay synchronized.
Types of Flow Control:
1. Stop-and-Wait Flow Control:
• In this simple method, the sender sends a single frame of data and then waits
for an acknowledgment (ACK) from the receiver before sending the next frame.
• Once the receiver acknowledges that it has received the frame, the sender can
send the next one.
• Advantages: Easy to implement, avoids overwhelming the receiver.
• Disadvantages: Inefficient for high-latency networks since the sender must
wait for the ACK before sending the next frame, leading to idle time.
Example:
• Sender sends Frame 1 → waits for ACK from receiver.
• Receiver receives Frame 1 → sends ACK → waits for next frame.
2. Sliding Window Flow Control:
• A more efficient method that allows the sender to send multiple frames before
requiring an acknowledgment, up to a specified window size.
• Both sender and receiver maintain a "window" that tracks how many frames can
be sent or received.
• The sender can continue transmitting frames within the window without waiting
for an ACK for each one.
• The window "slides" forward as acknowledgments are received, allowing more
frames to be sent.
• Advantages: Utilizes the network more efficiently by allowing continuous data
transmission and reducing idle time.
• Disadvantages: More complex to implement, but it works well for high-latency
networks.
Example:
• If the window size is 5, the sender can send up to 5 frames before waiting for
an ACK.
• After receiving the ACK for the first frame, the window slides, allowing the
sender to transmit the next frame.
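The window behaviour can be traced with a toy model. This is purely illustrative (cumulative ACKs are assumed, with no losses or timers), and all names are invented for the example:

```python
def sliding_window_trace(num_frames: int, window: int, acks: list[int]) -> list[str]:
    """List the send/ack events as cumulative ACKs arrive (ACK n confirms 0..n)."""
    events, base, next_frame = [], 0, 0
    ack_iter = iter(acks)
    while base < num_frames:
        while next_frame < base + window and next_frame < num_frames:
            events.append(f"send {next_frame}")   # window permits this frame
            next_frame += 1
        ack = next(ack_iter)                      # wait for the next ACK
        events.append(f"ack {ack}")
        base = ack + 1                            # window slides forward
    return events

# window of 3 over 5 frames: three frames go out before the first ACK arrives
assert sliding_window_trace(5, 3, [0, 2, 4]) == [
    "send 0", "send 1", "send 2", "ack 0", "send 3", "ack 2", "send 4", "ack 4"
]
```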
3. Credit-Based Flow Control:
• In this method, the receiver assigns "credits" (or permission) to the sender for
how many frames it can send before waiting for an acknowledgment.
• This allows dynamic control over the data flow, depending on the receiver's
buffer capacity.
• Advantages: Provides better control over buffer management, avoids overflow.
• Disadvantages: Requires both sender and receiver to manage credits, making
it more complex than simpler methods.
Example:
• Receiver grants the sender 3 credits, allowing the sender to transmit 3 frames
before waiting for more credits.
Protocols:
In the context of reliable data transmission, ARQ (Automatic Repeat Request) protocols are
essential to ensure that data is delivered accurately and in the correct order, even when errors
occur. ARQ protocols use error detection and acknowledgments (ACKs) to handle lost or
corrupted data, and they request retransmission when necessary.
Three important ARQ protocols are:
1. Stop-and-Wait ARQ:
This is the simplest ARQ protocol and an extension of the Stop-and-Wait flow control.
• After sending a frame, the sender stops and waits for an acknowledgment (ACK) from
the receiver. Only when the ACK is received does the sender send the next frame.

• If no ACK is received within a timeout period (indicating the frame was lost or
corrupted), the sender retransmits the frame.
• The protocol ensures reliable transmission by resending lost or corrupted frames, but
it’s inefficient because it only sends one frame at a time and waits for an
acknowledgment before sending the next frame.
Example:
• Sender sends Frame 1 → waits for ACK.
• Receiver receives Frame 1 → sends ACK.
• If ACK is received, sender sends Frame 2; otherwise, sender retransmits Frame 1.
Advantages:
• Simple and easy to implement.
• Works well in low-latency, low-error environments.
Disadvantages:
• Highly inefficient, especially in networks with high latency or large data transfers, as
the sender remains idle while waiting for the ACK.
• Poor utilization of available bandwidth.
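The retransmit-on-timeout loop can be modelled in a few lines (losses are drawn at random; the function and its parameters are invented for the illustration):

```python
import random

def stop_and_wait(frames: list[str], loss_rate: float, seed: int = 1) -> int:
    """Return the total number of transmissions needed to deliver every frame."""
    rng = random.Random(seed)
    transmissions = 0
    for _frame in frames:
        while True:
            transmissions += 1                  # send (or resend) the frame
            if rng.random() >= loss_rate:       # frame and ACK both arrive
                break                           # ACK received: next frame
            # otherwise: the timeout expires and the loop retransmits

    return transmissions

assert stop_and_wait(["f1", "f2", "f3"], loss_rate=0.0) == 3   # no losses
assert stop_and_wait(["f1", "f2", "f3"], loss_rate=0.5) >= 3   # retries add up
```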
2. Go-Back-N ARQ:
In Go-Back-N ARQ, the sender can send multiple frames (up to a specified window size, N)
without waiting for individual ACKs after each frame. However, if an error occurs or a frame
is lost, all frames from the lost frame onward are retransmitted, even if they were sent correctly.

• The sender maintains a window of frames that can be sent before needing an
acknowledgment.
• The receiver can only accept frames in order. If a frame is received out of order (due to
a loss or error), it discards that frame and all subsequent frames until the missing one is
correctly received.
Example:
• Sender sends frames 1, 2, 3, 4, and 5 without waiting for an ACK after each one.
• If frame 3 is lost, the receiver discards frames 4 and 5 and requests a retransmission
starting from frame 3.
• The sender then "goes back" and retransmits frame 3 and all subsequent frames.
Advantages:
• More efficient than Stop-and-Wait because it allows multiple frames to be sent before
waiting for an ACK.
• Better bandwidth utilization than Stop-and-Wait.
Disadvantages:
• If one frame is lost or corrupted, many frames may need to be retransmitted, even if
they were received correctly, leading to inefficiency.
• High cost in terms of retransmission for large window sizes and high error rates.
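The go-back behaviour in the example can be traced with a toy model (each frame number in lost_first_tx is dropped the first time it is sent; retransmissions are assumed to succeed):

```python
def go_back_n_trace(num_frames: int, window: int, lost_first_tx: set[int]) -> list[int]:
    """Sequence of frame numbers put on the wire; a loss forces a go-back."""
    wire = []
    base = 1
    while base <= num_frames:
        # send the whole window, then learn whether a frame in it was lost
        end = min(base + window - 1, num_frames)
        for f in range(base, end + 1):
            wire.append(f)
        lost_in_window = [f for f in range(base, end + 1) if f in lost_first_tx]
        if lost_in_window:
            first_lost = lost_in_window[0]
            lost_first_tx = lost_first_tx - {first_lost}
            base = first_lost          # go back: resend from the lost frame
        else:
            base = end + 1             # all ACKed; slide past the window

    return wire

# Frames 1-5, window 5, frame 3 lost once: 1 2 3 4 5 go out, then 3 4 5 again
assert go_back_n_trace(5, 5, {3}) == [1, 2, 3, 4, 5, 3, 4, 5]
```

Frames 4 and 5 are resent even though their first copies arrived intact, which is exactly the inefficiency Selective Repeat (below) removes.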

3. Selective Repeat ARQ:


Selective Repeat (or Selective Retransmission) ARQ is more efficient than Go-Back-N. Here,
the sender also sends multiple frames (up to a window size), but only the specific frames that
are lost or corrupted are retransmitted.
• The receiver can accept frames out of order and buffer them until the missing frame is
received. Once the missing frame is received, the receiver can reorder and pass the
correct sequence to the higher layer.
• Both the sender and receiver maintain a window for tracking frames, and the receiver
sends individual ACKs for each successfully received frame.

Example (sender window of 3, frames 0-4):
• Step 1 − The sender transmits Frame 0 and starts a timer for it.
• Step 2 − Without waiting for an acknowledgment, the sender transmits Frame 1 and starts its timer.
• Step 3 − In the same way, Frame 2 is transmitted with its own timer, again without waiting for a previous acknowledgment.
• Step 4 − When ACK0 arrives before Frame 0's timer expires, that timer is stopped, the window slides, and the next frame, Frame 3, is sent.
• Step 5 − When ACK1 arrives before Frame 1's timer expires, that timer is stopped and the next frame, Frame 4, is sent.
• Step 6 − If ACK2 does not arrive before Frame 2's timer expires, the sender declares a timeout for Frame 2 and retransmits only Frame 2, assuming it was lost or damaged.

Advantages:
• Most efficient in terms of retransmission because only the frames with errors are resent.
• Ideal for networks with high error rates and large window sizes, as it minimizes the
number of retransmitted frames.
• Better utilization of bandwidth compared to Go-Back-N.
Disadvantages:
• More complex to implement because the sender and receiver need to maintain buffers
and manage out-of-order frames.
• Both the sender and receiver need to track multiple sequence numbers for the sliding
window.

4. High-Level Data Link Control (HDLC)


HDLC (High-Level Data Link Control) is a widely used data link layer protocol in the OSI
model that provides reliable communication for both point-to-point and multipoint networks.
It is a bit-oriented protocol designed for transmitting data packets in a reliable and efficient
manner.
Key Features
1. Bit-Oriented Protocol: HDLC treats data as a continuous stream of bits, allowing for
flexible framing of data packets. It uses special bit patterns (flags) to indicate the start
and end of frames.
2. Frame Structure: HDLC frames are structured as follows:
• Flag Field: Marks the beginning and end of the frame (bit pattern 01111110).
• Address Field: Identifies the sender or receiver, used primarily in multipoint
configurations.
• Control Field: Contains control information, including sequence numbers for
frames and acknowledgment.
• Data Field: Contains the actual data being transmitted.
• FCS (Frame Check Sequence): A checksum used for error detection, typically
using a CRC (Cyclic Redundancy Check).
3. Operational Modes: HDLC can operate in three modes:
• Normal Response Mode (NRM): One primary station controls communication
with multiple secondary stations.
• Asynchronous Response Mode (ARM): Secondary stations can initiate
communication without waiting for permission from the primary station.
• Asynchronous Balanced Mode (ABM): All stations have equal privileges to
send and receive data, making it suitable for peer-to-peer communication.
4. Frame Types: HDLC defines three types of frames:
• Information Frames (I-frames): Carry user data and flow control information.
• Supervisory Frames (S-frames): Used for flow and error control; they
acknowledge received I-frames.
• Unnumbered Frames (U-frames): Used for control commands (e.g.,
establishing or terminating connections).
5. Error Control:
• HDLC employs ARQ (Automatic Repeat reQuest) mechanisms to ensure
reliable transmission. It uses the FCS for error detection and retransmits frames
when errors are detected.
6. Flow Control:
• HDLC uses sliding window techniques to manage the flow of data, allowing
multiple frames to be sent before waiting for an acknowledgment.
Advantages of HDLC
• Reliability: HDLC provides robust error detection and correction mechanisms,
ensuring data integrity.
• Flexibility: Supports various network configurations (point-to-point and multipoint).
• Efficiency: The use of sliding windows and selective retransmission improves overall
throughput.

Disadvantages of HDLC
• Complexity: The protocol is more complex than simpler protocols, which may
complicate implementation.
• Resource Requirements: Requires more processing power and memory for buffering
and managing frames.
Applications
• Wide Area Networks (WANs): Commonly used for communication over leased lines
and in various WAN protocols.
• X.25 Networks: Forms the basis for X.25, a standard for packet-switched networks.
• Point-to-Point Protocol (PPP): HDLC principles are used in PPP for establishing
internet connections over serial links.

Medium Access sub layer:


1. Point-to-Point Protocol (PPP):
Point-to-Point Protocol (PPP) is a data link layer protocol widely used for establishing direct
connections between two network nodes. It is commonly used for dial-up internet connections
and point-to-point links in various networking scenarios.
Key Features
a) Frame Structure: PPP encapsulates network layer packets into frames, allowing
various network layer protocols (like IP, IPX, and AppleTalk) to be carried over the
link.
The PPP frame consists of:
• Flag Field: Indicates the start and end of a frame (bit pattern 01111110).
• Address Field: Usually set to a specific value (not used in many applications).
• Control Field: Indicates the type of frame.
• Protocol Field: Identifies the network layer protocol being encapsulated.
• Data Field: Contains the actual data being transmitted.
• FCS (Frame Check Sequence): Used for error detection.

b) Encapsulation:
• PPP supports encapsulating multiple protocols, allowing it to work with
different network layer protocols.
c) Error Detection:
• PPP uses the FCS field for error detection, typically employing CRC to check
for errors in the transmitted frames.
d) Link Control Protocol (LCP):
• PPP includes LCP to establish, configure, and test the data link connection. LCP
can negotiate options such as maximum frame size and authentication methods.
e) Authentication:
• PPP supports several authentication protocols to verify the identity of the
connecting devices. Common methods include:
▪ Password Authentication Protocol (PAP): A simple
username/password authentication method.
▪ Challenge Handshake Authentication Protocol (CHAP): A more
secure method that uses a challenge-response mechanism for
authentication.
f) Network Layer Protocol Support:
• PPP can encapsulate various network layer protocols, allowing for flexible
integration with different types of networks.

Advantages of PPP
• Versatility: Supports multiple network layer protocols, making it adaptable for various
applications.
• Error Detection: Includes error detection mechanisms to ensure data integrity.
• Authentication: Provides built-in authentication methods for secure connections.
• Easy Configuration: Simple to set up for point-to-point links, particularly in dial-up
connections.
Disadvantages of PPP
• Overhead: The encapsulation adds some overhead compared to more direct protocols,
which may slightly reduce efficiency.
• Complexity in Configuration: While simple in theory, some PPP configurations
(especially with authentication) can be complex.
Applications
• Dial-Up Internet Access: Commonly used in traditional dial-up connections to ISPs.
• Leased Line Connections: Often used for connecting remote offices over dedicated
lines.
• VPNs (Virtual Private Networks): PPP can be utilized in establishing secure
connections over the internet.
2. Link Control Protocol (LCP)
Link Control Protocol (LCP) is a component of the Point-to-Point Protocol (PPP) used
for establishing, configuring, and testing the data link connection between two nodes.
Key Features:
• Establishing Links: LCP is responsible for initiating and terminating the PPP link.
• Configuration Options: It negotiates various link parameters, such as maximum
transmission unit (MTU), authentication protocols, and error detection methods.
• Testing Links: LCP can perform link quality testing to ensure the connection is stable
and functioning correctly.
• Error Detection: It manages error detection and recovery mechanisms.
Functions:
• Configuration Phase: During this phase, LCP negotiates the options for the link, such
as compression and authentication.
• Open Phase: The link is opened after successful negotiation.
• Close Phase: The link can be gracefully closed by either side.

3. Network Control Protocol (NCP)


Network Control Protocol (NCP) is a component of PPP that allows multiple network
layer protocols to operate over a single PPP link. Each network layer protocol has its
own NCP.
Key Features:
• Protocol-Specific Configuration: NCPs negotiate options specific to the network
layer protocol being used (e.g., IP, IPX).
• Encapsulation: Each NCP is responsible for encapsulating and decapsulating the
corresponding network layer packets.
• Support for Multiple Protocols: Allows for the simultaneous use of different network
protocols over a single PPP connection.
Common NCPs:
• IPCP (IP Control Protocol): Used for configuring and managing IP.
• IPXCP (IPX Control Protocol): Used for configuring and managing IPX
(Internetwork Packet Exchange).
• AppleTalk Control Protocol: Used for configuring AppleTalk.

4. Token Ring
Token Ring is a networking technology that uses a token-passing protocol to manage
access to the network. It was developed by IBM and operates at the data link layer
(Layer 2) of the OSI model.
Key Features:
• Token Passing: A special data packet called a "token" circulates around the network.
Only the device that possesses the token can send data, which helps prevent collisions.
• Star Topology: Physically, Token Ring networks are often configured in a star topology
using a hub or a central device, although the logical topology is a ring.
• Data Rates: Originally designed for 4 Mbps and later upgraded to 16 Mbps.
Operation:
• Token Generation: When a device wants to send data, it waits for the token. When it
receives the token, it can transmit its data.
• Releasing the Token: After sending its data, the device releases the token back onto
the network, allowing the next device to transmit.
Advantages:
• Collision-Free Communication: The token-passing mechanism ensures that only one
device transmits at a time, reducing the chances of collisions.
• Deterministic Access: Provides predictable performance because the token ensures fair
access to the network.
Disadvantages:
• Complexity: The token-passing mechanism adds complexity compared to Ethernet’s
carrier-sense multiple access (CSMA).
• Single Point of Failure: If the token is lost or the ring is broken, the network can
become inoperable.

Reservation and Polling:


Reservation and polling are two methods used for managing access to a shared
communication medium in networking, particularly in scenarios involving multiple
users or devices.
1. Reservation:
Reservation is a method in which a device requests and reserves a specific amount of
bandwidth or time slot for communication before it sends data. This approach is often
used in network environments where resources need to be allocated to ensure quality
of service (QoS).
Key Features:
• Guaranteed Bandwidth: Devices can reserve specific bandwidth or time slots,
ensuring that they have the necessary resources for transmission.
• Pre-allocated Resources: Once a reservation is made, the resources are
allocated exclusively for that device for a specified duration.
• Reduced Contention: Since devices reserve resources ahead of time, the
chances of collisions or data contention are minimized.
Use Cases:
• Real-time Applications: Reservation is commonly used in applications that
require guaranteed bandwidth, such as voice over IP (VoIP), video
conferencing, and streaming services.
• Integrated Services: In networks implementing Integrated Services (IntServ),
where end-to-end QoS is crucial.
2. Polling:
Polling is a method where a central controller (master) systematically checks each
device (slave) in the network to see if it has data to send. This method is often used in
networks where multiple devices need to communicate but where it’s not feasible for
every device to transmit at will.
Key Features:
• Central Control: A master device controls the polling process, asking each
slave device if it has data to send.
• Time Division: Devices take turns transmitting data based on the polling
schedule, which can be fixed or dynamic.
• Minimized Collisions: By controlling access to the medium, polling reduces
the likelihood of data collisions.
Use Cases:
• Controlled-Access Networks: Polling is used in multipoint, controlled-access networks where a central (primary) device manages which station may transmit.
• Industrial Control Systems: Often used in environments where devices need
to be queried periodically, such as in process control and automation.
Pure ALOHA:
In pure ALOHA, a station transmits whenever it has a frame ready, without first checking whether the channel is idle, so collisions can occur and frames can be lost. After transmitting, the station waits for the receiver's acknowledgment. If no acknowledgment arrives within a specified time, the station assumes the frame was lost or destroyed, waits a random backoff time (Tb), and retransmits. This repeats until the frame is successfully delivered.
1. The total vulnerable time of pure ALOHA is 2 × Tfr (twice the frame transmission time).
2. Maximum throughput occurs at G = 1/2 and is about 18.4%.
3. The probability of successful transmission is S = G × e^(−2G).
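The throughput formula can be checked numerically; the peak of S = G·e^(−2G) is at G = 1/2:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): expected successful frames per frame time at load G."""
    return G * math.exp(-2 * G)

peak = pure_aloha_throughput(0.5)
assert abs(peak - 0.184) < 0.001             # about 18.4%
assert peak > pure_aloha_throughput(0.25)    # below the optimum load
assert peak > pure_aloha_throughput(1.0)     # above the optimum load
```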

For example, with four stations accessing a shared channel, most frames sent at overlapping times collide; only the frames that do not overlap with any other transmission arrive successfully, while the rest are lost or destroyed. Whenever two frames occupy the shared channel at the same time, both are damaged, even if only the first bit of the new frame enters the channel before the last bit of the earlier frame has left it. In either case, both stations must retransmit their frames.
Advantages:
• Simplicity: Easy to implement and understand.
• Flexibility: Devices can transmit whenever they need to.
Disadvantages:
• Low Efficiency: As traffic increases, collisions become more frequent, reducing
overall throughput.
• Lack of Coordination: No mechanism to manage when devices can transmit, leading
to potential data loss.

Slotted ALOHA:
Slotted ALOHA was designed to improve on pure ALOHA's efficiency, since pure ALOHA has
a very high probability of frame collisions. In slotted ALOHA, time on the shared channel is
divided into fixed intervals called slots. A station may begin transmitting a frame only at the
beginning of a slot, and only one frame may be sent per slot. A station that misses the start of
a slot must wait for the beginning of the next one. A collision can still occur, however, when
two or more stations try to send a frame at the beginning of the same time slot.
1. Maximum throughput occurs in slotted ALOHA at G = 1 and equals 1/e, i.e. about 36.8%.
2. The probability of successfully transmitting a data frame in slotted ALOHA is S =
G × e^(−G) (note the single G in the exponent, unlike pure ALOHA's e^(−2G)).
3. The total vulnerable time required in slotted ALOHA is Tfr.
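Because slotting halves the vulnerable period, the exponent in slotted ALOHA's throughput is −G rather than pure ALOHA's −2G, which doubles the peak throughput. A small sketch of the slotted-ALOHA curve:

```python
import math

def slotted_aloha_throughput(G: float) -> float:
    """Throughput of slotted ALOHA: S = G * e^(-G); a frame is lost only
    if another frame is offered in the same slot (vulnerable time Tfr)."""
    return G * math.exp(-G)

# Peak at G = 1, giving S = 1/e, about 0.368 -- twice pure ALOHA's 0.184.
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```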

Advantages:
• Higher Efficiency: Fewer collisions result in better utilization of the channel.
• Synchronization: The use of time slots helps to coordinate access, reducing data loss.
Disadvantages:
• Complexity: Slightly more complex than Pure ALOHA due to the need for
synchronization.
• Slot Timing: Devices must synchronize to the time slots, which may introduce delays.

CSMA (Carrier Sense Multiple Access):


CSMA is a media access protocol in which a station senses the traffic on the channel (idle or
busy) before transmitting data. If the channel is idle, the station can send its data; otherwise,
it must wait until the channel becomes idle. Sensing the carrier first reduces the chance of a
collision on the transmission medium.
CSMA Access Modes:
1-Persistent: In 1-persistent CSMA, a node first senses the shared channel; if the channel is
idle, it transmits immediately. If the channel is busy, the node keeps monitoring the channel
continuously and transmits the frame as soon as the channel becomes idle (i.e., with
probability 1).
Non-Persistent: In non-persistent CSMA, a node senses the channel before transmitting; if the
channel is idle, it sends immediately. If the channel is busy, the station waits a random amount
of time (rather than sensing continuously), then senses again and transmits when the channel
is found idle.
P-Persistent: P-persistent CSMA combines the 1-persistent and non-persistent approaches. A
node senses the channel, and if the channel is idle, it transmits the frame with probability p.
With probability q = 1 − p it defers, waits for the next time slot, and repeats the process.
O-Persistent: In O-persistent CSMA, a transmission order (priority) is assigned to each station
before transmission on the shared channel. When the channel is found idle, each station waits
for its assigned turn to transmit.
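The p-persistent decision loop described above can be sketched as follows. This is a hedged illustration, not a real driver: `channel_idle`, `wait_slot`, and `transmit` are hypothetical callbacks standing in for the actual channel hardware.

```python
import random

def p_persistent_send(channel_idle, wait_slot, transmit, p=0.3):
    """Hedged sketch of p-persistent CSMA; channel_idle, wait_slot and
    transmit are hypothetical callbacks, not a real API."""
    while True:
        while not channel_idle():      # sense until the channel is free
            wait_slot()
        if random.random() < p:        # transmit with probability p
            transmit()
            return
        wait_slot()                    # defer with probability q = 1 - p
```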
CSMA/ CD:
CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for
transmitting data frames that operates at the medium access control layer. A station first
senses the shared channel; if the channel is idle, it transmits a frame while monitoring the
channel to check whether the transmission succeeds. If the frame is received successfully, the
station can send the next frame. If a collision is detected, the station sends a jam signal on the
shared channel to abort the transmission, then waits a random time before attempting to send
the frame again.
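The "random time" after a collision is commonly chosen by binary exponential backoff, as in Ethernet. A sketch, assuming the classic 10 Mbps slot time of 512 bit times (51.2 µs):

```python
import random

SLOT_TIME = 51.2e-6  # classic 10 Mbps Ethernet slot time: 512 bit times

def backoff_delay(collisions: int) -> float:
    """Binary exponential backoff (sketch): after the n-th successive
    collision, wait k * SLOT_TIME where k is drawn uniformly
    from 0 .. 2**min(n, 10) - 1."""
    k = random.randint(0, 2 ** min(collisions, 10) - 1)
    return k * SLOT_TIME

# After the 1st collision a station waits 0 or 1 slot times;
# after the 3rd, anywhere from 0 to 7 slot times; the range
# stops doubling after 10 collisions.
```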

CSMA/ CA:
CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for
transmitting data frames that also works at the medium access control layer. After a station
sends a data frame on the channel, it uses what it hears back to judge whether the channel was
clear. If the station receives only a single signal (its own), the data frame was successfully
transmitted to the receiver. If it receives two signals (its own and one from another station), a
collision of frames has occurred on the shared channel; the sender thus detects the collision
from what it receives after transmitting.
The following methods are used in CSMA/CA to avoid collisions:
a) Interframe space: The station waits for the channel to become idle, but even when it
finds the channel idle, it does not send the data immediately. Instead, it waits for an
additional period called the interframe space, or IFS. The IFS duration is often used
to assign priority among stations.
b) Contention window: Time is divided into slots. When a station is ready to transmit a
data frame, it chooses a random number of slots as its wait time. If the channel
becomes busy during the countdown, the station does not restart the entire process;
it only pauses the timer and resumes the countdown once the channel is idle again.
c) Acknowledgment: The receiver returns an acknowledgment for each correctly
received frame. If the sender's timer expires before the acknowledgment arrives, it
retransmits the data frame.
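The three mechanisms above fit together in one send loop. The sketch below is illustrative only; `channel_idle`, `wait_slot`, `transmit`, and `ack_received` are hypothetical callbacks, not a real wireless API.

```python
import random

def csma_ca_send(channel_idle, wait_slot, transmit, ack_received,
                 ifs_slots=2, cw=8):
    """Hedged sketch of a CSMA/CA send loop under assumed callbacks."""
    while True:
        while not channel_idle():          # carrier sense
            wait_slot()
        for _ in range(ifs_slots):         # a) interframe space (IFS)
            wait_slot()
        backoff = random.randint(0, cw - 1)
        while backoff > 0:                 # b) contention-window countdown
            if channel_idle():
                backoff -= 1               # timer pauses while channel busy
            wait_slot()
        transmit()
        if ack_received():                 # c) ack confirms success
            return
        # no ack before timeout: assume loss and retry the whole procedure
```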

Traditional Ethernet:
A networking standard that utilizes the CSMA/CD protocol, typically operating at 10 Mbps. It
often uses a bus topology where all devices share the same communication medium.
• How It Works:
a) Carrier Sensing: Devices sense the channel before transmitting. If the channel
is idle, they can send data.
b) Collision Detection: If a collision occurs (two devices transmit at the same
time), devices stop transmitting, send a jamming signal, and wait a random time
before retrying.
• Efficiency: Works well under moderate traffic; throughput degrades under heavy
traffic as collisions become more frequent.

Fast Ethernet:
An upgrade to traditional Ethernet that operates at speeds of 100 Mbps, maintaining
compatibility with older Ethernet technologies.
• How It Works:
a) Retains the same CSMA/CD protocol for managing access to the shared
medium.
b) Typically uses a star topology with switches, which helps reduce collisions
compared to the bus topology of traditional Ethernet.
• Use Cases: Commonly used in local area networks (LANs) to support higher data
transfer rates and increased network performance.
