Computer Networks Pyqs Solved
Time Division: Pure ALOHA has no time slots (a station may transmit at any time), while Slotted ALOHA divides time into fixed slots.
The 3-way handshake is used to establish a reliable connection between a client and a
server in TCP.
Steps:
1. SYN: Client sends a connection request to the server by sending a segment with
the SYN (synchronize) flag set. It also includes an initial sequence number (say,
X).
2. SYN-ACK: The server responds with a segment that has both the SYN and ACK flags set. It acknowledges the client's sequence number by setting the acknowledgment number to X + 1, and sends its own initial sequence number (say, Y).
3. ACK: The client sends an ACK back to the server, acknowledging the server's sequence number by setting the acknowledgment number to Y + 1.
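As an illustrative sketch (not part of the original answer), the handshake is carried out automatically by the operating system's TCP stack the moment a client calls connect(); the host name and port below are placeholders for any reachable TCP server.

```python
import socket

# The three-way handshake happens inside connect():
# the client sends SYN, receives SYN-ACK, and replies with ACK
# before connect() returns.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("example.com", 80))   # SYN -> SYN-ACK -> ACK
print("Connection established")
client.close()
```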
We divide 1010011110000 by 1011 using modulo-2 (XOR) division; the remainder of this division is the CRC. Carrying out the division gives a remainder of 001, so the CRC is 001 and the transmitted codeword is the dataword followed by the CRC: 1010011110001.
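A small Python sketch of the modulo-2 (XOR) long division described above; the function and variable names are chosen purely for illustration.

```python
def crc_remainder(dividend: str, divisor: str) -> str:
    """Binary long division using XOR; returns the CRC remainder."""
    bits = list(dividend)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i] == '1':                     # divide only when the leading bit is 1
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return ''.join(bits[-(n - 1):])            # remainder is the last (n - 1) bits

print(crc_remainder("1010011110000", "1011"))  # -> 001
```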
The Leaky Bucket algorithm is a traffic shaping and congestion control technique used
in computer networks to regulate the data flow. It ensures that data packets are
transmitted at a constant and manageable rate, even if they are arriving at irregular
intervals. The concept is based on a simple analogy: imagine a bucket with a small hole
at the bottom. Water (representing data packets) can be added to the bucket at any rate,
but it leaks out at a fixed rate through the hole. If water is poured in too fast and exceeds
the bucket's capacity, the excess water overflows and is lost. Similarly, in networking, if
too many packets arrive at once and the buffer (or bucket) is full, the extra packets are
discarded.
This algorithm helps in smoothing out bursts of traffic and prevents network congestion
by controlling the output rate. It is especially useful in scenarios where devices may send
data in unpredictable bursts but the network must handle data at a steady pace. In
summary, the Leaky Bucket algorithm acts like a traffic regulator that ensures data flows out of a system in a controlled and uniform manner, improving overall network stability.
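A minimal Python simulation of the leaky bucket idea described above; the parameter values are arbitrary and chosen only to show packets being queued, sent at a fixed rate, and dropped on overflow.

```python
from collections import deque

def leaky_bucket(arrivals, capacity, leak_rate):
    """arrivals[t] = packets arriving at step t; capacity = bucket size;
    leak_rate = packets sent out per step (the constant output rate)."""
    bucket = deque()
    for t, arriving in enumerate(arrivals):
        # Add arriving packets; anything beyond the capacity overflows and is dropped.
        for _ in range(arriving):
            if len(bucket) < capacity:
                bucket.append(t)
            else:
                print(f"t={t}: packet dropped (bucket full)")
        # Leak out at the fixed rate.
        sent = 0
        while bucket and sent < leak_rate:
            bucket.popleft()
            sent += 1
        print(f"t={t}: sent {sent}, queued {len(bucket)}")

leaky_bucket(arrivals=[5, 0, 3, 0, 0], capacity=4, leak_rate=2)
```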
Conclusion: OSI is a reference model, more detailed and theoretical, while TCP/IP is
widely used in real networks.
7. What is bit rate? What is baud rate? An analog signal carries 4 bits in each signal
unit. If 1000 signal units are sent per second, find the baud rate and bit rate.
Bit rate: The number of bits transmitted per second, measured in bits per second (bps).
Baud rate: The number of signal units (symbols) transmitted per second.
● Measured in baud.
Formula: Bit rate = Baud rate × Number of bits per signal unit.
Given:
● Bits per signal unit = 4
● Signal units per second = 1000 (i.e., Baud rate = 1000 baud)
Calculation: Bit rate = 1000 × 4 = 4000 bits per second.
Final Answer: Baud rate = 1000 baud; Bit rate = 4000 bps.
Number of subnets:
For Class C, the default subnet mask is /24 (24 network bits).
Extra 2 bits for subnetting → 2^2 = 4 subnets, giving a /26 mask (255.255.255.192).
Each subnet has 2^6 = 64 addresses.
Final Answers: 4 subnets, each with 64 addresses, using subnet mask /26 (255.255.255.192).
10. Explain “Stop and Wait ARQ Protocol” with the help of a diagram.
The Stop and Wait ARQ (Automatic Repeat reQuest) protocol is one of the simplest
error control methods used in data communication. It ensures reliable transmission by
sending one data frame at a time and waiting for an acknowledgment (ACK) before
sending the next frame.
In this method, the sender transmits a single frame, then stops and waits until it
receives an acknowledgment from the receiver. If the acknowledgment is received
successfully, the sender proceeds to send the next frame. However, if the
acknowledgment is lost or not received within a certain time (timeout), the sender
resends the same frame, assuming it was lost or corrupted.
🔁 Working Steps:
1. The Sender transmits a single data frame to the Receiver.
2. The Receiver receives the frame and sends back an acknowledgment (ACK).
3. On receiving the ACK, the Sender transmits the next frame.
4. If the ACK is not received, the Sender resends the same frame after a timeout.
⚠️Error Handling:
● If an ACK is lost, the sender will again resend the frame, and the receiver will
identify it as a duplicate (if it already received it).
✅ Advantages:
● Simple to implement and needs very little buffer space, since only one frame is outstanding at a time.
❌ Disadvantages:
● Inefficient for long-distance or high-latency networks, as it waits for each ACK before
continuing.
(https://www.geeksforgeeks.org/stop-and-wait-arq/)
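A toy Python sketch of the Stop-and-Wait behaviour described above, assuming a randomly lossy channel; the loss probability and retry limit are illustrative values, not part of the protocol.

```python
import random

def stop_and_wait(frames, loss_probability=0.3, max_retries=10):
    """Send one frame, wait for its ACK, and retransmit on timeout."""
    for seq, frame in enumerate(frames):
        for attempt in range(max_retries):
            print(f"Sending frame {seq} (attempt {attempt + 1})")
            ack_received = random.random() > loss_probability  # simulated lossy channel
            if ack_received:
                print(f"ACK {seq} received, sending next frame")
                break
            print(f"Timeout for frame {seq}, retransmitting")
        else:
            raise RuntimeError(f"Frame {seq} not delivered after {max_retries} attempts")

stop_and_wait(["F0", "F1", "F2"])
```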
11. Define each term, and clarify the key difference(s) between the two terms:
“Iterative DNS query” and “Recursive DNS query”
In the Domain Name System (DNS), when a client (usually your computer or browser)
wants to find the IP address of a website (like www.google.com), it sends a DNS
query. There are two main types of DNS queries that define how this request is
resolved: Recursive query and Iterative query.
In a recursive query, the client asks the DNS server to take full responsibility for resolving the name: the server itself queries other servers as needed and returns the final answer (or an error) to the client.
● Example: Your computer sends a recursive query to your ISP's DNS server asking for the IP of www.google.com. That server will contact other DNS servers (like root, TLD, and authoritative servers) until it finds the IP and returns it to you.
In an iterative query, the DNS server responds with the best answer it has, usually a
referral to another DNS server closer to the answer. The client then follows up with
another query to the referred server, and this process continues until the final answer is
found.
✅ Key Differences:
● In a recursive query, the DNS server does all the work and returns the final answer; in an iterative query, the server only returns a referral and the client performs the follow-up queries itself.
● Recursive queries place more load on the DNS server, while iterative queries place more load on the client.
12. Which field or fields are used in TCP’s 3-way handshake to open a new
connection? What information is conveyed during the handshake, and how?
1. SYN (Client → Server):
○ SYN = 1
○ Sequence Number = X (the client's initial sequence number)
● This says: "I want to start a connection, and my starting sequence number is X."
2. SYN-ACK (Server → Client):
○ SYN = 1, ACK = 1
○ Sequence Number = Y (the server's initial sequence number)
○ Acknowledgment Number = X + 1
● This says: "I accept your request, I acknowledge your sequence number, and my own starting sequence number is Y."
3. ACK (Client → Server):
○ ACK = 1
○ Sequence Number = X + 1
○ Acknowledgment Number = Y + 1
● This final step says: "I confirm your sequence number. Let's start data transmission."
13. What is congestion? Explain token bucket algorithm for congestion control.
Congestion in computer networks occurs when the number of packets sent into the
network exceeds the capacity of the network to handle them. This leads to packet delays,
loss of data, retransmissions, and overall degradation in network performance.
Congestion is especially common in routers where too many packets arrive in a short
span, and the device cannot process or forward them quickly enough, leading to buffer
overflows and dropped packets.
To manage such scenarios, the token bucket algorithm is widely used as a congestion
control and traffic shaping technique. In this method, a bucket is used to store tokens,
which are generated at a constant rate. Each token grants permission to send a unit of data
(like a byte or packet). If a packet arrives and there are sufficient tokens in the bucket, the
required number of tokens are removed, and the packet is transmitted. However, if there
are not enough tokens, the packet may be queued (delayed) until enough tokens
accumulate, or it may be discarded, depending on the implementation.
This approach allows the network to control the average rate of data transmission
while still permitting short bursts of data when tokens have been saved up. The token
bucket is particularly effective because, unlike the leaky bucket algorithm which enforces
a strict constant output rate, the token bucket accommodates the natural burstiness of
real-world traffic without overwhelming the network.
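A small Python sketch of a token bucket, assuming tokens are counted per packet (or byte) and refilled continuously over time; the rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Tokens accumulate at a fixed rate; a packet is sent only if enough
    tokens are available, which permits short bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size=1):
        now = time.monotonic()
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True         # enough tokens: transmit
        return False            # not enough tokens: queue or drop

bucket = TokenBucket(rate=5, capacity=10)
print(bucket.allow(4))   # True  - burst allowed while tokens are saved up
print(bucket.allow(8))   # False - bucket temporarily exhausted
```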
No. of Layers: OSI has 7 layers; TCP/IP has 4 layers.
Summary:
Slotted ALOHA and Pure ALOHA are both random access protocols used to manage
how data is transmitted over a shared communication channel. The main difference
between the two lies in how they handle the timing of data transmission. In Pure
ALOHA, a station can transmit data at any time, which leads to a higher chance of
collision, as the data can overlap with other transmissions that begin at any point. This
results in a low efficiency, and the maximum throughput of Pure ALOHA is only about
18.4%, or 0.184.
In contrast, Slotted ALOHA divides the time into equal-sized slots and requires that
stations only begin transmission at the start of a time slot. This synchronization
significantly reduces the chances of collisions, as overlapping transmissions can now
only happen if two or more stations choose the exact same time slot. Due to this
improvement, the maximum throughput of Slotted ALOHA becomes about 36.8%, or
0.368, which is nearly twice the efficiency of Pure ALOHA.
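These two figures follow from the standard throughput formulas S = G * e^(-2G) for Pure ALOHA and S = G * e^(-G) for Slotted ALOHA, which peak at an offered load of G = 0.5 and G = 1 respectively; a quick check in Python:

```python
import math

G_pure, G_slotted = 0.5, 1.0                  # offered load at which throughput peaks
S_pure = G_pure * math.exp(-2 * G_pure)       # S = G * e^(-2G)  -> about 0.184
S_slotted = G_slotted * math.exp(-G_slotted)  # S = G * e^(-G)   -> about 0.368
print(round(S_pure, 3), round(S_slotted, 3))
```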
Explain the concept of piggybacking. (5 Marks)
Piggybacking is a technique in which the receiver delays sending a separate acknowledgment (ACK) frame and instead attaches the ACK to its own outgoing data frame.
● Instead, it waits until it has data to send back, and includes the ACK with its
outgoing data.
● If the receiver doesn’t have data to send within a certain time, it sends a separate
ACK to avoid sender timeout.
✅ Advantages of Piggybacking:
● Better channel utilization: Combines data and ACK into one frame, so fewer control frames are needed.
❌ Disadvantages of Piggybacking:
● Delay in acknowledgment: If the receiver has no data to send back quickly, it delays the ACK.
● Timer management complexity: The sender may time out if the ACK is delayed too long.
● Total Length (16 bits): Length of the header plus data, with a minimum value of 20 bytes and a maximum of 65,535 bytes.
● Identification: Unique Packet Id for identifying the group of fragments of a
single IP datagram (16 bits)
● Flags (3 bits): Three 1-bit flags, in order: a reserved bit (must be zero), the Do Not Fragment (DF) flag, and the More Fragments (MF) flag.
● Fragment Offset (13 bits): Represents the number of data bytes ahead of this fragment in the original datagram. It is specified in units of 8 bytes, so the maximum offset is 65,528 bytes.
● Time to Live (8 bits): The datagram's lifetime. It prevents the datagram from looping through the network by restricting the number of hops a packet can take before it is delivered to the destination.
● Protocol: Name of the protocol to which the data is to be passed (8 bits)
● Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
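A short Python sketch that unpacks the fixed 20-byte IPv4 header described above; options are ignored, and the sample header bytes are made up for illustration (its checksum is not valid).

```python
import struct

def parse_ipv4_header(raw: bytes):
    """Unpack the fixed 20-byte IPv4 header (options not handled)."""
    (ver_ihl, tos, total_length, identification, flags_frag,
     ttl, protocol, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,                         # header length in 32-bit words
        "total_length": total_length,
        "identification": identification,
        "flags": flags_frag >> 13,                     # reserved, DF, MF
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # offset in bytes (8-byte units)
        "ttl": ttl,
        "protocol": protocol,                          # e.g. 6 = TCP, 17 = UDP
        "source": ".".join(map(str, src)),
        "destination": ".".join(map(str, dst)),
    }

# Hypothetical header: version 4, IHL 5, DF set, TTL 64, protocol 6 (TCP).
sample = bytes.fromhex("45000034abcd4000400600001a2b3c4d5e6f7a8b")
print(parse_ipv4_header(sample))
```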
19. An ISP is granted a large block of addresses starting with 190.100.0.0/16. The ISP needs to distribute it to three groups of customers as follows. I. The 1st group has 64 customers; each needs 256 IP addresses. II. The 2nd group has 128 customers; each needs 128 addresses. III. The 3rd group has 128 customers; each needs 64 addresses. Design the sub-blocks and give the slash notation for each sub-block.
To allocate the IP address block 190.100.0.0/16 for the three customer groups with
different requirements, we need to use subnetting to efficiently assign addresses to each
group.
● 1st Group:
○ Number of customers: 64
○ Addresses needed per customer: 256
● 2nd Group:
○ Number of customers: 128
○ Addresses needed per customer: 128
● 3rd Group:
○ Number of customers: 128
○ Addresses needed per customer: 64
To subnet the IP address block, we need to calculate how many addresses each group
requires and then determine the appropriate subnet mask.
Group 1:
● The smallest power of 2 that can accommodate 256 addresses is 2^8 = 256.
● So, each subnet for the 1st group will need 256 addresses, which requires a /24 subnet mask (because 32 − 8 = 24).
Group 2:
● The smallest power of 2 that can accommodate 128 addresses is 2^7 = 128.
● So, each subnet for the 2nd group will need 128 addresses, which requires a /25 subnet mask (because 32 − 7 = 25).
Group 3:
● The smallest power of 2 that can accommodate 64 addresses is 2^6 = 64.
● So, each subnet for the 3rd group will need 64 addresses, which requires a /26 subnet mask (because 32 − 6 = 26).
The first subnet for the 1st group starts at 190.100.0.0/24. The next subnet will start from
190.100.1.0/24, and so on.
Slash notation for Group 1: 190.100.0.0/24, 190.100.1.0/24, 190.100.2.0/24, …,
190.100.63.0/24.
The first subnet for the 2nd group starts at 190.100.64.0/25. The next subnet will start from 190.100.64.128/25, and so on.
Slash notation for Group 2: 190.100.64.0/25, 190.100.64.128/25, 190.100.65.0/25, …, 190.100.127.128/25.
The first subnet for the 3rd group starts at 190.100.128.0/26. The next subnet will start from 190.100.128.64/26, and so on.
Slash notation for Group 3: 190.100.128.0/26, 190.100.128.64/26, 190.100.128.128/26, …, 190.100.159.192/26.
Conclusion
By dividing the 190.100.0.0/16 address block into subnets of varying sizes based on the
specific requirements of each group, we efficiently allocate the required number of IP
addresses to each group while ensuring optimal utilization of the available address space.
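The same plan can be checked with Python's standard ipaddress module; the /18 and /19 starting blocks below are simply the ranges left over after the previous group, as worked out above.

```python
import ipaddress

base = ipaddress.ip_network("190.100.0.0/16")

# Carve the /16 into the three groups described above.
group1 = list(base.subnets(new_prefix=24))[:64]           # 64 x /24 (256 addresses each)
group2 = list(ipaddress.ip_network("190.100.64.0/18")
              .subnets(new_prefix=25))[:128]              # 128 x /25 (128 addresses each)
group3 = list(ipaddress.ip_network("190.100.128.0/19")
              .subnets(new_prefix=26))[:128]              # 128 x /26 (64 addresses each)

print(group1[0], group1[-1])   # 190.100.0.0/24    190.100.63.0/24
print(group2[0], group2[-1])   # 190.100.64.0/25   190.100.127.128/25
print(group3[0], group3[-1])   # 190.100.128.0/26  190.100.159.192/26
```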
1. Address Length:
● IPv4: IPv4 addresses are 32-bit long, allowing for about 4.3 billion unique IP
addresses (2^32).
● IPv6: IPv6 addresses are 128-bit long, allowing for an almost unlimited number
of IP addresses (2^128), which equals 340 undecillion addresses.
2. Address Representation:
● IPv4: Written in dotted-decimal notation, e.g., 192.168.1.1.
● IPv6: Written as eight groups of four hexadecimal digits separated by colons, e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334.
3. Header Structure:
● IPv4: IPv4 header is smaller but more complex. It has 13 fields and requires
options to be handled separately.
● IPv6: IPv6 header is simplified with only 8 fields and does not include options
directly within the header, leading to more efficient routing.
4. Address Allocation:
● IPv4: Because of the limited address space, IPv4 often depends on NAT (Network Address Translation) so that many devices can share one public address.
● IPv6: IPv6 provides a huge address space, thus eliminating the need for NAT and allowing for direct end-to-end connectivity.
Limitations of IPv4
1. Limited Address Space:
○ IPv4 uses 32-bit addresses, which limits the total number of unique IP
addresses to about 4.3 billion. With the rapid growth of internet-connected
devices, this address space is insufficient to meet global demands, leading
to the exhaustion of IPv4 addresses.
2. Dependence on NAT:
○ Due to the shortage of IPv4 addresses, techniques like NAT are used to
allow multiple devices within a local network to share a single public IP
address. However, NAT complicates network configurations and can
interfere with some types of internet communication, such as peer-to-peer
applications.
3. Complex Header:
○ The IPv4 header is relatively complex and includes many fields, some of
which are rarely used. This leads to inefficient processing and routing,
requiring additional time and resources. IPv6, on the other hand, simplifies
the header for faster processing.
● In private key cryptography, the same key is used for both encryption and
decryption of data.
● Both the sender and receiver must share the secret key beforehand.
● In public key (asymmetric) cryptography, a pair of keys is used: the public key encrypts the data, and only the corresponding private key can decrypt it.
● Example: RSA.
Conclusion
● Private key cryptography is faster and ideal for encrypting large data, but suffers from key distribution issues.
● Public key cryptography is slower but avoids the key distribution problem, which makes it well suited for key exchange and digital signatures.
Public IP Address: A public IP address is globally unique and routable on the internet. It is usually assigned by an ISP and allows a device to be reached directly from the internet.
● Example: 203.0.113.1
Private IP Address: A private IP address is used within a local network and is not
accessible directly from the internet. These IPs are part of specific reserved ranges and
can be reused across different networks. Devices with private IPs use NAT (Network
Address Translation) to communicate with the internet.
● Example: 192.168.1.1
Key Differences:
● Public IPs are globally unique and accessible over the internet, while Private IPs
are used within a local network and not accessible from the internet.
Private IPs allow for efficient use of IP addresses as multiple networks can reuse the
same private IP ranges.
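Python's standard ipaddress module can be used to check whether an address falls in the reserved private ranges; the addresses below are examples (8.8.8.8 is a well-known public DNS address).

```python
import ipaddress

# The RFC 1918 private ranges (10/8, 172.16/12, 192.168/16) are flagged by is_private.
for addr in ("8.8.8.8", "192.168.1.1", "10.0.0.5"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
# 8.8.8.8 public, 192.168.1.1 private, 10.0.0.5 private
```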
TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are both
transport layer protocols used for data transmission across networks. However, they
differ in terms of reliability, speed, and usage.
1. Reliability
● TCP: Provides reliable data transmission. It ensures that data is delivered in order
and checks for errors using acknowledgment (ACK) and retransmission if needed.
● UDP: Does not provide reliability. It sends data without ensuring that it reaches
the destination or without checking for errors.
2. Connection
● TCP: Connection-oriented protocol. It establishes a connection between the sender and receiver before transmitting data, ensuring both ends are ready.
● UDP: Connectionless protocol. It sends datagrams without establishing a connection beforehand.
3. Speed
● TCP: Slower than UDP due to its connection setup, error-checking, and flow
control mechanisms.
● UDP: Faster because it has no connection setup and minimal overhead for error-
checking.
4. Error Handling
● TCP: Includes error detection and correction. It guarantees data integrity through
mechanisms like checksums, acknowledgments, and retransmissions.
● UDP: Includes error detection but no error correction. It simply discards erroneous
data without trying to recover it.
5. Usage
● TCP: Used in applications where reliability and data integrity are critical, such as
HTTP, FTP, Email, and File Transfers.
● UDP: Used in applications where speed is more important than reliability, such as
VoIP, Live Streaming, and Online Gaming.
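A small socket-level sketch of the difference: TCP requires a connection before any data flows, while UDP simply fires a datagram. The host names and ports are placeholders, and the UDP datagram may silently be lost.

```python
import socket

# TCP: connection-oriented - connect() must succeed before data is sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(100))        # delivery and ordering are handled by TCP
tcp.close()

# UDP: connectionless - sendto() sends a datagram with no handshake and no guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 9999))   # may be lost without notice
udp.close()
```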
The TCP header is 20 bytes in size (minimum) and contains essential fields that ensure
reliable data transmission. Here’s a brief explanation of its structure:
1. Source and Destination Port (16 bits each): These fields identify the source and
destination application ports.
2. Sequence Number (32 bits): Indicates the sequence number of the first byte of
data in the segment. This ensures data is delivered in order.
3. Acknowledgment Number (32 bits): Contains the next expected byte to be
received, used for acknowledging data.
4. Data Offset (4 bits): Specifies the length of the TCP header in 32-bit words,
indicating where the data begins.
5. Control Flags (9 bits): Includes flags like SYN, ACK, FIN, and RST, which
manage the connection setup and teardown.
6. Window Size (16 bits): Defines the number of bytes the sender is willing to
receive, helping with flow control.
7. Checksum (16 bits): Used for error-checking the header and data.
8. Urgent Pointer (16 bits): Used when the URG flag is set, indicating the last
urgent byte of data.
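A Python sketch that unpacks these fixed 20-byte header fields and extracts a few of the control flags; the sample segment is constructed artificially for illustration.

```python
import struct

def parse_tcp_header(raw: bytes):
    """Unpack the fixed 20-byte TCP header and extract key fields."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "data_offset": (offset_flags >> 12) * 4,   # header length in bytes
        "SYN": bool(offset_flags & 0x02), "ACK": bool(offset_flags & 0x10),
        "FIN": bool(offset_flags & 0x01), "RST": bool(offset_flags & 0x04),
        "window": window,
    }

# Hypothetical SYN segment: ports 54321 -> 80, seq 1000, data offset 5, SYN set.
sample = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(sample))
```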
1. Traffic Filtering: A bridge examines the MAC (Media Access Control) addresses
in the data frames to decide whether to forward or filter the traffic. It only
forwards data to the segment where the destination device resides, which reduces
network congestion.
2. Learning MAC Addresses: The bridge builds its MAC address table by recording the source MAC address and the port on which each frame arrives.
3. Connecting LAN Segments: A bridge joins two or more LAN segments into a single logical network while keeping their collision domains separate.
4. Forwarding Data Frames: The bridge forwards data frames based on the MAC address table. If the destination device is on the same segment, the bridge will not forward the frame, preventing unnecessary traffic. If the device is on a different segment, it forwards the frame to that segment.
5. Broadcast Filtering: Bridges can prevent broadcasts from flooding the entire
network by selectively forwarding broadcast frames to other segments only when
necessary, thereby reducing network traffic.
● Start Frame Delimiter (SFD) (1 byte): This byte, usually 0xAB, marks the start
of the actual frame, indicating where the data portion begins.
● Destination MAC Address (6 bytes): This is the MAC address of the receiving
device. It ensures that the frame is delivered to the correct recipient.
● Source MAC Address (6 bytes): This is the MAC address of the sender, helping
the receiver identify the origin of the frame.
○ Type: If the value is greater than 1500, it specifies the protocol of the data
being carried (e.g., IPv4, ARP, etc.).
○ Length: If the value is less than or equal to 1500, it represents the length of
the data field (payload).
● Data/Payload (46-1500 bytes): This contains the actual data being transmitted.
The size of the data can vary, but it must be between 46 bytes and 1500 bytes. If
the data is smaller than 46 bytes, padding is added to meet the minimum frame
size.
● Frame Check Sequence (FCS) (4 bytes): This field contains a CRC (Cyclic
Redundancy Check) checksum used to detect errors in the frame. The receiver
recalculates the CRC and checks if it matches the value in this field to ensure the
data has been received correctly.
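Ethernet's FCS uses the CRC-32 polynomial, which is the same one exposed by Python's zlib.crc32 (real hardware additionally handles bit-ordering and complement details); a rough sketch of computing and re-checking an FCS over a made-up frame body.

```python
import zlib

# Build a minimal frame body: destination MAC, source MAC, EtherType, payload.
dst = bytes.fromhex("aabbccddeeff")
src = bytes.fromhex("112233445566")
ethertype = (0x0800).to_bytes(2, "big")        # 0x0800 = IPv4
payload = b"hello world".ljust(46, b"\x00")    # pad to the 46-byte minimum

frame = dst + src + ethertype + payload
fcs = zlib.crc32(frame)                        # CRC-32 over the frame contents
print(hex(fcs))

# The receiver recomputes the CRC over the same bytes and compares it with the FCS.
assert zlib.crc32(frame) == fcs
```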
In computer networks, data can be transmitted in various ways, depending on the number
of receivers. The following are the different types of communication:
1. Unicasting:
○ Characteristics:
■ One-to-one communication.
2. Anycasting:
○ Characteristics:
■ One-to-nearest communication.
■ Multiple potential receivers, but only one (the closest) will receive
the data.
○ Example: Requesting DNS resolution, where the nearest DNS server
responds.
3. Multicasting:
○ Characteristics:
■ One-to-many communication.
4. Broadcasting:
○ Characteristics:
■ One-to-all communication.
Flow control mechanisms are essential in computer networks to manage the rate of data
transmission between sender and receiver. The primary reasons for using flow control
are:
1. Preventing Buffer Overflow: If the sender transmits data faster than the receiver
can process or store it, the receiver's buffer may overflow, leading to data loss.
Flow control ensures that the sender only sends data at a rate the receiver can
handle.
2. Ensuring Efficient Data Transmission: Flow control helps maintain a balance
between the sender and receiver. By regulating the data flow, it ensures that the
network resources are utilized efficiently without overwhelming any part of the
system.
The Sliding Window Protocol is a flow control and error control mechanism used in
reliable data transfer protocols like TCP. It allows a sender to send multiple frames
before needing an acknowledgment for the first one, making the transmission more
efficient.
Key Concepts:
1. Window Size: The window represents the range of frames that can be sent without
receiving an acknowledgment. For example, if the window size is 4, the sender
can send 4 frames continuously before waiting.
2. Sliding Mechanism:
○ As acknowledgments arrive, the window slides forward, allowing new frames to be sent.
○ This way, the sender doesn't need to stop and wait after sending each frame.
Advantages:
● Better utilization of the channel and higher throughput, since several frames can be in transit at once.
● Supports both flow control and error control.
The sliding window protocol is essential for reliable and efficient communication in
modern networks.
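A toy Python sketch of a sliding-window sender; the flaky_ack function stands in for the receiver and channel, and the retransmission shown is Go-Back-N style (resend from the oldest unacknowledged frame).

```python
def sliding_window_send(frames, window_size, acked_ok):
    """Up to `window_size` unacknowledged frames may be outstanding;
    the window slides forward as ACKs arrive."""
    base = 0          # oldest unacknowledged frame
    next_seq = 0      # next frame to transmit
    while base < len(frames):
        # Send new frames while the window is not full.
        while next_seq < len(frames) and next_seq < base + window_size:
            print(f"send frame {next_seq} ({frames[next_seq]})")
            next_seq += 1
        # Wait for the ACK of the oldest outstanding frame.
        if acked_ok(base):
            print(f"ACK {base} received, window slides forward")
            base += 1
        else:
            print(f"timeout for frame {base}, resending window (Go-Back-N style)")
            next_seq = base

failed_once = set()
def flaky_ack(seq):
    # The ACK for frame 2 is lost exactly once, then succeeds on retransmission.
    if seq == 2 and seq not in failed_once:
        failed_once.add(seq)
        return False
    return True

sliding_window_send(["A", "B", "C", "D", "E"], window_size=3, acked_ok=flaky_ack)
```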
32. Discuss the problems in Go-Back-N flow control mechanism and its solutions.
Problems in Go-Back-N Flow Control Mechanism (5 Marks)
The Go-Back-N (GBN) protocol is a type of sliding window protocol where the sender
can send multiple frames without waiting for acknowledgment but must retransmit all
frames from a lost or erroneous frame onward. This approach has certain drawbacks:
1. Wasted Bandwidth:
If a single frame is lost or corrupted, all subsequent frames (even if received correctly by
the receiver) are discarded and must be resent. This leads to unnecessary retransmissions,
wasting bandwidth.
2. Inefficient Error Handling:
Go-Back-N does not support selective acknowledgment. It does not allow retransmitting
only the damaged frame; instead, it forces the sender to go back and resend a group of
frames, even if only one frame was problematic.
3. Receiver Inactivity:
The receiver has to maintain strict sequence order. It cannot buffer out-of-order frames.
This causes the receiver to discard any frame that is not in the expected sequence,
reducing throughput.
4. High Latency in Noisy Channels:
In noisy channels where errors are frequent, entire groups of frames are retransmitted repeatedly, which greatly increases delay and lowers effective throughput.
Solutions:
● Selective Repeat allows the receiver to accept and buffer out-of-order frames, so only the damaged or lost frame needs to be retransmitted.
● Dynamically adjusting the window size based on network conditions can help manage retransmissions and improve performance.
● Receiver Buffering: Providing buffer space at the receiver for out-of-order frames prevents correctly received frames from being discarded.
1. Socket (5 Marks)
A socket is a software endpoint that enables communication between two machines over
a network. It acts as an interface between the application layer and the transport layer in
the network model.
Example: A web browser (client) opens a socket to connect to a web server, enabling
data exchange via HTTP.
2. DNS (Domain Name System) (5 Marks)
DNS is like the internet’s phonebook. It converts human-readable domain names (like
www.example.com) into IP addresses (like 192.0.2.1) that computers use to
locate each other.
● DNS uses a hierarchy of servers:
○ Root servers
○ TLD (Top-Level Domain) servers
○ Authoritative name servers
Working:
● The client first checks its local DNS cache.
● If not cached, the DNS resolver contacts servers step by step (root → TLD → authoritative) to find the IP address.
Types of queries: Recursive and Iterative (see Q11 above).
3. WWW (World Wide Web) (5 Marks)
The World Wide Web is a collection of interlinked web pages and resources that are accessed over the Internet.
● It uses the HTTP/HTTPS protocols to request and deliver data from web servers
to web browsers.
● Web pages are written in HTML, and may include scripts (JavaScript), styles
(CSS), and multimedia.
● Introduced by Tim Berners-Lee in 1989, it revolutionized how we access and
share information.
Working:
● When a user enters a URL, the browser sends a request to the web server, which
responds with the web page content.
WWW is different from the Internet—it is just a service running over the Internet.
4. FTP (File Transfer Protocol) (5 Marks)
FTP is a standard network protocol used to transfer files between a client and a server
on a computer network.
● Operates on port 21 for command control and port 20 for data transfer.
● Users must log in with username and password, but anonymous access is also
allowed in public servers.
Modes:
● Active mode
● Passive mode
Limitations:
● Data, usernames, and passwords are sent in plain text, so FTP is not secure on its own; secure variants such as FTPS or SFTP are preferred for sensitive transfers.
5. WLAN (Wireless Local Area Network) (5 Marks)
● WLAN provides network access to devices like laptops, smartphones, and tablets
within a limited area—such as homes, schools, or offices.
Components:
● Access Point (AP): Acts as a bridge between wireless clients and the wired network.
● Wireless clients (stations): Laptops, smartphones, and other devices with wireless network adapters.
Advantages:
● Mobility, easy installation, and no cabling cost; devices can join or leave the network easily.
Security protocols like WEP, WPA, and WPA2/WPA3 are used to protect wireless
communication.
Explain Distance Vector Routing with a suitable example. (4 Marks)
Distance Vector Routing is a dynamic routing protocol in which each router shares its
routing table with its immediate neighbors periodically. The term "distance" refers to the
cost (usually hop count) to reach a destination, and "vector" refers to the direction (next
hop).
Each router builds its routing table based on:
● The cost of its direct links to each neighbor.
● The distance vectors (routing tables) it periodically receives from those neighbors.
Key Features:
● Based on the Bellman-Ford algorithm.
● Each router shares its entire routing table, but only with its directly connected neighbors.
● Updates are exchanged periodically or whenever a route changes.
Example:
Consider three routers connected in a line: A - B (link cost 1) and B - C (link cost 2).
Initially:
● A knows only the route to itself (cost 0) and to B (cost 1); it has no route to C.
After exchange:
● B advertises its table to A. A learns that C can be reached via B at a cost of 1 + 2 = 3, and adds the entry (destination C, cost 3, next hop B) to its routing table.
Distance Vector Routing is simple and works well for small networks. However, it is
slower to converge and may suffer from count-to-infinity problems.
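A minimal Python sketch of one Bellman-Ford style update, matching the A-B-C example above; the table layout and router names are illustrative.

```python
import math

def distance_vector_update(my_table, neighbor, neighbor_table, link_cost):
    """One round of the Bellman-Ford update: for each destination, keep the
    cheaper of the current route and the route via the advertising neighbor."""
    for dest, cost in neighbor_table.items():
        via_neighbor = link_cost + cost
        if via_neighbor < my_table.get(dest, (math.inf, None))[0]:
            my_table[dest] = (via_neighbor, neighbor)   # (cost, next hop)
    return my_table

# Router A initially knows only itself and B; B advertises its own costs.
table_A = {"A": (0, "A"), "B": (1, "B")}
table_B = {"A": 1, "B": 0, "C": 2}
print(distance_vector_update(table_A, "B", table_B, link_cost=1))
# {'A': (0, 'A'), 'B': (1, 'B'), 'C': (3, 'B')}  -> A now reaches C via B at cost 3
```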
To improve Quality of Service (QoS) in networks, certain techniques are used to manage
bandwidth, reduce delay, and ensure smooth data flow. Two such techniques are:
1. Traffic Shaping:
Traffic shaping is used to control the volume and timing of traffic entering the network.
It helps regulate bursty traffic, making it more consistent, which reduces congestion and
improves overall performance.
2. Resource Reservation (e.g., RSVP):
● Provides end-to-end QoS support by informing all routers along the path to allocate required resources.
To improve efficiency, Slotted ALOHA was introduced. In this version, time is divided
into discrete slots, and a station can only send data at the beginning of these slots. If a
station misses the beginning of the slot, it waits for the next one. This time-slot structure
significantly reduces the chances of collisions since only one station can begin
transmission in each slot. As a result, Slotted ALOHA has a higher efficiency of
approximately 36.8%.
The key difference lies in the synchronization: Pure ALOHA is unsynchronized and
simpler, while Slotted ALOHA introduces timing coordination, which improves
performance at the cost of added complexity.
HTTP is the protocol used to exchange data over the World Wide Web. It follows a
request-response model, where the client (usually a browser) sends a request to a web
server, and the server returns the appropriate response, such as an HTML page, image, or
file. HTTP is a stateless protocol, meaning that each request is treated independently; the
server does not retain any information about previous requests from the same client.
HTTP works over port 80 for unsecured connections and port 443 for secure HTTPS
communication. It uses different request methods such as GET (to retrieve data), POST
(to send data), PUT, and DELETE, depending on the type of interaction. For example,
when a user clicks on a link, the browser sends an HTTP GET request to the server,
which then returns the required web page.
Since HTTP is text-based, the data being transferred is in readable text format, making it
simple but also requiring secure encryption (HTTPS) for sensitive communication.
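A minimal request using Python's standard urllib; example.com is a placeholder URL, and the actual status code and headers returned will depend on the server.

```python
import urllib.request

# A GET request to a placeholder URL; the response body is an HTML page.
with urllib.request.urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200
    print(response.headers["Content-Type"])   # e.g. text/html
    body = response.read()                    # the HTML document itself
    print(len(body), "bytes received")
```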
SMTP is a protocol used for sending emails across networks. It is a push protocol,
which means it pushes messages from a client to a mail server or from one mail server to
another. SMTP is used only for sending messages and does not retrieve them; that task is
handled by other protocols like POP3 or IMAP.
The process begins when a user sends an email through a client (like Gmail or Outlook).
The client contacts the SMTP server and sends the email details, including sender,
receiver, subject, and body. The SMTP server then forwards this email to the recipient’s
email server. From there, the recipient can download or view the email using a suitable
client.
SMTP operates over port 25 by default for unencrypted communication, while port 587
or port 465 is used for secure transmission. The protocol ensures that messages are
formatted correctly and routed properly through intermediate servers, delivering the
message to the intended destination.
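A short sketch using Python's standard smtplib; the server name, addresses, and credentials are placeholders, and the server must actually accept STARTTLS on port 587 for this to work.

```python
import smtplib
from email.message import EmailMessage

# "smtp.example.com", the addresses, and the password are all placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent via SMTP on port 587 with STARTTLS.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                          # upgrade to an encrypted channel
    server.login("alice@example.com", "app-password")
    server.send_message(msg)                   # SMTP pushes the mail to the server
```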
ICMP (Internet Control Message Protocol) is a network layer protocol used by hosts and routers to exchange error and control messages.
● Error Reporting: ICMP helps in reporting errors during packet transmission. For
example, it sends a "Destination Unreachable" message if a router can't forward a
packet.
● Diagnostics: ICMP supports diagnostic tools like ping and traceroute, allowing
users to test connectivity and trace packet routes.
1. SYN (Synchronize) - Client to Server:
● Step: The client initiates the connection by sending a TCP packet with the SYN
(synchronize) flag set to 1.
● Purpose: This packet indicates that the client wants to establish a connection with
the server.
● Sequence Number: The packet also includes a randomly chosen initial sequence
number (ISN), which is used to track the bytes transmitted during the session.
2. SYN-ACK (Synchronize-Acknowledge) - Server to Client:
● Step: The server receives the SYN packet and responds by sending back a packet
with both SYN and ACK (acknowledge) flags set to 1.
● Purpose: The SYN part of this packet acknowledges the client's request to
establish the connection, while the ACK part acknowledges the receipt of the
client's ISN by setting the acknowledgment number to the client's ISN + 1.
● Sequence Number: The server also includes its own randomly generated ISN in
the packet.
3. ACK (Acknowledge) - Client to Server:
● Step: The client acknowledges the server’s response by sending a packet with the
ACK flag set to 1.
● Connection Established: After this final step, the connection is fully established,
and both sides can begin transmitting data.
Analog modulation is a technique used to transmit analog signals (such as voice, video,
or music) over a communication channel by altering certain characteristics of a carrier
signal. The carrier signal is a high-frequency waveform that is modified in a way that
represents the information of the base signal (the analog signal).
The primary objective of modulation is to improve the signal's ability to travel long
distances without significant loss and to utilize the available frequency spectrum
effectively.
The main types of analog modulation are Amplitude Modulation (AM), in which the carrier's amplitude varies with the message signal, Frequency Modulation (FM), and Phase Modulation (PM).
Frequency Modulation (FM):
● Description: In FM, the frequency of the carrier signal is varied according to the
amplitude of the message signal.
● How it Works: The carrier signal's frequency is shifted higher or lower depending
on the instantaneous value of the message signal.
Phase Modulation (PM):
● Description: In PM, the phase of the carrier signal is varied in accordance with
the instantaneous amplitude of the message signal.
● How it Works: The phase of the carrier signal shifts based on the message
signal’s changes.
1. Division into Packets: The message is divided into small packets, each with a header containing addressing and sequencing information.
2. Independent Routing: Each packet may take a different route through the network depending on availability and congestion.
3. Reassembly: Once all packets reach the destination, they are reassembled in the
correct order to recreate the original message.
Advantages:
● Efficient use of bandwidth, robustness against link failures, and support for many simultaneous communications.
Disadvantages:
● Packets may arrive out of order or be delayed, and the per-packet headers add overhead.
HDLC (High-Level Data Link Control) is a data link layer protocol used for
communication between devices over a network. It provides both connection-oriented
and connectionless services, ensuring reliable data transmission with error detection.
1. Flag (1 byte): Marks the beginning and end of a frame. It is always "01111110" in
binary.
2. Address (1-2 bytes): Identifies the destination device or the recipient of the frame.
3. Control (1-2 bytes): Contains control information, such as frame type and
sequence number.
4. Information (variable length): Carries the actual data (payload) being transmitted.
5. FCS (Frame Check Sequence) (2 bytes): Used for error checking to ensure the
integrity of the frame. It is calculated based on the data content.