
COMPUTER NETWORKS PYQS SOLVED

1. Compare between Pure ALOHA and Slotted ALOHA

Feature          | Pure ALOHA                                     | Slotted ALOHA
Time Division    | No time slots; a station may send at any time  | Time is divided into slots; sending starts only at a slot boundary
Collision Chance | High (a transmission can collide at any time)  | Lower (collisions occur only when two stations pick the same slot)
Efficiency       | Around 18.4%                                   | Around 36.8%
Complexity       | Simple                                         | Slightly more complex (needs time synchronization)
Throughput       | Lower                                          | Almost double that of Pure ALOHA

2. Explain 3-way TCP handshaking process

The 3-way handshake is used to establish a reliable connection between a client and a
server in TCP.

Steps:

1. SYN: Client sends a connection request to the server by sending a segment with
the SYN (synchronize) flag set. It also includes an initial sequence number (say,
X).

2. SYN-ACK: Server responds with a segment that has both SYN and ACK flags
set. It acknowledges the client’s sequence number (X+1) and sends its own
sequence number (say, Y).

3. ACK: Client sends an ACK back to the server acknowledging the server’s
sequence number (Y+1).

Now the connection is established, and data transfer can begin.
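
The handshake itself is carried out by the operating system's TCP implementation; application code never builds SYN or ACK segments by hand. A minimal Python sketch (the loopback address and port 5000 are just example values) showing that the handshake completes inside connect() and accept(), after which data can flow:

import socket

# The OS performs the SYN / SYN-ACK / ACK exchange when connect() is called
# on the client; accept() returns once the handshake has completed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))
server.listen(1)                      # server is ready to receive a SYN

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 5000))   # triggers SYN -> SYN-ACK -> ACK

conn, addr = server.accept()          # connection is now fully established
client.sendall(b"hello")              # data transfer can begin
print(conn.recv(5))                   # b'hello'
conn.close()
client.close()
server.close()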


3. Explain ARP packet format with suitable diagram
ARP (Address Resolution Protocol) is used to map an IP address to a MAC (hardware)
address. The ARP packet format is used for ARP requests and replies and consists of
multiple fields including hardware type, protocol type, hardware and protocol size,
operation, sender and target hardware, and IP addresses. These fields work together to
help devices on a network find and communicate with each other.
Hardware type: This is a 16-bit field defining the type of the network on which ARP is
running. Ethernet is given type 1.
Protocol type: This is a 16-bit field defining the protocol. The value of this field for the
IPv4 protocol is 0800H.
Hardware length: This is an 8-bit field defining the length of the physical address in
bytes. For Ethernet the value is 6.
Protocol length: This is an 8-bit field defining the length of the logical address in bytes.
For the IPv4 protocol, the value is 4.
Operation (request or reply): This is a 16-bit field defining the type of packet. Packet
types are ARP request (1) and ARP reply (2).
Sender hardware address: This is a variable-length field defining the physical address
of the sender. For example, for Ethernet, this field is 6 bytes long.
Sender protocol address: This is also a variable-length field defining the logical address
of the sender. For the IPv4 protocol, this field is 4 bytes long.
Target hardware address: This is a variable-length field defining the physical address
of the target. For Ethernet, this field is 6 bytes long. In ARP request messages, this
field is all 0s because the sender does not know the physical address of the target.
Target protocol address: This is also a variable-length field defining the logical address
of the target. For the IPv4 protocol, this field is 4 bytes long.
4. Generate the CRC code for the data word of 1010011110. The divisor is 1011.

Step 1: Append (n−1) zeros to data, where n = length of divisor


Divisor = 1011 → 4 bits → append 3 zeros to data

Data becomes: 1010011110 000

Step 2: Perform binary division using XOR

We divide 1010011110000 by 1011. The remainder after the XOR division is the
CRC.

Final CRC code (remainder): 001


(Note: show the full long division if asked in the exam; the short script below reproduces the remainder.)

Final transmitted frame = 1010011110 + 001 = 1010011110001
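
A short script can be used to check the long division. This is a minimal sketch of modulo-2 (XOR) division, not a required part of the exam answer; it reproduces the remainders for this question and for question 8:

def crc_remainder(data: str, divisor: str) -> str:
    # Append (len(divisor) - 1) zeros, then do modulo-2 (XOR) long division.
    bits = list(data + "0" * (len(divisor) - 1))
    for i in range(len(data)):
        if bits[i] == "1":                       # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])   # last n-1 bits are the remainder

print(crc_remainder("1010011110", "1011"))   # -> 001  (question 4)
print(crc_remainder("1010101010", "11001"))  # -> 0010 (question 8)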

5. Explain Leaky Bucket Algorithm.

The Leaky Bucket algorithm is a traffic shaping and congestion control technique used
in computer networks to regulate the data flow. It ensures that data packets are
transmitted at a constant and manageable rate, even if they are arriving at irregular
intervals. The concept is based on a simple analogy: imagine a bucket with a small hole
at the bottom. Water (representing data packets) can be added to the bucket at any rate,
but it leaks out at a fixed rate through the hole. If water is poured in too fast and exceeds
the bucket's capacity, the excess water overflows and is lost. Similarly, in networking, if
too many packets arrive at once and the buffer (or bucket) is full, the extra packets are
discarded.
This algorithm helps in smoothing out bursts of traffic and prevents network congestion
by controlling the output rate. It is especially useful in scenarios where devices may send
data in unpredictable bursts but the network must handle data at a steady pace. In
summary, the Leaky Bucket algorithm acts like a traffic regulator, ensuring that data flows
out of a system in a controlled and uniform manner and improving overall network stability.
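
A minimal sketch of the idea in Python (the bucket capacity, leak rate, and arrival pattern are illustrative values, and time is modelled as simple loop ticks):

from collections import deque

BUCKET_SIZE = 4        # illustrative capacity (packets)
LEAK_RATE = 1          # packets released per tick

bucket = deque()
arrivals = [3, 0, 2, 4, 0, 1]   # bursty input: packets arriving at each tick

for tick, n in enumerate(arrivals):
    for _ in range(n):                      # try to queue each arriving packet
        if len(bucket) < BUCKET_SIZE:
            bucket.append("pkt")
        else:
            print(f"tick {tick}: packet dropped (bucket full)")
    sent = min(LEAK_RATE, len(bucket))      # leak out at a constant rate
    for _ in range(sent):
        bucket.popleft()
    print(f"tick {tick}: sent {sent}, queued {len(bucket)}")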

6. Compare and contrast between OSI and TCP/IP layered models

Feature           | OSI Model                                                                    | TCP/IP Model
Full Form         | Open Systems Interconnection                                                 | Transmission Control Protocol/Internet Protocol
Number of Layers  | 7 layers                                                                     | 4 or 5 layers (varies by reference)
Development       | Theoretical model                                                            | Practical model, used in the real world
Layers            | Application, Presentation, Session, Transport, Network, Data Link, Physical | Application, Transport, Internet, Network Access
Protocols Defined | Protocols not strictly defined                                               | Standard protocols defined (TCP, IP, etc.)
Approach          | Protocol-independent                                                         | Protocol-oriented

Conclusion: OSI is a reference model, more detailed and theoretical, while TCP/IP is
widely used in real networks.

7. What is bit rate? What is baud rate? An analog signal carries 4 bits in each signal
unit. If 1000 signal units are sent per second, find the baud rate and bit rate.

Bit rate:

● The number of bits transmitted per second.

● Measured in bps (bits per second).

Baud rate:

● The number of signal units (symbols) transmitted per second.

● Measured in baud.

Formula:

Bit Rate = Baud Rate × Number of bits per signal unit

Given:

● Bits per signal unit = 4

● Signal units per second = 1000 (i.e., Baud rate = 1000 baud)

Calculation:

Bit Rate = 1000 × 4 = 4000 bps

Final Answer:

● Baud Rate = 1000 baud

● Bit Rate = 4000 bps


8. Generate the CRC code for the data word of 1010101010. The polynomial to
generate the divisor is x⁴ + x³ + 1.

Step 1: Convert the polynomial to binary

Polynomial: x⁴ + x³ + 1 → Binary = 11001


(we skip x² and x¹, so their bits are 0)

Step 2: Append 4 zeros to data word


Data = 1010101010 → Append 4 zeros = 10101010100000

Step 3: Perform binary division using XOR


Divide 10101010100000 by 11001

Final Remainder (CRC code) = 0010

Final transmitted code: 1010101010 + 0010 = 10101010100010

9. For a Class C subnetting, subnet mask is 255.255.255.192, IP address


172.38.15.12. Write class, Net ID and subnet mask of this IP.

Step 1: Identify the Class of IP

● 1st Octet = 172 → falls in Class B range (128–191)


BUT since question says “Class C subnetting”, let’s assume subnetting done on
Class C block.

Subnet Mask in binary:


255.255.255.192 → 11111111.11111111.11111111.11000000
→ 26 bits mask (i.e., /26)

Number of subnets:
For Class C, default subnet mask = /24 →
Extra 2 bits for subnetting → 2² = 4 subnets
Each subnet has 64 addresses.

Network Address (Net ID):


IP = 172.38.15.12
The .192 mask means the last (4th) octet is divided into blocks of 64:

● First block: 0–63 → 172.38.15.0 (Net ID)

● 172.38.15.12 belongs to this block

Final Answers:

● Class: C (by subnetting assumption)

● Subnet Mask: 255.255.255.192 (/26)

● Net ID: 172.38.15.0
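
The result can be cross-checked with Python's standard ipaddress module (a quick verification, not part of the expected exam working):

import ipaddress

# strict=False lets us pass the host address together with the /26 mask.
net = ipaddress.ip_network("172.38.15.12/26", strict=False)
print(net.network_address)   # 172.38.15.0  (Net ID)
print(net.netmask)           # 255.255.255.192
print(net.num_addresses)     # 64 addresses in this subnet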

10. Explain “Stop and Wait ARQ Protocol” with the help of a diagram.

The Stop and Wait ARQ (Automatic Repeat reQuest) protocol is one of the simplest
error control methods used in data communication. It ensures reliable transmission by
sending one data frame at a time and waiting for an acknowledgment (ACK) before
sending the next frame.

In this method, the sender transmits a single frame, then stops and waits until it
receives an acknowledgment from the receiver. If the acknowledgment is received
successfully, the sender proceeds to send the next frame. However, if the
acknowledgment is lost or not received within a certain time (timeout), the sender
resends the same frame, assuming it was lost or corrupted.
🔁 Working Steps:

1. Sender sends Frame 0 to the Receiver.

2. Receiver receives Frame 0 correctly and sends ACK 0 back.

3. On receiving ACK 0, Sender sends the next frame, Frame 1.

4. If ACK is not received, the Sender resends the same frame after timeout.

⚠️Error Handling:

● If a data frame is lost, the sender times out and retransmits.

● If an ACK is lost, the sender will again resend the frame, and the receiver will
identify it as a duplicate (if it already received it).

✅ Advantages:

● Simple and easy to implement.


● Guarantees that data is delivered in order and without errors.

❌ Disadvantages:

● Inefficient for long-distance or high-latency networks, as it waits for each ACK before
continuing.

● Only one frame can be in transmission at a time, leading to low bandwidth


utilization.

(https://www.geeksforgeeks.org/stop-and-wait-arq/)
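
A minimal simulation of the Stop and Wait behaviour (no real network is involved; frame/ACK loss is random with an illustrative loss probability, and the timeout is modelled simply as another pass through the retry loop):

import random

random.seed(1)   # reproducible run

def send_over_lossy_link(frame, loss_prob=0.3):
    """Return an ACK for the frame, or None if the frame/ACK was 'lost'."""
    if random.random() < loss_prob:
        return None
    return ("ACK", frame[1])             # receiver acknowledges the sequence number

frames = [("DATA", 0), ("DATA", 1), ("DATA", 0), ("DATA", 1)]  # alternating 0/1 sequence numbers
for frame in frames:
    while True:                          # stop and wait: resend until the ACK arrives
        ack = send_over_lossy_link(frame)
        if ack == ("ACK", frame[1]):
            print(f"frame {frame[1]} delivered, got {ack}")
            break
        print(f"frame {frame[1]} timed out, retransmitting")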

11. Define each term, and clarify the key difference(s) between the two terms:
“Iterative DNS query” and “Recursive DNS query”

In the Domain Name System (DNS), when a client (usually your computer or browser)
wants to find the IP address of a website (like www.google.com), it sends a DNS
query. There are two main types of DNS queries that define how this request is
resolved: Recursive query and Iterative query.

🔄 Recursive DNS Query:


In a recursive query, the DNS client asks a DNS server to fetch the complete answer
to the query — not just a reference. The server takes full responsibility to find the IP
address, even if it has to contact multiple other DNS servers in the background. The final
result is then returned to the client.

● Example: Your computer sends a recursive query to your ISP's DNS server asking
for the IP of www.google.com. That server will contact other DNS servers (like
root, TLD, and authoritative servers) until it finds the IP and returns it to you.

● 🔸 The client waits for the final answer.

● 🔸 Used by end devices (DNS resolvers).

🔁 Iterative DNS Query:

In an iterative query, the DNS server responds with the best answer it has, usually a
referral to another DNS server closer to the answer. The client then follows up with
another query to the referred server, and this process continues until the final answer is
found.

● Example: A local DNS server doesn't know the IP of www.google.com, so it


tells the client: "Try the .com server." Then the client queries the .com server,
which says: "Try Google's authoritative server." This continues until the client gets
the answer.

● 🔸 The client is actively involved in querying multiple servers.

● 🔸 Used by DNS servers themselves, not typically by end-users.

✅ Key Differences:

Feature            | Recursive Query                      | Iterative Query
Responsibility     | Server fetches the full answer       | Server provides its best information or a referral
Client Involvement | Passive (waits for the final answer) | Active (queries multiple servers)
Used By            | DNS clients/resolvers                | DNS servers themselves
Response Type      | Final answer (IP address)            | Referral to another DNS server

12. Which field or fields are used in TCP’s 3-way handshake to open a new
connection? What information is conveyed during the handshake, and how?

🧩 Fields Used in TCP Header:

● SYN flag – Indicates a request to establish a connection.

● ACK flag – Acknowledges the received segment.

● Sequence Number – Used to start tracking data bytes sent.

● Acknowledgment Number – Confirms receipt and readiness to receive the next


byte.

🔄 How the 3-Way Handshake Works (Step-by-Step):


Step 1: Client → Server (SYN)

● The client initiates the connection by sending a TCP segment with:

○ SYN = 1

○ Sequence Number = X (randomly chosen ISN)

● This says: “I want to start a connection, and my starting sequence number is X.”

Step 2: Server → Client (SYN-ACK)

● The server responds with:


○ SYN = 1, ACK = 1

○ Sequence Number = Y (server’s ISN)

○ Acknowledgment Number = X + 1

● This means: “Okay, I accept your request. My sequence number is Y, and I


acknowledge your sequence number X.”

Step 3: Client → Server (ACK)

● The client replies with:

○ ACK = 1

○ Sequence Number = X + 1

○ Acknowledgment Number = Y + 1

● This final step says: “I confirm your sequence number. Let’s start data
transmission.”

📦 What Information Is Conveyed and How:

● Willingness to connect → conveyed using SYN flag.

● Initial sequence numbers → exchanged using Sequence Number field.

● Acknowledgment of received SYN → conveyed through ACK flag and


Acknowledgment Number.

● Readiness of both sides to start communication → established by completing


all three steps with valid flags and numbers.

13. What is congestion? Explain token bucket algorithm for congestion control.

Congestion in computer networks occurs when the number of packets sent into the
network exceeds the capacity of the network to handle them. This leads to packet delays,
loss of data, retransmissions, and overall degradation in network performance.
Congestion is especially common in routers where too many packets arrive in a short
span, and the device cannot process or forward them quickly enough, leading to buffer
overflows and dropped packets.
To manage such scenarios, the token bucket algorithm is widely used as a congestion
control and traffic shaping technique. In this method, a bucket is used to store tokens,
which are generated at a constant rate. Each token grants permission to send a unit of data
(like a byte or packet). If a packet arrives and there are sufficient tokens in the bucket, the
required number of tokens are removed, and the packet is transmitted. However, if there
are not enough tokens, the packet may be queued (delayed) until enough tokens
accumulate, or it may be discarded, depending on the implementation.

This approach allows the network to control the average rate of data transmission
while still permitting short bursts of data when tokens have been saved up. The token
bucket is particularly effective because, unlike the leaky bucket algorithm which enforces
a strict constant output rate, the token bucket accommodates the natural burstiness of
real-world traffic without overwhelming the network.
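
A minimal sketch of the token bucket idea in Python (the token rate, bucket depth, and packet costs are illustrative values, and time is modelled as loop ticks):

TOKEN_RATE = 2          # tokens added per tick
BUCKET_DEPTH = 5        # maximum tokens that can be saved up
tokens = BUCKET_DEPTH   # start with a full bucket

packets = [1, 4, 3, 1, 6, 2]   # cost (in tokens) of each arriving packet

for tick, cost in enumerate(packets):
    tokens = min(BUCKET_DEPTH, tokens + TOKEN_RATE)  # refill at a constant rate
    if cost <= tokens:
        tokens -= cost                               # enough tokens: send the burst now
        print(f"tick {tick}: sent packet of cost {cost}, tokens left {tokens}")
    else:
        print(f"tick {tick}: packet of cost {cost} delayed/dropped, only {tokens} tokens")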

14. Compare between ISO-OSI model and TCP/IP model.

Feature           | OSI Model                                                                    | TCP/IP Model
Full Form         | Open Systems Interconnection                                                 | Transmission Control Protocol/Internet Protocol
No. of Layers     | 7                                                                            | 4
Layer Names       | Application, Presentation, Session, Transport, Network, Data Link, Physical | Application, Transport, Internet, Network Access
Approach          | Theoretical reference model                                                  | Practical implementation model
Development       | Developed by ISO                                                             | Developed by the U.S. Department of Defense
Protocol Specific | Protocol-independent                                                         | Protocol-dependent (TCP, IP, etc.)
Usage             | Reference for learning & designing                                           | Real-world networking (Internet)

15. Compare between hub, switch and router.

Device | Function                                      | Works At                  | Intelligence | Broadcast/Unicast   | IP Handling
Hub    | Forwards data to all ports (blindly)          | Physical Layer (Layer 1)  | No           | Broadcast only      | No
Switch | Forwards data to a specific device using MAC  | Data Link Layer (Layer 2) | Yes (MAC)    | Unicast & Broadcast | No
Router | Connects different networks                   | Network Layer (Layer 3)   | Yes (IP)     | Unicast             | Yes

Summary:

● Hub: Basic, no filtering

● Switch: Smarter, uses MAC address

● Router: Smartest, uses IP, connects networks


16. Show how the Slotted ALOHA throughput is almost twice that of Pure ALOHA.

Slotted ALOHA and Pure ALOHA are both random access protocols used to manage
how data is transmitted over a shared communication channel. The main difference
between the two lies in how they handle the timing of data transmission. In Pure
ALOHA, a station can transmit data at any time, which leads to a higher chance of
collision, as the data can overlap with other transmissions that begin at any point. This
results in a low efficiency, and the maximum throughput of Pure ALOHA is only about
18.4%, or 0.184.

In contrast, Slotted ALOHA divides the time into equal-sized slots and requires that
stations only begin transmission at the start of a time slot. This synchronization
significantly reduces the chances of collisions, as overlapping transmissions can now
only happen if two or more stations choose the exact same time slot. Due to this
improvement, the maximum throughput of Slotted ALOHA becomes about 36.8%, or
0.368, which is nearly twice the efficiency of Pure ALOHA.

Mathematically, for Pure ALOHA the maximum throughput is given by S = G × e^(−2G), and
for Slotted ALOHA it is S = G × e^(−G), where G is the average number of frames
generated per frame time. When G = 0.5 for Pure ALOHA, the throughput S becomes 0.184;
and when G = 1 for Slotted ALOHA, S becomes 0.368. This clearly shows that
synchronizing the transmission using time slots nearly doubles the efficiency.
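
The two maxima quoted above can be verified directly from the formulas, for example:

import math

def pure_aloha(G):    return G * math.exp(-2 * G)   # S = G * e^(-2G)
def slotted_aloha(G): return G * math.exp(-G)       # S = G * e^(-G)

print(round(pure_aloha(0.5), 3))    # 0.184 -> maximum of Pure ALOHA at G = 0.5
print(round(slotted_aloha(1.0), 3)) # 0.368 -> maximum of Slotted ALOHA at G = 1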

17. Discuss piggybacking with a proper diagram.

(5 Marks)

Piggybacking is a technique used in bidirectional communication protocols to increase


the efficiency of data transmission. It combines data and acknowledgment (ACK) into a
single frame, rather than sending them separately.

In traditional acknowledgment methods, the receiver sends a standalone ACK after


receiving each frame. But in piggybacking, the receiver waits until it has its own data to
send back, and attaches the ACK to that outgoing data frame. This saves bandwidth and
reduces the number of frames on the network.
🔹 Working Principle:

● Piggybacking is used in full-duplex communication, where both sender and


receiver send data.
● When the receiver gets a data frame, it doesn’t send an immediate ACK.

● Instead, it waits until it has data to send back, and includes the ACK with its
outgoing data.

● If the receiver doesn’t have data to send within a certain time, it sends a separate
ACK to avoid sender timeout.

✅ Advantages of Piggybacking:

● Efficient use of bandwidth: Reduces the number of separate frames needed.

● Fewer control frames: Combines data and ACK into one frame.

● Better channel utilization in two-way communication.


❌ Disadvantages of Piggybacking:

● Delay in acknowledgment: If the receiver has no data to send back quickly, it delays
the ACK.

● Timer management complexity: Sender may time out if ACK is delayed too
long.

● Not suitable for one-way or heavily imbalanced traffic.

18. Explain IPv4 datagram format with suitable diagram.


● VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4
● HLEN: IP header length (4 bits), which is the number of 32 bit words in the
header. The minimum value for this field is 5 and the maximum is 15.
● Type of service: Low Delay, High Throughput, Reliability (8 bits)

● Total Length: Length of header + Data (16 bits), which has a minimum value
20 bytes and the maximum is 65,535 bytes.
● Identification: Unique Packet Id for identifying the group of fragments of a
single IP datagram (16 bits)
● Flags: 3 flags of 1 bit each : reserved bit (must be zero), do not fragment flag,
more fragments flag (same order)
● Fragment Offset: Represents the number of Data Bytes ahead of the particular
fragment in the particular Datagram. Specified in terms of number of 8 bytes,
which has the maximum value of 65,528 bytes.
● Time to live: Datagram’s lifetime (8 bits), It prevents the datagram to loop
through the network by restricting the number of Hops taken by a Packet
before delivering to the Destination.
● Protocol: Name of the protocol to which the data is to be passed (8 bits)

● Header Checksum: 16 bits header checksum for checking errors in the


datagram header
● Source IP address: 32 bits IP address of the sender

● Destination IP address: 32 bits IP address of the receiver

● Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
19. An ISP is granted a large block of addresses starting with 190.100.0.0/16. The ISP
needs to distribute it to three groups of customers as follows. I. The 1st group has 64
customers; each needs 256 IP addresses. II. The 2nd group has 128 customers; each needs
128 addresses. III. The 3rd group has 128 customers; each needs 64 addresses. Design the
sub-blocks and give the slash notation for each sub-block.

To allocate the IP address block 190.100.0.0/16 for the three customer groups with
different requirements, we need to use subnetting to efficiently assign addresses to each
group.

Let's break down the requirements and steps involved:

Step 1: Understand the Requirements

● 1st Group:

○ Number of customers: 64

○ Each customer needs 256 IP addresses.

● 2nd Group:

○ Number of customers: 128

○ Each customer needs 128 IP addresses.


● 3rd Group:

○ Number of customers: 128

○ Each customer needs 64 IP addresses.

Step 2: Calculate the Required Subnet Size

To subnet the IP address block, we need to calculate how many addresses each group
requires and then determine the appropriate subnet mask.
Group 1:

● Each customer needs 256 IP addresses.

● The smallest power of 2 that can accommodate 256 addresses is 2⁸ = 256.

● So, each subnet for the 1st group will need 256 addresses, which requires a /24
subnet mask (because 32 − 8 = 24).

Group 2:

● Each customer needs 128 IP addresses.

● The smallest power of 2 that can accommodate 128 addresses is 2⁷ = 128.

● So, each subnet for the 2nd group will need 128 addresses, which requires a /25
subnet mask (because 32 − 7 = 25).

Group 3:

● Each customer needs 64 IP addresses.

● The smallest power of 2 that can accommodate 64 addresses is 2⁶ = 64.

● So, each subnet for the 3rd group will need 64 addresses, which requires a /26
subnet mask (because 32 − 6 = 26).

Step 3: Determine the Subnets for Each Group


Now, let's assign the subnets for each group by calculating how much space is required
and then assigning addresses accordingly.

Group 1 Subnet Design (/24 Subnet Mask)

● Required: 64 customers × 256 IP addresses = 16,384 addresses.

● Each subnet will have 256 addresses.

● Number of subnets needed = 16,384 / 256 = 64 subnets.

The first subnet for the 1st group starts at 190.100.0.0/24. The next subnet will start from
190.100.1.0/24, and so on.
Slash notation for Group 1: 190.100.0.0/24, 190.100.1.0/24, 190.100.2.0/24, …,
190.100.63.0/24.

Group 2 Subnet Design (/25 Subnet Mask)

● Required: 128 customers × 128 IP addresses = 16,384 addresses.

● Each subnet will have 128 addresses.

● Number of subnets needed = 16,384 / 128 = 128 subnets.

The first subnet for the 2nd group starts at 190.100.64.0/25. The next subnet will start
from 190.100.64.128/25, and so on.
Slash notation for Group 2: 190.100.64.0/25, 190.100.64.128/25, 190.100.65.0/25, …,
190.100.127.128/25.

Group 3 Subnet Design (/26 Subnet Mask)

● Required: 128 customers × 64 IP addresses = 8,192 addresses.

● Each subnet will have 64 addresses.

● Number of subnets needed = 8,192 / 64 = 128 subnets.


The first subnet for the 3rd group starts at 190.100.128.0/26. The next subnet will start
from 190.100.128.64/26, and so on.
Slash notation for Group 3: 190.100.128.0/26, 190.100.128.64/26,
190.100.128.128/26, …, 190.100.159.192/26 (the 8,192 addresses end at 190.100.159.255).

Step 4: Summary of the Subnet Block Allocation

● Group 1 (64 customers needing 256 addresses each):

○ Subnet mask: /24

○ Address Range: 190.100.0.0/24 to 190.100.63.0/24

○ Total subnets required: 64

● Group 2 (128 customers needing 128 addresses each):

○ Subnet mask: /25

○ Address Range: 190.100.64.0/25 to 190.100.127.128/25

○ Total subnets required: 128

● Group 3 (128 customers needing 64 addresses each):

○ Subnet mask: /26

○ Address Range: 190.100.128.0/26 to 190.100.159.192/26

○ Total subnets required: 128

Conclusion

By dividing the 190.100.0.0/16 address block into subnets of varying sizes based on the
specific requirements of each group, we efficiently allocate the required number of IP
addresses to each group while ensuring optimal utilization of the available address space.
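
As a cross-check, the same allocation can be reproduced with Python's ipaddress module (the starting points 190.100.64.0 and 190.100.128.0 for groups 2 and 3 follow the design above):

import ipaddress

block = ipaddress.ip_network("190.100.0.0/16")

group1 = list(block.subnets(new_prefix=24))[:64]                                 # 64 x /24 (256 addresses each)
group2 = list(ipaddress.ip_network("190.100.64.0/18").subnets(new_prefix=25))   # 128 x /25
group3 = list(ipaddress.ip_network("190.100.128.0/19").subnets(new_prefix=26))  # 128 x /26

print(group1[0], group1[-1])   # 190.100.0.0/24   190.100.63.0/24
print(group2[0], group2[-1])   # 190.100.64.0/25  190.100.127.128/25
print(group3[0], group3[-1])   # 190.100.128.0/26 190.100.159.192/26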

20. Compare between IPV4 and IPV6.

Comparison between IPv4 and IPv6

1. Address Length:
● IPv4: IPv4 addresses are 32-bit long, allowing for about 4.3 billion unique IP
addresses (2^32).

● IPv6: IPv6 addresses are 128-bit long, allowing for an almost unlimited number
of IP addresses (2^128), which equals 340 undecillion addresses.

2. Address Representation:

● IPv4: Address is written in dotted decimal notation, such as 192.168.0.1.

● IPv6: Address is written in hexadecimal notation and separated by colons, such


as 2001:0db8:85a3:0000:0000:8a2e:0370:7334.

3. Header Structure:

● IPv4: IPv4 header is smaller but more complex. It has 13 fields and requires
options to be handled separately.

● IPv6: IPv6 header is simplified with only 8 fields and does not include options
directly within the header, leading to more efficient routing.

4. Address Allocation:

● IPv4: Address allocation is exhausting, and it requires techniques like NAT


(Network Address Translation) to handle address shortages.

● IPv6: IPv6 provides a huge address space, thus eliminating the need for NAT
and allowing for direct end-to-end connectivity.

21. Explain the limitation of IPV4.

Limitations of IPv4

1. Limited Address Space:

○ IPv4 uses 32-bit addresses, which limits the total number of unique IP
addresses to about 4.3 billion. With the rapid growth of internet-connected
devices, this address space is insufficient to meet global demands, leading
to the exhaustion of IPv4 addresses.

2. Addressing Issues and NAT (Network Address Translation):

○ Due to the shortage of IPv4 addresses, techniques like NAT are used to
allow multiple devices within a local network to share a single public IP
address. However, NAT complicates network configurations and can
interfere with some types of internet communication, such as peer-to-peer
applications.

3. Complex Header Structure:

○ The IPv4 header is relatively complex and includes many fields, some of
which are rarely used. This leads to inefficient processing and routing,
requiring additional time and resources. IPv6, on the other hand, simplifies
the header for faster processing.

22. Discuss the concept of public and private cryptography.

Public and Private Cryptography

Cryptography is the process of securing communication. The two main types of


cryptographic systems are private key cryptography (symmetric) and public key
cryptography (asymmetric).

Private Key Cryptography (Symmetric Encryption)

● In private key cryptography, the same key is used for both encryption and
decryption of data.

● Both the sender and receiver must share the secret key beforehand.

● Example: AES (Advanced Encryption Standard).

○ Advantages: Faster and more efficient for large data.

○ Disadvantages: Key distribution is a challenge as both parties need to


securely share the key.

Public Key Cryptography (Asymmetric Encryption)


● In public key cryptography, there is a pair of keys: a public key (shared openly)
and a private key (kept secret).

● The public key encrypts the data, and only the corresponding private key can
decrypt it.

● Example: RSA.

○ Advantages: More secure as the private key is never shared.

○ Disadvantages: Slower than symmetric encryption and requires more


processing power.

Conclusion

● Private key cryptography is faster and ideal for encrypting large data, but suffers
from key distribution issues.

● Public key cryptography provides better security, especially for open


communication, but is slower.

23. Explain RSA algorithm with a suitable example.
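
In outline, RSA works as follows: choose two primes p and q, compute n = p × q and φ(n) = (p − 1)(q − 1), pick a public exponent e that is coprime to φ(n), and compute the private exponent d so that e × d ≡ 1 (mod φ(n)). The public key is (e, n) and the private key is (d, n); encryption is c = mᵉ mod n and decryption is m = cᵈ mod n. A minimal worked sketch with deliberately tiny (and therefore insecure) numbers:

# Toy RSA with tiny primes, only to show the arithmetic (real keys use huge primes).
p, q = 3, 11
n = p * q                 # n = 33 (part of both keys)
phi = (p - 1) * (q - 1)   # phi(n) = 20
e = 3                     # public exponent, gcd(3, 20) = 1
d = pow(e, -1, phi)       # private exponent (Python 3.8+): 3 * 7 = 21, 21 mod 20 = 1, so d = 7

m = 4                     # plaintext message (must be < n)
c = pow(m, e, n)          # encryption: 4^3 mod 33 = 31
print(c, pow(c, d, n))    # decryption: 31^7 mod 33 = 4  -> prints "31 4"

Here the plaintext 4 encrypts to 31 and decrypts back to 4, showing that the public and private exponents undo each other.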


24. Discuss the concept of public and private IP address.

Public and Private IP Addresses

Public IP Address: A public IP address is assigned to a device or network that is directly


accessible over the internet. These addresses are globally unique and routable across the
internet. Public IPs are assigned by ISPs (Internet Service Providers) and are used by
devices such as servers, websites, and routers.

● Example: 203.0.113.1

● Characteristics: Unique, accessible from anywhere, assigned by ISPs.

Private IP Address: A private IP address is used within a local network and is not
accessible directly from the internet. These IPs are part of specific reserved ranges and
can be reused across different networks. Devices with private IPs use NAT (Network
Address Translation) to communicate with the internet.

● Example: 192.168.1.1

● Characteristics: Not globally unique, used within local networks, managed by


network admins.

Key Differences:

● Public IPs are globally unique and accessible over the internet, while Private IPs
are used within a local network and not accessible from the internet.

Private IPs allow for efficient use of IP addresses as multiple networks can reuse the
same private IP ranges.

25. Compare between TCP and UDP.

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are both
transport layer protocols used for data transmission across networks. However, they
differ in terms of reliability, speed, and usage.

1. Reliability

● TCP: Provides reliable data transmission. It ensures that data is delivered in order
and checks for errors using acknowledgment (ACK) and retransmission if needed.

● UDP: Does not provide reliability. It sends data without ensuring that it reaches
the destination or without checking for errors.

2. Connection
● TCP: Connection-oriented protocol. It establishes a connection between the
sender and receiver before transmitting data, ensuring both ends are ready.

● UDP: Connectionless protocol. It sends data without establishing a connection,


making it faster but less reliable.

3. Speed

● TCP: Slower than UDP due to its connection setup, error-checking, and flow
control mechanisms.

● UDP: Faster because it has no connection setup and minimal overhead for error-
checking.

4. Error Handling

● TCP: Includes error detection and correction. It guarantees data integrity through
mechanisms like checksums, acknowledgments, and retransmissions.

● UDP: Includes error detection but no error correction. It simply discards erroneous
data without trying to recover it.

5. Usage

● TCP: Used in applications where reliability and data integrity are critical, such as
HTTP, FTP, Email, and File Transfers.

● UDP: Used in applications where speed is more important than reliability, such as
VoIP, Live Streaming, and Online Gaming.

26. Explain TCP header structure in details.

The TCP header is 20 bytes in size (minimum) and contains essential fields that ensure
reliable data transmission. Here’s a brief explanation of its structure:

1. Source and Destination Port (16 bits each): These fields identify the source and
destination application ports.

2. Sequence Number (32 bits): Indicates the sequence number of the first byte of
data in the segment. This ensures data is delivered in order.
3. Acknowledgment Number (32 bits): Contains the next expected byte to be
received, used for acknowledging data.

4. Data Offset (4 bits): Specifies the length of the TCP header in 32-bit words,
indicating where the data begins.

5. Control Flags (9 bits): Includes flags like SYN, ACK, FIN, and RST, which
manage the connection setup and teardown.

6. Window Size (16 bits): Defines the number of bytes the sender is willing to
receive, helping with flow control.

7. Checksum (16 bits): Used for error-checking the header and data.

8. Urgent Pointer (16 bits): Used when the URG flag is set, indicating the last
urgent byte of data.

9. Options (Variable): Optional fields for additional settings like Maximum


Segment Size (MSS).
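
A minimal sketch that packs and unpacks the 20-byte fixed header with Python's struct module (all field values below are made up purely to show the layout):

import struct

# 20-byte fixed TCP header: ports, sequence, acknowledgment, offset/flags,
# window, checksum, urgent pointer.
src_port, dst_port = 12345, 80
seq, ack = 1000, 2000
offset_flags = (5 << 12) | 0x18        # data offset = 5 words (20 bytes), flags ACK+PSH
window, checksum, urgent = 65535, 0, 0

header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                     offset_flags, window, checksum, urgent)
print(len(header))                     # 20 bytes

fields = struct.unpack("!HHIIHHHH", header)
print(fields[0], fields[1], fields[2], fields[3])   # 12345 80 1000 2000
print((fields[4] >> 12) * 4, "byte header")         # data offset back in bytes -> 20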

27. Explain the functionalities of Bridges in computer network.

Functionality of Bridges in Computer Networks (5 Marks)

A bridge is a networking device used to connect two or more network segments,


allowing them to function as a single network. It operates at the Data Link Layer (Layer
2) of the OSI model and has several key functionalities:

1. Traffic Filtering: A bridge examines the MAC (Media Access Control) addresses
in the data frames to decide whether to forward or filter the traffic. It only
forwards data to the segment where the destination device resides, which reduces
network congestion.

2. Collision Domain Segmentation: By dividing a network into smaller segments, a


bridge reduces the size of collision domains. This helps in minimizing collisions
and improving the overall network performance, especially in Ethernet networks.

3. Learning MAC Addresses: A bridge maintains a MAC address table (also


known as a forwarding table). It learns the MAC addresses of devices connected to
each segment by analyzing incoming frames. This allows the bridge to make
intelligent decisions about where to forward traffic.

4. Forwarding Data Frames: The bridge forwards data frames based on the MAC
address table. If the destination device is on the same segment, the bridge will not
forward the frame, preventing unnecessary traffic. If the device is on a different
segment, it forwards the frame to that segment.

5. Broadcast Filtering: Bridges can prevent broadcasts from flooding the entire
network by selectively forwarding broadcast frames to other segments only when
necessary, thereby reducing network traffic.
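
A minimal sketch of the learning and forwarding logic in Python (the MAC addresses and port numbers are made-up values):

mac_table = {}   # learned mapping: MAC address -> port number

def bridge_receive(frame, in_port):
    mac_table[frame["src"]] = in_port          # learn which port the source lives on
    out = mac_table.get(frame["dst"])
    if out is None:                            # unknown destination: flood
        print(f"flood frame for {frame['dst']} to all ports except {in_port}")
    elif out == in_port:                       # same segment: filter (do not forward)
        print(f"filter frame for {frame['dst']} (already on port {in_port})")
    else:
        print(f"forward frame for {frame['dst']} to port {out}")

bridge_receive({"src": "AA:AA", "dst": "BB:BB"}, in_port=1)  # flood (BB:BB unknown)
bridge_receive({"src": "BB:BB", "dst": "AA:AA"}, in_port=2)  # forward to port 1
bridge_receive({"src": "CC:CC", "dst": "BB:BB"}, in_port=2)  # filter (same port)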

28. Explain Ethernet frame format with proper diagram.

Ethernet Frame Format (5 Marks)

An Ethernet frame is the basic unit of data transmission in an Ethernet network. It


encapsulates data for transmission and follows a specific structure. The Ethernet frame
consists of the following fields:

● Preamble (7 bytes): This field helps in synchronizing the transmission between


the sender and receiver. It allows the receiver to align with the incoming data
stream.

● Start Frame Delimiter (SFD) (1 byte): This byte, usually 0xAB, marks the start
of the actual frame, indicating where the data portion begins.

● Destination MAC Address (6 bytes): This is the MAC address of the receiving
device. It ensures that the frame is delivered to the correct recipient.

● Source MAC Address (6 bytes): This is the MAC address of the sender, helping
the receiver identify the origin of the frame.

● Type/Length (2 bytes): This field has two possible functions:

○ Type: If the value is greater than 1500, it specifies the protocol of the data
being carried (e.g., IPv4, ARP, etc.).

○ Length: If the value is less than or equal to 1500, it represents the length of
the data field (payload).

● Data/Payload (46-1500 bytes): This contains the actual data being transmitted.
The size of the data can vary, but it must be between 46 bytes and 1500 bytes. If
the data is smaller than 46 bytes, padding is added to meet the minimum frame
size.

● Frame Check Sequence (FCS) (4 bytes): This field contains a CRC (Cyclic
Redundancy Check) checksum used to detect errors in the frame. The receiver
recalculates the CRC and checks if it matches the value in this field to ensure the
data has been received correctly.

Ethernet Frame Diagram:

29. Discuss the concept of Unicasting, Anycasting, Multicasting, and Broadcasting.

Unicasting, Anycasting, Multicasting, and Broadcasting (5 Marks)

In computer networks, data can be transmitted in various ways, depending on the number
of receivers. The following are the different types of communication:

1. Unicasting:

○ Definition: Unicasting is the communication where data is sent from one


source to one specific destination.

○ Characteristics:

■ One-to-one communication.

■ Each packet is delivered to a single, specific device.

○ Example: A client requesting a webpage from a web server.

2. Anycasting:

○ Definition: Anycasting is a communication where data is sent from one


source to any one of a group of possible receivers, but only the nearest
receiver (in terms of routing distance) gets the message.

○ Characteristics:

■ One-to-nearest communication.

■ Multiple potential receivers, but only one (the closest) will receive
the data.
○ Example: Requesting DNS resolution, where the nearest DNS server
responds.

3. Multicasting:

○ Definition: Multicasting is the communication where data is sent from one


source to multiple specific receivers at the same time.

○ Characteristics:

■ One-to-many communication.

■ Only selected devices (those that are part of a multicast group)


receive the data.

○ Example: Streaming video to a group of subscribers or online gaming.

4. Broadcasting:

○ Definition: Broadcasting is the communication where data is sent from one


source to all devices within a network or broadcast domain.

○ Characteristics:

■ One-to-all communication.

■ Every device in the network or subnet receives the data.

○ Example: ARP requests in Ethernet networks or a message to all devices in


a local area network (LAN).

30. Explain the reason of using flow control mechanisms.

Reason for Using Flow Control Mechanisms (3 Marks)

Flow control mechanisms are essential in computer networks to manage the rate of data
transmission between sender and receiver. The primary reasons for using flow control
are:

1. Preventing Buffer Overflow: If the sender transmits data faster than the receiver
can process or store it, the receiver's buffer may overflow, leading to data loss.
Flow control ensures that the sender only sends data at a rate the receiver can
handle.
2. Ensuring Efficient Data Transmission: Flow control helps maintain a balance
between the sender and receiver. By regulating the data flow, it ensures that the
network resources are utilized efficiently without overwhelming any part of the
system.

3. Maintaining Reliable Communication: Flow control ensures smooth, error-free


communication by adapting the transmission rate, preventing congestion, and
avoiding packet drops that could occur due to network overload.

31. Explain the concept of sliding window protocol.

The Sliding Window Protocol is a flow control and error control mechanism used in
reliable data transfer protocols like TCP. It allows a sender to send multiple frames
before needing an acknowledgment for the first one, making the transmission more
efficient.
Key Concepts:

1. Window Size: The window represents the range of frames that can be sent without
receiving an acknowledgment. For example, if the window size is 4, the sender
can send 4 frames continuously before waiting.

2. Sliding Mechanism:

○ When an acknowledgment is received for the first frame, the window


"slides" forward, allowing the sender to send the next frame.

○ This way, the sender doesn't need to stop and wait after sending each frame.

3. Acknowledgments: The receiver sends ACKs for correctly received frames. If a


frame is lost or damaged, only the missing frame is retransmitted (in Selective
Repeat) or the entire window is retransmitted (in Go-Back-N).

Advantages:

● Increases efficiency by allowing continuous transmission.

● Reduces waiting time and improves bandwidth utilization.

The sliding window protocol is essential for reliable and efficient communication in
modern networks.

32. Discuss the problems in Go-Back-N flow control mechanism and its solutions.
Problems in Go-Back-N Flow Control Mechanism (5 Marks)

The Go-Back-N (GBN) protocol is a type of sliding window protocol where the sender
can send multiple frames without waiting for acknowledgment but must retransmit all
frames from a lost or erroneous frame onward. This approach has certain drawbacks:
1. Wasted Bandwidth:

If a single frame is lost or corrupted, all subsequent frames (even if received correctly by
the receiver) are discarded and must be resent. This leads to unnecessary retransmissions,
wasting bandwidth.
2. Inefficient Error Handling:

Go-Back-N does not support selective acknowledgment. It does not allow retransmitting
only the damaged frame; instead, it forces the sender to go back and resend a group of
frames, even if only one frame was problematic.
3. Receiver Inactivity:

The receiver has to maintain strict sequence order. It cannot buffer out-of-order frames.
This causes the receiver to discard any frame that is not in the expected sequence,
reducing throughput.
4. High Latency in Noisy Channels:

In environments where errors are frequent, Go-Back-N may cause repeated


retransmissions, leading to high delays and reduced efficiency.

Solutions to Go-Back-N Problems (4 Marks)

To overcome the limitations of Go-Back-N, the following solutions or alternative


techniques are used:
1. Use of Selective Repeat (SR) Protocol:

● Selective Repeat allows the receiver to accept and buffer out-of-order frames.

● Only the erroneous or lost frames are retransmitted.

● This improves efficiency and reduces bandwidth waste.

2. Better Error Detection and Correction:


● Employing strong error detection (like CRC) and error correction techniques can
reduce the number of retransmissions.

3. Adaptive Window Size:

● Dynamically adjusting the window size based on network conditions can help
manage retransmissions and improve performance.

4. Receiver Buffering:

● Advanced protocols allow receivers to buffer out-of-order frames, improving


throughput and reducing unnecessary retransmission.

33. Short notes on:


1. Socket (5 Marks)

A socket is a software endpoint that enables communication between two machines over
a network. It acts as an interface between the application layer and the transport layer in
the network model.

● A socket is defined by an IP address and a port number.

● It allows programs to send or receive data over TCP (Transmission Control


Protocol) or UDP (User Datagram Protocol).

There are two main types of sockets:

● Stream Socket (TCP): Provides reliable, ordered, and error-checked delivery of


data.

● Datagram Socket (UDP): Offers faster, connectionless communication, but


without reliability.

Example: A web browser (client) opens a socket to connect to a web server, enabling
data exchange via HTTP.
2. DNS (Domain Name System) (5 Marks)

DNS is like the internet’s phonebook. It converts human-readable domain names (like
www.example.com) into IP addresses (like 192.0.2.1) that computers use to
locate each other.

● It has a hierarchical structure, consisting of:

○ Root servers

○ Top-level domain (TLD) servers

○ Authoritative name servers

Working:

● When you type a URL, your computer sends a DNS query.

● If not cached, the DNS resolver contacts servers step by step to find the IP
address.

Types of queries:

● Recursive: DNS server fetches the final result.

● Iterative: DNS server responds with a referral to another server.

DNS makes web browsing user-friendly and efficient.

3. WWW (World Wide Web) (5 Marks)

The World Wide Web (WWW) is a system of interlinked hypertext documents,


images, videos, and other media, accessed via the internet.

● It uses the HTTP/HTTPS protocols to request and deliver data from web servers
to web browsers.

● Web pages are written in HTML, and may include scripts (JavaScript), styles
(CSS), and multimedia.
● Introduced by Tim Berners-Lee in 1989, it revolutionized how we access and
share information.

Working:

● When a user enters a URL, the browser sends a request to the web server, which
responds with the web page content.

WWW is different from the Internet—it is just a service running over the Internet.

4. FTP (File Transfer Protocol) (5 Marks)

FTP is a standard network protocol used to transfer files between a client and a server
on a computer network.

● Operates on port 21 for command control and port 20 for data transfer.

● Users must log in with username and password, but anonymous access is also
allowed in public servers.

● Allows actions like uploading, downloading, renaming, deleting, and listing


files.

Modes:

● Active mode

● Passive mode (more firewall-friendly)

Limitations:

● Data is transmitted in plain text, making it insecure. Safer alternatives include


SFTP (SSH File Transfer Protocol) and FTPS (FTP Secure).

FTP is commonly used for website maintenance and file distribution.

5. Wireless LAN (WLAN) (5 Marks)


A Wireless LAN (WLAN) is a type of local area network where devices connect
wirelessly using radio waves instead of cables.

● Based on IEEE 802.11 standards (commonly known as Wi-Fi).

● WLAN provides network access to devices like laptops, smartphones, and tablets
within a limited area—such as homes, schools, or offices.

Components:

● Access Point (AP): Acts as a bridge between wireless clients and the wired
network.

● Wireless Clients: Devices like smartphones or laptops with Wi-Fi support.

Advantages:

● Mobility and flexibility.

● Easy to set up and expand.

Security protocols like WEP, WPA, and WPA2/WPA3 are used to protect wireless
communication.

34. An organization is given the Net ID 192.138.15.0 and has to create four subnets.
Calculate the number of usable hosts for each subnet, the subnet ID, the broadcast
address, and the subnet mask for each subnet.
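
Borrowing 2 bits from the host part of this Class C network gives 2² = 4 subnets with a /26 mask (255.255.255.192); each subnet has 64 addresses, of which 62 are usable hosts. A minimal sketch that lists the subnet ID, broadcast address, usable hosts, and mask for each subnet:

import ipaddress

network = ipaddress.ip_network("192.138.15.0/24")      # given Class C Net ID
for subnet in network.subnets(prefixlen_diff=2):       # borrow 2 bits -> 4 subnets (/26)
    print(subnet.network_address,                      # subnet ID
          subnet.broadcast_address,                    # broadcast address
          subnet.num_addresses - 2,                    # usable hosts (62)
          subnet.netmask)                              # 255.255.255.192

The subnet IDs come out as 192.138.15.0, .64, .128, and .192, with broadcast addresses 192.138.15.63, .127, .191, and .255 respectively.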
35. Explain Distance Vector Routing with a suitable example.


Distance Vector Routing

Distance Vector Routing is a dynamic routing protocol in which each router shares its
routing table with its immediate neighbors periodically. The term "distance" refers to the
cost (usually hop count) to reach a destination, and "vector" refers to the direction (next
hop).
Each router builds its routing table based on:

● Information from its directly connected neighbors

● The Bellman-Ford algorithm

Key Features:

● Routing tables contain: Destination, Cost (distance), and Next hop

● Routers exchange tables periodically (e.g., every 30 seconds)

● If a router receives a better path (lower cost), it updates its table

Example:

Consider 3 routers A, B, and C:

● A is connected to B with cost 1

● B is connected to C with cost 1

● A is not directly connected to C

Initially:

● A’s table: A (0), B (1), C (∞)

● B’s table: B (0), A (1), C (1)

● C’s table: C (0), B (1), A (∞)

After exchange:

● A learns from B that C can be reached in 1 hop from B

● A updates its routing table: C (2 via B)


Conclusion:

Distance Vector Routing is simple and works well for small networks. However, it is
slower to converge and may suffer from count-to-infinity problems.
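
A minimal sketch of one exchange round for the A–B–C example above (the Bellman-Ford relaxation: a router adopts a neighbour's route whenever going through that neighbour is cheaper):

INF = float("inf")

# Direct link costs for the A--B--C example (A and C are not directly connected).
links = {"A": {"B": 1}, "B": {"A": 1, "C": 1}, "C": {"B": 1}}
tables = {r: {d: (0 if d == r else links[r].get(d, INF)) for d in links} for r in links}

def exchange_once():
    # Each router receives every neighbour's table and keeps the cheaper path.
    for router, neighbours in links.items():
        for nbr, link_cost in neighbours.items():
            for dest, nbr_cost in tables[nbr].items():
                if link_cost + nbr_cost < tables[router][dest]:
                    tables[router][dest] = link_cost + nbr_cost   # Bellman-Ford relaxation

exchange_once()
print(tables["A"])   # {'A': 0, 'B': 1, 'C': 2} -> A now reaches C via B with cost 2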

36. Briefly discuss any two techniques to improve Quality of services.

To improve Quality of Service (QoS) in networks, certain techniques are used to manage
bandwidth, reduce delay, and ensure smooth data flow. Two such techniques are:

1. Traffic Shaping:

Traffic shaping is used to control the volume and timing of traffic entering the network.
It helps regulate bursty traffic, making it more consistent, which reduces congestion and
improves overall performance.

● It smoothens traffic by delaying packets that exceed the defined rate.

● Helps in achieving better bandwidth utilization and avoiding packet loss.

● Common algorithms: Leaky Bucket and Token Bucket.

2. Resource Reservation (RSVP):

RSVP (Resource Reservation Protocol) allows applications to reserve necessary


resources like bandwidth along the data path before the transmission starts.

● It ensures that critical applications (e.g., video conferencing, VoIP) get


guaranteed delivery with low latency and jitter.

● Provides end-to-end QoS support by informing all routers along the path to
allocate required resources.

37. Short notes on:

1. Pure and Slotted ALOHA

ALOHA is a random access protocol used for sharing a common communication


medium. In Pure ALOHA, a station transmits a data frame whenever it has data to send,
without checking whether the channel is free or not. This can lead to collisions if two or
more stations transmit at the same time, causing data loss. If a collision occurs, the station
waits for a random amount of time before retransmitting. The maximum efficiency of
Pure ALOHA is about 18.4%, as it does not avoid overlapping of transmissions.

To improve efficiency, Slotted ALOHA was introduced. In this version, time is divided
into discrete slots, and a station can only send data at the beginning of these slots. If a
station misses the beginning of the slot, it waits for the next one. This time-slot structure
significantly reduces the chances of collisions since only one station can begin
transmission in each slot. As a result, Slotted ALOHA has a higher efficiency of
approximately 36.8%.

The key difference lies in the synchronization: Pure ALOHA is unsynchronized and
simpler, while Slotted ALOHA introduces timing coordination, which improves
performance at the cost of added complexity.

2. HTTP (HyperText Transfer Protocol)

HTTP is the protocol used to exchange data over the World Wide Web. It follows a
request-response model, where the client (usually a browser) sends a request to a web
server, and the server returns the appropriate response, such as an HTML page, image, or
file. HTTP is a stateless protocol, meaning that each request is treated independently; the
server does not retain any information about previous requests from the same client.

HTTP works over port 80 for unsecured connections and port 443 for secure HTTPS
communication. It uses different request methods such as GET (to retrieve data), POST
(to send data), PUT, and DELETE, depending on the type of interaction. For example,
when a user clicks on a link, the browser sends an HTTP GET request to the server,
which then returns the required web page.

Since HTTP is text-based, the data being transferred is in readable text format, making it
simple but also requiring secure encryption (HTTPS) for sensitive communication.
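
A minimal sketch of one request–response cycle using Python's standard library (example.com is only a placeholder host, and the call needs live network access):

import http.client

# One HTTP GET request/response over port 80 (HTTPSConnection would use 443 instead).
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/")                 # stateless request: method and path
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK
print(response.getheader("Content-Type"))
body = response.read()                   # the HTML page returned by the server
conn.close()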

3. SMTP (Simple Mail Transfer Protocol)

SMTP is a protocol used for sending emails across networks. It is a push protocol,
which means it pushes messages from a client to a mail server or from one mail server to
another. SMTP is used only for sending messages and does not retrieve them; that task is
handled by other protocols like POP3 or IMAP.
The process begins when a user sends an email through a client (like Gmail or Outlook).
The client contacts the SMTP server and sends the email details, including sender,
receiver, subject, and body. The SMTP server then forwards this email to the recipient’s
email server. From there, the recipient can download or view the email using a suitable
client.

SMTP operates over port 25 by default for unencrypted communication, while port 587
or port 465 is used for secure transmission. The protocol ensures that messages are
formatted correctly and routed properly through intermediate servers, delivering the
message to the intended destination.

38. Write the advantages of ICMP and IGMP over IPV4.

ICMP (Internet Control Message Protocol) and IGMP (Internet Group


Management Protocol) offer important enhancements to IPv4. They serve distinct roles
in error reporting, diagnostics, and multicast group management, respectively.

Advantages of ICMP Over IPv4:

● Error Reporting: ICMP helps in reporting errors during packet transmission. For
example, it sends a "Destination Unreachable" message if a router can't forward a
packet.

● Diagnostics: ICMP supports diagnostic tools like ping and traceroute, allowing
users to test connectivity and trace packet routes.

● Congestion Control: It allows routers to notify senders to slow down


transmission via Source Quench messages when congestion is detected.

● Path MTU Discovery: ICMP facilitates Path MTU discovery by sending


"Fragmentation Needed" messages to ensure packets are properly sized for the
network path.

Advantages of IGMP Over IPv4:

● Efficient Multicasting: IGMP enables efficient multicast communication by


sending data to multiple recipients at once, saving bandwidth.

● Group Membership Management: It manages the dynamic addition or removal


of hosts from multicast groups, ensuring multicast traffic is only sent to interested
receivers.

● Reduces Redundant Traffic: IGMP prevents unnecessary broadcasting of


multicast data, sending it only to hosts that need it.

● Improved Scalability: It allows multicast traffic to scale efficiently across large


networks without overburdening resources.

39. How the connection is established in TCP using three-way handshaking?


Explain in details.

The three-way handshake is a process used to establish a reliable connection between a


client and a server in TCP (Transmission Control Protocol). It ensures that both sides are
ready to communicate and have agreed on the parameters for data transmission.

The process involves three steps:

1. SYN (Synchronize) - Client to Server:

● Step: The client initiates the connection by sending a TCP packet with the SYN
(synchronize) flag set to 1.

● Purpose: This packet indicates that the client wants to establish a connection with
the server.

● Sequence Number: The packet also includes a randomly chosen initial sequence
number (ISN), which is used to track the bytes transmitted during the session.

2. SYN-ACK (Synchronize-Acknowledge) - Server to Client:

● Step: The server receives the SYN packet and responds by sending back a packet
with both SYN and ACK (acknowledge) flags set to 1.

● Purpose: The SYN part of this packet acknowledges the client's request to
establish the connection, while the ACK part acknowledges the receipt of the
client's ISN by setting the acknowledgment number to the client's ISN + 1.

● Sequence Number: The server also includes its own randomly generated ISN in
the packet.
3. ACK (Acknowledge) - Client to Server:

● Step: The client acknowledges the server’s response by sending a packet with the
ACK flag set to 1.

● Purpose: This packet acknowledges the server’s ISN by setting the


acknowledgment number to the server’s ISN + 1.

● Connection Established: After this final step, the connection is fully established,
and both sides can begin transmitting data.

40. What is analog modulation? Explain different types of analog modulation


techniques.

Analog modulation is a technique used to transmit analog signals (such as voice, video,
or music) over a communication channel by altering certain characteristics of a carrier
signal. The carrier signal is a high-frequency waveform that is modified in a way that
represents the information of the base signal (the analog signal).

The primary objective of modulation is to improve the signal's ability to travel long
distances without significant loss and to utilize the available frequency spectrum
effectively.

Types of Analog Modulation Techniques:

1. Amplitude Modulation (AM):

○ Description: In AM, the amplitude (height) of the carrier signal is varied in


proportion to the message signal (the baseband signal).

○ How it Works: The carrier signal’s amplitude is increased or decreased


based on the instantaneous amplitude of the message signal.

○ Application: AM is widely used in radio broadcasting, where the audio


signal is modulated onto a carrier wave to be transmitted over long
distances.
2. Frequency Modulation (FM):

● Description: In FM, the frequency of the carrier signal is varied according to the
amplitude of the message signal.

● How it Works: The carrier signal's frequency is shifted higher or lower depending
on the instantaneous value of the message signal.

● Application: FM is commonly used for high-quality audio transmission, such as


in FM radio broadcasts and television sound.

3. Phase Modulation (PM):

● Description: In PM, the phase of the carrier signal is varied in accordance with
the instantaneous amplitude of the message signal.

● How it Works: The phase of the carrier signal shifts based on the message
signal’s changes.

● Application: PM is often used in digital transmission systems, such as in


communication satellites.
41. Short notes on:

(i) Packet Switching (5 Marks)

Packet switching is a communication method used in computer networks where data is


broken into small packets before being transmitted. Each packet contains part of the data
along with control information (such as source and destination addresses). These packets
are sent independently over the network, possibly via different routes, and are
reassembled at the destination.

How Packet Switching Works:

1. Data Division: Large data is divided into smaller packets.

2. Transmission: Each packet travels independently through the network. Routers or


switches determine the optimal path for each packet.

3. Reassembly: Once all packets reach the destination, they are reassembled in the
correct order to recreate the original message.

Types of Packet Switching:

1. Datagram Packet Switching: Each packet is routed independently, with no


established path between sender and receiver. There is no guarantee of packet
order or delivery.

2. Virtual Circuit Packet Switching: A path is established before data transmission,


and packets follow the same route to the destination. This ensures the packets
arrive in order.

Advantages:

● Efficient use of network resources.

● Scalable for large networks.


● Robust, as the network can recover from failure of individual links or nodes.

Disadvantages:

● Overhead due to packet headers.

● Potential for network congestion as multiple users share resources.

(ii) HDLC Frame Format (5 Marks)

HDLC (High-Level Data Link Control) is a data link layer protocol used for
communication between devices over a network. It provides both connection-oriented
and connectionless services, ensuring reliable data transmission with error detection.

HDLC Frame Structure:


A typical HDLC frame consists of the following fields:

1. Flag (1 byte): Marks the beginning and end of a frame. It is always "01111110" in
binary.

2. Address (1-2 bytes): Identifies the destination device or the recipient of the frame.

3. Control (1-2 bytes): Contains control information, such as frame type and
sequence number.

4. Data (Variable size): Contains the actual data being transmitted.

5. FCS (Frame Check Sequence) (2 bytes): Used for error checking to ensure the
integrity of the frame. It is calculated based on the data content.

6. Flag (1 byte): The closing flag, identical to the opening flag.

Types of HDLC Frames:

1. Information (I-frame): Carries user data.

2. Supervisory (S-frame): Used for control purposes, such as acknowledgment and


flow control.

3. Unnumbered (U-frame): Used for link management functions.


Advantages of HDLC:

● Provides reliable and error-free data transmission.

● Supports both synchronous and asynchronous communication.
