CN Solved
1a. Explain the four basic topologies used in networks. List advantages and
disadvantages of each of them.
o Mesh Topology:
Every device is connected to every other device, requiring n(n−1)/2 links
for n devices.
Advantages: Dedicated links eliminate traffic congestion, make the network robust
(one failed link does not disable the rest), and provide privacy and easy fault
identification.
Disadvantages: The large amount of cabling and the number of I/O ports required
make installation difficult and expensive.
o Star Topology:
Each device is connected to a central hub, which manages communication.
Advantages: Easy installation and fault isolation; if a link fails, only that
device is affected.
Disadvantages: The whole topology depends on the hub; if the hub fails, the entire
network goes down.
o Bus Topology:
All devices are connected to a single backbone cable through drop lines and taps.
Advantages: Easy installation and less cabling than mesh or star.
Disadvantages: Difficult fault isolation and reconnection; a break in the backbone
cable stops all transmission.
o Ring Topology:
Devices are connected in a loop, with signals traveling in one direction
through repeaters.
Advantages: Easy installation and reconfiguration; a fault can be isolated quickly
because the signal circulates at all times.
Disadvantages: A break in the ring can disable the entire network, though
dual rings or switches can mitigate this.
Example: IBM’s Token Ring LANs.
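The link counts implied by these topologies can be checked with a small sketch (a Python illustration; the function name and the per-topology counting conventions are mine, not from the text):

```python
def links_required(n: int) -> dict:
    """Number of point-to-point links (or drop lines) needed to connect
    n devices in each basic topology; hub/backbone hardware not counted."""
    return {
        "mesh": n * (n - 1) // 2,  # every pair gets a dedicated duplex link
        "star": n,                 # one link per device to the central hub
        "bus": n,                  # one drop line (tap) per device onto the backbone
        "ring": n,                 # each device links to the next, closing the loop
    }

print(links_required(5))  # mesh needs 10 links to connect only 5 devices
```

Note how quickly mesh cabling grows: doubling the device count roughly quadruples the number of links.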
To grasp the role of each layer, it's helpful to visualize the logical connections between them.
Figure 1.21 in the book illustrates these connections in a simple internet model.
This distinction is key: the top three layers operate across the entire internet, while the lower two
layers manage communication on individual network segments or "links."
Another important way to understand these connections is by considering the data units created
at each layer.
In the top three layers, the data units (referred to as packets) are not modified by routers
or link-layer switches.
In the bottom two layers, however, the packet created by the host can be modified by
routers but not by link-layer switches.
Figure shows a second principle of protocol layering: identical objects exist below each layer for
connected devices.
At the network layer, although there is a logical connection between the two hosts, a
router might fragment packets into smaller units, so identical objects exist only
between two hops. The link between two hops, however, does not alter these packets.
This layering approach allows for a structured, predictable method of managing data as it moves
across the network.
1. Delivery: The system must ensure that data reaches the correct destination. Only the
intended recipient—whether a device or a user—should receive the data.
2. Accuracy: Data must be transmitted without errors. If data is altered during transmission
and not corrected, it becomes unusable.
3. Timeliness: Data must be delivered promptly. Delayed data, especially in applications like
video and audio, lose their value. For real-time transmission, data must be delivered in the
same sequence and without significant delays.
4. Jitter: Jitter refers to the inconsistency in packet arrival times. Inconsistent delays, such as
video packets arriving at varying intervals, can degrade the quality of the audio or video.
For instance, if video packets are sent every 30 ms, but some arrive after 40 ms, the video
quality will be affected.
Components
1. Message: The data or information being communicated (e.g., text, images, audio).
2. Sender: The device that sends the message, such as a computer or smartphone.
3. Receiver: The device that receives the message, like another computer or a printer.
4. Transmission Medium: The physical path through which the data is transmitted, like
cables or radio waves.
5. Protocol: A set of rules that governs the communication between devices to ensure proper
data exchange.
2a. What are guided transmission media? Explain twisted pair cable in detail
Guided Media: These include twisted-pair cables, coaxial cables, and fiber-optic cables.
Guided media are types of communication channels that provide a specific path for signals to travel
from one device to another. These include:
1. Twisted-Pair Cable: This type of cable consists of pairs of insulated copper wires twisted
together. The twisting helps reduce electromagnetic interference and maintains signal
quality.
Twisted-Pair Cable
A twisted pair cable consists of two insulated copper conductors twisted together. Each wire in the
pair serves a different function: one carries the signal to the receiver, and the other acts as a ground
reference. The receiver processes the difference between the two wires to retrieve the signal.
Shielded Twisted-Pair (STP): STP cables have an additional metal foil or braided mesh
covering each pair of conductors. This shielding reduces interference and improves signal
quality but makes the cables bulkier and more costly. STP is primarily used by IBM and is
less common outside of their applications.
Connectors
The RJ45 connector is the most common connector for UTP cables. It is a keyed connector,
meaning it can only be inserted in one direction, which ensures a proper connection.
Table 1.1: Categories of unshielded twisted-pair cables

Category   Specification                                          Data Rate (Mbps)   Use
1          Unshielded twisted-pair used in telephone lines        2                  Telephone
2          Unshielded twisted-pair originally used in T1 lines    10                 T1 lines
3          Improved Category 2 used in LANs                       20                 LANs
4          Improved Category 3 used in Token Ring networks        100                Token Ring networks
5          Cable wire is normally 24 AWG with a jacket
           and outside sheath                                     125                LANs
Performance
The performance of twisted-pair cables is often assessed by measuring attenuation (signal loss) in
relation to frequency and distance. Although twisted-pair cables can handle a broad range of
frequencies, attenuation increases significantly at frequencies above 100 kHz. Attenuation is
measured in decibels per kilometer (dB/km), and higher frequencies result in greater signal loss.
Applications
Twisted-pair cables are widely used in various applications:
Telephone Lines: Used for voice and data transmission in the local loop connecting
subscribers to telephone offices.
DSL Lines: Provide high-data-rate connections by utilizing the high bandwidth of UTP
cables.
Layered Architecture
To understand how the layers in the TCP/IP protocol suite work during communication between
two hosts, let's consider a small network composed of three local area networks (LANs), each
connected by a link-layer switch. These LANs are also interconnected through a router.
In this scenario, imagine that Host A (the source) communicates with Host B (the destination).
The communication process involves five devices: the source host (Host A), the link-layer
switch in the first link, the router, the link-layer switch in the second link, and the
destination host (Host B).
Each of these devices operates at different layers of the TCP/IP protocol stack, depending on its
role in the network:
1. Hosts
Both Host A and Host B are involved in all five layers of the TCP/IP model:
Application Layer: The source host (Host A) creates a message at the application layer
and sends it down through the stack.
Transport Layer: The message is passed to the transport layer, which ensures reliable
delivery.
Network Layer: At the network layer, the message is encapsulated into packets for
transmission across the network.
Data Link Layer: The packets are then prepared for transmission over the physical
network in the data-link layer.
Physical Layer: Finally, the message is sent through the physical medium (wires, cables,
etc.) to reach the destination host.
At the destination, Host B receives the message at the physical layer and passes it up through the
layers until it reaches the application layer for processing.
2. Router
A router plays a different role and operates at three layers of the TCP/IP model:
Network Layer: The router’s primary function is routing packets across networks. It
forwards packets based on their destination IP address.
Data Link Layer & Physical Layer: A router is connected to multiple links, and each link
may use a different data-link and physical-layer protocol. For instance, if a packet arrives
from LAN 1 (Link 1) using one set of protocols, the router must handle it and forward it to
LAN 2 (Link 2) using another set of protocols.
Importantly, the router does not deal with the transport or application layers, as its role is solely to
move packets between networks.
3. Link-Layer Switch
Data Link Layer: The switch processes the data frames and ensures they are forwarded to
the correct device within the same LAN.
Physical Layer: The switch forwards the data through the physical medium.
Unlike routers, link-layer switches do not need to handle different sets of protocols for different
links. They operate within a single LAN, using a single protocol set for the data-link and physical
layers.
2c. Compare OSI and TCP/IP Models. What are the reasons for OSI model to
fail?
Feature               OSI Model                           TCP/IP Model
Full Form             Open Systems Interconnection        Transmission Control Protocol/Internet Protocol
Protocol Dependency   Independent of specific protocols   Protocol-driven (e.g., TCP, IP)

Reasons for the failure of the OSI model:
1. OSI was completed when TCP/IP was already fully in place; replacing an established
protocol suite would have cost a great deal of time and money.
2. Some layers of the OSI model were never fully defined or implemented.
3. OSI implementations did not show a high enough level of performance to entice the
Internet community to switch from TCP/IP.
3a. What is bit-oriented framing and its frame pattern? Explain with an example
bit stuffing and unstuffing in bit-oriented framing.
Bit-Oriented Framing
Bit-oriented framing is a method used in data communication protocols to encapsulate frames (units of
transmission) as sequences of bits. It does not rely on byte boundaries and uses specific bit patterns as
delimiters to indicate the start and end of a frame.
Frame Pattern
A typical bit-oriented frame consists of:
1. Frame Delimiters (Flags):
o A special bit sequence (e.g., 01111110 in HDLC) marks the beginning and end of a frame.
2. Control Information:
o Contains headers like source and destination addresses, sequence numbers, or error-
checking data.
3. Payload (Data):
o The actual data being transmitted.
4. Frame Check Sequence (FCS):
o Ensures data integrity by detecting transmission errors.
1. Bit Stuffing:
o Whenever the sender encounters five consecutive 1s in the data, it inserts a 0
immediately after them, so the flag pattern 01111110 can never appear inside the frame.
2. Bit Unstuffing:
o The receiver scans the incoming bit stream. When it detects five consecutive 1s followed
by a 0, it removes the 0.
Example
Original Data:
01111110 1111101110111110
Bit Stuffing (Sender Side):
A 0 is inserted after every run of five consecutive 1s. Note that the first data byte
matches the flag pattern and must also be stuffed:
011111010 111110011 101111100
Framing with Delimiters:
Flags are added to the beginning and end:
01111110 | 011111010111110011101111100 | 01111110
Bit Unstuffing (Receiver Side):
The receiver strips the flags, then removes the 0 that follows every run of five
consecutive 1s.
Final Frame:
Original data restored:
01111110 1111101110111110
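The stuffing and unstuffing rules above can be sketched in a few lines of Python (an illustration, not from the text; function names are mine):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s (HDLC rule)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1  # skip the stuffed 0 that follows the five 1s
            run = 0
        i += 1
    return "".join(out)

data = "011111101111101110111110"
print(bit_stuff(data))                       # 011111010111110011101111100
print(bit_unstuff(bit_stuff(data)) == data)  # True
```

Unstuffing exactly inverts stuffing, so the receiver always recovers the original bit stream.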
Framing
To provide the flexibility necessary to support all the options possible in the modes and
configurations just described, HDLC defines three types of frames:
1. I-frames (information frames) are used to transport user data and control
information relating to user data (piggybacking).
2. S-frames (supervisory frames) are used only to transport control information.
3. U-frames (unnumbered frames) are reserved for system management and are used to
exchange control information between the connected devices.
Flag field-The flag field of an HDLC frame is an 8-bit sequence with the bit pattern
01111110 that identifies both the beginning and the end of a frame and serves as a
synchronization pattern for the receiver.
Address field- The second field of an HDLC frame contains the address of the secondary
station. If a primary station created the frame, it contains a to address. If a secondary creates
the frame, it contains a from address. An address field can be 1 byte or several bytes long,
depending on the needs of the network.
Control field -The control field is a 1- or 2-byte segment of the frame used for flow and
error control. The interpretation of bits in this field depends on the frame type.
Information field-The information field contains the user's data from the network layer or
management information. Its length can vary from one network to another.
FCS field-The frame check sequence (FCS) is the HDLC error detection field. It can
contain either a 2- or 4-byte ITU-T CRC.
Control Field
The control field determines the type of frame and defines its functionality.
I- frames are designed to carry user data from the network layer. In addition, they can
include flow and error control information (piggybacking). The subfields in the control
field are used to define these functions.
The first bit defines the type. If the first bit of the control field is 0, this means the
frame is an I-frame.
The next 3 bits, called N(S), define the sequence number of the frame.
The last 3 bits, called N(R), correspond to the acknowledgment number when
piggybacking is used.
The P/F field is a single bit with a dual purpose. It has meaning only when it is set
(bit = 1) and can mean poll or final. It means poll when the frame is sent by a primary
station to a secondary. It means final when the frame is sent by a secondary to a
primary.
Control Field for S-Frames
Supervisory frames are used for flow and error control whenever piggybacking is either
impossible or inappropriate. S-frames do not have information fields. If the first 2 bits of
the control field are 10, the frame is an S-frame. The last 3 bits, called N(R),
correspond to the acknowledgment number (ACK) or negative acknowledgment number
(NAK), depending on the type of S-frame. The 2 bits called code are used to define the type
of S-frame itself. With 2 bits, we can have four types of S-frames, as described below:
1. Receive ready (RR). If the value of the code subfield is 00, it is an RR S-frame.
The value of N(R) is the acknowledgment number.
2. Receive not ready (RNR). If the value of the code subfield is 10, it is an RNR S-
frame. The value of N(R) is the acknowledgment number.
3. Reject (REJ). If the value of the code subfield is 01, it is a REJ S-frame. The value of
N(R) is the negative acknowledgment number.
4. Selective reject (SREJ). If the value of the code subfield is 11, it is an SREJ S-
frame. This is a NAK frame used in Selective Repeat ARQ. The value of N(R) is
the negative acknowledgment number.
Unnumbered frames are used to exchange session management and control information
between connected devices.
U-frame codes are divided into two sections: a 2-bit prefix before the P/F bit and a 3-bit
suffix after the P/F bit. Together, these two segments (5 bits) can be used to create up to 32
different types of U-frames.
Figure shows how U-frames can be used for connection establishment and connection
release. Node A asks for a connection with a set asynchronous balanced mode (SABM)
frame; node B gives a positive response with an unnumbered acknowledgment (UA) frame.
After these two exchanges, data can be transferred between the two nodes (not shown in
the figure). After data transfer, node A sends a DISC (disconnect) frame to release the
connection; node B acknowledges it with a UA frame.
7. Application: Pure ALOHA is suitable for low-traffic networks; slotted ALOHA is
suitable for moderate-traffic networks.
9. Collision Probability: High in pure ALOHA, as multiple nodes can transmit
simultaneously; reduced in slotted ALOHA, as nodes transmit only at defined slots.
4a. Explain CRC encoder and decoder for 4 bit data word .
It is a subset of cyclic codes called the cyclic redundancy check (CRC), which is
used in networks such as LANs and WANs.
In the encoder,
1. The dataword has k bits (4 here); the codeword has n bits (7 here).
2. The size of the dataword is augmented by adding n − k (3 here) 0s to the right-
hand side of the word. The n-bit result is fed into the generator.
3. The generator uses a divisor of size n − k + 1 (4 here), predefined and agreed
upon. The generator divides the augmented dataword by the divisor (modulo-2
division).
4. The quotient of the division is discarded; the remainder (r2r1r0) is
appended to the dataword to create the codeword.
In the decoder, the received codeword is divided by the same divisor. If the remainder
(the syndrome) is all 0s, the dataword is extracted and accepted; otherwise, the
codeword is discarded.
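The encoder and decoder steps can be sketched in Python (an illustration; the default divisor 1011 corresponds to the common 4-bit generator for k = 4, n = 7, and the function names are mine):

```python
def crc_remainder(bits: str, divisor: str) -> str:
    """Modulo-2 (XOR) division; returns the n-k remainder bits."""
    work = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if work[i] == "1":                       # leading 1: XOR in the divisor
            for j, d in enumerate(divisor):
                work[i + j] = str(int(work[i + j]) ^ int(d))
    return "".join(work[-(len(divisor) - 1):])

def crc_encode(dataword: str, divisor: str = "1011") -> str:
    """Append n-k zeros, divide, and attach the remainder to the dataword."""
    augmented = dataword + "0" * (len(divisor) - 1)
    return dataword + crc_remainder(augmented, divisor)

def crc_check(codeword: str, divisor: str = "1011") -> bool:
    """Decoder: accept only if the syndrome (remainder) is all zeros."""
    return crc_remainder(codeword, divisor) == "0" * (len(divisor) - 1)

print(crc_encode("1001"))    # 1001110: dataword 1001 plus remainder 110
print(crc_check("1001110"))  # True
```

Flipping any single bit of the codeword makes the syndrome nonzero, so the decoder discards it.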
4b. Explain stop and wait protocol with FSM and Flow diagram.
FSMs
Sender States
The sender is initially in the ready state, but it can move between the ready and blocking
states.
Ready State. When the sender is in this state, it is only waiting for a packet from the
network layer. If a packet comes from the network layer, the sender creates a frame, saves
a copy of the frame, starts the only timer, and sends the frame. The sender then moves to
the blocking state.
Blocking State. When the sender is in this state, three events can occur:
a. If a time-out occurs, the sender resends the saved copy of the frame and
restarts the timer.
b. If a corrupted ACK arrives, it is discarded.
c. If an error-free ACK arrives, the sender stops the timer and discards the saved
copy of the frame. It then moves to the ready state.
Receiver
The receiver is always in the ready state. Two events may occur:
a. If an error-free frame arrives, the message in the frame is delivered to the network
layer and an ACK is sent.
b. If a corrupted frame arrives, it is discarded.
Figure shows an example. The first frame is sent and acknowledged. The second frame is
sent, but lost. After time-out, it is resent. The third frame is sent and acknowledged, but the
acknowledgment is lost. The frame is resent. However, there is a problem with this scheme:
the network layer at the receiver site receives two copies of the third packet, which is not
right. Adding sequence numbers to the frames solves this duplication problem.
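The sender FSM described above can be sketched as a simple transition function (a Python illustration; the state and event names follow the text, the action strings are mine):

```python
def sender_transition(state, event):
    """Return (next_state, action) for a Stop-and-Wait sender."""
    if state == "ready" and event == "packet_from_network_layer":
        return "blocking", "make frame, save copy, start timer, send frame"
    if state == "blocking":
        if event == "timeout":
            return "blocking", "resend saved frame, restart timer"
        if event == "corrupted_ack":
            return "blocking", "discard ACK"
        if event == "error_free_ack":
            return "ready", "stop timer, discard saved copy"
    return state, "ignore"  # any other event leaves the machine unchanged

state = "ready"
for ev in ["packet_from_network_layer", "timeout", "error_free_ack"]:
    state, action = sender_transition(state, ev)
    print(ev, "->", state, ":", action)
```

Driving the machine with an event trace like this is a quick way to check that every arrow in the FSM diagram is covered.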
Classful Addressing
When the Internet started, an IPv4 address was designed with a fixed-length prefix, but to
accommodate both small and large networks, three fixed-length prefixes were designed instead of
one (n = 8, n = 16, and n = 24). The whole address space was divided into five classes (class A, B,
C, D, and E). This scheme is referred to as classful addressing.
In class A, the network length is 8 bits, but since the first bit, which is 0, defines the class, we can
have only seven bits as the network identifier. This means there are only 2^7 = 128 networks in the
world that can have a class A address.
In class B, the network length is 16 bits, but since the first two bits, which are (10)2, define the
class, we can have only 14 bits as the network identifier. This means there are only 2^14 = 16,384
networks in the world that can have a class B address.
All addresses that start with (110)2 belong to class C. In class C, the network length is 24 bits, but
since three bits define the class, we can have only 21 bits as the network identifier. This means
there are 2^21 = 2,097,152 networks in the world that can have a class C address.
All addresses that start with (1110)2 belong to class D. Class D is not divided into prefix and
suffix; it is used for multicast addresses. All addresses that start with (1111)2 belong to class E.
As in class D, class E is not divided into prefix and suffix and is reserved for future use.
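The class of a dotted-decimal address follows directly from its leading bits, which can be read off the first byte (a Python sketch; the function name is mine):

```python
def address_class(ip: str) -> str:
    """Classify a dotted-decimal IPv4 address by its leading bits."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0    -> first byte 0-127
    if first < 192:
        return "B"   # leading bits 10   -> first byte 128-191
    if first < 224:
        return "C"   # leading bits 110  -> first byte 192-223
    if first < 240:
        return "D"   # leading bits 1110 -> first byte 224-239 (multicast)
    return "E"       # leading bits 1111 -> first byte 240-255 (reserved)

print(address_class("192.168.1.1"))  # C
```

The byte ranges in the comments are just the decimal expansion of the class-defining bit prefixes.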
Dijkstra’s Algorithm
The Dijkstra’s Algorithm is a greedy algorithm that is used to find the minimum distance
between a node and all other nodes in a given graph. Here we can consider node as a
router and graph as a network. It uses weight of edge .ie, distance between the nodes to
find a minimum distance route.
Algorithm:
1: Mark the source node current distance as 0 and all others as infinity.
2: Set the node with the smallest current distance among the non-visited nodes as the
current node.
3: For each neighbor, N, of the current node:
Calculate the potential new distance by adding the current distance of the current
node with the weight of the edge connecting the current node to N.
If the potential new distance is smaller than the current distance of node N, update
N's current distance with the new distance.
4: Make the current node as visited node.
5: If we find any unvisited node, go to step 2 to find the next node which has the smallest
current distance and continue this process.
Example:
[Figure: a weighted graph G, processed step by step starting from node 0. The nearest
neighbours of 0 (nodes 1 and 2) are relaxed first; the remaining nodes are then relaxed
in order of smallest current distance, marking each node visited in turn, until all
nodes have been visited.]
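The five steps of the algorithm map directly onto a priority-queue implementation (a Python sketch; the adjacency-dictionary format and the sample graph are mine):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from `source` to every node.
    `graph` maps node -> {neighbour: edge_weight}."""
    dist = {node: float("inf") for node in graph}   # step 1: all infinity...
    dist[source] = 0                                # ...except the source
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)                  # step 2: smallest current distance
        if u in visited:
            continue                                # stale heap entry, skip
        visited.add(u)                              # step 4: mark as visited
        for v, w in graph[u].items():               # step 3: relax each neighbour
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist                                     # step 5: loop until heap empty

g = {0: {1: 4, 2: 1}, 1: {3: 1}, 2: {1: 2, 3: 5}, 3: {}}
print(dijkstra(g, 0))  # {0: 0, 1: 3, 2: 1, 3: 4}
```

In the routing setting, each node is a router and each edge weight is the link cost, so `dist` becomes the cost column of the routing table.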
3. Improves Security:
o Isolates sensitive parts of a network, ensuring that access between subnets can be
controlled using routers or firewalls.
4. Simplifies Management:
o Makes it easier to manage smaller subnets than one large network.
o Helps in logically grouping devices, improving organization and troubleshooting.
5. Facilitates Hierarchical Routing:
o Subnetting reduces the size of routing tables, as routers only need to know how to route
traffic to a subnet rather than every individual IP address.
Importance of Subnetting
1. Scalability:
o Allows a network to grow efficiently while maintaining its structure and performance.
2. Broadcast Control:
o Broadcast traffic (e.g., ARP requests) is contained within subnets, improving overall
network efficiency.
3. Security Isolation:
o Devices in one subnet can be restricted from accessing another subnet unless explicitly
allowed.
4. Cost Savings:
o Makes better use of available IP address space, reducing waste and the need to acquire
additional IP blocks.
5. Support for Different Network Sizes:
o Allows networks of different sizes to coexist, tailored to specific needs (e.g., smaller
subnets for isolated servers, larger subnets for general user devices).
1. Routing
Definition:
Routing is the process of determining the best path or route for data packets to travel from the source to
the destination across interconnected networks. It is a global decision-making process that occurs at the
network level.
Key Points:
Goal: To identify the optimal path between nodes based on specific criteria like shortest distance,
least cost, or minimum delay.
Dynamic or Static:
o Dynamic Routing: Uses algorithms to update routing tables automatically based on
network conditions (e.g., OSPF, RIP, BGP).
o Static Routing: Manually configured routes that do not change unless explicitly updated.
Algorithms: Implements routing algorithms to compute routes, such as:
o Distance Vector Routing: Shares the cost to reach destinations with neighbors (e.g.,
RIP).
o Link State Routing: Builds a complete map of the network (e.g., OSPF).
Routing Table: A data structure maintained by routers containing information about routes to
various network destinations.
Example:
A router using OSPF (Open Shortest Path First) calculates the shortest path to a destination based on the
link-state information it gathers from other routers.
2. Forwarding
Definition:
Forwarding is the process of transferring a data packet from an input interface of a router to the
appropriate output interface based on the routing table. It is a local, per-packet operation performed at
each hop in the network.
Key Points:
Goal: To move packets toward their destination efficiently, using information in the routing
table.
Decentralized: Forwarding decisions are made at each router based on the packet's destination
address and the router's forwarding table.
Forwarding Table: The data structure consulted for each incoming packet to choose the
output interface; it is derived from the routing table.

Data Structure: Routing uses the routing table; forwarding uses the forwarding table.
Metric:
OSPF calculates the cost of reaching a destination from a source by considering link weights,
which can vary based on the type of service. This is shown in Figure below, where the total
cost of a path is the sum of the costs of the links that make up the path.
Forwarding Tables:
OSPF routers use Dijkstra’s algorithm to create forwarding tables by building the shortest-path
tree to destinations. The difference between OSPF and RIP forwarding tables is mainly in the
cost values. If OSPF uses hop count as its metric, its forwarding tables would be identical to
those of RIP. Both protocols determine the best route using shortest-path trees.
Areas:
OSPF is designed for both small and large autonomous systems (AS). In large ASs, flooding
link-state packets (LSPs) across the entire network can cause congestion, so OSPF introduces
areas to localize LSP flooding. The AS is divided into smaller sections called areas, with one
backbone area (Area 0) responsible for inter-area communication.
Link-State Advertisement (LSA):
OSPF routers advertise their link states to neighbors for forming a global link-state database
(LSDB). Unlike the simple graph model, OSPF distinguishes between different types of nodes
and links, requiring various types of advertisements:
Router Link: Announces router existence and its connection to other entities.
Network Link: Advertises the existence of a network, but with no associated cost.
Summary Link to Network: Advertised by area border routers to summarize links
between areas.
Summary Link to AS: Announced by AS boundary routers to inform other areas of external
AS links.
OSPF Implementation:
OSPF operates at the network layer and uses IP for message propagation. OSPF messages are
encapsulated in IP datagrams with a protocol field value of 89. OSPF has two versions, with
version 2 being the most widely implemented.
OSPF Algorithm:
OSPF uses a modified link-state routing algorithm. After routers form their shortest-path trees,
they create corresponding routing tables. The algorithm also handles OSPF message exchange.
Performance:
Update Messages: OSPF’s LSPs are complex and can create heavy traffic in large areas, using
considerable bandwidth.
Convergence of Forwarding Tables: OSPF converges relatively quickly once flooding is
complete, although Dijkstra's algorithm can take time to run.
Robustness: OSPF is more robust than RIP since routers operate independently after
constructing their LSDBs. A failure in one router has less impact on the overall network.
For example, ISPs can use DHCP to provide temporary addresses to users, allowing limited
IP resources to serve more customers.
Additional Information: DHCP can also provide essential details like the
network prefix, default router address, and name server address.
DHCP Message Format
The 64-byte option field in DHCP serves two purposes: carrying additional or vendor-specific
information. A special value, called a "magic cookie" (99.130.83.99), helps the client recognize
options in the message. The next 60 bytes contain options, structured in three fields: a 1-byte
tag, 1-byte length, and variable-length value. The tag field (e.g., 53) can indicate one of the 8
DHCP message types used by the protocol.
DHCP Operation
1. The host creates a DHCPDISCOVER message with only a random transaction-ID, as
it doesn’t know its own IP address or the server’s. The message is sent using UDP
(source port 68, destination port 67) and broadcast (source IP: 0.0.0.0, destination IP:
255.255.255.255).
2. The DHCP server responds with a DHCPOFFER message, containing the offered IP
address, server address, and lease time. This message is sent with the same port
numbers but reversed, using a broadcast address so other servers can also offer better
options.
3. The host selects the best offer and sends a DHCPREQUEST to the server. The
message includes the chosen IP address and is sent as a broadcast (source: new
client IP, destination: 255.255.255.255) to notify other servers that their offers were
declined.
4. The selected server responds with a DHCPACK if the IP address is valid, completing
the process. If the IP address is unavailable, a DHCPNACK is sent, and the host must
restart the process.
Using FTP
The DHCPACK message includes a pathname to a file with additional information (e.g.,
DNS server address). The client uses FTP to retrieve this information.
Error Control
Since DHCP relies on unreliable UDP, it ensures error control by requiring UDP
checksums and using timers with a retransmission policy. To avoid traffic congestion (e.g.,
after a power failure), clients use random timers for retransmission.
Transition States
The operation of DHCP described above is quite simple. To provide dynamic address allocation, the
DHCP client acts as a state machine that performs transitions from one state to another
depending on the messages it receives or sends.
1. INIT state: The client starts here and sends a Discover message to find a DHCP server.
2. SELECTING state: After receiving one or more Offer messages, the client selects
one offer.
3. REQUESTING state: The client sends a Request message to the selected server
and waits.
4. BOUND state: If the server responds with an ACK message, the client uses the
assigned IP address.
5. RENEWING state: When 50% of the lease time is expired, the client tries to renew
the lease by contacting the server. If successful, it stays in the BOUND state.
6. REBINDING state: If the lease is 75% expired and no response is received, the client
tries to contact any DHCP server. If the server responds, it stays BOUND; otherwise,
it goes back to INIT to request a new IP.
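The state transitions listed above can be encoded as a lookup table and replayed against an event trace (a Python sketch; the event strings are my own labels for the messages and timer expirations described in the text):

```python
# (current_state, event) -> next_state, following the DHCP client FSM above.
TRANSITIONS = {
    ("INIT", "send DHCPDISCOVER"): "SELECTING",
    ("SELECTING", "receive DHCPOFFER"): "REQUESTING",
    ("REQUESTING", "receive DHCPACK"): "BOUND",
    ("BOUND", "lease 50% expired"): "RENEWING",
    ("RENEWING", "receive DHCPACK"): "BOUND",
    ("RENEWING", "lease 75% expired"): "REBINDING",
    ("REBINDING", "receive DHCPACK"): "BOUND",
    ("REBINDING", "lease expired"): "INIT",
}

def run(events, state="INIT"):
    """Replay a sequence of events through the client state machine."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # unlisted events are ignored
    return state

print(run(["send DHCPDISCOVER", "receive DHCPOFFER", "receive DHCPACK"]))  # BOUND
```

Tracing the renewal path (`lease 50% expired`, then a missing ACK, then `lease 75% expired`) shows how a client drifts from BOUND through RENEWING into REBINDING.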
7a. Draw the FSM diagrams for connectionless and connected oriented services
offered by transport layer.
Connectionless Service
In a connectionless service, the source application divides its message into chunks of data and
sends them to the transport layer, which treats each chunk independently. There is no
relationship between the chunks, so they may arrive out of order at the destination. For example,
a client might send three chunks (0, 1, and 2), but due to delays, the server could receive them
out of order (0, 2, 1), as shown in Figure . This could result in a garbled message. If one packet
is lost, since there is no numbering or coordination between the transport layers, the receiving
side won't know and will deliver incomplete data. This lack of flow control, error control, and
congestion control makes the system inefficient.
Connection-Oriented Service
In a connection-oriented service, the client and the server first need to establish a logical
connection between themselves. The data exchange can only happen after the connection
establishment. After data exchange, the connection needs to be torn down.
Figure shows the connection establishment, data-transfer, and tear-down phases in a connection-
oriented service at the transport layer.
We can implement flow control, error control, and congestion control in a connection oriented
protocol.
The behavior of a transport-layer protocol, both when it provides a connectionless and when it
provides a connection-oriented protocol, can be better shown as a finite state machine (FSM).
Using this tool, each transport layer (sender or receiver) is thought of as a machine with a finite
number of states.
Every FSM must have an initial state, which is where the machine starts when it turns on. In
diagrams, rounded rectangles are used to represent states, colored text indicates events, and black
text shows actions. A horizontal line or a slash separates the event from the action, and arrows
depict the transition to the next state.
In a connectionless transport layer, the FSM has only one state: the established state. The
machines on both the client and server sides remain in the established state, always ready to send
and receive transport-layer packets.
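Both FSMs can be expressed with the same transition-table idea: the connection-oriented machine has open and close transitions, while the connectionless machine is the degenerate case with a single established state (a Python sketch; the state and event names are my own simplification of the diagrams):

```python
# Connection-oriented client: closed <-> established via open/close requests.
FSM = {
    ("closed", "open_request"): ("established", "send open-request packet"),
    ("established", "close_request"): ("closed", "send close-request packet"),
}

def step(state, event):
    """One FSM transition; unknown events leave the state unchanged."""
    return FSM.get((state, event), (state, "none"))

state = "closed"
state, action = step(state, "open_request")
print(state, "|", action)          # established | send open-request packet
state, action = step(state, "close_request")
print(state, "|", action)          # closed | send close-request packet

# Connectionless FSM: one state, every event loops back to it.
def connectionless_step(state="established", event=None):
    return "established"
```

The contrast is the whole point: a connectionless transport never leaves "established", so there is nothing to negotiate before sending.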
UDP Services
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP
addresses and port numbers.
Connectionless Services
UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if they
are coming from the same source process and going to the same destination program. The user
datagrams are not numbered. There is no connection establishment and no connection
termination. This means that each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism. The
receiver may overflow with incoming messages. The lack of flow control means that the process
using UDP should provide for this service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum. This means that the
sender does not know if a message has been lost or duplicated. When the receiver detects an
error through the checksum, the user datagram is silently discarded.
Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header, and the
data coming from the application layer. The pseudoheader is the part of the header of the IP
packet in which the user datagram is to be encapsulated with some fields filled with 0s.
If the checksum does not include the pseudoheader, a user datagram may arrive safe and sound.
However, if the IP header is corrupted, it may be delivered to the wrong host. The protocol field
is added to ensure that the packet belongs to UDP, and not to TCP. The value of the protocol
field for UDP is 17. If this value is changed during transmission, the checksum calculation at the
receiver will detect it and UDP drops the packet. It is not delivered to the wrong protocol.
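The checksum computation over pseudoheader, UDP header, and data can be sketched as follows (a Python illustration of the standard one's-complement algorithm; IP addresses are passed as 4-byte values, and the function names are mine):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of one's-complement 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes,
                 src_port: int, dst_port: int, payload: bytes) -> int:
    """Checksum over pseudoheader + UDP header (checksum field = 0) + data."""
    length = 8 + len(payload)                        # UDP header is 8 bytes
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, length)  # proto 17 = UDP
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return inet_checksum(pseudo + header + payload)

csum = udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), 1234, 53, b"hi")
print(hex(csum))
```

The receiver repeats the sum with the received checksum included; by the one's-complement identity, a correct datagram folds to all 1s, so the final complement is 0.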
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control. UDP assumes
that the packets sent are small and sporadic and cannot create congestion in the network.
Queuing
In UDP, queues are associated with ports. At the client site, when a process starts, it requests a
port number from the operating system. Some implementations create both an incoming and an
outgoing queue associated with each process. Other implementations create only an incoming
queue associated with each process.
Applications of UDP
1. Real-Time Communication:
o UDP is ideal for time-sensitive applications where speed is more critical than reliability.
o Examples:
Voice over IP (VoIP)
Sequence Numbers
The sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
Acknowledgment Numbers
An acknowledgment number in this protocol is cumulative and defines the sequence number of
the next packet expected. For example, if the acknowledgment number (ackNo) is 7, it means all
packets with sequence number up to 6 have arrived, safe and sound, and the receiver is expecting
the packet with sequence number 7.
Send Window
The send window is an imaginary box covering the sequence numbers of the data packets that
can be in transit or can be sent. In each window position, some of these sequence numbers define
the packets that have been sent; others define those that can be sent. The maximum size of the
window is 2^m − 1.
The send window at any time divides sequence numbers into four regions. The first region
includes acknowledged packets, which the sender no longer tracks. The second region contains
outstanding packets that have been sent but have an unknown status. The third region defines
sequence numbers for packets that can be sent but for which data hasn't been received from the
application layer. The fourth region consists of sequence numbers that cannot be used until the
window slides.
The window itself is an abstraction; three variables define its size and location at any time. We
call these variables Sf (send window, the first outstanding packet), Sn (send window, the next
packet to be sent), and Ssize (send window, size). The variable Sf defines the sequence number of
the first (oldest) outstanding packet. The variable Sn holds the sequence number that will be
assigned to the next packet to be sent. Finally, the variable Ssize defines the size of the window,
which is fixed in our protocol.
Receive Window
The receive window ensures correct packet reception and acknowledgment. In Go-Back-N, its
size is always 1, as the receiver expects a specific packet. Out-of-order packets are discarded and
must be resent. Only packets matching the expected sequence number, Rn, are accepted and
acknowledged. The window slides by one slot upon receiving the correct packet, with Rn
updated as (Rn + 1) modulo 2^m.
Timers
Although there can be a timer for each packet that is sent, in our protocol we use only one. The
reason is that the timer for the first outstanding packet always expires first. We resend all
outstanding packets when this timer expires.
Resending packets
When the timer expires, the sender resends all outstanding packets. For example, suppose the
sender has already sent packet 6 (Sn = 7), but the only timer expires. If Sf = 3, this means that
packets 3, 4, 5, and 6 have not been acknowledged; the sender goes back and resends packets 3,
4, 5, and 6. That is why the protocol is called Go-Back-N. On a time-out, the machine goes back
N locations and resends all packets.
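The send-window bookkeeping and the go-back-N resend rule can be sketched as a toy sender class. Assumptions for illustration: m = 3 (sequence numbers 0 through 7), the maximum window size 2^3 − 1 = 7, and "sending" merely logs the sequence number; the receiver and channel are not modeled.

```python
M = 2 ** 3   # m = 3 bits assumed for illustration: sequence numbers 0..7

class GoBackNSender:
    """Toy Go-Back-N sender tracking Sf (first outstanding) and Sn (next to send)."""

    def __init__(self, Ssize=M - 1):     # maximum window size is 2^m - 1
        self.Sf = 0
        self.Sn = 0
        self.Ssize = Ssize
        self.sent = []                   # log of every transmission

    def outstanding(self):
        return (self.Sn - self.Sf) % M

    def send(self):
        if self.outstanding() < self.Ssize:      # room left in the window?
            self.sent.append(self.Sn)
            self.Sn = (self.Sn + 1) % M
            return True
        return False

    def ack(self, ackNo):
        # Cumulative ACK: everything before ackNo has arrived, so slide Sf forward.
        while self.Sf != ackNo and self.outstanding() > 0:
            self.Sf = (self.Sf + 1) % M

    def timeout(self):
        # Go back N: resend every outstanding packet, from Sf up to Sn - 1.
        seq = self.Sf
        while seq != self.Sn:
            self.sent.append(seq)
            seq = (seq + 1) % M
```

Replaying the example from the text: after sending packets 0 through 6 (Sn = 7) and receiving a cumulative ACK with ackNo = 3 (Sf = 3), a time-out makes the sender go back and retransmit packets 3, 4, 5, and 6.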
1. In this situation, the client TCP, after receiving a close command from the client process,
sends the first segment, a FIN segment in which the FIN flag is set.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and
sends the second segment, a FIN + ACK segment, to confirm the receipt of the FIN segment
from the client and at the same time to announce the closing of the connection in the other
direction.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number, which
is one plus the sequence number received in the FIN segment from the server. This segment
cannot carry data and consumes no sequence numbers.
Congestion Window
To control the number of segments to transmit, TCP uses another variable called a congestion
window, cwnd, whose size is controlled by the congestion situation in the network. The cwnd
variable and the rwnd variable together define the size of the send window in TCP. The first is
related to the congestion in the middle (network); the second is related to the congestion at the
end. The actual size of the window is the minimum of these two.
Actual window size = minimum (rwnd, cwnd)
Congestion Detection
TCP detects network congestion through two main events: time-outs and the receipt of three
duplicate ACKs. A time-out occurs when the sender does not receive an ACK for a segment or
group of segments before the timer expires, signaling the likelihood of severe congestion and
possible segment loss. On the other hand, receiving three duplicate ACKs (four identical ACKs)
indicates that one segment is missing, but others have been received, suggesting mild congestion
or network recovery. While early TCP versions like Tahoe treated both events the same, later
versions like Reno distinguish between the two, with time-outs indicating stronger congestion
than duplicate ACKs. TCP relies on ACKs as the only feedback to detect congestion, where
missing or delayed ACKs serve as indicators of network conditions.
Congestion Policies
TCP’s general policy for handling congestion is based on three algorithms: slow start, congestion
avoidance, and fast recovery.
Slow Start: Exponential Increase
The slow-start algorithm begins with the congestion window (cwnd) set to one maximum
segment size (MSS) and increases by one MSS for each received acknowledgment. The MSS is
negotiated during connection establishment. Despite its name, the algorithm grows the window
exponentially. Initially, the sender transmits one segment, and upon receiving the ACK, the
cwnd increases by 1, allowing the sender to transmit two segments. Each acknowledgment
further increases cwnd, doubling the number of segments the sender can transmit, resulting in
rapid growth as long as no congestion is detected. The size of the congestion window in this
algorithm is a function of the number of ACKs arrived and can be determined as follows.
If an ACK arrives, cwnd = cwnd + 1.
If we look at the size of the cwnd in terms of round-trip times (RTTs), we find that the growth
rate is exponential in terms of each round trip time, which is a very aggressive approach:
A slow start cannot continue indefinitely. There must be a threshold to stop this phase. The
sender keeps track of a variable named ssthresh (slow-start threshold). When the size of the
window in bytes reaches this threshold, slow start stops and the next phase starts.
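The doubling-per-RTT behavior can be sketched as follows, with cwnd measured in MSS units; the ssthresh value used in the check below is an arbitrary illustration.

```python
def slow_start(ssthresh, max_rtts):
    """cwnd (in MSS) doubles each RTT -- one MSS per ACK -- until it reaches ssthresh."""
    cwnd = 1
    history = [cwnd]
    for _ in range(max_rtts):
        if cwnd >= ssthresh:
            break                      # slow start ends; the next phase takes over
        # all cwnd segments are ACKed in one RTT, and each ACK adds 1 MSS
        cwnd = min(cwnd * 2, ssthresh)
        history.append(cwnd)
    return history
```

With ssthresh = 16, the window grows 1, 2, 4, 8, 16 over successive round trips and then stops doubling.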
Congestion Avoidance: Additive Increase
To avoid congestion before it happens, the exponential growth must be slowed to an additive one.
For example, if the sender starts with cwnd = 4, it can send four segments. Upon receiving four
ACKs, one segment slot opens up, increasing cwnd to 5. After sending five segments and
receiving five acknowledgments, cwnd increases to 6, and so on.
The congestion window can be expressed as:
If an ACK arrives, cwnd = cwnd + (1/cwnd).
The window increases by (1/cwnd) portion of the Maximum Segment Size (MSS) in bytes.
Thus, all segments in the previous window must be acknowledged to increase cwnd by 1 MSS
byte. This results in a linear growth rate of cwnd with each round-trip time (RTT), making it a
more conservative approach than slow-start.
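The per-ACK rule can be checked numerically: applying cwnd = cwnd + (1/cwnd) once for each segment of a window of 4 raises cwnd by just under 1 MSS, which the text idealizes as exactly 1 MSS per RTT. A minimal sketch:

```python
def congestion_avoidance(cwnd, acks):
    """Apply the per-ACK rule cwnd = cwnd + (1/cwnd), with cwnd in MSS units."""
    for _ in range(acks):
        cwnd += 1.0 / cwnd
    return cwnd
```

Starting from cwnd = 4.0 and applying four ACKs gives roughly 4.92, i.e., nearly one full MSS of growth per round trip.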
Fast Recovery
The fast-recovery algorithm is optional in TCP. The old version of TCP did not use it, but the
new versions try to use it. It starts when three duplicate ACKs arrive, which is interpreted as
light congestion in the network. Like congestion avoidance, this algorithm is also an additive
increase, but it increases the size of the congestion window when a duplicate ACK arrives (after
the three duplicate ACKs that trigger the use of this algorithm). We can say
If a duplicate ACK arrives, cwnd = cwnd + (1/cwnd)
Policy Transition
We discussed three congestion policies in TCP. Now the question is when each of these policies
is used and when TCP moves from one policy to another. To answer these questions, we need to
refer to three versions of TCP: Tahoe TCP, Reno TCP, and New Reno TCP.
Tahoe TCP
The early TCP, known as Tahoe TCP, used only two algorithms in its congestion policy:
slow start and congestion avoidance.
Tahoe TCP treats the two signs used for congestion detection, time-out and three duplicate ACKs,
in the same way. In this version, when the connection is established, TCP starts the slow-start
algorithm and sets the ssthresh variable to a pre-agreed value (normally a multiple of MSS) and
the cwnd to 1 MSS.
If congestion is detected (occurrence of time-out or arrival of three duplicate ACKs), TCP
immediately interrupts this aggressive growth and restarts a new slow start algorithm by limiting
the threshold to half of the current cwnd and resetting the congestion window to 1 MSS.
If no congestion is detected while reaching the threshold, TCP learns that the ceiling of its
ambition is reached; it should not continue at this speed. It moves to the congestion avoidance
state and continues in that state.
In the congestion-avoidance state, the size of the congestion window is increased by 1 each time
a number of ACKs equal to the current size of the window has been received.
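Tahoe's policy transitions can be condensed into a single per-round-trip step function. This is a simplified sketch (real TCP counts bytes and individual ACKs, not whole RTT ticks), and the floor of 2 MSS on ssthresh is a common convention assumed here, not stated in the text.

```python
def tahoe_step(cwnd, ssthresh, congestion_detected):
    """One coarse RTT step of Tahoe TCP, with cwnd and ssthresh in MSS units."""
    if congestion_detected:                        # time-out OR three duplicate ACKs
        return 1, max(cwnd // 2, 2)                # halve threshold, restart slow start
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh   # slow start: exponential growth
    return cwnd + 1, ssthresh                      # congestion avoidance: linear growth
```

Driving the function with five quiet round trips, one congestion signal, and one more quiet round trip shows the exponential phase, the linear phase, and the reset.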
Reno TCP
A newer version of TCP, called Reno TCP, added a new state to the congestion-control FSM,
called the fast-recovery state. This version treated the two signals of congestion, time-out and the
arrival of three duplicate ACKs, differently. In this version, if a time-out occurs, TCP moves to
the slow-start state (or starts a new round if it is already in this state); on the other hand, if three
duplicate ACKs arrive, TCP moves to the fast-recovery state and remains there as long as more
duplicate ACKs arrive.
When TCP enters the fast-recovery state, three major events may occur. If duplicate ACKs
continue to arrive, TCP stays in this state, but the cwnd grows additively, as in congestion
avoidance. If a time-out occurs, TCP assumes that there is real congestion in the network and
moves to the slow-start state. If a new (nonduplicate) ACK arrives, TCP moves to the
congestion-avoidance state, but deflates the size of the cwnd to the ssthresh value, as though the
three duplicate ACKs had not occurred and the transition were from the slow-start state to the
congestion-avoidance state.
Stop-and-Wait Protocol
Stop-and-Wait is a connection-oriented protocol, which uses both flow and error control.
Both the sender and the receiver use a sliding window of size 1. The sender sends one
packet at a time and waits for an acknowledgment before sending the next one.
To detect corrupted packets, we need to add a checksum to each data packet. When a
packet arrives at the receiver site, it is checked. If its checksum is incorrect, the packet is
corrupted and silently discarded.
The silence of the receiver is a signal for the sender that a packet was either corrupted or
lost. Every time the sender sends a packet, it starts a timer.
If an acknowledgment arrives before the timer expires, the timer is stopped and the
sender sends the next packet (if it has one to send).
If the timer expires, the sender resends the previous packet, assuming that the packet was
either lost or corrupted. This means that the sender needs to keep a copy of the packet
until its acknowledgment arrives.
Sequence Numbers
To prevent duplicate packets, the protocol uses sequence numbers and acknowledgment
numbers. A field is added to the packet header to hold the sequence number of that packet.
Acknowledgment Numbers
Since the sequence numbers must be suitable for both data packets and acknowledgments, we
use this convention: The acknowledgment numbers always announce the sequence number of the
next packet expected by the receiver.
FSMs
Since the protocol is a connection-oriented protocol, both ends should be in the established state
before exchanging data packets. The states are actually nested in the established state.
Sender
The sender is initially in the ready state, but it can move between the ready and blocking state.
The variable S is initialized to 0.
Ready state. When the sender is in this state, it is only waiting for one event to occur. If a request
comes from the application layer, the sender creates a packet with the sequence number set to S.
A copy of the packet is stored, and the packet is sent. The sender then starts the only timer. The
sender then moves to the blocking state.
Blocking state. When the sender is in this state, three events can occur:
a. If an error-free ACK arrives with the ackNo related to the next packet to be sent, which means
ackNo = (S + 1) modulo 2, then the timer is stopped. The window slides, S = (S + 1) modulo 2.
Finally, the sender moves to the ready state.
b. If a corrupted ACK or an error-free ACK with the ackNo ≠ (S + 1) modulo 2 arrives, the ACK
is discarded.
c. If a time-out occurs, the sender resends the only outstanding packet and restarts the timer.
Receiver
The receiver is always in the ready state. Three events may occur:
a. If an error-free packet with seqNo = R arrives, the message in the packet is delivered to the
application layer. The window then slides, R = (R + 1) modulo 2. Finally an ACK with ackNo =
R is sent.
b. If an error-free packet with seqNo ≠ R arrives, the packet is discarded, but an ACK with
ackNo = R is sent.
c. If a corrupted packet arrives, the packet is discarded.
Figure shows an example of the Stop-and-Wait protocol. Packet 0 is sent and acknowledged.
Packet 1 is lost and resent after the time-out. The resent packet 1 is acknowledged and the timer
stops. Packet 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no
idea if the packet or the acknowledgment is lost, so after the time-out, it resends packet 0, which
is acknowledged.
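The behavior above can be sketched as a toy simulation. Assumptions: the channel is modeled by a caller-supplied drop function, only data-packet loss is simulated (a lost ACK would look the same to the sender: silence followed by a time-out), and checksums are omitted.

```python
class SWReceiver:
    """Stop-and-Wait receiver: accept only seqNo == R, always answer with ackNo = R."""

    def __init__(self):
        self.R = 0
        self.delivered = []

    def receive(self, seqNo, data):
        if seqNo == self.R:                  # expected packet: deliver and slide
            self.delivered.append(data)
            self.R = (self.R + 1) % 2
        return self.R                        # ackNo = next expected sequence number

def stop_and_wait_send(messages, receiver, drop):
    """Toy sender: drop(i, attempt) -> True means that copy is lost in the channel.
    A lost packet produces silence, so the 'time-out' branch simply resends."""
    S = 0
    log = []                                 # (message index, seqNo) per transmission
    for i, msg in enumerate(messages):
        attempt = 0
        while True:
            attempt += 1
            log.append((i, S))
            if not drop(i, attempt):
                ackNo = receiver.receive(S, msg)
                if ackNo == (S + 1) % 2:     # ACK for this packet: slide the window
                    S = (S + 1) % 2
                    break
            # otherwise no ACK arrives, the timer expires, and the loop resends
    return log
```

Dropping only the first copy of packet 1 reproduces the scenario in the text: packet 0 goes through, packet 1 is lost, the time-out fires, and the resent packet 1 is delivered and acknowledged.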
For smooth Internet operation, the protocols in the first four layers of the TCP/IP suite need to
be standardized and documented. These standard protocols are typically part of operating
systems like Windows or UNIX. However, the application-layer protocols can be both standard
and nonstandard for added flexibility.
In HTTP, Persistent and Non-Persistent connections refer to how connections between a client (e.g.,
browser) and a server are managed during data transfer. Here are the key differences:
Connection Handling: A persistent connection uses the Connection: keep-alive header to keep the
connection open; a non-persistent connection closes automatically after a single
request/response cycle.
Use Case: Persistent connections suit modern web applications that fetch multiple resources
(e.g., images, scripts); non-persistent connections suit simple, single-resource requests or
older protocols.
Resource Utilization: Persistent connections are more efficient in terms of network resources
and server processing; non-persistent connections consume more resources due to frequent
connection handling.
HTTP Versions: Persistent is the default in HTTP/1.1 and later versions unless explicitly
disabled; non-persistent is the default behavior in HTTP/1.0 unless explicitly configured.
Here is a comparison of local logging and remote logging, which differ based on where and how log
data is stored and accessed:
Access: Local logging requires direct access to the local machine to view logs; remote logs can
be accessed from any system with appropriate credentials and tools.
Reliability: Local logs are susceptible to data loss if the local system crashes or experiences
storage failure; remote logging provides greater reliability since logs are stored externally,
independent of the host system.
Security: Local logs may be more vulnerable to local attacks or unauthorized access; remote logs
are typically more secure, as they can be stored on secure remote systems with controlled
access.
The Domain Name System (DNS) was developed to simplify access to Internet resources by
mapping human-friendly names to IP addresses, which are needed for network
identification. Similar to how a telephone directory helps map names to numbers, DNS
serves as a directory for the Internet, allowing users to remember domain names rather than
numeric IP addresses. A central directory for the entire Internet would be impractical due to
its vast scale and vulnerability to failures. Instead, DNS information is distributed across
multiple servers worldwide. When a host requires name-to-IP mapping, it contacts the
nearest DNS server with the necessary information. This distributed structure enhances
reliability and efficiency.
Figure shows how TCP/IP uses a DNS client and a DNS server to map a name to an address.
A user wants to use a file transfer client to access the corresponding file transfer server
running on a remote host. The user knows only the file transfer server name, such as
afilesource.com. The TCP/IP suite needs the IP address of the file transfer server to make
the connection.
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The DNS
client sends a message to a DNS server with a query that gives the file transfer server
name using the known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file transfer server.
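Steps 2 through 5 happen automatically when a program hands a name to the operating system's stub resolver. A minimal sketch using Python's standard socket module; the name afilesource.com from the text is illustrative and may not actually exist, so the usage below resolves localhost instead.

```python
import socket

def resolve(host: str) -> str:
    """Hand a host name to the OS stub resolver (the DNS client) and return an
    IPv4 address; the resolver queries its configured DNS server as needed."""
    infos = socket.getaddrinfo(host, None, socket.AF_INET, socket.SOCK_STREAM)
    return infos[0][4][0]    # sockaddr of the first result is (ip_address, port)
```

For example, resolve("localhost") returns the loopback address without any network query, since the mapping is known locally.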