Unit-4 ECE 3-2 Notes
The Network Layer Design Issues
Store-and-Forward Packet Switching:
Store-and-forward switching is a method of packet switching in which the switching device receives the complete frame and checks it for errors before forwarding it.
The frame is checked for errors using a CRC (Cyclic Redundancy Check); if an error is found, the frame is discarded, otherwise it is forwarded to the next device.
Flow control − If too many packets are present in the network at the same time, this layer routes packets along alternative paths to prevent bottlenecks and congestion.
Breaks large packets − Breaks larger packets into smaller packets.
Connection Oriented service − It is a network communication mode, where a
communication session is established before any useful data can be transferred
and where a stream of data is delivered in the same order as it was sent.
Connectionless Service − It is a data transmission method used in packet
switching networks by which each data unit is individually addressed and routed
based on information carried in each unit, rather than in the setup information of a
prearranged, fixed data channel as in connection-oriented communication.
Datagram − A datagram is a basic transfer unit associated with a packet-
switched network. The delivery, arrival time and order of arrival need not be
guaranteed by the network.
A virtual circuit − It is a means of transporting data over a packet-switched computer network in such a way that it appears as though there is a dedicated physical-layer link between the source and destination end systems.
Connectionless Service
Connectionless service is similar to a postal system, in which each letter may take a different route from the source to the destination address.
Connectionless service is used to transfer data from one end to another without creating any connection, so it does not require establishing a connection before sending data from the sender to the receiver.
It is not a reliable network service because it does not guarantee the delivery of data packets to the receiver, and data packets may be received by the receiver in any order.
Therefore, we can say that the data packets do not follow a defined path.
In connectionless service, a transmitted data packet may fail to reach the receiver due to network congestion, and the data may be lost.
Connection-Oriented Service
A connection-oriented service is a network service modeled on the telephone system.
A connection-oriented service is used to create an end-to-end connection between the sender and the receiver before transmitting data over the same or different networks.
In connection-oriented service, packets are delivered to the receiver in the same order in which the sender sent them.
It uses a handshake method that creates a connection between the sender and the receiver for transmitting data over the network.
Hence it is also known as a reliable network service.
Suppose a sender wants to send data to a receiver. First, the sender sends a connection request to the receiver in the form of a SYN packet.
After that, the receiver responds to the sender's request with a SYN-ACK packet.
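As a quick illustration (a minimal Python sketch using the standard socket module; the host names, ports, and payloads below are placeholders), a TCP socket models the connection-oriented service and a UDP socket the connectionless one:

    import socket

    # Connection-oriented: TCP performs the SYN / SYN-ACK / ACK handshake
    # when connect() is called, and then delivers bytes in order.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))          # handshake happens here
    tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
    tcp.close()

    # Connectionless: UDP addresses each datagram individually; no connection
    # is set up and neither delivery nor ordering is guaranteed.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("192.0.2.10", 9999))  # placeholder address
    udp.close()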
The call request and call accept packets are used to establish the connection between
the sender and the recipient.
The term "virtual circuit" refers to a logical link between two network nodes,
typically in a communications network. The path consists of many network parts
that are connected by switches.
The transmitter and receiver in the above diagram are A and B. The sender and receiver are linked together using the call request and call accept packets. Once a path has been established, data is transferred.
Datagram Networks
The switch uses the destination information contained in each packet to direct it to
the intended location.
As a result, packets have a header with all of the information about the destination.
Datagram packets transmitted between hosts H1 and H2 are shown in the diagram
above. The same message is being carried by the four datagram packets bearing the
labels A, B, C, and D, each of which is being sent by a separate path.
The packets of the message may reach their destination out of order. It is H2's responsibility to rearrange the packets in order to recover the original message.
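A tiny sketch in Python (the sequence numbers and payloads are hypothetical) of how H2 could put the datagrams back in order:

    # Datagrams arrive out of order; assume each carries a sequence number.
    arrived = [(3, b"D"), (1, b"B"), (0, b"A"), (2, b"C")]   # (seq, payload)

    # H2 sorts by sequence number to recover the original message.
    message = b"".join(payload for _, payload in sorted(arrived))
    print(message)   # b'ABCD'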
Routing algorithm
o In order to transfer the packets from source to the destination, the network layer
must determine the best route through which packets can be transmitted.
o Whether the network layer provides datagram service or virtual circuit service, the main job of the network layer is to provide the best route. The routing protocol performs this job.
o The routing protocol is a routing algorithm that provides the best path from the
source to the destination. The best path is the path that has the "least-cost path"
from source to the destination.
o Routing is the process of forwarding the packets from source to the destination
but the best route to send the packets is determined by the routing algorithm.
Classification of a Routing algorithm
The Routing algorithm is divided into two categories:
o Adaptive Routing algorithm
o Non-adaptive Routing algorithm
Optimality Principle
It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route. To see this, call the part of the route from I to J r1 and the rest of the route r2. If a route better than r2 existed from J to K, it could be concatenated with r1 to improve the route from I to K, contradicting our statement that r1r2 is optimal.
Sink Tree for routers:
As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. This tree is called a sink tree and is illustrated in fig(1).
In the given figure the distance metric is the number of hops. Therefore, the goal of all routing algorithms is to discover and use the sink trees for all routers.
The sink tree is not unique; other trees with the same path lengths may exist. If we allow all of the possible paths to be chosen, the tree becomes a more general structure called a DAG (Directed Acyclic Graph). DAGs have no loops.
We will use sink trees as convenient shorthand for both cases. In both cases we make the technical assumption that the paths do not interfere with each other, so, for example, a traffic jam on one path will not cause another path to divert.
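As a rough illustration, the sketch below (plain Python, hypothetical topology, hop count as the distance metric) runs a shortest-path search from a chosen destination and records each router's next hop toward it; taken together, those next hops form the sink tree rooted at that destination:

    import heapq

    def sink_tree(graph, dest):
        """Return {router: next hop toward dest} for least-cost paths."""
        dist = {dest: 0}
        next_hop = {}
        heap = [(0, dest, dest)]                  # (cost, node, neighbour nearer dest)
        while heap:
            d, node, via = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                          # stale heap entry
            if node != dest:
                next_hop[node] = via
            for nbr, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr, node))
        return next_hop

    # Hypothetical topology; every link costs 1 hop.
    g = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "D": 1},
         "C": {"A": 1, "D": 1}, "D": {"B": 1, "C": 1}}
    print(sink_tree(g, "D"))    # e.g. {'B': 'D', 'C': 'D', 'A': 'B'}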
As part of an open-loop admission policy, switches in a flow should first check the resource requirement of a network flow before transmitting it further.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion
after it happens. Several techniques are used by different protocols; some of them
are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from the upstream node. This may cause the upstream node or nodes to become congested and in turn reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction to the data flow.
2. Choke Packet:
A choke packet is a packet sent by the congested node directly to the source to inform it of the congestion so that the source can reduce its traffic.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the
source. The source guesses that there is congestion in a network.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet and explicit signaling is that in explicit signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Step 1 − The basic way to avoid congestion is to build a network that is well matched to the traffic that it carries. If more traffic is directed at a link than its bandwidth can handle, congestion will certainly occur.
Step 2 − Sometimes resources can be added dynamically, such as extra routers and links, when there is serious congestion. This is called provisioning, and it happens on a timescale of months, driven by long-term trends.
Step 3 − To make the most of the existing network capacity, routes can be tailored to traffic patterns that change during the day as network users in different time zones wake and sleep.
Step 4 − Some local radio stations have helicopters flying around their cities to report on road congestion, making it possible for their mobile listeners to route their packets (cars) around hotspots. This is called traffic-aware routing.
Step 5 − Sometimes it is not possible to increase capacity. The only way to reduce the
congestion is to decrease the load. In a virtual circuit network, new connections can be
refused if they would cause the network to become congested. This is called admission
control.
Step 6 − Routers can monitor the average load, queueing delay, or packet loss. In all
these cases, the rising number indicates growing congestion. The network is forced to
discard packets that it cannot deliver. The general name for this is load shedding. A good technique for choosing which packets to discard can help to prevent congestion collapse.
The main goal of traffic-aware routing is to identify the best routes by considering the load: the link weight is set to be a function of the fixed link bandwidth and propagation delay plus the variable measured load or average queueing delay.
Least-weight paths will then favour paths that are more lightly loaded, all else being equal.
Step 1 − Consider a network which is divided into two parts, East and West, connected by links CF and EI.
Step 2 − Suppose most of the traffic in between East and West is using link CF, and as
a result CF link is heavily loaded with long delays. Including queueing delay in the
weight which is used for shortest path calculation will make EI more attractive.
Step 3 − After installing the new routing tables, most of East-West traffic will now go
over the EI link. As a result in the next update CF link will appear to be the shortest
path.
Step 4 − As a result the routing tables may oscillate widely, leading to erratic routing
and many potential problems.
Step 5 − If we consider only bandwidth and propagation delay, and ignore the load, this problem does not occur.
Step 6 − Two techniques can contribute to a successful solution, which are as follows −
Multipath routing
A routing scheme that shifts traffic across routes slowly
In general, congestion can be reduced either by increasing the resources or by decreasing the load, though neither is always practical. There are several approaches to congestion control that are usually applied on different time scales to either prevent congestion or react to it once it has occurred.
Admission Control
It is one of the techniques widely used in virtual-circuit networks to keep congestion at bay. The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested.
Admission control can also be combined with traffic aware routing by considering
routes around traffic hotspots as part of the setup procedure.
Consider two views of a network: (a) a congested network and (b) the portion of the network that is not congested. A virtual circuit from A to B is also shown below −
Step 2 − To avoid this situation, we can redraw the network as shown in figure (b),
removing the congested routers and all of their lines.
Step 3 − The dashed line indicates a possible route for the virtual circuit that avoids the
congested routers.
Traffic Throttling
Traffic throttling is one of the approaches to congestion control. In the Internet and other computer networks, senders adjust their transmissions to send as much traffic as the network can readily deliver. In this setting, the network aims to operate just before the onset of congestion.
There are some approaches to throttling traffic that can be used in both datagram
and virtual-circuit networks.
First, routers have to determine when congestion is approaching, ideally before it has arrived. To do so, each router can continuously monitor the resources it is using.
dnew = α·dold + (1 − α)·s
Here d is the router's running estimate, s is the latest sample of the monitored quantity (for example, the instantaneous queue length), and the constant α determines how fast the router forgets recent history. This is called an EWMA (Exponentially Weighted Moving Average).
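A short Python sketch of the EWMA update a router could keep per output line (the sample values and α are hypothetical):

    def ewma(samples, alpha=0.9):
        """d_new = alpha * d_old + (1 - alpha) * s, applied to each new sample s."""
        d, history = 0.0, []
        for s in samples:                  # s: instantaneous queue length sample
            d = alpha * d + (1 - alpha) * s
            history.append(round(d, 3))
        return history

    # A burst of queueing enters the estimate gradually and fades slowly again.
    print(ewma([0, 0, 10, 10, 0, 0]))
    # [0.0, 0.0, 1.0, 1.9, 1.71, 1.539]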
Routers must deliver timely feedback to the senders that are causing the congestion.
Routers must also identify the appropriate senders. They must then warn them carefully, without sending many more packets into the already congested network.
This design is called explicit congestion notification (ECN) and is widely used on the Internet.
Load shedding
The presence of congestion means that the load is greater than the available network resources can handle.
Leaky Bucket Algorithm
This algorithm is used to control the rate at which traffic is sent to the network and to shape bursty traffic into a steady traffic stream.
The disadvantage of the leaky-bucket algorithm is inefficient use of available network resources: a large share of network resources such as bandwidth may not be used effectively.
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, additional water entering it spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
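The following is a minimal sketch of the idea in plain Python: packets join a finite queue, leave at a constant rate, and spill over when the bucket is full (the capacity, leak rate, and arrival pattern are hypothetical):

    from collections import deque

    def leaky_bucket(arrivals, capacity=3, leak_rate=1):
        """arrivals[t] = number of packets arriving during tick t."""
        bucket, sent, dropped = deque(), 0, 0
        for burst in arrivals:
            for _ in range(burst):
                if len(bucket) < capacity:
                    bucket.append("pkt")       # packet thrown into the bucket
                else:
                    dropped += 1               # bucket full: packet spills over
            for _ in range(min(leak_rate, len(bucket))):
                bucket.popleft()               # constant-rate output
                sent += 1
        return sent, dropped

    print(leaky_bucket([5, 0, 0, 2, 0]))       # bursty input, steady output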
Token bucket Algorithm
The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
In some applications, when large bursts arrive, the output should be allowed to speed up. This calls for a more flexible algorithm, preferably one that never loses information. Therefore, the token bucket algorithm finds its use in network traffic shaping and rate limiting.
It is a control algorithm that indicates when traffic should be sent, based on the presence of tokens in the bucket.
The bucket contains tokens. Each token corresponds to a packet (or a unit of bytes) of predetermined size. A token is removed from the bucket for each packet that is sent.
If there are no tokens in the bucket, a flow cannot send its packets. Hence, a flow can transmit traffic up to its peak burst rate only while there are enough tokens in the bucket.
Need of token bucket Algorithm:-
The leaky bucket algorithm enforces the output pattern at the average rate, no matter how bursty the traffic is. So, in order to deal with bursty traffic, we need a more flexible algorithm that does not lose data. One such algorithm is the token bucket algorithm.
Steps of this algorithm can be described as follows:
1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
4. If there is no token in the bucket, the packet cannot be sent.
Let’s understand with an example:
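Below is a minimal sketch of the token-bucket behaviour in plain Python (hypothetical token rate, bucket depth, and arrival pattern; one token sends one packet for simplicity). Tokens saved during idle ticks let a later burst leave faster than the average rate:

    def token_bucket(arrivals, rate=1, depth=4):
        """arrivals[t] = packets waiting to go out in tick t; 1 token = 1 packet."""
        tokens, held, sent = 0, 0, []
        for burst in arrivals:
            tokens = min(depth, tokens + rate)   # tokens added each tick, capped
            held += burst
            out = min(held, tokens)              # a burst can drain saved-up tokens
            tokens -= out
            held -= out
            sent.append(out)
        return sent

    print(token_bucket([0, 0, 0, 6, 0, 0]))
    # [0, 0, 0, 4, 1, 1] -> idle ticks saved tokens, so the burst leaves quickly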
Internetworking:
Tunneling:
A technique of inter-networking called Tunneling is used when source and
destination networks of the same type are to be connected through a network of
different types. Tunneling uses a layered protocol model such as those of the OSI
or TCP/IP protocol suite.
In other words, when data moves from host A to host B it passes through the different levels of the specified protocol stack (OSI, TCP/IP, etc.); the data conversion (encapsulation) performed while moving between levels, to suit the interfaces of each layer, is called tunneling.
Tunneling
The task is to send an IP packet from host A on Ethernet-1 to host B on Ethernet-2 via a WAN.
Steps
Host A constructs a packet that contains the IP address of Host B.
It then inserts this IP packet into an Ethernet frame and this frame is addressed
to the multiprotocol router M1
Host A then puts this frame on Ethernet.
When M1 receives this frame, it removes the IP packet, inserts it in the payload field of the WAN network-layer packet, and addresses the WAN packet to M2.
The multiprotocol router M2 removes the IP packet and sends it to host B in an
Ethernet frame.
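A toy Python sketch of the encapsulation steps above (dictionaries stand in for frames and packets; all names are hypothetical):

    # Host A: original IP packet addressed to Host B.
    ip_packet = {"src": "IP_A", "dst": "IP_B", "payload": "data"}

    # Host A wraps it in an Ethernet-1 frame addressed to multiprotocol router M1.
    ethernet1_frame = {"dst_mac": "MAC_M1", "payload": ip_packet}

    # M1 removes the IP packet and tunnels it: the whole IP packet becomes the
    # payload of a WAN network-layer packet addressed to M2.
    wan_packet = {"dst": "M2", "payload": ethernet1_frame["payload"]}

    # M2 unwraps the IP packet and delivers it to Host B in an Ethernet-2 frame.
    ethernet2_frame = {"dst_mac": "MAC_B", "payload": wan_packet["payload"]}
    assert ethernet2_frame["payload"]["dst"] == "IP_B"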
Fragmentation
Fragmentation is an important function of the network layer. It is a technique in which gateways break up or divide larger packets into smaller ones called fragments.
1. Transparent Fragmentation:
In transparent fragmentation, each fragment is addressed to the same exit gateway. The exit gateway of a network reassembles or recombines all the fragments, as shown in the above figure. The exit gateway G2 of network 1 recombines all the fragments created by G1 before passing them to network 2. Thus, the subsequent network is not aware that fragmentation has occurred. This type of strategy is used by ATM networks. These networks use special hardware that provides transparent fragmentation of packets.
2. Non-Transparent Fragmentation:
Fragmentation done by one network is non-transparent to the subsequent networks through which the packet passes. A packet fragmented by a gateway of one network is not recombined by the exit gateway of the same network, as shown in the below figure.
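A minimal Python sketch of fragmenting a payload into offset-tagged pieces and recombining them (the MTU value is hypothetical; the reassembly step corresponds to what the exit gateway does in transparent fragmentation):

    def fragment(payload: bytes, mtu: int):
        """Split a payload into (offset, chunk) fragments of at most mtu bytes."""
        return [(off, payload[off:off + mtu]) for off in range(0, len(payload), mtu)]

    def reassemble(fragments):
        """Recombine fragments in offset order (exit gateway / destination host)."""
        return b"".join(chunk for _, chunk in sorted(fragments))

    frags = fragment(b"a-larger-network-layer-packet", mtu=8)   # hypothetical MTU
    assert reassemble(frags) == b"a-larger-network-layer-packet"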
IP version 4 protocol:
IP stands for Internet Protocol and v4 stands for Version Four (IPv4). IPv4 was the
primary version brought into action for production within the ARPANET in 1983.
IP version four addresses are 32-bit integers which are expressed in dotted-decimal notation.
Example- 192.0.2.126 could be an IPv4 address.
Parts of IPv4
Network part:
The network part indicates the unique number that is assigned to the network. It also identifies the class of the network that is assigned.
Host Part:
The host part uniquely identifies the machine on your network. This part of the
IPv4 address is assigned to every host.
For each host on the network, the network part is the same; however, the host part must vary.
Subnet number:
This is an optional part of IPv4 addressing. Local networks that have large numbers of hosts can be divided into subnets, and subnet numbers are assigned to them.
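A small sketch using Python's standard ipaddress module that splits the example address into its network and host parts under an assumed /24 mask (the mask is chosen only for illustration):

    import ipaddress

    addr = ipaddress.ip_address("192.0.2.126")       # example address from above
    net = ipaddress.ip_network("192.0.2.0/24")       # assumed mask for illustration

    host_bits = 32 - net.prefixlen
    network_part = int(addr) >> host_bits            # identifies the network
    host_part = int(addr) & ((1 << host_bits) - 1)   # identifies the machine
    print(network_part, host_part)                   # host part is 126 here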
IPv4 Header Format:
IPv4 is a connectionless protocol used for packet-switched networks. It operates on
a best effort delivery model, in which neither delivery is guaranteed, nor is proper
sequencing or avoidance of duplicate delivery assured.
Internet Protocol Version 4 (IPv4) is the fourth revision of the Internet Protocol
and a widely used protocol in data communication over different kinds of
networks.
IPv4 is a connectionless protocol used in packet-switched layer networks, such as
Ethernet. It provides a logical connection between network devices by providing
identification for each device.
There are many ways to configure IPv4 with all kinds of devices – including
manual and automatic configurations – depending on the network type.
IPv4 is defined and specified in IETF publication RFC 791.
IPv4 uses 32-bit addresses for Ethernet communication in five classes: A, B, C, D
and E.
Classes A, B and C have different bit lengths for addressing the network and the host. Class D addresses are reserved for multicasting, while class E addresses are reserved for future use.
IPv4 uses 32-bit (4-byte) addressing, which gives 2^32 addresses. IPv4 addresses are written in dot-decimal notation, which consists of the four octets of the address expressed individually in decimal and separated by periods.
Time to live: The datagram’s lifetime (8 bits). It prevents the datagram from looping through the network by restricting the number of hops a packet may take before being delivered to the destination.
Protocol: Name of the protocol to which the data is to be passed (8 bits)
Header Checksum: 16 bits header checksum for checking errors in the datagram
header
Source IP address: 32 bits IP address of the sender
Destination IP address: 32 bits IP address of the receiver
Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
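To make the field layout concrete, here is a Python sketch that packs and unpacks the fixed 20-byte IPv4 header using the struct module (the field values are fabricated for illustration and the checksum is left at zero):

    import socket
    import struct

    # version/IHL, ToS, total length, identification, flags+fragment offset,
    # TTL, protocol, header checksum, source IP, destination IP.
    header = struct.pack("!BBHHHBBH4s4s",
                         (4 << 4) | 5,     # version 4, header length 5 x 32 bits
                         0,                # type of service
                         40,               # total length (fabricated)
                         0x1234,           # identification
                         0,                # flags + fragment offset
                         64,               # time to live
                         6,                # protocol (6 = TCP)
                         0,                # header checksum (not computed here)
                         socket.inet_aton("192.0.2.1"),
                         socket.inet_aton("192.0.2.126"))

    fields = struct.unpack("!BBHHHBBH4s4s", header)
    ttl, proto, src, dst = fields[5], fields[6], fields[8], fields[9]
    print(ttl, proto, socket.inet_ntoa(src), socket.inet_ntoa(dst))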
CIDR:
CIDR stands for Classless Inter-Domain Routing. It is an IP address assignment method that improves the efficiency of address distribution. It is also known as supernetting, and it replaces the older system based on class A, B, and C networks.
By using a single CIDR IP address, many unique IP addresses can be designated. A CIDR IP address looks like a normal IP address except that it ends with a slash followed by a number, for example 172.200.0.0/16; this number is called the IP network prefix.
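A short sketch with Python's ipaddress module, using the 172.200.0.0/16 prefix mentioned above, to show how a CIDR prefix describes a block of addresses and can be split into smaller blocks:

    import ipaddress

    block = ipaddress.ip_network("172.200.0.0/16")
    print(block.num_addresses)     # 65536 = 2 ** (32 - 16)
    print(block.netmask)           # 255.255.0.0

    # The same block can be subnetted into smaller CIDR blocks.
    print(list(block.subnets(new_prefix=18))[:2])
    # [IPv4Network('172.200.0.0/18'), IPv4Network('172.200.64.0/18')]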
NAT:
To access the Internet, one public IP address is needed, but we can use a private IP
address in our private network. The idea of NAT is to allow multiple devices to
access the Internet through a single public address. To achieve this, the translation
of a private IP address to a public IP address is required.
1. Static NAT – In this, a single unregistered (private) IP address is mapped to a registered (public) IP address, i.e., one-to-one mapping between local and global addresses. This is generally used for Web hosting. It is not used in organizations, as there are many devices that need Internet access, and to provide Internet access each would need its own public IP address.
2. Dynamic NAT – In this type of NAT, an unregistered IP address is translated
into a registered (Public) IP address from a pool of public IP addresses. If the IP
address of the pool is not free, then the packet will be dropped as only a fixed
number of private IP addresses can be translated to public addresses.
3. Port Address Translation (PAT) – This is also known as NAT overload. In
this, many local (private) IP addresses can be translated to a single registered IP
address. Port numbers are used to distinguish the traffic i.e., which traffic belongs
to which IP address. This is most frequently used as it is cost-effective as
thousands of users can be connected to the Internet by using only one real global
(public) IP address.
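A toy Python sketch of the PAT idea: many private (IP, port) pairs share one public IP address and are told apart by the translated port number (all addresses and port numbers below are placeholders):

    PUBLIC_IP = "203.0.113.5"          # the single registered (public) address
    table, next_port = {}, 40000       # translation table and next free public port

    def translate_outgoing(private_ip, private_port):
        """Map a private (IP, port) pair to the shared public IP and a unique port."""
        global next_port
        key = (private_ip, private_port)
        if key not in table:
            table[key] = next_port
            next_port += 1
        return PUBLIC_IP, table[key]

    print(translate_outgoing("192.168.1.10", 5001))   # ('203.0.113.5', 40000)
    print(translate_outgoing("192.168.1.11", 5001))   # ('203.0.113.5', 40001)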
3. New options
IPv6 has new options to allow for additional functionalities.
Version (4-bits): Indicates version of Internet Protocol which contains bit sequence
0110.
Traffic Class (8-bits): The Traffic Class field indicates class or priority of IPv6
packet which is similar to Service Field in IPv4 packet. It helps routers to handle the
traffic based on the priority of the packet. If congestion occurs on the router then
packets with the least priority will be discarded.
As of now, only 4-bits are being used in which 0 to 7 are assigned to Congestion
controlled traffic and 8 to 15 are assigned to Uncontrolled traffic.
Flow Label (20-bits): The Flow Label field is used by a source to label packets belonging to the same flow in order to request special handling by intermediate IPv6 routers.
In the above diagram, a given server with both IPv4 and IPv6 addresses configured can communicate with all IPv4 and IPv6 hosts via the dual-stack router (DSR). The dual-stack router (DSR) provides a path for all the hosts to communicate with the server without changing their IP addresses.
2. Tunneling:
Tunneling is used as a medium of communication over a transit network that runs a different IP version.
In the above diagram, both IP versions, IPv4 and IPv6, are present. The IPv4 networks can communicate with the transit or intermediate IPv6 network with the help of a tunnel. It is also possible for IPv6 networks to communicate with IPv4 networks with the help of a tunnel.
3. NAT Protocol Translation (NAT-PT):
In the above diagram, an IPv4 host communicates with an IPv6 host via a NAT-PT device. In this situation, the IPv6 host understands that the request was sent by a host using the same IP version (IPv6), and it responds.
IPv4 vs IPv6
Classes: IPv4 has 5 different classes of IP address (Class A, Class B, Class C, Class D, and Class E). IPv6 does not contain classes of IP addresses.
Packet flow: IPv4 does not provide any mechanism for packet flow identification. IPv6 uses the Flow Label field in the header for packet flow identification.
Checksum field: The checksum field is available in IPv4. The checksum field is not available in IPv6.
ICMP Protocol
The ICMP stands for Internet Control Message Protocol. It is a network layer protocol.
It is used for error handling in the network layer, and it is primarily used on network devices such as routers. As different types of errors can exist in the network layer, ICMP can be used to report and debug these errors.
The IP protocol does not have any error-reporting or error-correcting mechanism, so it uses ICMP messages to convey such information.
Position of ICMP in the network layer
Messages
The ICMP messages are usually divided into two categories:
o Error-reporting messages
An error-reporting message means that a router has encountered a problem while processing an IP packet and reports it back.
o Query messages
The query messages are those messages that help a host to get specific information about another host. For example, suppose there are a client and a server, and the client wants to know whether the server is alive or not; then it sends an ICMP message to the server.
ICMP Message Format
The message format has two parts: one is a category that tells us which type of message it is. If the message is of error type, the error message contains a type and a code: the type defines the kind of message, while the code defines the subtype of the message.
The ICMP message contains the following fields:
o Type: It is an 8-bit field. It defines the ICMP message type. In ICMPv6, the values from 0 to 127 are defined as error messages, and the values from 128 to 255 are informational messages.
o Code: It is an 8-bit field that defines the subtype of the ICMP message.
o Checksum: It is a 16-bit field to detect whether the error exists in the message or
not.
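As a concrete illustration of these fields, the Python sketch below packs an ICMPv4 echo-request header (type 8, code 0) and fills in the 16-bit Internet checksum (the identifier and sequence number are arbitrary):

    import struct

    def internet_checksum(data: bytes) -> int:
        """16-bit one's-complement sum used by the ICMP checksum field."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total & 0xFFFF) + (total >> 16)   # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # type (8 bits), code (8 bits), checksum (16 bits), identifier, sequence number
    header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
    checksum = internet_checksum(header)
    header = struct.pack("!BBHHH", 8, 0, checksum, 0x1234, 1)
    print(hex(checksum))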
Types of Error Reporting messages
The error reporting messages are broadly classified into the following categories:
o Destination unreachable
The destination unreachable error occurs when a packet does not reach its destination. Suppose the sender sends a message, but the message does not reach the destination; then the intermediate router reports to the sender that the destination is unreachable.
The above diagram shows the message format of the destination unreachable message.
In the message format:
Type: It defines the type of message. The number 3 specifies that the destination is
unreachable.
Code (0 to 15): It is a 4-bit number which identifies whether the message comes from
some intermediate router or the destination itself.
Sometimes the destination does not want to process the request, so it sends the
destination unreachable message to the source. A router does not detect all the problems
that prevent the delivery of a packet.
o Source quench
There is no flow control or congestion control mechanism in the network layer or the IP protocol. The sender is concerned only with sending packets; it does not consider whether the receiver is ready to receive them, or whether congestion has occurred in the network that would require it to send fewer packets. A source-quench message asks the sender to slow down because congestion has occurred or a datagram has been discarded.
o Time exceeded
Sometimes there are many routers between the sender and the receiver, and when the sender sends a packet it may get caught in a routing loop. The time exceeded message is based on the time-to-live (TTL) value: as the packet traverses the network, each router decreases the TTL value by one. Whenever a router decrements a datagram's time-to-live value to zero, it discards the datagram and sends a time exceeded message to the original source.
Parameter problems
The router and the destination host can send a parameter problem message. This
message conveys that some parameters are not properly set.
The above diagram shows the message format of the parameter problem. The type of
message is 12, and the code can be 0 or 1.
Redirection
As packets are sent, the routing tables are gradually augmented and updated. The tool used to achieve this is the redirection message. For example, A wants to send a packet to B, and there are two routers between A and B. First, A sends the data to router 1. Router 1 forwards the IP packet to router 2 and sends a redirection message to A so that A can update its routing table.
Proxy ARP - Proxy ARP is a method through which a Layer 3 device may respond to ARP requests for a target that is in a different network from the sender. The Proxy ARP configured router responds to the ARP request, mapping its own MAC address to the target IP address and fooling the sender into believing it has reached its destination. In the background, the proxy router sends the packets on to the appropriate destination, because the packets contain the necessary information.
Gratuitous ARP - Gratuitous ARP is an ARP request sent by a host for its own IP address, which helps to detect duplicate IP addresses. It is a broadcast request for the host's own IP address. If a switch or router sends an ARP request for its own IP address and no ARP response is received, then no other node is using the IP address allocated to that switch or router. However, if it does receive an ARP response, another node is already using the IP address allocated to the switch or router.
Reverse ARP (RARP) - It is a networking protocol used by the client system in a local
area network (LAN) to request its IPv4 address from the gateway router's ARP table. A table is created by the network administrator in the gateway router that maps MAC addresses to the corresponding IP addresses.
When a new system is set up, or for any machine that has no memory to store its IP address, the device has to find out its IP address. The device sends a RARP broadcast packet, including its own MAC address in the address fields of both the sender and the receiver hardware. A host inside the local network, called the RARP server, is prepared to respond to such broadcast packets. The RARP server then tries to locate a matching entry in its MAC-to-IP mapping table. If an entry matches, the RARP server sends a response packet containing the IP address to the requesting computer.
Inverse ARP (InARP) - Inverse ARP is the inverse of ARP: it is used to find the IP addresses of nodes from their data link layer addresses. It is mainly used in frame relay and ATM networks, where Layer 2 virtual circuit addresses are often obtained from Layer 2 signaling. When using these virtual circuits, the relevant Layer 3 addresses are not directly available and must be discovered.
ARP converts Layer 3 addresses to Layer 2 addresses; InARP performs the opposite conversion. InARP has a packet format similar to ARP, but the operation codes are different.
Components of DHCP
DHCP Server: DHCP Server is basically a server that holds IP Addresses and
other information related to configuration.
DHCP Client: It is basically a device that receives configuration information
from the server. It can be a mobile, laptop, computer, or any other electronic device
that requires a connection.
DHCP Relay: DHCP relays basically work as a communication channel
between DHCP Client and Server.
IP Address Pool: It is the pool or container of IP Addresses possessed by the
DHCP Server. It has a range of addresses that can be allocated to devices.
Subnets: Subnets are smaller portions of the IP network partitioned to keep
networks under control.
Lease: It is the length of time for which the configuration information received from the server is valid; when the lease expires, the client must renew it.
DNS Servers: DHCP servers can also provide DNS (Domain Name System)
server information to DHCP clients, allowing them to resolve domain names to IP
addresses.
Default Gateway: DHCP servers can also provide information about the
default gateway, which is the device that packets are sent to when the destination is
outside the local network.
Options: DHCP servers can provide additional configuration options to clients,
such as the subnet mask, domain name, and time server information.
Renewal: DHCP clients can request to renew their lease before it expires to
ensure that they continue to have a valid IP address and configuration information.
Working of DHCP
DHCP works at the Application layer of the TCP/IP protocol suite. The main task of DHCP is to dynamically assign IP addresses to clients and to allocate TCP/IP configuration information to them.
The DHCP port number for the server is 67 and for the client is 68. It is a client-
server protocol that uses UDP services. An IP address is assigned from a pool of
addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages (Discover, Offer, Request, Acknowledgement) in order to make a connection; this is also called the DORA process, although there are 8 DHCP message types in the full protocol.
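A toy Python sketch of the DORA exchange (Discover, Offer, Request, Acknowledgement); the real protocol runs over UDP ports 67/68, while the address pool and MAC address here are placeholders:

    pool = ["192.168.1.100", "192.168.1.101"]        # server's IP address pool
    leases = {}

    def dhcp_server(message, client_mac):
        if message == "DHCPDISCOVER":
            return "DHCPOFFER", pool[0]              # offer the first free address
        if message == "DHCPREQUEST":
            leases[client_mac] = pool.pop(0)         # commit the lease
            return "DHCPACK", leases[client_mac]

    # Client side of the DORA exchange.
    offer, offered_ip = dhcp_server("DHCPDISCOVER", "aa:bb:cc:dd:ee:ff")
    ack, leased_ip = dhcp_server("DHCPREQUEST", "aa:bb:cc:dd:ee:ff")
    print(offer, offered_ip, ack, leased_ip)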