Chapter 4 - Data Link Layer

The data-link layer operates between the physical and network layers, facilitating node-to-node communication by encapsulating datagrams into frames and managing tasks such as framing, physical addressing, flow control, error detection, and access control. It employs various techniques for error detection and correction, including parity checks, checksums, and cyclic redundancy checks, while also implementing flow control mechanisms like Stop and Wait and Sliding Windows to ensure efficient data transmission. Additionally, the data link layer is divided into the Logical Link Control (LLC) and Media Access Control (MAC) sub-layers, with protocols in place to manage access to shared communication links.


CHAPTER 4

DATA LINK LAYER
The data-link layer is located between the physical and the network layers. It provides
services to the network layer and receives services from the physical layer. The duty scope of
the data-link layer is node-to-node. When a packet is travelling through the Internet, the data-
link layer of a node (host or router) is responsible for delivering the datagram to the next
node in the path. For this purpose, the data-link layer of the sending node needs to
encapsulate the datagram received from the network layer in a frame, and the data-link layer of
the receiving node needs to decapsulate the datagram from the frame. The following figure
shows the relationship of the data link layer to the network and physical layers.

Fig 4.1: Data link layer

Data Link Layer Responsibilities


• Framing: The data link layer divides the stream of bits received from the network layer
into manageable data units called frames.
• Physical addressing: If frames are to be distributed to different systems on the network,
the data link layer adds a header to the frame to define the sender and/or receiver of
the frame. If the frame is intended for a system outside the sender's network, the receiver
address is the address of the device that connects the network to the next one.
• Flow control: If the rate at which data are absorbed by the receiver is less than the
rate at which data are produced by the sender, the data link layer imposes a flow control
mechanism to avoid overwhelming the receiver.
• Error detection: Errors can be caused by signal attenuation or noise. The receiver detects
the presence of errors and either signals the sender for retransmission or drops the frame.
• Error control: The data link layer adds reliability to the physical layer by adding
mechanisms to detect and retransmit damaged or lost frames. It also uses a mechanism
to recognize duplicate frames. Error control is normally achieved through a trailer
added to the end of the frame.
• Access control: When two or more devices are connected to the same link, data link
layer protocols are necessary to determine which device has control over the link at
any given time.

As the figure above shows, communication at the data link layer occurs between two
adjacent nodes.

o To send data from A to F, three partial deliveries are made.

o First, the data link layer at A sends a frame to the data link layer at B (a router).

o Second, the data link layer at B sends a new frame to the data link layer at E.

o Finally, the data link layer at E sends a new frame to the data link layer at F.

• Note that the frames that are exchanged between the three nodes have different values
in the headers. The frame from A to B has B as the destination address and A as the
source address. The frame from B to E has E as the destination address and B as the
source address. The frame from E to F has F as the destination address and E as the
source address. The values of the trailers can also be different if error checking includes
the header of the frame.

Framing

• The data link layer needs to pack bits into frames so that each frame is distinguishable
from another. Framing separates a message from one source to a destination, or from other
messages to other destinations, by adding a sender address and a destination address.
The destination address defines where the packet is to go; the sender address helps the
recipient acknowledge the receipt.

Fig. 4.2. Frame

Fixed-Size Framing: In fixed-size framing, there is no need for defining the boundaries of
the frames; the size itself can be used as a delimiter. An example of this type of framing is
the ATM wide-area network, which uses frames of fixed size called cells.

Variable-Size Framing: Variable-size framing is prevalent in local area networks. In
variable-size framing, we need a way to define the end of one frame and the beginning of
the next. Historically, two approaches were used for this purpose: a character-oriented
approach and a bit-oriented approach.

Character-Oriented Approach: In a character-oriented approach, data are carried as 8-bit
characters. The header, which normally carries the source and destination addresses and
other control information, and the trailer, which carries error-detection or error-correction
redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit
(1-byte) flag is added at the beginning and the end of a frame. If the flag pattern occurs in
the data, the sender stuffs an extra escape byte before it (byte stuffing), and the receiver
removes it.

Fig 4.3. Byte stuffing/character stuffing
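The byte-stuffing idea named in the figure caption can be sketched in a few lines. The FLAG and ESC byte values below are illustrative placeholders, not taken from any particular protocol:

```python
FLAG = 0x7E  # frame delimiter byte (illustrative value)
ESC = 0x7D   # escape byte stuffed before any FLAG/ESC in the payload

def byte_stuff(data: bytes) -> bytes:
    """Sender side: escape every flag or escape byte inside the payload."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver side: drop each escape byte and keep the byte that follows it."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

With these definitions, any payload round-trips unchanged, even one that contains the flag pattern itself.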

Bit-Oriented Approach: The data section of a frame is a sequence of bits to be interpreted by the
upper layer as text, graphics, audio, video, and so on. However, in addition to headers (and
possibly trailers), we still need a delimiter to separate one frame from the next. Most
protocols use a special 8-bit flag pattern, 01111110, as the delimiter to define the beginning
and the end of the frame, as shown in the figure. To keep this pattern from appearing in the
data, the sender stuffs a 0 after every five consecutive 1s (bit stuffing); the receiver
removes (destuffs) these extra bits.

[Figure: original data, transmitted data after bit stuffing, and data after destuffing]
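A minimal sketch of bit stuffing for the 01111110 flag: a 0 is inserted after every five consecutive 1s so the flag pattern can never appear inside the data, and the receiver removes these stuffed bits. Bit strings are represented here as Python strings of '0'/'1' for readability:

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the bit stuffed after each run of five '1's."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed '0'
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)
```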

4.1. Error Detection and Correction


Any time data are transmitted from one node to the next, they can become
corrupted in passage. Many factors can alter one or more bits of a message. Some
applications require a mechanism for detecting and correcting errors. At the data-link
layer, if a frame is corrupted between the two nodes, it needs to be corrected before it
continues its journey to other nodes. However, most link-layer protocols simply discard
the frame and let the upper-layer protocols handle the retransmission of the frame. Some
multimedia applications, however, try to correct the corrupted frame.

Types of Errors:
Whenever bits flow from one point to another, they are subject to unpredictable changes
because of interference. This interference can change the shape of the signal. The term
single-bit error means that only 1 bit of a given data unit (such as a byte, character, or
packet) is changed from 1 to 0 or from 0 to 1. The term burst error means that 2 or more
bits in the data unit have changed from 1 to 0 or from 0 to 1. The following figure shows the
effect of a single-bit and a burst error on a data unit.

a. Burst error b. Single bit error

Fig 4.4. Type of errors

The number of bits affected depends on the data rate and duration of noise.
For example, if we are sending data at 1 kbps, a noise of 1/100 second can affect
10 bits; if we are sending data at 1 Mbps, the same noise can affect 10,000 bits.
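The two figures in this example are simply the data rate multiplied by the noise duration:

```python
def bits_affected(data_rate_bps: float, noise_duration_s: float) -> float:
    """Number of bits that fall within a noise burst of the given duration."""
    return data_rate_bps * noise_duration_s

# 1 kbps link with 1/100 s of noise -> 10 bits
# 1 Mbps link with the same noise   -> 10,000 bits
```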

The central concept in detecting or correcting errors is redundancy. To be
able to detect or correct errors, we need to send some extra bits with our data.
These redundant bits are added by the sender and removed by the receiver. Their
presence allows the receiver to detect or correct corrupted bits.

Detection versus Correction: The correction of errors is more difficult than the
detection. In error detection, we are only looking to see if any error has occurred. The
answer is a simple yes or no. We are not even interested in the number of corrupted bits.
A single-bit error is the same for us as a burst error. In error correction, we need to know
the exact number of bits that are corrupted and, more importantly, their location in the
message. The number of errors and the size of the message are important factors.

Error Detecting Codes

Whenever a message is transmitted, it may get scrambled by noise or data may get
corrupted. To avoid this, we use error-detecting codes which are additional data added to
a given digital message to help us detect if any error has occurred during transmission of
the message. Some popular techniques for error detection are:

• Parity

• Checksum

• Cyclic redundancy check

Parity check

The parity check is done by adding an extra bit, called the parity bit, to the data to make
the number of 1s either even (even parity) or odd (odd parity). It is suitable for
single-bit error detection only. While creating a frame, the sender counts the number of 1s
in it and adds the parity bit in the following way:

• In case of even parity: if the number of 1s is even, the parity bit value is 0; if the
number of 1s is odd, the parity bit value is 1.

• In case of odd parity: if the number of 1s is odd, the parity bit value is 0; if the
number of 1s is even, the parity bit value is 1.

On receiving a frame, the receiver counts the number of 1s in it. In case of an even parity
check, if the count of 1s is even, the frame is accepted; otherwise, it is rejected. A
similar rule is adopted for the odd parity check.
Fig 4.5. Parity check
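The parity rules can be sketched in a few lines (a minimal illustration; the function names are our own, and bit strings are Python strings of '0'/'1'):

```python
def parity_bit(data_bits: str, even: bool = True) -> str:
    """Return the parity bit that makes the total count of 1s even (or odd)."""
    ones = data_bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def accept_even_parity(frame_bits: str) -> bool:
    """Receiver side for even parity: accept iff the count of 1s is even."""
    return frame_bits.count("1") % 2 == 0
```

A single flipped bit makes the count of 1s odd and the frame is rejected; two flipped bits cancel out, which is why parity catches single-bit errors only.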

Checksum
In this error detection scheme, the following procedure is applied:

• Data is divided into fixed-size segments (k segments, each of m bits).

• The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.

• The receiver adds the incoming segments along with the checksum using 1’s complement
arithmetic to get the sum and then complements it.

• If the result is zero, the received frames are accepted; otherwise, they are discarded.
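The procedure above can be sketched as follows; the segment values in the tests are illustrative, and "end-around carry" is how 1's complement addition wraps overflow back into the sum:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        while total >> m:                       # wrap any carry back in
            total = (total & mask) + (total >> m)
    return total

def make_checksum(segments, m):
    """Sender: complement of the 1's-complement sum of the k segments."""
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

def accept(segments, checksum, m):
    """Receiver: sum of segments + checksum, complemented, must be zero."""
    mask = (1 << m) - 1
    return (~ones_complement_sum(segments + [checksum], m)) & mask == 0
```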
Cyclic Redundancy Check:

A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital
networks and storage devices to detect accidental changes to raw data. Unlike the checksum
scheme, which is based on addition, CRC is based on binary division. In CRC, a sequence
of redundant bits, called cyclic redundancy check bits, is appended to the end of the data
unit so that the resulting data unit becomes exactly divisible by a second, predetermined
binary number. At the destination, the incoming data unit is divided by the same number.
If at this step there is no remainder, the data unit is assumed to be correct and is therefore
accepted. A remainder indicates that the data unit has been damaged in transit and
therefore must be rejected.

At the sender side, the data unit to be transmitted, extended with n zeros, is divided by a
predetermined divisor (binary number) in order to obtain the remainder. This remainder is
called the CRC. The CRC has one bit less than the divisor: if the CRC is n bits, the divisor
is n + 1 bits. The sender appends this CRC to the end of the data unit so that the resulting
data unit becomes exactly divisible by the predetermined divisor, i.e., the remainder
becomes zero. At the destination, the incoming data unit, i.e., data + CRC, is divided by
the same predetermined binary divisor. If the remainder after division is zero, there is no
error in the data unit and the receiver accepts it. If the remainder is not zero, it
indicates that the data unit has been damaged in transit and it is therefore rejected. This
technique is more powerful than the parity check and checksum error detection.
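The modulo-2 (XOR) division described above can be sketched directly on bit strings. The data/divisor values in the tests are a common worked example (data 100100 with divisor 1101 gives CRC 001), not values mandated by any standard:

```python
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """Sender: append len(divisor)-1 zeros, divide modulo-2, return the remainder."""
    n = len(divisor_bits) - 1
    dividend = [int(b) for b in data_bits + "0" * n]
    divisor = [int(b) for b in divisor_bits]
    for i in range(len(data_bits)):
        if dividend[i] == 1:                  # only divide when the leading bit is 1
            for j in range(len(divisor)):
                dividend[i + j] ^= divisor[j]  # modulo-2 subtraction is XOR
    return "".join(str(b) for b in dividend[-n:])

def crc_check(received_bits: str, divisor_bits: str) -> bool:
    """Receiver: divide data + CRC by the divisor; zero remainder means no detected error."""
    dividend = [int(b) for b in received_bits]
    divisor = [int(b) for b in divisor_bits]
    for i in range(len(received_bits) - len(divisor_bits) + 1):
        if dividend[i] == 1:
            for j in range(len(divisor)):
                dividend[i + j] ^= divisor[j]
    return not any(dividend)
```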

4.2. Data Link Control and Protocols


4.2.1. Data link control
a. Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single
medium, the sender and receiver must work at compatible speeds; that is, the sender
sends at a speed at which the receiver can process and accept the data. What if the
speed (hardware/software) of the sender or receiver differs? If the sender is sending
too fast, the receiver may be overloaded (swamped) and data may be lost. Two types of
mechanisms can be deployed to control the flow:
i. Stop and Wait
• The source transmits a frame (no need for a sequence number).
• The destination receives the frame and replies with an acknowledgement (ACK) within a
timeout.
• The source waits for the ACK before sending the next frame.
• The destination can stop the flow by not sending an ACK.
• This works well for a few large frames.
• Stop and wait becomes inadequate if a large block of data is split into small frames.
A large block of data may be split into small frames because of:
  • the limited buffer size of the receiver;
  • errors being detected sooner, with less to resend;
  • the need to prevent one station from monopolizing the medium.
• It is inefficient when the propagation delay is much longer than the transmission delay.

Fig 4.6. Stop and wait example
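The last point can be quantified with the classical stop-and-wait utilization formula U = 1/(1 + 2a), where a is the ratio of propagation delay to transmission delay (a standard textbook result assumed here, not derived in this chapter):

```python
def stop_and_wait_utilization(t_prop: float, t_trans: float) -> float:
    """Fraction of time the link carries data under error-free stop-and-wait.

    The sender transmits for t_trans, then waits a round trip (2 * t_prop)
    for the ACK, so U = 1 / (1 + 2a) with a = t_prop / t_trans.
    """
    a = t_prop / t_trans
    return 1 / (1 + 2 * a)
```

When the propagation delay equals the transmission delay (a = 1), the link is busy only a third of the time; on long links (large a) utilization collapses, which is exactly the inefficiency the bullet list points out.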

ii. Sliding Windows Flow Control


This control mechanism allows multiple numbered frames to be in transit while the receiver
has a buffer W long. The window size W is the maximum amount of received data that can be
buffered at one time on the receiving side of a connection. The sending host can send only
that amount of data before waiting for an acknowledgment and window update from the
receiving host. The transmitter sends up to W frames without an ACK. The ACK includes the
number of the next frame expected. To keep track of which frames have been acknowledged,
each is labeled with a k-bit sequence number, so frames are numbered modulo 2^k, giving a
maximum window size of up to 2^k - 1. (Why not 2^k? The window cannot be the maximum
possible size for a given sequence-number length k, because with 2^k outstanding frames
the receiver could not distinguish a new frame from a retransmission of the previous
window.) A station can cut off the flow of frames from the other side by sending a Receive
Not Ready (RNR) message, which acknowledges former frames but forbids transfer of future
frames; it must send a normal acknowledgement to resume. If two stations exchange data,
each needs to maintain two windows, one for transmit and one for receive, and each side
needs to send data and acknowledgments to the other; with a full-duplex link, ACKs can be
piggybacked on data frames.

Fig 4.7. Sliding Window example
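The sequence-number bookkeeping can be sketched as follows; this is a minimal illustration of k-bit numbering and the 2^k − 1 window bound stated above:

```python
def max_window_size(k: int) -> int:
    """Largest usable send window with k-bit sequence numbers (the 2**k - 1 bound)."""
    return 2 ** k - 1

def next_seq(seq: int, k: int) -> int:
    """Sequence numbers wrap around modulo 2**k."""
    return (seq + 1) % (2 ** k)
```

With k = 3, frames are numbered 0..7 and at most 7 frames may be outstanding at once.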


b. Error Control
When a data frame is transmitted, there is a probability that it may be lost
in transit or received corrupted. In both cases, the receiver does not receive the
correct data frame and the sender does not know anything about the loss. In such cases,
both sender and receiver are equipped with protocols that help them detect transit
errors such as the loss of a data frame. Then either the sender retransmits the data
frame or the receiver requests that the previous data frame be resent.
There are three techniques that the data-link layer may deploy to control errors by
Automatic Repeat Request (ARQ):

i. Stop and wait ARQ

The following transition may occur in Stop-and-Wait ARQ:

• The sender maintains a timeout counter.

• When a frame is sent, the sender starts the timeout counter.

• If an acknowledgement of the frame arrives in time, the sender transmits the next frame
in the queue.

• If the acknowledgement does not arrive in time, the sender assumes that either the frame
or its acknowledgement was lost in transit. The sender retransmits the frame and restarts
the timeout counter.

• If a negative acknowledgement is received, the sender retransmits the frame.

ii. Go-Back-N ARQ


The stop-and-wait ARQ mechanism does not utilize the resources at their best: until the
acknowledgement is received, the sender sits idle and does nothing. In the Go-Back-N
ARQ method, both sender and receiver maintain a window.

The sending-window size enables the sender to send multiple frames without
receiving the acknowledgement of the previous ones. The receiving window enables
the receiver to receive multiple frames and acknowledge them. The receiver keeps track of
incoming frames' sequence numbers. When the sender has sent all the frames in its window,
it checks up to what sequence number it has received positive acknowledgements. If all
frames are positively acknowledged, the sender sends the next set of frames. If the sender
finds that it has received a NACK, or has not received any ACK for a particular frame, it
retransmits the first unacknowledged frame and all the frames after it.
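The retransmission rule can be simulated in a few lines. The loss model here (a set of frames whose first transmission is lost) and the window handling are simplified assumptions for illustration, not the full protocol:

```python
def go_back_n(num_frames, window, lose_first):
    """Tiny Go-Back-N simulation.

    `lose_first` is a set of frame numbers whose first transmission is lost.
    Returns the order in which frame numbers are put on the wire.
    """
    wire = []                 # every (re)transmission, in order
    attempts = {}             # per-frame transmission count
    base = 0                  # first frame not yet acknowledged
    while base < num_frames:
        lost_at = None
        # Send the whole window; the receiver discards frames after a gap.
        for i in range(base, min(base + window, num_frames)):
            wire.append(i)
            attempts[i] = attempts.get(i, 0) + 1
            if i in lose_first and attempts[i] == 1 and lost_at is None:
                lost_at = i   # this frame never arrives, so no ACK for it
        # No loss: the whole window is ACKed; loss: go back to the lost frame.
        base = lost_at if lost_at is not None else min(base + window, num_frames)
    return wire
```

With 5 frames, a window of 3, and frame 1 lost on its first try, the wire carries 0, 1, 2, then (going back) 1, 2, 3, 4 — frame 2 is resent even though its first copy arrived, because the receiver discarded it as out of order.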

iii. Selective Repeat ARQ


In Go-Back-N ARQ, it is assumed that the receiver has no buffer space for its window
size and must process each frame as it comes. This forces the sender to retransmit
all frames that have not been acknowledged.

In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers frames in memory and sends a NACK only for the frame that is missing or
damaged. The sender, in this case, resends only the frame for which a NACK is received.
4.2.2. Data link protocols
The data link layer can further be divided into two sub-layers: the upper sub-layer,
responsible for flow and error control, is called the logical link control (LLC) layer; the
lower sub-layer, mostly responsible for multiple-access resolution, is called the media
access control (MAC) layer.
When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link. The
problem of controlling the access to the medium is similar to the rules of speaking in an
assembly.

Fig. 4.8. Classification of MAC

a. Random access protocol


In random access or contention methods, no station is superior to another station
and none is assigned control over another. No station permits, or does not permit,
another station to send. At each instant, a station that has data to send uses a procedure
defined by the protocol to decide whether or not to send. This decision depends on the
state of the medium (idle or busy). In other words, each station can transmit when it
desires, on the condition that it follows the predefined procedure, including testing the
state of the medium. Two features give this method its name. First, there is no scheduled
time for a station to transmit; transmission is random among the stations, which is why
these methods are called random access. Second, no rules specify which station should send
next; stations compete with one another to access the medium, which is why these methods
are also called contention methods.
i. ALOHA
ALOHA is the simplest multiple-access technique. The basic idea is that a user can
transmit data whenever they want. If the data are successfully transmitted, there isn't
any problem. But if a collision occurs, the station transmits again. The sender can detect
the collision if it doesn't receive an acknowledgement from the receiver.

Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant protocol.
The idea is that each station sends a frame whenever it has a frame to send. However, since
there is only one channel to share, there is the possibility of collision between frames from
different stations. Figure 4.9 shows an example of frame collisions in pure ALOHA.

Figure 4.9. Frame collisions in pure ALOHA

There are four stations (unrealistic assumption) that contend with one another for
access to the shared channel. The figure shows that each station sends two frames; there
are a total of eight frames on the shared medium. Some of these frames collide because
multiple frames are in contention for the shared channel. Figure 4.9 shows that only two
frames survive: frame 1.1 from station 1 and frame 3.2 from station 3. We need to
mention that even if one bit of a frame coexists on the channel with one bit from another
frame, there is a collision and both will be destroyed. It is obvious that we need to resend
the frames that have been destroyed during transmission. The pure ALOHA protocol relies
on acknowledgments from the receiver.
When a station sends a frame, it expects the receiver to send an acknowledgment. If the
acknowledgment does not arrive after a time-out period, the station assumes that the
frame (or the acknowledgment) has been destroyed and resends the frame.
A collision involves two or more stations. If all these stations try to resend their
frames after the time-out, the frames will collide again. Pure ALOHA dictates that when
the time-out period passes, each station waits a random amount of time before resending
its frame. The randomness will help avoid more collisions. We call this time the back-off
time.

Slotted ALOHA
In Pure ALOHA there is no rule that defines when the station can send. A station
may send soon after another station has started or soon before another station has
finished. Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of the average time required to send out a
frame, and force the station to send only at the beginning of the time slot. Figure 4.10
shows an example of frame collisions in slotted ALOHA.

Figure 4.10. Frame collisions in slotted ALOHA


Because a station is allowed to send only at the beginning of the synchronized
time slot, if a station misses this moment, it must wait until the beginning of the next
time slot. This means that the station which started at the beginning of the current slot
has already finished sending its frame. Of course, there is still the possibility of
collision if two stations try to send at the beginning of the same time slot. Now, you can
compare the number of received frames in pure ALOHA and slotted ALOHA and decide which one
is more efficient.
Differences between pure ALOHA and slotted ALOHA:
• In pure ALOHA, a station can transmit at any time; in slotted ALOHA, only at the
beginning of a time slot.
• The vulnerable time in pure ALOHA is twice the frame transmission time; in slotted
ALOHA it is one frame transmission time.
• The maximum throughput of pure ALOHA is about 18.4% (at G = 0.5); slotted ALOHA reaches
about 36.8% (at G = 1).
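The efficiency comparison can be made concrete with the classical throughput formulas S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the average number of frames offered per frame-transmission time (standard results assumed here, not derived in the text above):

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G); maximum of about 0.184 at G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G); maximum of about 0.368 at G = 1."""
    return G * math.exp(-G)
```

At their respective optima, slotted ALOHA delivers twice the throughput of pure ALOHA, because synchronizing transmissions to slot boundaries halves the vulnerable time.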

ii. Carrier Sense Multiple Access (CSMA)


Protocols that listen for a carrier and act accordingly are called carrier-sense
protocols. Carrier sensing allows a station to detect whether the medium is currently
being used. Schemes that use a carrier-sense circuit are classed together as carrier sense
multiple access, or CSMA, schemes. There are two main variants: CSMA/CD and CSMA/CA.
The simplest CSMA scheme is for a station to sense the medium and send a packet
immediately if the medium is idle. If the station waits for the medium to become idle, it
is called persistent; otherwise, it is called non-persistent.
Persistent: wait if the medium is busy and transmit only when it becomes idle again.
• When a station has data to send, it first listens to the channel to check whether anyone
else is transmitting.
• If it senses the channel idle, the station starts transmitting the data.
• If it senses the channel busy, it waits until the channel is idle by continuously
sensing the channel.
Non-persistent CSMA is less aggressive than the persistent protocol.
• In this protocol, before sending the data, the station senses the channel, and if the
channel is idle it starts transmitting the data.
• But if the channel is busy, the station does not sense it continuously; instead, it
waits for a random amount of time and repeats the algorithm.
• This algorithm leads to better channel utilization but also results in longer delay
compared to persistent CSMA.
Carrier Sense Multiple Access/Collision Detection (CSMA/CD)
CSMA/CD is a refinement of CSMA in which stations also detect collisions.

• A station transmits only if no transmission is taking place at the time.
• If two stations attempt to transmit simultaneously, this causes a collision, which is
detected by all participating stations.
• After a random time interval, the stations that collided attempt to transmit again.
• If another collision occurs, the time interval from which the random waiting time is
selected is increased step by step.
• This is known as exponential backoff.
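The backoff rule can be sketched as follows. The cap on the exponent is borrowed from Ethernet's truncated binary exponential backoff (which limits the range to 2^10 slots); that cap value is an Ethernet-specific detail, not stated in the text above:

```python
import random

def backoff_slots(collisions: int, max_exp: int = 10) -> int:
    """Binary exponential backoff: after n collisions, wait a random number of
    slot times chosen uniformly from 0 .. 2**min(n, max_exp) - 1."""
    return random.randrange(2 ** min(collisions, max_exp))
```

After the first collision a station waits 0 or 1 slots; after the second, 0..3 slots; and so on, so repeated collisions spread the contenders over an exponentially growing range of retry times.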

Figure 4.11: Collision of the first bit in CSMA/CD


At time t1, station A has executed its persistence procedure and starts sending the
bits of its frame. At time t2, station C has not yet sensed the first bit sent by A.
Station C executes its persistence procedure and starts sending the bits of its frame,
which propagate both to the left and to the right. The collision occurs sometime after
time t2. Station C detects the collision at time t3, when it receives the first bit of
A's frame, and immediately (or after a short time, but we assume immediately) aborts
transmission. Station A detects the collision at time t4, when it receives the first bit
of C's frame, and also immediately aborts transmission. The throughput of CSMA/CD is
greater than that of pure or slotted ALOHA.

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)


The basic idea behind CSMA/CD is that a station needs to be able to receive
while transmitting in order to detect a collision. With CSMA/CD the probability of
collision is reduced, but it still exists. We need to avoid collisions on networks where
they sometimes cannot be detected (especially in wireless networks). Carrier sense
multiple access with collision avoidance (CSMA/CA) was invented for this purpose.
Collisions are avoided through the use of CSMA/CA's three strategies: the inter-frame
space, the contention window, and acknowledgments. With this method, when an idle channel
is found, the station does not send immediately. It waits for a period of time called the
inter-frame space, or IFS. If after the IFS time the channel is still idle, the station
can send, but it still needs to wait a time equal to the contention time.

The contention time is an amount of time divided into slots. A station that is ready to send
chooses a random number of slots as its wait time. With all these precautions, there still
may be a collision resulting in destroyed data. In addition, the data may be corrupted
during the transmission. The positive acknowledgment and the time-out timer can help
guarantee that the receiver has received the frame.

b. Controlled access protocol


In controlled access, the stations consult one another to find which station has the right to
send. A station cannot send unless it has been authorized by other stations. There are three
controlled access protocols: Reservation, Token passing and Polling.

i. Reservation
• In the reservation method, a station needs to make a reservation before sending data.

• Time is divided into intervals: in each interval, a reservation frame precedes the data
frames sent in that interval.

• If there are N stations in the system, there are exactly N reservation mini-slots in the
reservation frame.

• Each mini-slot belongs to one station. When a station needs to send a data frame, it
makes a reservation in its own mini-slot.

• The stations that have made reservations can send their data frames after the
reservation frame.

• The following figure shows a situation with five stations and a five-mini-slot
reservation frame.

• In the first interval, only stations 1, 3, and 4 have made reservations. In the second
interval, only station 1 has made a reservation.

Figure 4.12: Reservation

ii. Polling
• A master node "invites" slave nodes to transmit in turn.
• The master sends a message to slave 1 letting it know that it can transmit up to some
maximum number of packets; after slave 1 transmits some packets, the master sends a
message to slave 2 letting it know that it can transmit up to some maximum number of
packets, and so on.
• Typically used with "dumb" slave devices.
• Concerns: polling overhead, latency, and a single point of failure (the master).
• Examples: the 802.15 protocol and the Bluetooth protocol.
Figure 4.13: Polling
iii. Token passing

• A control token (a small, special-purpose packet) is passed from one node to the next
in some fixed order.
• When a node receives the token, it holds onto it only if it has packets to transmit;
otherwise, it immediately forwards the token to the next node.
• If a node has packets to transmit when it receives the token, it sends up to a maximum
number of frames and then forwards the token to the next node.
• Concerns: token overhead and latency.
• A single point of failure (some recovery procedure must be invoked to get a lost token
back in circulation).

Figure 4.14: Token passing

4.2.3. Data link control protocols


High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over
point-to-point and multipoint links. It implements the ARQ mechanisms.

Fig. Normal response mode

Fig. Asynchronous balanced mode


HDLC defines three types of frames: I-frames (information), S-frames (supervisory), and U-frames (unnumbered).
Fig. HDLC frame format
Although HDLC is a general protocol that can be used for both point-to-point and
multipoint configurations, one of the most common protocols for point-to-point access is
the Point-to-Point Protocol (PPP). PPP is a byte-oriented protocol.

Fig. PPP frame format

4.3. Local Area Networks (LANs)


A local area network (LAN) is a computer network that interconnects computers within
a limited area such as a residence, school, laboratory, university campus or office building.
Ethernet and Wi-Fi are the two most common technologies in use for local area networks.
A number of experimental and early commercial LAN technologies were developed in the
1970s. Ethernet was developed at Xerox PARC between 1973 and 1974. In a wireless LAN,
users have unrestricted movement within the coverage area. Wireless networks have
become popular in residences and small businesses, because of their ease of installation.
Most wireless LANs use Wi-Fi as it is built into smartphones, tablet computers and laptops.
Guests are often offered Internet access via a hotspot service.
Advantages of LAN
1. A basic LAN implementation does not cost too much.
2. It is easy to control and manage the entire LAN, as it is contained in one small region.
3. The systems or devices connected to a LAN communicate at very high speed, depending on
the LAN type and the Ethernet cabling supported. Common supported speeds are 10 Mbps,
100 Mbps, and 1000 Mbps.
4. With the help of file servers connected to the LAN, sharing files and folders among
peers becomes easy and efficient.
5. It is easy to share common resources such as printers and the Internet line among
multiple LAN users.
Disadvantages of LANs:
1. Where a lot of terminals are served by only one or two printers, long print queues may
develop, causing people to have to wait for printed output.
2. Network security can be a problem. If a virus gets into one computer, it is likely to spread
quickly across the network because it will get into the central backing store.
3. If the dedicated file server fails, work stored on shared hard disk drives will not be
accessible and it will not be possible to use network printers either.
4.4. Wide Area Networks (WAN)

A wide area network (WAN) is a telecommunications network that extends over a large
geographical area for the primary purpose of computer networking. The textbook
definition of a WAN is a computer network spanning regions, countries, or even the world.
WANs are used to connect LANs and other types of networks together so that users and
computers in one location can communicate with users and computers in other locations.
Many WANs are built for one particular organization and are private. Others, built by
Internet service providers, provide connections from an organization's LAN to the Internet.
Many technologies are available for wide area network links. Examples include circuit-
switched telephone lines, radio wave transmission, and optical fiber. New developments in
technologies have successively increased transmission rates.

What separates a WAN like the Internet from a LAN? Due to its typically massive size, a
WAN is almost always slower than a LAN; the greater the distance, the slower the network.
One of the big disadvantages of a WAN is the cost it can incur: a private WAN can be
expensive, largely because of the technology required to connect two remote places.

Advantages of a wide area network (WAN)

• Covers a large geographical area
• Centralized data
• Get updated files and data
• Sharing of software and resources
• Global business
• High bandwidth
• Distributes workload and decreases travel charges

Disadvantages of a wide area network (WAN)

• Security problems
• Needs firewall and antivirus software
• High setup cost
• Server downtime and disconnection issues
Examples of wide area network (WAN)

• Internet
• Most big banks
• Airline companies
• Stock brokerages
• Railway reservation counters
• Satellite systems
Figure 4.15: LAN and WAN
