Module 2 CN 18EC71
ENGINEERING COLLEGE
Benjanapadavu, Mangalore-574219
NOTES
Computer Networks
For 7TH Semester (CBCS)
Course Code: 18EC71
Prepared By: Rajitha A A
Module- 2
DATA LINK LAYER, MAC,
DEPARTMENT OF
ELECTRONICS & COMMUNICATION
ENGINEERING
MODULE 2
SYLLABUS:
Data-link Layer:
Introduction: Nodes and Links, Services, Categories of link, Sublayers, Link Layer addressing: Types
of addresses, ARP. Data Link Control (DLC) services: Framing, Flow and Error Control, Data Link
Layer Protocols: Simple Protocol, Stop and Wait protocol, Piggybacking. (9.1, 9.2(9.2.1, 9.2.2), 11.1,
11.2 of Text)
Text Books:
Forouzan, Data Communications and Networking, 5th Edition, McGraw Hill, 2013, ISBN: 1-
25906475-3.
INTRODUCTION
➢ The data-link layer at Alice’s computer communicates with the data-link layer at router R2.
The data-link layer at router R2 communicates with the data-link layer at router R4, and so
on. Finally, the data-link layer at router R7 communicates with the data-link layer at Bob’s
computer. Only one data-link layer is involved at the source or the destination, but two data-
link layers are involved at each router.
➢ The reason is that Alice’s and Bob’s computers are each connected to a single network, but
each router takes input from one network and sends output to another network. Note that
although switches are also involved in the data-link-layer communication, for simplicity we
have not shown them in the Fig. 1.
➢ Communication at the data-link layer is node-to-node. A data unit from one point in the
Internet needs to pass through many networks (LANs and WANs) to reach another point.
These LANs and WANs are connected by routers.
➢ It is customary to refer to the two end hosts and the routers as nodes and the networks in
between as links. Fig. 2 is a simple representation of links and nodes when the path of the
data unit is only six nodes.
➢ The first node is the source host; the last node is the destination host. The other four nodes
are four routers. The first, the third, and the fifth links represent the three LANs; the second
and the fourth links represent the two WANs.
SERVICES
➢ The data-link layer is located between the physical and the network layers. The datalink layer
provides services to the network layer; it receives services from the physical layer.
➢ Framing
• The first service provided by the data-link layer is framing. The data-link layer at each
node needs to encapsulate the datagram (packet received from the network layer) in a
frame before sending it to the next node.
• The node also needs to decapsulate the datagram from the frame received on the logical
channel. Although we have shown only a header for a frame, a frame may have both a
header and a trailer. Different data-link layers have different formats for framing.
➢ Flow Control
• Whenever we have a producer and a consumer, we need to think about flow control. If
the producer produces items that cannot be consumed, accumulation of items occurs.
• The sending data-link layer at the end of a link is a producer of frames; the receiving
data-link layer at the other end of a link is a consumer. If the rate of produced frames is
higher than the rate of consumed frames, frames at the receiving end need to be buffered
while waiting to be consumed (processed).
• We cannot have an unlimited buffer size at the receiving side. We have two choices. The
first choice is to let the receiving data-link layer drop the frames if its buffer is full. The
second choice is to let the receiving data-link layer send feedback to the sending data-link
layer to ask it to stop or slow down.
• Different data-link-layer protocols use different strategies for flow control. Flow control
also occurs at the transport layer, where it is of even greater importance.
➢ Error Control
• At the sending node, a frame in a data-link layer needs to be changed to bits, transformed
to electromagnetic signals, and transmitted through the transmission media.
• At the receiving node, electromagnetic signals are received, transformed to bits, and put
together to create a frame.
• Since electromagnetic signals are susceptible to error, a frame is susceptible to error. The
error needs first to be detected. After detection, it needs to be either corrected at the
receiver node or discarded and retransmitted by the sending node. Note that error
detection and correction is an issue in every layer (node-to-node or host-to-host).
➢ Congestion Control
• Although a link may be congested with frames, which may result in frame loss, most
data-link-layer protocols do not directly use a congestion control to alleviate congestion,
although some wide-area networks do.
• In general, congestion control is considered an issue in the network layer or the transport
layer because of its end-to-end nature.
TWO CATEGORIES OF LINKS
➢ Although two nodes are physically connected by a transmission medium such as cable or air,
we need to remember that the data-link layer controls how the medium is used.
➢ We can have a data-link layer that uses the whole capacity of the medium; we can also have a
data-link layer that uses only part of the capacity of the link. In other words, we can have a
point-to-point link or a broadcast link.
➢ In a point-to-point link, the link is dedicated to the two devices; in a broadcast link, the link is
shared between several pairs of devices. For example, when two friends use the traditional
home phones to chat, they are using a point-to-point link; when the same two friends use
their cellular phones, they are using a broadcast link (the air is shared among many cell
phone users).
TWO SUBLAYERS
➢ To better understand the functionality of and the services provided by the link layer, we can
divide the data-link layer into two sublayers: data link control (DLC) and media access
control (MAC). This is not unusual because LAN protocols use the same strategy.
➢ The data link control sublayer deals with all issues common to both point-to-point and
broadcast links; the media access control sublayer deals only with issues specific to broadcast
links. In other words, we separate these two types of links at the data-link layer, as shown in
Fig. 4.
LINK-LAYER ADDRESSING
➢ IP addresses are the identifiers at the network layer that define the exact points in the
Internet where the source and destination hosts are connected. However, in a connectionless
internetwork such as the Internet, we cannot make a datagram reach its destination using
only IP addresses.
➢ The reason is that each datagram in the Internet, from the same source host to the same
destination host, may take a different path. The source and destination IP addresses define the
two ends but cannot define which links the datagram should pass through.
➢ Note that IP addresses in a datagram should not be changed. If the destination IP address in a
datagram changes, the packet never reaches its destination; if the source IP address in a
datagram changes, the destination host or a router can never communicate with the source if
a response needs to be sent back or an error needs to be reported. This shows that we need
another addressing mechanism in a connectionless internetwork: the link-layer addresses
of the two nodes.
➢ A link-layer address is sometimes called a link address, sometimes a physical address, and
sometimes a MAC address. We use these terms interchangeably in this book. Since a link is
controlled at the data-link layer, the addresses need to belong to the data-link layer. When a
datagram passes from the network layer to the data-link layer, the datagram will be
encapsulated in a frame and two data-link addresses are added to the frame header. These
two addresses are changed every time the frame moves from one link to another. Fig. 5
demonstrates the concept in a small internet.
➢ In the internet in Fig. 5, we have three links and two routers. We also have shown only two
hosts: Alice (source) and Bob (destination). For each host, we have shown two addresses, the
IP addresses (N) and the link-layer addresses (L). Note that a router has as many pairs of
addresses as the number of links the router is connected to. We have shown three frames, one
in each link. Each frame carries the same datagram with the same source and destination
addresses (N1 and N8), but the link-layer addresses of the frame change from link to link. In
link 1, the link-layer addresses are L1 and L2. In link 2, they are L4 and L5. In link 3, they are L7 and
L8.
➢ Note that the IP addresses and the link-layer addresses are not in the same order. For IP
addresses, the source address comes before the destination address; for link-layer addresses,
the destination address comes before the source. The datagrams and frames are designed in
this way, and we follow the design.
Some link-layer protocols define three types of addresses: unicast, multicast, and broadcast.
➢ Unicast Address
Each host or each interface of a router is assigned a unicast address. Unicasting means one-
to-one communication. A frame with a unicast destination address is destined for only one
entity on the link.
Example:
The unicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes)
that are presented as 12 hexadecimal digits separated by colons; for example, the following is
a link-layer address of a computer.
A3:34:45:11:92:F1
➢ Multicast Address
Some link-layer protocols define multicast addresses. Multicasting means one-to-many
communication. However, the jurisdiction is local (inside the link).
Example:
The multicast link-layer addresses in the most common LAN, Ethernet, are 48 bits (six bytes)
that are presented as 12 hexadecimal digits separated by colons.
The second digit, however, needs to be an even number in hexadecimal. The following
shows a multicast address:
A2:34:45:11:92:F1
➢ Broadcast Address
Some link-layer protocols define a broadcast address. Broadcasting means one-to-all
communication. A frame with a destination broadcast address is sent to all entities in the link.
Example:
The broadcast link-layer addresses in the most common LAN, Ethernet, are 48 bits, all 1s,
that are presented as 12 hexadecimal digits separated by colons. The following shows a
broadcast address:
FF:FF:FF:FF:FF:FF
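The three address types above can be told apart mechanically. The following sketch (the function name is ours, and the multicast test follows the second-hex-digit rule stated in these notes for Ethernet) classifies a colon-separated 48-bit address:

```python
def classify_mac(addr: str) -> str:
    """Classify a 48-bit Ethernet address written as 12 hex digits
    separated by colons, using the rules stated in these notes."""
    digits = addr.replace(":", "").upper()
    if len(digits) != 12 or any(d not in "0123456789ABCDEF" for d in digits):
        raise ValueError("not a valid 48-bit link-layer address")
    if digits == "F" * 12:            # all 1s: one-to-all
        return "broadcast"
    # Per the text: an even second hex digit marks a multicast address.
    if int(digits[1], 16) % 2 == 0:
        return "multicast"
    return "unicast"

print(classify_mac("FF:FF:FF:FF:FF:FF"))  # broadcast
print(classify_mac("A2:34:45:11:92:F1"))  # multicast (second digit 2 is even)
print(classify_mac("A3:34:45:11:92:F1"))  # unicast
```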
ADDRESS RESOLUTION PROTOCOL (ARP)
➢ Anytime a host or a router needs to find the link-layer address of another host or router in its
network, it sends an ARP request packet. The packet includes the link-layer and IP addresses
of the sender and the IP address of the receiver. Because the sender does not know the link-
layer address of the receiver, the query is broadcast over the link using the link-layer
broadcast address (see Fig. 7).
Fig. 7: ARP operation
➢ Every host or router on the network receives and processes the ARP request packet, but only
the intended recipient recognizes its IP address and sends back an ARP response packet. The
response packet contains the recipient’s IP and link-layer addresses. The packet is unicast
directly to the node that sent the request packet.
➢ In Fig. 7a, the system on the left (A) has a packet that needs to be delivered to another system
(B) with IP address N2. System A needs to pass the packet to its data-link layer for the actual
delivery, but it does not know the physical address of the recipient. It uses the services of
ARP by asking the ARP protocol to send a broadcast ARP request packet to ask for the
physical address of a system with an IP address of N2.
➢ This packet is received by every system on the physical network, but only system B will
answer it, as shown in Fig. 7b. System B sends an ARP reply packet that includes its physical
address. Now system A can send all the packets it has for this destination using the physical
address it received.
CACHING
➢ A question that is often asked is this: If system A can broadcast a frame to find the link layer
address of system B, why can’t system A send the datagram for system B using a broadcast
frame? In other words, instead of sending one broadcast frame (ARP request), one unicast
frame (ARP response), and another unicast frame (for sending the datagram), system A can
encapsulate the datagram and send it to the network. System B receives it and keeps it; the
other systems discard it.
➢ To answer the question, we need to think about the efficiency. It is probable that system A
has more than one datagram to send to system B in a short period of time. For example, if
system B is supposed to receive a long e-mail or a long file, the data do not fit in one
datagram.
➢ Let us assume that there are 20 systems connected to the network (link): system A, system B,
and 18 other systems. We also assume that system A has 10 datagrams to send to system B in
one second.
• Without using ARP, system A needs to send 10 broadcast frames. Each of the 18 other
systems needs to receive each frame, decapsulate it, remove the datagram, and pass it to
its network layer only to find that the datagram does not belong to it. This means
processing and discarding 180 broadcast frames.
• Using ARP, system A needs to send only one broadcast frame. Each of the 18 other
systems needs to receive the frame, decapsulate it, remove the ARP message, and pass it
to its ARP protocol to find that the frame must be discarded. This means processing and
discarding only 18 (instead of 180) broadcast frames. After system B responds with its
own data-link address, system A can store the link-layer address in its cache memory.
The remaining nine frames are unicast. Since processing broadcast frames is expensive
(time consuming), the second method is preferable.
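The 180-versus-18 count in this example is simple arithmetic; a small helper (illustrative only) makes it explicit:

```python
def wasted_processings(n_systems: int, n_datagrams: int, use_arp: bool) -> int:
    """Number of broadcast frames that uninterested stations must
    receive, decapsulate, and discard when one host on a link of
    n_systems hosts sends n_datagrams to another host."""
    bystanders = n_systems - 2                    # all except sender and receiver
    broadcasts = 1 if use_arp else n_datagrams    # one ARP request vs. every datagram
    return bystanders * broadcasts

print(wasted_processings(20, 10, use_arp=False))  # 180
print(wasted_processings(20, 10, use_arp=True))   # 18
```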
PACKET FORMAT
➢ Fig. 8 shows the format of an ARP packet. The names of the fields are self-explanatory. The
hardware type field defines the type of the link-layer protocol; Ethernet is given the type 1.
➢ The protocol type field defines the network-layer protocol: IPv4 protocol is (0800)16. The
source hardware and source protocol addresses are variable-length fields defining the link-
layer and network-layer addresses of the sender. The destination hardware address and
destination protocol address fields define the receiver link-layer and network-layer addresses.
➢ An ARP packet is encapsulated directly into a data-link frame. The frame needs to have a
field to show that the payload belongs to the ARP and not to the network-layer datagram.
Fig. 8: ARP packet
Example:
A host with IP address N1 and MAC address L1 has a packet to send to another host with IP
address N2 and physical address L2 (which is unknown to the first host). The two hosts are on
the same network. Fig. 9 shows the ARP request and response messages.
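As a hedged illustration of the packet format in Fig. 8, the sketch below packs an ARP request using the standard Ethernet/IPv4 field sizes (2-byte hardware and protocol types, 1-byte address lengths, 2-byte operation, then 6-byte MAC and 4-byte IPv4 addresses). The sample addresses are made up:

```python
import struct

def build_arp(oper, sha, spa, tha, tpa):
    """Pack an ARP message for Ethernet (hardware type 1) carrying
    IPv4 (protocol type 0x0800); oper is 1 for a request, 2 for a reply.
    sha/tha are 6-byte MAC addresses, spa/tpa are 4-byte IPv4 addresses."""
    return struct.pack("!HHBBH", 1, 0x0800, 6, 4, oper) + sha + spa + tha + tpa

# ARP request: the sender knows its own addresses but not the target's
# MAC, so the target hardware address field is left as zeros.
req = build_arp(1,
                bytes.fromhex("A334451192F1"), bytes([10, 0, 0, 1]),
                bytes(6),                      bytes([10, 0, 0, 2]))
print(len(req))  # 28 bytes: the fixed ARP size for Ethernet/IPv4
```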
DLC SERVICES
The data link control (DLC) deals with procedures for communication between two adjacent
nodes—node-to-node communication—no matter whether the link is dedicated or broadcast.
Data link control functions include framing and flow and error control.
FRAMING
➢ Data transmission in the physical layer means moving bits in the form of a signal from the
source to the destination. The physical layer provides bit synchronization to ensure that the
sender and receiver use the same bit durations and timing.
➢ The data-link layer, on the other hand, needs to pack bits into frames, so that each frame is
distinguishable from another.
➢ Our postal system practices a type of framing. The simple act of inserting a letter into an
envelope separates one piece of information from another; the envelope serves as the
delimiter. In addition, each envelope defines the sender and receiver addresses, which is
necessary since the postal system is a many-to-many carrier facility.
➢ Framing in the data-link layer separates a message from one source to a destination by
adding a sender address and a destination address. The destination address defines where the
packet is to go; the sender address helps the recipient acknowledge the receipt.
➢ Although the whole message could be packed in one frame, that is not normally done. One
reason is that a frame can be very large, making flow and error control very inefficient.
➢ When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame. When a message is divided into smaller frames, a single-
bit error affects only that small frame.
Frame Size
➢ Frames can be of fixed or variable size.
➢ In fixed-size framing, there is no need for defining the boundaries of the frames; the size
itself can be used as a delimiter. An example of this type of framing is the ATM WAN,
which uses frames of fixed size called cells.
➢ Variable-size framing is prevalent in local-area networks, which need a way to define the
end of one frame and the beginning of the next.
➢ Two approaches have been used for this purpose: a character-oriented approach and a bit-
oriented approach.
Character-Oriented Framing
➢ In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters from a
coding system such as ASCII. The header, which normally carries the source and destination
addresses and other control information and the trailer, which carries error detection
redundant bits, are also multiples of 8 bits.
➢ To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame. The flag, composed of protocol-dependent special characters, signals the start
or end of a frame. Figure 10 shows the format of a frame in a character-oriented protocol.
➢ Character-oriented framing was popular when only text was exchanged by the data-link layers.
The flag could be selected to be any character not used for text communication. Now, however,
we send other types of information such as graphs, audio, and video; any character used for the
flag could also be part of the information. If this happens, the receiver, when it encounters this
pattern in the middle of the data, thinks it has reached the end of the frame.
➢ To fix this problem, a byte-stuffing strategy was added to character-oriented framing. In byte
stuffing (or character stuffing), a special byte is added to the data section of the frame when there
is a character with the same pattern as the flag. The data section is stuffed with an extra byte.
This byte is usually called the escape character (ESC) and has a predefined bit pattern.
➢ Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag. Fig. 11 shows the situation.
➢ Byte stuffing by the escape character allows the presence of the flag in the data section of the
frame, but it creates another problem. What happens if the text contains one or more escape
characters followed by a byte with the same pattern as the flag? The receiver removes the
escape character, but keeps the next byte, which is incorrectly interpreted as the end of the
frame.
➢ To solve this problem, the escape characters that are part of the text must also be marked by
another escape character. In other words, if the escape character is part of the text, an extra
one is added to show that the second one is part of the text.
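Byte stuffing and unstuffing can be sketched in a few lines. The FLAG and ESC byte values below are illustrative; each character-oriented protocol defines its own special characters:

```python
FLAG, ESC = b"\x7e", b"\x7d"     # example delimiter and escape values (assumed)

def byte_stuff(data: bytes) -> bytes:
    """Escape every flag or escape byte inside the frame's data section."""
    out = bytearray()
    for b in data:
        if bytes([b]) in (FLAG, ESC):
            out += ESC            # mark the next byte as ordinary data
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver side: remove each ESC and keep the byte that follows."""
    out, i = bytearray(), 0
    while i < len(stuffed):
        if bytes([stuffed[i]]) == ESC:
            i += 1                # skip ESC, keep the following byte as data
        out.append(stuffed[i])
        i += 1
    return bytes(out)

data = b"ab\x7ec\x7dd"            # contains both a flag and an escape byte
assert byte_unstuff(byte_stuff(data)) == data
```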
Bit-Oriented Framing
➢ In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on. However, in addition to headers (and
possible trailers), we still need a delimiter to separate one frame from the other.
➢ Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the
beginning and the end of the frame, as shown in Fig. 12.
➢ This flag can create the same type of problem we saw in the character-oriented protocols. That is,
if the flag pattern appears in the data, we need to somehow inform the receiver that this is not the
end of the frame. We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern
from looking like a flag. The strategy is called bit stuffing.
➢ In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added. This extra
stuffed bit is eventually removed from the data by the receiver. Note that the extra bit is added
after one 0 followed by five 1s regardless of the value of the next bit. This guarantees that the
flag field sequence does not inadvertently appear in the frame.
➢ Fig. 13 shows bit stuffing at the sender and bit removal at the receiver. Note that even if we have
a 0 after five 1s, we still stuff a 0. The 0 will be removed by the receiver. This means that if the
flaglike pattern 01111110 appears in the data, it will change to 011111010 (stuffed) and is not
mistaken for a flag by the receiver. The real flag 01111110 is not stuffed by the sender and is
recognized by the receiver.
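A minimal sketch of bit stuffing over a string of '0'/'1' characters (the string representation is ours; a real implementation works on raw bits):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s, so the flag
    pattern 01111110 can never appear inside the data section."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")       # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: drop the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

print(bit_stuff("01111110"))      # 011111010, as in the discussion of Fig. 13
```

Note that the flag-like data pattern 01111110 becomes 011111010, exactly as described above, and the receiver restores it.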
Flow Control
➢ Whenever an entity produces items and another entity consumes them, there should be a
balance between production and consumption rates.
➢ If the items are produced faster than they can be consumed, the consumer can be
overwhelmed and may need to discard some items.
➢ If the items are produced more slowly than they can be consumed, the consumer must wait,
and the system becomes less efficient.
➢ Flow control is related to the first issue. We need to prevent losing the data items at the
consumer site.
➢ In communication at the data-link layer, we are dealing with four entities: network and data-
link layers at the sending node and network and data-link layers at the receiving node.
Although we can have a complex relationship with more than one producer and consumer,
we ignore the relationships between networks and data-link layers and concentrate on the
relationship between two data-link layers, as shown in Fig. 14.
Fig. 14: Flow control at the data-link layer
➢ Fig. 14 shows that the data-link layer at the sending node tries to push frames toward the
data-link layer at the receiving node. If the receiving node cannot process and deliver the
packet to its network at the same rate that the frames arrive, it becomes overwhelmed with
frames. Flow control in this case can be feedback from the receiving node to the sending
node to stop or slow down pushing frames.
Buffers
➢ Although flow control can be implemented in several ways, one of the solutions is normally
to use two buffers; one at the sending data-link layer and the other at the receiving data-link
layer.
➢ A buffer is a set of memory locations that can hold packets at the sender and receiver. The
flow control communication can occur by sending signals from the consumer to the producer.
➢ When the buffer of the receiving data-link layer is full, it informs the sending data-link layer
to stop pushing frames.
Example
➢ The above discussion requires that the consumers communicate with the producers on two
occasions: when the buffer is full and when there are vacancies. If the two parties use a
buffer with only one slot, the communication can be easier.
➢ Assume that each data-link layer uses a single memory slot to hold a frame. When this
single slot in the receiving data-link layer becomes empty, it signals the sending data-link
layer to send the next frame.
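The one-slot buffer idea can be sketched as follows (class and method names are ours); the sender checks the vacancy signal before pushing each frame:

```python
class SingleSlotReceiver:
    """Receiving data-link layer with a one-frame buffer. It accepts a
    frame only when the slot is vacant (a minimal flow-control sketch)."""
    def __init__(self):
        self.slot = None
    def ready(self):                    # the "vacancy" signal to the sender
        return self.slot is None
    def accept(self, frame):
        assert self.slot is None, "sender ignored flow control"
        self.slot = frame
    def deliver(self):                  # pass the packet up to the network layer
        frame, self.slot = self.slot, None
        return frame

rx = SingleSlotReceiver()
delivered = []
for frame in ["f1", "f2", "f3"]:
    if not rx.ready():                  # sender must wait for the slot to empty
        delivered.append(rx.deliver())
    rx.accept(frame)
delivered.append(rx.deliver())
print(delivered)  # ['f1', 'f2', 'f3']
```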
Error Control
➢ Since the underlying technology at the physical layer is not fully reliable, we need to
implement error control at the data-link layer to prevent the receiving node from delivering
corrupted packets to its network layer.
➢ Error control at the data-link layer is normally very simple and implemented using one of the
following two methods. In both methods, a CRC is added to the frame header by the sender
and checked by the receiver.
• In the first method, if the frame is corrupted, it is silently discarded; if it is not corrupted,
the packet is delivered to the network layer. This method is used mostly in wired LANs
such as Ethernet.
• In the second method, if the frame is corrupted, it is silently discarded; if it is not
corrupted, an acknowledgment is sent (for the purpose of both flow and error control) to
the sender.
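Both methods rest on the same check: the sender appends a CRC and the receiver recomputes it over the received bits. A sketch using Python's CRC-32 as a stand-in for whatever CRC a given protocol actually defines:

```python
import zlib

def make_frame(packet: bytes) -> bytes:
    """Sender: append a 4-byte CRC-32 so the receiver can detect corruption."""
    return packet + zlib.crc32(packet).to_bytes(4, "big")

def check_frame(frame: bytes):
    """Receiver: return the packet if the CRC matches, else None
    (method 1: a corrupted frame is silently discarded)."""
    packet, crc = frame[:-4], frame[-4:]
    if zlib.crc32(packet).to_bytes(4, "big") == crc:
        return packet
    return None

frame = make_frame(b"datagram")
assert check_frame(frame) == b"datagram"
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit in transit
assert check_frame(corrupted) is None
```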
Connectionless Protocol
➢ In a connectionless protocol, frames are sent from one node to the next without any
relationship between the frames; each frame is independent.
➢ The term connectionless here does not mean that there is no physical connection
(transmission medium) between the nodes; it means that there is no connection between
frames.
➢ The frames are not numbered and there is no sense of ordering. Most of the data-link
protocols for LANs are connectionless protocols.
Connection-Oriented Protocol
➢ In a connection-oriented protocol, a logical connection should first be established between
the two nodes (setup phase).
➢ After all frames that are somehow related to each other are transmitted (transfer phase), the
logical connection is terminated (teardown phase). In this type of communication, the frames
are numbered and sent in order.
➢ If they are not received in order, the receiver needs to wait until all frames belonging to the
same set are received and then deliver them in order to the network layer.
➢ Connection oriented protocols are rare in wired LANs, but we can see them in some point-to-
point protocols, some wireless LANs, and some WANs.
DATA-LINK LAYER PROTOCOLS
➢ Traditionally four protocols have been defined for the data-link layer to deal with flow and
error control: Simple, Stop-and-Wait, Go-Back-N, and Selective-Repeat. Although the first
two protocols are still used at the data-link layer, the last two have disappeared.
➢ The behavior of a data-link-layer protocol can be better shown as a finite state machine
(FSM). An FSM is thought of as a machine with a finite number of states. The machine is
always in one of the states until an event occurs.
➢ Each event is associated with two reactions: defining the list (possibly empty) of actions to
be performed and determining the next state (which can be the same as the current state).
➢ One of the states must be defined as the initial state, the state in which the machine starts
when it turns on. In Fig. 15, we show an example of a machine using FSM.
➢ We have used rounded-corner rectangles to show states, colored text to show events, and
regular black text to show actions. A horizontal line is used to separate the event from the
actions, although later we replace the horizontal line with a slash. The arrow shows the
movement to the next state.
➢ The Fig. 15 shows a machine with three states. There are only three possible events and three
possible actions. The machine starts in state I. If event 1 occurs, the machine performs
actions 1 and 2 and moves to state II. When the machine is in state II, two events may occur.
If event 1 occurs, the machine performs action 3 and remains in the same state, state II. If
event 3 occurs, the machine performs no action but moves to state I.
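An FSM like the one in Fig. 15 is naturally coded as a transition table mapping (state, event) to (actions, next state). The sketch below mirrors the transitions described above; state, event, and action names are placeholders:

```python
# (state, event) -> (list of actions, next state), mirroring the
# three-state machine described above (Fig. 15).
TRANSITIONS = {
    ("I",  "event1"): (["action1", "action2"], "II"),
    ("II", "event1"): (["action3"],            "II"),
    ("II", "event3"): ([],                     "I"),
}

def run_fsm(start, events):
    """Start in `start`, apply each event, collect the actions performed."""
    state, performed = start, []
    for ev in events:
        actions, state = TRANSITIONS[(state, ev)]
        performed += actions
    return state, performed

state, actions = run_fsm("I", ["event1", "event1", "event3"])
print(state)    # back in the initial state I
print(actions)  # ['action1', 'action2', 'action3']
```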
SIMPLE PROTOCOL
Our first protocol is a simple protocol with neither flow nor error control. We assume that
the receiver can immediately handle any frame it receives. In other words, the receiver can
never be overwhelmed with incoming frames. Fig. 16 shows the layout for this protocol.
➢ The data-link layer at the sender gets a packet from its network layer, makes a frame out of it,
and sends the frame. The data-link layer at the receiver receives a frame from the link,
extracts the packet from the frame, and delivers the packet to its network layer. The data-link
layers of the sender and receiver provide transmission services for their network layers.
FSMs
➢ The sender site should not send a frame until its network layer has a message to send. The
receiver site cannot deliver a message to its network layer until a frame arrives. We can show
these requirements using two FSMs.
➢ Each FSM has only one state, the ready state. The sending machine remains in the ready state
until a request comes from the process in the network layer. When this event occurs, the
sending machine encapsulates the message in a frame and sends it to the receiving machine.
➢ The receiving machine remains in the ready state until a frame arrives from the sending
machine. When this event occurs, the receiving machine decapsulates the message out of the
frame and delivers it to the process at the network layer. Fig. 17 shows the FSMs for the
simple protocol.
Fig. 18 shows an example of communication using this protocol. It is very simple. The
sender sends frames one after another without even thinking about the receiver.
STOP-AND-WAIT PROTOCOL
➢ Our second protocol is called the Stop-and-Wait protocol, which uses both flow and error
control. We show a primitive version of this protocol here.
➢ In this protocol, the sender sends one frame at a time and waits for an acknowledgment before
sending the next one. To detect corrupted frames, we need to add a CRC to each data frame.
When a frame arrives at the receiver site, it is checked. If its CRC is incorrect, the frame is
corrupted and silently discarded.
➢ The silence of the receiver is a signal for the sender that a frame was either corrupted or lost.
Every time the sender sends a frame, it starts a timer. If an acknowledgment arrives before the
timer expires, the timer is stopped and the sender sends the next frame (if it has one to send).
➢ If the timer expires, the sender resends the previous frame, assuming that the frame was either
lost or corrupted. This means that the sender needs to keep a copy of the frame until its
acknowledgment arrives. When the corresponding acknowledgment arrives, the sender discards
the copy and sends the next frame if it is ready.
➢ Fig. 19 shows the outline for the Stop-and-Wait protocol. Note that only one frame and one
acknowledgment can be in the channels at any time.
Receiver
The receiver is always in the ready state. Two events may occur:
a. If an error-free frame arrives, the message in the frame is delivered to the network layer
and an ACK is sent.
b. If a corrupted frame arrives, the frame is discarded.
Example
Fig. 21 shows an example. The first frame is sent and acknowledged. The second frame is sent,
but lost. After time-out, it is resent. The third frame is sent and acknowledged, but the
acknowledgment is lost. The frame is resent. However, there is a problem with this scheme. The
network layer at the receiver site receives two copies of the third packet, which is not right. In
the next section, we will see how we can correct this problem using sequence numbers and
acknowledgment numbers.
Fig. 21: Flow diagram for Example
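The flow of this example, including the duplicate delivery caused by the lost acknowledgment, can be reproduced with a small simulation (no real timers; losses are listed explicitly, and all names are ours):

```python
def stop_and_wait(packets, frame_losses, ack_losses, max_tries=5):
    """Simulate the primitive Stop-and-Wait protocol: the sender keeps a
    copy of each frame and resends it whenever no ACK arrives in time.
    frame_losses / ack_losses name the lost attempts, e.g. ("p2", 1)
    means the first transmission of p2 is lost."""
    delivered = []                        # what the receiver's network layer sees
    for pkt in packets:
        for attempt in range(1, max_tries + 1):
            if (pkt, attempt) in frame_losses:
                continue                  # frame lost: timer expires, resend
            delivered.append(pkt)         # receiver delivers and sends an ACK
            if (pkt, attempt) not in ack_losses:
                break                     # ACK arrived before the timer expired
            # ACK lost: sender times out and resends the same frame
        else:
            raise RuntimeError("gave up after max_tries")
    return delivered

# Frame 2's first copy is lost; frame 3's ACK is lost, so the receiver
# gets a duplicate of packet 3 -- the flaw that sequence numbers fix.
print(stop_and_wait(["p1", "p2", "p3"], {("p2", 1)}, {("p3", 1)}))
```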
Piggybacking
➢ The two protocols we discussed are designed for unidirectional communication, in which
data is flowing only in one direction although the acknowledgment may travel in the other
direction. Protocols have been designed in the past to allow data to flow in both directions.
➢ However, to make the communication more efficient, the data in one direction is
piggybacked with the acknowledgment in the other direction. In other words, when node A is
sending data to node B, node A also acknowledges the data received from node B. Because
piggybacking makes communication at the data-link layer more complicated, it is not a
common practice.
RANDOM ACCESS
➢ In random-access or contention methods, no station is superior to another station and none is
assigned control over another. At each instance, a station that has data to send uses a
procedure defined by the protocol to make a decision on whether or not to send. This
decision depends on the state of the medium (idle or busy). In other words, each station can
transmit when it desires on the condition that it follows the predefined procedure, including
testing the state of the medium.
➢ Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods are called
random access. Second, no rules specify which station should send next. Stations compete
with one another to access the medium. That is why these methods are also called contention
methods.
➢ In a random-access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there is an
access conflict—collision—and the frames will be either destroyed or modified. To avoid
access conflict or to resolve it when it happens, each station follows a procedure that answers
the following questions:
• When can the station access the medium?
• What can the station do if the medium is busy?
• How can the station determine the success or failure of the transmission?
• What can the station do if there is an access conflict?
ALOHA
➢ ALOHA, the earliest random-access method, was developed at the University of Hawaii in
early 1970. It was designed for a radio (wireless) LAN, but it can be used on any shared
medium.
➢ It is obvious that there are potential collisions in this arrangement. The medium is shared
between the stations. When a station sends data, another station may attempt to do so at the
same time. The data from the two stations collide and become garbled.
Pure ALOHA
➢ The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol.
The idea is that each station sends a frame whenever it has a frame to send (multiple access).
However, since there is only one channel to share, there is the possibility of collision
between frames from different stations. Fig. 24 shows an example of frame collisions in pure
ALOHA.
➢ There are four stations (unrealistic assumption) that contend with one another for access to
the shared channel. The Fig. 24 shows that each station sends two frames; there are a total of
eight frames on the shared medium. Some of these frames collide because multiple frames
are in contention for the shared channel. Fig. 24 shows that only two frames survive: one
frame from station 1 and one frame from station 3. We need to mention that even if one bit of
a frame coexists on the channel with one bit from another frame, there is a collision and both
will be destroyed. It is obvious that we need to resend the frames that have been destroyed
during transmission.
➢ The pure ALOHA protocol relies on acknowledgments from the receiver. When a station
sends a frame, it expects the receiver to send an acknowledgment. If the acknowledgment
does not arrive after a time-out period, the station assumes that the frame (or the
acknowledgment) has been destroyed and resends the frame. A collision involves two or
more stations. If all these stations try to resend their frames after the time-out, the frames will
collide again.
➢ Pure ALOHA dictates that when the time-out period passes, each station waits a random
amount of time before resending its frame. The randomness will help avoid more collisions.
We call this time the backoff time TB.
➢ Pure ALOHA has a second method to prevent congesting the channel with retransmitted
frames. After a maximum number of retransmission attempts, Kmax, a station must give up
and try later. Fig. 25 shows the procedure for pure ALOHA based on the above strategy.
➢ The time-out period is equal to the maximum possible round-trip propagation delay, which is
twice the amount of time required to send a frame between the two most widely separated
stations (2 × Tp). The backoff time TB is a random value that normally depends on K (the
number of attempted unsuccessful transmissions).
Vulnerable time
➢ Let us find the vulnerable time, the length of time in which there is a possibility of collision.
We assume that the stations send fixed-length frames with each frame taking Tfr seconds to
send. Fig. 26 shows the vulnerable time for station B.
Fig. 26: Vulnerable time for pure ALOHA protocol
➢ Station B starts to send a frame at time t. Now imagine station A has started to send its frame
after t − Tfr. This leads to a collision between the frames from station B and station A. On the
other hand, suppose that station C starts to send a frame before time t + Tfr. Here, there is also
a collision between frames from station B and station C. Looking at Fig. 26, we see that the
vulnerable time during which a collision may occur in pure ALOHA is 2 times the frame
transmission time.
Throughput
➢ Let us call G the average number of frames generated by the system during one frame
transmission time. Then it can be proven that the average number of successfully transmitted
frames for pure ALOHA is S = G × e−2G. The maximum throughput Smax is 0.184, for G =
1/2. (We can find it by setting the derivative of S with respect to G to 0).
➢ In other words, if one-half a frame is generated during one frame transmission time (one
frame during two frame transmission times), then 18.4 percent of these frames reach their
destination successfully. We expect G = 1/2 to produce the maximum throughput because
the vulnerable time is 2 times the frame transmission time. Therefore, if a station generates
only one frame in this vulnerable time (and no other stations generate a frame during this
time), the frame will reach its destination successfully.
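The throughput claim can be checked numerically. The following is a small sketch (not part of the text) that scans offered loads G and confirms that S = G × e−2G peaks at G = 1/2 with Smax ≈ 0.184:

```python
import math

def pure_aloha_throughput(G):
    """Average fraction of successful frames for pure ALOHA: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

# Scan offered loads on a fine grid to confirm the maximum sits at G = 1/2.
best_G = max((g / 100 for g in range(1, 301)), key=pure_aloha_throughput)
print(round(best_G, 2))                       # 0.5
print(round(pure_aloha_throughput(0.5), 3))   # 0.184
```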
Slotted ALOHA
➢ Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is no rule that defines
when the station can send. A station may send soon after another station has started or just
before another station has finished. Slotted ALOHA was invented to improve the efficiency
of pure ALOHA.
➢ In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send
only at the beginning of the time slot. Fig. 27 shows an example of frame collisions in slotted
ALOHA.
➢ Because a station is allowed to send only at the beginning of the synchronized time slot, if a
station misses this moment, it must wait until the beginning of the next time slot. This means
that the station which started at the beginning of this slot has already finished sending its
frame. Of course, there is still the possibility of collision if two stations try to send at the
beginning of the same time slot. However, the vulnerable time is now reduced to one-half,
equal to Tfr. Fig. 28 shows the situation.
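This excerpt does not restate the slotted-ALOHA throughput, but the standard result is S = G × e−G, with maximum 1/e ≈ 0.368 at G = 1 (double pure ALOHA's 0.184, matching the halved vulnerable time). A quick comparison sketch:

```python
import math

def slotted_aloha_throughput(G):
    """Standard result for slotted ALOHA: S = G * e^(-G)."""
    return G * math.exp(-G)

for G in (0.5, 1.0, 2.0):
    print(G, round(slotted_aloha_throughput(G), 3))
# G = 1.0 gives the maximum, about 0.368 -- twice pure ALOHA's 0.184,
# which matches the vulnerable time being cut in half.
```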
CSMA
➢ To minimize the chance of collision and, therefore, increase the performance, the CSMA
method was developed. The chance of collision can be reduced if a station senses the
medium before trying to use it. Carrier sense multiple access (CSMA) requires that each
station first listen to the medium (or check the state of the medium) before sending. In other
words, CSMA is based on the principle “sense before transmit” or “listen before talk.”
➢ CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for this is
shown in Fig. 28, a space and time model of a CSMA network. Stations are connected to a
shared channel (usually a dedicated medium).
➢ The possibility of collision still exists because of propagation delay; when a station sends a
frame, it still takes time (although very short) for the first bit to reach every station and for
every station to sense it. In other words, a station may sense the medium and find it idle, only
because the first bit sent by another station has not yet been received.
Vulnerable Time
➢ The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a
signal to propagate from one end of the medium to the other. When a station sends a frame
and any other station tries to send a frame during this time, a collision will result.
➢ But if the first bit of the frame reaches the end of the medium, every station will already have
heard the bit and will refrain from sending. Fig. 29 shows the worst case. The leftmost
station, A, sends a frame at time t1, which reaches the rightmost station D, at time t1 + Tp.
The gray area shows the vulnerable area in time and space.
➢ Nonpersistent: In the nonpersistent method, a station that has a frame to send senses the line.
If the line is idle, it sends immediately. If the line is not idle, it waits a random amount of
time and then senses the line again. The nonpersistent approach reduces the chance of
collision because it is unlikely that two or more stations will wait the same amount of time
and retry to send simultaneously. However, this method reduces the efficiency of the network
because the medium remains idle when there may be stations with frames to send.
➢ 1-Persistent: In the 1-persistent method, after the station finds the line idle, it sends its frame
immediately (with probability 1). This method has the highest chance of collision because
two or more stations may find the line idle and send their frames immediately.
➢ p-Persistent: The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time. The p-persistent approach
combines the advantages of the other two strategies. It reduces the chance of collision and
improves efficiency. In this method, after the station finds the line idle it follows these steps:
• With probability p, the station sends its frame.
• With probability q = 1 − p, the station waits for the beginning of the next time slot and
checks the line again.
o If the line is idle, it goes to step 1.
o If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
Fig. 31: Flow diagram for three persistence methods
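The p-persistent steps above can be sketched as follows. This is an illustrative sketch only: channel_idle is a hypothetical stand-in for sensing the line once per slot, and the backoff procedure itself is not modeled.

```python
import random

def p_persistent_send(p, channel_idle, max_slots=100):
    """Sketch of the p-persistent decision loop.

    channel_idle() reports the (simulated) channel state at each slot check;
    returns 'sent' on transmission, or 'backoff' when the station must act
    as though a collision occurred and run the backoff procedure.
    """
    for _ in range(max_slots):
        if not channel_idle():
            return "backoff"           # line busy: run the backoff procedure
        if random.random() < p:        # with probability p: transmit now
            return "sent"
        # with probability q = 1 - p: wait for the next slot and check again
    return "backoff"

random.seed(1)
print(p_persistent_send(0.3, lambda: True))   # sent
```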
CSMA/CD
➢ The CSMA method does not specify the procedure following a collision. Carrier sense
multiple access with collision detection (CSMA/CD) augments the algorithm to handle the
collision.
➢ In this method, a station monitors the medium after it sends a frame to see if the transmission
was successful. If so, the station is finished. If, however, there is a collision, the frame is sent
again.
➢ To better understand CSMA/CD, let us look at the first bits transmitted by the two stations
involved in the collision. Although each station continues to send bits in the frame until it
detects the collision, we show what happens as the first bits collide. In Fig. 32, stations A and
C are involved in the collision.
➢ At time t1, station A has executed its persistence procedure and starts sending the bits of its
frame. At time t2, station C has not yet sensed the first bit sent by A. Station C executes its
persistence procedure and starts sending the bits in its frame, which propagate both to the left
and to the right. The collision occurs sometime after time t2.
Fig. 32: Collision of the first bits in CSMA/CD
➢ Station C detects a collision at time t3 when it receives the first bit of A’s frame. Station C
immediately (or after a short time, but we assume immediately) aborts transmission. Station
A detects collision at time t4 when it receives the first bit of C’s frame; it also immediately
aborts transmission. Looking at Fig. 32, we see that A transmits for the duration t4 − t1; C
transmits for the duration t3 − t2.
➢ Now that we know the time durations for the two transmissions, we can show a more
complete graph in Fig. 33.
Procedure
➢ Now let us look at the flow diagram for CSMA/CD in Fig. 34. It is similar to the one for the
ALOHA protocol, but there are differences. The first difference is the addition of the
persistence process. We need to sense the channel before we start sending the frame by using
one of the persistence processes we discussed previously (nonpersistent, 1-persistent, or p-
persistent). The corresponding box can be replaced by one of the persistence processes
shown in Fig. 31.
➢ The second difference is the frame transmission. In ALOHA, we first transmit the entire
frame and then wait for an acknowledgment. In CSMA/CD, transmission and collision
detection are continuous processes. We do not send the entire frame and then look for a
collision. The station transmits and receives continuously and simultaneously (using two
different ports or a bidirectional port). We use a loop to show that transmission is a
continuous process. We constantly monitor in order to detect one of two conditions: either
transmission is finished or a collision is detected. Either event stops transmission. When we
come out of the loop, if a collision has not been detected, it means that transmission is
complete; the entire frame is transmitted. Otherwise, a collision has occurred.
➢ The third difference is the sending of a short jamming signal to make sure that all other
stations become aware of the collision.
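The flow just described (persistence, transmit while monitoring, jam on collision, exponential backoff, abort after too many attempts) can be sketched as follows. collision_during is a hypothetical stand-in for the hardware collision detector; the 51.2 μs slot time and the limit of 15 attempts follow the values used in the Ethernet discussion in this module.

```python
import random

SLOT = 51.2e-6   # slot time for 10 Mbps Ethernet (512 bit times), seconds
K_MAX = 15

def csma_cd_send(collision_during):
    """Sketch of the CSMA/CD flow: transmit while monitoring the medium.

    collision_during(attempt) stands in for the collision detector;
    returns the number of attempts used, or None if the station gives up.
    """
    for k in range(1, K_MAX + 1):
        # (a persistence process is assumed to have found the channel idle)
        if not collision_during(k):
            return k                   # whole frame sent, no collision
        # collision: send the 48-bit jam signal (not modeled), then back off
        r = random.randint(0, 2 ** k - 1)
        _wait = r * SLOT               # backoff time TB before retrying
    return None                        # K exceeded K_MAX: abort and try later

print(csma_cd_send(lambda k: k < 3))   # collides twice, succeeds on attempt 3
```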
Energy Level
➢ We can say that the level of energy in a channel can have three values: zero, normal, and
abnormal. At the zero level, the channel is idle. At the normal level, a station has successfully
captured the channel and is sending its frame. At the abnormal level, there is a collision and
the level of the energy is twice the normal level. A station that has a frame to send or is
sending a frame needs to monitor the energy level to determine if the channel is idle, busy, or
in collision mode. Fig. 35 shows the situation.
Throughput
The throughput of CSMA/CD is greater than that of pure or slotted ALOHA. The maximum
throughput occurs at a different value of G and is based on the persistence method and the value
of p in the p-persistent approach. For the 1-persistent method, the maximum throughput is
around 50 percent when G = 1. For the nonpersistent method, the maximum throughput can go
up to 90 percent when G is between 3 and 8.
Traditional Ethernet
One of the LAN protocols that used CSMA/CD is the traditional Ethernet with a data rate of 10
Mbps. The traditional Ethernet was a broadcast LAN that used the 1-persistent method to control
access to the common medium. Later versions of Ethernet have moved away from the CSMA/CD
access method.
CSMA/CA
➢ Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for
wireless networks. Collisions are avoided using CSMA/CA’s three strategies: the interframe
space, the contention window, and acknowledgments, as shown in Fig. 36. We discuss RTS
and CTS frames later.
➢ Interframe Space (IFS): First, collisions are avoided by deferring transmission even if the
channel is found idle. When an idle channel is found, the station does not send immediately.
It waits for a period of time called the interframe space or IFS. Even though the channel may
appear idle when it is sensed, a distant station may have already started transmitting. The
distant station’s signal has not yet reached this station. The IFS time allows the front of the
transmitted signal by the distant station to reach this station. After waiting an IFS time, if the
channel is still idle, the station can send, but it still needs to wait a time equal to the
contention window. The IFS variable can also be used to prioritize stations or frame types.
For example, a station that is assigned a shorter IFS has a higher priority.
➢ Contention Window: The contention window is an amount of time divided into slots. A
station that is ready to send chooses a random number of slots as its wait time. The number
of slots in the window changes according to the binary exponential backoff strategy: it is set
to one slot the first time and doubles each time the station cannot detect an idle channel
after the IFS time.
➢ Acknowledgment: With all these precautions, there still may be a collision resulting in
destroyed data. In addition, the data may be corrupted during the transmission. The positive
acknowledgment and the time-out timer can help guarantee that the receiver has received the
frame.
1. Before sending a frame, the source station senses the medium by checking the energy level
at the carrier frequency.
a. The channel uses a persistence strategy with backoff until the channel is idle.
b. After the channel is found to be idle, the station waits for a period of time called the DCF
interframe space (DIFS); then the station sends a control frame called the request to send
(RTS).
2. After receiving the RTS and waiting a period of time called the short interframe space
(SIFS), the destination station sends a control frame, called the clear to send (CTS), to the
source station. This control frame indicates that the destination station is ready to receive
data.
Fig. 38: CSMA/CA and NAV
3. The source station sends data after waiting an amount of time equal to SIFS.
4. The destination station, after waiting an amount of time equal to SIFS, sends an
acknowledgment to show that the frame has been received. Acknowledgment is needed in
this protocol because the station does not have any means to check for the successful arrival
of its data at the destination. On the other hand, the lack of collision in CSMA/CD is a kind
of indication to the source that data have arrived.
Network Allocation Vector (NAV): When a station sends an RTS frame, it includes the duration of
time that it needs to occupy the channel. The stations that are affected by this transmission create a
timer called a network allocation vector (NAV) that shows how much time must pass before these
stations are allowed to check the channel for idleness. Each time a station accesses the system and sends an RTS
frame, other stations start their NAV. In other words, each station, before sensing the physical medium
to see if it is idle, first checks its NAV to see if it has expired. Fig. 38 shows the idea of NAV.
Hidden-Station Problem
➢ The solution to the hidden station problem is the use of the handshake frames (RTS and CTS). Fig. 38
also shows that the RTS message from B reaches A, but not C.
➢ However, because both B and C are within the range of A, the CTS message, which contains the
duration of data transmission from B to A, reaches C. Station C knows that some hidden station is
using the channel and refrains from transmitting until that duration is over.
ETHERNET PROTOCOL
➢ As we discussed, the TCP/IP protocol suite does not define any protocol for the data-link or the
physical layer. In other words, TCP/IP accepts any protocol at these two layers that can provide
services to the network layer. The data-link layer and the physical layer are the territory of the
local and wide area networks.
Department of ECE,CEC 45
MODULE-2 VII SEM Computer Networks[18EC71]
➢ This means that when we discuss these two layers, we are talking about networks that are using them.
We learned that a local area network (LAN) is a computer network that is designed for a limited
geographic area such as a building or a campus. Although a LAN can be used as an isolated network
to connect computers in an organization for the sole purpose of sharing resources, most LANs today are
also linked to a wide area network (WAN) or the Internet.
➢ In the 1980s and 1990s several different types of LANs were used. All these LANs used a media-
access method to solve the problem of sharing the media. The Ethernet used the CSMA/CD approach.
The Token Ring, Token Bus, and FDDI (Fiber Distributed Data Interface) used the token-passing
approach. During this period, another LAN technology, ATM LAN, which deployed the high-speed
WAN technology (ATM), appeared in the market.
➢ Almost every LAN except Ethernet has disappeared from the marketplace because Ethernet was able to
update itself to meet the needs of the time. Several reasons for this success have been mentioned in the
literature, but we believe that the Ethernet protocol was designed so that it could evolve with the
demand for higher transmission rates. It is natural that an organization that has used an Ethernet LAN
in the past and now needs a higher data rate would update to the new generation instead of switching to
another technology, which might cost more. This means that we confine our discussion of wired LANs
to the discussion of Ethernet.
➢ The logical link control (LLC) sublayer can provide interconnectivity between different LANs
because it makes the MAC sublayer transparent.
ETHERNET EVOLUTION
The Ethernet LAN was developed in the 1970s by Robert Metcalfe and David Boggs. Since then, it has
gone through four generations: Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet
(1 Gbps), and 10 Gigabit Ethernet (10 Gbps), as shown in Fig. 40.
ETHERNET
We refer to the original Ethernet technology with the data rate of 10 Mbps as the Standard Ethernet.
Although most implementations have moved to other technologies in the Ethernet evolution, there are
some features of the Standard Ethernet that have not changed during the evolution.
CHARACTERISTICS
1. Connectionless and Unreliable Service
➢ Ethernet provides a connectionless service; each frame sent is independent of another frame.
The sender sends a frame whenever it has it; the receiver may or may not be ready for it. The sender
may overwhelm the receiver with frames, which may result in dropping frames.
➢ If a frame drops, the sender will not know about it. Since IP, which is using the service of Ethernet, is
also connectionless, it will not know about it either.
➢ If the transport layer is also a connectionless protocol, such as UDP, the frame is lost and salvation
may only come from the application layer. However, if the transport layer is TCP, the sender TCP does
not receive acknowledgment for its segment and sends it again.
➢ Ethernet is also unreliable, like IP and UDP. If a frame is corrupted during transmission and the
receiver finds out about the corruption, which is highly probable because of the CRC-32, the
receiver drops the frame silently. It is the duty of higher-level protocols to find out about it.
2. Frame Format
The Ethernet frame contains seven fields, as shown in Fig. 40.
➢ Preamble: This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert the receiving system
to the coming frame and enable it to synchronize its clock if it’s out of synchronization. The pattern
provides only an alert and a timing pulse. The 56-bit pattern allows the stations to miss some bits at the
beginning of the frame. The preamble is actually added at the physical layer and is not (formally) part
of the frame.
➢ Start frame delimiter (SFD): This field (1 byte: 10101011) signals the beginning of the frame. The
SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are
(11)2 and alert the receiver that the next field is the destination address. This field is actually a flag that
defines the beginning of the frame. We need to remember that an Ethernet frame is a variable-length
frame. It needs a flag to define the beginning of the frame. The SFD field is also added at the physical
layer.
➢ Destination address (DA): This field is six bytes (48 bits) and contains the link layer address of the
destination station or stations to receive the packet. When the receiver sees its own link-layer address,
or a multicast address for a group that the receiver is a member of, or
a broadcast address, it decapsulates the data from the frame and passes the data to the upper layer
protocol defined by the value of the type field.
➢ Source address (SA): This field is also six bytes and contains the link-layer address of the sender of
the packet.
➢ Type: This field defines the upper-layer protocol whose packet is encapsulated in the frame. This
protocol can be IP, ARP, OSPF, and so on. In other words, it serves the same purpose as the protocol
field in a datagram and the port number in a segment or user datagram. It is used for multiplexing and
demultiplexing.
➢ Data: This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a
maximum of 1500 bytes. We discuss the reason for these minimum and maximum values shortly. If the
data coming from the upper layer is more than 1500 bytes, it should be fragmented and encapsulated in
more than one frame. If it is less than 46 bytes, it needs to be padded with extra 0s. A padded data
frame is delivered to the upper-layer protocol as it is (without removing the padding), which means
that it is the responsibility of the upper layer to remove or, in the case of the sender, to add the padding.
The upper-layer protocol needs to know the length of its data. For example, a datagram has a field that
defines the length of the data.
➢ CRC: The last field contains error detection information, in this case a CRC-32. The CRC is calculated
over the address, type, and data fields. If the receiver calculates the CRC and finds that it is not zero
(corruption in transmission), it discards the frame.
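The frame layout above can be sketched in code. This is an illustrative builder only, not a real NIC driver; it pads short data to 46 bytes and appends a CRC-32 (zlib's CRC-32 uses the same polynomial as the Ethernet FCS). The preamble and SFD are omitted because they are added at the physical layer.

```python
import struct
import zlib

def build_frame(dst, src, eth_type, payload):
    """Sketch of Ethernet framing: destination + source + type + padded data + CRC."""
    if len(payload) > 1500:
        raise ValueError("fragment at the upper layer: data field max is 1500 bytes")
    data = payload.ljust(46, b"\x00")        # pad with 0s up to the 46-byte minimum
    header = dst + src + struct.pack("!H", eth_type)
    fcs = zlib.crc32(header + data)          # same polynomial as the Ethernet FCS
    return header + data + struct.pack("<I", fcs)

frame = build_frame(b"\xff" * 6, b"\x02" * 6, 0x0800, b"hi")
print(len(frame))   # 64: the minimum Ethernet frame length
```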
3. Frame Length
➢ Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame. The
minimum length restriction is required for the correct operation of CSMA/CD: a frame must be long
enough for a collision to be detected, which gives a minimum frame length of 512 bits (64 bytes). Of
this, 18 bytes are header and trailer, leaving a minimum of 46 bytes of data. The maximum length of
a frame (without the preamble and SFD) is 1518 bytes, which limits the data field to 1500 bytes.
➢ The question is, then, how the actual unicast, multicast, and broadcast transmissions are distinguished
from each other. The answer is in the way the frames are kept or dropped.
• In a unicast transmission, all stations will receive the frame; the intended recipient keeps and
handles the frame; the rest discard it.
• In a multicast transmission, all stations will receive the frame; the stations that are members of the
group keep and handle it; the rest discard it.
• In a broadcast transmission, all stations (except the sender) will receive the frame and all stations
(except the sender) keep and handle it.
Access Method
Since the network that uses the standard Ethernet protocol is a broadcast network, we need to use an access
method to control access to the shared medium. The standard Ethernet chose CSMA/CD with the
1-persistent method. Let us use a scenario to see how this method works for the Ethernet protocol.
➢ Assume station A in Fig. 42 has a frame to send to station D. Station A first should check whether any
other station is sending (carrier sense). Station A measures the level of energy on the medium (for a
short period of time, normally less than 100 μs). If there is no signal energy on the medium, it means
that no station is sending (or the signal has not reached station A). Station A interprets this situation as
idle medium. It starts sending its frame. On the other hand, if the signal energy level is not zero, it
means that the medium is being used by another station. Station A continuously monitors the medium
until it becomes idle for 100 μs. It then starts sending the frame. However, station A needs to keep a
copy of the frame in its buffer until it is sure that there is no collision. When station A can be sure of
this is the subject of the following discussion.
➢ The medium sensing does not stop after station A has started sending the frame. Station A needs to send
and receive continuously. Two cases may occur:
Case – 1:
• Station A has sent 512 bits and no collision is sensed (the energy level did not go above the regular
energy level), the station then is sure that the frame will go through and stops sensing the medium.
Where does the number 512 bits come from?
• If we consider the transmission rate of the Ethernet as 10 Mbps, this means that it takes the station
512/(10 Mbps) = 51.2 μs to send out 512 bits. With the speed of propagation in a cable (2 × 10^8
meters/second), the first bit could have gone 10,240 meters (one way) or only 5120 meters (round trip), have
collided with a bit from the last station on the cable, and have gone back.
• In other words, if a collision were to occur, it should occur by the time the sender has sent out 512 bits
(worst case) and the first bit has made a round trip of 5120 meters. We should know that if the collision
happens in the middle of the cable, not at the end, station A hears the collision earlier and aborts the
transmission. We also need to mention another issue.
• The above assumption is that the length of the cable is 5120 meters. The designer of the standard
Ethernet actually put a restriction of 2500 meters because we need to consider the delays encountered
throughout the journey. It means that they considered the worst case. The whole idea is that if station A
does not sense the collision before sending 512 bits, there must have been no collision, because during
this time, the first bit has reached the end of the line and all other stations know that a station is sending
and refrain from sending.
• In other words, the problem occurs when another station (for example, the last station) starts sending
before the first bit of station A has reached it. The other station mistakenly thinks that the line is free
because the first bit has not yet reached it.
• The reader should notice that the restriction of 512 bits actually helps the sending station: The sending
station is certain that no collision will occur if it is not heard during the first 512 bits, so it can discard
the copy of the frame in its buffer.
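The numbers in Case 1 can be reproduced with simple arithmetic:

```python
# Arithmetic behind the 512-bit rule, using the figures from the text.
bit_rate = 10e6          # 10 Mbps
slot_bits = 512
speed = 2e8              # propagation speed in cable, meters per second

slot_time = slot_bits / bit_rate            # time to send out 512 bits
one_way = round(speed * slot_time)          # farthest the first bit can travel

print(round(slot_time * 1e6, 1))   # 51.2 (microseconds)
print(one_way)                     # 10240 (meters, one way)
print(one_way // 2)                # 5120 (meters, round trip)
```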
Case – 2:
• Station A has sensed a collision before sending 512 bits. This means that one of the previous bits has
collided with a bit sent by another station. In this case both stations should refrain from sending and
keep the frame in their buffer for resending when the line becomes available.
• However, to inform other stations that there is a collision in the network, the station sends a 48-bit jam
signal. The jam signal is to create enough signal (even if the collision happens after a few bits) to alert
other stations about the collision.
• After sending the jam signal, the stations need to increment the value of K (the number of attempts). If
after the increment K = 15, experience has shown that the network is too busy, and the station needs to
abort its effort and try again later. If K < 15, the station can wait a backoff time (TB in Fig. 25) and
restart the process.
• As Fig. 25 shows, the station creates a random number between 0 and 2K − 1, which means each time
the collision occurs, the range of the random number increases exponentially. After the first collision
(K = 1) the random number is in the range (0, 1). After the second collision
(K = 2) it is in the range (0, 1, 2, 3). After the third collision (K = 3) it is in the range (0, 1, 2,
3, 4, 5, 6, 7).
• So, after each collision, the probability increases that the backoff time becomes longer. This is due to
the fact that if the collision happens even after the third or fourth attempt, it means that the network is
really busy; a longer backoff time is needed.
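The backoff ranges quoted above can be generated directly. A minimal sketch; the 51.2 μs slot duration is taken from the earlier 512-bit discussion, and the function returns the wait in seconds:

```python
import random

SLOT = 51.2e-6   # slot time (512 bit times at 10 Mbps), seconds
K_MAX = 15

def backoff_time(k):
    """Backoff TB after the k-th collision: R x slot time, R drawn from [0, 2^k - 1]."""
    if k > K_MAX:
        raise RuntimeError("network too busy: abort and try again later")
    r = random.randint(0, 2 ** k - 1)
    return r * SLOT

for k in (1, 2, 3):
    print(k, list(range(2 ** k)))   # the range the random number is drawn from
```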
IMPLEMENTATION
➢ The Standard Ethernet defined several implementations, but only four of them became popular during
the 1980s. Table 1 shows a summary of Standard Ethernet implementations.
➢ In the nomenclature 10BaseX, the number defines the data rate (10 Mbps), the term Base means
baseband (digital) signal, and X approximately defines either the maximum length of the cable in
100s of meters (for example, 5 for 500 meters or 2 for 185 meters) or the type of cable, T for
unshielded twisted-pair cable (UTP) and F for fiber-optic.
➢ The standard Ethernet uses a baseband signal, which means that the bits are changed to a digital signal
and directly sent on the line.
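As an illustration only (a hypothetical helper, not part of any standard), the nomenclature can be decoded mechanically:

```python
def decode_ethernet_name(name):
    """Decode the 10BaseX nomenclature described above (illustrative helper)."""
    rate, rest = name.split("Base")
    media = {"5": "thick coax, max segment 500 m",
             "2": "thin coax, max segment 185 m",
             "-T": "unshielded twisted pair (UTP)",
             "-F": "fiber-optic"}
    return f"{rate} Mbps, baseband, {media[rest]}"

print(decode_ethernet_name("10Base5"))    # 10 Mbps, baseband, thick coax, max segment 500 m
print(decode_ethernet_name("10Base-T"))   # 10 Mbps, baseband, unshielded twisted pair (UTP)
```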
Table 1: Summary of Standard Ethernet
10Base5: Thick Ethernet
➢ The first implementation is called 10Base5, thick Ethernet, or Thicknet. 10Base5 uses a bus topology
with an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable.
➢ The transceiver is responsible for transmitting, receiving, and detecting collisions. The transceiver is
connected to the station via a transceiver cable that provides separate paths for sending and receiving.
This means that collision can only happen in the coaxial cable.
➢ The maximum length of the coaxial cable must not exceed 500 m, otherwise, there is excessive
degradation of the signal. If a length of more than 500 m is needed, up to five segments, each a
maximum of 500 meters, can be connected using repeaters.
10Base2: Thin Ethernet
➢ The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2 also uses a bus
topology, but the cable is much thinner and more flexible.
➢ Note that the collision here occurs in the thin coaxial cable. This implementation is more cost effective
than 10Base5 because thin coaxial cable is less expensive than thick coaxial and the tee connections
are much cheaper than taps.
➢ Installation is simpler because the thin coaxial cable is very flexible. However, the length of each
segment cannot exceed 185 m (close to 200 m) due to the high level of attenuation in thin coaxial
cable.
10Base-T: Twisted-Pair Ethernet
➢ The third implementation is called 10Base-T or twisted-pair Ethernet. 10Base-T uses a physical star
topology. The stations are connected to a hub via two pairs of twisted cable, as shown in Fig. 46.
➢ Note that two pairs of twisted cable create two paths (one for sending and one for receiving) between
the station and the hub. Any collision here happens in the hub.
➢ Compared to 10Base5 or 10Base2, we can see that the hub actually replaces the coaxial cable as far as
a collision is concerned. The maximum length of the twisted cable here is defined as 100 m, to
minimize the effect of attenuation in the twisted cable.
WIRELESS LANs
Wireless communication is one of the fastest-growing technologies. The demand for connecting devices
without the use of cables is increasing everywhere. Wireless LANs can be found on college campuses, in
office buildings, and in many public areas.
Architectural Comparison
Let us first compare the architecture of wired and wireless LANs to give some idea of what we need
to look for when we study wireless LANs.
Medium
➢ The first difference we can see between a wired and a wireless LAN is the medium. In a wired LAN,
we use wires to connect hosts. Earlier, we saw that we moved from multiple access to point-to-point
access through the generations of the Ethernet.
➢ In a switched LAN, with a link-layer switch, the communication between the hosts is point-to-point
and full-duplex (bidirectional).
➢ In a wireless LAN, the medium is air and the signal is generally broadcast. When hosts in a wireless
LAN communicate with each other, they share the same medium (multiple access).
➢ In a very rare situation, we may be able to create point-to-point communication between two wireless
hosts by using very limited bandwidth and two directional antennas. In general, however, the medium
is multiple-access, which means we need to use MAC protocols.
Hosts
➢ In a wired LAN, a host is always connected to its network at a fixed point, with a link-layer address
tied to its network interface card (NIC). Of course, a host can move from one point in the Internet to
another point.
➢ In this case, its link-layer address remains the same, but its network-layer address will change.
However, before the host can use the services of the Internet, it needs to be physically connected to the
Internet.
➢ In a wireless LAN, a host is not physically connected to the network; it can move freely (as we’ll see)
and can use the services provided by the network. Therefore, mobility in a wired network and in a
wireless network are totally different issues.
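The point above can be illustrated with a hypothetical Python sketch (the `Host` class and the address values are invented for illustration): the link-layer (MAC) address is fixed by the NIC, while the network-layer (IP) address changes with the network the host joins.

```python
class Host:
    """Illustrative host: the MAC address is burned into the NIC and never
    changes, while the IP address depends on the attached network."""
    def __init__(self, mac: str):
        self.mac = mac   # fixed: tied to the network interface card
        self.ip = None   # assigned by whichever network the host joins

    def attach(self, network_ip: str) -> None:
        self.ip = network_ip  # changes when the host moves to a new network

h = Host("00:1B:44:11:3A:B7")
h.attach("192.168.1.10")   # connected at one point in the Internet
old_mac = h.mac
h.attach("10.0.5.23")      # after moving to a different network
print(h.mac == old_mac)    # True: link-layer address is unchanged
```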
Isolated LANs
➢ The concept of a wired isolated LAN also differs from that of a wireless isolated LAN. A wired
isolated LAN is a set of hosts connected via a link-layer switch (in the recent generation of Ethernet).
➢ A wireless isolated LAN, called an ad hoc network in wireless LAN terminology, is a set of hosts that
communicate freely with each other. The concept of a link-layer switch does not exist in wireless
LANs. Fig. 48 shows two isolated LANs, one wired and one wireless.
➢ An access point (AP) glues two different environments together: one wired and one wireless.
Communication between the AP and the wireless host occurs in a wireless environment;
communication between the AP and the infrastructure occurs in a wired environment.
Fig. 50: Connection of a wired LAN and a wireless LAN to other networks
CHARACTERISTICS
There are several characteristics of wireless LANs that either do not apply to wired LANs or are
negligible there and can be ignored. The following characteristics pave the way for discussing
wireless LAN protocols.
➢ Attenuation
The strength of electromagnetic signals decreases rapidly because the signal disperses in all directions;
only a small portion of it reaches the receiver. The situation becomes worse with mobile senders that
operate on batteries and normally have small power supplies.
➢ Interference
Another issue is that a receiver may receive signals not only from the intended sender, but also from other
senders if they are using the same frequency band.
➢ Multipath Propagation
A receiver may receive more than one signal from the same sender because electromagnetic waves can be
reflected back from obstacles such as walls, the ground, or objects. The result is that the receiver receives
some signals at different phases (because they travel different paths). This makes the signal less
recognizable.
➢ Error
With the above characteristics of a wireless network, we can expect that errors and error detection are
more serious issues in a wireless network than in a wired network. If we measure the error level in terms
of the signal-to-noise ratio (SNR), we can better understand why error detection, error correction, and
retransmission are more important in a wireless network. Informally, SNR measures the ratio of good
stuff to bad stuff (signal to noise). If the SNR is high, the signal is stronger than the noise (the unwanted
signal), so we may be able to convert the signal to actual data. On the other hand, when the SNR is low,
the signal is corrupted by the noise and the data cannot be recovered.
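SNR is conventionally expressed in decibels as 10·log10(P_signal / P_noise). A minimal sketch (the function name and power values are my own choices for illustration):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

# High SNR: signal much stronger than noise, so data is likely recoverable.
print(snr_db(1000, 1))  # 30.0 dB
# Low SNR: signal barely above the noise, so data may be unrecoverable.
print(snr_db(2, 1))     # about 3 dB
```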
ACCESS CONTROL
➢ Perhaps the most important issue we need to discuss in a wireless LAN is access control: how a
wireless host can get access to the shared medium (air). We know that Standard Ethernet uses the
CSMA/CD algorithm. In this method, each host contends to access the medium and sends its frame if it
finds the medium idle.
➢ If a collision occurs, it is detected and the frame is sent again. Collision detection in CSMA/CD serves
two purposes. If a collision is detected, it means that the frame has not been received and needs to be
resent. If a collision is not detected, it is a kind of acknowledgment that the frame was received.
➢ The CSMA/CD algorithm does not work in wireless LANs for three reasons:
1. To detect a collision, a host needs to send and receive at the same time (sending the frame and
receiving the collision signal), which means the host needs to work in a duplex mode. Wireless
hosts do not have enough power to do so (the power is supplied by batteries). They can only send
or receive at one time.
2. Because of the hidden station problem, in which a station may not be aware of another station’s
transmission due to some obstacles or range problems, collision may occur but not be detected. Fig.
51 shows an example of the hidden station problem. Station B has a transmission range shown by
the left oval (sphere in space); every station in this range can hear any signal transmitted by station
B. Station C has a transmission range shown by the right oval (sphere in space); every station
located in this range can hear any signal transmitted by C. Station C is outside the transmission
range of B; likewise, station B is outside the transmission range of C. Station A, however, is in the
area covered by both B and C; it can hear any signal transmitted by B or C. The figure also shows
that the hidden station problem may also occur due to an obstacle. Assume that station B is sending
data to station A. In the middle of this transmission, station C also has data to send to station
A. However, station C is out of B’s range and transmissions from B cannot reach C. Therefore, C
thinks the medium is free. Station C sends its data to A, which results in a collision at A because
this station is receiving data from both B and C. In this case, we say that stations B and C are
hidden from each other with respect to A. Hidden stations can reduce the capacity of the network
because of the possibility of collision.
3. The distance between stations can be great. Signal fading could prevent a station at one end from
hearing a collision at the other end.
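The hidden-station scenario described in reason 2 can be sketched in Python. This is a hypothetical model (positions and radii are invented): stations sit on a line, each with a transmission radius, and two senders are hidden from each other when neither lies in the other's range even though both reach the receiver.

```python
# stations: name -> (position on a line, transmission radius), chosen so that
# B and C both cover A but cannot hear each other (as in Fig. 51).
stations = {"B": (0, 60), "A": (50, 60), "C": (100, 60)}

def in_range(sender: str, receiver: str) -> bool:
    """Can `receiver` hear a transmission from `sender`?"""
    x_s, r_s = stations[sender]
    x_r, _ = stations[receiver]
    return abs(x_s - x_r) <= r_s

def hidden_from_each_other(a: str, b: str, receiver: str) -> bool:
    """A and B are hidden w.r.t. `receiver`: each reaches the receiver,
    but neither can hear the other, so carrier sensing fails."""
    return (not in_range(a, b) and not in_range(b, a)
            and in_range(a, receiver) and in_range(b, receiver))

print(hidden_from_each_other("B", "C", "A"))  # True: collision occurs at A
```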
➢ To overcome the above three problems, Carrier Sense Multiple Access with Collision Avoidance
(CSMA/CA) was invented for wireless LANs.
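The avoidance idea behind CSMA/CA can be sketched as follows. This is a simplified illustration, not the full IEEE 802.11 procedure (the function, parameters, and return strings are assumptions): instead of transmitting and detecting collisions, the station senses the channel and waits a random backoff when it is busy.

```python
import random

def csma_ca_send(channel_busy, max_attempts: int = 5, cw: int = 4) -> str:
    """Try to send a frame with collision avoidance.
    channel_busy: callable returning True while the medium is in use."""
    for attempt in range(max_attempts):
        if not channel_busy():
            return f"sent after {attempt} backoff(s)"
        # Binary exponential backoff: the contention window doubles each
        # attempt; a real station would wait `slots` slot-times here.
        slots = random.randint(0, cw * (2 ** attempt) - 1)
    return "gave up"

# Medium is busy twice, then free:
busy_twice = iter([True, True, False])
print(csma_ca_send(lambda: next(busy_twice)))  # sent after 2 backoff(s)
```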