Computer Network
Define Network:
A network is a set of devices (often referred to as nodes) connected by communication links. A
node can be a computer, printer, or any other device capable of sending and/or receiving data
generated by other nodes on the network. The term "computer network" means a collection of
autonomous computers interconnected by a single technology. Two computers are said to be
interconnected if they are able to exchange information. A computer network consists of various
kinds of nodes. Servers, networking hardware, personal computers, and other specialized or
general-purpose hosts can all be nodes in a computer network. Host names and network
addresses are used to identify them.
Classification:
Based on Transmission Mode: Transmission mode defines the direction of signal flow
between two linked devices. There are three types of transmission modes.
Simplex: In simplex mode, the communication is unidirectional, i.e., data flows in only one
direction.
A device can either only send data or only receive data, but not both.
This transmission mode is not very popular, as most communication requires a
two-way exchange of data. The simplex mode is used in the business field, for example in sales
announcements that do not require any corresponding reply.
Half-Duplex: In half-duplex mode, each station can both transmit and receive, but not at the
same time. When one device is sending data, the other has to wait; this causes a delay in
sending the data at the right time.
Full-Duplex: In full-duplex mode, the communication is bi-directional, i.e., data flows in both
directions.
In Full-Duplex mode, both stations can transmit and receive
simultaneously.
Based on Geographical Area:
PAN: It is a network formed by connecting a few personal devices like computers, laptops,
mobile phones, smart phones, printers etc. All these devices lie within an approximate range of
10 meters. A personal area network may be wired or wireless. One of the most common real-
world examples of a PAN is the connection between a Bluetooth earpiece and a smart phone.
PANs can also connect laptops, tablets, printers, keyboards, and other computerized devices.
LAN: A LAN is a small, high-speed network in which a few systems are interconnected with a
networking device to create a network. As the distance between the nodes or systems increases,
its speed decreases. It is a network that connects computers, mobile phones, tablets, mice,
printers, etc., placed at a limited distance. The geographical area covered by a LAN can range
from a single room, a floor, or an office having one or more buildings in the same premises, to a
laboratory, a school, or a college or university campus. The connectivity is done by means of
wires, Ethernet cables, fiber optics, or Wi-Fi.
MAN: A metropolitan area network is an extension of a local area network spread over a city. It
may be a single network or a network in which more than one local area network can share their
resources. Cable TV networks and cable-based broadband internet services are examples of a MAN.
This kind of network can extend up to 30-40 km. Sometimes, many LANs are connected
together to form a MAN. Most MANs use fiber optic cables to form connections between LANs.
WAN: A Wide Area Network connects computers and other LANs and MANs that are spread
across different geographical locations of a country, or across different countries or continents. A
WAN may therefore span more than one city, country, or continent. Systems in this network are
connected indirectly. Generally, WANs are slower than LANs.
Large business, educational, and government organizations connect their different branches in
different locations across the world through a WAN. The Internet is the largest WAN; it connects
billions of computers and smart phones and millions of LANs from different continents.
Network Topology:
The arrangement of computers and other peripherals in a network is called its topology. Two or
more devices connect to a link; two or more links form a topology. The topology of a network is
the geometric representation of the relationship of all the links and linking devices (usually
called nodes) to one another. There are 7 basic topologies -
1. Point to Point Topology: It is a type of topology that works on the functionality of the
sender and receiver. It is the simplest communication between two nodes, in which one is
the sender and the other one is the receiver. Point-to-Point provides high bandwidth.
Advantages: The connectivity is extremely easy to set up.
It has low Latency.
Simple to use.
Due to the limited number of nodes it can reach, this network topology
is extremely quick.
Disadvantages: This architecture can be used in a small area where the nodes are close
together.
If a link in the network is broken, the entire network will be rendered
inoperable.
It is the topology with the fewest number of elements, so it cannot be scaled to
many devices.
2. Mesh Topology: In a mesh topology, every node is connected to every other node through a
dedicated point-to-point link.
Advantages: Failure of a single device won’t break the network. Data transmission is
more consistent because failure doesn’t disrupt its processes.
In a mesh topology, each node acts like a router; however, there are no
exclusive routers. It is easy to add an additional node in this topology and connect it to the
network.
This topology is robust enough to handle almost any failure. A mesh doesn’t
have a centralized authority.
Disadvantages: In this topology, each node works as a router that increases complexity.
As compared to other network topologies, such as point to point, star,
bus, the cost of mesh topology is high.
In the mesh topology, the installation is much harder.
Configuration is a complex process. The power requirement is higher, as all the
nodes need to remain active all the time and share the load.
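To see why a full mesh becomes expensive, the short sketch below (a Python illustration, not part of the original notes) counts the dedicated links needed for a full mesh versus a star of the same size, using the standard n(n-1)/2 link count for a full mesh.

```python
# Illustrative sketch: number of dedicated links needed for a full mesh vs. a star.

def mesh_links(n: int) -> int:
    """A full mesh needs a dedicated link between every pair of nodes: n(n-1)/2."""
    return n * (n - 1) // 2

def star_links(n: int) -> int:
    """A star needs one link per node, to the central hub/switch."""
    return n

for n in (4, 8, 16):
    print(f"{n} nodes -> mesh: {mesh_links(n)} links, star: {star_links(n)} links")
```

For 16 nodes a full mesh already needs 120 links while a star needs only 16, which is the cabling cost the disadvantage above refers to.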
3. Star Topology: A star network, or star topology, is one of the most common network setups.
In this configuration, every node connects to a central network device, like a hub, switch,
or computer. The central node acts as a server, and the connecting nodes act as clients.
A benefit is that if a cable fails, only one node is brought down.
Advantages: It is very reliable – if one cable or device fails then all the others will
still work.
Centralized management of the network, through the use of the central
computer, hub, or switch.
Adding or removing devices requires only a single cable connection to
the central hub. There is no need to reconfigure the entire network.
Easy fault detection, because the faulty link can often be easily identified. Less
expensive, because each device only needs one I/O port and is connected to the hub
with a single link.
Disadvantages: If the central computer, hub, or switch fails, the entire network goes
down and all computers are disconnected from the network.
Managing and maintaining the central hub requires more resources and
technical expertise than simpler topologies.
4. Bus Topology: Bus topology is a type of network topology where all devices on the network
are connected to a single cable, called a bus or backbone. This cable serves as a shared
communication line, allowing all devices to receive the same signal simultaneously. When a
device wants to send data to another device, it broadcasts the data onto the cable. However,
only the device with the matching destination address will process the data. Other devices
will ignore the data.
Advantages: Cabling is less. A single cable connects all nodes in a bus topology.
The length of cable required is less than in a star topology.
It is easy to connect or remove devices in this network without affecting
any other device.
This is best suited for situations where only a few computers are required
for connection establishment.
Disadvantages: Bus topology is not good for large networks.
This network topology is very slow as compared to other topologies.
The entire network will fail if the central cable gets damaged or
faulty.
Hybrid Topology: A hybrid topology combines two or more different topologies into one network.
Advantages: This type of topology combines the benefits of different types of topologies in
one topology.
It is an effective type of topology with higher speed since two or more
topologies are combined in a network.
Data can be securely transferred between different networks.
Network Protocols:
There are many different protocols, each designed for a specific purpose.
Hypertext Transfer Protocol (HTTP): This protocol is designed for transferring hypertext between
two or more systems. Most of the data sharing over the web is done using HTTP.
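As a rough illustration of HTTP's request/response exchange, here is a minimal Python sketch using the standard library's http.client; the host example.com is only a placeholder.

```python
# A minimal sketch of an HTTP GET request with Python's standard library.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")                 # ask the server for the root document
response = conn.getresponse()            # status line + headers + body
print(response.status, response.reason)  # e.g. 200 OK
body = response.read()                   # the hypertext (HTML) payload
conn.close()
```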
User Datagram Protocol (UDP): It is a standard protocol used over the internet. UDP allows
computer applications to send messages in the form of datagrams from one machine to another
over an Internet Protocol (IP) network. UDP is an alternative communication protocol to the TCP
protocol. Like other protocols, UDP provides a set of rules
that governs how the data should be exchanged over the internet. The UDP works by
encapsulating the data into the packet and providing its own header information to the packet.
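A minimal Python sketch of UDP's connectionless datagrams using the standard socket module; the loopback address and port 9999 are assumed test values, not part of the notes.

```python
# A minimal sketch of sending and receiving one UDP datagram on the local machine.
import socket

# Receiver: bind a UDP socket to a local port (assumed test port 9999).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM = UDP
recv.bind(("127.0.0.1", 9999))

# Sender: no connection set-up; each sendto() ships one self-contained datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello over UDP", ("127.0.0.1", 9999))

data, addr = recv.recvfrom(1024)    # read one datagram (up to 1024 bytes)
print(data, "from", addr)
send.close()
recv.close()
```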
File Transfer Protocol (FTP): It is a Client/server protocol that is used for moving files to or
from a host computer. FTP is a protocol designed for transferring files between computers. It
allows users to upload, download, and manipulate files on remote servers. FTP is often used for
website maintenance and content updates.
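The hedged sketch below shows typical FTP client operations with Python's ftplib; the host name, credentials, and file names are hypothetical placeholders.

```python
# A minimal sketch of FTP upload/download; all names below are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.com")              # connect to the (hypothetical) server
ftp.login("user", "password")             # authenticate as a client
ftp.cwd("/public")                        # change to a remote directory
with open("report.txt", "rb") as f:
    ftp.storbinary("STOR report.txt", f)  # upload a local file
with open("copy.txt", "wb") as out:
    ftp.retrbinary("RETR report.txt", out.write)  # download the same file
ftp.quit()
```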
Simple Mail Transfer Protocol (SMTP): Sending email is done via the SMTP protocol. It makes
email message transmission between servers, and from a client to a server, possible. To
ensure that emails are sent consistently and in the right format, SMTP is necessary for email
communication.
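A minimal sketch of handing a message to an SMTP server with Python's smtplib; the server name, port, addresses, and credentials are illustrative assumptions only.

```python
# A minimal sketch of sending one email via an (assumed) SMTP server.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test over SMTP"
msg.set_content("Hello Bob, this message is relayed by an SMTP server.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()                                   # upgrade to an encrypted channel
    server.login("alice@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)                            # hand the mail over for delivery
```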
Hypertext Transfer Protocol Secure (HTTPS): It is the secured version of HTTP. This protocol
ensures secure communication between two computers where one sends the request through the
browser and the other fetches the data from the web server.
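For comparison with plain HTTP, this sketch makes the same request over HTTPS using http.client with a default TLS context; example.com is again just a placeholder.

```python
# A minimal sketch of an HTTPS request; TLS and certificate checks come from the
# standard library's default context.
import http.client
import ssl

ctx = ssl.create_default_context()      # verifies the server certificate
conn = http.client.HTTPSConnection("example.com", 443, context=ctx, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)  # same exchange as HTTP, but encrypted
conn.close()
```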
OSI Model:
OSI stands for Open Systems Interconnection. It is a reference model that describes
how information from a software application in one computer moves through a
physical medium to the software application in another computer.
It was developed by the ISO (International Organization for Standardization)
in the late 1970s.
The OSI model describes how data flows from one computer through a network
to another computer.
The OSI model consists of seven separate but related layers, each of which
defines a part of the process of moving information across a network.
Each layer relies on the next lower layer to perform more primitive functions.
Each layer is self-contained, so that task assigned to each layer can be performed
independently.
Each layer provides services to the next higher layer.
Changes in one layer should not require changes in other layers.
1. Physical Layer: The lowest layer of the OSI reference model is the physical layer.
It is responsible for the actual physical connection between the devices. The
physical layer contains information in the form of bits. It is responsible for
transmitting individual bits from one node to the next. When receiving data, this
layer will get the signal received and convert it into 0s and 1s and send them to the
Data Link layer, which will put the frame back together.
2. Data Link Layer: The data link layer is responsible for the node-to-node
delivery of the message. The main function of this layer is to make sure data
transfer is error-free from one node to another, over the physical layer. When a
packet arrives in a network, it is the responsibility of DLL to transmit it to the
Host using its MAC address.
The packet received from the network layer is further divided into frames depending
on the frame size of the NIC (Network Interface Card). The DLL also encapsulates the sender’s
and receiver’s MAC addresses in the header.
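As an illustration of this framing idea, the sketch below packs a simplified Ethernet-style header (destination MAC, source MAC, EtherType) in front of a payload; the MAC addresses are made-up values and the layout is a conceptual simplification.

```python
# A simplified sketch of a data-link frame header carrying sender and receiver
# MAC addresses in front of the payload (Ethernet-style; addresses are made up).
import struct

def mac_to_bytes(mac: str) -> bytes:
    return bytes(int(part, 16) for part in mac.split(":"))

dst = mac_to_bytes("aa:bb:cc:dd:ee:ff")       # receiver's MAC address
src = mac_to_bytes("11:22:33:44:55:66")       # sender's MAC address
ethertype = 0x0800                            # 0x0800 = the payload is an IPv4 packet
payload = b"...packet handed down from the network layer..."

frame = struct.pack("!6s6sH", dst, src, ethertype) + payload
print(len(frame), "bytes; header:", frame[:14].hex())  # 14-byte header, then payload
```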
4. Transport Layer: The transport layer (Layer 4) ensures that messages are
transmitted in the order in which they are sent and that there is no duplication of data.
The main responsibility of the transport layer is to transfer the data completely
(process to process delivery). It receives the data from the upper layer and
converts them into smaller units known as segments. This layer can be termed as
an end-to-end layer as it provides a point-to-point connection between source and
destination to deliver the data reliably.
5. Session Layer: This layer is responsible for establishment of connection,
maintenance of sessions, authentication and also ensures security. The Session
layer is also used to establish, maintain and synchronizes the interaction
between communicating devices. Session layer adds some checkpoints when
transmitting the data in a sequence. If some error occurs at any point of the
transmission of data, then the re-transmission will take place from the
checkpoint. This process is known as synchronization and recovery; it ensures that the ends of
the messages are not cut prematurely and that data loss is avoided.
TCP/IP Model:
The TCP/IP model is a concise version of the OSI model. TCP/IP forms the base
of present-day internet.
TCP/IP was designed and developed by the U.S. Department of Defense (DoD) in the
1970s and is based on standard protocols. It stands for Transmission Control
Protocol/Internet Protocol.
TCP/IP specifies how data is exchanged over the internet by providing end-to-
end communications that identify how it should be broken into packets,
addressed, transmitted, routed and received at the destination. TCP/IP requires
little central management and is designed to make networks reliable with the
ability to recover automatically from the failure of any device on the network.
Internet Layer:
o An internet layer is the second layer of the TCP/IP model. This layer is
also known as the network layer.
o The main responsibility of the internet layer is to send the packets from
any network, and they arrive at the destination irrespective of the route
they take.
o The main protocols residing at this layer are IP, ICMP, and ARP.
Transport Layer:
o The transport layer is responsible for the reliability, flow control, and
correction of data which is being sent over the network.
Application Layer:
o The application layer is the topmost layer of the TCP/IP model. It is responsible for
handling high-level protocols; protocols such as HTTP, FTP, and SMTP work at this layer.
Pulse Code Modulation (PCM):
The parameters that can be changed in the carrier signal are either its amplitude, frequency, or
phase for an electronic communication system. Modulation is of two types; Analog Modulation
and Digital Modulation. Pulse code modulation is a type of digital modulation.
PCM is a digital method of representing analog signals. It’s widely used in digital audio,
telecommunications, and other applications where analog signals need to be converted to digital
form for processing, storage or transmission.
Pulse Code Modulation (PCM) is defined as the conversion of sampled analog signals into
digital signals which are in the form of a series of ‘on’ or ‘off’ amplitudes represented as Binary
0 and 1. An analog signal is a continuous wave, and the PCM signal is a wave with a series of
digits. Thus, we can define PCM as the modulation method that transmits the pulses in the form
of binary digits representing a code number. In the electrical pulse representation of these
binary digits, a binary digit '0' indicates the absence of a pulse, and a binary digit '1' indicates
the presence of a pulse.
The basic operations in the receiver section are regeneration of impaired signals,
decoding, and reconstruction of the quantized pulse train.
In Pulse Code Modulation, the signal is passed through the following components and steps to
convert into a digital signal.
LPF (Low Pass Filter): As the name implies, a filter passes a certain range of frequencies and
rejects the others. An LPF rejects the higher frequencies from the input signal and passes the
lower frequencies specified by the filter. This filters out the high-frequency components, which are
higher than the greatest frequency of the message signal. It is done to avoid any aliasing or
distortion in the input signal.
Sampler: In this step, the analog signal is sampled to convert it into discrete time signals. The
input signal of the PCM system is analog, which is a continuous time-varying signal. The analog
signal passes through the sampler, where it is sampled periodically. The sampler measures the
instantaneous value of the analog signal, converts it to the discrete symbols and sends it to the
quantizer.
Quantizer: Quantizing is a process of reducing the excessive bits and confining the data. When
the sampled output is given to the quantizer, it reduces the number of discrete symbols by
approximating each sample to the nearest value from a finite set of levels. The quantizer thus
performs a form of data compression, making the data suitable for storage and transmission.
Encoder: The digitization of the analog signal is done by the encoder. This step converts the
discrete signals into final binary signals in the form of ‘on’ or ‘off’ amplitudes represented as
0 and 1. It responds to each sample by generating a binary pulse or pattern. The combination of
the sampler, quantizer, and encoder works as an A/D (Analog to Digital) converter.
Encoding minimizes the bandwidth used.
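The short numerical sketch below walks through the transmitter chain just described (sample, quantize, encode) for a sine wave; the sampling rate, tone frequency, and 4-bit code size are assumed example values, not from the notes.

```python
# A minimal numerical sketch of the PCM transmitter chain: sample an analog sine
# wave, quantize each sample to 16 levels, and encode each level as a 4-bit word.
import math

fs = 8000          # sampling rate in Hz (assumed, as in telephony)
f = 1000           # frequency of the analog test tone in Hz
bits = 4           # bits per sample -> 2**4 = 16 quantization levels
levels = 2 ** bits

# Sampler: take the instantaneous value of the analog signal at regular instants.
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]

# Quantizer: approximate each sample (range -1..1) to the nearest of 16 levels.
quantized = [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]

# Encoder: represent each level as a 4-bit binary code word.
codewords = [format(q, f"0{bits}b") for q in quantized]
print(codewords)   # e.g. ['1000', '1101', '1111', ...]
```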
Regenerative repeater: This section increases the signal strength. The output of the channel
also has a regenerative repeater circuit to compensate for the signal loss, reconstruct the
signal, and increase its strength.
Decoder: The digitally encoded signal arrives at the receiver, where the noise is first removed
from the signal. Since the quantization process does not allow easy separation of the signal and
the noise, it is essential to remove the noise from the signal at the decoding stage. The decoder
works similarly to the demodulation process and converts the binary pulses back to the original
analog form of the signal.
Reconstruction Filter: After the digital-to-analog conversion is done by the regenerative circuit
and the decoder, a low-pass filter called the reconstruction filter is employed to get back the
original signal. The reconstruction filter helps in the smooth conversion of the digital signal
back to the original analog signal.
Thus, we can conclude that the PCM system converts the analog signal to the digital signal, removes
the noise, and converts it back to the analog signal as the output.
Serial and Parallel Transmission:
The process of sending data between two or more digital devices is known as data transmission.
There are two methods used for transferring data between computers: Serial Transmission and
Parallel Transmission.
The main distinction between these transmissions is that the data is transferred bit by bit in Serial
Transmission. In Parallel Transmission, the data is sent one byte (8 bits) or character at a time.
Serial Transmission:
A serial transmission transfers data one bit at a time, consecutively, via a communication channel
or computer bus in telecommunication and data transmission. On the other hand, parallel
communication delivers multiple bits as a single unit through a network with many similar
channels.
Typically, 8 bits are conveyed at a time in serial transmission, framed by a start bit and a
stop bit, which are usually 0 and 1 respectively (see the sketch after this list).
In this transmission, serial data cables are utilized to send data across extended distances.
All long-distance communication and most computer networks employ serial
communication.
In this transmission, the data is delivered in proper order.
The majority of communication systems use serial mode. Serial networks may be
extended over vast distances for far less money since fewer physical wires are required.
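The sketch referenced above frames one byte for asynchronous serial transmission with a start bit and a stop bit; it is a conceptual illustration only, not a driver for real hardware.

```python
# A minimal sketch of asynchronous serial framing: one byte is sent bit by bit,
# preceded by a start bit (0) and followed by a stop bit (1).

def frame_byte(value: int) -> list[int]:
    data_bits = [(value >> i) & 1 for i in range(8)]   # least significant bit first
    return [0] + data_bits + [1]                       # start bit + 8 data bits + stop bit

print(frame_byte(ord("A")))   # 'A' = 0x41 -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
# In parallel transmission, all 8 data bits would instead be placed on 8 separate
# wires in the same clock cycle.
```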
Parallel Transmission:
Parallel communication is a means of transmitting multiple binary digits (bits) simultaneously in
data transmission.
Parallel transmission is faster than serial transmission at transmitting bits. Parallel
transmission is used for short distances.
A parallel interface comprises parallel wires that individually contain data and other
cables that allow the transmitter and receiver to communicate. The wires for the same
transmission system are bundled into a single physical cable to simplify installation and
troubleshooting.
The data stream must be transmitted through n communication lines, which necessitates
using many wires. This is an expensive mode of transmission, hence it is usually limited
to shorter distances.
Serial Transmission vs. Parallel Transmission: Serial transmission sends data one bit at a time
over a single line, is cheaper, and suits long distances; parallel transmission sends several bits
at once over multiple lines, is faster, but is more expensive and limited to short distances.
Multiplexing:
Multiplexing is a method used to send multiple signals or information streams over a single
communication line or channel. It is a technique used to combine and send the multiple data
streams over a single medium. The process of combining the data streams is known as
multiplexing and hardware used for multiplexing is known as a multiplexer. Networks use
multiplexing to combine several signals, either digital or analogue, into a single composite signal
that is sent across a single medium, like radio waves or fiber optic cables. When the composite
signal reaches its destination, it is demultiplexed, and the individual signals are restored and
made available for processing.
Networks use a variety of multiplexing techniques, but at a conceptual level, they all operate in a
similar manner. The individual network signals are input into a multiplexer (mux) that combines
them into a composite signal, which is then transmitted through a shared medium. When the
composite signal reaches its destination, a demultiplexer (demux) splits the signal back into the
original component signals and outputs them into separate lines for use by other operations.
The transmission medium is used to send the signal from sender to receiver. There can
only be one signal on the medium at once. When several signals need to share a single
medium, the medium must be divided so that each signal has access to a certain amount
of the available bandwidth. When multiple signals share the common medium, there is a
possibility of collision. Multiplexing concept is used to avoid such collision.
Types of Multiplexing:
1. Frequency Division Multiplexing (FDM)
2. Time Division Multiplexing (TDM)
3. Wavelength Division Multiplexing (WDM)
FDM: Frequency division multiplexing is defined as a type of multiplexing where the
bandwidth of a single physical medium is divided into a number of smaller, independent
frequency channels.
In this type, the total bandwidth available in a communication medium is divided into frequency
bands. Each individual signal is assigned a unique frequency range and is transmitted
simultaneously with the others, each using its own band, so there is no overlap.
A single transmission medium is subdivided into several frequency channels, and each frequency
channel is given to different devices.
Using the modulation technique, the input signals are transmitted into frequency bands and then
combined to form a composite signal.
Ex: A traditional television transmitter, which sends a number of channels through a single cable,
uses FDM.
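As a conceptual illustration of FDM, the sketch below shifts two message signals onto different (assumed) carrier frequencies and sums them into one composite signal; the sample rate, tones, and carriers are made-up example values.

```python
# A minimal numerical sketch of FDM: two signals, each modulated onto its own
# carrier frequency, are added into one composite signal on a shared medium.
import math

fs = 100_000                      # sample rate of the shared medium, in Hz
t = [n / fs for n in range(200)]  # a short window of time samples

signal_a = [math.sin(2 * math.pi * 500 * x) for x in t]   # first message signal (500 Hz)
signal_b = [math.sin(2 * math.pi * 800 * x) for x in t]   # second message signal (800 Hz)

carrier_a = 10_000   # Hz - frequency band assigned to signal A
carrier_b = 20_000   # Hz - frequency band assigned to signal B

# Amplitude-modulate each signal onto its own carrier, then add them together.
composite = [
    a * math.cos(2 * math.pi * carrier_a * x) + b * math.cos(2 * math.pi * carrier_b * x)
    for a, b, x in zip(signal_a, signal_b, t)
]
print(len(composite), "composite samples share one channel")
```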
TDM: This type involves dividing the time available on a channel into different time slots. Each signal
is allocated a specific slot and can transmit its data during that slot only. This ensures that the
signals do not interfere with each other since they are transmitted at different times.
In Time Division Multiplexing technique, the total time available in the channel is distributed
among different users. Therefore, each user is allocated a different time interval, known as a
time slot, during which data is to be transmitted by the sender.
The Time Division Multiplexing technology transmits data one piece at a time instead of all at
once.
In TDM, the signal is transmitted in the form of frames. Frames contain a cycle of time slots in
which each frame contains one or more slots dedicated to each user.
TDM is used in telephone networks to transmit multiple voice calls over a single communication
link by allocating each call a specific time slot.
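A minimal sketch of synchronous TDM: three byte streams share one channel by taking turns, one slot per user per frame; the user names and data are made up for illustration.

```python
# A minimal sketch of synchronous TDM: each frame carries one time slot per user.

users = {
    "A": list(b"AAAA"),
    "B": list(b"BBBB"),
    "C": list(b"CCCC"),
}

frames = []
for slot in range(4):                                    # one iteration = one frame
    frame = [users[u][slot] for u in ("A", "B", "C")]    # one slot per user, in rotation
    frames.append(frame)

print(frames)    # [[65, 66, 67], [65, 66, 67], ...] - bytes interleaved slot by slot

# The demultiplexer reverses the process by reading the same slot position of every frame.
recovered_a = bytes(frame[0] for frame in frames)        # b'AAAA'
print(recovered_a)
```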
WDM: Similar to FDM, WDM is used primarily with fiber optic cables. Different signals are
transmitted simultaneously at different wavelengths (or colors) of light. This is a highly efficient
method for increasing the amount of data that can be transmitted over a single fiber.
Each signal is carried on a different wavelength of light, and the resulting signals are combined
onto a single optical fiber for transmission. At the receiving end, the signals are separated by
their wavelengths, demultiplexed and routed to their respective destinations.
Multiplexing and demultiplexing can be achieved by using a prism. A prism can perform the role
of a multiplexer by combining the various optical signals to form a composite signal, and the
composite signal is transmitted through a fiber optical cable.
One solution is to make a point-to-point connection between each pair of devices or between a
central device and every other device. But when applied to a very large network, these methods
are impractical and wasteful. To overcome this, networking introduced switching.
Switches: A switch is a device that directs incoming data from multiple input ports
to a specific output port that takes the data to its intended destination.
Switching: Switching is the process of transferring data packets from one device to another in
a network, or from one network to another, using specific devices called switches.
In large networks, there can be multiple paths from sender to receiver. The switching technique
will decide the best route for data transmission. Switching technique is used to connect the
systems for making one-to-one communication.
A computer user experiences switching all the time. For example, when accessing the Internet
from a computer, whenever the user requests a webpage, the request is processed through the
switching of data packets.
A switch is a specific type of computer hardware that makes it easier to switch, or move,
incoming data packets to the appropriate location. In the OSI Model, a switch operates at the
Data Link layer. Incoming data packets from a source computer or network are mostly handled
by a switch, which also selects the proper port via which the packets will travel to their
destination computer or network.
Circuit Switching: Circuit switching is a mechanism of assigning a predefined path from the source
node to the destination node for the entire period of the connection. This is a switching technique
that establishes a dedicated path between sender and receiver. In this technique, once the
connection is established, the dedicated path remains in existence until the connection is terminated.
Once we establish the connection, we can transfer data between devices over the dedicated path.
This path typically comprises a series of interconnected switches or nodes that route the data to
its destination.
In the case of the circuit switching technique, when any user wants to send data, voice, or video,
a request signal is sent to the receiver, and the receiver sends back an acknowledgment to confirm
the availability of the dedicated path. After the acknowledgment is received, the data is
transferred over the dedicated path.
Circuit switching is used in the public telephone network. It is used for voice transmission. Data
is transferred at a fixed rate in circuit switching technology.
Circuit switching isn’t commonly used in computer networks, as it isn’t very efficient for data
transmission. We reserve the dedicated path for the entire duration of the communication.
Therefore, we waste a significant amount of bandwidth during those times. Additionally, circuit
switching is not well-suited for networks with high traffic volumes.
Circuits can be permanent or temporary. Applications which use circuit switching may have to
go through three phases:
Establish a circuit
Transfer the data
Disconnect the circuit
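The toy sketch below (not a real signalling protocol) walks through these three phases: reserving a path of switches, transferring data over it, and releasing it. The switch names are hypothetical.

```python
# A toy sketch of the three phases of circuit switching.

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.busy = False

path = [Switch("S1"), Switch("S2"), Switch("S3")]   # hypothetical switches on the route

# 1. Establish a circuit: every switch on the path must be free, then it is reserved.
if all(not s.busy for s in path):
    for s in path:
        s.busy = True
    print("circuit established through", [s.name for s in path])

# 2. Transfer the data over the dedicated path.
for chunk in (b"hello", b"world"):
    print("sending", chunk, "over the reserved path")

# 3. Disconnect the circuit: release the switches for other connections.
for s in path:
    s.busy = False
print("circuit released")
```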
Advantages: Circuit switching provides a dedicated communication path between two devices
for the duration of the communication. Hence, we reserve the bandwidth for the entire
conversation. This results in guaranteed bandwidth, which can be important for applications that
require a constant data rate.
As we reserve the dedicated communication for the entire conversation, there’s no packet loss.
Finally, circuit switching provides predictable performance.
Disadvantages: Circuit switching requires the dedicated communication path to be reserved for
the entire duration of the communication. This results in inefficient use of bandwidth, as the
dedicated path remains reserved even during periods when no data is being transmitted.
Circuit switching isn’t well-suited for networks with high traffic volumes. This limits the
scalability of circuit switching in large networks.
It requires dedicated resources, such as switches or nodes, to establish the dedicated
communication path. This can result in high costs for establishing and maintaining circuit-
switched networks.
Message Switching: Message switching is a method of data transmission that was popular in the
early days of networking, before the development of packet switching. In message switching, the
message is not divided into smaller blocks or packets.
In this technique a message is transferred as a complete unit and routed through intermediate
nodes at which it is stored and forwarded. In this technique, there is no establishment of a
dedicated path between the sender and receiver.
The destination address is appended to the message. Message Switching provides a dynamic
routing as the message is routed through the intermediate nodes based on the information
available in the message.
Each and every node stores the entire message and then forwards it to the next node. This type of
network is known as a store-and-forward network.
Advantages: Message switches are programmed in such a way that they can provide the most
efficient routes.
Traffic congestion can be reduced because the message is temporarily stored in the nodes.
Data channels are shared among the communicating devices, which improves the efficiency of
using the available bandwidth.
It’s a simple method of data transmission that doesn’t require complex routing algorithms or
network management techniques. This makes it easy to implement and manage, particularly in
small or low-bandwidth networks.
Disadvantages: The message switches must be equipped with sufficient storage to enable them to
store the messages until the message is forwarded.
Long delays can occur due to the store-and-forward mechanism used by the message
switching technique.
It requires more network resources for each message. This means that message-switching
networks may be unable to support large numbers of devices or high-bandwidth applications.
Packet Switching: Packet switching is a method used to transmit data over a network. We
divide data into small packets and transmit them over the network independently. Each packet
contains the data and destination address information required to route the packet to its
destination.
In packet switching, each packet travels separately through the network and can take different
paths to reach its destination. This approach allows for more efficient use of network resources
because we can transmit multiple packets simultaneously over the same network.
It is a technique in which the message is not sent in one go; instead, it is divided into smaller
pieces, and they are sent individually.
The message is split into smaller pieces known as packets, and each packet is given a unique
number to identify its order at the receiving end. Every packet contains some information in its
headers, such as the source address, destination address, and sequence number. Packets travel
across the network, taking the shortest possible path. All the packets are reassembled at the
receiving end in the correct order.
If any packet is missing or corrupted, a message is sent to the sender asking it to resend the
packet. If the packets arrive in the correct order, an acknowledgment message is sent.
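As an end-host illustration of this process, the sketch below splits a message into numbered packets with (assumed) source and destination addresses, shuffles them to mimic packets taking different routes, and reassembles them by sequence number.

```python
# A minimal sketch of packetizing a message and reassembling it by sequence number.
import random

def packetize(message: bytes, size: int, src: str, dst: str):
    """Split a message into packets; each header carries addresses and a sequence number."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

packets = packetize(b"packets may arrive out of order", size=8,
                    src="10.0.0.1", dst="10.0.0.2")
random.shuffle(packets)    # packets can take different routes and arrive out of order

# The receiver sorts by sequence number and reassembles the original message.
reassembled = b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
print(reassembled)
```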
Advantages: In packet switching technique, switching devices do not require massive secondary
storage to store the packets, so cost is minimized to some extent. Therefore, we can say that the
packet switching technique is a cost-effective technique.
It allows multiple packets to be transmitted simultaneously over the network, making more
efficient use of the available bandwidth.
Packet switching is a robust and reliable method of data transmission. If one packet is lost or
delayed, it doesn’t affect the transmission of other packets, as we route packets independently
through the network.
Disadvantages: The protocols used in the packet switching technique are very complex and
require a high implementation cost.
If the network is overloaded or corrupted, then it requires retransmission of lost packets. It can
also lead to the loss of critical information if errors are not recovered.