EVOLUTION OF
COMPUTER NETWORKS
COURSE TITLE
COMPUTER NETWORKS
SUBMITTED TO
SIR DR. ANWAR
SUBMITTED BY
M. AWAIS
ROLL NO.
BSF2204696
BSIT 6th
Evolution of Computer Networks
The world’s current population is around 8.1 billion people, and the number of devices connected to the internet is estimated at over 50 billion, implying that each person uses roughly six internet-connected devices. This number has been growing ever since the first network, ARPANET, was built. In today’s world, one can hardly imagine life without the internet. In this article, we will look at how computer networks have evolved over time.
What is Computer Networking?
A computer network is a collection of computers capable of transmitting, receiving,
and exchanging voice, data, and video traffic. Because of the capability of computer
networking, everything is becoming more automated and capable of communicating and
managing itself.
Without computer networks, you would not be able to read this article simply by searching for the topic and getting results in a matter of milliseconds. It is the internet’s powerful network that lets you use Google and YouTube and access other information with just a few clicks.
Computer networking is central to the constant evolution of today’s information technology (IT) landscape, and network and communication technologies have been influential in this rise. Computer networking makes the interconnection of endpoints and devices possible on local area networks (LANs) or wide area networks (WANs), enabling interaction, communication, and resource sharing between businesses, service providers, and consumers. To understand how networking became as essential as it is today, it is important to study its origins. The vastness of computer networking makes its exact origins difficult to pinpoint; however, since the late 1950s, the impact of networks on technological evolution has become increasingly important.
Networking in the 1960s
The 1960s marked the emergence of computer networking, laying the groundwork for
modern communication technologies. One of the most important milestones of this decade
was the development of the Advanced Research Projects Agency Network (ARPANET) in
1969, which became the foundation of the internet. The launch of Telstar 1 in 1962, the first
communications satellite, demonstrated the potential for global data transmission. In the same
year, the first commercial touch-tone phone was introduced, replacing rotary dial systems and
improving telecommunication efficiency.
Another crucial development in networking was the publication of the first Request for
Comments (RFC) document, which established guidelines for defining computer
communication networks and protocols. As a result, the Network Control Protocol (NCP) was specified, becoming ARPANET’s first transport protocol and enabling data exchange
between connected computers. Additionally, IBM’s System/360, a mainframe computing
system introduced in 1964, contributed to the evolution of networking by allowing large-
scale data processing in businesses and research institutions.
ARPANET
The Advanced Research Projects Agency Network (ARPANET) was the first operational packet-switched network, using a then-revolutionary method of transmitting data by breaking it into smaller packets. Funded by the U.S. Department of Defense’s ARPA, it was originally designed to support research and academic institutions by enabling remote access to shared computing resources. Many of the network protocols still in use today were initially developed for ARPANET.
The first successful ARPANET transmission occurred on October 29, 1969, between UCLA
(University of California, Los Angeles) and SRI (Stanford Research Institute). Though the
system crashed after sending only the first two letters of the intended message “LOGIN”, this
event was a groundbreaking moment in computer networking history.
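The packet-switching idea that ARPANET pioneered can be sketched in a few lines. The snippet below is a simplified illustration (not ARPANET’s actual packet format): a message is broken into small numbered packets, which can arrive out of order and still be reassembled at the destination.

```python
# Toy packet switching: split a message into numbered packets,
# then reassemble them even if they arrive out of order.
# (Illustration only -- not ARPANET's real packet format.)

def packetize(message: bytes, size: int = 4):
    """Break a message into (sequence_number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Sort by sequence number and rejoin the chunks."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = packetize(b"LOGIN")   # [(0, b'LOGI'), (1, b'N')]
packets.reverse()               # simulate out-of-order arrival
print(reassemble(packets))      # b'LOGIN'
```

Sequence numbers are what let the receiver restore order: the network is free to deliver packets along different paths at different times.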
UNIX Operating System
Another major innovation of 1969 was the development of the UNIX operating system at Bell Laboratories (AT&T’s research division). UNIX became significant in the history of networking because it supported multi-user and multitasking environments, which were essential for early networked systems.
In 1973, UNIX was rewritten largely in the C programming language, making it one of the first operating systems implemented in a high-level language and therefore far more portable and adaptable across different hardware. It gained popularity in academic and enterprise computing environments during the 1970s, eventually becoming a fundamental part of many networking infrastructures. A basic UNIX system at the time consisted of a PDP-11 minicomputer connected to dumb terminals, allowing multiple users to access the same machine.
Character Encoding Systems
As computer networking expanded, the need for standardized character encoding became evident. In the early 1960s, IBM introduced the Extended Binary Coded Decimal Interchange Code (EBCDIC), an 8-bit character encoding system. Around the same time, the American Standard Code for Information Interchange (ASCII) was developed as a competing standard.
The American National Standards Institute (ANSI) officially standardized ASCII in 1968.
Although it was a 7-bit encoding system, ASCII gained widespread adoption and eventually
became the dominant character encoding standard for computing and networking
technologies. ASCII’s flexibility and compatibility contributed to its continued use in modern
internet protocols, programming languages, and text processing systems.
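ASCII’s 7-bit design is easy to see in practice. The short Python snippet below (a modern illustration, not part of the original standardization work) shows that each ASCII character maps to a code point below 128, i.e. a value that fits in 7 bits:

```python
# Each ASCII character maps to a 7-bit code (values 0-127).
text = "RFC"
codes = [ord(c) for c in text]
print(codes)                                   # [82, 70, 67]
assert all(code < 128 for code in codes)       # 7 bits suffice
assert text.encode("ascii") == bytes(codes)    # ASCII bytes equal code points
```

This is why ASCII text survives unchanged in so many protocols: the codes fit comfortably in a byte, leaving the eighth bit free for other uses.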
Networking in the 1970s
The 1970s was a decade of significant advancements in networking technology, leading to the
development of protocols and systems that continue to shape modern communication. One of
the most impactful innovations was the introduction of Ethernet, which became the
foundation of local area networks (LANs). Additionally, ARPANET experienced rapid
expansion, enabling international connectivity and the first-ever email transmission. The
decade also saw the development of Transmission Control Protocol (TCP), which later
evolved into TCP/IP, forming the backbone of the internet.
Ethernet (Xerox)
In 1973, Xerox’s Palo Alto Research Center (PARC) pioneered Ethernet, a networking technology that allowed multiple computers to communicate within a local network. The original experimental Ethernet operated at 2.94 Mbps, but it was not commercially implemented at that time.
By 1979, the DIX consortium—a collaboration between Digital Equipment Corporation,
Intel, and Xerox—was established. The consortium worked to define a standardized Ethernet
specification, leading to the release of the 10 Mbps Ethernet standard in 1980. This
specification laid the groundwork for modern LAN technology, which became widely
adopted in businesses and institutions.
ARPANET
Throughout the 1970s, ARPANET experienced rapid growth as universities and government
institutions recognized its potential. The network was declared operational in 1975, marking a
major milestone in networking history. Satellite links were integrated into ARPANET,
allowing computers from other countries to connect to the network, which helped establish
global communication capabilities.
A notable event in ARPANET’s history occurred in 1971, when the first email was sent over
the network. This breakthrough introduced a new method of digital communication that
would later become one of the most widely used internet applications. As ARPANET
expanded, it continued to serve as a testing ground for advancements in networking
technology.
Transmission Control Protocol (TCP)
The success of ARPANET led to the emergence of multiple packet-switched networks.
However, these networks could not communicate with one another due to differences in their
protocols and equipment. To address this issue, Transmission Control Protocol (TCP) was
developed to enable network interoperability.
By 1977, TCP/IP was successfully tested on ARPANET, allowing different networks to
communicate seamlessly. This development was crucial in shaping the internet by enabling
inter-network communication. TCP/IP eventually became the standard networking protocol,
ensuring reliable data transmission between computers across different networks.
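A feel for what TCP provides, a reliable two-way byte stream between endpoints, can be had from a minimal loopback echo using Python’s standard socket API. This is a modern illustration, not 1970s ARPANET code; the port number is chosen automatically by the operating system.

```python
import socket
import threading

# Minimal TCP loopback demo: a server echoes back whatever it
# receives, uppercased, over a reliable byte stream.

def serve(sock):
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())           # echo back, uppercased

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                   # port 0: OS picks a free port
srv.listen(1)
threading.Thread(target=serve, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hello, arpanet")
print(cli.recv(1024))                        # b'HELLO, ARPANET'
cli.close()
srv.close()
```

Everything TCP does underneath (sequencing, acknowledgements, retransmission) is invisible here; the application simply sees bytes arrive intact and in order.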
Networking in the 1980s
The 1980s saw a major shift in networking with the rise of client-server architectures,
reducing reliance on mainframe computing. This decade also marked significant
advancements in ARPANET, the emergence of the World Wide Web (WWW), the growth of
NSFNET, and the evolution of Ethernet as the dominant local area network (LAN) standard.
Additionally, the development of the Network File System (NFS) and Token Ring topology
played key roles in shaping network communication.
ARPANET
In 1983, ARPANET was split into two separate networks: MILNET for military use and a
civilian network that retained the name ARPANET. However, by the mid-1980s, other
networks started gaining dominance, leading to ARPANET’s decline. In 1986, the National
Science Foundation Network (NSFNET) was introduced, becoming the new backbone of the
internet and gradually replacing ARPANET.
As commercial networks and private network providers expanded, ARPANET lost its significance. It was formally decommissioned in 1990, marking the end of the network that laid the foundation for the internet.
World Wide Web (WWW)
As interest in networked information grew, many networks and information systems developed around the world, but they used different protocols and formats, making it difficult to share documents seamlessly. To address this, Tim Berners-Lee and colleagues at CERN in Switzerland proposed the World Wide Web (WWW) in 1989.
The WWW is a system of interconnected web pages linked by hypertext. A hypertext link allows users to navigate between different web pages, even if they belong to different websites. This concept, combined with advancements in internet protocols, drove the transition from ARPANET-era networking to the web, shaping the modern internet.
NSFNET
Introduced in 1986, the National Science Foundation Network (NSFNET) became the new
backbone of the public internet, replacing ARPANET. It was a more capable and scalable
network designed primarily for academic and research purposes. However, to support the
growing demand for commercial use, NSFNET was eventually divided into for-profit and
not-for-profit networks in the early 1990s, allowing for the development of the commercial
internet.
Evolution of Ethernet
Ethernet continued to evolve and become the dominant LAN technology during the 1980s. The Institute of Electrical and Electronics Engineers (IEEE) launched Project 802 to establish a unified LAN standard, and as part of this initiative the IEEE 802.3 working group was dedicated to Ethernet development.
In 1983, IEEE released the 802.3 10Base5 Ethernet standard, commonly known as Thicknet, which used thick coaxial cables for data transmission. This was the first commercially available Ethernet standard.
By 1985, the 10Base2 Ethernet standard—also known as Thinnet—was introduced, using
thinner coaxial cables, making Ethernet more practical for widespread adoption. These
advancements solidified Ethernet’s position as the standard networking technology for LANs.
Network File System (NFS)
Developed in 1985, the Network File System (NFS) allowed computers to access files over a network as if they were stored locally. The introduction of NFS led to a surge in diskless UNIX workstations equipped with built-in Ethernet interfaces, which played a major role in establishing UNIX as the dominant operating system in academic and professional computing environments.
Token Ring Topology
IBM introduced its Token Ring networking technology in 1982 and later submitted it to the IEEE for standardization. The standard (IEEE 802.5) was finalized in 1984, and by 1985 Token Ring had reached the market as an alternative to Ethernet. Token Ring used a token-passing method, ensuring that only one device transmitted data at a time and thereby avoiding network collisions. Although it gained some adoption, Ethernet eventually outpaced Token Ring due to its simplicity and cost-effectiveness.
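The token-passing idea can be sketched as a toy simulation (an illustration only, not the actual IEEE 802.5 protocol): the token circulates around the ring, and only the station currently holding it may transmit, so no two stations ever transmit at once.

```python
# Toy token-passing ring: the token visits stations in order, and only
# the current holder may send a queued frame. (Not real IEEE 802.5.)

stations = ["A", "B", "C", "D"]
queued = {"B": "hello", "D": "world"}      # frames waiting to be sent

transmissions = []
for i in range(8):                         # token circles the ring twice
    holder = stations[i % len(stations)]
    if holder in queued:                   # holder transmits, then releases
        transmissions.append((holder, queued.pop(holder)))

print(transmissions)                       # [('B', 'hello'), ('D', 'world')]
```

Because transmission rights are serialized by the token, collisions cannot occur, at the cost of every station waiting its turn even when the ring is idle elsewhere.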
Networking in the 1990s to Today
The 1990s and beyond marked a period of rapid advancements in networking technology,
with Ethernet continuing to be the dominant local area network (LAN) technology. As the
demand for higher speeds and greater bandwidth increased, new innovations like Full-Duplex
Ethernet, Fast Ethernet, and Gigabit Ethernet emerged, significantly improving network
efficiency. At the same time, new technologies such as Voice over IP (VoIP) and wireless
communication transformed how data and voice were transmitted, laying the foundation for
modern networking.
Full-Duplex Ethernet
In traditional Ethernet, data could only travel in one direction at a time, causing collisions
when multiple devices attempted to send data simultaneously. To solve this, Full-Duplex
Ethernet was introduced in 1992, allowing simultaneous sending and receiving of data,
effectively doubling network capacity to 20 Mbps.
As networks continued to expand, a standardized Full-Duplex Ethernet version was
developed in 1995 and finalized in 1997, greatly enhancing network efficiency and
performance. Around the same time, Grand Junction Networks introduced a commercial
Ethernet bus capable of 100 Mbps speeds. This advancement led the IEEE 802.3 working
group to establish the 802.3u 100Base-T Fast Ethernet standard, which allowed 100 Mbps
data transmission over fiber-optic and twisted-pair cables.
Gigabit Ethernet
While Fast Ethernet (100 Mbps) improved network speeds, businesses and data centers
needed even higher bandwidth. This led to the introduction of Gigabit Ethernet (1,000 Mbps)
in 1999, which was 10 times faster than Fast Ethernet. Due to its significantly improved
performance, Gigabit Ethernet quickly became the standard for wired local networks,
particularly in enterprise environments where high-speed data transmission was critical.
Voice over IP (VoIP)
Traditionally, voice communication relied on circuit-switched telephone lines, which were costly and inefficient. In the mid-1990s, researchers developed Voice over IP (VoIP), allowing voice data to be transmitted over the internet instead of standard phone lines. While initially an experimental concept, VoIP gained significant traction in the late 1990s as businesses realized its cost-saving potential. By routing voice telephone traffic over IP networks, VoIP revolutionized telecommunication and became a fundamental part of modern communication systems, powering services like internet calling, video conferencing, and business communication platforms.
Wireless Communication
Before the 1990s, computer networking relied mainly on wired connections. The need for wireless communication led to the first Wi-Fi standard (IEEE 802.11) in 1997, which provided wireless connectivity at speeds of up to 2 Mbps.
By 1999, the IEEE 802.11a amendment raised data rates to as much as 54 Mbps while operating on the 5 GHz frequency band, reducing interference and improving performance. This breakthrough paved the way for the widespread adoption of wireless networking, making it possible for laptops, mobile devices, and other wireless-enabled devices to connect to the internet without physical cables. Since then, Wi-Fi has evolved significantly, offering higher speeds, enhanced security, and greater reliability, making it an essential technology for both home and enterprise networking.
Computer Networking Today
From the first computer network, ARPANET, to the latest advancements like Web 3.0,
networking technology has continuously evolved in speed, reliability, and user experience.
Today, modern networks emphasize high-speed data transmission, security, and efficiency,
making communication seamless and more robust. Several key advancements have played a
crucial role in shaping the current state of networking.
Optical Fiber Cables
Optical fiber technology has transformed internet connectivity by replacing traditional copper coaxial cables with fiber-optic cables, which use light signals to transmit data over long distances with minimal loss. These cables function on the principle of Total Internal Reflection (TIR), in which light is continuously reflected within the fiber core, ensuring efficient data transmission. Compared to copper-based networks, fiber-optic cables provide higher speeds, lower latency, and greater reliability, making them ideal for modern broadband infrastructure. Leading service providers such as Reliance JIO and Airtel Xstream Fiber use fiber-optic technology to deliver high-speed internet connections of 1 Gbps or more.
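The TIR condition follows from Snell’s law: light stays inside the core when it strikes the core–cladding boundary at more than the critical angle, theta_c = arcsin(n_cladding / n_core), which requires the core to have the higher refractive index. The index values in the snippet below are typical illustrative figures, not taken from the text:

```python
import math

# Critical angle for total internal reflection, from Snell's law:
#   theta_c = arcsin(n_cladding / n_core), valid when n_core > n_cladding.
# The index values are typical illustrative figures (assumed, not sourced).
n_core, n_cladding = 1.48, 1.46
theta_c = math.degrees(math.asin(n_cladding / n_core))
print(round(theta_c, 1))   # ~80.6 degrees: rays striking the boundary at a
                           # larger angle from the normal are fully reflected
                           # and stay trapped in the core
```

Because the core and cladding indices are deliberately close, the critical angle is large, so only rays travelling nearly parallel to the fiber axis are guided, which keeps dispersion low over long distances.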
Li-Fi Technology
Li-Fi, or Light Fidelity, is an emerging wireless communication technology that transmits data using LED light instead of radio waves. This is achieved by rapidly switching LEDs on and off millions of times per second, a process unnoticeable to the human eye. Li-Fi offers significant advantages over traditional wireless communication methods: laboratory demonstrations have reached data transfer speeds of up to 100 Gbps, promising strong performance for high-bandwidth applications. Because light cannot pass through walls, Li-Fi also offers an additional layer of security, reducing the risk of unauthorized access or cyber threats.
Unlike radio waves, Li-Fi is also safer for human exposure and operates on a spectrum that is
1,000 times wider than the radio spectrum, reducing network congestion and interference.
The use of LED lights for transmission further improves energy efficiency. With its potential
applications in smart homes, hospitals, and IoT devices, Li-Fi is expected to complement Wi-
Fi technology in the near future.
Blockchain Technology
Blockchain is a decentralized and encrypted digital ledger system that records and links data
blocks in chronological order. It eliminates the need for a central authority by ensuring data
integrity, security, and transparency. Originally developed for cryptocurrencies like Bitcoin,
blockchain technology has expanded its reach into various industries, including finance,
supply chain management, healthcare, and cybersecurity.
In financial transactions, blockchain enables secure and tamper-proof digital exchanges
without intermediaries. In supply chain management, it helps track products from their origin
to final delivery, ensuring authenticity and reducing fraud. Healthcare institutions use
blockchain to protect patient records and ensure secure data sharing between organizations.
Its trustless and tamper-proof nature makes blockchain a foundational technology for the
decentralized web and digital ecosystems.
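The “records and links data blocks” idea can be sketched as a toy hash-linked ledger (an illustration, not Bitcoin’s actual block format): each block stores the SHA-256 hash of its predecessor, so tampering with any earlier block invalidates every link after it.

```python
import hashlib
import json

# Toy hash-linked ledger: each block records the hash of the previous
# block, so editing history breaks the chain. (Not Bitcoin's real format.)

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["pay Alice 5", "pay Bob 3"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def valid(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

assert valid(chain)
chain[1]["data"] = "pay Alice 500"   # tamper with history...
assert not valid(chain)              # ...and the chain no longer verifies
```

Real blockchains add proof-of-work or other consensus on top, but the tamper-evidence shown here comes purely from the chained hashes.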
Web 3.0
Web 3.0 represents the next evolution of the internet, emphasizing decentralization, artificial
intelligence, and blockchain-based applications. Unlike previous versions of the web, which
relied on centralized servers and data control by corporations, Web 3.0 aims to create a more
open and user-driven internet.
This transformation allows users to have greater ownership and control over their data while
interacting with applications powered by machine learning and AI. Blockchain technology
plays a key role in Web 3.0 by enabling secure, peer-to-peer transactions and decentralized
applications (DApps) that function without traditional intermediaries. As artificial
intelligence becomes more advanced, Web 3.0 will further enhance online experiences by
providing intelligent, adaptive applications that respond to user needs in real time.
Firewall Technology
A firewall is a critical security system used to monitor and control incoming and outgoing
network traffic based on predefined security rules. Acting as a barrier between private
internal networks and public networks like the internet, firewalls protect devices from
unauthorized access, cyber threats, and malicious attacks. They create "choke points" where
web traffic is examined and filtered before being allowed to pass through. Some advanced
firewalls also maintain audit logs, keeping track of traffic history to analyze security threats.
As cybersecurity threats continue to evolve, firewalls remain an essential component of
network security, safeguarding both personal and enterprise-level systems.
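Rule-based filtering of the kind described above can be sketched as a first-match rule table with a default-deny policy. The rules and helper function below are hypothetical examples for illustration, not any real firewall’s syntax:

```python
# Toy packet filter: rules are checked in order, the first match decides,
# and anything unmatched is denied. (Hypothetical rules, not a real syntax.)

RULES = [
    {"action": "allow", "proto": "tcp", "port": 443},   # HTTPS
    {"action": "allow", "proto": "tcp", "port": 22},    # SSH
    {"action": "deny",  "proto": "tcp", "port": 23},    # Telnet
]

def filter_packet(proto, port):
    for rule in RULES:
        if rule["proto"] == proto and rule["port"] == port:
            return rule["action"]
    return "deny"                      # default-deny "choke point"

print(filter_packet("tcp", 443))       # allow
print(filter_packet("udp", 53))        # deny (no matching rule)
```

The default-deny fallback is the "choke point" behavior described above: traffic passes only if a rule explicitly permits it.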