This chapter outlines various computing paradigms, including High-Performance Computing (HPC), Parallel Computing, Distributed Computing, Cluster Computing, Grid Computing, Cloud Computing, Biocomputing, Mobile Computing, Quantum Computing, Optical Computing, Nano Computing, and Network Computing. Each paradigm is described in terms of its structure, functionality, and applications, emphasizing its role in solving complex problems and enhancing computational efficiency. The chapter highlights the evolution of computing technologies and their interconnections, showcasing the shift from traditional models to more advanced, network-based systems.
UNIT – I

Computing Paradigms
• High-Performance Computing
• In high-performance computing systems, a pool of processors (processor machines or central processing units [CPUs]) is networked with other resources such as memory, storage, and input/output devices, and the deployed software runs across the entire system of connected components.
• The processor machines can be of homogeneous or heterogeneous
type.
• The legacy meaning of high-performance computing (HPC) is supercomputing.
• Today, examples of HPC range from a small cluster of desktop or personal computers (PCs) to the fastest supercomputers.
• HPC systems are normally found in applications that must solve large scientific problems. Most of the time, the challenge in working with these kinds of problems is to perform a suitable simulation study.
• Parallel Computing
• Parallel computing is also an example of HPC. Here, a set of processors works cooperatively to solve a computational problem.
• These processor machines or CPUs are mostly of homogeneous type. Therefore, this definition is essentially the same as that of HPC and is broad enough to include supercomputers that have hundreds or thousands of processors interconnected with other resources.
• One can differentiate between conventional (also known as serial or sequential) computers and parallel computers by the way applications are executed.
• In serial or sequential computers, the following apply:
• The application runs on a single computer (a single processor machine with one CPU).
• A problem is broken down into a discrete series of instructions.
• Instructions are executed one after another.
• In parallel computing, since multiple processor machines are used simultaneously, the following apply (a minimal sketch follows this list):
• The application runs using multiple processors (multiple CPUs).
• A problem is broken down into discrete parts that can be solved
concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part are executed simultaneously on different
processors.
• An overall control/coordination mechanism is employed.
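As a concrete illustration of these steps, here is a minimal Python sketch (the function name and chunk sizes are illustrative, not from the original text) that breaks a summation into discrete parts, executes the parts simultaneously on multiple processors, and combines the results under an overall coordination mechanism (the process pool):

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # each worker executes its own series of instructions on one part
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # break the problem (summing 0..999999) into discrete parts
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:            # multiple CPUs, coordinated by the pool
        parts = pool.map(partial_sum, chunks)  # parts are solved concurrently
    print(sum(parts))                          # combine the partial results
```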
• Distributed Computing
• Distributed computing is also a computing system consisting of multiple computers or processor machines connected through a network, which can be homogeneous or heterogeneous, but which runs as a single system.
• The connectivity can be such that the CPUs in a distributed system are physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network.
• The heterogeneity in a distributed system supports any number of
possible configurations in the processor machines, such as
mainframes, PCs, workstations, and minicomputers.
• The goal of distributed computing is to make such a network work as
a single computer.
• Distributed computing systems are advantageous over centralized systems because they support the following characteristic features:
• Scalability: It is the ability of the system to be easily expanded by
adding more machines as needed, and vice versa, without affecting
the existing setup.
• Redundancy or replication: Several machines can provide the same services, so that even if one is unavailable (or fails), work does not stop, because other similar computing resources remain available (as sketched below).
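A minimal sketch of redundancy from the client's point of view, assuming a hypothetical service replicated at three addresses (the hosts, port, and timeout below are illustrative):

```python
import socket

# hypothetical replicas, each offering the same service
REPLICAS = [("10.0.0.1", 9000), ("10.0.0.2", 9000), ("10.0.0.3", 9000)]

def connect_to_service(replicas, timeout=1.0):
    """Return a connection to the first reachable replica."""
    for host, port in replicas:
        try:
            # work continues as long as at least one replica answers
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # this replica is down or unreachable; try the next
    raise ConnectionError("no replica available")
```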
• Cluster Computing
• A cluster computing system consists of a set of the same or similar
type of processor machines connected using a dedicated network
infrastructure.
• All processor machines share resources such as a common home directory and have software such as a Message Passing Interface (MPI) implementation installed, allowing programs to run across all nodes simultaneously (a minimal MPI sketch follows this section).
• This is also a kind of HPC category. The individual computers in a cluster are referred to as nodes. A cluster is realized as HPC because the individual nodes can work together to solve a problem larger than any single computer can easily solve; to do so, the nodes need to communicate with one another in order to work cooperatively and meaningfully on the problem at hand.
• If we have processor machines of heterogeneous types in a cluster, this kind of cluster becomes a subtype and still mostly falls within the HPC category.
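Since MPI was mentioned above, here is a minimal sketch of a program run across all nodes of a cluster, assuming the mpi4py Python binding is installed (the workload split is illustrative):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's identity within the cluster job
size = comm.Get_size()   # total number of cooperating processes

# each node computes a partial sum over its own slice of the problem
local = sum(range(rank * 1000, (rank + 1) * 1000))

# nodes communicate: partial results are combined on the root node
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("total:", total)
```

Launched with, for example, `mpiexec -n 4 python sum.py`, the same program runs simultaneously on all nodes, each working on its own part.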
• Grid Computing
• The computing resources in most organizations are underutilized yet are necessary for certain operations. The idea of grid computing is to make such unutilized computing power available to organizations that need it, thereby increasing the return on investment (ROI) in computing.
• Thus, grid computing is a network of computing or processor machines managed with software known as middleware, in order to access and use the resources remotely. The management of grid resources through the middleware is called grid services.
• Grid services provide access control, security, access to data
including digital libraries and databases, and access to large-scale
interactive and long-term storage facilities.
• Grid computing is popular for the following reasons:
• Its ability to make use of unused computing power makes it a cost-effective solution (reducing investment; only recurring costs)
• It offers a way to solve problems in line with any HPC-based application
• It enables heterogeneous computing resources to work cooperatively and collaboratively to solve a scientific problem
• Cloud Computing
• The computing trend moved toward the cloud from the concept of grid computing, particularly when large computing resources are required to solve a single problem, using the idea of computing power as a utility and other allied concepts. However, the essential difference between grid and cloud is that grid computing supports leveraging several computers in parallel to solve a particular application, while cloud computing supports leveraging multiple resources, including computing resources, to deliver a unified service to the end user.
• In cloud computing, the IT and business resources, such as servers, storage, network, applications, and processes, can be dynamically provisioned according to user needs and workload. In addition, while a cloud can provision and support a grid, a cloud can also support non-grid environments, such as a three-tier web architecture running traditional or Web 2.0 applications. A sketch of such dynamic provisioning follows.
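To make "dynamically provisioned according to user needs and workload" concrete, here is a minimal control-loop sketch; `current_load`, `provision_server`, and `release_server` are hypothetical stand-ins for a real provider's monitoring and provisioning APIs:

```python
import random
import time

def current_load():
    # hypothetical metric source; a real system would query monitoring
    return random.uniform(0.0, 1.0)

def provision_server():
    # hypothetical stand-in for a cloud provider's provisioning call
    print("provisioning one more server ...")

def release_server():
    print("releasing one idle server ...")

servers = 2
for _ in range(5):                     # one check per control interval
    load = current_load()
    if load > 0.8:                     # heavy workload: scale out
        provision_server()
        servers += 1
    elif load < 0.2 and servers > 1:   # light workload: scale in
        release_server()
        servers -= 1
    print(f"load={load:.2f} servers={servers}")
    time.sleep(0.1)
```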
• Biocomputing
• Biocomputing systems use the concepts of biologically derived or
simulated molecules (or models) that perform computational
processes in order to solve a problem. The biologically derived
models aid in structuring the computer programs that become part
of the application.
• Biocomputing provides the theoretical background and practical tools for scientists to explore proteins and DNA. DNA and proteins are nature's building blocks, but these building blocks are not used exactly as bricks; the function of the final molecule depends strongly on the order of these blocks. Thus, the biocomputing scientist works on finding the order suitable for various applications mimicking biology. Biocomputing should, therefore, lead to a better understanding of life and of the molecular causes of certain diseases.
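To illustrate how the order of the building blocks determines the result, here is a toy Python sketch that translates a DNA sequence into a protein using a small subset of the standard genetic code (the table below covers only a few codons):

```python
# toy subset of the standard genetic code (codon -> amino acid; '*' = stop)
CODON_TABLE = {"ATG": "M", "AAA": "K", "TTT": "F",
               "TGG": "W", "GGC": "G",
               "TAA": "*", "TAG": "*", "TGA": "*"}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):        # read three letters at a time
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "*":                          # a stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

# the same blocks in a different order yield a different molecule
print(translate("ATGAAATTTTGA"))  # -> MKF
print(translate("ATGTTTAAATGA"))  # -> MFK
```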
• Mobile Computing
• In mobile computing, the processing (or computing) elements are small (i.e., handheld devices), and communication between the various resources takes place over wireless media.
• Mobile communication for voice applications (e.g., the cellular phone) is widely established throughout the world and is witnessing very rapid growth in all its dimensions, including the number of subscribers of the various cellular networks. An extension of this technology is the ability to send and receive data across the various cellular networks using small devices such as smartphones. Numerous applications are built on this technology; for example, video calling or conferencing is one of the important applications that people prefer over existing voice-only communication on mobile phones.
• Mobile computing-based applications are becoming very important and are evolving rapidly with various technological advancements, because they allow users to transmit data from remote locations to other remote or fixed locations.
• Quantum Computing
• Manufacturers of computing systems say there is a limit to cramming more and more transistors into smaller and smaller spaces on integrated circuits (ICs), and thereby to doubling processing power about every 18 months.
• This problem will have to be overcome by a new quantum computing-based solution, which depends on quantum information and the rules that govern the subatomic world.
• Quantum computers promise, for certain problems, to be dramatically faster than even our most powerful supercomputers today. However, since quantum computing works differently at the most fundamental level than current technology, and although there are working prototypes, these systems have not so far proved to be alternatives to today's silicon-based machines.
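As a small taste of how quantum information differs from classical bits, the following NumPy sketch models a single qubit: applying a Hadamard gate to the |0> state yields an equal superposition, something no classical bit can represent (the vectors and gate below are standard textbook definitions):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # the |0> basis state

# Hadamard gate: maps |0> to the superposition (|0> + |1>) / sqrt(2)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0               # the qubit is now in superposition
probs = np.abs(psi) ** 2     # measurement probabilities for outcomes 0 and 1
print(probs)                 # -> [0.5 0.5]
```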
• Optical Computing
• An optical computing system uses photons in visible light or infrared beams, rather than electric current, to perform digital computations. An electric current flows at only about 10% of the speed of light.
• This limits the rate at which data can be exchanged over long
distances and is one of the factors that led to the evolution of optical
fiber.
• By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer could be developed that performs operations ten or more times faster than a conventional electronic computer.
• Nano computing
• Nano computing refers to computing systems constructed from nanoscale components. The silicon transistors in traditional computers may be replaced by transistors based on carbon nanotubes.
• The successful realization of nano computers depends on the scale and integration of these nanotubes or components. The issues of scale relate to the dimensions of the components; they are, at most, a few nanometers in at least two dimensions.
• The issues of integrating the components are twofold: first, the manufacture of complex arbitrary patterns may be economically infeasible, and second, nano computers may include massive quantities of devices. Researchers are working on all these issues to make nano computing a reality.
• Network Computing
• Network computing is a way of designing systems to take advantage of the latest technology and to maximize its positive impact on business solutions and on the ability to serve customers through a strong underlying network of computing resources.
• In any network computing solution, the client component of the networked architecture or application sits with the customer, client, or end user; in modern systems, it provides the essential set of functionality necessary to support the appropriate client functions at minimum cost and maximum simplicity.
• Unlike conventional PCs, such clients do not need to be individually configured and maintained according to their intended use. At the other end of the network architecture is a typical server environment that pushes the application's services to the client end.
• Almost all the computing paradigms discussed earlier are of this nature. Even in the future, if anyone invents a totally new computing paradigm, it will likely be based on a networked architecture, without which it is impossible to deliver the benefits to any end user.
