Completion of Module 1 Cloud Computing
Cloud Computing is defined as storing and accessing data and computing services over the
internet. It is the use of remote servers on the internet to store, manage, and process data rather
than local servers, so no data needs to be held on your personal computer. It is the on-demand
availability of computer services such as servers, data storage, networking, and databases. The
main purpose of cloud computing is to give many users access to data centres; users can also
access data from a remote server. Cloud computing is the delivery of different services through
the Internet, including tools and applications such as data storage, servers, databases,
networking, and software.
Consider travelling by bus or train: you buy a ticket for your destination and keep your seat until
you reach it. Other passengers likewise buy tickets and travel in the same vehicle, and it hardly
matters to you where they are going; when your stop comes, you get off, thanking the driver.
Cloud computing is just like that bus: it carries data and information for many different users and
allows each of them to use its services at minimal cost.
Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage
makes it possible to save them to a remote database. As long as an electronic device has access
to the web, it has access to the data and the software programs to run it. Cloud computing is a
popular option for people and businesses for a number of reasons including cost savings,
increased productivity, speed and efficiency, performance, and security.
Cloud computing is a virtualization-based technology that allows us to create, configure, and
customize applications via an internet connection. The cloud technology includes a development
platform, hard disk storage, software applications, and databases.
The term cloud refers to a network or the Internet; in other words, the cloud is something present
at a remote location. It is a technology that uses remote servers on the internet to store, manage,
and access data online rather than on local drives. The data can be anything, such as files,
images, documents, audio, video, and more. Cloud services can be provided over public and
private networks, i.e., WAN, LAN, or VPN (Virtual Private Network).
The following operations can be performed using cloud computing:
Developing new applications and services
Storage, back up, and recovery of data
Hosting blogs and websites
Delivery of software on demand
Analysis of data
Streaming videos and audios
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
execute on the cloud.
Cloud Computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software is not required to be installed
locally on the PC; hence, cloud computing makes our business applications mobile and
collaborative.
Low Cost
Cloud computing reduces cost because an IT company taking cloud services does not need to set
up its own infrastructure; it pays only as per its usage of resources.
Services in pay-per-use mode
Application Programming Interfaces (APIs) are provided to users so that they can access
services on the cloud through these APIs and pay charges according to their usage of the services.
On Demand Self Service
Cloud computing allows users to use web services and resources on demand; one can log in to a
website at any time and use them.
Low cost
There is no requirement for high-powered computers and technology because applications run on
the cloud, not on the user's PC. The cloud reduces software costs because there is no need to
purchase software for every computer in an organization, and it reduces both hardware and
software maintenance costs for organizations.
Mobility
Cloud computing allows us to easily access all cloud data via mobile.
Pay-Per-Use model
Cloud computing offers Application Programming Interfaces (APIs) through which users access
services on the cloud and pay charges as per their usage of the service (a small billing sketch
follows this list of advantages).
Unlimited storage capacity
The cloud offers a huge amount of storage capacity for storing important data such as
documents, images, audio, and video in one place.
Increased computing power
Cloud servers have very high capacity for running tasks and processing applications.
Updating
Instant software updates are possible, so users do not face the choice between obsolete software
and expensive upgrades.
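The pay-per-use model mentioned above can be made concrete with a minimal Python sketch of how charges might be computed from metered usage. The resource names and unit rates below are hypothetical, chosen only to illustrate the idea; real providers expose this through their own billing systems.

# Minimal pay-per-use billing sketch. The resource names and unit
# rates are hypothetical, chosen only to illustrate the model.
RATES = {
    "compute_hours": 0.05,   # currency units per server-hour
    "storage_gb":    0.02,   # per GB-month
    "bandwidth_gb":  0.01,   # per GB transferred
}

def monthly_charge(usage):
    """Charge a customer only for what was actually metered."""
    return sum(RATES[res] * qty for res, qty in usage.items())

if __name__ == "__main__":
    usage = {"compute_hours": 720, "storage_gb": 100, "bandwidth_gb": 50}
    print(f"Amount due: {monthly_charge(usage):.2f}")  # 36.00 + 2.00 + 0.50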
Cloud computing is the on-demand availability of computer system resources, especially data
storage (cloud storage) and computing power, without direct active management by the user. The
term is generally used to describe data centers available to many users over the Internet.
Cloud computing metaphor: the group of networked elements providing services need not be
individually addressed or managed by users; instead, the entire provider-managed suite of
hardware and software can be thought of as an amorphous cloud.
Computing
The ACM (Association for Computing Machinery) Computing Curricula 2005 and 2020 defined
"computing" as follows:
"In a general way, we can define computing to mean any goal-oriented activity requiring,
benefiting from, or creating computers. Thus, computing includes designing and building
hardware and software systems for a wide range of purposes; processing, structuring, and
managing various kinds of information; doing scientific studies using computers; making
computer systems behave intelligently; creating and using communications and entertainment
media; finding and gathering information relevant to any particular purpose, and so on. The list
is virtually endless, and the possibilities are vast."
NIST (National Institute of Standards and Technology) Definition of Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a
shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction. This cloud model is composed of five essential characteristics, three
service models, and four deployment models.
Essential Characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically without requiring human interaction
with each service provider.
Broad network access. Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile
phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand. There is a sense of location
independence in that the customer generally has no control or knowledge over the exact location
of the provided resources but may be able to specify location at a higher level of abstraction
(e.g., country, state, or data center). Examples of resources include storage, processing, memory,
and network bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by leveraging
a metering capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled,
and reported, providing transparency for both the provider and consumer of the utilized service.
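The rapid elasticity and measured service characteristics work together: metered utilisation drives decisions to provision capacity outward or release it inward. The following toy Python autoscaler sketches this under assumed thresholds and a synthetic demand trace; none of these numbers come from a real provider.

# Toy autoscaler illustrating rapid elasticity: capacity is provisioned
# outward and released inward commensurate with demand. Thresholds and
# the demand trace are made up for the example.
def scale(servers, demand, per_server_capacity=100):
    utilisation = demand / (servers * per_server_capacity)
    if utilisation > 0.80:                      # scale out when nearly saturated
        servers += 1
    elif utilisation < 0.30 and servers > 1:    # scale in when mostly idle
        servers -= 1
    return servers

servers = 2
for demand in [150, 210, 350, 90, 40]:          # requests per second (synthetic)
    servers = scale(servers, demand)
    print(f"demand={demand:>3}  ->  servers={servers}")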
Trends in Computing
Distributed Computing
Grid Computing
Cluster Computing
Utility Computing
Cloud Computing
Centralized System
Centralized systems use a client/server architecture in which one or more client nodes are
directly connected to a central server. This is the most common type of system in many
organisations, where a client sends a request to a company server and receives the response.
Decentralized System
Example – Bitcoin. Bitcoin is the most popular use case of a decentralized system: no single
entity or organisation owns the bitcoin network. The network is the sum of all the nodes, which
talk to each other to maintain the amount of bitcoin every account holder has.
Characteristics of Decentralized System
Lack of a global clock: Every node is independent of the others and hence runs and follows its
own clock.
Multiple central units (Computers/Nodes/Servers): More than one central unit which can
listen for connections from other nodes.
Dependent failure of components: the failure of one central node causes a part of the system to
fail, not the whole system.
Architecture of Decentralized System
peer-to-peer architecture – all nodes are peers of each other. No one node has supremacy over
other nodes.
master-slave architecture – One node can become a master by voting and help coordinate a part
of the system, but this does not mean the node has supremacy over the nodes it is coordinating.
Applications of Decentralized System
Private networks – peer nodes joined with each other to make a private network.
Cryptocurrency – Nodes join to become part of a system in which digital currency is exchanged
without any trace of who sent what to whom, or from what location.
Use Cases
Blockchain
Decentralized databases – The entire database is split into parts and distributed to different nodes
for storage and use. For example, records with names starting from 'A' to 'K' go to one node, 'L'
to 'N' to a second node, and 'O' to 'Z' to a third node (a sketch of this range-based routing
follows this list).
Cryptocurrency
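As a small illustration of the decentralized-database example above, this Python sketch routes records to nodes by the first letter of the name, using the A-K / L-N / O-Z split from the text; the node names are invented.

# Range-based partitioning sketch for a decentralized database, using
# the A-K / L-N / O-Z split from the text. Node names are hypothetical.
PARTITIONS = [("A", "K", "node-1"), ("L", "N", "node-2"), ("O", "Z", "node-3")]

def route(name):
    """Return the node responsible for a record, keyed by first letter."""
    first = name[0].upper()
    for lo, hi, node in PARTITIONS:
        if lo <= first <= hi:
            return node
    raise ValueError(f"no partition covers {name!r}")

for record in ["Alice", "Mohan", "Zara"]:
    print(record, "->", route(record))   # node-1, node-2, node-3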
Distributed Systems
A distributed system is a collection of independent computers that appears to its users as a single
coherent system.
Early computing was performed on a single processor. Uni-processor computing can be called
centralized computing.
Centralized data networks maintain all the data on a single computer in a single location; to
access the information, you must access the main computer of the system, known as the "server".
A distributed data network, on the other hand, works as a single logical data network installed
across a series of computers (nodes) located in different geographic locations. The nodes are not
connected to a single processing unit but are fully connected to one another, providing integrity
and accessibility of information from any point. In this system all the nodes contain information,
and all the clients of the system are on an equal footing. In this way, distributed data networks
can perform autonomous processing.
Maintenance - Centralized networks are the easiest to maintain, since there is only a single point
to manage; this is not the case for distributed networks, which are more difficult to maintain.
Stability - Centralized networks are comparatively unstable, since any problem that affects the
central server can generate chaos throughout the system. Distributed networks are more stable,
because the totality of the system's information is stored across a large number of nodes that
maintain equal conditions with each other.
Security - Distributed networks have a higher level of security, since a malicious attack would
have to compromise a large number of nodes at the same time. Because the information is
distributed among the nodes of the network, a legitimate change is reflected in the rest of the
nodes of the system, which accept and verify the new information; but if an illegitimate change
is made, the rest of the nodes detect it and do not validate the information. This consensus
between nodes protects the network from deliberate attacks or accidental changes of information.
Speed - Distributed systems have an advantage over centralized systems in terms of network
speed. Since the information is not stored in a central location, bottlenecks are less likely; in a
centralized system, when the number of people attempting to access a server exceeds what it can
support, waiting times grow and the whole system slows down.
Scalability - Centralized systems tend to present scalability problems since the capacity of the
server is limited and cannot support infinite traffic. Distributed systems have greater scalability,
due to the large number of nodes that support the network.
Availability – In centralized systems, if there are several requests, the server can break down
and no longer respond. Distributed systems, by contrast, can withstand significant pressure on
the network: all the nodes in the network have the data, so requests are distributed among the
nodes. The pressure therefore does not fall on a single computer but on the entire network, and
the total availability of the network is much greater than in the centralized case.
Reliability – In a centralized system, a server failure can cause failure of the entire system; in a
distributed system, if one machine crashes, the system as a whole can still survive. Higher
availability and improved reliability can thus be achieved in distributed systems.
Distributed Applications
Distributed applications consist of a set of processes that are distributed across a network of
machines and work together as an ensemble to solve a common problem. Several such
applications coordinate among themselves to address a particular problem. Historically and still
today, most applications are of the client-server type, with resource management centralized at
the server; the aim is to make this distributed. Peer-to-peer computing represents a movement
towards more truly distributed applications.
In the client-server model, different clients invoke a particular server.
Peer-to-Peer
Peer-to-peer (P2P) computing or networking is a distributed application architecture that
partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants
in the application. They are said to form a peer-to-peer network of nodes.
Peers make a portion of their resources, such as processing power, disk storage or network
bandwidth, directly available to other network participants, without the need for central
coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in
contrast to the traditional client-server model in which the consumption and supply of resources
is divided.
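A minimal in-memory Python sketch (with invented class and method names) of the idea that peers are both suppliers and consumers: each peer can share a file and fetch one from equally privileged neighbours, with no central server involved.

# Minimal sketch of a peer-to-peer network: every peer is both a
# supplier and a consumer of resources, with no central server.
class Peer:
    def __init__(self, name):
        self.name, self.files, self.neighbours = name, {}, []

    def share(self, filename, data):
        self.files[filename] = data          # act as a supplier

    def fetch(self, filename):
        if filename in self.files:           # already held locally
            return self.files[filename]
        for peer in self.neighbours:         # ask equally privileged peers
            if filename in peer.files:
                self.files[filename] = peer.files[filename]  # consume and cache
                return self.files[filename]
        return None

a, b, c = Peer("a"), Peer("b"), Peer("c")
a.neighbours, b.neighbours, c.neighbours = [b, c], [a, c], [a, b]
b.share("song.mp3", b"...bytes...")
print(a.fetch("song.mp3") is not None)   # True: served by peer b, not a server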
A peer-to-peer (P2P) network is one in which interconnected nodes ("peers") share resources
amongst each other without the use of a centralized administrative system. In a network based on
the client-server model, by contrast, individual clients request services and resources from
centralized servers.
Grid Computing
Grid computing is a group of networked computers which work together as a virtual
supercomputer to perform large tasks, such as analysing huge sets of data or weather modeling.
The term grid computing originated in the early 1990s as a metaphor for making computer power
as easy to access as an electric power grid. An electrical grid is an interconnected network for
delivering electricity from producers to consumers. It consists of generating stations, electrical
substations, high-voltage transmission lines, and distribution lines that connect individual
customers.
Grid computing is the use of widely distributed computer resources to reach a common goal.
Grid computing is distinguished from conventional high-performance computing systems such as
cluster computing in that grid computers have each node set to perform a different
task/application. Grid computers also tend to be more heterogeneous and geographically
dispersed (thus not physically coupled) than cluster computers. Although a single grid can be
dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are
a form of distributed computing whereby a "super virtual computer" is composed of many
networked loosely coupled computers acting together to perform large tasks.
Grid Computing can be defined as a network of computers working together to perform a task
that would be difficult for a single machine. All machines on that network work under the
same protocol to act like a virtual supercomputer. The task that they work on may include
analysing huge datasets or simulating situations which require high computing power.
Computers on the network contribute resources like processing power and storage capacity to the
network.
Grid Computing is a subset of distributed computing, where a virtual supercomputer comprises
machines on a network connected by some bus, mostly Ethernet or sometimes the Internet. It
can also be seen as a form of parallel computing where, instead of many CPU cores on a single
machine, the cores are spread across various locations.
Grid computing can also be viewed as a form of networking: unlike conventional networks that
focus on communication among devices, grid computing harnesses the unused processing cycles
of all computers in a network to solve problems too intensive for any stand-alone machine.
Grid computing represents a distributed computing approach that attempts to achieve high
computational performance by a non-traditional means. Rather than achieving high performance
computational needs by having large clusters of similar computing resources or a single high-
performance system, such as a supercomputer, grid computing attempts to harness the
computational resources of a large number of dissimilar devices. Grid computing typically
leverages the spare CPU cycles of devices that are not currently needed for a system's own
needs, and focuses them on the particular goal of the grid. While the few spare cycles from each
individual computer might not mean much to the overall task, in aggregate, the cycles are
significant.
Grid computing is a computing infrastructure that provides dependable, consistent, pervasive and
inexpensive access to computational capabilities.
Grid computing enables the virtualization of distributed computing and data resources such as
processing, network bandwidth and storage capacity to create a single system image, granting
users and applications seamless access to vast IT capabilities. Just as an Internet user views a
unified instance of content via the Web, a grid user essentially sees a single, large virtual
computer.
Utilising Underutilised Resources - In most organisations, many computing resources are idle
and underutilised most of the time (for example, most desktop computers are idle more than 95%
of the time [17]). Realising that these idle times are wasted and not profitable to the organisation,
grid computing provides a solution for exploiting underutilised resources. In addition to
processing resources, computing resources often also have a large amount of unused storage
capacity. Grid computing allows these unused capacities to be treated as a single virtual storage
medium, resolving the need for huge storage capacity within a particular application. The
performance of such an application is thus improved compared with running it on a single
computer.
Parallel CPU Capacity - The possibility of applying massive parallel CPU activity within an
application is one of the main exciting features of grid computing.
Resource Balancing – Grid computing groups multiple heterogeneous resources into a single
virtual resource. Furthermore, the grid also facilitates balancing these resources depending on
the requirements of the tasks; as a result, appropriate resources are selected based on execution
time and the priority of each task.
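Below is a small Python sketch, under invented node speeds and task costs, of how a grid scheduler might balance resources: higher-priority tasks are placed first, each on the node with the shortest estimated completion time.

# Resource-balancing sketch: tasks are matched to the node with the
# shortest estimated finish time, higher-priority tasks first. Node
# speeds and task costs are invented for the example.
nodes = {"fast-node": 2.0, "slow-node": 1.0}     # relative speed factors
busy_until = {n: 0.0 for n in nodes}             # when each node frees up

tasks = [("render", 8.0, 1), ("index", 4.0, 2)]  # (name, work units, priority)
for name, work, priority in sorted(tasks, key=lambda t: t[2]):
    # finish time on each node = time it frees up + work / speed
    best = min(nodes, key=lambda n: busy_until[n] + work / nodes[n])
    busy_until[best] += work / nodes[best]
    print(f"{name} (priority {priority}) -> {best}")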
The benefits of grid computing can be categorised into:
a) Business benefits
Faster time to obtain the results
Increase productivity
b) Technology benefits
Optimise existing infrastructure
Increase access to data and collaboration
Type of Grids
Grids have been divided into a number of types on the basis of their use.
Computational Grid: These grids provide secure access to a huge pool of shared processing
power, suitable for high-throughput applications and computation-intensive computing.
Data Grid: Data grids provide an infrastructure to support data storage, data discovery, data
handling, data publication, and data manipulation of large volumes of data actually stored in
various heterogeneous databases and file systems.
Collaboration Grid: With the advent of the Internet, there has been an increased demand for
better collaboration, and such advanced collaboration is possible using the grid. For instance,
people from different companies in a virtual enterprise can work on different components of a
CAD project without even disclosing their proprietary technologies.
Grid Components
Cluster Computing
A computer cluster may be a simple two-node system which just connects two personal
computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of
a Beowulf cluster which may be built with a few personal computers to produce a cost-effective
alternative to traditional high-performance computing. The developers used Linux, the Parallel
Virtual Machine toolkit and the Message Passing Interface library to achieve high performance
at a relatively low cost.
Types of Cluster
High Availability (HA) or Failover Cluster - These cluster models provide availability of
services and resources in an uninterrupted manner using the system's implicit redundancy. The
basic idea in this form of cluster is that if a node fails, applications and services can be made
available on other nodes. High availability clusters are used to protect one or more sensitive
applications, and serve as the base for mission-critical tasks, mail, file, and application servers.
They are also termed failover clusters: because computers often face failures, redundant
computer systems are employed so that when a single point malfunctions, the system still
appears completely reliable thanks to the redundant cluster elements. Through the
implementation of high availability clusters, systems gain extended functionality and provide
consistent computing services for complicated databases, business activities, customer services,
and network file distribution.
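A minimal Python sketch of the failover idea, with a simulated health flag standing in for a real monitor: the service is always served by the first healthy node, so a single-node failure is hidden by redundancy.

# Failover sketch for a high-availability cluster: if the active node's
# health check fails, the service is made available on a standby node.
class Node:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

def active_node(nodes):
    """Return the first healthy node; redundancy hides single failures."""
    for node in nodes:
        if node.healthy:
            return node
    raise RuntimeError("all cluster nodes are down")

cluster = [Node("primary"), Node("standby-1"), Node("standby-2")]
print(active_node(cluster).name)   # primary
cluster[0].healthy = False         # simulate a single-point failure
print(active_node(cluster).name)   # standby-1 takes over the service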
Load Balancing Cluster - This cluster distributes all the incoming traffic/requests for resources
among nodes that run the same programs. In this model, all the nodes are responsible for
tracking requests, and if a node fails, the requests are redistributed amongst the available nodes.
Such a solution is usually used on web server farms. A splitter is required to distribute the
requests of users to the nodes; it checks that each node carries a similar workload, and a request
is sent to the node with the fastest response time.
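The splitter's dispatch decision can be sketched in a few lines of Python; the response-time figures are synthetic, and the model of each job slowing a node slightly is deliberately crude.

# Sketch of the splitter in a load-balancing cluster: each request is
# sent to the node with the fastest recent response time.
response_ms = {"web-1": 12.0, "web-2": 7.0, "web-3": 20.0}

def dispatch(request):
    node = min(response_ms, key=response_ms.get)  # fastest responder wins
    response_ms[node] += 1.0   # crude model: each job slows a node slightly
    return node

for req in range(5):
    print(f"request {req} -> {dispatch(req)}")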
Distributed/Parallel Processing Cluster - This cluster model enhances availability and
performance for applications that have large computational tasks. A large computational task is
divided into smaller tasks and distributed across the stations. Such clusters are usually used for
scientific computing or financial analysis that requires high processing power; more tightly
coupled computer clusters are designed for work that approaches supercomputing.
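In miniature, the divide-and-distribute pattern looks like the following Python sketch, where worker processes on one machine stand in for the stations of a cluster.

# One large computation is divided into smaller chunks, farmed out to
# worker processes, and the partial results are combined.
from concurrent.futures import ProcessPoolExecutor

def subtask(chunk):
    return sum(x * x for x in chunk)   # stand-in for heavy computation

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]      # split into 4 subtasks
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(subtask, chunks))   # distribute, then combine
    print(total)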
Cluster Components
The basic building blocks of clusters are broken down into multiple categories:
The cluster nodes
Cluster operating system
Network switching hardware
The node/switch interconnect
Cluster Architecture
Need for Cluster Computing
Clusters, or combinations of clusters, are used when content is critical and services need to
remain available. Internet Service Providers (ISPs) and e-commerce sites demand high
availability and load balancing in a scalable manner. Parallel clusters are used extensively in the
film industry for high-quality graphics and animation, while Beowulf clusters are predominantly
used in science, engineering, and finance for critical projects. Researchers, organizations, and
businesses use clusters when they need enhanced scalability, resource management, availability,
and processing at affordable prices.
Cluster Computing vs. Grid Computing
Computer Type: In cluster computing, nodes must be homogeneous, i.e., they should have the
same type of hardware and operating system. In grid computing, nodes may have different
operating systems and hardware; machines can be homogeneous or heterogeneous.
Location: In cluster computing, computers are located close to each other and are connected by
a high-speed local area network bus. In grid computing, computers may be located at a huge
distance from one another and are connected using a low-speed bus or the internet.
Utility Computing
Utility computing is purely a concept which cloud computing practically implements. Utility
computing is a service provisioning model in which a service provider makes computing
resources and infrastructure management available to the customer as needed, and charges them
for specific usage rather than a flat rate. This model has the advantage of a low or no initial cost
to acquire computer resources; instead, computational resources are essentially rented.
Utility computing is the process of providing computing service through an on-demand, pay-per-
use billing method. Utility computing is a computing business model in which the provider
owns, operates, and manages the computing infrastructure and resources, and subscribers access
them as and when required on a rental or metered basis.
The word utility is used to make an analogy to other services, such as electrical power, that seek
to meet fluctuating customer needs, and charge for the resources based on usage rather than on a
flat-rate basis. This approach, sometimes known as pay-per-use or metered services is becoming
increasingly common in enterprise computing and is sometimes used for the consumer market as
well, for Internet service, Web site access, file sharing, and other applications.
Providers offer different pricing models to different customers based on factors like scale,
commitment, and payment frequency, but the principle of utility computing remains the same:
the pricing model is simply an expression by the provider of the costs of provisioning the
resources plus a profit margin.
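A toy Python sketch of this point: the same metered usage can be priced differently per customer based on commitment and scale, while the underlying principle (cost of provision plus a margin) stays the same. All rates and discounts here are hypothetical.

# Provider-side pricing sketch with hypothetical numbers.
BASE_RATE = 0.05                       # cost per unit plus margin

def price(units, committed=False, volume_discount_at=10_000):
    rate = BASE_RATE
    if committed:                      # long-term commitment discount
        rate *= 0.90
    if units > volume_discount_at:     # scale discount
        rate *= 0.95
    return units * rate

print(price(5_000))                    # small, on-demand customer
print(price(50_000, committed=True))   # large, committed customer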
Convenience - For most clients, the biggest advantage of utility computing is convenience. The
client doesn't have to buy all the hardware, software, and licenses needed to do business;
instead, the client relies on another party to provide these services. The burden of maintaining
and administering the system falls to the utility computing company, allowing the client to
concentrate on other tasks.
However, in some cases what the client needs and what the provider offers aren't in alignment. If
the client is a small business and the provider offers access to expensive supercomputers at a
hefty fee, there's a good chance the client will choose to handle its own computing needs. Why
pay a high service charge for something you don't need?
Reliability - The client's data is stored somewhere with a third party, and if there is a crash, or
the service provider itself goes out of business, that data may be lost. If a utility computing
company is in financial trouble or has frequent equipment problems, clients could get cut off
from the services for which they're paying.
Security - Utility computing systems can be attractive targets for hackers. A hacker might want
to access services without paying for them or snoop around and investigate client files. Much of
the responsibility of keeping the system safe falls to the provider.
Grid Computing, as the name suggests, is a type of computing that combines resources from
various administrative domains to achieve a common goal. It can be considered a distributed
system involving a large number of files, yet one that is more loosely coupled, heterogeneous,
and geographically dispersed than cluster computing. In its simplest form, grid computing may
be represented as a "super virtual computer" composed of many networked, loosely coupled
computers acting together to perform humongous tasks. Its main goal is to virtualize resources to
solve problems simply, applying the resources of several computers in a network to a single
technical or scientific problem at the same time.
Utility Computing, as the name suggests, is a type of computing that provides services and
computing resources to customers. It is basically a facility provided to users on demand,
charging them for their specific usage. Utility computing involves the renting of computing
resources such as hardware, software, and network bandwidth on an as-required, on-demand
basis. It is similar to cloud computing and therefore requires cloud-like infrastructure.
Characteristics of Cloud Computing
1. Resource Pooling
Resource pooling is one of the essential characteristics of cloud computing. It means that a cloud
service provider can share resources among several clients, providing each with a different set of
services as per their requirements: the provider pools its computing resources to serve multiple
customers with the help of a multi-tenant model. It is a multi-client strategy that can be applied
to data storage services, processing services, and bandwidth-provided services. The customer
generally has no control over or information about the location of the provided resources but is
able to specify location at a higher level of abstraction.
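A small Python sketch of multi-tenancy under an invented pool size and tenant names: one pooled set of units serves several clients, with capacity assigned, released, and reassigned on demand.

# Multi-tenancy sketch: one pooled set of resources serves several
# clients, with units assigned and reassigned on demand.
class ResourcePool:
    def __init__(self, total_units):
        self.free = total_units
        self.assigned = {}             # tenant -> units currently held

    def acquire(self, tenant, units):
        if units > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= units
        self.assigned[tenant] = self.assigned.get(tenant, 0) + units

    def release(self, tenant):
        self.free += self.assigned.pop(tenant, 0)   # units return to pool

pool = ResourcePool(total_units=100)
pool.acquire("tenant-a", 40)
pool.acquire("tenant-b", 30)
pool.release("tenant-a")               # freed capacity is reusable
pool.acquire("tenant-c", 60)           # reassigned to a different client
print(pool.free, pool.assigned)        # 10 {'tenant-b': 30, 'tenant-c': 60}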
2. On-Demand Self-Service
It is one of the important and valuable features of cloud computing: the user can continuously
monitor server uptime, capabilities, and allotted network storage. This is a fundamental
characteristic of cloud computing, and a client can likewise control the computing abilities as
per their needs.
3. Easy Maintenance
The servers are easily maintained and the downtime is very low and even in some cases, there is
no downtime. Cloud Computing powered resources undergo several updates frequently to
optimize their capabilities and potential. The updates are more compatible with the devices and
perform faster than previous versions.
4. Large Network Access
A big part of the cloud's character is its ubiquity. The client can access cloud data, or transfer
data to the cloud, from any place with just a device and an internet connection. These capabilities
are available all over the network and are accessed with the help of the internet.
5. Availability
Resilience in cloud computing means the ability of the service to quickly recover from any
disruption; a cloud's resilience is measured by how fast its servers, databases, and network
systems restart and recover from any kind of harm or damage. Availability is another major
characteristic of cloud computing: since cloud services can be accessed remotely, there is no
geographic restriction or limitation on utilizing cloud resources. The capabilities of the cloud can
also be modified as per use and extended considerably; the cloud analyses storage usage and
allows the user to buy extra cloud storage if needed for a very small amount.
6. Automatic System
The ability of cloud computing to automatically install, configure, and maintain a cloud service
is known as automation in cloud computing. Cloud computing automatically analyses the data
needed and supports a metering capability at some level of service: usage can be monitored,
controlled, and reported, providing transparency for the host as well as the customer.
7. Economical
This cloud characteristic helps in reducing the IT expenditure of organizations. In cloud
computing, the client pays only for the space and services used; there is no hidden or additional
charge to be paid.
8. Security
Cloud security is one of the best features of cloud computing. It creates a snapshot of the stored
data so that the data is not lost even if one of the servers gets damaged. The data is stored within
the storage devices and cannot be accessed by any other person. The storage service is quick and
reliable.
9. Pay as you go
In cloud computing, the user has to pay only for the service or the space they have utilized. There
is no hidden or extra charge which is to be paid. The service is economical and most of the time
some space is allotted for free.
10. Measured Service
Reporting services are one of the many cloud characteristics that make it the best choice for
organizations. Measuring and reporting service is helpful for both cloud providers and their
clients. It enables both the provider and the client to monitor and report what services have been
used and for what purpose. This helps in monitoring billing and ensuring the optimum usage of
resources.
Utility computing requires an infrastructure like the cloud, but its main focus is the business
model on which the computing services are based: customers get computing resources through a
service provider and pay for as much as they consume. The main benefit of utility computing is
its better economics, since it lets companies pay for computing resources based on when and
how much they need them.
Utility computing is a predecessor to cloud computing. Cloud computing does everything that
utility computing does and also offers much more than that. Cloud computing is not restricted to
any specific network but it is accessible through the internet. The resource virtualization and its
scalability advantage and reliability are more pronounced in the case of cloud computing.
Utility computing can be implemented without cloud computing: it can be understood through
the example of a supercomputer that rents out processing time to multiple clients, where each
user pays for the resources they actually use.
Utility computing is more a business model than a particular technology. Cloud computing does
support utility computing, but not all utility computing is based on the cloud.
Degrees of Parallelism
• Data-level parallelism (DLP):
o DLP is exploited through SIMD (single instruction, multiple data) and vector machines using
vector or array types of instructions.
o DLP requires even more hardware support and compiler assistance to work properly.
• Task-level parallelism (TLP):
o Ever since the introduction of multicore processors and chip multiprocessors (CMPs), we have
been exploring TLP.
o TLP is far from being very successful due to the difficulty of programming and compiling
code for efficient execution on multicore CMPs.
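To contrast the two degrees of parallelism in code, the Python sketch below (assuming NumPy is available) expresses DLP as one operation applied across many data elements, and TLP as independent tasks scheduled across threads; it is an illustration, not a benchmark.

# DLP vs. TLP in miniature.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# DLP: a single vectorised operation over many data elements
# (SIMD-style, expressed here via NumPy array arithmetic).
a = np.arange(1_000_000)
b = a * 2.0                # one operation, many data elements

# TLP: independent tasks scheduled across threads/cores.
def task(n):
    return sum(range(n))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, [10_000, 20_000, 30_000]))

print(b[:3], results)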
Memory, Storage, and Wide-Area Networking: Memory chips have experienced a 4x increase in
capacity every three years. For hard drives, capacity increased from 260 MB in 1981 to 250 GB
in 2004. Disks or disk arrays have exceeded 3 TB in capacity. The rapid growth of flash memory
and solid-state drives (SSDs) also impacts the future of HPC and HTC systems.
As the figure shows, a LAN is typically used to connect client hosts to big servers. A storage
area network (SAN) connects servers to network storage such as disk arrays. Network-attached
storage (NAS) connects client hosts directly to the disk arrays. All three types of networks often
appear in a large cluster built with commercial components.