BCAM051
CLOUD COMPUTING
UNIT 1 Introduction To Cloud
Computing
SYLLABUS
Introduction To Cloud Computing: Definition of Cloud – Evolution of
Cloud Computing – Underlying Principles of Parallel and Distributed
Computing – Cloud Characteristics – Elasticity in Cloud – On‐demand
Provisioning
Cloud Computing
Cloud Computing means storing and accessing the data and programs on remote servers that are
hosted on the internet instead of the computer’s hard drive or local server.
The following are some of the operations that can be performed with cloud computing:
•Storage, backup, and recovery of data
•Streaming of video and audio
•Delivery of software on demand
•Development of new applications and services
Understanding How Cloud Computing Works
Cloud computing lets users easily access computing resources such as storage and processing power over the internet rather than relying on local hardware. In a nutshell, it works as follows:
Infrastructure: Cloud computing relies on remote network servers hosted on the internet to store, manage, and process data.
On-Demand Access: Users can access cloud services and resources on demand and can scale them up or down without having to invest in physical hardware.
Benefits: Cloud computing offers advantages such as cost savings, scalability, reliability, and accessibility; it reduces capital expenditure and improves efficiency.
Evolution of Cloud Computing
The roots of cloud computing go back to the 1950s, when large mainframe computers began providing centralized computing services; the idea evolved through distributed computing into the modern technology known as cloud computing. Cloud services include those provided by Amazon, Google, and Microsoft. Cloud computing allows users to access a wide range of services stored in the cloud, that is, on the Internet. Cloud computing services include computing resources, data storage, applications, servers, development tools, and networking.
Distributed Systems
A distributed system is a composition of multiple independent systems that are presented to users as a single entity. The purpose of distributed systems is to share resources and to use them effectively and efficiently. Distributed systems possess characteristics such as scalability, concurrency, continuous availability, heterogeneity, and independence of failures. The main problem with these early systems, however, was that all the machines had to be present at the same geographical location. To solve this problem, distributed computing gave rise to three further types of computing: mainframe computing, cluster computing, and grid computing.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines. They are responsible for handling large workloads such as massive input-output operations. Even today they are used for bulk processing tasks such as online transactions. These systems have almost no downtime and high fault tolerance. After distributed computing, mainframes increased the processing capability of systems, but they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
Cluster Computing
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in a cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems while being equally capable of high computation, and new nodes could easily be added to a cluster when required. Thus the problem of cost was solved to some extent, but the problem of geographical restrictions persisted. To solve this, the concept of grid computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: different systems placed at entirely different geographical locations were connected via the internet. These systems belonged to different organizations, so the grid consisted of heterogeneous nodes. Although it solved some problems, new ones emerged as the distance between nodes increased; the main problem encountered was the low availability of high-bandwidth connectivity, along with other network-related issues. Thus, cloud computing is often referred to as the “successor of grid computing”.
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to the process of creating a virtual layer over the hardware that allows the user to run multiple instances simultaneously on the same hardware. It is a key technology used in cloud computing and the base on which major cloud computing services such as Amazon EC2 and VMware vCloud are built. Hardware virtualization is still one of the most common types of virtualization.
Web 2.0
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages, and it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter. Needless to say, social media is possible only because of this technology. It gained major popularity in 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the Service Level Agreement (SLA), and Software as a Service (SaaS).
Utility Computing
Utility computing is a computing model that defines service-provisioning techniques for services such as compute, storage, and infrastructure, which are provisioned on a pay-per-use basis.
Cloud Computing
Cloud computing means storing and accessing data and programs on remote servers hosted on the internet instead of on the computer’s hard drive or a local server. Cloud computing is also referred to as Internet-based computing: it is a technology where resources are provided as a service through the Internet to the user. The data that is stored can be files, images, documents, or any other form of storable data.
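As a concrete, hedged illustration of the definition above, the sketch below stores a file on a cloud object store and retrieves it again instead of keeping it only on the local disk. It uses the real boto3 library for Amazon S3, but the bucket and file names are hypothetical placeholders and valid cloud credentials are assumed to be configured.

```python
# Minimal sketch: keeping data on a remote cloud object store instead of the local disk.
# Assumes boto3 is installed and AWS credentials are already configured;
# the bucket and file names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Store: upload a local document to the cloud.
s3.upload_file(Filename="report.docx", Bucket="my-example-bucket", Key="backups/report.docx")

# Access: download the same document later, from any machine with internet access.
s3.download_file(Bucket="my-example-bucket", Key="backups/report.docx", Filename="report-copy.docx")
```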
Advantages of Cloud Computing
•Cost Saving
•Data Redundancy and Replication
•Ransomware/Malware Protection
•Flexibility
•Reliability
•High Accessibility
•Scalability
Disadvantages of Cloud Computing
•Internet Dependency
•Issues in Security and Privacy
•Data Breaches
• Limitations on Control
Parallel Computing
It is the use of multiple processing elements simultaneously to solve a problem. A problem is broken down into instructions that are solved concurrently, with every processing resource applied to the work operating at the same time.
Advantages of parallel computing over serial computing are as follows:
1. It saves time and money, as many resources working together reduce the time taken and cut potential costs.
2. Larger problems can be impractical to solve with serial computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial computing ‘wastes’ potential computing power; parallel computing makes better use of the hardware.
Types of Parallelism
1. Bit-level parallelism
It is the form of parallel computing based on increasing the processor’s word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first add the 8 lower-order bits and then add the 8 higher-order bits, thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with a single instruction.
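The example above can be sketched in code. The function below is a conceptual illustration (not real processor microcode): it emulates a 16-bit addition using only 8-bit operations, which takes two steps plus a carry, whereas a 16-bit-wide operation does the same work in one step.

```python
# Illustration of bit-level parallelism: a 16-bit addition emulated with
# 8-bit operations (two adds plus a carry) versus a single 16-bit-wide add.

def add16_with_8bit_ops(a, b):
    low = (a & 0xFF) + (b & 0xFF)                 # step 1: add the lower 8 bits
    carry = low >> 8                              # carry out of the low byte
    high = ((a >> 8) + (b >> 8) + carry) & 0xFF   # step 2: add the upper 8 bits plus carry
    return ((high << 8) | (low & 0xFF)) & 0xFFFF

def add16_with_16bit_op(a, b):
    return (a + b) & 0xFFFF                       # a 16-bit processor needs just one add

assert add16_with_8bit_ops(40000, 1234) == add16_with_16bit_op(40000, 1234)
```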
2. Instruction-level parallelism
Without instruction-level parallelism, a processor can issue only one instruction in each clock cycle. These instructions can be re-ordered and grouped so that they are later executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
3. Task parallelism
Task parallelism decomposes a task into subtasks and then allocates each subtask for execution. The processors execute the subtasks concurrently.
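As a small illustration of task parallelism, the sketch below decomposes one job into two different subtasks (hypothetical ones: checksumming a payload and compressing it) and runs them concurrently with Python's standard concurrent.futures module.

```python
# Task parallelism sketch: two *different* subtasks of one job run concurrently.
# The subtasks here (hashing and compressing the same payload) are illustrative only.
import hashlib
import zlib
from concurrent.futures import ThreadPoolExecutor

payload = b"some large block of data " * 100_000

def checksum(data):
    return hashlib.sha256(data).hexdigest()

def compress(data):
    return len(zlib.compress(data))

with ThreadPoolExecutor(max_workers=2) as pool:
    digest_future = pool.submit(checksum, payload)   # subtask 1
    size_future = pool.submit(compress, payload)     # subtask 2
    print("digest:", digest_future.result())
    print("compressed size:", size_future.result())
```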
4. Data-level parallelism (DLP)
Instructions from a single stream operate concurrently on several data elements. Data-level parallelism is limited by irregular data-manipulation patterns and by memory bandwidth.
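By contrast, the sketch below shows data-level parallelism: the same operation is applied to different chunks of one data set, here using worker processes from the standard library. The chunking scheme and worker count are arbitrary choices for illustration.

```python
# Data-level parallelism sketch: the *same* operation runs on different chunks of data.
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]          # split the data across 4 workers
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = pool.map(sum_of_squares, chunks)  # each worker handles its own chunk
    print("total:", sum(partials))
```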
Why parallel computing?
•The real world is dynamic in nature: many things happen at the same time in different places concurrently, and the resulting data is extremely large and hard to manage.
•Real-world data needs more dynamic simulation and modeling, and parallel computing is the key to achieving this.
•Parallel computing provides concurrency and saves time and money.
•Complex, large datasets and their management can be organized only by using a parallel computing approach.
•It ensures effective utilization of resources: the hardware is guaranteed to be used effectively, whereas in serial computation only part of the hardware is used and the rest sits idle.
•It is also impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
•Databases and Data mining.
•Real-time simulation of systems.
•Science and Engineering.
•Advanced graphics, augmented reality, and virtual reality.
Limitations of Parallel Computing:
•It introduces overheads such as communication and synchronization between multiple sub-tasks and processes, which are difficult to manage.
•The algorithms must be designed so that they can be handled in a parallel mechanism.
•The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
•Writing a good parallel program requires more technically skilled and expert programmers.
Future of Parallel Computing:
The computing landscape has undergone a great transition from serial computing to parallel computing. Tech giants such as Intel have already taken a step towards parallel computing by employing multicore processors. Parallel computation will change the way computers work in the future, for the better. With the whole world more connected than ever before, parallel computing plays a bigger role in helping us stay that way. With faster networks, distributed systems, and multiprocessor computers, it becomes even more necessary.
DISTRIBUTED COMPUTING
Distributed computing refers to a system where processing and data storage are distributed
across multiple devices or systems, rather than being handled by a single central device. In a
distributed system, each device or system has its own processing capabilities and may also
store and manage its own data. These devices or systems work together to perform tasks and
share resources, with no single device serving as the central hub.
One example of a distributed computing system is a cloud computing system, where resources
such as computing power, storage, and networking are delivered over the Internet and
accessed on demand. In this type of system, users can access and use shared resources
through a web browser or other client software.
Components
There are several key components of a Distributed Computing System
•Devices or Systems: The devices or systems in a distributed system have their own
processing capabilities and may also store and manage their own data.
•Network: The network connects the devices or systems in the distributed system, allowing
them to communicate and exchange data.
•Resource Management: Distributed systems often have some type of resource
management system in place to allocate and manage shared resources such as computing
power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer Architecture,
where devices or systems can act as both clients and servers and communicate directly with
each other.
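The peer-to-peer idea described above can be sketched very simply: each node both answers requests (server role) and issues requests to other nodes (client role). The in-memory Node class below is a conceptual illustration only; a real distributed system would communicate over the network (sockets, RPC, HTTP, and so on).

```python
# Conceptual peer-to-peer sketch: every node can act as both client and server.
# Communication here is plain method calls; a real system would use the network.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []   # other nodes this node knows about
        self.store = {}   # this node's own local data

    def serve(self, key):
        """Server role: answer a request for a key held locally."""
        return self.store.get(key)

    def request(self, key):
        """Client role: look locally first, then ask each peer."""
        if key in self.store:
            return self.store[key]
        for peer in self.peers:
            value = peer.serve(key)
            if value is not None:
                return value
        return None

a, b = Node("a"), Node("b")
a.peers.append(b)
b.peers.append(a)
b.store["config"] = "replicated-settings"
print(a.request("config"))   # node a obtains data served by node b
```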
Characteristics
There are several characteristics that define a Distributed Computing System
•Multiple Devices or Systems: Processing and data storage are distributed across multiple devices or systems.
•Peer-to-Peer Architecture: Devices or systems in a distributed system can act as both
clients and servers, as they can both request and provide services to other devices or systems
in the network.
•Shared Resources: Resources such as computing power, storage, and networking are shared
among the devices or systems in the network.
•Horizontal Scaling: Scaling a distributed computing system typically involves adding more devices or systems to the network to increase processing and storage capacity. This can be done through hardware upgrades or by adding additional devices or systems to the network.
Advantages and Disadvantages
Some Advantages of the Distributed Computing System are:
•Scalability: Distributed systems are generally more scalable than centralized systems, as
they can easily add new devices or systems to the network to increase processing and storage
capacity.
•Reliability: Distributed systems are often more reliable than centralized systems, as they can
continue to operate even if one device or system fails.
•Flexibility: Distributed systems are generally more flexible than centralized systems, as they
can be configured and reconfigured more easily to meet changing computing needs.
There are a few limitations to Distributed Computing System
•Complexity: Distributed systems can be more complex than centralized systems, as they
involve multiple devices or systems that need to be coordinated and managed.
•Security: It can be more challenging to secure a distributed system, as security measures
must be implemented on each device or system to ensure the security of the entire system.
•Performance: Distributed systems may not offer the same level of performance as centralized systems, as processing and data storage are distributed across multiple devices or systems.
Applications
Distributed Computing Systems have a number of applications, including:
•Cloud Computing: Cloud Computing systems are a type of distributed computing system
that are used to deliver resources such as computing power, storage, and networking over the
Internet.
•Peer-to-Peer Networks: Peer-to-Peer Networks are a type of distributed computing system
that is used to share resources such as files and computing power among users.
•Distributed Architectures: Many modern computing systems, such as microservices
architectures, use distributed architectures to distribute processing and data storage across
multiple devices or systems.
Some underlying principles of distributed computing in cloud
computing include:
• Fault tolerance
Distributed systems can continue to operate reliably even when there are faults or failures.
• Scalability
Distributed computing allows for the expansion of computational resources to handle growing
workloads.
• Collaborative approach
Distributed systems allow organizations to use hardware, software, and data resources from a
network of computers. This approach can improve efficiency and optimize resource utilization.
• Loose coordination
Distributed computing loosely coordinates applications across remote nodes, which allows for
independent scaling.
• Peer-to-peer systems
In peer-to-peer systems, each unit in the network has equal responsibilities. This means that no
unit has all the power, and each unit can act as a client or a server.
• Grid computing
Grid computing is a model that allows multiple organizations to collaborate and exchange
computation resources. It can be efficient for large computations that need to be completed
quickly.
Other principles of distributed computing include: Parallel computing, Distributed networking,
Client/server, and Security
Characteristics of Cloud Computing
There are many characteristics of cloud computing; here are a few of them:
1.On-demand self-service: Cloud computing services do not require any human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
2.Broad network access: The computing services are generally provided over standard networks and to heterogeneous devices.
3.Rapid elasticity: The computing services should have IT resources that are able to scale out and in quickly and on an as-needed basis. Whenever the user requires services, they are provided, and they are scaled back in as soon as the requirement is over.
4.Resource pooling: The IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner. Multiple clients are served from the same physical resource.
5.Measured service: Resource utilization is tracked for each application and tenant; this provides both the user and the resource provider with an account of what has been used. This is done for various reasons, such as monitoring, billing, and effective use of resources.
6.Multi-tenancy: Cloud computing providers can support multiple tenants (users or organizations) on a single set of
shared resources.
7.Virtualization: Cloud computing providers use virtualization technology to abstract underlying hardware resources
and present them as logical resources to users.
8.Resilient computing: Cloud computing services are typically designed with redundancy and fault tolerance in mind,
which ensures high availability and reliability.
9.Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-per-use, subscription-
based, and spot pricing, allowing users to choose the option that best suits their needs.
10.Security: Cloud providers invest heavily in security measures to protect their users’ data and ensure the privacy of
sensitive information
11.Automation: Cloud computing services are often highly automated, allowing users to deploy and manage resources
with minimal manual intervention.
12.Sustainability: Cloud providers are increasingly focused on sustainable practices, such as energy-efficient data
centers and the use of renewable energy sources, to reduce their environmental impact.
Cloud Elasticity
Elasticity refers to the ability of a cloud to automatically expand or shrink the infrastructural resources in response to sudden increases or decreases in demand, so that the workload can be managed efficiently. This elasticity helps to minimize infrastructural costs. It is not applicable to all kinds of environments; it is helpful only in scenarios where the resource requirements fluctuate up and down suddenly for a specific time interval. It is not practical where a persistent resource infrastructure is required to handle a heavy workload.
Elasticity is vital for mission-critical or business-critical applications, where any compromise in performance may lead to huge business losses. Elasticity therefore comes into the picture when additional resources are provisioned for such applications to meet their performance requirements.
It works in such a way that when the number of client accesses increases, applications are automatically provisioned with additional computing, storage, and network resources such as CPU, memory, storage, or bandwidth, and when there are fewer clients, those resources are automatically reduced as per requirement.
Elasticity in the cloud is a popular feature associated with scale-out solutions (horizontal scaling), which allows resources to be dynamically added or removed when required.
It is mostly associated with public cloud resources and is usually featured in pay-per-use or pay-as-you-grow services.
Elasticity is the ability to grow or shrink infrastructure resources (such as compute, storage, or network) dynamically as needed to adapt to workload changes in the applications in an autonomic manner.
It maximizes resource utilization, which results in savings in overall infrastructure costs.
Depending on the environment, elasticity may be applied to any resources in the infrastructure, not limited to hardware, software, network, QoS, and other policies.
Whether elasticity is beneficial depends entirely on the environment; in some cases it can even become a negative trait, for example for applications that must have guaranteed performance.
It is most commonly used in pay-per-use public cloud services, where IT managers are willing to pay only for the duration for which they consumed the resources.
Example: Consider an online shopping site whose transaction workload increases during a festive season such as Christmas. For this specific period of time, the resources need to spike up. To handle this kind of situation, we can go for a cloud elasticity service rather than cloud scalability. As soon as the season is over, the deployed resources can be requested for withdrawal.
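The festive-season example boils down to a simple control rule: watch the load and add or remove instances automatically. The sketch below shows such a rule in outline; the thresholds, instance limits, and the notion of "instances" are hypothetical placeholders, and a real system would call a cloud provider's auto-scaling API instead of printing.

```python
# Elasticity sketch: scale out when load is high, scale back in when it drops.
# Thresholds and limits are made-up example values, not provider defaults.

MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instances(current, avg_cpu_percent):
    if avg_cpu_percent > 75 and current < MAX_INSTANCES:
        return current + 1            # demand spike: provision one more instance
    if avg_cpu_percent < 25 and current > MIN_INSTANCES:
        return current - 1            # demand fell: release one instance
    return current                    # within normal range: no change

# Example run: a seasonal traffic spike followed by a return to normal.
instances = 2
for cpu in [40, 80, 85, 90, 60, 20, 15]:
    instances = desired_instances(instances, cpu)
    print(f"avg CPU {cpu:>3}% -> run {instances} instances")
```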
Cloud Scalability: Cloud scalability is used to handle a growing workload where good performance is also needed to work efficiently with software or applications. Scalability is commonly used where a persistent deployment of resources is required to handle a workload that grows steadily.
Example: Consider that you are the owner of a company whose database was small in its earlier days, but as time passed your business grew and the size of your database increased as well. In this case you just need to request your cloud service vendor to scale up your database capacity to handle the heavier workload.
Scalability is different from what you read above about cloud elasticity: scalability is used to fulfil the static needs of an organization, while elasticity is used to fulfil its dynamic needs. Scalability is a similar kind of pay-per-use cloud service. In conclusion, scalability is useful where the workload remains high and increases steadily.
Types of Scalability:
1. Vertical Scalability (Scale-up): In this type of scalability, we increase the power of the existing resources in the working environment in an upward direction (for example, adding more CPU or RAM to the same machine).
2. Horizontal Scalability (Scale-out): In this kind of scaling, resources are added in a horizontal row, i.e., more machines are added alongside the existing ones (see the sketch after this list).
3. Diagonal Scalability: A mixture of both horizontal and vertical scalability, where resources are added both vertically and horizontally.
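A hedged way to picture the difference between the vertical and horizontal cases above: vertical scaling makes each existing node bigger, while horizontal scaling adds more nodes. The capacity figures below are arbitrary example numbers, not benchmarks.

```python
# Illustrative comparison of vertical vs. horizontal scaling of total capacity.

cluster = [{"cpus": 4, "ram_gb": 16} for _ in range(3)]   # 3 identical nodes

def total_capacity(nodes):
    return sum(n["cpus"] for n in nodes), sum(n["ram_gb"] for n in nodes)

# Vertical scaling (scale-up): make each existing node more powerful.
scaled_up = [{"cpus": n["cpus"] * 2, "ram_gb": n["ram_gb"] * 2} for n in cluster]

# Horizontal scaling (scale-out): add more nodes of the same size.
scaled_out = cluster + [{"cpus": 4, "ram_gb": 16} for _ in range(3)]

print("original  :", total_capacity(cluster))     # (12, 48)
print("scaled up :", total_capacity(scaled_up))   # (24, 96)
print("scaled out:", total_capacity(scaled_out))  # (24, 96)
```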
Difference Between Cloud Elasticity and Scalability:
1. Elasticity is used just to meet a sudden rise or fall in the workload for a small period of time, whereas scalability is used to meet a static increase in the workload.
2. Elasticity is used to meet dynamic changes, where the resource need can increase or decrease; scalability is always used to address a growing workload in an organization.
3. Elasticity is commonly used by small companies whose workload and demand increase only for a specific period of time; scalability is used by large companies whose customer base grows persistently, so that operations can be carried out efficiently.
4. Elasticity is short-term planning, adopted to deal with an unexpected increase in demand or with seasonal demand; scalability is long-term planning, adopted to deal with an expected increase in demand.
What is on-demand computing (ODC)?
On-demand computing (ODC) is a delivery model in which computing resources are made available to the user as
needed. The resources might be maintained within the user's enterprise or made available by a cloud service provider.
The on-demand business computing model was developed to overcome the challenge of enterprises meeting fluctuating
demands efficiently. Because an enterprise's demand for computing resources can be unpredictable at times,
maintaining sufficient resources to meet peak requirements can be costly. And cutting costs by only maintaining
minimal resources means there are likely insufficient resources to meet peak loads. The on-demand model provides an
enterprise with the ability to scale computing resources up or down whenever needed, with the click of a button.
As the term suggests, on-demand computing simply means making computing resources available to users on demand.
The term cloud computing is often used as a synonym for on-demand computing when the services are provided by
a third party -- such as a cloud hosting organization (also known as a cloud service provider or CSP). For this reason,
the definition of on-demand computing can be extended to the cloud realm as "a delivery model in which cloud
computing resources such as compute, storage, networking, and software are made available to the user as per their
need or demand."
On-demand computing normally provides computing resources such as storage capacity, hardware, and software applications. The service itself is provided using methods including virtualization, computer clusters, and distributed computing.
How does cloud computing on-demand work?
In the context of cloud computing, the on-demand computing model is characterized by three attributes:
scalability, pay-per-use and self-service. Whether the resource is an application program that helps team
members collaborate or provides additional storage, the computing resources are elastic, metered and easy to obtain.
When an organization pairs with a third party, such as a CSP, to provide on-demand computing, it either subscribes to
the service or uses a pay-per-use model. The third party then provides computing resources whenever needed, including
when the organization is working on temporary projects, has expected or unexpected workloads, or has long-term
computing requirements. For example, a retail organization could use on-demand computing to scale up its online
services, providing additional computing resources during a high-volume time, like Black Friday.
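The "metered, pay-per-use" part of this model can be made concrete with a small sketch: usage is recorded per resource, and the customer is billed only for what was actually consumed. The resource names and hourly rates below are invented purely for illustration.

```python
# Pay-per-use sketch: bill only for metered consumption. Rates are made-up examples.

hourly_rates = {"vm_small": 0.05, "vm_large": 0.20, "storage_gb": 0.0001}

# Metered usage for the month: (resource, quantity, hours used).
usage = [
    ("vm_small", 2, 720),      # 2 small VMs running the whole month
    ("vm_large", 10, 48),      # 10 large VMs rented only for a Black Friday weekend
    ("storage_gb", 500, 720),  # 500 GB of storage for the month
]

bill = sum(hourly_rates[res] * qty * hours for res, qty, hours in usage)
print(f"monthly bill: ${bill:.2f}")   # charges stop as soon as the resources are released
```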
How does cloud computing provide on-demand functionality?
On-demand computing often involves cloud computing methods, such as infrastructure as a service (IaaS), software as a
service (SaaS), desktop as a service (DaaS), platform as a service (PaaS), managed hosting services, as well as cloud
storage and backup services.
• IaaS provides virtualized computing resources over the internet.
• SaaS is a software distribution model where a cloud provider hosts applications and makes them available to users
over the internet.
• DaaS is a form of cloud computing where a third party hosts the back end of a virtual desktop infrastructure.
• PaaS is a model in which a third-party provider hosts customer applications on their infrastructure. Hardware and
software tools are delivered to users over the internet.
• Managed hosting services are an IT provisioning and cloud server hosting model where a service provider leases
dedicated servers and associated hardware to a single customer and manages those systems on the customer's
behalf.
• Cloud storage is a service model where data is transmitted and stored securely on remote storage systems, where it
is maintained, managed, backed up and made available to users over a network.
• Cloud backup is a strategy for sending a copy of a file or database to a secondary location for preservation in case
of equipment failure.
These cloud-based services are typically made on-demand and in real time for users. Computing resources are delivered
using a shared pool of servers, storage devices, networks and applications.
Cloud hosting providers may offer an enterprise-level control panel through which customers can quickly view and scale their cloud services up or down. An organization can use this to access the storage space, speed, software applications, servers, or networks it needs at any given time.
What are the advantages of on-demand computing?
As noted above, the on-demand model was developed so that an enterprise can scale computing resources up or down whenever needed, instead of paying to maintain enough resources to cover peak loads.
In addition, on-demand computing offers the following benefits:
• Flexibility to meet fluctuating demands. Users can quickly increase or decrease their computing resources as
needed -- either short-term or long-term.
• Eliminates the need to purchase, maintain and upgrade hardware. The CSP managing the on-demand services
handles resources such as servers and hardware, system updates and maintenance so the user organization is saved
from making large capital expenditures on these elements. Organizations also don't have to worry about updating
or maintaining those resources because the CSP takes care of these aspects as well.
• User-friendly. Many cloud-based, on-demand computing services are user-friendly and easy to access since they are available via a self-service model. This enables most users to easily acquire additional computing resources with minimal or no help from their IT department, which can speed up access to business- or mission-critical resources and thus improve business agility.
• Access to the latest technologies. The largest CSPs invest in the latest technologies such as machine
learning, computer vision, and the internet of things (IoT) that organizations can access for their requirements
without having to make large investments. For smaller companies, access to these technologies helps level the
playing field, allowing them to innovate and compete on equal grounds with larger competitors.
The possible drawbacks of on-demand computing
Notwithstanding the benefits of on-demand computing, organizations must also be aware of some of its possible
pitfalls. For example, they must be concerned about the unauthorized use of added resources via on-demand computing.
This problem is known as shadow IT and can pose security risks for the organization. This is because the added
resources (by employees without the permission, approval, or knowledge of the IT team) might contain
security vulnerabilities that can be exploited by threat actors to compromise the organization and its resources (such as
its accounts or data).
For this reason, IT departments should perform periodic cloud audits to identify unauthorized use of on-demand
applications and other rogue IT scenarios, and then take appropriate action to remove those resources. It's also important
to train users about the risks of shadow IT with on-demand computing.
The future of on-demand computing
Large vendors such as Amazon Web Services (AWS), HPE, IBM and Microsoft offer on-demand computing products. Microsoft, for example, provides SaaS offerings through Azure, and AWS offers pay-as-you-go pricing for its IaaS offerings.
As more of these services become available, there is a greater chance that enterprises will turn to on-demand computing as a way to address the challenge of fluctuating computing resource needs as their organizations grow and evolve to keep up with changes in their markets, industries, and customer needs.