
Chameli Devi Group of Institutions

Department of Artificial Intelligence and Data Science


AD 702 (A) Cloud Computing
B. Tech VII Semester
Unit -1
…………………………………………………………………………………………………………………
Syllabus: Introduction to Cloud Computing: Definition, Characteristics, Components, Cloud
Architecture: Software as a Service, Platform as a Service, Infrastructure as a Service. Cloud deployment
model: Public clouds–Private clouds–Community clouds–Hybrid clouds–Advantages of Cloud computing.
Comparing cloud providers with traditional IT service providers.
…………………………………………………………………………………………………………………

Definition of Cloud Computing

Cloud computing is the delivery of various services over the internet, including storage, databases, servers,
networking, software, and analytics. Instead of owning and managing physical servers and infrastructure, users
can rent access to these resources from cloud service providers.

Cloud computing means storing and accessing data and programs on remote servers hosted on the
internet instead of on the computer's hard drive or a local server. Cloud computing is also referred to as
internet-based computing: it is a technology in which resources are provided as a service through the internet
to the user. The stored data can be files, images, documents, or any other kind of storable data.

Key Points:

 On-Demand Access: Users can access computing resources whenever needed.


 Internet-Based: Services are delivered via the internet.
 Pay-as-You-Go: Users pay only for the resources they use.

Why Cloud Computing?

Traditionally, both small and large IT companies provisioned their own IT infrastructure. That meant
every IT company needed a server room as a basic requirement.

In that server room there would be a database server, mail server, networking, firewalls, routers, modems,
switches, QPS capacity (Queries Per Second, i.e., how many queries or how much load a server can handle),
configurable systems, high-speed network connectivity, and maintenance engineers.

Establishing such IT infrastructure requires spending a lot of money. To overcome these problems and to
reduce IT infrastructure costs, cloud computing came into existence.

2. Characteristics of Cloud Computing

1. On-Demand Self-Service:
o Users can provision computing capabilities such as server time and network storage
automatically without requiring human intervention from the service provider.
o Example: Provisioning a virtual server from AWS EC2 via the AWS Management Console.
2. Broad Network Access:
o Services are accessible over the network through standard mechanisms (e.g., web browsers) and
from a variety of devices (e.g., smartphones, laptops).
o Example: Accessing Google Docs from a mobile device or desktop computer.
3. Resource Pooling:
o Cloud providers use multi-tenant models to pool computing resources. These resources are
dynamically assigned and reassigned based on demand.
o Example: AWS uses a single pool of servers to provide resources to multiple users.
4. Rapid Elasticity:
o Resources can be rapidly and elastically provisioned to scale outward or inward according to
demand. This means that if demand increases, additional resources can be allocated quickly.
o Example: Automatically scaling web server instances up or down based on website traffic
using AWS Auto Scaling (see the sketch after this list).
5. Measured Service:
o Cloud computing systems automatically control and optimize resource use by leveraging a
metering capability. This allows for resource usage monitoring, control, and reporting.
o Example: Billing is based on the amount of storage used or the number of virtual machines
running.
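As a concrete illustration of on-demand self-service and rapid elasticity, here is a minimal sketch using the
AWS SDK for Python (boto3). The AMI ID and Auto Scaling group name are hypothetical placeholders, and
the snippet assumes AWS credentials are already configured; it is an illustrative sketch, not a production
deployment.

```python
# Minimal sketch of on-demand self-service and rapid elasticity with boto3.
# The AMI ID and Auto Scaling group name are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# On-demand self-service: provision a virtual server with no human intervention.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])

# Rapid elasticity: scale an existing Auto Scaling group out to meet demand.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-tier-asg",  # placeholder group name
    DesiredCapacity=4,
)
```

The same calls also illustrate measured service: every instance-hour launched this way appears on the
metered bill.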

3. Components of Cloud Computing Architecture

The following are the components of cloud computing architecture:

1. Client Infrastructure: Client infrastructure is part of the frontend component. It contains the
applications and user interfaces required to access the cloud platform. In other words, it
provides a GUI (Graphical User Interface) to interact with the cloud.

2. Application: The application is a backend component referring to the software or platform that the
client accesses; it provides the service in the backend according to the client's requirements.

3. Service: Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS,
and IaaS. It also manages which type of service the user accesses.

4. Runtime Cloud: The runtime cloud in the backend provides the execution and runtime environment
for virtual machines.

5. Storage: Storage in the backend provides a flexible and scalable storage service and manages the
stored data.

6. Infrastructure: Cloud infrastructure in the backend refers to the hardware and software components
of the cloud, including servers, storage, network devices, and virtualization software.

7. Management: Management in the backend refers to managing backend components such as the
application, service, runtime cloud, storage, infrastructure, and other security mechanisms.

8. Security: Security in the backend refers to the security mechanisms implemented in the backend to
secure cloud resources, systems, files, and infrastructure for end users.

9. Internet: The internet connection acts as the medium, or bridge, between the frontend and backend,
establishing interaction and communication between them (illustrated in the sketch after this list).

10. Database: The database in the backend provides storage for structured data, using SQL and NoSQL
databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database, and
Google Cloud SQL.

11. Networking: Networking in the backend provides the networking infrastructure for applications in
the cloud, such as load balancing, DNS, and virtual private networks.

12. Analytics: Analytics in the backend provides analytics capabilities for data in the cloud, such
as warehousing, business intelligence, and machine learning.
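To make the frontend-internet-backend relationship concrete, the hedged sketch below shows a client-side
application calling a cloud service's REST API over the internet. The endpoint URL, response shape, and
access token are hypothetical placeholders.

```python
# Minimal sketch of the frontend talking to a backend service over the
# internet. The URL, token, and response fields are hypothetical placeholders.
import requests

API_URL = "https://api.example-cloud.com/v1/files"  # placeholder backend endpoint

response = requests.get(API_URL, headers={"Authorization": "Bearer <token>"})
response.raise_for_status()

for item in response.json().get("files", []):
    print(item["name"], item["size"])
```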

4. Cloud Architecture

Cloud computing technology is used by both small and large organizations to store information in the cloud
and access it from anywhere, at any time, over an internet connection.
Cloud computing architecture is a combination of service-oriented architecture and event-driven
architecture.

Cloud computing architecture is divided into the following two parts –

 Front End
 Back End

The below diagram shows the architecture of cloud computing -

Front End

The front end is used by the client. It contains the client-side interfaces and applications required to access
the cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, and Internet
Explorer), thin and fat clients, tablets, and mobile devices.

Back End

The back end is used by the service provider. It manages all the resources required to provide cloud
computing services. It includes large amounts of data storage, security mechanisms, virtual machines,
deployment models, servers, traffic control mechanisms, etc.

1. Software as a Service (SaaS):


o Definition: SaaS delivers applications over the internet. Users access these applications via web browsers,
and the provider manages the underlying infrastructure and platform.

Software as a Service (SaaS) allows users to run existing online applications. It is a model in which software
is deployed as a hosted service and accessed over the internet; in other words, a software delivery model in
which software and its associated data are hosted centrally and accessed by clients, usually through a web
browser, over the web. SaaS services are used for the development and deployment of modern applications.
It allows software and its functions to be accessed from anywhere, using any device with a good internet
connection and a browser. An application is hosted centrally and provides access to multiple users across
various locations via the internet.

Characteristics of SAAS (Software as a Service)

 Applications are ready to use, and updates and maintenance are handled by the provider.

 You access the software through a web browser or app, usually paying a subscription fee.

 It’s convenient and requires minimal technical expertise, ideal for non-technical users.

Example of SAAS (Software as a Service)

 Salesforce

 Google Workspace apps

 Microsoft 365

 Trello

 Zoom

 Slack

 Adobe Creative Cloud
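SaaS products are consumed through a browser or, programmatically, through a published web API. As a
hedged illustration, the snippet below posts a message to a Slack incoming webhook; the webhook URL is a
placeholder you would generate in your own Slack workspace.

```python
# Minimal sketch of driving a SaaS product through its web API: posting a
# message to a Slack incoming webhook. The webhook URL is a placeholder.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

resp = requests.post(WEBHOOK_URL, json={"text": "Nightly backup finished."})
resp.raise_for_status()  # Slack replies with HTTP 200 and body "ok" on success
```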

2. Platform as a Service (PaaS):


o Definition: PaaS provides a platform that allows developers to build, deploy, and manage applications
without dealing with the underlying infrastructure.

Platform as a Service (PaaS) is a cloud delivery model for applications composed of services managed by a
third party. It provides elastic scaling of your application and allows developers to build applications and
services over the internet; the deployment models include public, private, and hybrid.

Basically, it is a service in which a third-party provider supplies both the software and hardware tools for
cloud computing to developers. PaaS is also known as application PaaS (aPaaS).
It helps us organize and maintain useful applications and services. It has a well-equipped management
system and is less expensive compared to IaaS.
Characteristics of PAAS (Platform as a Service)

 PAAS is like a toolkit for developers to build and deploy applications without worrying about
infrastructure.

 Provides pre-built tools, libraries, and development environments.

 Developers focus on building and managing applications, while the provider handles infrastructure
management.

 It speeds up the development process and allows for easy collaboration among developers.

Examples of PAAS (Platform as a Service)

 AWS Lambda

 Google App Engine

 Google Cloud

 IBM Cloud
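On a PaaS offering such as AWS Lambda, the developer supplies only the application code and the provider
runs and scales it. A minimal Python handler is sketched below; the shape of the incoming event depends on
the trigger you configure, so the "name" field here is a hypothetical example.

```python
# Minimal AWS Lambda handler: the platform provisions, runs, and scales this
# function; the developer writes only the code. The "name" field in the event
# is a hypothetical example payload.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```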

3. Infrastructure as a Service (IaaS):


o Definition: IaaS provides virtualized computing resources over the internet. Users can rent virtual servers,
storage, and networking capabilities.

Infrastructure as a Service (IaaS) is a means of delivering computing infrastructure as an on-demand service.
It is one of the three fundamental cloud service models. Rather than purchasing servers, software, data center
space, or network equipment, users rent those resources through a fully outsourced, on-demand service
model. It allows dynamic scaling, and the resources are distributed as a service. It generally involves
multiple users sharing a single piece of hardware.

It is up to the customer to choose resources wisely and according to need. IaaS also provides billing
management.

Characteristics of IAAS (Infrastructure as a Service)

 IAAS is like renting virtual computers and storage space in the cloud.

 You have control over the operating systems, applications, and development frameworks.

 Scaling resources up or down is easy based on your needs.


Example of IAAS (Infrastructure As A Service)

 Amazon Web Services

 Microsoft Azure

 Google Compute Engine

 Digital Ocean
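To make "renting virtual servers and storage" concrete, the hedged sketch below provisions a block-storage
volume with boto3 and attaches it to an existing virtual server; the instance ID and availability zone are
placeholders, and AWS credentials are assumed to be configured.

```python
# Minimal IaaS sketch with boto3: rent raw block storage and attach it to a
# rented virtual server. Instance ID and availability zone are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Rent 20 GB of block storage on demand.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to an existing instance as an extra disk.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```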

5. Cloud Deployment Models

The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and
access, as well as the cloud’s nature and purpose. The location of the servers you’re utilizing and who controls
them are defined by a cloud deployment model. It specifies how your cloud infrastructure will look, what you
can change, and whether you will be given services or will have to create everything yourself. Relationships
between the infrastructure and your users are also defined by cloud deployment types. Different types of cloud
computing deployment models are described below.

 Public Cloud
 Private Cloud
 Hybrid Cloud
 Community Cloud
 Multi-Cloud

Public Cloud

The public cloud makes it possible for anybody to access systems and services, though it may be less secure
because it is open to everyone. In the public cloud, cloud infrastructure services are provided over the
internet to the general public or major industry groups. The infrastructure in this cloud model is owned by
the entity that delivers the cloud services, not by the consumer.

It is a type of cloud hosting that allows customers and users to easily access systems and services; service
providers supply services to a variety of customers. In this arrangement, storage, backup, and retrieval
services are offered for free, by subscription, or on a per-user basis. Google App Engine is one example.
Advantages of the Public Cloud Model

 Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee, making it
excellent for enterprises that require immediate access to resources.
 No setup cost: The entire infrastructure is fully subsidized by the cloud service providers, thus there is
no need to set up any hardware.
 Infrastructure Management is not required: Using the public cloud does not necessitate
infrastructure management.
 No maintenance: The maintenance work is done by the service provider (not users).
 Dynamic Scalability: To fulfill your company’s needs, on-demand resources are accessible.

Disadvantages of the Public Cloud Model

 Less secure: The public cloud is less secure because resources are shared publicly, so there is no
guarantee of high-level security.
 Low customization: Because it is shared by many users, it cannot be customized to individual
requirements.

Private Cloud

The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a one-
on-one environment for a single user (customer). There is no need to share your hardware with anyone
else. The distinction between private and public clouds is in how you handle all of the hardware. It is also
called the “internal cloud” & it refers to the ability to access systems and services within a given border or
organization. The cloud platform is implemented in a cloud-based secure environment that is protected by
powerful firewalls and under the supervision of an organization’s IT department. The private cloud gives
greater flexibility of control over cloud resources.
Advantages of the Private Cloud Model

 Better Control: You are the sole owner of the property. You gain complete command over service
integration, IT operations, policies, and user behavior.
 Data Security and Privacy: It’s suitable for storing corporate information to which only authorized
staff have access. By segmenting resources within the same infrastructure, improved access and
security can be achieved.
 Supports Legacy Systems: This approach is designed to work with legacy systems that are unable to
access the public cloud.
 Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor its
solution to meet its specific needs.

Disadvantages of the Private Cloud Model

 Less scalable: Private clouds are scaled within a certain range, as they serve a smaller number of clients.
 Costly: Private clouds are more costly as they provide personalized facilities.

Hybrid Cloud

By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing gives
the best of both worlds. With a hybrid solution, you may host the app in a safe environment while taking
advantage of the public cloud’s cost savings. Organizations can move data and applications between different
clouds using a combination of two or more cloud deployment methods, depending on their needs.
Advantages of the Hybrid Cloud Model

 Flexibility and control: Businesses with more flexibility can design personalized solutions that meet
their particular needs.
 Cost: Because public clouds provide scalability, you’ll only be responsible for paying for the extra
capacity if you require it.
 Security: Because data is properly separated, the chances of data theft by attackers are considerably
reduced.

Disadvantages of the Hybrid Cloud Model

 Difficult to manage: Hybrid clouds are difficult to manage as it is a combination of both public and
private cloud. So, it is complex.
 Slow data transmission: Data transmission in the hybrid cloud takes place through the public cloud so
latency occurs.

Community Cloud

It allows systems and services to be accessible to a group of organizations. It is a distributed system that is
created by integrating the services of different clouds to address the specific needs of a community, industry,
or business. The community's infrastructure can be shared between organizations that have shared concerns
or tasks. It is generally managed by a third party or by a combination of one or more organizations in the
community.

Advantages of the Community Cloud Model

 Cost Effective: It is cost-effective because the cloud is shared by multiple organizations or
communities.
 Security: Community cloud provides better security.
 Shared resources: It allows you to share resources, infrastructure, etc. with multiple organizations.
 Collaboration and data sharing: It is suitable for both collaboration and data sharing.
Disadvantages of the Community Cloud Model

 Limited Scalability: Community cloud is relatively less scalable as many organizations share the same
resources according to their collaborative interests.
 Rigid in customization: Because data and resources are shared among different organizations
according to their mutual interests, a single organization cannot make changes to suit its own needs,
since doing so would have an impact on the other organizations.

Multi-Cloud

We’re talking about employing multiple cloud providers at the same time under this paradigm, as the name
implies. It’s similar to the hybrid cloud deployment approach, which combines public and private cloud
resources. Instead of merging private and public clouds, multi-cloud uses many public clouds. Although public
cloud providers provide numerous tools to improve the reliability of their services, mishaps still occur. It’s
quite rare that two distinct clouds would have an incident at the same moment. As a result, multi-cloud
deployment improves the high availability of your services even more.

Advantages of the Multi-Cloud Model

 You can mix and match the best features of each cloud provider’s services to suit the demands of your
apps, workloads, and business by choosing different cloud providers.
 Reduced Latency: To reduce latency and improve user experience, you can choose cloud regions and
zones that are close to your clients.
 High availability of service: It’s quite rare that two distinct clouds would have an incident at the same
moment. So, the multi-cloud deployment improves the high availability of your services.

Disadvantages of the Multi-Cloud Model

 Complex: The combination of many clouds makes the system complex, and bottlenecks may occur.
 Security issue: Due to the complex structure, there may be loopholes that an attacker can exploit,
making the data insecure.

6. Advantages of Cloud Computing

1. Cost Efficiency:
o Reduces capital expenditures by eliminating the need for physical hardware and data centers.
Users pay only for the resources they use.
2. Scalability:
o Easily scale resources up or down based on current needs. This is beneficial for handling
varying workloads and seasonal demands.
3. Performance:
o High-performance computing environments are provided by cloud providers, utilizing state-of-
the-art hardware and data centers.
4. Accessibility:
o Access services and data from anywhere with an internet connection, supporting remote work
and global operations.
5. Disaster Recovery:
o Provides robust disaster recovery options, including data backup, redundancy, and failover
solutions.
6. Automatic Updates:
o Cloud providers manage and deploy updates and patches automatically, ensuring systems are
up-to-date with the latest features and security enhancements.

7. Comparing Cloud Providers with Traditional IT Service Providers

What is Traditional Computing?

Traditional computing, as the name suggests, is the practice of using physical data centers to store digital
assets and running a complete networking system for daily operations. Access to data, software, or storage is
limited to the device or official network users are connected to. In this model of computing, a user can access
data only on the system in which it is stored.

Aspect-by-aspect comparison of cloud computing and traditional computing:

Definition
 Cloud Computing: the delivery of different services, such as data and programs, through the internet
on different servers.
 Traditional Computing: the delivery of different services on a local server.

Infrastructure Location
 Cloud Computing: takes place on third-party servers hosted by third-party hosting companies.
 Traditional Computing: takes place on physical hard drives and website servers.

Data Accessibility
 Cloud Computing: users can access data anywhere at any time.
 Traditional Computing: users can access data only on the system in which it is stored.

Cost Effectiveness
 Cloud Computing: more cost-effective, because the operation and maintenance of servers is shared
among several parties, which in turn reduces the cost of public services.
 Traditional Computing: less cost-effective, because one has to buy expensive equipment to operate
and maintain the server.

User-Friendliness
 Cloud Computing: more user-friendly, because users can access data anytime, anywhere, using the
internet.
 Traditional Computing: less user-friendly, because data cannot be accessed anywhere; to access data
on another system, users must save it to an external storage medium.

Internet Dependency
 Cloud Computing: requires a fast, reliable, and stable internet connection to access information
anywhere at any time.
 Traditional Computing: does not require any internet connection to access data or information.

Storage and Computing Power
 Cloud Computing: provides more storage space and servers, as well as more computing power, so
applications and software run faster and more effectively.
 Traditional Computing: provides less storage compared to cloud computing.

Scalability and Elasticity
 Cloud Computing: provides scalability and elasticity, i.e., storage capacity, server resources, etc.,
can be increased or decreased according to business needs.
 Traditional Computing: does not provide scalability and elasticity.

Maintenance and Support
 Cloud Computing: the service is maintained by the provider's support team.
 Traditional Computing: requires an in-house team to maintain and monitor the system, which takes
a lot of time and effort.

Software Delivery Model
 Cloud Computing: software is offered as an on-demand service (SaaS) that can be accessed through
a subscription.
 Traditional Computing: software is purchased individually for every user and must be updated
periodically.

Comparing Cloud Providers with Traditional IT Service Providers

1. Cost Structure:
o Cloud Providers: Operate on a pay-as-you-go or subscription basis, reducing upfront capital
costs and providing predictable expenses.
o Traditional IT: Involves significant capital investment in hardware and software, with
additional ongoing maintenance costs.
2. Scalability:
o Cloud Providers: Offer on-demand scalability with the ability to rapidly adjust resources based
on demand.
o Traditional IT: Scaling requires additional hardware and infrastructure, which can be time-
consuming and expensive.
3. Management:
o Cloud Providers: Manage and maintain infrastructure, including security, updates, and
backups, allowing organizations to focus on their core business activities.
o Traditional IT: Requires internal resources to manage and maintain hardware and software,
which can divert focus from core business operations.

4. Deployment Time:
o Cloud Providers: Services and applications can be deployed quickly, often within minutes.
o Traditional IT: Deployment can be lengthy due to the need to purchase, install, and configure
hardware and software.
5. Accessibility:
o Cloud Providers: Services are accessible from anywhere with an internet connection,
supporting remote work and global collaboration.
o Traditional IT: Access is often limited to on-premises environments or requires VPNs for
remote access.
6. Disaster Recovery:
o Cloud Providers: Typically offer built-in disaster recovery solutions and data redundancy as
part of their service offerings.
o Traditional IT: Disaster recovery solutions can be complex and costly, requiring additional
infrastructure and management.
Unit -II
…………………………………………………………………………………………………………………
Syllabus: Services Virtualization Technology and Study of Hypervisor: Utility Computing, Elastic
computing & grid computing. Study of Hypervisor, Virtualization applications in enterprises, High-
performance computing, Pitfalls of virtualization. Multitenant software: Multi-entity support, Multi-schema
approach.
…………………………………………………………………………………………………………………
Virtualization Technology Overview

Virtualization is used to create a virtual version of an underlying service. With the help of virtualization,
multiple operating systems and applications can run on the same machine and the same hardware at the same
time, increasing the utilization and flexibility of the hardware. It was initially developed during the mainframe era.

 Host Machine: The machine on which the virtual machine is going to be built is known as Host
Machine.

 Guest Machine: The virtual machine is referred to as a Guest Machine.

Virtualization technology is a transformative approach in computing that allows multiple virtual environments
or virtual machines (VMs) to operate on a single physical hardware system. By abstracting the hardware
resources, virtualization creates isolated virtual instances, each capable of running its own operating system
and applications. This abstraction is facilitated by a software layer known as the hypervisor, which manages
the distribution of physical resources, such as CPU, memory, and storage, among the VMs. Virtualization
enhances resource utilization, improves management efficiency, and provides a level of isolation that can
bolster security and streamline testing and development processes.
Uses of Virtualization

 Data-integration

 Business-integration

 Service-oriented architecture data-services

 Searching organizational data

Benefits of Virtualization

 More flexible and efficient allocation of resources.

 Enhance development productivity.

 It lowers the cost of IT infrastructure.

 Remote access and rapid scalability.

 High availability and disaster recovery.

 Pay-per-use of the IT infrastructure, on demand.

 Enables running multiple operating systems.

Drawback of Virtualization

 High Initial Investment: Cloud and virtualization setups require a very high initial investment,
though they help reduce companies' costs over time.

 Learning New Infrastructure: As companies shift from servers to the cloud, they require highly
skilled staff who can work with the cloud easily; for this, you have to hire new staff or provide
training to current staff.

 Risk of Data: Hosting data on third-party resources can put the data at risk, since it has the
chance of being attacked by a hacker or cracker.

What is a hypervisor?

A hypervisor, also known as a virtual machine monitor (VMM), is a piece of software that allows us to build
and run virtual machines (VMs). A hypervisor allows a single host computer to support multiple virtual
machines by sharing resources, including memory and processing.
What is the use of a hypervisor?

Hypervisors allow the use of more of a system's available resources and provide greater IT versatility because
the guest VMs are independent of the host hardware, which is one of the major benefits of the hypervisor.

In other words, this implies that VMs can be quickly migrated between servers. Because a hypervisor allows
several virtual machines to operate on a single physical server, it helps us to reduce:

 The space required by servers
 The energy used
 The maintenance requirements of the server

Hypervisor Types

There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as
"hosted"). A type 1 hypervisor functions as a light operating system that operates directly on the host's
hardware, while a type 2 hypervisor functions as a software layer on top of an operating system, similar to
other computer programs.

The Type 1 hypervisor

The native or bare metal hypervisor, the Type 1 hypervisor is known by both names.

It replaces the host operating system, and the hypervisor schedules VM services directly to the hardware.The
type 1 hypervisor is very much commonly used in the enterprise data center or other server-based
environments.

It includes KVM, Microsoft Hyper-V, and VMware vSphere. If we are running the updated version of the
hypervisor then we must have already got the KVM integrated into the Linux kernel in 2007.
The Type 2 hypervisor

It is also known as a hosted hypervisor, The type 2 hypervisor is a software layer or framework that runs on a
traditional operating system.

It operates by separating the guest and host operating systems. The host operating system schedules VM
services, which are then executed on the hardware.

Individual users who wish to run multiple operating systems on a personal computer should use a Type 2
hypervisor.
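As a hedged illustration of interacting with a hypervisor programmatically, the sketch below uses the libvirt
Python bindings to connect to a local KVM/QEMU host and list its virtual machines. It assumes the
libvirt-python package is installed and a libvirt daemon is reachable at qemu:///system.

```python
# Minimal sketch: enumerate the VMs managed by a local KVM/QEMU hypervisor.
# Assumes libvirt-python is installed and libvirtd is running (qemu:///system).
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "stopped"
        print(f"{domain.name()}: {state}")
finally:
    conn.close()
```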

What is a cloud hypervisor?

Hypervisors are a key component of the technology that enables cloud computing since they are a software
layer that allows one host device to support several virtual machines at the same time.

Hypervisors allow IT to retain control over a cloud environment's infrastructure, processes, and sensitive data
while making cloud-based applications accessible to users in a virtual environment.

Increased emphasis on creative applications is being driven by digital transformation and increasing consumer
expectations. As a result, many businesses are transferring their virtual computers to the cloud.

Utility Computing

Utility computing is a service model in which computing resources are provided and billed based on actual
usage, akin to traditional utilities like electricity or water. In this model, resources such as

processing power, storage, and network bandwidth are offered on an on-demand basis. This approach enables
businesses and individuals to scale resources dynamically according to their needs, leading to potential cost
savings since users only pay for the resources they consume. Utility computing supports flexible scaling and
resource allocation, making it well-suited for applications with varying workloads and for scenarios where the
demand for resources fluctuates significantly.

Utility computing, as the name suggests, is a type of computing that provides services and computing
resources to customers. It is essentially a facility provided to users on demand, with charges based on
specific usage. It is similar to cloud computing and therefore requires cloud-like infrastructure.

Utility computing examples

Virtually any activity performed in a data center can be replicated in a utility computing offering. Services
available include the following:

 Access to file, application, and web servers;
 Infrastructure as a service, software as a service, and platform as a service;
 Virtually unlimited processing power and computation storage space;
 Support for customer computing applications;
 Storage space for data, databases, and applications.
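To make the metered, pay-per-use model concrete, here is a small sketch that computes a monthly bill from
usage. The unit rates are made-up illustrative numbers, not any provider's actual pricing.

```python
# Illustrative utility-computing bill: pay only for what you use.
# The rates below are made-up numbers, not real provider pricing.
RATES = {
    "compute_hours": 0.05,     # $ per VM-hour
    "storage_gb_month": 0.02,  # $ per GB-month
    "egress_gb": 0.09,         # $ per GB transferred out
}

usage = {"compute_hours": 720, "storage_gb_month": 500, "egress_gb": 50}

bill = sum(RATES[item] * amount for item, amount in usage.items())
print(f"Monthly bill: ${bill:.2f}")  # 720*0.05 + 500*0.02 + 50*0.09 = $50.50
```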

Elastic Computing

EC2 stands for Elastic Compute Cloud. EC2 is an on-demand computing service on the AWS cloud platform.
Under computing, it includes all the services a computing device can offer, along with the flexibility of a
virtual environment. It also allows users to configure their instances as per their requirements, i.e., allocate
the RAM, ROM, and storage according to the needs of the current task. The user can even dismantle the
virtual device once its task is completed and it is no longer required. For providing these scalable resources,
AWS charges a bill at the end of every month; the amount depends entirely on your usage. EC2 allows you to
rent virtual computers and is one of the easiest ways to provision servers on the AWS cloud. EC2 has
resizable capacity and offers security, reliability, high performance, and cost-effective infrastructure to meet
demanding business needs.

Elastic computing refers to the capability of dynamically adjusting computing resources to meet varying
workload demands. This dynamic scalability allows systems to expand or contract resource allocations—such
as CPU, memory, and storage—based on real-time needs. The principle of elastic computing ensures that
resources are used efficiently, reducing costs by aligning resource availability with demand. This approach
involves pooling resources from multiple servers or systems to create a flexible and scalable infrastructure.
Elastic computing is particularly valuable in cloud environments, where users benefit from the ability to
handle both anticipated and unexpected changes in workload.

Elastic computing refers to a scenario in which the overall resource footprint available in a system or
consumed by a specific job can grow or shrink on demand. This usually relies on external cloud computing
services, where the local cluster provides only part of the resource pool available to all jobs. However, elastic
computing may also be implemented on standalone clusters.
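As a hedged sketch of elasticity at the single-instance level (vertical scaling), the snippet below resizes an
EC2 instance to a larger type with boto3. The instance ID is a placeholder, and EC2 requires the instance to
be stopped before its type can be changed.

```python
# Minimal elasticity sketch: vertically scale an EC2 instance with boto3.
# The instance ID is a placeholder; the instance must be stopped before its
# type can be modified.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # placeholder

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.large"},  # scale up to meet demand
)

ec2.start_instances(InstanceIds=[instance_id])
```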

Grid Computing

Grid computing is a distributed architecture that combines computer resources from different locations to
achieve a common goal. It breaks down tasks into smaller subtasks, allowing concurrent processing.

Grid computing involves connecting a network of computers to work collaboratively on complex tasks by
pooling their computational resources. This distributed approach enables the sharing of processing power,
storage, and network capabilities across multiple nodes, often spanning different locations. Grid computing is
particularly effective for solving large-scale problems that require substantial computational power, such as
scientific simulations or data analysis tasks. By leveraging the combined resources of multiple machines, grid
computing provides a scalable and cost-effective solution for high-performance tasks that exceed the
capabilities of individual systems.

What is Grid Computing?

Grid Computing can be defined as a network of computers working together to perform a task that would
rather be difficult for a single machine. All machines on that network work under the same protocol to act as a
virtual supercomputer. The tasks that they work on may include analyzing huge datasets or simulating
situations that require high computing power. Computers on the network contribute resources like processing
power and storage capacity to the network.

Why is Grid Computing Important?

 Scalability: It allows organizations to scale their computational resources dynamically. As workloads
increase, additional machines can be added to the grid, ensuring efficient processing.

 Resource Utilization: By pooling resources from multiple computers, grid computing maximizes
resource utilization. Idle or underutilized machines contribute to tasks, reducing wastage.

 Complex Problem Solving: Grids handle large-scale problems that require significant computational
power. Examples include climate modeling, drug discovery, and genome analysis.

 Collaboration: Grids facilitate collaboration across geographical boundaries. Researchers, scientists,
and engineers can work together on shared projects.

 Cost Savings: Organizations can reuse existing hardware, saving costs while accessing excess
computational resources. Additionally, cloud resources can be added cost-effectively.
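Grid middleware coordinates many machines, but the core idea of decomposing a job into concurrent
subtasks can be sketched on a single machine. The snippet below is only an analogy using Python's standard
library; in a real grid, the worker pool would span separate networked computers.

```python
# Single-machine analogy of grid computing: split a large job into subtasks
# and process them concurrently on a pool of workers, then combine results.
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk):
    # Stand-in for a compute-heavy subtask (e.g., one slice of a simulation).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(analyze_chunk, chunks))
    print("Combined result:", sum(partial_results))
```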

Applications of Hypervisor Virtualization in Enterprises

Hypervisor virtualization has become a cornerstone of modern IT infrastructure in enterprises, offering a range
of benefits that streamline operations, reduce costs, and enhance flexibility. Here’s a detailed look at how
hypervisor virtualization is applied in enterprise environments:
1. Server Consolidation

Overview: Server consolidation refers to the practice of reducing the number of physical servers by running
multiple virtual machines (VMs) on fewer physical hosts.

This is achieved through virtualization technology, which allows multiple VMs, each with its own operating
system and applications, to operate on a single physical server.

Benefits:

 Cost Reduction: Decreases the need for physical hardware, leading to lower capital expenditure on
servers and reduced operational costs related to power, cooling, and physical space.
 Efficient Resource Utilization: Optimizes the use of server resources (CPU, memory, storage) by
allowing better allocation and usage compared to traditional single-application servers.
 Simplified Management: Reduces the complexity of managing numerous physical servers, making
system administration and monitoring more straightforward.

Implementation Example: An enterprise with multiple underutilized servers might consolidate these servers
into a few high-performance physical machines. By running VMs for different applications or departments on
these machines, the enterprise can maximize hardware usage and lower overall infrastructure costs.

2. Development and Testing Environments

Overview: Virtualization provides isolated environments that can be easily created, modified, and destroyed.
This is particularly useful for development and testing purposes, where different configurations and versions of
applications need to be tested without impacting production systems.

Benefits:

 Isolation: Developers and testers can work in environments that replicate production conditions
without risking the stability of the live environment.
 Cost Efficiency: Enables the creation of multiple test environments on a single physical server,
reducing hardware costs.
 Flexibility: Allows for rapid deployment and teardown of test environments, making it easier to test
various scenarios and configurations.

Implementation Example: A software development team might use VMs to create various configurations of
their application for testing purposes. They can quickly spin up VMs with different operating systems or
application versions, conduct their tests, and then decommission the VMs once testing is complete.
3. Disaster Recovery and Business Continuity

Overview: Disaster recovery (DR) and business continuity plans benefit greatly from virtualization.
Virtualization simplifies the replication and restoration of IT systems in the event of a disaster, ensuring
minimal downtime and quick recovery.

Benefits:

 VM Snapshots and Cloning: Enables the creation of snapshots and clones of VMs, which can be used
to restore systems to a previous state in case of failure.
 Geographic Flexibility: Allows replication of VMs to offsite locations, ensuring that backup systems
are available even if the primary data center is compromised.
 Rapid Recovery: Facilitates quick recovery by allowing VMs to be moved or copied to alternative
hardware in case of system failure.

Implementation Example: An enterprise might implement a DR solution where critical applications are
virtualized and replicated to a secondary data center. In the event of a failure at the primary site, the VMs can
be quickly activated at the secondary site, minimizing downtime and maintaining business operations.

4. Desktop Virtualization

Overview: Desktop virtualization involves running desktop operating systems and applications on centralized
servers rather than on individual user devices. Users access their desktops remotely through thin clients or
other devices.

Benefits:

 Centralized Management: Simplifies the management and updating of desktop environments, as
changes can be made on the server and propagated to all users.
 Security: Enhances security by keeping data centralized and reducing the risk of data loss or theft on
individual devices.
 Flexibility and Mobility: Allows users to access their desktop environments from various locations
and devices, supporting remote work and flexible working arrangements.

Implementation Example: An organization might deploy virtual desktop infrastructure (VDI) where
employees use thin clients or personal devices to connect to virtual desktops hosted on central servers. This
setup allows for easy updates, improved security, and consistent user experiences across different locations.
5. Server and Application Isolation

Overview: Virtualization provides isolation between different applications or services running on the same
physical server. This isolation helps in managing dependencies and conflicts between applications and
enhances security by compartmentalizing processes.

Benefits:

 Improved Security: Isolates applications and services, reducing the risk of one application affecting
the stability or security of another.
 Resource Management: Allocates resources (CPU, memory) to each VM based on application needs,
preventing resource contention and performance issues.
 Conflict Resolution: Minimizes compatibility issues between different applications or services by
running them in separate virtual environments.

Implementation Example: An enterprise might run different business-critical applications (e.g., email
servers, databases, web applications) in separate VMs on a single physical server. This setup ensures that if
one application encounters issues, it does not impact the others, and performance can be tuned individually for
each application.

6. Scalability and Flexibility

Overview: Virtualization provides scalability and flexibility in managing IT resources, allowing enterprises to
adjust their infrastructure based on changing demands.

Benefits:

 Dynamic Scaling: Resources can be scaled up or down based on workload requirements, allowing
enterprises to respond quickly to changes in demand.
 Resource Pooling: Pools resources from multiple physical servers to provide a flexible and scalable
environment.
 Cost Efficiency: Reduces the need for over-provisioning and enables more efficient use of resources.

Implementation Example: An e-commerce company experiencing high traffic during peak seasons can use
virtualization to quickly scale their infrastructure by adding additional VMs to handle the increased load. Once
the peak period ends, resources can be scaled back down to save costs.

7. Testing and Validation of New Technologies

Overview: Virtualization allows enterprises to test new technologies and configurations in isolated
environments before deploying them in production.
Benefits:

 Risk Mitigation: Tests new technologies without impacting existing systems, reducing the risk of
disruptions.
 Cost Savings: Avoids the need for additional physical hardware for testing purposes.
 Accelerated Innovation: Facilitates rapid experimentation with new tools and configurations.

Implementation Example: A company exploring a new database management system can deploy it in a
virtual environment to evaluate its performance and compatibility with existing applications. If successful,
they can then consider a full-scale deployment in the production environment.

High-Performance Computing (HPC) and Virtualization

High-Performance Computing (HPC) involves using supercomputers or clusters of computers to solve
complex computational problems that require substantial processing power. These problems often arise in
scientific research, engineering simulations, financial modeling, and other fields that need massive
computational resources. Virtualization in HPC can provide numerous benefits, but it also poses certain
challenges that need to be addressed to fully leverage its advantages.

1. Benefits of Virtualization in HPC

1.1 Improved Resource Utilization: Virtualization allows multiple virtual machines (VMs) to share the same
physical resources. In HPC environments, this means that the computational power of a cluster can be more
efficiently used by running multiple virtual instances on each physical node. This helps in maximizing the
utilization of expensive HPC hardware.

1.2 Flexibility and Scalability: Virtualization provides the ability to quickly provision and de-provision VMs
based on workload demands. For HPC applications, this means that resources can be dynamically allocated as
needed. When a particular simulation or computation requires additional resources, VMs can be spun up to
meet the demand. Conversely, when the workload decreases, resources can be scaled back to avoid waste.

1.3 Simplified Management: Managing a large number of physical servers can be complex and resource-
intensive. Virtualization abstracts the underlying hardware, making it easier to manage and maintain
computing resources. Tasks such as provisioning, monitoring, and maintaining systems can be streamlined
through virtualization management tools, reducing administrative overhead.

1.4 Isolation and Fault Tolerance: Virtualization provides isolation between different VMs, which can be
advantageous in an HPC environment. It ensures that if one VM experiences a failure or issues, other VMs can
continue to operate without disruption. This isolation also helps in testing and development, allowing new
software or configurations to be tested in a controlled environment without affecting production workloads.
1.5 Cost Efficiency: By consolidating multiple workloads onto fewer physical servers, virtualization can
reduce hardware costs. This is particularly beneficial in HPC environments where hardware is often expensive.
Virtualization allows organizations to leverage their hardware investments more effectively and reduce overall
infrastructure costs.

2. Challenges and Considerations

2.1 Performance Overhead: One of the primary concerns with virtualization in HPC is the performance
overhead introduced by the hypervisor. Virtualization adds an additional layer between the hardware and the
application, which can lead to performance degradation compared to running directly on physical hardware.
This overhead can be significant in HPC applications where performance is critical.

2.2 Resource Contention: In a virtualized HPC environment, multiple VMs sharing the same physical
resources can lead to resource contention. Proper management and allocation of resources are essential to
ensure that high-priority applications receive the necessary computational power. Virtualization solutions must
be carefully configured to balance workloads and prevent performance bottlenecks.

2.3 Compatibility and Support: Not all HPC applications are well-suited for virtualization. Some
applications may require direct access to hardware features or have specific performance requirements that are
difficult to meet in a virtualized environment. It's important to evaluate the compatibility of HPC applications
with virtualization and ensure that they can operate effectively within virtual machines.

2.4 Complexity in Management: While virtualization can simplify management in some respects, it also
introduces additional complexity. Managing a virtualized HPC environment requires expertise in both
virtualization technology and HPC workloads. This includes understanding how to configure and optimize the
hypervisor, as well as how to manage resource allocation and performance tuning.

2.5 Security Considerations: Virtualization introduces new security challenges, such as the potential for
vulnerabilities in the hypervisor that could affect multiple VMs. Ensuring that the hypervisor is secure and
properly configured is crucial to maintaining the security of the entire HPC environment. Additionally, proper
isolation between VMs is necessary to prevent unauthorized access to sensitive data or applications.

3. Best Practices for Virtualizing HPC Environments

3.1 Performance Optimization: To minimize performance overhead, choose hypervisors and virtualization
technologies that are optimized for HPC workloads. Look for features such as hardware acceleration and
resource management tools that can help mitigate performance impacts.

3.2 Resource Allocation and Management: Implement resource allocation strategies to ensure that critical
applications receive the necessary resources. Use virtualization management tools to monitor resource usage
and dynamically adjust allocations based on workload demands.
3.3 Compatibility Testing: Thoroughly test HPC applications in a virtualized environment before full
deployment. Assess how well applications perform in virtual machines and make any necessary adjustments to
ensure compatibility and performance.

3.4 Security Practices: Adopt robust security practices to protect the virtualized HPC environment. This
includes keeping the hypervisor up-to-date with security patches, using strong access controls, and regularly
auditing the virtual environment for potential vulnerabilities.

3.5 Training and Expertise: Ensure that IT staff have the necessary expertise in both virtualization and HPC
to manage and optimize the environment effectively. Investing in training and professional development can
help address the complexities of managing virtualized HPC infrastructure.

Pitfalls of Virtualization

Despite its advantages, virtualization comes with certain challenges. Performance overhead is a significant
concern, as the additional layer of abstraction introduced by the hypervisor can lead to reduced efficiency
compared to direct hardware access. The complexity of managing virtual environments also requires
specialized skills and tools to ensure effective operation and maintenance. Additionally, security risks are
amplified in virtualized environments, where multiple VMs share the same physical host. This scenario
necessitates robust security measures to prevent potential vulnerabilities and attacks that could affect multiple
virtual instances.

1. Performance Overhead

 Issue: Additional layer (hypervisor) between hardware and VMs can cause performance degradation.
 Impact: Reduced efficiency and increased latency.
 Mitigation: Use high-performance hypervisors, leverage hardware-assisted virtualization, and monitor
performance regularly.

2. Resource Contention and Overcommitment

 Issue: Multiple VMs sharing the same physical resources can lead to contention and overcommitment.
 Impact: Performance degradation and unpredictable behavior.
 Mitigation: Implement resource management policies, monitor utilization, and conduct capacity
planning.

3. Security Concerns

 Issue: Hypervisor vulnerabilities and potential data isolation issues between VMs.
 Impact: Risk of data breaches and unauthorized access.
 Mitigation: Harden the hypervisor, ensure strong VM isolation, and conduct regular security audits.
4. Complexity in Management

 Issue: Increased complexity due to the interplay between virtual and physical infrastructure.
 Impact: Higher administrative overhead and troubleshooting difficulties.
 Mitigation: Use comprehensive management tools, standardize configurations, and invest in staff
training.

5. Data Backup and Recovery Challenges

 Issue: Virtualized environments complicate traditional backup and recovery processes.


 Impact: Complexity in backups and potentially longer recovery times.
 Mitigation: Use virtualization-aware backup solutions, regularly test backups, and automate backup
processes.

6. Vendor Lock-In

 Issue: Proprietary technologies and tools can lead to dependency on a single vendor.
 Impact: Challenges in migrating workloads and reduced flexibility.
 Mitigation: Adopt open standards, plan for portability, and evaluate vendor options carefully.

Multitenant Software: Multi-Entity Support

Multitenant software is designed to serve multiple customers or tenants from a single software instance while
maintaining data and configuration isolation. This model allows each tenant to operate in a shared environment
without interfering with other tenants' data or operations. Multi-entity support in such software ensures that
different organizations or users can customize their experiences while leveraging a common application
framework. This approach is particularly advantageous for SaaS (Software as a Service) applications, where
cost-efficiency and scalability are key.

Definition: Multitenancy allows multiple clients (tenants) to use the same software instance while keeping
their data and configurations isolated.

Key Aspects:

1. Data Isolation:
o Purpose: Ensures each tenant's data is separate and secure.
o Method: Uses separate schemas or tables in the database.
2. Configuration Isolation:
o Purpose: Allows each tenant to have custom settings and preferences.
o Method: Manages through tenant-specific configuration settings.
3. Access Control:
o Purpose: Restricts data and functionality access based on tenant identity.
o Method: Implements role-based or attribute-based access controls.
4. Resource Allocation:
o Purpose: Ensures fair distribution of computing resources among tenants.
o Method: Employs resource quotas and load balancing.

Implementation Strategies:

 Data Management: Use encryption and database partitioning.


 Customization: Allow tenant-specific settings and feature toggles.
 Security: Implement strong access controls and regular audits.
 Scalability: Use dynamic scaling and performance monitoring.

Challenges:

 Management Complexity: Handling multiple tenant configurations.


 Data Security: Keeping data isolated and secure.
 Performance Balance: Preventing one tenant’s usage from affecting others.
 Compliance: Meeting varied regulatory requirements.

Multitenant software efficiently serves multiple clients by ensuring data isolation, customizable configurations,
and secure, fair resource allocation.
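One lightweight way to realize data isolation and access control in a shared database is to key every row by
tenant and scope every query to the authenticated tenant's ID. The sketch below illustrates the idea with
Python's built-in sqlite3 module; the table and column names are illustrative, not from any particular product.

```python
# Minimal shared-schema multitenancy sketch: every row carries a tenant_id,
# and every query is filtered by the caller's tenant. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?)",
    [("acme", 120.0), ("acme", 75.0), ("globex", 900.0)],
)

def invoices_for(tenant_id):
    # Access control: scope every query to the authenticated tenant.
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return [amount for (amount,) in rows]

print(invoices_for("acme"))    # [120.0, 75.0] - cannot see globex's rows
print(invoices_for("globex"))  # [900.0]
```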

Multi-Schema Approach

Multi-Schema Approach: In a multi-schema approach, each tenant's data is stored in a separate schema
within the same database. A schema is a logical container that holds database objects such as tables, views, and
procedures.

Benefits

1. Data Isolation:
o Purpose: Ensures that data from different tenants is kept separate, providing security and
privacy.
o Method: Each tenant’s data resides in its own schema, preventing accidental access or leakage.
2. Simplified Management:
o Purpose: Easier to manage and maintain data structures for each tenant.
o Method: Admins can handle backups, updates, and schema changes on a per-schema basis.
3. Customizability:
o Purpose: Allows customization of database objects and structures per tenant.
o Method: Schema-specific customizations can be implemented without affecting other tenants.
4. Performance Optimization:
o Purpose: Helps optimize performance by isolating data access patterns.
o Method: Database queries and operations are scoped to a specific schema, reducing the risk of
performance bottlenecks caused by inter-tenant data.

Implementation

1. Schema Design:
o Structure: Design separate schemas for each tenant within the same database.
o Objects: Define tables, indexes, and other database objects within each schema.
2. Access Control:
o Security: Implement access controls to ensure that users can only access their respective
schemas.
o Authentication: Use tenant-specific authentication mechanisms to enforce access restrictions.
3. Backup and Recovery:
o Backup: Perform backups at the schema level to isolate tenant data and simplify recovery
processes.
o Recovery: Restore specific schemas as needed without affecting others.
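On a database with native schema support, such as PostgreSQL, the multi-schema approach can be sketched
as below using psycopg2. The DSN and tenant names are placeholders; a real deployment would add
migrations, per-tenant database roles, and validation of tenant identifiers (they are interpolated into DDL
here, so they must never come from untrusted input).

```python
# Hedged schema-per-tenant sketch for PostgreSQL via psycopg2.
# DSN and tenant names are placeholders; tenant identifiers are interpolated
# into SQL, so they must be validated against a trusted list first.
import psycopg2

def provision_tenant(dsn, tenant):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Each tenant gets its own schema containing its own tables.
        cur.execute(f'CREATE SCHEMA IF NOT EXISTS "{tenant}"')
        cur.execute(
            f'CREATE TABLE IF NOT EXISTS "{tenant}".orders '
            "(id SERIAL PRIMARY KEY, total NUMERIC)"
        )

def count_orders(dsn, tenant):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Scope unqualified table names in this session to the tenant's schema.
        cur.execute(f'SET search_path TO "{tenant}"')
        cur.execute("SELECT count(*) FROM orders")
        return cur.fetchone()[0]
```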

Challenges

1. Schema Management:
o Issue: Managing a large number of schemas can be complex and resource-intensive.
o Solution: Use automation and management tools to handle schema creation and maintenance.
2. Performance Considerations:
o Issue: Performance can be impacted if the database becomes too large or if schemas are not
properly optimized.
o Solution: Monitor performance and optimize schemas and indexes to ensure efficient data
access.
3. Scalability:
o Issue: As the number of tenants grows, managing many schemas may become challenging.
o Solution: Plan for scalability with efficient schema management practices and consider
database partitioning or sharding if needed.
4. Data Migration:
o Issue: Migrating data between schemas or between different environments can be complex.
o Solution: Develop a robust data migration strategy and use tools to automate and streamline the
process.
Unit III:

Installing cloud platforms and performance evaluation: Organizational scenarios of


clouds, Administering & Monitoring cloud services, load balancing, Resource optimization, Resource
dynamic reconfiguration, implementing real time application, Mobile Cloud Computing and edge computing.

1. Cloud Platforms and Performance Evaluation

 Cloud Platforms Overview:


o Provide on-demand computing resources over the internet.
o Types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service
(SaaS).

Cloud computing performance evaluation is the process by which companies assess how well their cloud
computing resources are operating. By migrating to the cloud, you will tap into virtually limitless scaling and
flexibility. However, being on the cloud does not guarantee performance. Compared to on-premises systems,
you may be surprised at the slowdown in performance once you migrate data-intensive workloads and very
large data sets to the cloud. Cloud computing performance evaluation allows you to get a clear picture of
which components in your cloud environment are draining performance.

 Performance Evaluation Metrics:


o Latency:
 Definition: Time taken for a request to travel from the client to the server and back.
 Importance: Critical for user experience in real-time applications.
o Throughput:
 Definition: The number of requests processed in a specific time frame (requests per
second).
 Importance: Indicates the capacity and efficiency of the system.
o Scalability:
 Definition: The ability to handle increasing loads by adding resources (vertical vs.
horizontal scaling).
 Importance: Essential for applications with variable workloads.
o Availability:
 Definition: The percentage of time services are operational (often measured as uptime).
 Importance: Critical for business continuity; often quantified in SLAs (Service Level
Agreements).
o Cost Efficiency:
 Definition: Balancing operational costs with performance needs.
 Importance: Helps organizations stay within budget while meeting performance
expectations.
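A rough way to observe the first two metrics above is a small probe script. This is only a sketch: it assumes the third-party requests library, and the URL is a placeholder:

import time
import requests

URL = "https://example.com/health"  # placeholder endpoint
N = 20

latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    requests.get(URL, timeout=5)        # one round trip
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"average latency: {sum(latencies) / N * 1000:.1f} ms")
print(f"throughput:      {N / elapsed:.1f} requests/second")

A real performance evaluation would run many concurrent clients over a longer window, but the definitions are the same: latency is per-request round-trip time, and throughput is completed requests per unit time.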

2. Organizational Scenarios of Clouds

 Public Cloud:
o Characteristics: Resources shared among multiple organizations; managed by third-party
providers.
o Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform.
o Use Cases: Startups, development/testing environments, and applications with variable demand.
 Private Cloud:
o Characteristics: Exclusive use by a single organization; greater control and customization.
o Deployment: On-premises or hosted by a third-party provider.
o Use Cases: Regulated industries (healthcare, finance), sensitive workloads.
 Hybrid Cloud:
o Characteristics: Combination of public and private clouds; allows for flexibility and data
sharing.
o Benefits: Balances the need for security and control with scalability and cost-effectiveness.
o Use Cases: Seasonal workloads, data backup and recovery.
 Community Cloud:
o Characteristics: Infrastructure shared among several organizations with similar concerns.
o Management: Can be managed by one of the organizations or a third party.
o Use Cases: Collaborative projects, research organizations.

3. Administering & Monitoring Cloud Services

 Administration:
o User Management:
 Role-based access control (RBAC) to restrict permissions based on user roles.
 Identity and Access Management (IAM) solutions to manage user identities and
permissions.
o Service Provisioning:
 Automating the deployment of resources using Infrastructure as Code (IaC) tools (e.g.,
Terraform, CloudFormation).
 Lifecycle management of cloud resources.

 Monitoring:
o Monitoring Tools:
 Amazon CloudWatch, Azure Monitor, Google Cloud Operations.
 Features include performance dashboards, log analytics, and alerting.
o Key Metrics to Monitor:
 Resource utilization (CPU, memory, disk I/O).
 Application performance (response times, error rates).
 Cost management (spending trends, budget alerts).

Cloud monitoring is the process of evaluating the health of cloud-based IT infrastructures. Using cloud-
monitoring tools, organizations can proactively monitor the availability, performance, and security of their
cloud environments to find and fix problems before they impact the end-user experience. Cloud monitoring
assesses three main areas: performance, security, and compliance.
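As an illustration of reading one of these key metrics programmatically, the sketch below pulls average CPU utilization from Amazon CloudWatch using boto3; it assumes AWS credentials are configured, and the instance ID is a placeholder:

import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # one datapoint per 5 minutes
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")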
4. Load Balancing
Definition: The process of distributing incoming network traffic across multiple servers to optimize
resource use and prevent overload.
Load balancing is an essential technique used in cloud computing to optimize resource utilization and ensure
that no single resource is overburdened with traffic. It is a process of distributing workloads across multiple
computing resources, such as servers, virtual machines, or containers, to achieve better performance,
availability, and scalability.

 Types of Load Balancing:


o Round Robin: Distributes requests sequentially across all available servers.
o Least Connections: Sends requests to the server with the least number of active connections,
suitable for long-lived connections.
o IP Hash: Uses the client’s IP address to determine which server should handle the request,
providing session persistence.
 Benefits:
o Improves application availability and reliability.
o Enhances resource utilization and responsiveness.
o Facilitates failover, ensuring service continuity.
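The three strategies listed above can be captured in a few lines of Python. This is a toy sketch with illustrative server names, not a production balancer:

import itertools

servers = ["web-1", "web-2", "web-3"]

# Round robin: hand out servers in a fixed rotation.
rr = itertools.cycle(servers)
def round_robin():
    return next(rr)

# Least connections: pick the server with the fewest active connections.
active = {s: 0 for s in servers}
def least_connections():
    target = min(active, key=active.get)
    active[target] += 1        # caller must decrement when the request ends
    return target

# IP hash: the same client IP maps to the same server (within one
# process; a real balancer would use a stable hash function).
def ip_hash(client_ip):
    return servers[hash(client_ip) % len(servers)]

for _ in range(3):
    print(round_robin(), least_connections(), ip_hash("203.0.113.7"))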

5. Resource Optimization

 Techniques:
o Auto-scaling: Automatically adjusts the number of active servers based on demand.
o Rightsizing: Evaluating resource usage and adjusting instance sizes to match actual
requirements, avoiding over-provisioning.
o Cost Optimization: Using reserved instances or spot instances to reduce costs; analyzing usage
patterns for efficiency.

 Tools:
o AWS Cost Explorer, Azure Cost Management, GCP Billing Reports for analyzing and
optimizing resource costs.
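The core of an auto-scaling policy is just a threshold rule. The sketch below is a pure-Python illustration in which the thresholds and capacity bounds are made-up values, not provider defaults:

def desired_capacity(current, cpu_percent,
                     scale_out_at=70, scale_in_at=30,
                     min_cap=2, max_cap=10):
    # Scale out under load, scale in when idle, otherwise hold steady.
    if cpu_percent > scale_out_at:
        return min(current + 1, max_cap)
    if cpu_percent < scale_in_at:
        return max(current - 1, min_cap)
    return current

print(desired_capacity(current=3, cpu_percent=85))  # -> 4
print(desired_capacity(current=3, cpu_percent=20))  # -> 2

Managed services such as AWS Auto Scaling apply the same idea, driven by monitored metrics rather than a hard-coded input.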

6. Resource Dynamic Reconfiguration

 Dynamic Resource Allocation:


o Automatically reallocating resources in response to real-time demands.
 Techniques:
o VM Migration: Moving virtual machines between physical hosts to balance load or for
maintenance.
o Container Orchestration: Tools like Kubernetes manage containers dynamically, scaling them
up or down based on current load.
 Benefits: Increased efficiency and availability, reduced downtime.

7. Implementing Real-Time Applications


Cloud computing is enabling businesses to take advantage of the latest technologies without having to spend
fortunes on costly software, hardware and IT services. Today, many businesses and companies have embraced
cloud computing and they are using it in different ways. Here are some of the most common ways through
which businesses are applying cloud computing.

 Communication:
Email is one of the most popular communication methods that businesses and companies use today. The service is evolving quickly, becoming more reliable and faster. Today, most businesses run email campaigns for clients and use email to store important data about their customers. Through cloud computing, webmail clients can use cloud storage while providing analytics on email data from any location globally. Companies also use cloud-based SaaS apps to give instant access to enterprise information from any location. In short, cloud computing has made it easier for companies and businesses to execute internal processes smoothly.

 Collaboration:
Cloud computing has made it easier for employees, clients, and businesses to collaborate. Sharing files and documents is simpler, enabling connections that are easy and less time-consuming. Google Wave, for instance, enables users to create files and then invite other users to edit, collaborate on, or comment on them. Collaboration with cloud computing resembles instant messaging, but it supports complete, specific tasks that take just hours instead of months to accomplish.
 Data storage:
Businesses use cloud computing solutions to store crucial data. Data stored on a business or home computer can only be accessed from that device, whereas cloud computing lets users store and access data anytime, anywhere, and from any device. This storage is also secure: each user gets a unique username and password so that only that user can access the files online, and the data itself is encrypted. Cloud storage has several security layers, which makes it extremely difficult for hackers to access the data in the cloud.
 Virtual office:
Perhaps the most popular of all real-time applications of cloud computing is the ability to rent software (i.e., SaaS) rather than buy it. For instance, Google Docs can be used to run a virtual office.

 Characteristics:
o Require low latency and quick response times; often need high throughput and scalability.
 Use Cases:
o Online gaming, financial services (trading platforms), real-time collaboration tools, IoT
applications.
 Technologies:
o WebSockets: Enables real-time bi-directional communication between clients and servers (see the sketch below).
o Message Brokers: Systems like RabbitMQ and Apache Kafka for handling real-time data streams.
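A minimal real-time echo server using the third-party websockets package (version 10 or later, where the handler takes a single connection argument) might look like the following sketch; the port is arbitrary:

import asyncio
import websockets

async def echo(websocket):
    # Bi-directional channel: each message is pushed straight back,
    # with no HTTP polling round trips.
    async for message in websocket:
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()   # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())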

8. Mobile Cloud Computing


Mobile Cloud Computing is defined as the combination of mobile computing, cloud computing, and wireless networks, brought together to provide rich computational resources to mobile users, network operators, and cloud computing providers. Mobile Cloud Computing is meant to make it possible for rich mobile applications to be executed on a wide range of mobile devices. In this technology, data processing and data storage happen outside the mobile device. Mobile Cloud Computing applications leverage this IT architecture to generate the following advantages:
1. Extended battery life.
2. Improvement in data storage capacity and processing power.
3. Improved synchronization of data due to “store in one place, accessible from anywhere” platform theme.
4. Improved reliability and scalability.
5. Ease of integration.

Characteristics of Mobile Cloud Computing Application


1. Cloud infrastructure: Cloud infrastructure is a specific form of information architecture that is used to
store data.
2. Data cache: The data can be locally cached.
3. User Accommodation: Scope of accommodating different user requirements in cloud app development is
available in mobile Cloud Computing.
4. Easy Access: It is easily accessed from desktop or mobile devices alike.
5. Cloud Apps: facilitate to provide access to a whole new range of services.

 Definition: Leveraging cloud computing services on mobile devices to enhance capabilities and user
experiences.
 Benefits:
o Access to powerful processing and storage capabilities via the cloud.
o Improved app functionality without heavy local resource usage.
o Enhanced data synchronization and sharing across devices.
 Challenges:
o Network connectivity issues impacting performance.
o Security risks associated with data transmission and storage.
o Battery consumption concerns for mobile devices.
9. Edge Computing
 Definition: A distributed computing paradigm that brings computation and data storage closer to the sources of data, i.e., the location where the data is generated.

Edge computing is an emerging computing paradigm which refers to a range of networks and devices at
or near the user. Edge is about processing data closer to where it's being generated, enabling processing at
greater speeds and volumes, leading to greater action-led results in real time.

Edge computing optimizes Internet devices and web applications by bringing computing closer to the source of
the data. This minimizes the need for long distance communications between client and server, which reduces
latency and bandwidth usage.

 Benefits:
o Reduced latency for applications needing immediate processing.
o Decreased bandwidth usage by processing data locally before sending to the cloud.
o Enhanced performance for IoT devices, smart cities, and autonomous systems.
 Applications:
o Smart homes, connected vehicles, healthcare monitoring systems, and real-time analytics.
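The bandwidth saving comes from filtering at the edge. The sketch below is a pure-Python illustration: send_to_cloud is a hypothetical uplink function standing in for an HTTPS or MQTT publish, and the thresholds are arbitrary:

def send_to_cloud(reading):
    print("uplink:", reading)       # stand-in for a network publish

def process_at_edge(readings, low=10.0, high=80.0):
    # Only unusual values leave the edge device; normal readings
    # are summarized or dropped locally.
    forwarded = 0
    for r in readings:
        if r < low or r > high:
            send_to_cloud(r)
            forwarded += 1
    print(f"forwarded {forwarded}/{len(readings)} readings")

process_at_edge([22.5, 95.1, 45.0, 3.2, 60.7])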
Unit IV:

Cloud security fundamentals & Issues in cloud computing: Secure Execution Environments and Communications in cloud, General Issues and Challenges while migrating to Cloud. The Seven-step model of migration into a cloud, Vulnerability assessment tool for cloud, Trusted Cloud computing, Virtualization security management - virtual threats, VM Security Recommendations and VM-Specific Security techniques. QoS Issues in Cloud, Dependability, data migration, challenges and risks in cloud adoption.

CLOUD SECURITY FUNDAMENTALS

Cloud computing security consists of a set of policies, controls, procedures and technologies that work
together to protect cloud-based systems, data and infrastructure. These security measures are configured to
protect cloud data, support regulatory compliance and protect customer's privacy as well as setting
authentication rules for individual users and devices.
Protection measures:
 No single person should accumulate all these privileges.
 A provider should deploy stringent security devices, restricted access control policies, and surveillance
mechanisms to protect the physical integrity of the hardware.
 By enforcing security processes, the provider itself can prevent attacks that require physical access to the
machines.
 The only way a system administrator would be able to gain physical access to a node running a customer’s
VM is by diverting this VM to a machine under his/her control, located outside the IaaS’s security
perimeter.
 The cloud computing platform must be able to confine the VM execution inside the perimeter and guarantee
that at any point a system administrator with root privileges remotely logged to a machine hosting a VM
cannot access its memory.
 TCG (Trusted Computing Group), a consortium of industry leaders formed to identify and implement security measures at the infrastructure level, proposes a set of hardware and software technologies to enable the construction of trusted platforms, and suggests the use of “remote attestation” (a mechanism that lets authorized parties detect changes to a user’s computer).

SECURE EXECUTION ENVIRONMENTS AND COMMUNICATIONS IN CLOUD

 An Execution Environment is an environment for executing code, in which those executing the code can
have high levels of trust in that surrounding environment because it can ignore threats from the rest of the
device.
This is what sets an Execution Environment apart and distinguishes it from the uncertain nature of applications. Generally, the rest of the device hosts a feature-rich OS like Android, and so is generically known in this context as the REE (Rich Operating System Execution Environment).
 Cloud communications are the blending of multiple communication modalities. These include methods such
as voice, email, chat, and video, in an integrated fashion to reduce or eliminate communication lag. Cloud
communications are essentially internet-based communication.
 Cloud communications evolved from data to voice with the introduction of VoIP (voice over Internet
Protocol). A branch of cloud communication is cloud telephony, which refers specifically to voice
communications
 Cloud communications providers host communication services through servers that they own and maintain.
The customers, in turn, access these services through the cloud and only pay for services that they use,
doing away with maintenance associated with PBX (private branch exchange) system deployment.
 Cloud communications provide a variety of communication resources, from servers and storage to
enterprise applications such as data security, email, backup and data recovery, and voice, which are all
delivered over the internet. The cloud provides a hosting environment that is flexible, immediate, scalable,
secure, and readily available.
The need for cloud communications has resulted from the following trends in the enterprise:
 Distributed and decentralized company operations in branch and home offices
 Increase in the number of communication and data devices accessing the enterprise networks
 Hosting and managing IT assets and applications
These trends have forced many enterprises to seek external services and to outsource their requirement for IT
and communications. The cloud is hosted and managed by a third party, and the enterprise pays for and uses
space on the cloud for its requirements. This has allowed enterprises to save on costs incurred for hosting and
managing data storage and communication on their own.
The following are some of the communication and application products available under cloud communications
that an enterprise can utilize:
 Private branch exchange
 SIP Trunking
 Call center
 Fax services
 Interactive voice response
 Text messaging
 Voice broadcast
 Call-tracking software
 Contact center telephony
All of these services cover the various communication needs of an enterprise. These include customer relations, intra-branch and inter-branch communication, inter-department memos, conferencing, call forwarding and tracking services, the operations center, and the office communications hub.
Cloud communication is a center for all enterprise-related communication that is hosted, managed, and
maintained by third-party service providers for a fee charged to the enterprise.
General Issues and Challenges while migrating to Cloud
ISSUES IN CLOUD COMPUTING
Cloud Computing is Internet-based computing, where shared resources, software, and information are
provided to computers and other devices on demand.
These are major issues in Cloud Computing:

1. Privacy:
The user data can be accessed by the host company with or without permission. The service provider may
access the data that is on the cloud at any point in time. They could accidentally or deliberately alter or
even delete information.
2. Compliance:
There are many regulations in place related to data and hosting. To comply with regulations (e.g., the Federal Information Security Management Act, the Health Insurance Portability and Accountability Act), users may have to adopt deployment modes that are expensive.
3. Security:
Cloud-based services involve third parties for storage and security. One might assume that a cloud-based company will protect and secure one’s data, but if one is using their services at a very low cost or for free, they may share the user’s information with others. Security presents a real threat to the cloud.
4. Sustainability:
This issue refers to minimizing the effect of cloud computing on the environment. Citing the environmental effects of servers, countries with favorable conditions, where the climate favors natural cooling and renewable electricity is readily available (such as Finland, Sweden, and Switzerland), are trying to attract cloud computing data centers.
5. Abuse:
While providing cloud services, it should be ascertained that the client is not purchasing cloud computing services for nefarious purposes. A banking Trojan illegally used the popular Amazon service as a command-and-control channel that issued software updates and malicious instructions to PCs that were infected by the malware.

The following table illustrates the dependencies which should be taken into consideration when architecting
security controls into applications for cloud deployments:

IaaS
 Public/Hybrid Cloud threats: OWASP Top 10; data leakage (inadequate ACLs); privilege escalation via management console mis-configuration; exploiting VM weaknesses; DoS attacks via the API; weak protection of privileged keys; VM isolation failure.
 Private Cloud threats: OWASP Top 10; data theft (insiders); privilege escalation via management console mis-configuration.
 Mitigation: testing apps and APIs for OWASP Top 10 vulnerabilities; hardening of the VM image; security controls including encryption, multi-factor authentication, fine-granular authorization, and logging; security automation (automatic provisioning of firewall policies, privileged accounts, DNS, and application identity).

PaaS
 Public/Hybrid Cloud threats: privilege escalation via the API; authorization weaknesses in platform services such as Message Queue, NoSQL, and Blob services; vulnerabilities in the runtime engine resulting in tenant isolation failure.
 Private Cloud threats: privilege escalation via the API.

CLOUD COMPUTING SECURITY CHALLENGES


 DoS and DDoS attacks: A DDoS attack is designed to overwhelm website servers so that they can no longer respond to legitimate user requests.
If a DDoS attack is successful, it renders a website useless for hours, or even days. This can result in a loss of
revenue, customer trust and brand authority.
 Data breaches: Data breaches can be the main goal of an attack through which sensitive information such
as health, financial, personal identity, intellectual, and other related information is viewed, stolen, or used
by an unauthorized user.
 System vulnerability: Security breaches may occur due to exploitable bugs in programs that stay within a
system. This allows a bad actor to infiltrate and get access to sensitive information or crash the service
operations.
 Account or service hijacking using stolen passwords: Account or service hijacking can be done to gain
access and abuse highly privileged accounts. Attack methods like fraud, phishing, and exploitation of
software vulnerability are carried out mostly using the stolen passwords.
 Data loss: The data loss threat occurs in the cloud due to interaction with risks within the cloud or
architectural characteristics of the cloud application. Unauthorized parties may access data to delete or alter
records of an organization.
 Shared technology vulnerabilities: Cloud providers deliver their services by sharing applications or infrastructure. Sometimes, the components that make up the infrastructure for cloud as-a-service offerings are not designed to offer strong isolation properties for a multi-tenant cloud service.
Risks to Cloud Environments:
 Isolation failure: Multi-tenancy and shared resources are defining characteristics of cloud computing. This
risk category covers the failure of mechanisms separating storage, memory, routing, and reputation between
different tenants.
It should be considered that attacks on resource isolation mechanisms are still less numerous and much more
difficult for an attacker to put in practice compared to attacks on traditional OSs.
 Management interface compromise: Customer management interfaces of a public cloud provider are
accessible through the Internet and mediate access to larger sets of resources (than traditional hosting
providers) and therefore pose an increased risk, especially when combined with remote access and web
browser vulnerabilities.
 Data protection: Cloud computing poses several data protection risks for cloud customers and providers.
In some cases, it may be difficult for the cloud customer (in its role as data controller) to effectively check
the data handling practices of the cloud provider and thus to be sure that the data is handled lawfully.
 Malicious insider: While usually less likely, the damage that may be caused by malicious insiders is often far greater. Cloud architectures necessitate certain roles which are extremely high-risk. Examples include cloud provider (CP) system administrators and managed security service providers.

Overcoming Challenges in Cloud Computing:

1. Security and Privacy: Security is arguably the biggest challenge in cloud computing. Cloud security refers to a set of technologies or policies to protect data. Remember, violating privacy can wreak havoc on end-users.
 Implement security applications, encrypted file systems, and data loss prevention software to prevent attacks on cloud infrastructure.
 Use security tools and adopt a corporate culture that upholds data security.

2. Cloud Costs: Costing is a significant challenge in the adoption, migration, and operation of cloud computing services, especially for small and medium-sized businesses.
 Prepare a cost-estimate budget right from the start, and involve experts who can help with cloud cost management. An additional measure is creating a centralized team to oversee budget details.

3. Reliability and Availability: Although cloud providers continue to improve their uptime, service disruption is still a problem. Small-scale cloud service providers are more prone to downtime. This problem persists today even with well-developed backups and platform advancements.
 Cloud computing service providers have resorted to creating multiple redundancy levels in their systems. They are also developing disaster recovery setups and backup plans to mitigate outages.
The Seven-step model of migration into a cloud

Cloud migration is the procedure of transferring applications, data, and other types of business components
to any cloud computing platform. There are several parts of cloud migration an organization can perform. The
most used model is the applications and data transfer through an on-premises and local data center to any
public cloud.

But, a cloud migration can also entail transferring applications and data from a single cloud environment or
facilitate them to another- a model called cloud-to-cloud migration. The other type of cloud migration is
reverse cloud migration, cloud exit, and cloud repatriation where applications or data are transferred and back
to the local data center.

Migrating a model to the cloud can help in several ways, such as improving scalability, flexibility, and accessibility. At the same time, migrating models to the cloud can be a complex process that requires careful planning.

Now let’s discuss the seven steps to follow when migrating a model to the cloud:

Step 1: Choose the right cloud provider (Assessment step): The first step in migrating your model to the cloud is to choose a cloud provider that aligns with your needs, budget, and model requirements. Consider factors such as compliance, privacy, and security.

Step 2: Prepare your data (Isolation step): Before migrating to the cloud, you need to prepare your data. Ensure your data is clean, well organized, and in a format that is compatible with your chosen cloud provider.

Step 3: Choose your cloud storage (Mapping step): Once your data is prepared, you need to choose your cloud storage, which is where your data will be stored in the cloud. There are many cloud storage services, such as GCP Cloud Storage, AWS S3, and Azure Blob Storage.

Step 4: Set up your cloud computing resources and deploy your model (Re-architect step): To run a model in the cloud, you will need to set up your cloud computing resources. This includes selecting the appropriate instance type and setting up a virtual machine (VM) or container for your model. After setting up your computing resources, it is time to deploy your model to the cloud. This involves packaging your model into a container or virtual machine image and deploying it to your cloud computing resource. During deployment, some functionality may be lost, so some parts of the application may need to be re-architected.
Step 5: Augmentation step: This is the most important step for the business case behind the migration: by leveraging the internal features of the cloud computing service, we augment our enterprise.

Step 6: Test your model: Once your model is deployed, test it to ensure that it is working as expected. This involves running test data through your model and comparing the results with your expected output.

Step 7: Monitor and maintain your model: After the model is deployed and tested, it is important to monitor and maintain it. This includes monitoring performance, updating the model as needed, and ensuring your data stays up to date. Migrating a machine learning model to the cloud can be a complex process, but following the seven steps above helps ensure a smooth and successful migration, leaving your model scalable and accessible.

VULNERABILITY ASSESSMENT TOOL FOR CLOUD


 Qualys makes public cloud deployments secure and compliant. Qualys’ continuous security platform enables customers to easily detect and identify vulnerable systems and apps, helping them better face the challenges of growing cloud workloads.
 Proofpoint focuses specifically on email, with cloud-only services tailored to both enterprises and small to
medium-sized businesses. Not only does it make sure none of the bad stuff gets in, but it also protects any
outgoing data.
 Zscaler calls its product the “Direct to Cloud Network” and like many of these products, boasts that it’s
much easier to deploy and can be much more cost-efficient than traditional appliance security.
 CipherCloud is here to secure all those other “as a service” products in use, such as Salesforce, Chatter,
Box, Office 365, Gmail, Amazon Web Services, and more.
 Centrify aims at identity management across several applications and devices. The main goal is to give users, employees, and customers a single central place from which identities can be viewed and accessed under company policies. It raises an alarm when a person tries to sign in to on-premises software or cloud applications.
TRUSTED CLOUD COMPUTING
Trusted computing is a broad term that refers to technologies and proposals for resolving computer security
problems through hardware enhancements and associated software modifications. Several major hardware
manufacturers and software vendors, collectively known as the Trusted Computing Group (TCG), are
cooperating in this venture and have come up with specific plans.
The TCG develops and promotes specifications for the protection of computer resources from threats posed by
malicious entities without infringing on the rights of end-users. Microsoft defines trusted computing by
breaking it down into four technologies, all of which require the use of new or improved hardware at the
personal computer (PC) level:
 Memory curtaining -- prevents programs from inappropriately reading from or writing to each other's
memory.
 Secure input/output (I/O) -- addresses threats from spyware such as keyloggers and programs that capture
the contents of a display.
 Sealed storage -- allows computers to securely store encryption keys and other critical data.
 Remote attestation -- detects unauthorized changes to software by generating encrypted certificates for all
applications on a PC.
To be effective, these measures must be supported by advances and refinements in the software and operating
systems (OSs) that PCs use.
The trusted computing base (TCB) encompasses everything in a computing system that provides a secure
environment. This includes the OS and its standard security mechanisms, computer hardware, physical
locations, network resources, and prescribed procedures.
The term trusted PC refers to the industry ideal of a PC with built-in security mechanisms that place minimal
reliance on the end-user to keep the machine and its peripheral devices secure. The intent is that, once effective
mechanisms are built into the hardware, computer security will be less dependent on the vigilance of
individual users and network administrators than it has historically been.
VIRTUALIZATION SECURITY MANAGEMENT
 Migration management: VM migration is easy to attack and is a vulnerable process. Special security mechanisms should be applied when a VM is migrated from one place to another. It sounds like an easy process, but it is not.
When an organization or enterprise tries to use automated tools such as live migration, many other factors creep in. Two different VMs on a single machine may cause a violation of Payment Card Industry (PCI) requirements.
 VM Image Management: VM Image (VMI) is a type of file or the format of the data which is used to
create the virtual machine in the environment of virtualization. Hence, the confidential data and the
integrity of VMIs are very important when the VMs are migrating or starting.
 Patch Management: Patch management is the process of acquiring, testing, and installing code changes (patches) on administered computer systems. It also includes maintaining current knowledge of available patches and ensuring that the patches are installed properly. Patch management is built to identify and test the various types of code changes.
 Audit: In the lifecycle of the Virtual machines, the sensitive data and the behavior of the virtual machines
should be monitored throughout the virtual system. This may be done with auditing which provides the
mechanism to check the traces of the activities left by the virtual system.
VIRTUAL THREATS
Some of the virtual threats to Cloud computing security are:
1. Shared clipboard:
Shared clipboard technologies enable information to be transferred between VMs and the host, offering a means of moving information between malicious programs in VMs of different security realms.
2. Keystroke logging:
Some VM technologies allow keystrokes and screen updates to be logged and passed across virtual terminals in the virtual machine, writing to host files and permitting the monitoring of encrypted terminal connections inside the VM.
3. VM monitoring in the host:
Since all network packets coming from or going to a VM pass through the host, the host may be able to affect the VM by doing the following:
 Starting, stopping, pausing, and restart VMs
 Monitoring and configuring resources available to the VMs, including CPU, memory, disk, and network
usage of VMs
 Adjusting the number of CPUs, the amount of memory, and the number and variety of virtual disks and virtual network interfaces offered to a VM.
 Monitoring the applications running inside the VM.
 Viewing, copying, and modifying data stored on the VM’s virtual disks.
4. Virtual machine monitoring from another VM:
VMs shouldn’t have the ability to directly access one another’s virtual disks on the host. Nevertheless, if the VM platform uses a virtual hub or switch to connect the VMs to the host, then intruders may be able to use a hacker technique called “ARP poisoning” to redirect packets going to or from the other VM for sniffing.
5. Virtual machine backdoors:
Virtual machine backdoors, i.e., covert communications channels between guest and host, could allow intruders to execute potentially harmful operations.
VM SECURITY RECOMMENDATIONS
Following virtual machine security recommendations help ensure the integrity of the cloud:
 General Virtual Machine Protection: A virtual machine is, in most respects, the equivalent of a physical server. Employ the same security measures in virtual machines that you would for physical systems.
 Minimize Use of the Virtual Machine Console: The virtual machine console provides the same function
for a virtual machine that a monitor provides on a physical server.
Users with access to the virtual machine console have access to virtual machine power management and
removable device connectivity controls. Console access might therefore allow a malicious attack on a
virtual machine.
 Prevent Virtual Machines from Taking over Resources: When one virtual machine consumes so much
of the host resources that other virtual machines on the host cannot perform their intended functions, a
Denial of Service (DoS) might occur.
To prevent a virtual machine from causing a DoS, use host resource management features such as setting
Shares and using resource pools.
 Disable Unnecessary Functions Inside Virtual Machines: Any service that is running in a virtual machine provides a potential avenue for attack. By disabling system components that are not necessary to support the application or service that is running on the system, you reduce that potential.

VM-SPECIFIC SECURITY TECHNIQUES


 Protecting the VMM:
A hypervisor can be used to monitor the virtualized systems it is hosting. However, the hypervisor can in turn
be targeted and modified by an attack. As the hypervisor possesses every privilege on its guest systems, it is
crucial to preserve its integrity. However, while it is possible to ensure the integrity of a system during boot
it is much harder to ensure runtime integrity.
To ensure runtime integrity, one could think of installing a second hypervisor beneath the initial hypervisor, dedicated to monitoring it; similarly, one would then have to guarantee that this most privileged hypervisor cannot, in turn, be corrupted. Several studies have therefore focused on using other means to ensure the integrity of the most privileged element.
 Protecting the VMs against their VMM:
The purpose of CloudVisor is to ensure data confidentiality and integrity for the VM, even if some elements
of the virtualization system (hypervisor, management VM, another guest VM) are compromised. The idea is
that data belonging to a VM but accessed by something else than this VM appears encrypted.
 Virtual Machine Encryption:
A virtual machine consists of a set of files, so machine theft has become much easier. Stealing a virtual machine can be achieved with relative ease by simply snapshotting the VM and copying the snapshotted files.
 Encryption under the hypervisor:
VMs can be encrypted beneath the hypervisor. By using standard protocols such as NFS or iSCSI, the encryption is independent of the hypervisor platform. That means hypervisor features such as vMotion and Live Migration continue to work unchanged. As VMs are copied into an encrypted data store, they are encrypted according to the encryption policy.
 Encryption within the VM:
In this model, for all devices encrypted, there is an encrypted path from the VM's operating system through
the hypervisor and down to the storage layer. This prevents VM administrators from being able to view
sensitive data that resides within the VM. In this environment, as with the previous one described, the key
server could reside anywhere.
 Encryption of VM images and application data:
Another model combines encryption at the VM and storage layers. This combined option is superior because
there's an encrypted path for sensitive data from the VM through the hypervisor. This prevents the VM
administrator from seeing clear text data.
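As a small illustration of encrypting data at rest (for example, the files that make up a VM image), the sketch below uses the Fernet recipe from the third-party cryptography package; in a real deployment the key would live in a key server, not beside the data:

from cryptography.fernet import Fernet

key = Fernet.generate_key()     # keep this in a key-management service
f = Fernet(key)

plaintext = b"sensitive VM disk contents"
token = f.encrypt(plaintext)    # ciphertext is safe to store or copy
assert f.decrypt(token) == plaintext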

QOS ISSUES IN CLOUD


Cloud computing must assure the best service level for users. Services outlined in the service-level agreements
must include guarantees on round-the-clock availability, adequate resources, performance, and bandwidth.
Any compromise on these guarantees could prove fatal for customers.
The decision to switch to cloud computing should not be based on the hype in the industry. A good
understanding of the technology enables the user to make smarter decisions. Knowing all the features will
empower the business users to understand and negotiate with the Service Providers in a proactive manner.

 Workload modeling involves the assessment or prediction of the arrival rates of requests and of the demand
for resources (e.g., CPU requirements) placed by applications on an infrastructure or platform, and the QoS
observed in response to such workloads.
 System modeling aims at evaluating the performance of a cloud system, either at design time or at runtime.
Models are used to predict the value of specific QoS metrics such as response time, reliability or availability.
 Applications of QoS models often appear in relation to decision-making problems in system management.
Techniques to determine optimized decisions range from simple heuristics to nonlinear programming and
meta-heuristics.

DEPENDABILITY
Dependability is one of the most crucial issues in cloud computing environments, given the serious impact of failures on user experience. Cloud computing is a complex system based on virtualization and large scalability, which makes it a frequent site of failure. In order to fight failures in a cloud, administrators must assure dependability differently from the common approach, in which fault management focuses only on the Infrastructure as a Service layer and on the cloud provider's side.
DATA MIGRATION
Data migration is referred to as the process of transferring data from one location to another new and improved
system or location. It effectively selects, prepares and transforms data to permanently transfer it from one
system storage to another. With the focus of enterprises increasing on optimization and technological
advancements, they are availing database migration services to move from their on-premises infrastructure to
cloud-based storage and applications.
Types of data migration
 Cloud Migration: It is the process of moving data, applications and all important business elements from
on premise data center to the cloud, or from one cloud to another.
 Application Migration: Involves transfer of application programs to a modern environment. It may move
an entire application system from on premise IT center to the cloud or between clouds.
 Storage Migration: It is the process of moving data to a modern system from outdated arrays. It enhances
the performance while offering cost-effective scaling.

CHALLENGES AND RISKS IN CLOUD ADOPTION

1. SETTING A BUSINESS OBJECTIVE: Setting clear and flexible objectives and plans based on business
requirements is important in any cloud migration. Create a clear migration plan, factoring in projected costs,
downtime, training needs, migration time, etc. Be prepared with risk mitigation plans as well.

2. FINANCIAL COSTS: Though cloud migration brings in a lot of returns and benefits, in the long run,
getting there is usually expensive and time-consuming. Costs include architecture changes, human resources,
training, migration partners, cloud provider, and bandwidth costs. Proper planning and a phase-wise migration
will reduce financial risks.

3. WHAT TO MIGRATE, WHAT NOT?: Deciding what to migrate and what to leave behind is important for a successful migration strategy, so careful selection is essential. To reduce risks, one can start by migrating applications with fewer dependencies and lower criticality, those compatible with cloud services, or those aligned with critical business goals. A phase-wise migration approach is always better.

4. DATA SECURITY

Data security is the biggest concern when enterprises store their sensitive data with a third-party cloud provider. If data is lost, leaked, or exposed, it could cause severe disruption and damage to the business. Make a strategy to keep mission-critical data on your own premises, or ensure complete data security at rest and in transit while you migrate to a cloud environment. Follow and implement the best practices and policies to protect data and access, and seek the help of a consultant or team with previous experience in setting up security in cloud environments.
5. CLOUD SERVICE PROVIDER: The availability of multiple similar cloud service providers makes it a
hurdle to choose the right one. Goals, budget, priorities of the organization, along with the services offered,
security, compliance, manageability, cost, etc., of the service provider are the main factors to be considered in
selections. Opt for Hybrid Cloud to reduce vendor lock-in.

6. COMPLEXITY & LACK OF EXPERTISE: Most organizations are scared of the complexity of cloud
environments and migration processes. However, surveys show that complexity is still a blocking factor in
cloud adoptions. If you do not have enough in-house expertise in dealing with cloud, migration processes, and
compliance requirements, better engage a partner with previous experience. Encourage maximum automation
with the right hassle-free automation tools and technologies.

7. COMPLIANCE: For companies operating under strict regulatory and compliance frameworks, it is hard to migrate to the cloud. Cloud migration should ensure compliance with local and global regulatory requirements. For example, data should be protected at rest and in transit; integrated audit trails, dashboards, and incident management systems should also be available to meet regulatory compliance requirements.

8. RESISTANCE TO ADOPTION: Resistance to change is inherent. Therefore, building organization-wide


acceptance is key to any transition and its success. An enterprise ensuring leadership buy-in right from the start
and a business case with clear reasons behind the changes is likely to get more acceptance and adoption. In
addition, employees’ knowledge of the value that the change brings in will create a positive difference.

9. TRAINING & RESOURCES: Cloud environments and cloud migration processes are still complex to understand and practice. Moreover, a lack of expertise and training resources is a concern even today for many enterprises. The skill gap is one of the main reasons for the slowdown of cloud migration. Therefore, providing proper training and support to employees and using hassle-free migration platforms and experienced partners are critical to a successful and timely migration.

10. MIGRATION STRATEGY: Whether to rebuild, lift and shift, re-host (IaaS), refactor (PaaS), replace (SaaS), or opt for a combination is always a challenging question during migration. This decision is specific to the nature of the applications, infrastructure, network, security, privacy, scalability, regulatory, and business requirements of the organization. A detailed analysis of all these factors, including the budget, risk, and time, will help arrive at the right distribution strategy.

11. DOWNTIME: Downtime can be catastrophic to a business in terms of revenue and reputation. Adopt a methodology that minimizes disruption and ensures business continuity: for example, test the migration offline and use the right end-to-end automation tools to reduce risks and downtime.

12. POST-MIGRATION: Another key concern is the data privacy, security, and monitoring capabilities of the applications running on the cloud. Ensure there is complete observability of the build, deployment, and running of applications and data on the cloud. Use the right tools, which provide logs, audit trails, alerts, visual dashboards, and approval workflows to control and monitor the entire stack and operations.
Unit V:
Case Study on Open Source and Commercial Clouds: OpenStack, Eucalyptus, OpenNebula, Apache CloudStack, Amazon (AWS), Microsoft Azure, Google Cloud, etc.

OpenStack: OpenStack is an open-source platform that uses pooled virtual resources to build and manage private and public clouds. The tools that comprise the OpenStack platform, called "projects," handle the core cloud-computing services of compute, networking, storage, identity, and image services. More than a dozen optional projects can also be bundled together to create unique, deployable clouds.
OpenStack is a free, open-source cloud computing platform launched on July 21, 2010, by Rackspace Hosting and NASA. It provides Infrastructure-as-a-Service (IaaS) for public and private clouds, enabling users to access virtual resources. The platform comprises various interrelated components, known as "projects," which manage hardware pools for computing, storage, and networking. Unlike traditional virtualization, OpenStack uses APIs to directly interact with and manage cloud services.

OpenStack consists of several key components that work together to provide cloud services; a short client-side sketch follows the list below. The main services include:

 Nova: Manages compute resources for creating and scheduling virtual machines.
 Neutron: Handles networking, managing networks and IP addresses through an API.
 Swift: An object storage service designed for high fault tolerance, capable of managing petabytes of
unstructured data via a RESTful API.
 Cinder: Provides persistent block storage accessible through a self-service API, allowing users to
manage their storage needs.
 Keystone: Manages authentication and authorization for OpenStack services through a central
directory.
 Glance: Responsible for registering and retrieving virtual disk images from various back-end systems.
 Horizon: Offers a web-based dashboard for managing and monitoring OpenStack resources.
 Ceilometer: Tracks resource usage for metering and billing, and can generate alarms for threshold
breaches.
 Heat: Facilitates orchestration and auto-scaling of cloud resources on demand, working in conjunction
with Ceilometer.

ADVANTAGES OF USING OPENSTACK

 Rapid Provisioning: Enables quick orchestration and scaling of resources.


 Efficient Deployment: Applications can be deployed quickly.
 Scalability: Resources can be efficiently adjusted based on demand.
 Manageable Compliance: Easier to handle regulatory compliance.
Disadvantages of Using OpenStack

 Orchestration Limitations: Not as robust in orchestration capabilities.


 API Compatibility Issues: Integration with hybrid cloud providers can be challenging.
 Security Risks: Vulnerable to security breaches like other cloud services.

EUCALYPTUS

The term open-source cloud refers to software or applications publicly available for users in the cloud to set up for their own purposes or for their organization. Eucalyptus is a Linux-based open-source software architecture for cloud computing and also a storage platform that implements Infrastructure as a Service (IaaS). It provides quick and efficient computing services. Eucalyptus was designed to provide services compatible with Amazon’s EC2 cloud and Simple Storage Service (S3).

Eucalyptus Architecture
Eucalyptus enables management of both Amazon Web Services and private cloud instances, allowing seamless
transfers between them. Its architecture includes a virtualization layer that handles network, storage, and
computing resources, with instances isolated through hardware virtualization.

Features:

 Images: Eucalyptus Machine Images (EMIs) are software bundles for the cloud.
 Instances: Running an image creates an active instance.
 Networking: Three modes—Static (allocates IPs), System (integrates with physical networks), and
Managed (local instance networking).
 Access Control: Manages user permissions.
 Elastic Block Storage: Offers block-level storage for instances.
 Auto-scaling and Load Balancing: Adjusts instances based on demand.

Components of Eucalyptus Architecture

 Node Controller: Manages instance lifecycles on each node, interacting with the OS and hypervisor.
 Cluster Controller: Oversees multiple Node Controllers and the Cloud Controller, scheduling VM
execution.
 Storage Controller and Walrus: The Storage Controller provides block storage and allows snapshots; Walrus uses S3-compatible APIs for file and image storage.
 Cloud Controller: Front-end interface for client tools and communication with other components.
Operation Modes of Eucalyptus

 Managed Mode: Uses VLAN for security groups and network isolation.
 Managed (No VLAN): No network isolation; root access to all VMs.
 System Mode: Basic mode, assigning MAC addresses to VMs.
 Static Mode: Maps MAC/IP pairs in a static DHCP setup for better IP control.

Advantages of Eucalyptus Cloud

 Supports both private and public clouds.


 Compatible with Amazon Machine Images (AMIs) and APIs.
 Integrates with DevOps tools like Chef and Puppet.
 Offers an alternative to OpenStack and CloudStack.
 Facilitates hybrid cloud creation, extending private cloud services.

OpenNebula: OpenNebula is an open-source cloud computing platform that streamlines and simplifies the creation and management of virtualized hybrid, public, and private clouds. It is a straightforward yet feature-rich, flexible solution for building and managing enterprise clouds and data center virtualization.

OpenNebula is an open-source cloud management platform designed for building and managing private,
public, and hybrid clouds. It enables efficient orchestration of virtualized data centers.

Features

 Multi-Cloud Management: Integrates private and public clouds.


 Virtual Machine Management: Simplifies creation and deployment of VMs.
 Flexible Networking: Supports VLANs and virtual networks.
 Storage Management: Works with various storage backends for dynamic provisioning.
 User Management: Offers role-based access control.
 Scalability: Scales from small setups to large enterprises.
 APIs and CLI: Provides RESTful APIs and command-line tools.

Advantages

 Open Source: Cost-effective with community support.


 Flexibility: Highly customizable for diverse needs.
 Interoperability: Easily integrates with existing solutions.
 Simplicity: User-friendly interface and streamlined processes.

Use Cases

 Private Cloud: Optimize resource usage and security.


 Hybrid Cloud: Combine private and public clouds for flexibility.
 Development and Testing: Quickly provision resources for projects.

OpenNebula offers a robust solution for organizations looking to implement cloud infrastructure, providing flexibility, scalability, and ease of use.

APACHE CLOUDSTACK

Apache CloudStack is open-source cloud computing software designed to deploy, manage, and orchestrate large networks of virtual machines. It provides Infrastructure-as-a-Service (IaaS) capabilities, enabling organizations to create private and public clouds efficiently.

Key Features

 Multi-Hypervisor Support: Compatible with various hypervisors, including KVM, VMware, and
Xen.
 Scalability: Designed to scale from small deployments to large-scale infrastructures.
 Resource Management: Automates resource provisioning, including compute, storage, and
networking.
 Self-Service Portal: Users can manage resources through a web-based interface.
 Network as a Service (NaaS): Supports advanced networking features, such as load balancing and
VPNs.
 API Access: Offers a comprehensive API for integration and automation.

Architecture

 Management Server: Central component that manages the CloudStack environment and orchestrates resources.
 Hypervisor Hosts: Physical servers that run the hypervisors and virtual machines.
 Storage: Manages storage resources, including block and object storage.
 Network Infrastructure: Provides virtual networking capabilities, including public and private
networks.

Advantages

 Open Source: Cost-effective with an active community and regular updates.


 Flexibility: Supports a wide range of configurations and deployment scenarios.
 Integration: Easily integrates with existing systems and third-party tools.
 User-Friendly: Intuitive interface for managing cloud resources.

Use Cases

 Private Cloud: Organizations can create secure, scalable private cloud environments.
 Public Cloud: Service providers can deploy public cloud services with multi-tenancy.
 Hybrid Cloud: Combines private and public resources for greater flexibility.

Apache CloudStack is a powerful and flexible solution for building and managing cloud infrastructures, providing a robust feature set and strong community support for organizations looking to leverage cloud technology.

AMAZON (AWS)

Amazon Web Services (AWS) Overview

Amazon Web Services (AWS) is a comprehensive and widely adopted cloud computing platform offered by
Amazon. It provides a broad set of services, including computing power, storage options, and networking
capabilities, enabling businesses to scale and grow efficiently.

Features

 Wide Range of Services: Offers over 200 fully featured services, including computing (EC2), storage
(S3, EBS), databases (RDS, DynamoDB), machine learning (SageMaker), and more.
 Scalability: Easily scale resources up or down based on demand, ensuring optimal performance.
 Global Reach: Data centers located in multiple geographic regions and availability zones worldwide,
ensuring low latency and redundancy.
 Security and Compliance: Robust security features, including identity and access management (IAM),
encryption, and compliance certifications (e.g., HIPAA, GDPR).
 Pay-as-You-Go Pricing: Flexible pricing model based on usage, allowing organizations to pay only
for the resources they consume.

Services

 Amazon EC2 (Elastic Compute Cloud): Provides scalable virtual servers for hosting applications (see the sketch after this list).
 Amazon S3 (Simple Storage Service): Object storage service for storing and retrieving any amount of
data.
 Amazon RDS (Relational Database Service): Managed relational database service for various
database engines (MySQL, PostgreSQL, etc.).
 AWS Lambda: Serverless computing service that lets users run code without provisioning servers.
 Amazon VPC (Virtual Private Cloud): Enables users to create isolated networks within the AWS
cloud.
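
To make the service list concrete, here is a hedged sketch using boto3, the AWS SDK for Python, to touch S3 and EC2. The bucket name, object key, AMI ID, and region are placeholder assumptions, and credentials are assumed to be configured in the environment.

    import boto3

    # S3: upload a local file as an object (bucket and key are placeholders).
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.upload_file("report.csv", "my-example-bucket", "reports/report.csv")

    # EC2: launch one small instance from a placeholder AMI ID.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # replace with a real AMI for your region
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", resp["Instances"][0]["InstanceId"])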

Advantages

 Flexibility: Supports a wide range of technologies and deployment models.
 Innovation: Continuous addition of new features and services.
 Community and Support: Extensive documentation, tutorials, and a large community for support.

Use Cases

 Web Hosting: Host websites and web applications with scalable infrastructure.
 Data Analytics: Analyze large datasets using services like Amazon Redshift and AWS Glue.
 Machine Learning: Build and deploy machine learning models with AWS tools.
 Disaster Recovery: Implement backup and recovery solutions using AWS storage services.

AWS is a leading cloud platform that offers a vast array of services and features, enabling organizations to
innovate and scale their operations efficiently. Its flexibility, security, and global reach make it a preferred
choice for businesses of all sizes.

MICROSOFT AZURE OVERVIEW

Microsoft Azure is a cloud computing platform and service offered by Microsoft, providing a wide range of
cloud services, including computing, analytics, storage, and networking. It enables businesses to build, deploy,
and manage applications and services through Microsoft-managed data centers.

Features

 Comprehensive Service Offerings: Includes services for virtual machines, app hosting, databases, AI,
machine learning, and IoT.
 Scalability: Supports automatic scaling to handle varying workloads seamlessly.
 Global Reach: Data centers located in numerous regions worldwide, providing low-latency access and
redundancy.
 Security and Compliance: Built-in security features and compliance with various standards (e.g., ISO,
HIPAA, GDPR).
 Hybrid Cloud Capabilities: Integrates on-premises infrastructure with cloud services through Azure
Arc and Azure Stack.

Services

 Azure Virtual Machines: Provides scalable virtual machines for various workloads.
 Azure App Service: Platform for building and hosting web applications and APIs.
 Azure SQL Database: Fully managed relational database service built on the SQL Server engine.
 Azure Functions: Serverless computing service that allows users to run code on demand without
managing servers.
 Azure Blob Storage: Object storage service for unstructured data (see the sketch below).
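
As a concrete taste of the SDK side, the following sketch uses the azure-identity and azure-storage-blob Python packages to upload a file to Blob Storage. The storage account URL, container name, and file names are placeholder assumptions.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    credential = DefaultAzureCredential()  # resolves env vars, managed identity, or CLI login
    service = BlobServiceClient(
        account_url="https://myaccount.blob.core.windows.net",  # placeholder account
        credential=credential,
    )

    container = service.get_container_client("documents")  # placeholder container
    with open("notes.pdf", "rb") as data:
        container.upload_blob(name="notes.pdf", data=data, overwrite=True)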

Advantages

 Integration with Microsoft Products: Seamless integration with other Microsoft services (e.g., Office
365, Dynamics 365).
 Flexibility: Supports various programming languages, frameworks, and operating systems.
 Robust Development Tools: Provides development tools like Visual Studio and Azure DevOps for
streamlined workflows.

Use Cases

 Application Development: Build and deploy applications using Azure services.
 Data Analytics: Utilize Azure Synapse Analytics for big data and analytics solutions.
 Machine Learning: Develop and deploy machine learning models with Azure Machine Learning.
 Disaster Recovery: Implement backup and recovery solutions using Azure Backup and Site Recovery.

Microsoft Azure is a powerful and versatile cloud platform that enables businesses to innovate, scale, and
manage their applications and services efficiently. Its extensive service offerings, security features, and
integration capabilities make it a preferred choice for enterprises looking to leverage cloud technology.

GOOGLE CLOUD OVERVIEW

Google Cloud is a suite of cloud computing services offered by Google, providing a range of solutions for
computing, data storage, data analytics, machine learning, and more. It enables businesses to build, deploy,
and scale applications on Google’s infrastructure.

Features

 Comprehensive Services: Includes computing (Google Compute Engine), storage (Google Cloud
Storage), big data (BigQuery), and machine learning (AI Platform).
 Global Infrastructure: Utilizes Google’s extensive global network of data centers for low-latency and
reliable services.
 Security: Offers robust security features, including data encryption, identity management, and
compliance with various standards (e.g., ISO, GDPR).
 Serverless Computing: Supports serverless architectures with services like Cloud Functions and
Cloud Run.
 Hybrid and Multi-Cloud Solutions: Integrates with on-premises systems and other cloud providers
through Anthos and Google Kubernetes Engine (GKE).

Services

 Google Compute Engine: Provides scalable virtual machines for various workloads.
 Google Kubernetes Engine (GKE): Managed service for running Kubernetes clusters.
 Google Cloud Storage: Object storage service for storing and retrieving data (see the sketch after this list).
 BigQuery: Fully managed data warehouse for analytics and data processing.
 Google Cloud AI: Tools and services for building machine learning models.
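
The Python client libraries make these services easy to demonstrate. The sketch below uses the google-cloud-storage and google-cloud-bigquery packages; the bucket, object, and table names are placeholder assumptions, and Application Default Credentials are assumed to be configured.

    from google.cloud import bigquery, storage

    # Cloud Storage: upload a local file (bucket and object names are placeholders).
    gcs = storage.Client()
    gcs.bucket("my-example-bucket").blob("data/sales.csv").upload_from_filename("sales.csv")

    # BigQuery: run a query and print the result rows (table name is a placeholder).
    bq = bigquery.Client()
    query = "SELECT name, SUM(total) AS total FROM `my_dataset.sales` GROUP BY name"
    for row in bq.query(query).result():
        print(row["name"], row["total"])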

Advantages

 Data Analytics Expertise: Leverages Google’s data analytics capabilities for powerful insights.
 Strong Machine Learning Tools: Provides advanced AI and machine learning services.
 Integration with Google Services: Seamlessly integrates with other Google products, such as
Workspace and Firebase.

Use Cases

 Application Development: Build and host applications on Google Cloud’s infrastructure.
 Data Warehousing and Analytics: Utilize BigQuery for large-scale data analysis.
 Machine Learning: Develop and deploy machine learning models using Google Cloud AI services.
 Disaster Recovery: Implement backup and recovery solutions with Google Cloud Storage and other
services.

Google Cloud offers a robust and flexible cloud platform with a wide array of services tailored for businesses
looking to innovate and scale their operations. Its focus on data analytics, machine learning, and global
infrastructure makes it a strong choice for organizations seeking to leverage cloud technology effectively.
