Cloud Computing Notes
Cloud computing is the delivery of various services over the internet, including storage, databases, servers,
networking, software, and analytics. Instead of owning and managing physical servers and infrastructure, users
can rent access to these resources from cloud service providers.
Cloud computing means storing and accessing data and programs on remote servers hosted on the internet instead of on the computer's hard drive or a local server. Cloud computing is also referred to as Internet-based computing: a technology in which resources are provided as a service to the user over the Internet. The stored data can be files, images, documents, or any other kind of storable content.
Key Points:
Small as well as large IT companies have traditionally provided their own IT infrastructure. That means every IT company needs a server room, which is a basic requirement.
That server room must house database servers, mail servers, networking equipment, firewalls, routers, modems, switches, sufficient QPS capacity (queries per second, i.e. how much query load the servers can handle), configurable systems, high network speed, and maintenance engineers.
Establishing such IT infrastructure requires a great deal of money. Cloud computing came into existence to overcome these problems and to reduce IT infrastructure costs.
1. On-Demand Self-Service:
o Users can provision computing capabilities such as server time and network storage
automatically without requiring human intervention from the service provider.
o Example: Provisioning a virtual server from AWS EC2 via the AWS Management Console.
2. Broad Network Access:
o Services are accessible over the network through standard mechanisms (e.g., web browsers) and
from a variety of devices (e.g., smartphones, laptops).
o Example: Accessing Google Docs from a mobile device or desktop computer.
3. Resource Pooling:
o Cloud providers use multi-tenant models to pool computing resources. These resources are
dynamically assigned and reassigned based on demand.
o Example: AWS uses a single pool of servers to provide resources to multiple users.
4. Rapid Elasticity:
o Resources can be rapidly and elastically provisioned to scale outward or inward according to
demand. This means that if demand increases, additional resources can be allocated quickly.
o Example: Automatically scaling web server instances up or down based on website traffic
using AWS Auto Scaling.
5. Measured Service:
o Cloud computing systems automatically control and optimize resource use by leveraging a
metering capability. This allows for resource usage monitoring, control, and reporting.
o Example: Billing is based on the amount of storage used or the number of virtual machines
running.
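A minimal sketch of characteristic 1 above (on-demand self-service): provisioning an EC2 instance programmatically with Python and the boto3 SDK, with no human intervention from the provider. The region, AMI ID, and instance type are placeholder assumptions, and running it requires valid AWS credentials.

import boto3

# Assumed region; the AMI ID and instance type below are placeholders, not recommendations.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

The same request could equally be made through the web console; the point of the sketch is that capacity is provisioned on demand through a simple API call.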
2. Application: The application is the part of the backend that refers to the software or platform the client accesses; it delivers the service on the backend according to the client's requirements.
3. Service: Service on the backend refers to the three major types of cloud-based services: SaaS, PaaS, and IaaS. It also manages which type of service the user accesses.
4. Runtime Cloud: The runtime cloud on the backend provides the execution and runtime platform/environment to the virtual machines.
5. Storage: Storage on the backend provides flexible and scalable storage services and management of stored data.
6. Infrastructure: Cloud infrastructure on the backend refers to the hardware and software components of the cloud, including servers, storage, network devices, virtualization software, etc.
9. Internet: The internet connection acts as the medium, or bridge, between the frontend and backend and establishes the interaction and communication between them.
10. Database: The database on the backend provides storage for structured data using SQL and NoSQL databases. Examples of database services include Amazon RDS, Microsoft Azure SQL Database, and Google Cloud SQL.
11. Networking: Networking on the backend covers services that provide networking infrastructure for applications in the cloud, such as load balancing, DNS, and virtual private networks.
12. Analytics: Analytics on the backend covers services that provide analytics capabilities for data in the cloud, such as data warehousing, business intelligence, and machine learning.
4. Cloud Architecture
Cloud computing technology is used by both small and large organizations to store information in the cloud and access it from anywhere at any time over an internet connection.
Cloud computing architecture is a combination of service-oriented architecture and event-driven architecture.
Front End
Back End
Front End
The front end is used by the client. It contains the client-side interfaces and applications required to access cloud computing platforms. The front end includes web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile devices.
Back End
The back end is used by the service provider. It manages all the resources that are required to provide cloud
computing services. It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.
Software As A Service (SAAS) allows users to run existing online applications. It is a model in which software is deployed as a hosted service and accessed over the internet: a software delivery model in which the software and its associated data are hosted centrally and accessed by clients, usually through a web browser. SAAS services are used for the development and deployment of modern applications.
It allows software and its functions to be accessed from anywhere using a device with a good internet connection and a browser. An application is hosted centrally and provides access to multiple users across various locations via the internet.
Applications are ready to use, and updates and maintenance are handled by the provider.
You access the software through a web browser or app, usually paying a subscription fee.
It’s convenient and requires minimal technical expertise, ideal for non-technical users.
Salesforce
Microsoft 365
Trello
Zoom
Slack
Platform As A Service (PAAS) is a cloud delivery model for applications composed of services managed by a third party. It provides elastic scaling of applications, allowing developers to build applications and services over the internet; the deployment models include public, private, and hybrid.
Basically, it is a service in which a third-party provider supplies both the software and hardware tools needed for cloud development. These tools are used by developers. PAAS is also known as Application PAAS.
It helps us organize and maintain useful applications and services. It has a well-equipped management system and is less expensive compared to IAAS.
Characteristics of PAAS (Platform as a Service)
PAAS is like a toolkit for developers to build and deploy applications without worrying about
infrastructure.
Developers focus on building and managing applications, while the provider handles infrastructure
management.
It speeds up the development process and allows for easy collaboration among developers.
AWS Lambda
Google Cloud
IBM Cloud
Infrastructure As A Service (IAAS) leaves it entirely up to the customer to choose resources wisely and as per need. It also provides billing management.
IAAS is like renting virtual computers and storage space in the cloud.
You have control over the operating systems, applications, and development frameworks.
Microsoft Azure
Digital Ocean
The cloud deployment model identifies the specific type of cloud environment based on ownership, scale, and
access, as well as the cloud’s nature and purpose. The location of the servers you’re utilizing and who controls
them are defined by a cloud deployment model. It specifies how your cloud infrastructure will look, what you
can change, and whether you will be given services or will have to create everything yourself. Relationships
between the infrastructure and your users are also defined by cloud deployment types. Different types of cloud
computing deployment models are described below.
Public Cloud
Private Cloud
Hybrid Cloud
Community Cloud
Multi-Cloud
Public Cloud
The public cloud makes systems and services accessible to anybody. The public cloud may be less secure because it is open to everyone. In the public cloud, cloud infrastructure services are provided over the internet to the general public or to major industry groups. The infrastructure in this cloud model is owned by the entity that delivers the cloud services, not by the consumer.
It is a type of cloud hosting that allows customers and users to easily access systems and services. This form of cloud computing is a good example of cloud hosting, in which service providers supply services to a variety of customers. In this arrangement, storage, backup, and retrieval services are offered for free, as a subscription, or on a per-user basis. Examples include Google App Engine.
Advantages of the Public Cloud Model
Minimal Investment: Because it is a pay-per-use service, there is no substantial upfront fee, making it
excellent for enterprises that require immediate access to resources.
No setup cost: The entire infrastructure is fully subsidized by the cloud service providers, thus there is
no need to set up any hardware.
Infrastructure Management is not required: Using the public cloud does not necessitate
infrastructure management.
No maintenance: The maintenance work is done by the service provider (not users).
Dynamic Scalability: To fulfill your company’s needs, on-demand resources are accessible.
Disadvantages of the Public Cloud Model
Less secure: The public cloud is less secure because its resources are shared publicly, so there is no guarantee of high-level security.
Low customization: Because it is accessed by many users, it cannot be customized to individual requirements.
Private Cloud
The private cloud deployment model is the exact opposite of the public cloud deployment model. It’s a one-
on-one environment for a single user (customer). There is no need to share your hardware with anyone
else. The distinction between private and public clouds is in how you handle all of the hardware. It is also
called the “internal cloud” & it refers to the ability to access systems and services within a given border or
organization. The cloud platform is implemented in a cloud-based secure environment that is protected by
powerful firewalls and under the supervision of an organization’s IT department. The private cloud gives
greater flexibility of control over cloud resources.
Advantages of the Private Cloud Model
Better Control: You are the sole owner of the property. You gain complete command over service
integration, IT operations, policies, and user behavior.
Data Security and Privacy: It’s suitable for storing corporate information to which only authorized
staff have access. By segmenting resources within the same infrastructure, improved access and
security can be achieved.
Supports Legacy Systems: This approach is designed to work with legacy systems that are unable to
access the public cloud.
Customization: Unlike a public cloud deployment, a private cloud allows a company to tailor its
solution to meet its specific needs.
Disadvantages of the Private Cloud Model
Less scalable: Private clouds can only scale within a certain range because they serve fewer clients.
Costly: Private clouds are more costly because they provide personalized facilities.
Hybrid Cloud
By bridging the public and private worlds with a layer of proprietary software, hybrid cloud computing gives
the best of both worlds. With a hybrid solution, you may host the app in a safe environment while taking
advantage of the public cloud’s cost savings. Organizations can move data and applications between different
clouds using a combination of two or more cloud deployment methods, depending on their needs.
Advantages of the Hybrid Cloud Model
Flexibility and control: Businesses with more flexibility can design personalized solutions that meet
their particular needs.
Cost: Because public clouds provide scalability, you’ll only be responsible for paying for the extra
capacity if you require it.
Security: Because data is properly separated, the chances of data theft by attackers are considerably
reduced.
Disadvantages of the Hybrid Cloud Model
Difficult to manage: Hybrid clouds are difficult to manage because they combine both public and private clouds, making them complex.
Slow data transmission: Data transmission in a hybrid cloud passes through the public cloud, so latency occurs.
Community Cloud
It allows systems and services to be accessible by a group of organizations. It is a distributed system that is
created by integrating the services of different clouds to address the specific needs of a community, industry,
or business. The infrastructure of the community could be shared between the organization which has shared
concerns or tasks. It is generally managed by a third party or by the combination of one or more organizations
in the community.
Disadvantages of the Community Cloud Model
Limited Scalability: The community cloud is relatively less scalable because many organizations share the same resources according to their collaborative interests.
Rigid in customization: Because data and resources are shared among different organizations according to their mutual interests, an organization that wants changes tailored to its own needs cannot make them, as doing so would affect the other organizations.
Multi-Cloud
We’re talking about employing multiple cloud providers at the same time under this paradigm, as the name
implies. It’s similar to the hybrid cloud deployment approach, which combines public and private cloud
resources. Instead of merging private and public clouds, multi-cloud uses many public clouds. Although public
cloud providers provide numerous tools to improve the reliability of their services, mishaps still occur. It’s
quite rare that two distinct clouds would have an incident at the same moment. As a result, multi-cloud
deployment improves the high availability of your services even more.
Advantages of the Multi-Cloud Model
You can mix and match the best features of each cloud provider's services to suit the demands of your apps, workloads, and business by choosing different cloud providers.
Reduced Latency: To reduce latency and improve user experience, you can choose cloud regions and zones that are close to your clients.
High availability of service: It is quite rare that two distinct clouds would have an incident at the same moment, so a multi-cloud deployment further improves the high availability of your services.
Disadvantages of the Multi-Cloud Model
Complex: The combination of many clouds makes the system complex, and bottlenecks may occur.
Security issue: Due to the complex structure, there may be loopholes that a hacker can take advantage of, making the data insecure.
1. Cost Efficiency:
o Reduces capital expenditures by eliminating the need for physical hardware and data centers.
Users pay only for the resources they use.
2. Scalability:
o Easily scale resources up or down based on current needs. This is beneficial for handling
varying workloads and seasonal demands.
3. Performance:
o High-performance computing environments are provided by cloud providers, utilizing state-of-
the-art hardware and data centers.
4. Accessibility:
o Access services and data from anywhere with an internet connection, supporting remote work
and global operations.
5. Disaster Recovery:
o Provides robust disaster recovery options, including data backup, redundancy, and failover
solutions.
6. Automatic Updates:
o Cloud providers manage and deploy updates and patches automatically, ensuring systems are
up-to-date with the latest features and security enhancements.
Traditional computing, as the name suggests, is the practice of using physical data centers to store digital assets and run a complete networking system for daily operations. In this model, access to data, software, or storage is limited to the device or official network the user is connected with; the user can access data only on the system where that data is stored.
Data Accessibility: In cloud computing, the user can access data anywhere at any time; in traditional computing, the user can access data only on the system in which the data is stored.
Internet Dependency: Cloud computing requires a fast, reliable, and stable internet connection to access information anywhere at any time; traditional computing does not require any internet connection to access data or information.
1. Cost Structure:
o Cloud Providers: Operate on a pay-as-you-go or subscription basis, reducing upfront capital
costs and providing predictable expenses.
o Traditional IT: Involves significant capital investment in hardware and software, with
additional ongoing maintenance costs.
2. Scalability:
o Cloud Providers: Offer on-demand scalability with the ability to rapidly adjust resources based
on demand.
o Traditional IT: Scaling requires additional hardware and infrastructure, which can be time-
consuming and expensive.
3. Management:
o Cloud Providers: Manage and maintain infrastructure, including security, updates, and
backups, allowing organizations to focus on their core business activities.
o Traditional IT: Requires internal resources to manage and maintain hardware and software,
which can divert focus from core business operations.
4. Deployment Time:
o Cloud Providers: Services and applications can be deployed quickly, often within minutes.
o Traditional IT: Deployment can be lengthy due to the need to purchase, install, and configure
hardware and software.
5. Accessibility:
o Cloud Providers: Services are accessible from anywhere with an internet connection,
supporting remote work and global collaboration.
o Traditional IT: Access is often limited to on-premises environments or requires VPNs for
remote access.
6. Disaster Recovery:
o Cloud Providers: Typically offer built-in disaster recovery solutions and data redundancy as
part of their service offerings.
o Traditional IT: Disaster recovery solutions can be complex and costly, requiring additional
infrastructure and management.
Unit -II
…………………………………………………………………………………………………………………
Syllabus: Services Virtualization Technology and Study of Hypervisor: Utility Computing, Elastic
computing & grid computing. Study of Hypervisor, Virtualization applications in enterprises, High-
performance computing, Pitfalls of virtualization. Multitenant software: Multi-entity support, Multi-schema
approach.
…………………………………………………………………………………………………………………
Virtualization Technology Overview
Virtualization is used to create a virtual version of an underlying service. With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware. It was initially developed during the mainframe era.
Host Machine: The machine on which the virtual machine is built is known as the host machine.
Virtualization technology is a transformative approach in computing that allows multiple virtual environments
or virtual machines (VMs) to operate on a single physical hardware system. By abstracting the hardware
resources, virtualization creates isolated virtual instances, each capable of running its own operating system
and applications. This abstraction is facilitated by a software layer known as the hypervisor, which manages
the distribution of physical resources, such as CPU, memory, and storage, among the VMs. Virtualization
enhances resource utilization, improves management efficiency, and provides a level of isolation that can
bolster security and streamline testing and development processes.
Uses of Virtualization
Data-integration
Business-integration
Benefits of Virtualization
Drawbacks of Virtualization
High Initial Investment: Clouds have a very high initial investment, but it is also true that it will help
in reducing the cost of companies.
Learning New Infrastructure: As the companies shifted from Servers to Cloud, it requires highly
skilled staff who have skills to work with the cloud easily, and for this, you have to hire new staff or
provide training to current staff.
Risk of Data: Hosting data on third-party resources can lead to putting the data at risk, it has the
chance of getting attacked by any hacker or cracker very easily.
What is a hypervisor
A hypervisor, also known as a virtual machine monitor or VMM, is a piece of software that allows us to build and run virtual machines (VMs). A hypervisor allows a single host computer to support multiple virtual machines by sharing resources, including memory and processing.
What is the use of a hypervisor?
Hypervisors allow more of a system's available resources to be used and provide greater IT versatility, because the guest VMs are independent of the host hardware; this is one of the major benefits of the hypervisor.
In other words, VMs can be quickly moved between servers. Since a hypervisor allows several virtual machines to operate on a single physical server, it helps reduce the space, energy, and maintenance that the hardware would otherwise require.
Hypervisor Types
There are two types of hypervisors: "Type 1" (also known as "bare metal") and "Type 2" (also known as
"hosted"). A type 1 hypervisor functions as a light operating system that operates directly on the host's
hardware, while a type 2 hypervisor functions as a software layer on top of an operating system, similar to
other computer programs.
The Type 1 hypervisor
The Type 1 hypervisor is also known as a native or bare-metal hypervisor.
It replaces the host operating system, and the hypervisor schedules VM services directly on the hardware. The Type 1 hypervisor is commonly used in enterprise data centers and other server-based environments.
Examples include KVM, Microsoft Hyper-V, and VMware vSphere. KVM was merged into the Linux kernel in 2007, so if you are running an up-to-date Linux kernel, you already have KVM available.
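As a rough illustration of working with a Type 1 hypervisor, the sketch below queries a KVM host through the libvirt Python bindings and lists the virtual machines it knows about. It assumes the libvirt-python package is installed and that a local qemu:///system daemon is running; both are assumptions, not part of the notes above.

import libvirt

conn = libvirt.open("qemu:///system")  # assumed connection URI for a local KVM host
if conn is None:
    raise RuntimeError("Failed to connect to the hypervisor")

# List every VM (domain) known to the hypervisor and whether it is running.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(dom.name(), "-", state)

conn.close()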
The Type 2 hypervisor
It is also known as a hosted hypervisor. The Type 2 hypervisor is a software layer or framework that runs on a traditional operating system.
It operates by separating the guest and host operating systems. The host operating system schedules VM services, which are then executed on the hardware.
Individual users who wish to run multiple operating systems on a personal computer should use a Type 2 hypervisor.
Hypervisors are a key component of the technology that enables cloud computing since they are a software
layer that allows one host device to support several virtual machines at the same time.
Hypervisors allow IT to retain control over a cloud environment's infrastructure, processes, and sensitive data
while making cloud-based applications accessible to users in a virtual environment.
Increased emphasis on creative applications is being driven by digital transformation and increasing consumer
expectations. As a result, many businesses are transferring their virtual computers to the cloud.
Utility Computing
Utility computing is a service model in which computing resources are provided and billed based on actual
usage, akin to traditional utilities like electricity or water. In this model, resources such as
processing power, storage, and network bandwidth are offered on an on-demand basis. This approach enables
businesses and individuals to scale resources dynamically according to their needs, leading to potential cost
savings since users only pay for the resources they consume. Utility computing supports flexible scaling and
resource allocation, making it well-suited for applications with varying workloads and for scenarios where the
demand for resources fluctuates significantly.
Utility computing, as the name suggests, is a type of computing that provides services and computing resources to customers. It is basically a facility provided to users on demand, with charges based on their specific usage. It is similar to cloud computing and therefore requires cloud-like infrastructure.
Virtually any activity performed in a data center can be replicated in a utility computing offering, with compute, storage, and network services delivered on a metered, pay-per-use basis.
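To make the pay-per-use idea concrete, here is a small illustrative metering calculation. All unit prices are made-up assumptions for the sketch, not any provider's actual rate card.

# Assumed unit prices (illustrative only).
RATES = {
    "vm_hours": 0.05,    # $ per VM-hour
    "storage_gb": 0.02,  # $ per GB-month
    "egress_gb": 0.09,   # $ per GB transferred out
}

# Metered usage for one month.
usage = {"vm_hours": 720, "storage_gb": 500, "egress_gb": 120}

bill = sum(RATES[item] * quantity for item, quantity in usage.items())
print(f"Monthly bill: ${bill:.2f}")  # the user pays only for what was consumed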
Elastic Computing
EC2 stands for Elastic Compute Cloud. EC2 is an on-demand computing service on the AWS cloud platform. Under computing, it includes all the services a computing device can offer, along with the flexibility of a virtual environment. It also allows users to configure their instances as per their requirements, i.e. allocate the CPU, RAM, and storage according to the needs of the current task, and to dismantle the virtual device once its task is completed and it is no longer required. For providing all these scalable resources, AWS charges a bill at the end of every month; the amount depends entirely on your usage. EC2 allows you to rent virtual computers and is one of the easiest ways to provision servers on the AWS Cloud. EC2 has resizable capacity and offers security, reliability, high performance, and cost-effective infrastructure to meet demanding business needs.
Elastic computing refers to the capability of dynamically adjusting computing resources to meet varying
workload demands. This dynamic scalability allows systems to expand or contract resource allocations—such
as CPU, memory, and storage—based on real-time needs. The principle of elastic computing ensures that
resources are used efficiently, reducing costs by aligning resource availability with demand. This approach
involves pooling resources from multiple servers or systems to create a flexible and scalable infrastructure.
Elastic computing is particularly valuable in cloud environments, where users benefit from the ability to
handle both anticipated and unexpected changes in workload.
Elastic computing refers to a scenario in which the overall resource footprint available in a system or consumed by a specific job can grow or shrink on demand. This usually relies on external cloud computing services, where the local cluster provides only part of the resource pool available to all jobs. However, elastic computing may also be implemented on standalone clusters.
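A toy sketch of the elastic-scaling idea described above: given the current request rate, compute how many instances are needed, bounded by configured minimum and maximum sizes. The per-instance capacity figure is an assumption chosen only for illustration.

import math

REQUESTS_PER_INSTANCE = 500       # assumed capacity of one instance (requests/second)
MIN_INSTANCES, MAX_INSTANCES = 2, 20

def desired_instances(current_rps: float) -> int:
    # Scale out when load grows, scale in when it shrinks, within the allowed bounds.
    needed = math.ceil(current_rps / REQUESTS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

for rps in (300, 2400, 15000):
    print(f"{rps} req/s -> {desired_instances(rps)} instances")

A managed service such as AWS Auto Scaling applies this kind of rule automatically, which is what makes capacity appear elastic to the user.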
Grid Computing
Grid computing is a distributed architecture that combines computer resources from different locations to
achieve a common goal. It breaks down tasks into smaller subtasks, allowing concurrent processing.
Grid computing involves connecting a network of computers to work collaboratively on complex tasks by
pooling their computational resources. This distributed approach enables the sharing of processing power,
storage, and network capabilities across multiple nodes, often spanning different locations. Grid computing is
particularly effective for solving large-scale problems that require substantial computational power, such as
scientific simulations or data analysis tasks. By leveraging the combined resources of multiple machines, grid
computing provides a scalable and cost-effective solution for high-performance tasks that exceed the
capabilities of individual systems.
Grid Computing can be defined as a network of computers working together to perform a task that would
rather be difficult for a single machine. All machines on that network work under the same protocol to act as a
virtual supercomputer. The tasks that they work on may include analyzing huge datasets or simulating
situations that require high computing power. Computers on the network contribute resources like processing
power and storage capacity to the network.
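The following sketch mimics the grid idea on a single machine: a large job is broken into subtasks that are processed concurrently by a pool of workers, which stand in for grid nodes. The chunk size and worker count are arbitrary assumptions.

from multiprocessing import Pool

def analyze_chunk(chunk):
    # Stand-in for a real computation on one slice of a large dataset.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the job into subtasks, process them concurrently, then combine the results.
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(analyze_chunk, chunks)
    print("Combined result:", sum(partial_results))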
Resource Utilization: By pooling resources from multiple computers, grid computing maximizes
resource utilization. Idle or underutilized machines contribute to tasks, reducing wastage.
Complex Problem Solving: Grids handle large-scale problems that require significant computational
power. Examples include climate modeling, drug discovery, and genome analysis.
Cost Savings: Organizations can reuse existing hardware, saving costs while accessing excess computational resources. Additionally, cloud resources can be added cost-effectively when extra capacity is needed.
Hypervisor virtualization has become a cornerstone of modern IT infrastructure in enterprises, offering a range
of benefits that streamline operations, reduce costs, and enhance flexibility. Here’s a detailed look at how
hypervisor virtualization is applied in enterprise environments:
1. Server Consolidation
Overview: Server consolidation refers to the practice of reducing the number of physical servers by running
multiple virtual machines (VMs) on fewer physical hosts.
This is achieved through virtualization technology, which allows multiple VMs, each with its own operating
system and applications, to operate on a single physical server.
Benefits:
Cost Reduction: Decreases the need for physical hardware, leading to lower capital expenditure on
servers and reduced operational costs related to power, cooling, and physical space.
Efficient Resource Utilization: Optimizes the use of server resources (CPU, memory, storage) by
allowing better allocation and usage compared to traditional single-application servers.
Simplified Management: Reduces the complexity of managing numerous physical servers, making
system administration and monitoring more straightforward.
Implementation Example: An enterprise with multiple underutilized servers might consolidate these servers
into a few high-performance physical machines. By running VMs for different applications or departments on
these machines, the enterprise can maximize hardware usage and lower overall infrastructure costs.
2. Development and Testing Environments
Overview: Virtualization provides isolated environments that can be easily created, modified, and destroyed.
This is particularly useful for development and testing purposes, where different configurations and versions of
applications need to be tested without impacting production systems.
Benefits:
Isolation: Developers and testers can work in environments that replicate production conditions
without risking the stability of the live environment.
Cost Efficiency: Enables the creation of multiple test environments on a single physical server,
reducing hardware costs.
Flexibility: Allows for rapid deployment and teardown of test environments, making it easier to test
various scenarios and configurations.
Implementation Example: A software development team might use VMs to create various configurations of
their application for testing purposes. They can quickly spin up VMs with different operating systems or
application versions, conduct their tests, and then decommission the VMs once testing is complete.
3. Disaster Recovery and Business Continuity
Overview: Disaster recovery (DR) and business continuity plans benefit greatly from virtualization.
Virtualization simplifies the replication and restoration of IT systems in the event of a disaster, ensuring
minimal downtime and quick recovery.
Benefits:
VM Snapshots and Cloning: Enables the creation of snapshots and clones of VMs, which can be used
to restore systems to a previous state in case of failure.
Geographic Flexibility: Allows replication of VMs to offsite locations, ensuring that backup systems
are available even if the primary data center is compromised.
Rapid Recovery: Facilitates quick recovery by allowing VMs to be moved or copied to alternative
hardware in case of system failure.
Implementation Example: An enterprise might implement a DR solution where critical applications are
virtualized and replicated to a secondary data center. In the event of a failure at the primary site, the VMs can
be quickly activated at the secondary site, minimizing downtime and maintaining business operations.
4. Desktop Virtualization
Overview: Desktop virtualization involves running desktop operating systems and applications on centralized
servers rather than on individual user devices. Users access their desktops remotely through thin clients or
other devices.
Benefits:
Implementation Example: An organization might deploy virtual desktop infrastructure (VDI) where
employees use thin clients or personal devices to connect to virtual desktops hosted on central servers. This
setup allows for easy updates, improved security, and consistent user experiences across different locations.
5. Server and Application Isolation
Overview: Virtualization provides isolation between different applications or services running on the same
physical server. This isolation helps in managing dependencies and conflicts between applications and
enhances security by compartmentalizing processes.
Benefits:
Improved Security: Isolates applications and services, reducing the risk of one application affecting
the stability or security of another.
Resource Management: Allocates resources (CPU, memory) to each VM based on application needs,
preventing resource contention and performance issues.
Conflict Resolution: Minimizes compatibility issues between different applications or services by
running them in separate virtual environments.
Implementation Example: An enterprise might run different business-critical applications (e.g., email
servers, databases, web applications) in separate VMs on a single physical server. This setup ensures that if
one application encounters issues, it does not impact the others, and performance can be tuned individually for
each application.
6. Scalability and Flexibility
Overview: Virtualization provides scalability and flexibility in managing IT resources, allowing enterprises to adjust their infrastructure based on changing demands.
Benefits:
Dynamic Scaling: Resources can be scaled up or down based on workload requirements, allowing
enterprises to respond quickly to changes in demand.
Resource Pooling: Pools resources from multiple physical servers to provide a flexible and scalable
environment.
Cost Efficiency: Reduces the need for over-provisioning and enables more efficient use of resources.
Implementation Example: An e-commerce company experiencing high traffic during peak seasons can use
virtualization to quickly scale their infrastructure by adding additional VMs to handle the increased load. Once
the peak period ends, resources can be scaled back down to save costs.
7. Testing New Technologies
Overview: Virtualization allows enterprises to test new technologies and configurations in isolated environments before deploying them in production.
Benefits:
Risk Mitigation: Tests new technologies without impacting existing systems, reducing the risk of
disruptions.
Cost Savings: Avoids the need for additional physical hardware for testing purposes.
Accelerated Innovation: Facilitates rapid experimentation with new tools and configurations.
Implementation Example: A company exploring a new database management system can deploy it in a
virtual environment to evaluate its performance and compatibility with existing applications. If successful,
they can then consider a full-scale deployment in the production environment.
Virtualization in High-Performance Computing (HPC)
1. Benefits of Virtualization in HPC
1.1 Improved Resource Utilization: Virtualization allows multiple virtual machines (VMs) to share the same
physical resources. In HPC environments, this means that the computational power of a cluster can be more
efficiently used by running multiple virtual instances on each physical node. This helps in maximizing the
utilization of expensive HPC hardware.
1.2 Flexibility and Scalability: Virtualization provides the ability to quickly provision and de-provision VMs
based on workload demands. For HPC applications, this means that resources can be dynamically allocated as
needed. When a particular simulation or computation requires additional resources, VMs can be spun up to
meet the demand. Conversely, when the workload decreases, resources can be scaled back to avoid waste.
1.3 Simplified Management: Managing a large number of physical servers can be complex and resource-
intensive. Virtualization abstracts the underlying hardware, making it easier to manage and maintain
computing resources. Tasks such as provisioning, monitoring, and maintaining systems can be streamlined
through virtualization management tools, reducing administrative overhead.
1.4 Isolation and Fault Tolerance: Virtualization provides isolation between different VMs, which can be
advantageous in an HPC environment. It ensures that if one VM experiences a failure or issues, other VMs can
continue to operate without disruption. This isolation also helps in testing and development, allowing new
software or configurations to be tested in a controlled environment without affecting production workloads.
1.5 Cost Efficiency: By consolidating multiple workloads onto fewer physical servers, virtualization can
reduce hardware costs. This is particularly beneficial in HPC environments where hardware is often expensive.
Virtualization allows organizations to leverage their hardware investments more effectively and reduce overall
infrastructure costs.
2. Challenges of Virtualization in HPC
2.1 Performance Overhead: One of the primary concerns with virtualization in HPC is the performance
overhead introduced by the hypervisor. Virtualization adds an additional layer between the hardware and the
application, which can lead to performance degradation compared to running directly on physical hardware.
This overhead can be significant in HPC applications where performance is critical.
2.2 Resource Contention: In a virtualized HPC environment, multiple VMs sharing the same physical
resources can lead to resource contention. Proper management and allocation of resources are essential to
ensure that high-priority applications receive the necessary computational power. Virtualization solutions must
be carefully configured to balance workloads and prevent performance bottlenecks.
2.3 Compatibility and Support: Not all HPC applications are well-suited for virtualization. Some
applications may require direct access to hardware features or have specific performance requirements that are
difficult to meet in a virtualized environment. It's important to evaluate the compatibility of HPC applications
with virtualization and ensure that they can operate effectively within virtual machines.
2.4 Complexity in Management: While virtualization can simplify management in some respects, it also
introduces additional complexity. Managing a virtualized HPC environment requires expertise in both
virtualization technology and HPC workloads. This includes understanding how to configure and optimize the
hypervisor, as well as how to manage resource allocation and performance tuning.
2.5 Security Considerations: Virtualization introduces new security challenges, such as the potential for
vulnerabilities in the hypervisor that could affect multiple VMs. Ensuring that the hypervisor is secure and
properly configured is crucial to maintaining the security of the entire HPC environment. Additionally, proper
isolation between VMs is necessary to prevent unauthorized access to sensitive data or applications.
3. Best Practices for Virtualization in HPC
3.1 Performance Optimization: To minimize performance overhead, choose hypervisors and virtualization
technologies that are optimized for HPC workloads. Look for features such as hardware acceleration and
resource management tools that can help mitigate performance impacts.
3.2 Resource Allocation and Management: Implement resource allocation strategies to ensure that critical
applications receive the necessary resources. Use virtualization management tools to monitor resource usage
and dynamically adjust allocations based on workload demands.
3.3 Compatibility Testing: Thoroughly test HPC applications in a virtualized environment before full
deployment. Assess how well applications perform in virtual machines and make any necessary adjustments to
ensure compatibility and performance.
3.4 Security Practices: Adopt robust security practices to protect the virtualized HPC environment. This
includes keeping the hypervisor up-to-date with security patches, using strong access controls, and regularly
auditing the virtual environment for potential vulnerabilities.
3.5 Training and Expertise: Ensure that IT staff have the necessary expertise in both virtualization and HPC
to manage and optimize the environment effectively. Investing in training and professional development can
help address the complexities of managing virtualized HPC infrastructure.
Pitfalls of Virtualization
Despite its advantages, virtualization comes with certain challenges. Performance overhead is a significant
concern, as the additional layer of abstraction introduced by the hypervisor can lead to reduced efficiency
compared to direct hardware access. The complexity of managing virtual environments also requires
specialized skills and tools to ensure effective operation and maintenance. Additionally, security risks are
amplified in virtualized environments, where multiple VMs share the same physical host. This scenario
necessitates robust security measures to prevent potential vulnerabilities and attacks that could affect multiple
virtual instances.
1. Performance Overhead
Issue: Additional layer (hypervisor) between hardware and VMs can cause performance degradation.
Impact: Reduced efficiency and increased latency.
Mitigation: Use high-performance hypervisors, leverage hardware-assisted virtualization, and monitor
performance regularly.
2. Resource Contention
Issue: Multiple VMs sharing the same physical resources can lead to contention and overcommitment.
Impact: Performance degradation and unpredictable behavior.
Mitigation: Implement resource management policies, monitor utilization, and conduct capacity
planning.
3. Security Concerns
Issue: Hypervisor vulnerabilities and potential data isolation issues between VMs.
Impact: Risk of data breaches and unauthorized access.
Mitigation: Harden the hypervisor, ensure strong VM isolation, and conduct regular security audits.
4. Complexity in Management
Issue: Increased complexity due to the interplay between virtual and physical infrastructure.
Impact: Higher administrative overhead and troubleshooting difficulties.
Mitigation: Use comprehensive management tools, standardize configurations, and invest in staff
training.
6. Vendor Lock-In
Issue: Proprietary technologies and tools can lead to dependency on a single vendor.
Impact: Challenges in migrating workloads and reduced flexibility.
Mitigation: Adopt open standards, plan for portability, and evaluate vendor options carefully.
Multitenant software is designed to serve multiple customers or tenants from a single software instance while
maintaining data and configuration isolation. This model allows each tenant to operate in a shared environment
without interfering with other tenants' data or operations. Multi-entity support in such software ensures that
different organizations or users can customize their experiences while leveraging a common application
framework. This approach is particularly advantageous for SaaS (Software as a Service) applications, where
cost-efficiency and scalability are key.
Definition: Multitenancy allows multiple clients (tenants) to use the same software instance while keeping
their data and configurations isolated.
Key Aspects:
1. Data Isolation:
o Purpose: Ensures each tenant's data is separate and secure.
o Method: Uses separate schemas or tables in the database.
2. Configuration Isolation:
o Purpose: Allows each tenant to have custom settings and preferences.
o Method: Manages through tenant-specific configuration settings.
3. Access Control:
o Purpose: Restricts data and functionality access based on tenant identity.
o Method: Implements role-based or attribute-based access controls.
4. Resource Allocation:
o Purpose: Ensures fair distribution of computing resources among tenants.
o Method: Employs resource quotas and load balancing.
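A minimal sketch of data isolation and access control from the key aspects above, using a shared-schema multitenant design in which every query is scoped by a tenant identifier taken from the authenticated request context. The table and column names are illustrative assumptions; SQLite is used only to keep the example self-contained.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("acme", 120.0), ("acme", 80.0), ("globex", 400.0)])

def invoices_for(tenant_id: str):
    # The parameterized tenant filter enforces isolation between tenants.
    rows = conn.execute(
        "SELECT amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [amount for (amount,) in rows]

print(invoices_for("acme"))  # [120.0, 80.0] -- never sees globex's data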
Implementation Strategies:
Challenges:
Multitenant software efficiently serves multiple clients by ensuring data isolation, customizable configurations,
and secure, fair resource allocation.
Multi-Schema Approach
Multi-Schema Approach: In a multi-schema approach, each tenant's data is stored in a separate schema
within the same database. A schema is a logical container that holds database objects such as tables, views, and
procedures.
Benefits
1. Data Isolation:
o Purpose: Ensures that data from different tenants is kept separate, providing security and
privacy.
o Method: Each tenant’s data resides in its own schema, preventing accidental access or leakage.
2. Simplified Management:
o Purpose: Easier to manage and maintain data structures for each tenant.
o Method: Admins can handle backups, updates, and schema changes on a per-schema basis.
3. Customizability:
o Purpose: Allows customization of database objects and structures per tenant.
o Method: Schema-specific customizations can be implemented without affecting other tenants.
4. Performance Optimization:
o Purpose: Helps optimize performance by isolating data access patterns.
o Method: Database queries and operations are scoped to a specific schema, reducing the risk of
performance bottlenecks caused by inter-tenant data.
Implementation
1. Schema Design:
o Structure: Design separate schemas for each tenant within the same database.
o Objects: Define tables, indexes, and other database objects within each schema.
2. Access Control:
o Security: Implement access controls to ensure that users can only access their respective
schemas.
o Authentication: Use tenant-specific authentication mechanisms to enforce access restrictions.
3. Backup and Recovery:
o Backup: Perform backups at the schema level to isolate tenant data and simplify recovery
processes.
o Recovery: Restore specific schemas as needed without affecting others.
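A minimal sketch of the multi-schema approach described above on PostgreSQL, assuming the psycopg2 driver and illustrative connection details and object names: each tenant gets its own schema, and the session's search_path is pointed at that schema so queries resolve against that tenant's objects only.

import psycopg2
from psycopg2 import sql

conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")  # assumed DSN

def use_tenant_schema(cur, tenant: str):
    schema = sql.Identifier(f"tenant_{tenant}")
    # Create the tenant's schema on first use, then scope this session to it.
    cur.execute(sql.SQL("CREATE SCHEMA IF NOT EXISTS {}").format(schema))
    cur.execute(sql.SQL("SET search_path TO {}").format(schema))

with conn, conn.cursor() as cur:
    use_tenant_schema(cur, "acme")
    cur.execute("CREATE TABLE IF NOT EXISTS invoices (id serial PRIMARY KEY, amount numeric)")
    cur.execute("INSERT INTO invoices (amount) VALUES (%s)", (120.0,))
    cur.execute("SELECT count(*) FROM invoices")  # resolves to tenant_acme.invoices
    print(cur.fetchone()[0])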
Challenges
1. Schema Management:
o Issue: Managing a large number of schemas can be complex and resource-intensive.
o Solution: Use automation and management tools to handle schema creation and maintenance.
2. Performance Considerations:
o Issue: Performance can be impacted if the database becomes too large or if schemas are not
properly optimized.
o Solution: Monitor performance and optimize schemas and indexes to ensure efficient data
access.
3. Scalability:
o Issue: As the number of tenants grows, managing many schemas may become challenging.
o Solution: Plan for scalability with efficient schema management practices and consider
database partitioning or sharding if needed.
4. Data Migration:
o Issue: Migrating data between schemas or between different environments can be complex.
o Solution: Develop a robust data migration strategy and use tools to automate and streamline the
process.
Unit III:
Cloud computing performance evaluation is the process by which companies assess how well their cloud
computing resources are operating. By migrating to the cloud, you will tap into virtually limitless scaling and
flexibility. However, being on the cloud does not guarantee performance. Compared to on-premises systems,
you may be surprised at the slowdown in performance once you migrate data-intensive workloads and very
large data sets to the cloud. Cloud computing performance evaluation allows you to get a clear picture of
which components in your cloud environment are draining performance.
Public Cloud:
o Characteristics: Resources shared among multiple organizations; managed by third-party
providers.
o Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform.
o Use Cases: Startups, development/testing environments, and applications with variable demand.
Private Cloud:
o Characteristics: Exclusive use by a single organization; greater control and customization.
o Deployment: On-premises or hosted by a third-party provider.
o Use Cases: Regulated industries (healthcare, finance), sensitive workloads.
Hybrid Cloud:
o Characteristics: Combination of public and private clouds; allows for flexibility and data
sharing.
o Benefits: Balances the need for security and control with scalability and cost-effectiveness.
o Use Cases: Seasonal workloads, data backup and recovery.
Community Cloud:
o Characteristics: Infrastructure shared among several organizations with similar concerns.
o Management: Can be managed by one of the organizations or a third party.
o Use Cases: Collaborative projects, research organizations.
3. Administering & Monitoring Cloud Services
Administration:
o User Management:
Role-based access control (RBAC) to restrict permissions based on user roles.
Identity and Access Management (IAM) solutions to manage user identities and
permissions.
o Service Provisioning:
Automating the deployment of resources using Infrastructure as Code (IaC) tools (e.g.,
Terraform, CloudFormation).
Lifecycle management of cloud resources.
Monitoring:
o Monitoring Tools:
AWS CloudWatch, Azure Monitor, Google Cloud Operations.
Features include performance dashboards, log analytics, and alerting.
o Key Metrics to Monitor:
Resource utilization (CPU, memory, disk I/O).
Application performance (response times, error rates).
Cost management (spending trends, budget alerts).
Cloud monitoring is the process of evaluating the health of cloud-based IT infrastructures. Using cloud-
monitoring tools, organizations can proactively monitor the availability, performance, and security of their
cloud environments to find and fix problems before they impact the end-user experience. Cloud monitoring
assesses three main areas: performance, security, and compliance.
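As a hedged example of cloud monitoring through an API, the sketch below pulls average CPU utilization for one EC2 instance from Amazon CloudWatch using boto3. The region and instance ID are placeholders, and credentials with CloudWatch read access are assumed.

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # 5-minute datapoints
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")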
4. Load Balancing
Definition: The process of distributing incoming network traffic across multiple servers to optimize
resource use and prevent overload.
Load balancing is an essential technique used in cloud computing to optimize resource utilization and ensure
that no single resource is overburdened with traffic. It is a process of distributing workloads across multiple
computing resources, such as servers, virtual machines, or containers, to achieve better performance,
availability, and scalability.
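A toy round-robin load balancer to make the definition above concrete: incoming requests are distributed across a fixed pool of backend servers so that no single server takes all the traffic. The backend addresses are arbitrary assumptions.

import itertools

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # assumed backend addresses
next_backend = itertools.cycle(backends)

def route(request_id: int) -> str:
    # Round-robin: each request goes to the next server in the cycle.
    server = next(next_backend)
    return f"request {request_id} -> {server}"

for i in range(6):
    print(route(i))

Real load balancers add health checks and weighting, but the distribution principle is the same.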
5. Resource Optimization
Techniques:
o Auto-scaling: Automatically adjusts the number of active servers based on demand.
o Rightsizing: Evaluating resource usage and adjusting instance sizes to match actual
requirements, avoiding over-provisioning.
o Cost Optimization: Using reserved instances or spot instances to reduce costs; analyzing usage
patterns for efficiency.
Tools:
o AWS Cost Explorer, Azure Cost Management, GCP Billing Reports for analyzing and
optimizing resource costs.
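A back-of-the-envelope cost-optimization calculation for the techniques above, comparing an on-demand rate with a committed (reserved) rate for an instance that runs all month. Both prices are assumptions for illustration, not real quotes.

HOURS_PER_MONTH = 730
on_demand_rate = 0.0416   # $/hour, assumed
reserved_rate = 0.0262    # $/hour equivalent with a 1-year commitment, assumed

on_demand_cost = on_demand_rate * HOURS_PER_MONTH
reserved_cost = reserved_rate * HOURS_PER_MONTH
savings = (1 - reserved_cost / on_demand_cost) * 100

print(f"On-demand: ${on_demand_cost:.2f}/month, reserved: ${reserved_cost:.2f}/month "
      f"(about {savings:.0f}% saved)")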
Communication:
Email is one of the most popular communication methods that businesses and companies use today. This service is evolving at a very fast rate, becoming more reliable and faster. Today, most businesses run email campaigns for clients and use email to store important data about their customers. Through cloud computing, webmail clients can use cloud storage while providing analytics on email data from any location globally. Companies are also using cloud-based SaaS apps to enable instant access to enterprise information from any location. Ideally, cloud computing has made it easier for companies and businesses to execute internal processes smoothly.
Collaboration:
Cloud computing has made it easier for employees, clients, and businesses to collaborate. Sharing files and documents has been made easier by cloud computing, enabling connections that are easy and less time-consuming. Google Wave, for instance, enables users to create files and then invite other users to edit, collaborate on, or comment on them. Collaboration through cloud computing feels as immediate as instant messaging, yet it allows complete, specific tasks to be accomplished in hours instead of months.
Data storage:
Businesses are using cloud computing solutions to store crucial data. Data stored on a business or home computer can only be accessed from that device. Cloud computing, however, enables users to store and access data anytime, anywhere, and from any device. This storage is also secure: each user gets a unique password and username so that only that user can access their files online, and the data is encrypted. There are several security layers for cloud storage, which makes it extremely difficult for hackers to access the data in the cloud.
Virtual office:
Perhaps the most popular of all real-time applications of cloud computing is the ability to rent software (i.e. SaaS) rather than buying it. For instance, Google Docs can be used to run a virtual office.
7. Real-Time Applications in the Cloud
Characteristics:
o Require low latency and quick response times; often need high throughput and scalability.
Use Cases:
o Online gaming, financial services (trading platforms), real-time collaboration tools, IoT
applications.
Technologies:
o WebSockets: Enable real-time bi-directional communication between clients and servers (see the sketch below this list).
o Message Brokers: Systems like RabbitMQ and Apache Kafka for handling real-time data streams.
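A minimal WebSocket echo server sketched with the third-party websockets library, referenced in the Technologies list above, to show the bi-directional channel used by real-time applications. The host and port are arbitrary choices, and the library must be installed separately.

import asyncio
import websockets

async def echo(websocket):
    # Each message received from the client is pushed straight back in real time.
    async for message in websocket:
        await websocket.send(f"echo: {message}")

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())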
8. Mobile Cloud Computing
Definition: Leveraging cloud computing services on mobile devices to enhance capabilities and user
experiences.
Benefits:
o Access to powerful processing and storage capabilities via the cloud.
o Improved app functionality without heavy local resource usage.
o Enhanced data synchronization and sharing across devices.
Challenges:
o Network connectivity issues impacting performance.
o Security risks associated with data transmission and storage.
o Battery consumption concerns for mobile devices.
9. Edge Computing
Definition: A distributed computing paradigm that brings computation and data storage closer to the location where data is generated, i.e. closer to the sources of data.
Edge computing is an emerging computing paradigm which refers to a range of networks and devices at
or near the user. Edge is about processing data closer to where it's being generated, enabling processing at
greater speeds and volumes, leading to greater action-led results in real time.
Edge computing optimizes Internet devices and web applications by bringing computing closer to the source of
the data. This minimizes the need for long distance communications between client and server, which reduces
latency and bandwidth usage.
Benefits:
o Reduced latency for applications needing immediate processing.
o Decreased bandwidth usage by processing data locally before sending to the cloud.
o Enhanced performance for IoT devices, smart cities, and autonomous systems.
Applications:
o Smart homes, connected vehicles, healthcare monitoring systems, and real-time analytics.
Unit IV:
Cloud security fundamentals & Issues in cloud computing: Secure Execution Environments and
Communications in cloud, General Issues and Challenges while migrating to Cloud. The Seven-step model of
migration into a cloud, Vulnerability assessment tool for cloud, Trusted Cloud computing, Virtualization
security management-virtual threats, VM Security Recommendations and VM-Specific Security
techniques. QoS Issues in Cloud, Dependability, data migration, challenges and risks in cloud adoption.
Cloud computing security consists of a set of policies, controls, procedures, and technologies that work together to protect cloud-based systems, data, and infrastructure. These security measures are configured to protect cloud data, support regulatory compliance, protect customers' privacy, and set authentication rules for individual users and devices.
Protection measures:
No single person should accumulate all these privileges.
A provider should deploy stringent security devices, restricted access control policies, and surveillance
mechanisms to protect the physical integrity of the hardware.
By enforcing security processes, the provider itself can prevent attacks that require physical access to the
machines.
The only way a system administrator would be able to gain physical access to a node running a customer’s
VM is by diverting this VM to a machine under his/her control, located outside the IaaS’s security
perimeter.
The cloud computing platform must be able to confine the VM execution inside the perimeter and guarantee
that at any point a system administrator with root privileges remotely logged to a machine hosting a VM
cannot access its memory.
The TCG (Trusted Computing Group), an industry consortium formed to identify and implement security measures at the infrastructure level, proposes a set of hardware and software technologies to enable the construction of trusted platforms, and suggests the use of "remote attestation" (a mechanism that lets authorized parties detect changes to the user's computers).
An Execution Environment is an environment for executing code, in which those executing the code can have high levels of trust in the surrounding environment, because it can ignore threats from the rest of the device.
This is where a Trusted Execution Environment stands out, distinguishing it from the uncertain nature of ordinary applications. Generally, the rest of the device hosts a feature-rich OS such as Android, and so is generically known in this context as the REE (Rich Operating System Execution Environment).
Cloud communications are the blending of multiple communication modalities. These include methods such
as voice, email, chat, and video, in an integrated fashion to reduce or eliminate communication lag. Cloud
communications are essentially internet-based communication.
Cloud communications evolved from data to voice with the introduction of VoIP (Voice over Internet Protocol). A branch of cloud communications is cloud telephony, which refers specifically to voice communications.
Cloud communications providers host communication services through servers that they own and maintain.
The customers, in turn, access these services through the cloud and only pay for services that they use,
doing away with maintenance associated with PBX (private branch exchange) system deployment.
Cloud communications provide a variety of communication resources, from servers and storage to
enterprise applications such as data security, email, backup and data recovery, and voice, which are all
delivered over the internet. The cloud provides a hosting environment that is flexible, immediate, scalable,
secure, and readily available.
The need for cloud communications has resulted from the following trends in the enterprise:
Distributed and decentralized company operations in branch and home offices
Increase in the number of communication and data devices accessing the enterprise networks
Hosting and managing IT assets and applications
These trends have forced many enterprises to seek external services and to outsource their requirement for IT
and communications. The cloud is hosted and managed by a third party, and the enterprise pays for and uses
space on the cloud for its requirements. This has allowed enterprises to save on costs incurred for hosting and
managing data storage and communication on their own.
The following are some of the communication and application products available under cloud communications
that an enterprise can utilize:
Private branch exchange
SIP Trunking
Call center
Fax services
Interactive voice response
Text messaging
Voice broadcast
Call-tracking software
Contact center telephony
All of these services cover the various communication needs of an enterprise. These include customer relations, intra-branch and inter-branch communication, inter-department memos, conferencing, call forwarding and tracking services, the operations center, and the office communications hub.
Cloud communication is a center for all enterprise-related communication that is hosted, managed, and
maintained by third-party service providers for a fee charged to the enterprise.
General Issues and Challenges while migrating to Cloud
ISSUES IN CLOUD COMPUTING
Cloud Computing is Internet-based computing, where shared resources, software, and information are
provided to computers and other devices on demand.
These are major issues in Cloud Computing:
1. Privacy:
The user data can be accessed by the host company with or without permission. The service provider may
access the data that is on the cloud at any point in time. They could accidentally or deliberately alter or
even delete information.
2. Compliance:
There are many regulations in place related to data and its hosting. To comply with regulations (e.g., the Federal Information Security Management Act, the Health Insurance Portability and Accountability Act), users may have to adopt deployment models that are expensive.
3. Security:
Cloud-based services involve third parties for storage and security. One cannot simply assume that a cloud-based company will protect and secure one's data; if one is using their services at a very low cost or for free, they may share users' information with others. Security therefore presents a real threat to the cloud.
4. Sustainability:
This issue refers to minimizing the effect of cloud computing on the environment. Because data centers have significant environmental effects, countries where the climate favors natural cooling and renewable electricity is readily available, such as Finland, Sweden, and Switzerland, are trying to attract cloud computing data centers.
5. Abuse:
While providing cloud services, it should be ascertained that the client is not purchasing the services of cloud computing for nefarious purposes. For example, a banking Trojan illegally used the popular Amazon service as a command-and-control channel that issued software updates and malicious instructions to PCs that were infected by the malware.
The following points illustrate the main challenges, and the corresponding mitigations, that should be taken into consideration when architecting security controls into applications for cloud deployments:
1. Security and Privacy: Security is arguably the biggest challenge in cloud computing. Cloud security refers to a set of technologies and policies to protect data; remember that violating privacy can cause havoc for end users.
Implement security applications, encrypted file systems, and data-loss-prevention software to prevent attacks on cloud infrastructure. Use security tools and adopt a corporate culture that upholds data security.
2. Cloud Costs: Costing is a significant challenge in the adoption, migration, and operation of cloud computing services, especially for small and medium-sized businesses.
Prepare a cost estimate and budget right from the start, involving experts who can help with cloud cost management. An additional measure is creating a centralized team to oversee budget details.
3. Reliability and Availability: Although cloud providers continue to improve their uptimes, service disruption is still an ongoing problem, and small-scale cloud service providers are more prone to downtime. This problem persists today even with well-developed backups and platform advancements.
Cloud computing service providers have resorted to creating multiple redundancy levels in their systems. They are also developing disaster recovery setups and backup plans to mitigate outages.
The Seven-step model of migration into a cloud
Cloud migration is the procedure of transferring applications, data, and other types of business components to a cloud computing platform. There are several kinds of cloud migration an organization can perform. The most common model is the transfer of applications and data from an on-premises, local data center to a public cloud.
However, a cloud migration can also entail transferring applications and data from one cloud environment or provider to another, a model called cloud-to-cloud migration. Another type of cloud migration is reverse cloud migration (also called cloud exit or cloud repatriation), where applications or data are transferred back to the local data center.
Migrating a model to a cloud can help in several ways, such as improving scalability, flexibility, and
accessibility. Also, migrating models to the cloud can be a complex process that requires careful planning.
Now let’s discuss the seven steps to follow when migrating a model to the cloud:
Step 1: Choose the right cloud provider (Assessment step): The first step in migrating your model to the cloud is to choose a cloud provider that aligns with your needs, budget, and model requirements. Consider factors such as compliance, privacy, and security.
Step 2: Prepare your data (Isolation step): Before migrating to the cloud, you need to prepare your data. Ensure your data is clean, well organized, and in a format that is compatible with your chosen cloud provider.
Step 3: Choose your cloud storage (Mapping step): Once your data is prepared, you need to choose your cloud storage, which is where your data will be stored in the cloud. There are many cloud storage services, such as GCP Cloud Storage, AWS S3, and Azure Blob Storage.
Step 4: Set up your cloud computing resources and deploy your model (Re-architect step): If you want to run a model in the cloud, you need to set up your cloud computing resources. This includes selecting the appropriate instance type and setting up a virtual machine (VM) or container for your model. After setting up your computing resources, it is time to deploy your model to the cloud. This involves packaging your model into a container or virtual machine image and deploying it to your cloud computing resource. While deploying, it is possible that some functionality is lost, so some parts of the application may need to be re-architected (see the sketch below).
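A hedged sketch of what such a deployable unit might look like: a minimal Flask HTTP service exposing a /predict endpoint, which you would then package into a container or VM image. The model_predict function and the port are placeholders, not a specific product's API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def model_predict(features):
    # Stand-in for a real model loaded at startup (e.g., via joblib or a framework loader).
    return sum(features)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    prediction = model_predict(payload["features"])
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # Bind to all interfaces so the containerized service is reachable from outside.
    app.run(host="0.0.0.0", port=8080)
```

In practice you would also load a trained model at startup and validate the input; the container image built around a service like this is what gets deployed to the compute resource chosen earlier.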
Step 5: Augment your enterprise (Augmentation step): This is the most important step for the business case behind migrating to the cloud. In this step, we leverage the internal features of the cloud computing service to augment our enterprise application.
Step 6: Test your model: Once your model is deployed, you need to test it to ensure that it works as expected. That involves running test data through your model and comparing the results with your expected output.
Step 7: Monitor and maintain your model: After the model is deployed and tested, it is important to monitor and maintain it. That includes monitoring performance, updating the model as needed, and ensuring your data stays up to date. Migrating your model to the cloud can be a complex process, but following the seven steps above helps ensure a smooth and successful migration, leaving your model scalable and accessible.
QoS ISSUES IN CLOUD
Workload modeling involves the assessment or prediction of the arrival rates of requests and of the demand
for resources (e.g., CPU requirements) placed by applications on an infrastructure or platform, and the QoS
observed in response to such workloads.
System modeling aims at evaluating the performance of a cloud system, either at design time or at runtime.
Models are used to predict the value of specific QoS metrics such as response time, reliability or availability.
Applications of QoS models often appear in relation to decision-making problems in system management.
Techniques to determine optimized decisions range from simple heuristics to nonlinear programming and
meta-heuristics.
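For example, a very simple system model treats a VM as an M/M/1 queue and predicts the mean response time from the request arrival rate and the service rate. The sketch below is a minimal illustration of this kind of QoS prediction, not a production modeling tool.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time (seconds) of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable system: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# A VM serving 100 requests/s under an 80 requests/s workload:
# predicted mean response time = 1 / (100 - 80) = 0.05 s.
print(mm1_response_time(80, 100))   # 0.05
print(mm1_response_time(95, 100))   # 0.2 -- response time grows sharply near saturation
```

Such a model can feed a simple management heuristic, for example adding a server instance whenever the predicted response time exceeds a target value.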
DEPENDABILITY
Dependability is one of the most crucial issues in cloud computing environments, given the serious impact of failures on user experience. Cloud computing is a complex system based on virtualization and large-scale infrastructure, which makes failures frequent. In order to fight failures in a cloud, administrators must assure dependability differently from the common approach, in which fault management focuses only on the Infrastructure as a Service layer and on the cloud provider side.
DATA MIGRATION
Data migration refers to the process of transferring data from one system or location to another, new and improved one. It effectively selects, prepares, and transforms data to permanently transfer it from one storage system to another. With enterprises increasingly focused on optimization and technological advancement, they are availing themselves of database migration services to move from their on-premises infrastructure to cloud-based storage and applications. (A minimal storage-migration sketch in Python follows the list of migration types below.)
Types of data migration
Cloud Migration: It is the process of moving data, applications and all important business elements from
on premise data center to the cloud, or from one cloud to another.
Application Migration: Involves transfer of application programs to a modern environment. It may move
an entire application system from on premise IT center to the cloud or between clouds.
Storage Migration: It is the process of moving data to a modern system from outdated arrays. It enhances
the performance while offering cost-effective scaling.
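As referenced above, the following is a minimal, hedged sketch of a storage migration using boto3 (the AWS SDK for Python). The directory, bucket, and prefix names are placeholders, and other providers' SDKs follow a similar pattern.

```python
import os
import boto3  # AWS SDK for Python

def migrate_directory_to_s3(local_dir, bucket, prefix=""):
    """Copy every file under local_dir into an S3 bucket -- a minimal storage-migration sketch."""
    s3 = boto3.client("s3")
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.join(prefix, os.path.relpath(path, local_dir)).replace(os.sep, "/")
            s3.upload_file(path, bucket, key)  # uploads go over TLS by default
            print(f"uploaded {path} -> s3://{bucket}/{key}")

# Hypothetical usage; the directory and bucket names are placeholders.
# migrate_directory_to_s3("/data/reports", "my-archive-bucket", prefix="reports")
```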
CHALLENGES AND RISKS IN CLOUD ADOPTION
1. SETTING A BUSINESS OBJECTIVE: Setting clear and flexible objectives and plans based on business
requirements is important in any cloud migration. Create a clear migration plan, factoring in projected costs,
downtime, training needs, migration time, etc. Be prepared with risk mitigation plans as well.
2. FINANCIAL COSTS: Though cloud migration brings a lot of returns and benefits in the long run, getting there is usually expensive and time-consuming. Costs include architecture changes, human resources,
training, migration partners, cloud provider, and bandwidth costs. Proper planning and a phase-wise migration
will reduce financial risks.
3. WHAT TO MIGRATE, WHAT NOT?: Deciding what to migrate and what to leave behind is important for a successful migration strategy, so careful selection is very important. To reduce risks, one can start by migrating applications with fewer dependencies and lower criticality, those compatible with cloud services, or those aligned with critical business goals. A phase-wise migration approach is always better.
4. DATA SECURITY
Data security is the biggest concern when enterprises store their sensitive data with a third-party cloud provider. If data is lost, leaked, or exposed, it could cause severe disruption and damage to the business. Make a strategy to keep mission-critical data on your premises, or ensure complete data security at rest and in transit while you migrate to a cloud environment. Follow and implement the best practices and policies to protect data and access, and seek the help of a consultant or team with previous experience in setting up security in cloud environments.
5. CLOUD SERVICE PROVIDER: The availability of multiple similar cloud service providers makes it a
hurdle to choose the right one. Goals, budget, priorities of the organization, along with the services offered,
security, compliance, manageability, cost, etc., of the service provider are the main factors to be considered in
selections. Opt for Hybrid Cloud to reduce vendor lock-in.
6. COMPLEXITY & LACK OF EXPERTISE: Many organizations are wary of the complexity of cloud environments and migration processes, and surveys show that complexity is still a blocking factor in cloud adoption. If you do not have enough in-house expertise in dealing with the cloud, migration processes, and compliance requirements, it is better to engage a partner with previous experience. Encourage maximum automation with the right hassle-free automation tools and technologies.
7. COMPLIANCE: For companies operating under strict regulatory and compliance frameworks, it is hard to migrate to the cloud. Cloud migration should ensure compliance with local and global regulatory requirements. For example, data should be protected at rest and in transit; integrated audit trails, dashboards, and incident management systems should also be available to meet regulatory compliance requirements.
9. TRAINING & RESOURCES: Cloud environments and cloud migration processes are still complex to understand and practice. Moreover, the lack of expertise and training resources is a concern even today for many enterprises. The skill gap is one of the main reasons for the slowdown of cloud migration. Therefore, providing proper training and support to employees, and using hassle-free migration platforms and experienced partners, are critical to a successful and timely migration.
10. MIGRATION STRATEGY: Whether to rebuild, lift and shift / re-host (IaaS), refactor (PaaS), replace (SaaS), or opt for a combination is always a challenging question during migration. This decision is specific to the nature of the applications, infrastructure, network, security, privacy, scalability, regulatory, and business requirements of the organization. A detailed analysis of all these factors, including the budget, risk, time, etc., will help arrive at the right migration strategy.
11. DOWNTIME: Downtime can be catastrophic to many organizations in terms of revenue and reputation. Adopt a methodology that minimizes disruption and ensures business continuity; for example, test the migration offline and use the right end-to-end automation tools to reduce risks and downtime.
12. POST-MIGRATION: Another key concern is the data privacy, security, and monitoring capabilities of the application running on the cloud. Ensure there is complete observability of the build, deployment, and running of applications and data on the cloud. Use the right tools, which provide logs, audit trails, alerts, visual dashboards, and approval workflows to control and monitor the entire stack and operations.
Unit V:
Case Study on Open Source and Commercial Clouds: OpenStack, Eucalyptus, OpenNebula, Apache CloudStack, Amazon Web Services (AWS), Microsoft Azure, Google Cloud, etc.
OpenStack: OpenStack is an open-source platform that uses pooled virtual resources to build and manage private and public clouds. The tools that comprise the OpenStack platform, called "projects," handle the core cloud-computing services of compute, networking, storage, identity, and image services. More than a dozen optional projects can also be bundled together to create unique, deployable clouds.
OpenStack is a free, open-source cloud computing platform launched on July 21, 2010, by Rackspace Hosting
and NASA. It provides Infrastructure-as-a-Service (IaaS) for public and private clouds, enabling users to
access virtual resources. The platform comprises various interrelated components, known as "projects," which
manage hardware pools for computing, storage, and networking. Unlike traditional virtualization, OpenStack
uses APIs to directly interact with and manage cloud services.
OpenStack consists of several key components that work together to provide cloud services. The main
services include:
Nova: Manages compute resources for creating and scheduling virtual machines.
Neutron: Handles networking, managing networks and IP addresses through an API.
Swift: An object storage service designed for high fault tolerance, capable of managing petabytes of
unstructured data via a RESTful API.
Cinder: Provides persistent block storage accessible through a self-service API, allowing users to
manage their storage needs.
Keystone: Manages authentication and authorization for OpenStack services through a central
directory.
Glance: Responsible for registering and retrieving virtual disk images from various back-end systems.
Horizon: Offers a web-based dashboard for managing and monitoring OpenStack resources.
Ceilometer: Tracks resource usage for metering and billing, and can generate alarms for threshold
breaches.
Heat: Facilitates orchestration and auto-scaling of cloud resources on demand, working in conjunction
with Ceilometer.
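As a hedged illustration of how these services are driven through APIs, the sketch below uses the openstacksdk Python library to authenticate (via Keystone) and ask Nova to boot a server. The cloud name, image, flavor, and network values are placeholders for a hypothetical deployment.

```python
import openstack  # openstacksdk

# Credentials are read from a clouds.yaml entry or OS_* environment variables;
# "mycloud" and the image/flavor/network names below are placeholders.
conn = openstack.connect(cloud="mycloud")

server = conn.create_server(
    "demo-vm",
    image="ubuntu-22.04",
    flavor="m1.small",
    network="private",
    wait=True,          # block until Nova reports the server as ACTIVE
)
print(server.name, server.status)
```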
EUCALYPTUS
The open-source cloud refers to software or applications publicly available for users to set up in the cloud for their own purposes or for their organization. Eucalyptus is a Linux-based open-source software architecture for cloud computing and also a storage platform that implements Infrastructure as a Service (IaaS). It provides quick and efficient computing services. Eucalyptus was designed to provide services compatible with Amazon's EC2 cloud and Simple Storage Service (S3).
Eucalyptus is a powerful open-source tool for building private clouds.
Eucalyptus Architecture
Eucalyptus enables management of both Amazon Web Services and private cloud instances, allowing seamless
transfers between them. Its architecture includes a virtualization layer that handles network, storage, and
computing resources, with instances isolated through hardware virtualization.
Features:
Images: Eucalyptus Machine Images (EMIs) are software bundles for the cloud.
Instances: Running an image creates an active instance.
Networking: Three modes—Static (allocates IPs), System (integrates with physical networks), and
Managed (local instance networking).
Access Control: Manages user permissions.
Elastic Block Storage: Offers block-level storage for instances.
Auto-scaling and Load Balancing: Adjusts instances based on demand.
Node Controller: Manages instance lifecycles on each node, interacting with the OS and hypervisor.
Cluster Controller: Oversees multiple Node Controllers and the Cloud Controller, scheduling VM
execution.
Storage Controller (SC) and Walrus: The SC provides block storage and snapshots, while Walrus provides S3-compatible APIs for file (object) storage.
Cloud Controller: Front-end interface for client tools and communication with other components.
Operation Modes of Eucalyptus
Managed Mode: Uses VLAN for security groups and network isolation.
Managed (No VLAN): No network isolation; root access to all VMs.
System Mode: Basic mode, assigning MAC addresses to VMs.
Static Mode: Maps MAC/IP pairs in a static DHCP setup for better IP control.
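Because Eucalyptus exposes EC2- and S3-compatible APIs, standard AWS tooling can usually be pointed at a Eucalyptus endpoint. The sketch below is a hedged example using boto3; the endpoint URL, credentials, and region are placeholders for a hypothetical private deployment.

```python
import boto3

# Eucalyptus exposes EC2/S3-compatible APIs, so AWS tooling can target it.
# The endpoint URL and credentials below are placeholders for your own deployment.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.eucalyptus.example.com:8773/",
    aws_access_key_id="EUCA_ACCESS_KEY",
    aws_secret_access_key="EUCA_SECRET_KEY",
    region_name="eucalyptus",
)

# List registered Eucalyptus Machine Images (EMIs) via the EC2 DescribeImages call.
for image in ec2.describe_images()["Images"]:
    print(image["ImageId"], image.get("Name"))
```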
OpenNebula: OpenNebula is an open-source cloud computing platform that streamlines and simplifies the creation and management of virtualized hybrid, public, and private clouds. It is a straightforward yet feature-rich, flexible solution for building and managing enterprise clouds and data center virtualization.
OpenNebula is an open-source cloud management platform designed for building and managing private,
public, and hybrid clouds. It enables efficient orchestration of virtualized data centers.
Features
Advantages
Use Cases
OpenNebula offers a robust solution for organizations looking to implement cloud infrastructure, providing flexibility, scalability, and ease of use.
Apache CloudStack is an open-source cloud computing software designed to deploy, manage, and orchestrate large networks of virtual machines. It provides Infrastructure-as-a-Service (IaaS) capabilities, enabling organizations to create private and public clouds efficiently.
Key Features
Multi-Hypervisor Support: Compatible with various hypervisors, including KVM, VMware, and
Xen.
Scalability: Designed to scale from small deployments to large-scale infrastructures.
Resource Management: Automates resource provisioning, including compute, storage, and
networking.
Self-Service Portal: Users can manage resources through a web-based interface.
Network as a Service (NaaS): Supports advanced networking features, such as load balancing and
VPNs.
API Access: Offers a comprehensive API for integration and automation.
Architecture
Management Server: Central component that manages the CloudStack environment and orchestrates resources.
Hypervisor Hosts: Physical servers that run the hypervisors and virtual machines.
Storage: Manages storage resources, including block and object storage.
Network Infrastructure: Provides virtual networking capabilities, including public and private
networks.
Advantages
Use Cases
Private Cloud: Organizations can create secure, scalable private cloud environments.
Public Cloud: Service providers can deploy public cloud services with multi-tenancy.
Hybrid Cloud: Combines private and public resources for greater flexibility.
Apache CloudStack is a powerful and flexible solution for building and managing cloud infrastructures, providing a robust feature set and strong community support for organizations looking to leverage cloud technology.
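As a hedged illustration of the API access mentioned above, the sketch below signs a listVirtualMachines request using CloudStack's documented scheme (sorted, lowercased query string, HMAC-SHA1, Base64). The endpoint and keys are placeholders, and the parameter encoding may need adjustment for values containing special characters.

```python
import base64
import hashlib
import hmac
import urllib.parse

import requests

def cloudstack_request(endpoint, api_key, secret_key, command, **params):
    # Assemble the request parameters in the form the CloudStack API expects.
    params.update({"command": command, "apikey": api_key, "response": "json"})
    # String to sign: sorted key=value pairs, URL-encoded, then lowercased.
    to_sign = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}" for k, v in sorted(params.items())
    ).lower()
    digest = hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha1).digest()
    params["signature"] = base64.b64encode(digest).decode()
    return requests.get(endpoint, params=params).json()

# Hypothetical usage; the endpoint and keys are placeholders for a real deployment.
# vms = cloudstack_request("https://cloud.example.com/client/api",
#                          "API_KEY", "SECRET_KEY", "listVirtualMachines")
# print(vms)
```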
AMAZON (AWS)
Amazon Web Services (AWS) is a comprehensive and widely adopted cloud computing platform offered by
Amazon. It provides a broad set of services, including computing power, storage options, and networking
capabilities, enabling businesses to scale and grow efficiently.
Features
Wide Range of Services: Offers over 200 fully featured services, including computing (EC2), storage
(S3, EBS), databases (RDS, DynamoDB), machine learning (SageMaker), and more.
Scalability: Easily scale resources up or down based on demand, ensuring optimal performance.
Global Reach: Data centers located in multiple geographic regions and availability zones worldwide,
ensuring low latency and redundancy.
Security and Compliance: Robust security features, including identity and access management (IAM),
encryption, and compliance certifications (e.g., HIPAA, GDPR).
Pay-as-You-Go Pricing: Flexible pricing model based on usage, allowing organizations to pay only
for the resources they consume.
Services
Amazon EC2 (Elastic Compute Cloud): Provides scalable virtual servers for hosting applications.
Amazon S3 (Simple Storage Service): Object storage service for storing and retrieving any amount of
data.
Amazon RDS (Relational Database Service): Managed relational database service for various
database engines (MySQL, PostgreSQL, etc.).
AWS Lambda: Serverless computing service that allows users to run code without provisioning servers.
Amazon VPC (Virtual Private Cloud): Enables users to create isolated networks within the AWS
cloud.
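A minimal, hedged sketch of provisioning compute with boto3: it launches a single EC2 instance and prints its ID. The AMI ID and region are placeholders; a real image ID valid in your region is required.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance; the AMI ID below is a placeholder for a real image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Clean up when finished (pay-as-you-go billing stops once the instance is terminated).
# ec2.terminate_instances(InstanceIds=[instance_id])
```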
Advantages
Use Cases
Web Hosting: Host websites and web applications with scalable infrastructure.
Data Analytics: Analyze large datasets using services like Amazon Redshift and AWS Glue.
Machine Learning: Build and deploy machine learning models with AWS tools.
Disaster Recovery: Implement backup and recovery solutions using AWS storage services.
AWS is a leading cloud platform that offers a vast array of services and features, enabling organizations to
innovate and scale their operations efficiently. Its flexibility, security, and global reach make it a preferred
choice for businesses of all sizes.
Microsoft Azure is a cloud computing platform and service offered by Microsoft, providing a wide range of
cloud services, including computing, analytics, storage, and networking. It enables businesses to build, deploy,
and manage applications and services through Microsoft-managed data centers.
Features
Comprehensive Service Offerings: Includes services for virtual machines, app hosting, databases, AI,
machine learning, and IoT.
Scalability: Supports automatic scaling to handle varying workloads seamlessly.
Global Reach: Data centers located in numerous regions worldwide, providing low-latency access and
redundancy.
Security and Compliance: Built-in security features and compliance with various standards (e.g., ISO,
HIPAA, GDPR).
Hybrid Cloud Capabilities: Integrates on-premises infrastructure with cloud services through Azure
Arc and Azure Stack.
Services
Azure Virtual Machines: Provides scalable virtual machines for various workloads.
Azure App Service: Platform for building and hosting web applications and APIs.
Azure SQL Database: Managed relational database service for SQL Server.
Azure Functions: Serverless computing service that allows users to run code on demand without
managing servers.
Azure Blob Storage: Object storage service for unstructured data.
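A hedged sketch using the azure-storage-blob Python package to upload an object into Blob Storage; the connection string, container name, and file names are placeholders for a hypothetical storage account.

```python
from azure.storage.blob import BlobServiceClient  # azure-storage-blob package

# The connection string is a placeholder; in practice it comes from the
# storage account's access keys or a managed identity.
service = BlobServiceClient.from_connection_string(
    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
)

container = service.get_container_client("backups")
# container.create_container()  # only needed the first time

# Upload a local file as a blob into the container.
with open("report.pdf", "rb") as data:
    container.upload_blob(name="2024/report.pdf", data=data, overwrite=True)
```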
Advantages
Integration with Microsoft Products: Seamless integration with other Microsoft services (e.g., Office
365, Dynamics 365).
Flexibility: Supports various programming languages, frameworks, and operating systems.
Robust Development Tools: Provides development tools like Visual Studio and Azure DevOps for
streamlined workflows.
Use Cases
Microsoft Azure is a powerful and versatile cloud platform that enables businesses to innovate, scale, and
manage their applications and services efficiently. Its extensive service offerings, security features, and
integration capabilities make it a preferred choice for enterprises looking to leverage cloud technology.
Google Cloud is a suite of cloud computing services offered by Google, providing a range of solutions for
computing, data storage, data analytics, machine learning, and more. It enables businesses to build, deploy,
and scale applications on Google’s infrastructure.
Features
Comprehensive Services: Includes computing (Google Compute Engine), storage (Google Cloud
Storage), big data (BigQuery), and machine learning (AI Platform).
Global Infrastructure: Utilizes Google’s extensive global network of data centers for low-latency and
reliable services.
Security: Offers robust security features, including data encryption, identity management, and
compliance with various standards (e.g., ISO, GDPR).
Serverless Computing: Supports serverless architectures with services like Cloud Functions and
Cloud Run.
Hybrid and Multi-Cloud Solutions: Integrates with on-premises systems and other cloud providers
through Anthos and Google Kubernetes Engine (GKE).
Services
Google Compute Engine: Provides scalable virtual machines for various workloads.
Google Kubernetes Engine (GKE): Managed service for running Kubernetes clusters.
Google Cloud Storage: Object storage service for storing and retrieving data.
BigQuery: Fully managed data warehouse for analytics and data processing.
Google Cloud AI: Tools and services for building machine learning models.
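A hedged sketch of running an analytics query with the google-cloud-bigquery Python client, assuming Application Default Credentials are configured; the query targets one of Google's public sample datasets.

```python
from google.cloud import bigquery  # google-cloud-bigquery package

client = bigquery.Client()  # uses Application Default Credentials

# Query a public sample dataset: the five most common names in the USA names data.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```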
Advantages
Data Analytics Expertise: Leverages Google’s data analytics capabilities for powerful insights.
Strong Machine Learning Tools: Provides advanced AI and machine learning services.
Integration with Google Services: Seamlessly integrates with other Google products, such as
Workspace and Firebase.
Use Cases
Google Cloud offers a robust and flexible cloud platform with a wide array of services tailored for businesses
looking to innovate and scale their operations. Its focus on data analytics, machine learning, and global
infrastructure makes it a strong choice for organizations seeking to leverage cloud technology effectively.