Cloud Computing

The NIST Special Publication 800-145 defines cloud computing as:

"A model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management
effort or service provider interaction."

Additionally, the NIST definition outlines the following key concepts:

1. Essential Characteristics (Five):

o On-demand self-service

o Broad network access

o Resource pooling

o Rapid elasticity

o Measured service

2. Cloud Service Models (Three types):

o Infrastructure as a Service (IaaS)

o Platform as a Service (PaaS)

o Software as a Service (SaaS)

3. Cloud Deployment Models (Four types):

o Public Cloud

o Private Cloud

o Community Cloud

o Hybrid Cloud

Analysis of the Essential Characteristics of Cloud Computing

The essential characteristics of cloud computing, as defined by NIST SP 800-145, are as
follows:

1. On-Demand Self-Service
Definition:
"A consumer can unilaterally provision computing capabilities, such as server
time and network storage, as needed automatically without requiring human
interaction with each service provider."
Analysis:
This characteristic emphasizes the autonomy provided to consumers in managing and
provisioning resources. Users can request and allocate computing resources such as
virtual machines or storage without needing to interact with the service provider,
thereby reducing delays and improving efficiency.

2. Broad Network Access
Definition:
"Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, tablets, laptops, and workstations)."

Analysis:
This ensures cloud services are accessible from diverse devices and platforms via
standard protocols. The emphasis on heterogeneity supports a wide range of devices
and operating systems, enhancing accessibility and usability for end-users.

3. Resource Pooling
Definition:
"The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a
sense of location independence in that the customer generally has no control or
knowledge over the exact location of the provided resources but may be able to
specify location at a higher level of abstraction (e.g., country, state, or
datacenter). Examples of resources include storage, processing, memory, and
network bandwidth."

Analysis:
Resource pooling leverages a multi-tenant architecture to optimize resource utilization.
Consumers benefit from location-independent services where resources are
abstracted, ensuring cost efficiency and scalability. At the same time, providers can
dynamically allocate resources based on demand.

4. Rapid Elasticity
Definition:
"Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand.
To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be appropriated in any quantity at any time."

Analysis:
This characteristic highlights the flexibility of cloud services to handle varying
workloads. The ability to scale up or down dynamically ensures that consumers pay
only for the resources they use, providing cost savings while maintaining service
performance during peak and off-peak periods.
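The scale-outward/scale-inward behaviour described above can be sketched as a simple threshold-based autoscaler. This is a hypothetical illustration of the decision logic only; real providers expose it through managed services such as AWS Auto Scaling, with their own policies and parameters:

```python
def desired_instances(current: int, cpu_utilization: float,
                      scale_out_at: float = 0.75, scale_in_at: float = 0.25,
                      min_instances: int = 1, max_instances: int = 10) -> int:
    """Return the instance count a threshold-based autoscaler would target."""
    if cpu_utilization > scale_out_at:        # demand spike: provision more capacity
        return min(current + 1, max_instances)
    if cpu_utilization < scale_in_at:         # idle: release capacity (pay only for use)
        return max(current - 1, min_instances)
    return current                            # within the target band: hold steady

# Example: a pool of 3 instances at 90% CPU utilization scales outward to 4.
print(desired_instances(3, 0.90))  # 4
```

To the consumer, capacity appears unlimited; in practice the pool bounds (`min_instances`, `max_instances`) and thresholds are policy choices set by the operator.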

5. Measured Service
Definition:
"Cloud systems automatically control and optimize resource use by leveraging a
metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth, and active user accounts).
Resource usage can be monitored, controlled, and reported, providing
transparency for both the provider and consumer of the utilized service."

Analysis:
The metering and monitoring capabilities of cloud systems ensure transparency and
accountability in resource usage. Both providers and consumers gain insights into
consumption patterns, enabling cost management and resource optimization. This
characteristic underpins the pay-as-you-go model, which is a core benefit of cloud
computing.
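The pay-as-you-go model follows directly from metering: usage is sampled per resource type and billed at a granular rate. A minimal sketch of such a bill (the rates here are invented for illustration; real providers publish their own price sheets):

```python
# Hypothetical per-unit rates, in dollars.
RATES = {
    "compute_hours": 0.05,   # per VM-hour
    "storage_gb":    0.02,   # per GB-month
    "egress_gb":     0.09,   # per GB transferred out
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage against per-unit rates: pay only for what you use."""
    return round(sum(RATES[resource] * amount for resource, amount in usage.items()), 2)

# A full month of one VM, 100 GB stored, 50 GB egress:
print(monthly_bill({"compute_hours": 720, "storage_gb": 100, "egress_gb": 50}))  # 42.5
```

The same metering data that drives the bill also provides the monitoring and reporting transparency described above, for both provider and consumer.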

Cloud Cube Model

The Cloud Cube Model is a framework developed by the Jericho Forum to help
organizations understand the different deployment strategies and security
considerations for cloud computing. Its primary focus is protecting and securing the
cloud network: the model guides the selection of a cloud formation suited to secure
collaboration, helping IT managers, organizations, and business leaders provide a
secure and protected network.

It categorizes cloud networks along four dimensions:

▪ Internal/External: defines the physical location of the data, indicating whether
the data resides inside or outside your organization’s boundary.
▪ Proprietary/Open: describes the ownership of the cloud technology and
interfaces, and the degree of interoperability they allow while enabling data
portability between the system and other cloud forms.
▪ Perimeterised/De-perimeterised: indicates whether you are operating inside
your traditional IT perimeter (and mindset) or outside it.
▪ Insourced/Outsourced: differentiates whether the cloud infrastructure is
managed by the organization itself (insourced) or by a third-party provider
(outsourced).
Cloud Reference Models

With the increasing popularity of cloud computing, the definitions of various cloud
computing architectures have broadened. To achieve the potential of cloud computing,
there is a need to have a standard cloud reference model for the software architects,
software engineers, security experts and businesses, since it provides a fundamental
reference point for the development of cloud computing. The Cloud Reference Model
brings order to this cloud landscape.

The Cloud Reference Model is a framework used by customers and vendors to define
best practices for cloud computing. The reference model defines five main actors: the
cloud consumer, cloud provider, cloud auditor, cloud broker, and cloud carrier.

Cloud Service Models:

The service models are depicted using "L-shaped" horizontal and vertical bars, instead
of the traditional "three-layer cake" stack. This representation highlights the flexibility
and modularity of cloud service models. The reason for this depiction is that while
cloud services can be dependent on one another within the service stack, it is also
possible for these services to function independently and interact directly with the
resource abstraction and control layer. This flexibility allows for different architectures
and configurations based on the requirements of each service and the specific layer of
the cloud infrastructure.

1. Infrastructure as a Service (IaaS) provides virtualized computing resources over
the internet. In IaaS, virtual private server (VPS) instances are created by
partitioning physical servers using hypervisor technologies. A hypervisor allows
multiple virtual machines to run on a single physical server, each with its own OS
and resources. Partitioning ensures that workloads are isolated and securely
separated within virtualized environments, providing flexibility and resource
optimization.
IaaS providers, such as Amazon Web Services (AWS), offer virtual server
instances and storage, along with application programming interfaces (APIs) that
enable users to migrate workloads to virtual machines (VMs). Users are
allocated storage capacity and can start, stop, access, and configure VMs as
needed. IaaS allows customization of instances to meet various workload
requirements, offering sizes such as small, medium, large, extra-large, or
memory- and compute-optimized. This model is the closest approximation to a
remote data center, offering flexibility and control for business users.
Pods: When the workload of a VM reaches its capacity limit, additional VMs (or
copies of the existing VM) are created to handle more users. Pods are managed
by a Cloud Control System (CCS).
Sizing limitations for pods must be accounted for when building a large cloud-based
application.
Pod Aggregation: Multiple pods are aggregated into pools within an Availability
Zone (AZ) in an IaaS environment. An Availability Zone is a distinct location
within a cloud provider's infrastructure that houses multiple data centers. The
aggregation of pods into pools ensures that resources can be scaled and
managed more effectively within an AZ.
High Availability: The aggregation across zones and regions ensures that cloud
computing systems are highly available. When a failure occurs, it typically
happens on a pod-by-pod or zone-by-zone basis, and failover mechanisms
between zones can maintain high availability and business continuity.
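The pod-replication behaviour described above, where the Cloud Control System spawns copies of a VM once the pod reaches its capacity limit, can be sketched as follows. The class and the capacity figure are purely illustrative, not an actual CCS API:

```python
class CloudControlSystem:
    """Toy CCS: grows a pod's VM pool when user load exceeds capacity."""

    def __init__(self, users_per_vm: int = 100):
        self.users_per_vm = users_per_vm
        self.vms = 1  # the pod starts with a single VM

    def handle_load(self, active_users: int) -> int:
        """Create copies of the existing VM until the pod can serve the load."""
        needed = -(-active_users // self.users_per_vm)  # ceiling division
        if needed > self.vms:
            self.vms = needed  # CCS provisions additional VM copies
        return self.vms

ccs = CloudControlSystem()
print(ccs.handle_load(250))  # 250 users / 100 per VM -> 3 VMs
```

A real CCS would also handle failover across pods and availability zones; this sketch shows only the capacity-driven replication step.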
Silos:
Information Silo: An information silo in cloud computing occurs when user
clouds are isolated, and their management system cannot interoperate with
other private clouds. These silos can often be seen in Platform-as-a-Service
(PaaS) offerings like Force.com or QuickBase, which create isolated
ecosystems within the cloud infrastructure.
Private Virtual Networks and Silos: When private virtual networks are set up
within an IaaS framework (such as creating private subnets), it often results in
silos. These silos limit interoperability between different clouds, leading to a
more isolated environment. This isolation can enhance security and protection
but at the cost of flexibility.
Vendor Lock-in: Silos often lead to vendor lock-in, where organizations become
dependent on a particular cloud provider's ecosystem and tools. This can limit
the ability to switch to a different provider or integrate with external systems.

Kubernetes Pods:
A Pod is the smallest execution unit in Kubernetes. It encapsulates one or more
containers (like Docker containers) that share the same network namespace,
storage volumes, and configuration data. Pods are ephemeral by nature,
meaning they are short-lived and can be automatically recreated if they fail or if
the node they run on fails. Pods are the fundamental building blocks for
deploying and managing applications in a Kubernetes cluster.

Key Characteristics of Pods:


1. Single or Multiple Containers: While most pods contain a single container, they
can also hold multiple containers that need to run together and share resources.
2. Networking: Each pod is assigned a unique IP address which allows it to
communicate with other pods and containers.
3. Persistent Storage: Pods can have persistent storage volumes, which means
that even if the pod is destroyed or recreated, the storage remains available.
4. Configuration Data: Pods carry the configuration data necessary for running the
container(s) inside them, such as environment variables and secrets.

What Does a Pod Do?


• Represents Processes: Pods represent the processes that are running on a
Kubernetes cluster. They allow Kubernetes to monitor and manage the health of
those processes.
• Sharing Resources: Containers inside a pod share the same network
namespace, meaning they can communicate with each other using localhost.
• Persistent Storage: Pods can mount storage volumes that persist across pod
restarts.
• Simplified Communication: Containers within a pod can easily communicate
and share data as they are all within the same network namespace.

Benefits of Pods:
1. Simplified Communication: When pods contain multiple containers,
communication and data sharing are simplified as all containers in a pod share
the same network namespace and can communicate via localhost.
2. Scalability: Pods make it easy to scale applications. Kubernetes can
automatically replicate pods and scale them up or down based on demand.
3. Efficient Resource Sharing: Containers within a pod share the same resources,
making it efficient for tightly coupled applications to run together.

How Do Pods Work?


• Pod Creation: Pods are created and managed by Kubernetes controllers, which
handle deployment, replication, and health monitoring. Controllers
automatically create replacement pods if a pod fails or is deleted.
• Scheduling: Pods are scheduled to run on nodes in the Kubernetes cluster
(either virtual machines or physical servers). If a pod contains multiple
containers, they are scheduled together on the same node.
• Lifecycle Management: Controllers manage the lifecycle of pods, including
handling pod failures, replication, and scaling.
• Init Containers: A pod can have init containers, which run before the main
application containers to set up the environment or perform tasks like database
migrations.

Pod Communication:
• Internal Communication: Containers within a pod can communicate with each
other using localhost, as they share the same network namespace.
• External Communication: Pods can communicate with each other across the
cluster using their unique IP addresses. Kubernetes automatically assigns a
cluster-private IP address to each pod, which allows it to interact with other
pods without needing to map ports or explicitly link them.
• Exposing Ports: Pods expose their internal ports to communicate with the
outside world or other services within the cluster.
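The properties above can be seen in a minimal Pod manifest. This sketch runs two containers in one pod; because they share a network namespace, the sidecar can reach nginx on localhost port 80. The pod name, image tags, and the sidecar command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25     # serves HTTP on port 80
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36
      # Shared network namespace: nginx is reachable via localhost.
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```

Applying this with `kubectl apply -f pod.yaml` schedules both containers together on the same node; in practice a controller such as a Deployment would manage the pod so it is recreated on failure.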

2. Platform as a Service (PaaS) delivers a platform for application development and
deployment. In this model, cloud providers host development tools on their
infrastructure, which users access via APIs, web portals, or gateway software.
PaaS simplifies software development by eliminating the need to manage the
underlying infrastructure.
Many PaaS providers also host applications after development. This model is
widely used for general software development and rapid deployment of
applications. Notable examples include Salesforce Lightning Platform, AWS
Elastic Beanstalk, and Google App Engine.

3. Software as a Service (SaaS) is a distribution model for delivering software
applications over the internet, often referred to as web services. Users can
access SaaS applications from any device with internet connectivity, without the
need for installation or local infrastructure management. SaaS provides access
to application software and associated databases. Commonly used for
productivity, communication, and other essential services, SaaS applications
such as Microsoft 365 and Google Workspace have become integral to modern
workflows.

Cloud Deployment Models

Cloud computing deployment models define how cloud services are delivered and
managed. The major deployment models include private, public, hybrid, multi-cloud,
and community cloud. Each offers unique features tailored to specific business needs.

1. Private Cloud: In a private cloud, the infrastructure is dedicated to the exclusive
use of a single organization, comprising multiple cloud service consumers (CSCs)
such as business units. The private cloud may be owned, managed, and operated by
the organization, a third party, or a combination of both. It can be located on or off-
premises, allowing for flexibility in terms of where the cloud infrastructure is housed
and managed.

2. Community Cloud: A community cloud is a shared cloud infrastructure
supporting a specific community of organizations with common interests, missions,
or compliance requirements. It can be managed by the participating organizations or
a third-party vendor and may be hosted on-premises or off-premises. This model
promotes collaboration among organizations while addressing shared concerns like
security and compliance.
3. Public Cloud: In a public cloud, the cloud infrastructure is made available to the
general public. It is typically owned, managed, and operated by a business,
academic institution, government organization, or a combination of these entities.
The public cloud is accessible over the internet, and services are provided to a
diverse set of clients. Public clouds exist on the premises of the cloud provider and
are intended for open use by the public.
4. Hybrid Cloud: A hybrid cloud combines two or more distinct cloud
infrastructures—private, community, or public—that remain independent entities.
These clouds are interconnected using standardized or proprietary technology,
which enables data and application portability. Hybrid clouds allow for flexibility in
managing workloads, such as utilizing a private cloud for sensitive applications
while using public cloud resources for less critical tasks or to handle load balancing.

5. Multi-Cloud: A multi-cloud strategy involves using multiple IaaS providers to
enable application portability and concurrent operations across different cloud
platforms. Organizations adopt this model to mitigate risks associated with cloud
service outages and to benefit from competitive pricing among providers. However,
multi-cloud deployment poses challenges due to differences in providers’ services
and APIs. Industry efforts, such as the Open Cloud Computing Interface, aim to
standardize services and APIs, making multi-cloud adoption more accessible.

Characteristics of Cloud Computing

• Self-Service Provisioning:
End users can independently provision compute resources, such as server time
and network storage, for almost any workload. This eliminates the traditional
need for IT administrators to manage and allocate these resources.
• Elasticity:
Cloud computing allows organizations to scale resources up or down according
to demand. This flexibility removes the need for significant upfront investments
in local infrastructure, ensuring cost efficiency.
• Pay-Per-Use:
Cloud resources are metered at a granular level, enabling users to pay only for
the resources they consume.
• Workload Resilience:
Cloud service providers (CSPs) implement redundancy across multiple regions
to ensure resilient storage and consistent availability of workloads, minimizing
downtime.
• Migration Flexibility:
Organizations can migrate workloads to and from the cloud or between different
cloud platforms. This capability enables cost savings and provides access to
emerging services and technologies.
• Broad Network Access:
Cloud computing allows users to access data and applications from anywhere
using any device with an internet connection, enhancing flexibility and
collaboration.
• Multi-Tenancy and Resource Pooling:
Multi-tenancy enables multiple customers to share the same physical
infrastructure while maintaining privacy and security. Resource pooling allows
providers to service numerous customers simultaneously, with large and flexible
resource pools to meet diverse demands.

Advantages of Cloud Computing


• Cost Management:
Cloud infrastructure reduces capital expenditures by eliminating the need to
purchase and maintain hardware, facilities, utilities, and large data centers.
Companies also save on staffing costs, as cloud providers manage data center
operations. Additionally, the high reliability of cloud services minimizes
downtime, reducing associated costs.
• Data and Workload Mobility:
Cloud storage enables users to access data from anywhere using any internet-
connected device. This eliminates the need for physical storage devices like USB
drives or external hard drives. Remote employees can stay connected and
productive, while vendors ensure automatic updates and upgrades, saving time
and effort.
• Business Continuity and Disaster Recovery (BCDR):
Storing data in the cloud ensures accessibility even in emergencies such as
natural disasters or power outages. Cloud services facilitate quick data recovery,
enhancing BCDR strategies and ensuring workload and data availability despite
disruptions.

Disadvantages of Cloud Computing

1. Cloud Security:
Security is often cited as the most critical challenge in cloud computing.
Organizations face risks such as data breaches, hacking of APIs and interfaces,
compromised credentials, and authentication issues. Moreover, there is often a
lack of transparency about how and where sensitive data is managed by cloud
providers. Effective security requires meticulous attention to cloud
configurations, business policies, and best practices.
2. Cost Unpredictability:
Pay-as-you-go subscription models, coupled with the need to scale resources
for fluctuating workloads, can make it difficult to predict final costs. Cloud
services often interdepend, with one service utilizing others, leading to complex
and sometimes unexpected billing structures. This unpredictability can result in
unforeseen expenses.
3. Lack of Capability and Expertise:
The rapid evolution of cloud technologies has created a skills gap. Organizations
often struggle to find employees with the expertise required to design, deploy,
and manage cloud-based workloads effectively. This lack of capability can
hinder cloud adoption and innovation.
4. IT Governance:
Cloud computing's emphasis on self-service capabilities can complicate IT
governance. Without centralized control over provisioning, deprovisioning, and
infrastructure management, organizations may struggle to manage risks, ensure
compliance, and maintain data quality.
5. Compliance with Industry Regulations:
Moving data to the cloud can create challenges in adhering to industry-specific
regulations. Organizations must know where their data is stored to maintain
compliance and proper governance, which can be difficult when relying on third-
party cloud providers.
6. Management of Multiple Clouds:
Multi-cloud deployments, while advantageous in some cases, often exacerbate
the challenges of managing diverse cloud environments. Each cloud platform
has unique features, interfaces, and requirements, complicating unified
management efforts.
7. Cloud Performance:
Performance issues, such as latency, are largely beyond an organization's
control when relying on cloud services. Network outages and provider
downtimes can disrupt business operations if contingency plans are not in
place.
8. Building a Private Cloud:
Architecting, implementing, and managing private cloud infrastructures can be a
complex and resource-intensive process. This challenge is magnified when
private clouds are integrated into hybrid cloud environments.
9. Cloud Migration:
Migrating applications and data to the cloud is often more complicated and
costly than initially anticipated. Migration projects frequently exceed budget and
timelines. Additionally, the process of repatriating workloads and data back to
on-premises infrastructure can create unforeseen challenges related to cost and
performance.
10. Vendor Lock-In:
Switching between cloud providers can result in significant difficulties, including
technical incompatibilities, legal and regulatory constraints, and substantial
costs for data migration. This vendor lock-in can limit flexibility and increase
long-term dependency on specific providers.

Cloud Computing Examples and Use Cases

Cloud computing has evolved to offer a diverse range of capabilities and solutions that
cater to a wide variety of business needs. Below are examples of cloud-based services
and their use cases:

Examples of Cloud Computing Capabilities


• Google Docs and Microsoft 365:
These cloud-based productivity tools allow users to access documents,
spreadsheets, and presentations from any device, at any time, and from any
location. This facilitates seamless collaboration and boosts productivity.
• Email, Calendar, Skype, WhatsApp:
These communication tools rely on cloud computing to provide users with
access to personal data remotely. Users can manage their emails, schedules,
and messages from any device, regardless of their location.
• Zoom:
Zoom is a cloud-based platform for video and audio conferencing that allows
users to schedule, host, and record meetings, with recordings saved in the
cloud. This enables access to meeting content from anywhere, at any time.
Microsoft Teams is another common platform with similar cloud-based
functionality.
• AWS Lambda:
AWS Lambda allows developers to run code for applications or back-end
services without the need to manage or provision servers. Its pay-as-you-go
model automatically scales to accommodate changes in data usage and storage
requirements. Other major cloud providers, such as Google Cloud and Azure,
offer similar serverless computing capabilities, including Google Cloud
Functions and Azure Functions.
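A Lambda function is just a handler that the platform invokes; the developer supplies no server. A minimal Python handler in the shape AWS Lambda expects (the `name` field in the event payload is an assumption for illustration, not a Lambda requirement):

```python
import json

def lambda_handler(event, context):
    """Entry point the Lambda runtime invokes; servers and scaling are the provider's job.

    `event` carries the request payload; `context` carries runtime metadata.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing (in production, Lambda calls the handler for you):
print(lambda_handler({"name": "cloud"}, None))
```

Billing follows the pay-as-you-go model described above: charges accrue per invocation and per unit of execution time, with no cost while the function is idle.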

Use Cases for Cloud Computing

• Testing and Development:
Cloud computing offers ready-made, customizable environments that can
expedite development timelines and enhance project milestones by providing
on-demand resources.
• Production Workload Hosting:
Organizations increasingly host live production workloads on the public cloud.
This requires careful planning and the design of cloud resources that ensure
adequate operational performance and resilience for mission-critical workloads.
• Big Data Analytics:
Cloud storage and computing resources provide scalable and flexible solutions
for big data projects. Cloud providers offer specialized services, such as Amazon
EMR and Google Cloud Dataproc, to process and analyze vast amounts of data.
• IaaS (Infrastructure as a Service):
IaaS enables companies to host IT infrastructures, access computing, storage,
and networking resources in a scalable and cost-effective manner. The pay-as-
you-go model of IaaS helps companies reduce upfront IT costs.
• PaaS (Platform as a Service):
PaaS offers an environment for companies to develop, run, and manage
applications with greater ease and flexibility compared to maintaining on-
premises platforms. It enhances development speed and facilitates higher-level
programming without managing the underlying infrastructure.
• Hybrid Cloud:
Organizations can leverage a combination of private and public clouds to
optimize cost and efficiency based on specific workloads and requirements. This
model offers flexibility and customization according to different business needs.
• Multi-Cloud:
Multi-cloud strategies involve using services from multiple cloud providers to
meet the specific needs of different workloads. This approach helps
organizations select the best cloud service for each requirement and enhances
reliability and cost-effectiveness.
• Storage:
Cloud computing enables the storage of large volumes of data that can be easily
accessed and managed. Users pay only for the storage capacity they use,
optimizing costs and resource management.
• Disaster Recovery (DR):
Cloud-based disaster recovery solutions are more cost-effective and faster than
traditional on-premises options. These solutions ensure that organizations can
recover from disruptions quickly and efficiently, often with minimal downtime.
• Data Backup:
Cloud-based data backup services provide a simple, automated solution for
securing important data. Users do not need to worry about storage availability or
capacity, as cloud providers manage data security and storage operations.

Cloud computing vs. Traditional Web Hosting

Cloud computing and traditional web hosting are often confused, but they differ in
several key aspects.

Cloud services offer three characteristics that set them apart from traditional web
hosting:

1. On-demand computing power: Cloud computing provides users with access to
large amounts of computing resources that are typically sold by the minute or
hour. This allows for flexible scaling as needed.
2. Elasticity: Cloud services are elastic, meaning users can scale their services up
or down as required, giving them more control over their resources.

3. Fully managed service: Unlike traditional hosting, where users may need to
manage and configure hardware and software, cloud services are fully managed
by the provider, requiring only an internet connection and a personal computer
from the user.

The cloud service market is diverse, with many providers offering a range of services.
The three largest public cloud service providers (CSPs) dominating the industry are:

• Amazon Web Services (AWS)

• Google Cloud Platform (GCP)

• Microsoft Azure

Other notable CSPs include:

• Apple

• Citrix

• IBM

• Salesforce

• Alibaba

• Oracle

• VMware

• SAP

• Joyent

• Rackspace

When selecting a cloud service provider, businesses should consider several factors:

• The range of services provided, such as big data analytics or AI capabilities, to
ensure alignment with business needs.

• Pricing models, which can vary across providers. Although most cloud services
follow a pay-per-use model, pricing structures may differ.

• The physical location of servers, particularly if sensitive data will be stored, as
compliance with data privacy regulations may depend on this.

• Reliability and security should be prioritized. A provider’s service-level
agreement (SLA) should guarantee uptime that meets the business’s operational
requirements. It’s also important to review the security technologies and
configurations used to protect sensitive data.

Cloud Computing Security

Security remains a primary concern for businesses contemplating cloud adoption,
especially with public cloud services. In a public cloud, the underlying hardware
infrastructure is shared among multiple customers, creating a multi-tenant
environment. This necessitates strong isolation between logical compute resources
to ensure data privacy and security. Data in the cloud should be stored in encrypted
form, and to restrict clients from accessing shared data directly, proxy and
brokerage services should be employed.

Security Boundaries in Cloud Computing

The Cloud Security Alliance (CSA) Stack Model defines the security responsibilities
between the cloud service provider and the customer across different service models.
This model helps clarify where the provider's responsibilities end and where the
customer's responsibilities begin.
Key Points of the CSA Model:

• Service Models:

o IaaS (Infrastructure as a Service): Provides the most basic service level,
offering the infrastructure (like virtual machines, networks, and storage).

o PaaS (Platform as a Service): Builds on IaaS, adding a platform
development environment, including tools for developers.

o SaaS (Software as a Service): Provides the operating environment,
where users interact with complete applications hosted by the provider.

• Security Inheritance:

o As you move upward from IaaS to SaaS, each service model inherits the
security concerns and functionalities of the model beneath it.

o IaaS has the least integrated security, and SaaS provides the most
comprehensive integrated security.

• Security Boundaries:

o The CSA model defines where the provider’s security responsibilities end,
and the customer’s responsibilities begin.

o Security mechanisms below the boundary must be implemented and
maintained by the customer, particularly in IaaS, where customers are
responsible for securing their virtual machines, networks, and data.

o In SaaS, the cloud provider typically handles most security functions, but
the customer is still responsible for securing their data and user access.

• Cloud Deployment Models: The security needs also vary depending on the type
of cloud:

o Private Cloud: The cloud infrastructure is used by a single organization,
offering more control over security.

o Public Cloud: Resources are shared among multiple customers, so


security concerns may be higher.

o Hybrid Cloud: Combines both public and private clouds, allowing for
flexibility but also introducing additional security challenges.

o Community Cloud: Shared by several organizations with similar security


or regulatory needs.
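The shared-responsibility split described above can be sketched as a simple lookup table. This is an illustrative sketch only (the layer names and assignments are a simplified assumption, not an official CSA artifact):

```python
# Illustrative mapping of security responsibility across service models.
# Layers and assignments are simplified assumptions for teaching purposes.
RESPONSIBILITY = {
    # layer: {service model: responsible party}
    "data":           {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "application":    {"IaaS": "customer", "PaaS": "customer", "SaaS": "provider"},
    "runtime":        {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "os":             {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "virtualization": {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "hardware":       {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
}

def responsible_party(layer: str, model: str) -> str:
    """Return who must secure a given layer under a given service model."""
    return RESPONSIBILITY[layer][model]
```

Note how the "customer" rows shrink as you move from IaaS to SaaS, matching the security-inheritance point above: in IaaS the customer secures everything above virtualization, while in SaaS only data and user access remain the customer's job.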
Brokered Cloud Storage Access

Since data stored in the cloud can be accessed from anywhere, we must have a
mechanism to isolate data and protect it from the client's direct access. Brokered Cloud
Storage Access is an approach for isolating storage in the cloud. In this approach, two
services are created:

• A broker with full access to storage but no access to the client.

• A proxy with no access to storage but access to both the client and the broker.

When the client issues a request to access data:

• The client data request goes to the external service interface of the proxy.
• The proxy forwards the request to the broker.
• The broker requests the data from the cloud storage system.
• The cloud storage system returns the data to the broker.
• The broker returns the data to the proxy.
• Finally, the proxy sends the data to the client.
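The steps above can be sketched in code. This is a minimal illustration of the trust boundaries only; the class and method names are made up, not part of any real cloud SDK:

```python
# Minimal sketch of brokered cloud storage access (illustrative names only).

class CloudStorage:
    """Stands in for the cloud storage system; only the broker may call it."""
    def __init__(self):
        self._objects = {"report.txt": b"encrypted-bytes"}

    def read(self, key):
        return self._objects[key]

class Broker:
    """Full access to storage, but never exposed directly to clients."""
    def __init__(self, storage: CloudStorage):
        self._storage = storage

    def fetch(self, key):
        return self._storage.read(key)

class Proxy:
    """Client-facing service: talks to the broker, never to storage."""
    def __init__(self, broker: Broker):
        self._broker = broker

    def handle_request(self, key):
        # Request flows client -> proxy -> broker -> storage, and back.
        return self._broker.fetch(key)

# The client only ever sees the proxy's external interface:
proxy = Proxy(Broker(CloudStorage()))
data = proxy.handle_request("report.txt")
```

The point of the design is that neither service alone can leak data to an attacker: the proxy holds no storage credentials, and the broker accepts requests only from the proxy.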

Cloud Broker Services

Cloud brokers provide intermediary services that facilitate the integration and
management of cloud resources. Their services fall into three categories:
1. Aggregation: A cloud broker combines and integrates multiple services into one or
more new services, offering a unified solution to the customer.

2. Arbitrage: This is similar to aggregation but involves dynamically selecting and
combining services based on criteria like price, performance, or availability. The
services being aggregated are not fixed but are chosen on demand to meet specific
requirements.

3. Intermediation: Cloud brokers provide added value to cloud consumers by
improving service capabilities. They can manage access to cloud services, offer identity
management, enhance security, generate performance reports, and more, thus
improving the overall cloud experience.

Benefits of Using a Cloud Broker:

o Cloud Interoperability: Facilitates the integration between different cloud services and platforms, enabling smooth interactions between various cloud environments.

o Cloud Portability: Helps businesses move applications or workloads between different cloud providers with minimal disruption, supporting flexibility and adaptability.

o Business Continuity: Reduces dependency on a single cloud provider, helping ensure continued operations if one provider faces issues.

o Cost Efficiency: Brokers can help businesses optimize costs by selecting the most cost-effective services and solutions based on real-time needs and performance.

Encryption

Encryption helps to protect data from being compromised. It protects data that is
being transferred as well as data stored in the cloud. Although encryption helps to
protect data from unauthorized access, it does not prevent data loss.

Exploring the Cloud Architecture

Cloud architecture refers to the structure and components that make up a cloud
computing system. Its architecture is designed for transparency, scalability, security,
and intelligent monitoring, and is composed of two main layers:

1. Frontend (Client Side)

2. Backend (Cloud Side)

1. Frontend (Client Side)

The frontend represents the user's interaction with the cloud. It includes:

• Client Infrastructure:

o Applications and user interfaces (e.g., web browsers) used to access cloud services.

o Provides a Graphical User Interface (GUI) for seamless interaction.

2. Backend (Cloud Side)

The backend encompasses the cloud infrastructure managed by the service provider. It
includes:

1. Application: The software or platform providing services as per client requirements.

2. Service: Three primary service models:

▪ SaaS (Software as a Service)

▪ PaaS (Platform as a Service)

▪ IaaS (Infrastructure as a Service)

3. Runtime Cloud: Execution platform/environment for virtual machines.

4. Storage: Scalable and flexible storage solutions, ensuring data management.

5. Infrastructure: Hardware and software components, including servers, storage devices, networks, and virtualization tools.

6. Management: Handles backend components such as applications, services, storage, and infrastructure, along with security mechanisms.

7. Security: Implements protocols to secure resources, systems, and user data.

8. Internet: Acts as the communication medium between the frontend and backend, enabling interaction.
Composability
Composability refers to the ability to combine various cloud services and components
to create customized solutions. A cloud-based application has the property of being
built from a collection of components; this property is known as composability. It allows
businesses and developers to mix and match different services and resources from the
cloud to meet specific needs, ensuring flexibility in application development and
deployment.

• Characteristics of Composable Components:

o Modular: Independent, self-contained units that are reusable, replaceable, and cooperative.

o Stateless: Each transaction operates independently of others.

• Why Composable Infrastructure?

o Eliminates the need for workload-specific environments and offers dynamic resource allocation for any application.

o Optimizes application performance and reduces resource underutilization and overprovisioning.

o Provides an agile, cost-effective data center that can be as easily deployed as public cloud resources.

o Supports both physical and virtual workloads, unlike converged or hyperconverged infrastructures, ensuring seamless integration across workloads.

Infrastructure (IaaS)
Cloud infrastructure relies on virtual machine (VM) technology, allowing a single
physical machine to run multiple VMs. The software responsible for managing these
VMs is known as the Virtual Machine Monitor (VMM) or hypervisor.
Key Composable Infrastructure Terminology

• Bare Metal: Refers to hardware without any software or operating system layer.
"Bare metal applications" run directly on hardware, and "bare metal servers" are
traditional, single-tenant, non-virtualized servers.

• Container: A lightweight runtime environment that provides applications with the files, variables, and libraries needed to run. Containers share the host's operating system, rather than providing their own, offering enhanced portability compared to VMs.

• Fluid Resource Pools: Resources like compute, storage, and networking that are
separated from physical infrastructure and can be independently allocated or
combined as needed.

• Hypervisor: A layer (software, firmware, or hardware) that abstracts physical resources to create virtual machines, enabling the execution of operating systems and applications on those VMs.

• Infrastructure as Code: A method of provisioning and managing computing resources using code, eliminating the need for manual hardware configuration when deploying or updating applications.

• IT Silo: Refers to infrastructure dedicated to a single application or function, making it difficult to scale or manage across different workloads.

• Stateless Infrastructure: In composable infrastructure, applications are managed by software and are not tied to specific hardware. This allows applications to be moved and scaled as needed without dependencies on particular machines.
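The "Infrastructure as Code" idea above can be illustrated with a toy reconciliation loop: desired state is declared as data, and an apply step brings the running environment in line with it. All names here are invented for illustration; real tools (e.g., Terraform or Ansible) work on the same declare-then-reconcile principle but with far richer resource models:

```python
# Toy infrastructure-as-code sketch: declared state + idempotent apply.
DESIRED_STATE = {
    "web-01": {"cpus": 2, "memory_gb": 4},
    "db-01":  {"cpus": 4, "memory_gb": 16},
}

def apply(desired: dict, running: dict) -> dict:
    """Idempotently reconcile the running servers with the declared state."""
    for name, spec in desired.items():
        if running.get(name) != spec:
            running[name] = dict(spec)   # create or resize the server
    for name in list(running):
        if name not in desired:
            del running[name]            # decommission anything undeclared
    return running

running = apply(DESIRED_STATE, {})        # first run provisions everything
running = apply(DESIRED_STATE, running)   # second run changes nothing
```

Because apply is idempotent, re-running it is always safe, which is what lets infrastructure definitions live in version control and be deployed like application code.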

Platforms (PaaS)
A cloud platform provides the necessary hardware and software to build custom web
apps or services that leverage the platform’s capabilities. It encompasses the full
software stack, except for the presentation layer, enabling developers to focus on
building applications without managing underlying infrastructure.

Cloud platforms often offer tools for collaboration, testing, program performance
monitoring, versioning, and integration with databases and web services.

Major Cloud Platforms:

• Salesforce.com's Force.com platform: A platform for building and deploying apps integrated with Salesforce.

• Windows Azure (now Microsoft Azure): Microsoft's cloud platform that provides computing, analytics, and storage solutions.

• Google Apps and Google App Engine: Google's platform for building and
hosting applications in the cloud.

Virtual Appliances
A virtual appliance is a software solution that integrates an application with its operating
system, packaged for use in a virtualized environment. Unlike a complete virtual
machine platform, a virtual appliance contains only the software stack required to run
specific applications, including the application and a minimal operating system. Virtual
appliances are easier to manage and update because they are bundled as a single unit,
making them simpler to deploy and maintain in virtualized environments. Examples
include Linux-based solutions like Ubuntu JeOS.

Just Enough Operating System (JeOS): Virtual appliances often use a slimmed-down
OS tailored specifically to run the application. This makes them more efficient, stable,
and secure than general-purpose OSes, with a smaller footprint. JeOS only includes the
essential elements required for the application, ensuring optimal performance.

Communication Protocols

Cloud computing requires standard protocols through which the different layers of
hardware, software, and clients can communicate with one another. Many of these are
standard Internet protocols that have been developed over the years to manage
interprocess communication.

Common communication protocols in cloud computing include:

• SOAP (Simple Object Access Protocol): A protocol for exchanging structured information in web services.

• XML-RPC (XML Remote Procedure Call): A protocol that uses XML to encode
messages for remote procedure calls.

• CORBA (Common Object Request Broker Architecture): A standard for enabling object interactions across networks, using an object request broker (ORB).

• APP (Atom Publishing Protocol): A protocol used for creating and updating
information in the Atom syndication format.

• REST (Representational State Transfer): An architectural style that uses a global identifier (URI) to access resources via HTTP. REST is the most widely used approach in cloud applications because of its simplicity and scalability.
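The REST idea — every resource has a URI, and the standard HTTP verbs operate on it — can be shown with a tiny in-memory router. This is an illustration of the style only, not a real web framework; the resource paths are invented:

```python
# Minimal REST-style dispatcher: URIs identify resources, verbs act on them.
resources = {"/vms/1": {"name": "web-01", "state": "running"}}

def handle(method: str, uri: str, body=None):
    """Dispatch an HTTP-style request against the in-memory resource store."""
    if method == "GET":
        return resources.get(uri, "404 Not Found")
    if method == "PUT":
        resources[uri] = body          # create or replace the resource
        return "200 OK"
    if method == "DELETE":
        resources.pop(uri, None)
        return "204 No Content"
    return "405 Method Not Allowed"

handle("PUT", "/vms/2", {"name": "db-01", "state": "stopped"})
vm = handle("GET", "/vms/2")
```

Because each request carries everything needed to act on one URI, the interaction is stateless — the property that makes REST easy to scale behind load balancers.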

Connecting to the Cloud by Clients

Clients can connect to a cloud service using various devices and methods. Below are
the most common methods and security techniques to ensure safe connections.

Common Client Access Methods

1. Web Browser:

o Uses standard protocols like HTTPS for secure communication.

2. Proprietary Applications:

o Specialized software designed for specific services, compatible with PCs, servers, mobile devices, or cell phones.

Cloud applications often use specialized clients, which are designed to securely
connect and interact with cloud resources. Two notable examples of such clients are:

• JoliCloud: A cloud-based operating system designed to optimize web-based applications and services. It offers easy access to cloud storage and online tools.

• Google Chrome OS: A lightweight, cloud-centric operating system that primarily uses the internet and cloud services for computing tasks, offering secure access to web applications.
Secure Connection Methods

1. Secure Protocols:

o Data is transferred securely using encryption protocols such as:

▪ SSL (HTTPS): Secure web communication.

▪ FTPS: Secure file transfers.

▪ IPsec: Secures IP communication.

▪ SSH: Secure shell for encrypted command-line access.

2. Virtual Private Network (VPN):

o Establishes a virtual tunnel for secure communication.

o Examples:

▪ Microsoft RDP (Remote Desktop Protocol): Enables secure remote desktop access.

▪ Citrix ICA (Independent Computing Architecture): Used for remote application delivery.

3. Data Encryption:

o Ensures intercepted or sniffed data remains unintelligible to unauthorized parties.

Virtual Appliances vs. Virtual Machines

• Virtual Machines (VMs): A virtual machine is a software-based computer that provides virtual access to hardware resources like CPU, RAM, storage, and networking. VMs can encapsulate the operating system (OS), applications, and data inside a single file. However, users must configure the virtual hardware, guest operating system, and applications before deployment.

• Virtual Appliances: Virtual appliances also include an application, OS, and virtual hardware but are delivered pre-configured. Unlike VMs, virtual appliances simplify deployment by eliminating the need for manual configuration of the OS and virtual machine, offering a ready-to-use solution for customers.
Virtualization in Cloud Computing

Virtualization is the process of creating a virtual version of something (like a server,
storage device, or network) to abstract the underlying physical resources. Virtualization
is a foundational technology in cloud computing because it enables resource pooling,
multi-tenancy, scalability, and flexibility. Virtualization allows multiple virtual
instances of resources to run on the same physical machine, making it more efficient
and cost-effective.

Some Terminologies Associated with Virtualization

1. Hypervisor: The software layer running on physical hardware, responsible for executing or emulating virtual processes. It is also called a virtual machine manager (VMM). A cloud hypervisor enables sharing a cloud provider's physical resources across multiple virtual machines (VMs).

2. Virtual Machine (VM): A virtual computer that operates under a hypervisor.

3. Container: A lightweight virtualized environment that is a subset of the same OS instance or hypervisor, consisting of processes running with corresponding namespaces or identifiers.

4. Virtualization Software: Software that supports the deployment of virtualization on computers, which can be part of an application package or an operating system version.

5. Virtual Network: A logically separated network within servers, which can span
across multiple servers.

Types of Virtualization and Their Advantages

Virtualization techniques create isolated partitions on a single physical server. These
techniques vary in abstraction levels while aiming for similar goals. The most popular
virtualization techniques are:

1. Full Virtualization

This technique fully virtualizes the physical server, allowing applications and software to
operate as though on separate servers. It enables administrators to run unchanged and
fully virtualized operating systems.

• Advantages:

o Combines existing systems with newer ones, increasing efficiency and optimizing hardware use.

o Reduces operational costs by eliminating the need for repairing older systems.

o Improves performance while minimizing physical space requirements.

Full virtualization is widely used in the IT community for its simplicity. It involves the use
of hypervisors to emulate artificial hardware and host operating systems. In this setup,
separate hardware emulations are created for each guest operating system, making
them fully functional and isolated from each other on a single physical machine.
Different operating systems can run on the same server without modification, as they
are independent of each other.

Enterprises use two types of full virtualization:

i. Software-Assisted Full Virtualization

This type uses binary translation to virtualize instruction sets and emulate hardware
using software instruction sets. Examples include:

• VMware Workstation (32-bit guests)

• Virtual PC

• VirtualBox (32-bit guests)

• VMware Server

ii. Hardware-Assisted Full Virtualization

Hardware-assisted virtualization eliminates the need for binary translation by directly
interacting with the hardware through virtualization technology on x86 processors (Intel
VT-x and AMD-V). Privileged instructions are executed directly on the processor based
on the guest OS's instructions.

This type of virtualization uses two types of hypervisors:

• Type 1 Hypervisor (Bare-metal Hypervisor): This hypervisor runs directly on top of the physical hardware, providing excellent stability and performance. It has no intermediate software or operating system, and once installed, the hardware is dedicated to virtualization. Examples include:

o VMware vSphere with ESX/ESXi

o Kernel-based Virtual Machine (KVM)

o Microsoft Hyper-V

o Oracle VM

o Citrix Hypervisor

• Type 2 Hypervisor (Hosted Hypervisor): Installed inside the host operating system, this hypervisor adds a software layer beneath it. It is typically used in data centers with fewer physical servers and is convenient for environments where ease of setup and management of virtual machines is important. Examples include:

o Oracle VM VirtualBox

o VMware Workstation Pro/VMware Fusion

o Windows Virtual PC

o Parallels Desktop

2. Virtual Machines (VMs)

Virtual machines simulate hardware environments, requiring resources from the host
machine. A virtual machine monitor (VMM) manages the CPU instructions needing
special privileges.

• Advantages:

o Allows running guest operating systems without modification.

o Ensures secure code execution using VMMs, making it widely used in


tools like Microsoft Virtual Server, QEMU, Parallels, VirtualBox, and
VMware products.

3. Para-Virtualization

Para-virtualization is similar to full virtualization but with one key difference: the guest
operating systems are aware that they are running in a virtualized environment and
cooperate with the hypervisor. This approach is more efficient because it avoids
trapping privileged instructions; instead, the operating systems send hypercalls directly
to the hypervisor for better performance. To support hypercalls, both the hypervisor and
the operating system must be modified through an application programming interface
(API). Common products supporting para-virtualization include:

• IBM LPAR

• Oracle VM for SPARC (LDOM)

• Oracle VM for X86 (OVM)

• Advantages:

o Reduces VMM calls and minimizes unnecessary privileged instruction usage.

o Supports multiple operating systems on a single server.

o Enhances performance per server without increasing host operating system costs.

Full Virtualization vs. Paravirtualization:

1. In full virtualization, virtual machines allow the execution of instructions with an unmodified OS running in an entirely isolated manner. In paravirtualization, a virtual machine does not implement full isolation but provides a different API, which is used when the OS is modified.

2. Full virtualization is less secure; paravirtualization is more secure.

3. Full virtualization uses binary translation and a direct approach for operations; paravirtualization uses hypercalls at compile time for operations.

4. Full virtualization is slower in operation than paravirtualization.

5. Full virtualization is more portable and compatible; paravirtualization is less portable and compatible.

6. Examples of full virtualization include Microsoft and Parallels systems, VMware ESXi, and Microsoft Virtual Server. Examples of paravirtualization include Microsoft Hyper-V, Citrix Xen, etc.

7. Full virtualization supports all guest operating systems without modification; in paravirtualization, the guest operating system must be modified, and only a few operating systems support it.

8. In full virtualization, the guest operating system issues hardware calls; in paravirtualization, the guest operating system communicates directly with the hypervisor using drivers.

9. Full virtualization is less streamlined compared to paravirtualization.

10. Full virtualization provides the best isolation; paravirtualization provides less isolation.

4. Operating System-Level Virtualization

Unlike full and para-virtualization, OS-level virtualization does not use a hypervisor or
the host-guest paradigm. Instead, it uses "containerization" to create multiple user-
space instances (containers) through the OS kernel. Each container operates with its
allocated resources, isolated from the primary OS. However, OS-level virtualization only
supports running different versions of the same OS family (e.g., multiple Linux versions)
but not different OS types (e.g., Linux and Windows). Common containerization
solutions include:

• Oracle Solaris Zones

• Linux LXC

• AIX WPAR

• Advantages:

o Offers superior performance and scalability compared to other techniques.

o Simplifies control and management through the host system.

VMM (Virtual Machine Manager)

The Virtual Machine Manager (VMM) is a program that manages processor scheduling
and physical memory allocation. It creates virtual machines by partitioning physical
resources and interfaces with the underlying hardware to support both host and guest
operating systems.

Hardware-Assisted Virtualization enables efficient functioning of system hardware
with the help of a Virtual Machine (VM). It involves several abstract or logical layers
where the VM is installed on a host operating system (OS) to create additional virtual
machines. In this setup, the OS acts as a guest, the physical component is the host, the
hypervisor serves as the Virtual Machine Manager, and the emulation process
represents the Virtual Machine.

The hypervisor creates an abstraction layer between the host and guest components of
the VM, using virtual processors. For efficient hardware virtualization, the virtual
machine interacts directly with hardware components without relying on an
intermediary host OS. Multiple VMs can run simultaneously, each isolated from the
others to prevent cyber threats or system crashes, improving overall system efficiency.

Types of Virtualization

1. Application Virtualization: Allows remote access to an application from a server. The application's data and settings are stored on the server, but it runs on a local workstation via the internet. This is used in hosted and packaged applications.

2. Network Virtualization: Enables running multiple virtual networks, each with separate control and data planes, on a single physical network. It allows for the management of virtual network components like routers, switches, firewalls, and VPNs.

3. Desktop Virtualization: Stores the user's OS on a server, allowing access from any machine. It provides user mobility and simplifies software management, patches, and updates, especially for non-Windows OS needs.

4. Storage Virtualization: Manages a set of servers through a virtual storage system, presenting data from different sources as a single repository. It ensures smooth operations, performance, and continuous functionality despite underlying equipment changes.

5. Server Virtualization: Separates computer hardware from the OS, allowing virtual machines to be treated as files. It provides elasticity, enabling hardware adjustments based on workload, and helps expand data centers without buying new hardware.
Characteristics of Virtualization

1. Resource Distribution: Virtualization allows the creation of unique computing environments from a single host machine, enabling efficient resource management, power reduction, and easier control.

2. Isolation: Virtual machines (VMs) provide isolated environments for guest users,
protecting sensitive data and allowing secure, independent operations of
applications, operating systems, and devices.

3. Availability: Virtualization enhances uptime, fault tolerance, and availability. It helps minimize downtime, boosts productivity, and mitigates security risks by offering features not available in physical servers.

4. Aggregation: Virtualization enables resource sharing from a single machine, allowing multiple devices to be consolidated into a powerful host. Cluster management software may be used to connect a group of computers or servers into a unified resource center.

5. Authenticity and Security: Virtualization platforms ensure continuous uptime by automatically balancing loads and distributing servers across multiple hosts, preventing service interruptions.

Benefits of Virtualization

1. Increases development productivity.

2. Reduces the cost of acquiring IT infrastructure.

3. Enables rapid scalability and remote access.

4. Provides greater flexibility.

5. Allows running multiple operating systems on a single machine.

Disadvantages of Virtualization

1. High implementation costs.

2. Potential security risks.

3. Can be time-intensive to set up and manage.

4. Possible lack of availability due to dependency on the host system.

Hyper-V is considered a Type 1 hypervisor.

Despite being installed on Windows Server or Windows 10, Hyper-V operates as a native
hypervisor because it runs directly on the physical hardware, without relying on the host
operating system for control. The original Windows operating system is not directly in
control of hardware resources; instead, Hyper-V manages the system and creates
virtual environments for the guest operating systems.

When you add the Hyper-V role on Windows Server, what happens is that Hyper-V takes
control of the hardware, and the host OS becomes a virtual machine running within
Hyper-V (i.e., it’s running in "Partition 0"). However, Hyper-V itself remains a Type 1
hypervisor because it operates directly on the hardware, not relying on an underlying
OS to manage hardware resources.

In cloud computing, mobility patterns refer to the different ways data, applications, or
virtual machines (VMs) can move between different environments, such as physical
machines, virtual machines, or cloud infrastructure. The following are common mobility
patterns:

1. P2V (Physical to Virtual)

• Description: P2V refers to the process of converting a physical machine into a virtual machine. It involves migrating the operating system, applications, and data from a physical server to a virtual environment.

• Use Case: This is commonly used for server consolidation, where multiple
physical servers are virtualized to improve resource utilization and management.

2. V2V (Virtual to Virtual)

• Description: V2V refers to the process of moving a virtual machine from one
virtualization host to another. This is often done for load balancing, fault
tolerance, or maintenance.

• Use Case: Used when there is a need to move virtual machines across different
hypervisors or data centers to optimize performance or resource usage.

3. V2P (Virtual to Physical)

• Description: V2P involves migrating a virtual machine back to a physical server. This is a less common scenario but may occur in situations where a physical environment is needed for specific hardware or performance requirements.

• Use Case: Typically used when a virtualized environment becomes insufficient for specific workloads or when moving to physical infrastructure for more intensive processing.

4. P2P (Physical to Physical)

• Description: P2P refers to the migration or movement of workloads between
physical machines. This could involve either moving data or transferring entire
systems to different physical servers.

• Use Case: Used in scenarios where physical servers need to be reallocated for
optimal resource utilization or during hardware upgrades.

5. D2C (Datacenter to Cloud)

• Description: D2C refers to the migration of applications, services, or entire infrastructures from an on-premises data center to the cloud. It involves moving data and workloads from local servers to cloud environments.

• Use Case: Used when businesses want to take advantage of cloud scalability,
elasticity, or to reduce infrastructure costs.

6. C2C (Cloud to Cloud)

• Description: C2C involves moving workloads, applications, or data between two cloud environments. This may happen between different cloud service providers or between different regions or zones within the same cloud provider.

• Use Case: Used for disaster recovery, geographic distribution, or moving services to a more cost-effective cloud provider.

7. C2D (Cloud to Datacenter)

• Description: C2D refers to migrating workloads from the cloud back to an on-
premises data center. This might be necessary for compliance, security, or
performance reasons.

• Use Case: Businesses may move workloads back to an on-premises environment if they require more control over data or if cloud costs become prohibitive.

8. D2D (Datacenter to Datacenter)

• Description: D2D involves moving workloads or data between different on-premises data centers. This is typically done for disaster recovery, load balancing, or ensuring redundancy across different physical locations.

• Use Case: Used for backup and disaster recovery purposes, or when
consolidating data from multiple on-premises locations into a more centralized
or optimized data center.

The Virtual Machine Life Cycle involves the following stages:

1. Request & Assessment: A request for a new server is made, and IT reviews the
required resources.

2. Provisioning: IT allocates resources and creates the VM with necessary
configurations.

3. Deployment: The VM is powered on and starts providing the requested services.

4. Monitoring & Management: The VM is monitored and managed for performance
and security.

5. Decommissioning: When no longer needed, the VM is decommissioned, and
resources are released.
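The life cycle above is a linear progression, which can be sketched as a tiny state machine. The stage names follow the text; the transition table itself is an illustrative simplification:

```python
# Minimal state machine for the VM life cycle described above.
TRANSITIONS = {
    "requested":   "provisioned",     # request reviewed, resources allocated
    "provisioned": "deployed",        # VM powered on, services start
    "deployed":    "monitored",       # ongoing performance/security management
    "monitored":   "decommissioned",  # VM retired, resources released
}

def advance(state: str) -> str:
    """Move a VM to its next life-cycle stage; decommissioned is terminal."""
    return TRANSITIONS.get(state, "decommissioned")

state = "requested"
history = [state]
while state != "decommissioned":
    state = advance(state)
    history.append(state)
```

Modeling the life cycle explicitly like this is useful in practice because it makes illegal transitions (e.g., deploying a VM that was never provisioned) impossible to express.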

Load Balancing in Cloud Computing

In cloud computing, web applications face challenges when managing high traffic,
which may cause system breakdowns.
Load balancing ensures even distribution of traffic and workloads across the cloud
environment, enhancing application efficiency and reliability. When VMs face uneven
traffic, some resources may be underutilized while others are overloaded. This
imbalance degrades performance and affects service quality, making load balancing
crucial.

Goal
The main goal of load balancing is to distribute workloads evenly across multiple
resources (e.g., servers or virtual machines) to prevent any single resource from
becoming a bottleneck.

Process of Load Balancing

1. Load Balancing: The load balancer determines the current load on the VMs.

2. Resource Discovery: Identifies available resources to handle additional workloads.

3. Workload Migration: Moves tasks to available VMs to prevent overloading.

These processes are carried out by three units: the load balancer, the resource
discovery unit, and the task migration unit.

Load Balancing Techniques


Load balancing optimizes parameters like response time and system stability by
redistributing tasks across the cloud infrastructure. This process involves task
scheduling, resource allocation, and management. A two-level load balancing
architecture includes:

1. Physical Machine Level: The first-level load balancer balances the given
workload on individual physical machines by distributing the workload among their
respective associated virtual machines.

2. VM Level: Balances the workload across virtual machines belonging to different
physical machines.

These levels involve intra-VM task migration and inter-VM task migration to
maintain efficient resource utilization across the system.

Activities Involved in Load Balancing

The load balancing process in cloud computing includes several key activities to
efficiently distribute tasks across virtual machines (VMs) and resources:

1. Identification of User Task Requirements


This phase involves determining the resource requirements (e.g., CPU, memory)
for user tasks that need to be scheduled for execution on a VM.

2. Identification of Resource Details of a VM


This step checks the current resource utilization and unallocated resources of a
VM. Based on the status, the VM can be classified as balanced, overloaded, or
under-loaded, with respect to predefined thresholds.

3. Task Scheduling
After identifying the resource details, tasks are scheduled to appropriate VMs
using a scheduling algorithm. This phase ensures that tasks are assigned based
on available resources.

4. Resource Allocation
Resources are allocated to scheduled tasks for execution. A resource allocation
policy governs this process, aiming to improve resource management and
performance. The strength of the load balancing algorithm depends on the
efficiency of both scheduling and allocation policies.

5. Migration
Migration ensures that load balancing remains effective when VMs are
overloaded. There are two types of migration:

o VM Migration: Moves VMs from one physical host to another to alleviate
overloading. This can be live migration (without downtime) or non-live
migration (with downtime).

o Task Migration: Moves tasks across VMs. This can be intra-VM task
migration (within the same VM) or inter-VM task migration (across
different VMs). Task migration is more time- and cost-effective than VM
migration, making it a preferred method in modern load balancing
approaches.
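The activities above can be sketched in code. The following is a minimal, hypothetical illustration, not a production algorithm: the thresholds (0.80 and 0.20), VM names, and capacity numbers are all invented. It classifies VMs against load thresholds (activity 2) and places a task on the VM with the most free capacity (activities 1, 3, and 4).

```python
# Assumed thresholds for classifying a VM's utilization.
OVERLOAD_THRESHOLD = 0.80
UNDERLOAD_THRESHOLD = 0.20

def classify(vm):
    """Label a VM as overloaded, under-loaded, or balanced (activity 2)."""
    utilization = vm["used"] / vm["capacity"]
    if utilization > OVERLOAD_THRESHOLD:
        return "overloaded"
    if utilization < UNDERLOAD_THRESHOLD:
        return "under-loaded"
    return "balanced"

def schedule(task_demand, vms):
    """Place a task on the VM with the most free capacity (activities 1, 3, 4)."""
    candidates = [vm for vm in vms if vm["capacity"] - vm["used"] >= task_demand]
    if not candidates:
        return None  # no VM can host the task; migration would be needed
    target = max(candidates, key=lambda vm: vm["capacity"] - vm["used"])
    target["used"] += task_demand
    return target["name"]

vms = [
    {"name": "vm1", "capacity": 100, "used": 90},   # overloaded
    {"name": "vm2", "capacity": 100, "used": 10},   # under-loaded
    {"name": "vm3", "capacity": 100, "used": 50},   # balanced
]

print({vm["name"]: classify(vm) for vm in vms})
print(schedule(30, vms))  # placed on the under-loaded vm2
```

A real balancer would refresh utilization figures continuously and trigger task or VM migration when a host stays above the overload threshold.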
Factors Contributing to Load Unbalancing in IaaS Clouds

1. Dynamic Nature of User Tasks
The workload and resource demands of user tasks are constantly changing,
making it difficult to predict and balance the load efficiently.

2. Unpredictable Traffic Flow
Traffic to cloud services can be highly variable, making it challenging to manage
resource allocation and prevent overload.

3. Lack of Efficient Mapping Functions
An insufficient or inaccurate mapping function for assigning tasks to appropriate
resources can lead to improper resource allocation, contributing to load
imbalance.

4. NP-Hard Scheduling Problem
The scheduling of tasks is computationally complex (NP-hard), which makes it
difficult to find an optimal solution in real time.

5. Heterogeneous User Tasks
User tasks have varying resource requirements, which can create difficulties in
evenly distributing the load across available resources.

6. Uneven Distribution of Tasks and Dependencies
Tasks are often unevenly distributed across computing resources, and
dependencies between tasks can exacerbate load unbalancing by restricting
flexibility in resource management.

Load Balancing Algorithms in Cloud Computing

Load balancing algorithms in cloud computing can be classified based on the current
state of the Virtual Machines (VMs) into static or dynamic algorithms. These algorithms
include scheduling and allocation algorithms, which are essential for efficient
resource management and performance monitoring.

1. Scheduling Algorithms

Scheduling algorithms are decomposed into three key activities:

• Task Scheduling: Assigning user tasks to appropriate computing resources for
execution.

• Resource Scheduling: Planning, managing, and monitoring resources for task
execution.

• VM Scheduling: Managing the creation, destruction, and migration of VMs
across physical hosts.
2. Allocation Algorithms

Allocation algorithms are also divided into three activities:

• Task Allocation: Assigning tasks to specific resources where they will be
executed.

• Resource Allocation: Allocating the necessary resources (e.g., CPU, memory) to
tasks for their completion. Task allocation and resource allocation are inverses
of each other.

• VM Allocation: Assigning VMs to users or groups of users.

Both scheduling and allocation policies play a crucial role in effective resource
management, ensuring that cloud services meet Quality of Service (QoS) requirements
and deliver optimal performance to users.
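The static/dynamic distinction mentioned above can be made concrete with a toy sketch (task costs and VM names are invented): a static round-robin policy assigns tasks in a fixed rotation regardless of VM state, while a dynamic least-loaded policy consults the current load before each assignment.

```python
from itertools import cycle

def round_robin(tasks, vm_names):
    """Static policy: fixed rotation, blind to current VM load."""
    rotation = cycle(vm_names)
    return {task: next(rotation) for task in tasks}

def least_loaded(tasks, loads):
    """Dynamic policy: place each task on the currently lightest VM."""
    assignment = {}
    for task, cost in tasks.items():
        target = min(loads, key=loads.get)  # lightest VM right now
        assignment[task] = target
        loads[target] += cost               # update state for the next decision
    return assignment

tasks = {"t1": 5, "t2": 1, "t3": 4}
print(round_robin(tasks, ["vm1", "vm2"]))
print(least_loaded(tasks, {"vm1": 0, "vm2": 3}))
```

Note that the dynamic policy needs up-to-date load information, which is exactly the monitoring overhead that distinguishes dynamic algorithms in practice.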

Classification of Load Balancers

1. Hardware Load Balancer
These are designed to distribute workload at the hardware level across network,
server, storage, and CPU resources.

2. Elastic Load Balancer (ELB)
This automatically distributes incoming application traffic across multiple
targets, such as Amazon EC2 instances, containers, etc.

Elastic Load Balancing (ELB) Types

ELB offers three types of load balancers, all of which feature high availability, automatic
scaling, and robust security:

1. Application Load Balancer (ALB)

o Operates at Layer 7 (Application Layer) of the OSI model.

o Routes traffic to targets (EC2 instances, containers, IP addresses) based
on the content of the request (ideal for HTTP/HTTPS traffic).

o Supports advanced load balancing features.

2. Network Load Balancer (NLB)

o Operates at Layer 4 (Transport Layer) of the OSI model.

o Can handle millions of requests per second.

o Distributes traffic based on transport-level protocols such as TCP and
UDP; comparable Layer 4 services are offered by providers like AWS and
Microsoft Azure.

3. Classic Load Balancer (CLB)

o Provides basic load balancing across EC2 instances at both the request
and connection levels.

o Intended for applications built within the EC2-Classic network.
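The content-based (Layer 7) routing that an ALB performs can be pictured as a rule table matched against the request path. The sketch below is purely illustrative; the path prefixes and target-group names are invented, not real AWS configuration.

```python
# Hypothetical Layer 7 routing table: the first matching path prefix wins,
# mirroring how an Application Load Balancer forwards by request content.
RULES = [
    ("/api/", "api-target-group"),
    ("/images/", "static-target-group"),
]
DEFAULT_TARGET = "web-target-group"

def route(path):
    """Return the target group for a request path."""
    for prefix, target in RULES:
        if path.startswith(prefix):
            return target
    return DEFAULT_TARGET

print(route("/api/users"))   # api-target-group
print(route("/index.html"))  # web-target-group
```

A Layer 4 balancer, by contrast, never sees the path at all: it forwards based only on IP addresses and ports, which is why it can sustain much higher request rates.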

Cloud load balancing, offered by providers like Amazon Web Services (AWS),
Microsoft Azure, and Google Cloud, uses a software-based approach to distribute
network traffic across resources, ensuring optimal performance and availability.

• AWS provides Elastic Load Balancing (ELB), which efficiently distributes traffic
among EC2 instances, containers, and IPs. ELBs are a key architectural
component in most AWS-powered applications.

• Microsoft Azure uses Traffic Manager to manage and allocate traffic across
multiple data centers, optimizing performance and availability.

Key Benefits of Cloud Load Balancing:

1. Prevents Resource Overload: Distributes traffic evenly across multiple
resources to avoid overloading any single instance.

2. Improves Performance: Enhances response times by distributing traffic
intelligently, ensuring faster processing of user requests.

3. Increases Availability: Ensures high availability of applications by routing traffic
only to healthy resources, avoiding downtime.

4. Scalability: Automatically scales the distribution of traffic as demand increases,
allowing resources to adapt to varying workloads.

In contrast to hardware-based load balancing commonly found in enterprise data
centers, cloud load balancing is more flexible, scalable, and cost-efficient, making it a
preferred solution for cloud applications.

Concepts of Platform as a Service:

Containerization allows developers to build, package, and deploy applications in
isolated environments (containers) without worrying about underlying infrastructure.
PaaS providers like Heroku, Google App Engine, and Azure App Service use
containerization to enable scalable and consistent application delivery across various
environments. Containers in PaaS offer portability, faster deployment, and easier
management of dependencies, making them ideal for building cloud-native
applications.

Containers are executable units of software that package application code along with
its libraries and dependencies. These containers ensure that the application can run
consistently across different environments, whether it’s on a developer's desktop, in
traditional IT setups, or in cloud infrastructures.

Key Concepts:

1. Isolation: Containers use OS-level virtualization to isolate processes. This
ensures that each container runs independently without interfering with other
applications or services on the host.
2. Resource Control: Containers can be configured to control the amount of
resources (CPU, memory, disk) they can use, allowing better resource
management and preventing resource contention.
3. Portability: Unlike virtual machines (VMs), containers do not require a full guest
operating system. They use the host OS's kernel and resources, making them
lightweight and fast to deploy. This allows for applications to be easily moved
between environments, ensuring that they run consistently regardless of the
underlying infrastructure.

Benefits of Containers:

1. Efficiency: Containers are smaller and more resource-efficient than VMs since
they don’t need a full operating system. This reduces overhead and improves
performance.
2. Portable and Platform Independent: Containers carry all dependencies within
them, allowing software to run consistently across various environments, such
as local machines, on-premises servers, and the cloud, without needing
reconfiguration.
3. Faster Deployment: Containers can start and stop quickly, enabling faster
development cycles and quicker scaling of applications.
4. Supports Modern Development and Architecture: Containers are a perfect fit
for modern development practices like DevOps, serverless, and microservices.
Their portability, consistency, and small size support continuous
integration/continuous deployment (CI/CD) cycles, and make regular code
deployment easier.
5. Consistency: Developers can package all necessary dependencies within the
container, ensuring that the application runs the same way everywhere, reducing
the “it works on my machine” problem.

History:

• Early Containerization: Containers first appeared in systems like FreeBSD Jails
and AIX Workload Partitions, offering basic isolation of processes.
• Modern Containers: In 2013, Docker revolutionized the containerization
landscape by introducing a user-friendly tool that automated the process of
creating, distributing, and managing containers. Docker’s widespread adoption
marked the start of the modern container era.

Containers vs Virtual Machines (VMs):

• VMs :
o Full OS: VMs require a complete operating system to run, including a
guest OS, which increases resource consumption and startup time.
o Isolation: VMs provide strong isolation since each VM is a self-contained
environment with its own OS.
o Flexibility: VMs can run different operating systems (e.g., running Linux on
a Windows host) and are used for various types of workloads requiring full
OS functionality.
o Resource-Intensive: VMs are heavier, require more resources (CPU,
memory), and are slower to start compared to containers.
• Containers:
o OS-Level Virtualization: Containers virtualize the OS, sharing the host OS
kernel, making them lightweight and faster to start.
o Smaller Footprint: Containers package only the application and
necessary dependencies, making them more portable and easier to
deploy across environments.
o Isolation: While containers offer some isolation, they share the host OS
kernel, which can introduce security concerns, unlike VMs that have
better isolation.
o Faster Start and Deployment: Containers can start and stop quickly,
making them ideal for cloud-native applications that need fast scaling.

Use Cases for Containers:

1. Microservices:

o Containers are ideal for microservice architectures, where an
application is composed of multiple, loosely coupled services that can be
independently deployed, updated, and scaled.

2. DevOps:

o The combination of microservices and containers supports DevOps
practices, enabling teams to build, ship, and run software with greater
efficiency. Containers allow for consistent environments across
development, testing, and production.

3. Hybrid and Multi-cloud:

o Containers are highly portable, making them suitable for hybrid and
multi-cloud environments where businesses run workloads across
different public clouds and on-premises infrastructure.

4. Application Modernization and Migration:

o A common method of modernizing legacy applications is containerizing
them to make migration to the cloud easier, leveraging the flexibility and
scalability of cloud infrastructure.

How Cloud Containers Work:

• Isolation: Containers rely on OS-level isolation, where the container shares the
kernel of the host OS, but runs its own applications and libraries. This ensures
that containers are lightweight and use fewer resources than virtual machines.

• Cloud Container Services:

o Hosted Container Instances: Let you run containers directly on cloud
infrastructure without the need for a full virtual machine. Example: Azure
Container Instances (ACI).

o Containers as a Service (CaaS): Manages containers at scale, typically
without complex orchestration. Example: Amazon Elastic Container
Service (ECS).

o Kubernetes as a Service (KaaS): Managed Kubernetes service to deploy
clusters of containers, allowing for orchestration and scaling. Example:
Google Kubernetes Engine (GKE).

Bridging Containers and the Cloud: Challenges and Solutions:

1. Migration Challenges:

o Moving traditional applications to containers can be complex, particularly
when IT staff lack experience with container technologies.

o Cloud computing itself presents challenges, and integrating containers
requires adapting to cloud-native technology, which might involve training
or consulting services.

o Solution: Cloud providers offer managed services that streamline
onboarding and simplify container adoption, helping organizations
transition to containerized environments.

2. Container Security:

o Shared Responsibility Model: Cloud providers secure the underlying
infrastructure, while customers must secure containers, their images,
and persistent storage.

o Security Concerns:

▪ Vulnerabilities in Container Images: Containers may have
insecure components or malware.

▪ Excessive Privileges: Docker, by default, grants extensive
privileges that could be exploited by attackers.

▪ Short-Lived Nature: The ephemeral nature of containers makes
them harder to monitor and track for security issues.

o Best Practices:

▪ Scan container images for vulnerabilities and malware.

▪ Follow best practices for container configuration and limit
privileges.

▪ Use monitoring tools to track running containers and ensure
proper security controls are in place.

3. Container Networking:

o Container networking is more complex than traditional networking due to
the use of the Container Network Interface (CNI) and overlay networks,
which create isolated networks for communication.

o Cloud Networking Complexity:

▪ Cloud providers use their own terminology (e.g., Virtual Private
Cloud (VPC), Security Groups), adding complexity when
managing networking for containers.

▪ Mistakes in networking configurations can lead to security
vulnerabilities, such as exposing containers to the public internet.

o Solution: Managed container services (e.g., Amazon ECS, Google
Kubernetes Engine) or orchestrators like Kubernetes and Nomad
provide built-in networking management, reducing complexity and
enhancing security for containerized environments.

Docker Overview:

Docker is an open-source containerization platform that enables developers to
package, deploy, and run applications in a consistent environment across different
systems, whether on a developer's laptop, on-premises, or in the cloud. Docker
containers encapsulate the application, its dependencies, and the operating system
libraries needed to run it, providing a portable and standardized runtime environment.

Docker is not just about containers; it is a suite of tools designed for developing,
managing, and sharing containerized applications:

• Docker Build: Creates container images, which include the application code,
binaries, dependencies, and environment configurations needed to run the
application.

• Docker Compose: Defines and runs multi-container applications. It integrates
with code repositories (e.g., GitHub) and CI/CD tools (e.g., Jenkins) for efficient
workflows.

• Docker Hub: A registry service where developers can find and share container
images, similar to GitHub for code.

• Docker Engine: The container runtime that runs on various platforms (macOS,
Windows, Linux, cloud, etc.), built on top of containerd, an open-source
container runtime.

• Docker Swarm: A built-in orchestration tool that manages a cluster of Docker
engines (nodes) to facilitate the deployment and management of containers at
scale.
Kubernetes:

Kubernetes is an open-source container orchestration platform that automates many of
the manual processes involved in deploying, managing, and scaling containerized
applications.

The primary advantage of using Kubernetes is that it gives you the platform to schedule
and run containers on clusters of physical or virtual machines (VMs). More broadly, it
helps you fully implement and rely on a container-based infrastructure in production
environments because Kubernetes is all about automation of operational tasks.

Docker vs. Kubernetes:

While Docker is a set of tools for developing and running containers, Kubernetes is an
open-source container orchestration platform that helps manage large-scale
deployments of containerized applications. Here's a breakdown:

• Docker: Focuses on creating, sharing, and running individual containers.

• Kubernetes: Orchestrates containers across multiple hosts, managing their
lifecycle, scaling, and distribution. It handles provisioning, redundancy, health
monitoring, resource allocation, and load balancing.

Although Docker provides Docker Swarm for orchestration, Kubernetes has become
the dominant tool for container orchestration, widely adopted by the industry.

Container Orchestration with Kubernetes:

Kubernetes was introduced by Google in 2014 to solve the complexity of managing large
volumes of containers across distributed systems. It automates several critical tasks
such as:

• Provisioning: Automatically setting up containers and resources.

• Redundancy: Ensuring high availability of containers.

• Health Monitoring: Checking the health of containers and restarting them if
necessary.

• Scaling and Load Balancing: Automatically adjusting the number of container
instances based on demand.

• Resource Allocation: Ensuring efficient use of system resources (CPU,
memory).

• Cross-Host Migration: Moving containers between physical hosts.

Kubernetes uses YAML files to define the desired state of the containerized
environment, and the platform ensures that this state is continuously maintained. This
includes tasks like scaling applications, performing zero-downtime deployments, and
more.
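Desired state is usually written as YAML, but Kubernetes also accepts the equivalent JSON. As a rough illustration, a minimal Deployment manifest ("keep three replicas of this container running") can be built as a plain data structure; the names and the nginx image tag here are placeholders, not a recommended configuration.

```python
import json

# Minimal Deployment manifest expressing desired state: run 3 replicas of
# one container. Kubernetes reconciles the cluster toward this declaration.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

print(json.dumps(deployment, indent=2))
```

The key idea is declarative: you state the target (3 replicas), and the control plane continuously works to make reality match it, restarting or rescheduling containers as needed.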

Istio and Knative:

As containers are increasingly used in microservice architectures, additional tools have
emerged to enhance the production use of containers:

1. Istio:

o Purpose: A service mesh that manages the complexity of microservices
communication. It provides features such as traffic management, service
discovery, monitoring, and security for microservices.

o Use Case: Simplifies the management of interactions between numerous
microservices, especially in distributed systems.

2. Knative:

o Purpose: Knative enables serverless architectures by allowing
containerized services to scale to zero. This means a function is not
running until it is called, saving computing resources.

o Use Case: Ideal for cloud-native, serverless applications that need to
scale dynamically based on usage, significantly reducing infrastructure
costs.

Both Istio and Knative are part of the growing container ecosystem, extending the
capabilities of container orchestration and enabling the efficient management of
microservices and serverless applications.

In summary, Docker provides the tools for developing and running containers, while
Kubernetes orchestrates these containers at scale. Tools like Istio and Knative further
enhance container-based microservices and serverless environments, addressing
various operational challenges in cloud-native architectures.

Open SaaS:

Open SaaS refers to software-as-a-service (SaaS) applications that are built using open-
source software. Open-source software is publicly accessible and can be freely used,
modified, and distributed. When this software is deployed in the context of SaaS, it is
called Open SaaS.
Benefits of Open SaaS:

1. No License Required: Since open-source software is free to use, there is no
need to pay for software licenses, which significantly reduces costs for
businesses.

2. Low Deployment Cost: Open-source software is generally cheaper to deploy, as
it does not require purchasing proprietary software or licenses. This makes Open
SaaS a cost-effective option for companies.

3. Less Vendor Lock-in: Open-source solutions are not tied to a single vendor,
reducing the risks associated with vendor lock-in. This allows businesses to
switch providers more easily or modify the software to meet their specific needs.

4. More Portable Applications: Open SaaS applications can run on various
platforms, making them highly portable. They can be deployed on open-source
operating systems and databases, which improves flexibility and scalability.

5. More Robust Solutions: Open-source software often benefits from a large
community of developers who contribute to its improvement, resulting in more
robust, secure, and feature-rich applications. Companies deploying Open SaaS
can leverage this community-driven development.

Compliance as a Service (CaaS) is a cloud-based solution designed to assist
organizations in meeting regulatory, legal, and industry compliance requirements by
acting as a trusted third-party service provider.

1. Trusted Third Party: Acts as an intermediary ensuring compliance with legal and
regulatory frameworks.

2. SOA Integration: May need its own Service-Oriented Architecture (SOA) layer to
effectively manage and validate compliance processes.

3. Capabilities Required:

o Managing cloud relationships.

o Implementing and enforcing security policies.

o Handling privacy concerns and data management.

o Geographic awareness for jurisdiction-specific compliance.

o Incident response and archiving capabilities.

o Support for system queries as outlined in Service Level Agreements
(SLAs).
Applications in Vertical Clouds:

Vertical clouds are specialized cloud platforms focused on particular industries,
offering tailored CaaS solutions. Examples include:

• athenahealth: For compliance in the medical sector.

• bankserv: Specialized for banking regulations.

• ClearPoint PCI Compliance: Ensures adherence to the Payment Card Industry
Data Security Standard for merchant transactions.

• FedCloud: Designed for government compliance needs.

• Rackserve PCI Compliant Cloud: Offers PCI CaaS services for secure
transactions.

Benefits of CaaS:

• Measures and mitigates compliance-related risks.

• Provides indemnity and assurance against non-compliance penalties.

• Simplifies adherence to complex regulatory frameworks, adding significant value
to businesses.

Identity as a Service (IDaaS)

IDaaS is a maturing overlay on cloud architectures, providing authentication and
authorization services across distributed networks. Examples include:

• Domain Name System (DNS): Associates domains with assigned addresses
and ownership details.

• Two-Factor Authentication: Combines credentials like passwords (something
you know) with tokens or biometric data (something you have or are).

• Unique identifiers like Media Access Control (MAC) addresses and Windows
Product Activation IDs establish device identities.

Evaluating IDaaS Solutions:

A robust IDaaS solution should meet these criteria:

• User Control and Consent: Users should manage how their identity is used.

• Minimal Disclosure: Share only the necessary information for a specific use
case.

• Justifiable Access: Only authorized entities with a legitimate need should
access the information.

• Directional Exposure: Protect private identities while allowing public ones to be
discoverable.

• Interoperability: Must seamlessly integrate with other identity providers.

• Unambiguous Human Interaction: Ensure clarity in user-system interactions
while protecting against identity attacks.

• Consistency of Service: Maintain a simple, uniform experience across different
contexts and technologies.

SAML and Single Sign-On (SSO)

The Security Assertion Markup Language (SAML) facilitates authentication and
authorization in a Service-Oriented Architecture (SOA) by:

• Enabling Web Browser Single Sign-On (SSO) capabilities.

• Allowing users to authenticate once and access multiple services securely
without needing to re-enter credentials.

SSO (Single Sign-On) is a user authentication process that allows a person to access
multiple applications or services with a single set of login credentials (e.g., username
and password). It is convenient, secure and efficient.

OpenID Single Sign-On (SSO)

OpenID is a protocol designed for user authentication, enabling Single Sign-On (SSO)
functionality. It is built on top of OAuth 2.0, an authorization protocol, by adding an ID
Token to the access token used in OAuth 2.0. While OAuth 2.0 focuses on authorization
(granting access to resources), OpenID Connect (OIDC) is an identity authentication
protocol, enabling the verification of a user's identity to client services, also called
Relying Parties.
OAuth vs. OpenID:

• OAuth grants access to your API and user data in other systems; OpenID logs the
user into the account and makes that identity available in other systems.

• OAuth authorizes the user to access the resource; OpenID authenticates the user
to the service provider.

• OAuth plays the role of managing access to resources; OpenID provides you with
an Identity Layer.

• OAuth cannot differentiate between logged-in users, as two users can have the
same access to resources; OpenID can differentiate between logged-in users.

How OpenID SSO Works:

1. User Authentication: OpenID Connect redirects a user to an Identity Provider
(IdP) for authentication. The IdP verifies the user's identity, either through an
active session (SSO) or by requesting credentials.

2. Redirection: After authentication, the IdP redirects the user back to the original
application with an ID Token (which includes user attributes, like email) and
possibly an access token. These tokens confirm the user's identity and can be
used by the application to provide access.

3. SSO Flow:

o The user logs in once with an Identity Provider (e.g., Google or Facebook).
o They are then granted access to other applications that support OpenID
without needing to log in again.

Key Components of OpenID:

1. Standard Scopes:

o openid, profile, email, phone, etc. — defines the user information the
client can access.

2. Request Object (JSON): Contains the data used to request authentication from
the IdP.

3. ID Token: Contains information about the authenticated user, such as their
identity and attributes.

4. UserInfo Endpoint: Returns additional claims about the authenticated user
beyond those carried in the ID token.
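An ID Token is a JSON Web Token (JWT): three base64url-encoded segments (header, claims payload, signature) joined by dots. The sketch below fabricates a token with invented claims and then extracts the payload using only the standard library; it deliberately skips signature verification, which a real Relying Party must perform before trusting any claim.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without the '=' padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_payload(jwt: str) -> dict:
    """Extract the claims from a JWT's middle segment (no signature check)."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated ID Token: header.payload.signature, with a dummy signature.
claims = {"iss": "https://idp.example.com", "sub": "user-123", "email": "a@example.com"}
header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
token = f"{header}.{b64url(json.dumps(claims).encode())}.sig"

print(decode_payload(token)["email"])
```

In a real flow the `iss` (issuer) and `sub` (subject) claims identify who authenticated the user and which account they hold, which is exactly the information the Relying Party needs for SSO.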

Purpose of OpenID:

• Single Login for Multiple Sites: Users can log in once and access multiple
applications without repeated sign-ins.

• Secure SSO Access: Provides secure and seamless access to services across
various platforms.

• User Authentication: OpenID sends authentication information about the user,
enabling trusted access to services.

• Support for Web and Mobile Apps: OpenID is compatible with both web
applications (JavaScript-based) and native mobile applications (Android, iOS).

Who Uses OpenID?

• Major Identity Providers like Google, Facebook, and Twitter use OpenID to
allow users to authenticate once on their platform and then access other
applications and websites without having to sign in again.

Service-Oriented Architecture (SOA)

SOA is a design pattern that enables the creation of distributed systems where services
are delivered to other applications through standardized protocols. It is a concept that
can be implemented using different programming languages and platforms.

What is a Service?

A service is a self-contained, well-defined function that represents a unit of
functionality. It communicates with other services in a loosely coupled manner and is
independent of the state of other services. Communication between services occurs
using a message-based model.

Key Features of SOA:

• Reusability: Services can be reused in multiple applications via well-defined
interfaces.

• Integration: SOA provides a method for integrating disparate components
across different systems.

• Interoperability: Services can be integrated across different software systems,
even if they belong to separate business domains.

Difference Between SOA and Microservice Architecture:

• SOA focuses on building large-scale, reusable services that can be integrated
into various systems.

• Microservice Architecture is a more granular approach where each service is a
small, independently deployable unit with a focused functionality.

Roles in SOA:

1. Service Provider:

o Maintains the service and makes it available for use.

o Publishes services in a registry with a service contract, which specifies
how the service works and its usage requirements.

2. Service Consumer:

o Locates services via the registry and develops client components to bind
and use the service.

Service Interaction Patterns:

• Service Orchestration: Aggregates information from multiple services or
creates workflows of services to satisfy the consumer's request.

• Service Choreography: Involves coordinated interaction of services without a
central control point.
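The provider/consumer roles and the orchestration pattern above can be sketched with a toy in-process registry. Everything here is invented for illustration (service names, the Alice record, the order data); real SOA registries use service contracts and network protocols rather than Python callables.

```python
# Toy service registry: providers publish callables under a contract name,
# consumers look them up, and an orchestrator composes several services.
registry = {}

def publish(name, service):
    """Provider role: make a service available under a contract name."""
    registry[name] = service

def lookup(name):
    """Consumer role: locate a service via the registry."""
    return registry[name]

# Two independently published services with fabricated data.
publish("customer", lambda cid: {"id": cid, "name": "Alice"})
publish("orders", lambda cid: [{"order": 1, "total": 40}, {"order": 2, "total": 2}])

def customer_summary(cid):
    """Orchestration: aggregate two services into one response."""
    customer = lookup("customer")(cid)
    orders = lookup("orders")(cid)
    return {"name": customer["name"], "order_count": len(orders)}

print(customer_summary("c-1"))
```

Orchestration keeps the composition logic in one place (`customer_summary`); in choreography, by contrast, the services themselves would react to each other's messages with no such central coordinator.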

Characteristics of SOA:

1. Standardized Service Contract:
Services are defined by service contracts, typically described using service
description documents. This ensures clarity in how services interact.
2. Loose Coupling and Self-Contained:
Services are self-contained, with minimal dependencies on other services,
which allows for flexibility and scalability.

3. Abstraction:
Services are abstracted, with their implementation hidden. They are only defined
by their service contracts and description documents, making them easier to
use without knowledge of the internal workings.

4. Interoperability:
Services in SOA can interact across different platforms and technologies,
ensuring that they can work together seamlessly.

5. Reusability:
Services are designed as reusable components, reducing development time and
cost by reusing services across multiple applications.

6. Autonomy:
Services have control over their own logic and implementation, and consumers
do not need to understand how they work internally.

7. Discoverability:
Services are described by metadata and service contracts, making them easily
discoverable for integration and reuse.

8. Composability:
Services can be combined to form more complex business processes, enabling
businesses to achieve more sophisticated operations through service
orchestration and choreography.

Components of Service-Oriented Architecture (SOA):

I. Functional Aspects:

1. Transport:
Responsible for transporting service requests from the consumer to the provider
and responses from the provider to the consumer.

2. Service Communication Protocol:
Defines how the service provider and service consumer communicate.

3. Service Description:
Provides metadata that describes the service and its requirements.

4. Service:
The core unit of SOA that provides specific functionality.
5. Business Process:
Represents a sequence of services with associated business rules designed to
meet business requirements.

6. Service Registry:
A centralized directory that contains the descriptions of available services,
allowing service consumers to locate them.

II. Quality of Service Aspects:

1. Policy:
Specifies the protocols for how services should be provided to consumers.

2. Security:
Defines protocols for service authentication, authorization, and data protection.

3. Transaction:
Ensures consistency in service execution, ensuring that either all or none of the
tasks within a service group are completed successfully.

4. Management:
Defines attributes and processes for managing services in the architecture.

Advantages of SOA:

• Service Reusability:
Applications are made from existing services, leading to reduced development
time as services can be reused.

• Easy Maintenance:
Since services are independent, they can be modified or updated without
affecting other services.

• Platform Independence:
SOA allows the integration of services from different platforms, making the
application platform-agnostic.

• Availability:
Services are accessible upon request, providing easy access to necessary
resources.

• Reliability:
Smaller, independent services are easier to debug and maintain, enhancing the
reliability of SOA applications.

• Scalability:
Services can be distributed across multiple servers, increasing the ability to
scale as demand grows.

Disadvantages of SOA:

• High Overhead:
Every service interaction involves message validation and serialization, which
adds processing overhead and can increase load and response times.

• High Investment:
Implementing SOA often requires a significant initial investment in infrastructure
and development.

• Complex Service Management:
Managing the interactions of large numbers of services and messages can
become complex and challenging.

Practical Applications of SOA:

1. Military and Defense:
SOA is used to deploy situational awareness systems in military and air force
applications, enabling better decision-making.

2. Healthcare:
SOA is applied to improve healthcare delivery by integrating disparate systems
and services, making data more accessible and manageable.

3. Mobile Applications:
SOA is commonly used in mobile applications, such as integrating GPS
functionality within apps by accessing built-in device services.

Comparison of Cloud Providers: Google Web Services, AWS (Amazon Web Services), and Microsoft Cloud Services

Main Services

• Google Web Services: Google Search, Google Analytics, Google Ads, Google Translate, Google App Engine, Google APIs, Google Cloud Storage, Google Compute Engine, Google Kubernetes Engine, Google BigQuery.

• AWS: EC2, S3, EBS, SimpleDB, RDS, Lambda, Elastic Load Balancing, Amazon VPC, AWS Batch, AWS Glue, AWS Lightsail.

• Microsoft Cloud Services: Azure, Azure AppFabric, SQL Azure, CDN, Windows Live Services, Azure Virtual Machines, Azure Functions, Azure Blob Storage, Azure Kubernetes Service, Power BI, Office 365.

Target Users

• Google Web Services: Developers, businesses, enterprises, individuals, and researchers looking for cloud-based data storage and scalable computing resources.

• AWS: Enterprises, startups, developers, researchers, and organizations seeking scalable cloud infrastructure, storage, and computing power.

• Microsoft Cloud Services: Enterprises, developers, and businesses needing integration with Windows services, hybrid cloud solutions, APIs, and scalable application services across multiple platforms.

Key Features

• Google Web Services:
o Indexed Search: Fast and efficient search algorithms for indexing vast amounts of data.
o Google Analytics: Web analytics for tracking website traffic.
o Google Ads: Pay-per-click advertising for targeting users based on keywords.
o Google Translate: Multilingual translation.
o Google App Engine: Platform-as-a-service (PaaS) for scalable web app development.
o Google Cloud Storage: Object storage service for large-scale data.
o Google Compute Engine: Virtual machines for running applications.
o Google Kubernetes Engine: Container orchestration and management.
o Google BigQuery: Scalable data warehouse for analytics.

• AWS:
o Elastic Compute Cloud (EC2): Scalable computing capacity with flexible instance types.
o Simple Storage Service (S3): Scalable object storage.
o Elastic Block Store (EBS): Persistent block storage for EC2 instances.
o Relational Database Service (RDS): Managed relational databases supporting multiple engines (MySQL, PostgreSQL, SQL Server, etc.).
o Lambda: Serverless computing for running code in response to events.
o Elastic Load Balancing (ELB): Distributes incoming traffic across multiple instances for better availability.
o Amazon VPC: Virtual network service for isolating and controlling network resources.
o AWS Glue: Data integration service for ETL (extract, transform, load).
o AWS Batch: Managed batch processing for large-scale computations.

• Microsoft Cloud Services:
o Azure App Service: Platform for building and hosting web apps.
o Azure Blob Storage: Object storage for unstructured data.
o Azure SQL Database: Managed relational SQL database service.
o Azure Functions: Serverless compute service for running event-driven applications.
o Azure Virtual Machines (VMs): Scalable virtual machines for running apps and services.
o Azure Kubernetes Service (AKS): Managed Kubernetes cluster for container orchestration.
o Azure Content Delivery Network (CDN): Caching service to speed up the delivery of content globally.
o Power BI: Business analytics and data visualization tool.
o Windows Live Services: Integration with Microsoft-based services like Outlook, OneDrive, and Office 365.

Security

• Google Web Services:
o Integrated with Google Cloud Identity & Access Management (IAM) to manage access.
o Data encryption at rest and in transit.
o Multi-Factor Authentication (MFA).
o Advanced threat detection through Google Security Command Center.

• AWS:
o AWS Identity and Access Management (IAM): Granular control over user access to resources.
o Data encryption and secure networking with VPC.
o Regular security audits and compliance certifications.
o AWS Shield: Managed DDoS protection.
o AWS WAF: Web Application Firewall to protect against common web exploits.

• Microsoft Cloud Services:
o Azure Active Directory (AD): Centralized identity and access management.
o Azure Security Center: Unified security management system.
o Encryption for data at rest and in transit.
o Regular security certifications and compliance with industry standards.
o Azure Sentinel: Security information and event management (SIEM).

Scalability

• Google Web Services:
o Google Cloud Auto-scaling adjusts computing resources automatically.
o Managed Kubernetes Engine for scaling containerized applications.
o Global content delivery with low latency.

• AWS:
o EC2 instances can be dynamically scaled up or down.
o Auto Scaling groups for EC2.
o S3 and RDS services are designed to scale automatically to meet increasing demands.

• Microsoft Cloud Services:
o Auto-scaling for both web applications (App Service) and virtual machines.
o Azure Functions supports automatic scaling based on demand.
o Scalable storage solutions like Azure Blob and Azure SQL.

Integration with Other Services

• Google Web Services:
o Seamless integration with Google Workspace (Docs, Gmail, Sheets, etc.).
o APIs for Google Maps, Google Translate, YouTube, etc.
o Google Cloud Pub/Sub for messaging and event-driven systems.

• AWS:
o Integration with other AWS services, including S3, EC2, RDS, Lambda, and more.
o Extensive API offerings to connect to different applications.
o AWS Marketplace for third-party application integration.

• Microsoft Cloud Services:
o Deep integration with Microsoft-based tools like Office 365, SharePoint, and Dynamics 365.
o Support for hybrid cloud environments with Azure Stack.
o Azure Logic Apps for connecting and automating workflows between services.

Performance Optimization

• Google Web Services:
o Google Cloud's global infrastructure delivers high performance across geographies.
o Compute Engine offers virtual machines with fast CPUs, GPUs, and SSDs.
o BigQuery for fast data analysis with distributed computing.

• AWS:
o Elastic Load Balancer (ELB) distributes traffic efficiently.
o S3 storage optimized for performance with different storage classes.
o Use of multiple availability zones for redundancy and performance optimization.

• Microsoft Cloud Services:
o Azure Content Delivery Network (CDN) improves global content delivery speed.
o Azure Traffic Manager for traffic routing and improving performance.
o Performance optimization tools like Azure Monitor and Azure Advisor.

Compliance and Certifications

• Google Web Services:
o Google Cloud complies with various certifications, including GDPR, HIPAA, ISO 27001, SOC 2, SOC 3, and others.
o Provides tools for organizations to manage compliance and data privacy.

• AWS:
o AWS meets a wide range of compliance and regulatory standards like GDPR, HIPAA, PCI DSS, SOC 1, SOC 2, and others.
o Provides tools for auditing, monitoring, and ensuring compliance.

• Microsoft Cloud Services:
o Azure offers compliance certifications for GDPR, HIPAA, ISO 27001, SOC 1, SOC 2, and more.
o Azure Compliance Manager helps manage regulatory compliance.

Pricing Model

• Google Web Services:
o Pay-as-you-go pricing.
o Sustained use discounts for long-running workloads.
o Preemptible VMs for cost-effective scaling.

• AWS:
o Pay-per-use model with pricing based on compute, storage, and data transfer.
o Reserved Instances for EC2 for long-term cost savings.
o Free-tier services available for testing.

• Microsoft Cloud Services:
o Pay-per-use pricing for compute, storage, and networking services.
o Azure Reserved Instances for cost savings on long-term usage.
o Free-tier services available for testing and experimentation.

Examples of Use Cases

• Google Web Services:
o Web application hosting (Google App Engine).
o Analytics and big data processing (Google BigQuery).
o API-driven applications (Google APIs for Maps, Translate, etc.).

• AWS:
o Running scalable web applications (EC2, Lambda).
o Storing large datasets (S3).
o Hosting a relational database (RDS).
o Serverless applications (Lambda).

• Microsoft Cloud Services:
o Hybrid cloud environments combining on-premises and Azure (Azure Stack).
o Hosting enterprise applications (Azure App Service).
o Business analytics (Power BI, SQL Azure).

Cloud Management

Cloud management refers to the administration of resources and services within a
cloud environment. It helps businesses ensure their cloud infrastructure is efficient,
secure, and cost-effective.

A cloud management platform is a software solution with a robust and extensive set
of APIs that allow it to pull data from every corner of the IT infrastructure.

Cloud management software leverages FCAPS (Fault, Configuration, Accounting,
Performance, and Security) to monitor and control these elements.

• Fault Management: Detecting and resolving issues that could impact the cloud
environment.

• Configuration Management: Handling the setup, deployment, and maintenance
of cloud resources.

• Accounting Management: Tracking usage and costs, often for billing or resource
allocation.

• Performance Management: Monitoring system performance metrics to ensure
cloud resources meet desired standards.

• Security Management: Ensuring the cloud environment is secure from threats,
including data breaches, unauthorized access, etc.

Many products address one or more of these areas, with no single package typically
covering all five aspects of FCAPS comprehensively. Instead, network frameworks often
aggregate different products to provide a holistic view of cloud management.

Two Aspects of Cloud Management

1. Managing Resources in the Cloud:
Cloud management software offers capabilities to manage cloud resources
(e.g., storage, computing instances, network) within a cloud provider's
infrastructure. This includes provisioning, scaling, and monitoring cloud
resources like Infrastructure as a Service (IaaS), Platform as a Service (PaaS),
and Software as a Service (SaaS).
o IaaS: For example, services like Amazon Web Services (AWS) or
Rackspace provide cloud infrastructure where users have control over
the deployment and configuration of machines and storage. However, you
have limited control over network aspects like bandwidth, traffic flow,
routing, and packet prioritization. Even if you can provision more
bandwidth, specific network management features remain outside your
control.

o PaaS and SaaS: The control diminishes as you move towards Platform as
a Service (PaaS) and Software as a Service (SaaS) models. With PaaS
(e.g., Google App Engine or Windows Azure), the platform provides
management tools for creating, testing, and monitoring applications but
limits access to operational aspects like network and server
management. Similarly, SaaS products like Salesforce offer the
application, but with little operational control over the infrastructure.

2. Using Cloud to Manage On-Premises Resources:
Cloud management also involves utilizing cloud-based services to manage
on-premises resources. In this context, cloud services can be viewed similarly to
other networked services. By leveraging cloud services to manage on-premises
infrastructure (e.g., using hybrid cloud models), businesses can integrate
on-premises systems with the flexibility and scalability of cloud solutions. For
example, cloud-based management platforms can monitor, analyze, and control
on-premises infrastructure, enabling a cohesive management layer that spans
both on-premises and cloud resources.

Managing Cloud Resources via IaaS, PaaS, and SaaS

• IaaS (Infrastructure as a Service)
Provides cloud-based infrastructure for managing computing resources. Users
can alter machine instances, storage, and virtual networks, but with limited
control over lower-level network management aspects (e.g., routing, traffic flow).

• PaaS (Platform as a Service)
Offers a platform for deploying applications without managing the underlying
infrastructure. Tools like Google App Engine allow you to monitor and test
applications, but operational control over the environment (e.g., network, server
management) remains with the cloud provider.

• SaaS (Software as a Service)
Involves delivering software through the cloud, where users interact only with the
application itself. A prime example is Salesforce, where the platform is fully
managed by the cloud provider, offering no operational control to the end-user.
This illustrates the division of management responsibilities between the user and the
cloud provider across the various cloud service models.

Cloud Service Lifecycle Management

Cloud services, like any other system deployment, follow a structured lifecycle. Each
stage of this lifecycle involves specific tasks and responsibilities for the management of
the cloud service. Here are the six stages of the cloud service lifecycle:

1. Service Definition

Create a standardized template defining how service instances are deployed.

• Tasks: Create, update, or delete templates; define configurations and
dependencies.

2. Client Interactions (SLA Management)

Establish and manage Service Level Agreements (SLAs) with clients.

• Tasks: Manage service contracts; monitor performance expectations like
uptime.

3. Service Deployment and Runtime Management

Deploy service instances and manage them during runtime.

• Tasks: Launch, update, or delete instances; handle scaling and resource
allocation.

4. Service Optimization

Modify service attributes to optimize performance and meet evolving needs.

• Tasks: Optimize resources, customize features, and enhance service efficiency.

5. Operational Management

Perform maintenance and monitor service operations to ensure reliability.

• Tasks: Monitor resources, respond to incidents, and manage reporting and
billing.

6. Service Retirement

Decommission services and handle end-of-life tasks.

• Tasks: Protect and migrate data, archive records, and terminate contracts.

Automated Deployment

Automated deployment on IaaS systems represents one class of cloud management
services. One of the more interesting and successful vendors in this area is RightScale,
whose software allows clients to stage and manage applications on AWS (Amazon Web
Services), Eucalyptus, Rackspace, and the Chef Multicloud framework, or a combination
of these cloud types.

Service Measurement Index (SMI)

The Service Measurement Index (SMI) evaluates cloud services using six key areas:

1. Agility: Measures adaptability to changes and scalability.

2. Capability: Assesses the service's features and functionalities.

3. Cost: Evaluates affordability and cost-efficiency.

4. Quality: Focuses on service reliability and performance.

5. Risk: Analyzes potential vulnerabilities and compliance adherence.

6. Security: Examines data protection, encryption, and access control.
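
As a hypothetical illustration of how SMI results might be rolled up into a single figure, the snippet below computes a weighted score across the six areas. The scores and weights are invented for the example; a real SMI assessment defines its own sub-attributes and weightings.

```python
# Hypothetical SMI roll-up: per-area scores (0-10) combined with
# invented weights into one weighted score. Both dictionaries are
# example inputs, not part of any official SMI methodology.

scores = {
    "agility": 8, "capability": 7, "cost": 6,
    "quality": 9, "risk": 7, "security": 8,
}
weights = {  # must sum to 1.0
    "agility": 0.15, "capability": 0.15, "cost": 0.20,
    "quality": 0.20, "risk": 0.10, "security": 0.20,
}

smi_score = sum(scores[area] * weights[area] for area in scores)
print(round(smi_score, 2))  # 7.55
```

Changing the weights lets an organization bias the comparison toward what matters to it, e.g., weighting cost heavily for a startup or security for a bank.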


Service Level Agreements (SLAs)

A Service Level Agreement (SLA) is a formal contract outlining the performance
expectations and responsibilities between a client and a service provider. It is critical for
ensuring both parties agree on service quality and accountability.

Initially, SLAs were custom-negotiated for each client. Now, large utility-style providers
often offer standardized SLAs, customized only for large consumers of services.

Not all SLAs are legally enforceable. Some function more as Operating Level
Agreements (OLAs), lacking legal weight.

Always consult an attorney before committing to an SLA, especially for significant
engagements with a cloud provider.

Key SLA Parameters

1. Service Availability (Uptime):

o Specifies the guaranteed percentage of time the service will be
operational.

o Example: "99.9% availability" for critical systems.

2. Response Times or Latency:

o Defines how quickly the provider responds to client requests or incidents.

3. Reliability of Service Components:

o Outlines expected consistency in service functionality and performance.

4. Responsibilities of Each Party:

o Details the roles and duties of the provider and the client for maintaining
the service.

5. Warranties:

o Assurances for service quality and remedies if expectations are not met.
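
The availability figure in an SLA translates directly into a downtime budget. The short calculation below shows the arithmetic, assuming a 365-day year:

```python
# Translating an SLA availability percentage into the maximum yearly
# downtime it permits (365-day year assumed).

HOURS_PER_YEAR = 24 * 365

def max_downtime_hours(availability_pct):
    """Hours of downtime per year allowed by a given availability."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

print(round(max_downtime_hours(99.9), 2))   # 8.76  ("three nines")
print(round(max_downtime_hours(99.99), 2))  # 0.88  ("four nines")
```

Each additional "nine" cuts the permitted downtime by a factor of ten, which is why 99.99% SLAs are substantially more expensive to deliver than 99.9% ones.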

Cloud Transactions

A cloud transaction refers to an operation or series of operations carried out within the
cloud environment, which follows certain principles of atomicity, consistency, isolation,
and durability (ACID properties). Cloud transactions often need to handle distributed
systems, making them more complex than traditional database transactions.
Transaction management in the cloud involves ensuring consistency across a wide
array of distributed resources.
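
One classic technique for keeping a distributed transaction atomic is two-phase commit: a coordinator first asks every participant to vote, and commits only if all vote yes. The sketch below simulates that coordinator logic in memory; the participant names and behavior are hypothetical.

```python
# Simplified in-memory simulation of two-phase commit (2PC), a classic
# way to make a distributed cloud transaction atomic. Participant
# behavior is faked; names are hypothetical.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):               # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):                # phase 2: make changes durable
        self.state = "committed"

    def rollback(self):              # phase 2: undo any prepared work
        self.state = "aborted"

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise roll back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"

print(two_phase_commit([Participant("storage"), Participant("billing")]))  # committed
print(two_phase_commit([Participant("db", can_commit=False)]))             # aborted
```

The all-or-nothing outcome is exactly the atomicity property described above; real systems add logging and timeouts so a crashed coordinator can recover.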
System Abstraction in Cloud

System abstraction in cloud computing refers to the abstraction of hardware and
software resources into virtualized services that can be accessed by users. The physical
infrastructure (servers, storage, networks) is abstracted and presented to users as
virtualized resources, making it easier to manage, allocate, and scale services.

Examples include:

• Virtual Machines (VMs): Abstracting physical servers into software-based
instances.

• Storage Services: Abstracting data storage into cloud-based services (e.g.,
Amazon S3).

Cloud Bursting

Cloud bursting is a hybrid cloud model where an application primarily runs on a private
cloud or local data center, but during periods of high demand or resource shortages, it
"bursts" into a public cloud to use additional resources. This allows organizations to
scale their services without needing to maintain the infrastructure required for peak
loads.
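
The placement decision at the heart of cloud bursting can be sketched in a few lines; the capacity figure and function names below are hypothetical:

```python
# Sketch of cloud-bursting placement logic (hypothetical capacity and
# names): serve demand from the private cloud until it is saturated,
# then "burst" the overflow to a public cloud.

PRIVATE_CAPACITY = 100  # units of load the private cloud can absorb

def place_load(demand):
    """Split demand between the private and public clouds."""
    private = min(demand, PRIVATE_CAPACITY)
    public = max(0, demand - PRIVATE_CAPACITY)  # the "burst"
    return {"private": private, "public": public}

print(place_load(80))   # normal load stays on-premises
print(place_load(140))  # at peak, 40 units burst to the public cloud
```

The organization pays for public capacity only when the burst actually happens, which is the economic point of the model.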

Cloud APIs

Cloud APIs (Application Programming Interfaces) are crucial for enabling
communication and interaction between cloud applications and the services provided
by the cloud. These APIs allow developers to access cloud functionalities, such as:

• Storage management: APIs that provide access to cloud storage (e.g., AWS S3,
Google Cloud Storage).

• Compute management: APIs for provisioning and managing virtual machines
(e.g., EC2 in AWS).

• Networking: APIs to configure and manage network settings like load balancers,
firewalls, and VPNs.
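
Under the hood, most cloud APIs are REST calls wrapped by an SDK. The sketch below shows that shape with an invented endpoint and a stubbed transport so it runs offline; real SDKs (for example, boto3 for AWS) add authentication, request signing, and retries on top of this pattern.

```python
# Illustrative sketch of a cloud API client wrapping REST calls.
# The endpoint, resource names, and stub transport are hypothetical.

import json

class CloudClient:
    def __init__(self, base_url, transport):
        self.base_url = base_url
        self.transport = transport  # callable(method, url, body) -> str

    def create_vm(self, name, size):
        """POST a provisioning request and decode the JSON reply."""
        body = json.dumps({"name": name, "size": size})
        reply = self.transport("POST", self.base_url + "/vms", body)
        return json.loads(reply)

# Stub standing in for an HTTP library, so the sketch runs offline.
def fake_transport(method, url, body):
    vm = json.loads(body)
    vm["status"] = "provisioning"
    return json.dumps(vm)

client = CloudClient("https://api.example-cloud.com/v1", fake_transport)
print(client.create_vm("web-01", "small")["status"])  # provisioning
```

Because the transport is injected, the same client code can be pointed at a real HTTP layer or at a stub for testing, which is roughly how cloud SDKs separate API logic from networking.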

Cloud-based Storage

Cloud-based storage refers to the storage of data in an online environment hosted
on remote servers, often provided by third-party cloud service providers. This
storage is accessible via the internet and can scale according to the user's needs. There
are two types of cloud storage setups:
1. Manned Cloud Storage

Manned cloud storage is managed by cloud service providers, where a team of
administrators oversees the operation and security of the cloud storage system. The
provider is responsible for maintaining the infrastructure, performing regular backups,
ensuring data security, and managing any issues that arise.

Examples:

• Google Drive: Offers users personal storage, collaboration tools, and integration
with other Google services.

• Dropbox: Provides cloud storage with synchronization across devices and
shared file access.

2. Unmanned Cloud Storage

Unmanned cloud storage refers to cloud storage solutions that are often fully
automated, requiring minimal intervention from the provider’s administrators. Users are
responsible for their data management, with the service offering automated storage
provisioning, scaling, and backups. In some cases, there may be little direct human
oversight.

Examples:

• Amazon S3 (Simple Storage Service): A highly scalable and automated object
storage solution for large data storage needs.

• Microsoft Azure Blob Storage: A cloud service designed for storing large
amounts of unstructured data, such as text and binary data, with automated
scalability.

Webmail Services

Webmail services refer to email services that allow users to access their emails via a
web browser, eliminating the need for email client software or desktop applications.
These services are cloud-based, meaning users can access their emails from anywhere
with an internet connection. Webmail services often offer additional features such as
calendars, file sharing, and integration with other web services.

Common Webmail Services:

1. Google Gmail:

o One of the most popular webmail services, providing features like large
storage capacity, powerful search tools, and integration with other Google
services (Drive, Docs, Calendar, etc.).
o Offers 15GB of free storage, with options to upgrade via Google One.

2. Mail2Web:

o A free webmail service that supports multiple email protocols (IMAP,
POP3) and provides an easy-to-use interface for accessing email.

3. Windows Live Hotmail (now Outlook.com):

o Microsoft’s cloud-based email service, now integrated with
Outlook.com, offering features like an integrated calendar, Office Online,
and strong security features.

o Provides a free email account with 15GB of storage, with additional
storage available with an Office 365 subscription.

4. Yahoo Mail:

o One of the oldest webmail services offering free email storage (1TB of
storage) and additional features such as calendar, contacts, and
customizable themes.

o Focuses on simplicity and integration with other Yahoo services.

Syndication Service

Syndication services enable the distribution and sharing of content across multiple
platforms or websites. They allow content or any type of digital media to be shared and
distributed from a central source to multiple clients or users, in a standardized and
automated way, ensuring that the content is easily accessible to a larger audience
without needing to visit the original site.

Syndication services commonly use formats like RSS (Really Simple Syndication),
Atom, and JSON Feed to structure and distribute content. These formats provide a
standardized way to share content such as articles, blogs, videos, and other types of
media. Here’s how each format plays a role:

RSS (Really Simple Syndication):

• RSS is one of the most popular syndication formats, especially for blogs, news,
and podcast feeds. It allows content to be aggregated in a feed reader, where
users can subscribe to receive new content as it is published.

• RSS is structured in XML format, which allows for easy parsing and integration
with different tools and platforms.
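
The structure a feed reader consumes can be seen by parsing a minimal RSS 2.0 document with Python's standard library (the feed content below is invented):

```python
# Parsing a minimal RSS 2.0 feed with the standard library, to show the
# channel/item structure a feed reader aggregates. Feed content is
# invented for the example.

import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <item>
      <title>First post</title>
      <link>http://example.com/first</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
channel = root.find("channel")
print(channel.findtext("title"))       # Example Blog
for item in channel.findall("item"):
    print(item.findtext("title"))      # First post
```

A subscriber polls this document periodically; any new `<item>` elements are the "new content" delivered to the user, which is all the syndication mechanism amounts to.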

Atom:
• Atom is another XML-based syndication format, similar to RSS, but with a more
flexible structure and additional features. It was designed to overcome some
limitations of RSS and provide better control over metadata and content types.

• Atom is often used in situations where more advanced syndication features are
needed, such as handling non-XML content types.

JSON Feed:

• JSON Feed is a newer alternative to RSS and Atom. It uses JSON (JavaScript
Object Notation) to represent content feeds, making it easier to parse and
integrate into modern web applications, particularly for developers who prefer
working with JSON rather than XML.

• JSON Feed aims to offer the simplicity of RSS while leveraging the advantages of
JSON, such as being more lightweight and easier to handle in JavaScript-based
environments.
