UNIT 4
CLOUD DEPLOYMENT ENVIRONMENT
Google App Engine – Amazon AWS – Microsoft Azure; Cloud Software Environments
-Eucalyptus – OpenStack.
GOOGLE APP ENGINE
Google App Engine (GAE) is a platform-as-a-service (PaaS) product that enables web
app developers and enterprises to build, deploy and host scalable, high-performance
applications in Google's fully managed cloud environment without having to worry about
infrastructure provisioning or management.
GAE is Google's fully managed and serverless application development platform. It
handles all the work of uploading and running the code on Google Cloud. GAE's flexible
environment provisions all the necessary infrastructure based on the central processing unit
(CPU) and memory requirements specified by the developer.
Features of Google App Engine
1. Popular language: Users can build the application using language runtimes such as Java,
Python, C#, Ruby, PHP etc.
2. Open and flexible: Custom runtimes allow users to bring any library and framework to App
Engine by supplying a Docker container.
3. Powerful application diagnostics: Google App Engine uses Cloud Monitoring and Cloud Logging to monitor the health and performance of the app, and Cloud Debugger and Error Reporting to diagnose and fix bugs quickly.
4. Application versioning: It easily hosts different versions of the app and lets you create development, test, staging, and production environments.
Google App Engine Environments
Google Cloud provides two environments:
1) Standard Environment: a constrained environment with support for languages such as Python, Go, and Node.js.
Features of Standard Environment:
Persistent storage with queries, sorting, and transactions.
Auto-scaling and load balancing.
Asynchronous task queues for performing work.
Scheduled tasks for triggering events at regular or specific time intervals.
Integration with other GCP services and APIs.
2) Flexible Environment where developers have more flexibility such as running custom
runtimes using Docker, longer request & response timeout, ability to install custom
dependencies/software and SSH into the virtual machine.
Features of Flexible Environment:
Infrastructure Customization: GAE flexible environment instances are Compute Engine VMs, which means that users can take advantage of custom libraries, use SSH for debugging, and deploy their own Docker containers.
Open-source community support.
Native feature support: Features such as microservices, authorization, databases, traffic splitting, logging, etc. are supported.
Performance: Users can choose from a wider range of CPU and memory settings.
GAE ARCHITECTURE
An App Engine application is created under a Google Cloud Platform project when an application resource is created. The Application resource of GAE is a top-level container that includes the service, version, and instance resources that make up the app. When you create an App Engine application, all your resources are created in the user-defined region, including app code and a collection of settings, credentials, and your app's metadata.
Each GAE application includes at least one service, the default service, which can hold many versions, depending on your app's billing status.
The above diagram shows the hierarchy of a GAE application running with two
services. In this diagram, the app has 2 services that contain different versions, and two of those
versions are actively running on different instances:
Services
Services used in GAE is to constitute our large apps into logical components that can
securely share the features of App Engine and communicate with each other. Generally, App
Engine services behave like microservices. Therefore, we can run our app in a single service
or we can deploy multiple services to run as a microservice-set.
Ex: An app that handles customer requests may include distinct services, each handling different tasks, such as:
Internal or administration-type requests
Backend processing (billing pipelines and data analysis)
API requests from mobile devices
Each service in GAE consists of the source code from our app and the corresponding App Engine configuration files. The set of files that we deploy to a service represents a single version of that service, and each time we deploy a set of files to that service, we create a new version within that same service.
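For illustration, a minimal sketch of what one version of a Python service might look like is shown below; the file names, service name, and runtime settings in the comments are hypothetical examples, not the only way to structure a GAE deployment.

# main.py - a minimal App Engine (standard environment) service, assuming Flask is listed in requirements.txt
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Each deployment of this file set creates a new version of the service.
    return "Hello from the default service"

if __name__ == "__main__":
    # Local testing only; on App Engine the runtime starts the app itself.
    app.run(host="127.0.0.1", port=8080)

# app.yaml (deployed alongside main.py) would typically contain settings such as:
#   runtime: python39      <- language runtime for this version
#   service: default       <- which App Engine service the deployment belongs to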
Versions
Having different versions of the app within each service allows us to quickly switch
between different versions of that app for rollbacks, testing, or other temporary events. We can
route traffic to specific or different versions of our app by migrating or splitting traffic.
Instances
The versions within our services run on one or more instances. By default, App Engine scales our app according to the load. GAE will scale up the number of running instances to provide consistent performance, or scale down to minimize idle instances and reduce overall costs.
Advantages of Google App Engine
1. Easy to build and use the platform: GAE is fully managed, which lets developers focus on writing code with zero configuration and server management. It handles traffic management through automatic monitoring, patching, and provisioning.
2. Scalability: Google App Engine handles the workload fluctuations through scaling the
infrastructure, by adding or removing instances or application resources as needed.
3. Various API sets: Google App Engine has many built-in APIs and services which allows
developers to build robust and versatile apps. These features include:
1) Application log Accessibility
2) Blobstore- serve large data objects
3) GAE Cloud Storage
4) SSL Support
5) Google Cloud Endpoint for mobile application
6) URL Fetch API, User API, Memcache API, File API, etc
Limitations of Google App Engine
1. Lack of control: Although it’s a managed infrastructure which has its own advantages, if a
problem occurs in the back-end, the users are then dependent on Google to fix it.
2. Limited access: Developers have read-only access to the filesystem on GAE.
3. Java Limits: Java apps may only use a subset of the JRE standard edition classes and cannot create new threads.
4. Performance Limits: CPU-intensive operations in GAE are slow and expensive to perform because one physical server may be serving several discrete, unrelated App Engine users at once who need to share the CPU.
5. Language and frameworks Restrictions: GAE does not support various widely-used
programming languages and frameworks. Users have to depend on the custom runtimes to
utilize other languages. GAE can only execute code from an HTTP request.
AMAZON AWS
AWS stands for Amazon Web Services, which uses distributed IT infrastructure to provide different IT resources on demand. It provides different services such as infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS). Amazon launched AWS, a cloud computing platform, to allow different organizations to take advantage of reliable IT infrastructure.
Providing Simple Storage Service (Amazon S3) revolutionized scalable storage management. Offering effective compute and storage services on a rental basis saved many startup companies and users the cost of setting up hardware infrastructure manually. Introducing the concept of serverless computing with the AWS Lambda service enhanced its business globally, and services like Elastic Beanstalk made the deployment of applications much easier, bringing in large audiences. AWS has always offered a
diverse array of services, with technical innovations and services updated to current trends, and it has emerged as a powerhouse in the world of cloud computing.
AWS Fundamentals
In the journey of AWS, understanding key concepts such as Regions, Availability Zones, and the Global Network Infrastructure is crucial. These fundamentals keep applications reliable and scalable globally and support a strategic deployment of resources for optimal performance and resilience. The following are some of the main fundamentals of AWS:
Regions: AWS provides its services through a division into regions. Regions are divided based on geographical areas/locations, and data centers are established within them. The scale of the data centers depends on the needs and traffic of users, so that users are served with low latency (a small query sketch follows this list).
Availability Zones (AZ): To protect data centers from natural calamities or other disasters, the data centers of a region are established as isolated sub-sections in separate locations, which enhances fault tolerance and disaster recovery management.
Global Network Infrastructure: AWS ensures the reliability and scalability of its services by setting up its own AWS network infrastructure globally. This helps in better management of data transmission for optimized performance and security.
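As a small illustration of regions and availability zones, the hedged boto3 sketch below lists the regions visible to an account and the availability zones inside one region; credentials are assumed to be configured already, and the region name used is only an example.

# List AWS regions and the availability zones of one region (requires boto3 and configured credentials).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Each region is a separate geographical area hosting AWS data centers.
for region in ec2.describe_regions()["Regions"]:
    print("Region:", region["RegionName"])

# Each region contains multiple isolated availability zones for fault tolerance.
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print("Availability Zone:", zone["ZoneName"], "-", zone["State"])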
Features
1) Flexibility
The key difference between AWS and traditional IT models is flexibility. Traditional models delivered IT solutions that required large investments in new architecture, programming languages, and operating systems. Although these investments are valuable, it takes time to adopt new technologies, which can also slow down your business. The flexibility of AWS allows us to choose the programming models, languages, and operating systems that are better suited for a project, so we do not have to learn new skills to adopt new technologies.
2) Cost-effective
Cost is one of the most important factors that need to be considered in delivering IT
solutions. For example, developing and deploying an application can incur a low cost, but after
successful deployment, there is a need for hardware and bandwidth. Owning our own infrastructure can incur considerable costs, such as power, cooling, real estate, and staff. The cloud provides on-demand IT infrastructure that lets you consume only the resources you actually need. In AWS, you are not limited to a set amount of resources such as storage, bandwidth, or computing resources, as it is very difficult to predict the requirements of every resource. Therefore, we can say that the cloud provides flexibility by maintaining the right balance of resources. AWS requires no upfront investment, long-term commitment, or minimum spend.
3) Scalable and elastic
In a traditional IT organization, scalability and elasticity were calculated with
investment and infrastructure while in a cloud, scalability and elasticity provide savings and
improved ROI (Return On Investment). Scalability in AWS is the ability to scale computing resources up or down as demand increases or decreases. Elasticity in AWS is defined as the distribution of incoming application traffic across multiple targets such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
4) Secure
AWS provides a scalable cloud-computing platform that offers customers end-to-end security and end-to-end privacy. AWS incorporates security into its services and provides documentation describing how to use the security features. AWS maintains the confidentiality, integrity, and availability of your data, which is of the utmost importance to AWS.
5) Experienced
The AWS cloud provides levels of scale, security, reliability, and privacy. AWS has
built an infrastructure based on lessons learned from over sixteen years of experience managing
the multi-billion dollar Amazon.com business. Amazon continues to benefit its customers by
enhancing their infrastructure capabilities.
Amazon AWS Architecture
The architecture considered here is the basic structure of AWS, centered on AWS EC2. EC2, also called Elastic Compute Cloud, allows clients or users to apply various configurations in their own projects as per their requirements, and it offers options such as pricing options, individual server mapping, server configuration, etc. S3, which is also present in the AWS architecture, stands for Simple Storage Service. Using S3, users can store and retrieve various types of data through Application Programming Interface (API) calls. S3 contains no computing element. We will discuss this topic in detail in the AWS products section.
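A minimal, hedged boto3 sketch of storing and retrieving an object in S3 is given below; the bucket and key names are hypothetical and the bucket is assumed to exist already.

# Store and retrieve a small object in S3 via API calls (requires boto3 and configured credentials).
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"      # hypothetical, pre-existing bucket
key = "notes/hello.txt"

# Upload (store) data of any type as an object.
s3.put_object(Bucket=bucket, Key=key, Body=b"Hello from S3")

# Download (retrieve) the same object.
response = s3.get_object(Bucket=bucket, Key=key)
print(response["Body"].read().decode("utf-8"))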
Key Components of AWS Architecture
Load Balancing
The load balancing component in the AWS architecture helps to enhance the
application and the server’s efficiency in the right way. According to the diagrammatic
representation of AWS architecture, this Hardware load balancer is mostly used as the common
network appliance and helps to perform skills in the architectures of the traditional web
applications. It also makes sure to deliver the Elastic Load Balancing Service, AWS takes the
traffic gets distributed to EC2 instances across the various available sources. Along with this,
it also distributes the traffic to dynamic addition and the Amazon EC2 hosts removals from the
load-balancing rotation.
Elastic Load Balancing
This load balancing can easily shrink and increase the capacity of load balancing by
tuning some of the traffic demands and supporting sticky sessions to have advanced routing
services.
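The following hedged boto3 sketch simply lists the Elastic Load Balancers in a region and shows the DNS name that clients would send traffic to; the region is an example and at least one load balancer is assumed to exist.

# Inspect Elastic Load Balancing (ELBv2) load balancers (requires boto3 and configured credentials).
import boto3

elb = boto3.client("elbv2", region_name="us-east-1")  # example region

for lb in elb.describe_load_balancers()["LoadBalancers"]:
    # Traffic sent to DNSName is distributed across the registered EC2 targets.
    print(lb["LoadBalancerName"], "->", lb["DNSName"])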
development of products widely. The following are real-world industrial use cases of AWS services:
Netflix: The large streaming giant uses AWS for storage and for scaling its applications, ensuring seamless, low-latency content delivery to millions of users globally without interruptions.
Airbnb: By utilizing AWS, Airbnb manages its various workloads and provides reliable and expandable infrastructure for its virtual marketplace and lodging offerings.
NASA’s Jet Propulsion Laboratory: It takes the help of AWS services to handle and analyze
large-scale volumes of data related to vital scientific research missions and space exploration.
Capital One: A financial company that utilizes AWS for security and compliance while delivering innovative banking services to its customers.
MICROSOFT AZURE
Microsoft Azure is a cloud computing platform that provides a wide variety of services
that we can use without purchasing and arranging our hardware. It enables the fast development
of solutions and provides the resources to complete tasks that may not be achievable in an on-
premises environment. Azure Services like compute, storage, network, and application services
allow us to put our effort into building great solutions without worrying about the assembly of
physical infrastructure.
Microsoft Azure is a growing set of cloud computing services created by Microsoft that hosts your existing applications, streamlines the development of new applications, and also enhances on-premises applications. It helps organizations in building, testing, deploying, and managing applications and services through Microsoft-managed data centers.
Azure Services
Compute services: These include Microsoft Azure Cloud Services, Azure Virtual Machines, Azure Website, and Azure Mobile Services, which process data in the cloud with the help of powerful processors.
Data services: This service is used to store data in the cloud that can be scaled according to requirements. It includes Microsoft Azure Storage (Blob, Queue, Table, and Azure File services), Azure SQL Database, and the Redis Cache (see the sketch after this list).
Application services: It includes services, which help us to build and operate our application,
like the Azure Active Directory, Service Bus for connecting distributed systems, HDInsight for
processing big data, the Azure Scheduler, and the Azure Media Services.
Network services: It helps you to connect with the cloud and on-premises infrastructure, which
includes Virtual Networks, Azure Content Delivery Network, and the Azure Traffic Manager.
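As a small illustration of the data services above, the hedged sketch below uploads a blob with the Azure SDK for Python; the connection string, container, and blob names are hypothetical and the container is assumed to exist.

# Upload a small blob to Azure Storage (requires the azure-storage-blob package).
from azure.storage.blob import BlobServiceClient

# Hypothetical connection string taken from the storage account's access keys.
conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="notes", blob="hello.txt")  # container assumed to exist

# Store unstructured data in Blob storage; overwrite if the blob already exists.
blob.upload_blob(b"Hello from Azure Blob storage", overwrite=True)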
Azure Architecture
Web role: It automatically deploys and hosts our app through IIS.
Worker role: It does not use IIS and runs our app standalone. If we want to run any continuous batches, we can use worker roles. Both the Web role and the Worker role interact with storage to get the application package, etc.
A cloud service is able to detect any failed VMs and applications and is ready to start new VMs or application instances when a failure occurs. Cloud service applications should not maintain state in the file system of their own VMs. To deploy these Web roles and Worker roles, we provide the configuration and code associated with the web application.
Azure Cloud Service Components
1. ServiceDefinition.csdef: this file specifies the settings that are used by Azure to configure the cloud service. For example: sites, endpoints, certificates, etc.
2. ServiceConfiguration.cscfg: this file contains the values that determine the configuration settings for the cloud service. For example: number of instances, types of instances, ports, etc.
3. Service package (.cspkg): this package is used to deploy the application as a cloud service. First, the application needs to be packaged using the CSPack command-line tool. CSPack generates an application package file that can be uploaded into Azure using the portal.
Advantages of Azure:
1. High Availability: It refers to the quality of computing infrastructure which allows it to
continue functioning, even when some of its components fail.
2. Data Security: Azure provides many features to secure data in the cloud, like Microsoft Defender for Cloud, Key Vault, Azure Information Protection, and many more.
3. Scalability: Azure provides 2 types of scalability i.e. Vertical and Horizontal scaling to
tackle the load by changing the capacity of resources or by adding the resources.
4. Cost-Effective: Azure provides different pricing models that can help to save costs.
5. Learning-Curve: Azure provides various programming languages such as C#, Visual Basic, etc., and tools such as Visual Studio, Azure ML Studio, Azure Dev tools, etc., for learning.
6. Hybrid-Capabilities: Azure provides a hybrid working model. It allows an organization or enterprise to avail services from the public cloud as well as from an on-premises network.
Azure DevOps
Azure DevOps is a Software as a Service (SaaS) offering from Microsoft Azure that reduces human effort and automates the deployment and testing of an application. You can use a number of its services to deploy an application or complete the Software Development Life Cycle (SDLC) in a fast and efficient manner. Previously, Azure DevOps was known as Microsoft Visual Studio Team Services (VSTS).
What is Microsoft Azure Used For?
The following are some of the use cases for which Microsoft Azure is used.
1. Deployment Of applications: You can develop and deploy applications in the Azure cloud by using services such as Azure App Service and Azure Functions; after the applications are deployed, end users can access them.
2. Identity and Access Management: The applications and data deployed and stored in Microsoft Azure can be secured with the help of Identity and Access Management. It is commonly used for single sign-on, multi-factor authentication, and identity governance.
3. Data Storage and Databases: You can store data in Microsoft Azure in services like Blob Storage for unstructured data, Table Storage for NoSQL data, File Storage, and Azure SQL Database for relational databases. These services can be scaled depending on the amount of data we are getting.
4. DevOps and Continuous Integration/Continuous Deployment (CI/CD): Azure DevOps provides tools including version control, build automation, release management, and application monitoring.
server provides that respective resource/service. A server can provide service to multiple clients at a time, and communication here mainly happens over a computer network.
4. Distributed Computing Environment: In a distributed computing environment, multiple nodes are connected by a network but are physically separated. A single task is performed by different functional units on different nodes of the distributed system. Here, different programs of an application run simultaneously on different nodes, and communication between the nodes happens over the network to solve the task.
5. Grid Computing Environment: In a grid computing environment, multiple computers from different locations work on a single problem. In this system, a set of computer nodes running as a cluster jointly perform a given task by applying the resources of multiple computers/nodes. It is a networked computing environment where several scattered resources provide a running environment for a single task.
6. Cloud Computing Environment: In a cloud computing environment, computer system resources like processing and storage are made available on demand. Here, computing is not done on an individual computer; rather, it is done in a cloud of computers where all required resources are provided by a cloud vendor. This environment primarily comprises three service models: software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS).
7. Cluster Computing Environment: In a cluster computing environment, the task is performed by a cluster, where a cluster is a set of loosely or tightly connected computers that work together. It is viewed as a single system and performs tasks in parallel, which is why it is also similar to a parallel computing environment. Cluster-aware applications are especially used in a cluster computing environment.
EUCALYPTUS
Eucalyptus is a Linux-based open-source software architecture for cloud computing and also a storage platform that implements Infrastructure as a Service (IaaS). It provides quick and efficient computing services. Eucalyptus was designed to provide services compatible with Amazon's EC2 cloud and Simple Storage Service (S3).
Eucalyptus stands for Elastic Utility Computing Architecture for Linking Your
Programs to Useful Systems. It is an open-source software framework that provides the
platform for private cloud computing implementation on computer clusters. Eucalyptus
implements infrastructure as a service (IaaS) methodology for solutions in private and hybrid
clouds.
Eucalyptus provides a platform for a single interface so that users can calculate the
resources available in private clouds and the resources available externally in public cloud
services. It is designed with extensible and modular architecture for Web services. It also
implements the industry standard Amazon Web Services (AWS) API. This helps it to export a
large number of APIs for users via different tools.
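Because Eucalyptus implements the AWS API, ordinary AWS tools can usually talk to it by pointing them at the private cloud's endpoint. The sketch below is a hedged illustration using boto3; the endpoint URL, credentials, and region name are hypothetical and depend on how the particular Eucalyptus cloud is configured.

# Talk to a Eucalyptus private cloud through its AWS-compatible EC2 API (requires boto3).
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="http://clc.example.internal:8773/services/compute",  # hypothetical Eucalyptus endpoint
    aws_access_key_id="EUCA_ACCESS_KEY",       # credentials issued by the private cloud
    aws_secret_access_key="EUCA_SECRET_KEY",
    region_name="eucalyptus",                  # placeholder region name
)

# The same describe_instances call used against AWS works against Eucalyptus.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])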
Key features:
Support for multiple users with the help of a single cloud
Support for Linux and Windows virtual machines
Accounting reports
Use of WS-Security to ensure secure communication between internal resources and
processes.
The option to configure policies and service level agreements based on users and the
environment
Provisions for group, user management and security groups
Eucalyptus Architecture
Eucalyptus CLIs can manage both Amazon Web Services and their own private instances. Clients have the independence to transfer instances from Eucalyptus to Amazon Elastic Compute Cloud. The virtualization layer manages the network, storage, and compute, and instances are isolated through hardware virtualization.
1. Images: A good example is a Eucalyptus Machine Image, which is software bundled as a module and uploaded to the cloud.
2. Instances: When we run an image and utilize it, it becomes an instance.
3. Networking: It can be further subdivided into three modes: Static mode (allocates IP addresses to instances), System mode (assigns a MAC address and attaches the instance's network interface to the physical network via the NC), and Managed mode (creates a local network of instances).
4. Access Control: It is used to apply restrictions to users.
5. Elastic Block Storage: It provides block-level storage volumes that can be attached to an instance.
6. Auto-scaling and Load Balancing: It is used to create or destroy instances or services based on requirements.
Eucalyptus Components
Cloud Controller
In many deployments, the Cloud Controller (CLC) service and the User-Facing
Services (UFS) are on the same host machine. This server is the entry-point into the cloud for
administrators, developers, project managers, and end-users. The CLC handles persistence and
is the backend for the UFS. A Eucalyptus cloud must have exactly one CLC.
User-Facing Services
The User-Facing Services (UFS) serve as endpoints for the AWS-compatible services
offered by Eucalyptus: EC2 (compute), AS (AutoScaling), CW (CloudWatch), ELB (Load
Balancing), IAM (Euare), and STS (tokens). A Eucalyptus cloud can have several UFS host
machines.
OPENSTACK
Architecture of OpenStack
OpenStack contains a modular architecture along with several code names for the components.
Nova (Compute)
Nova is a project of OpenStack that facilitates a way for provisioning compute
instances. Nova supports building bare-metal servers, virtual machines. It has narrow support
for various system containers. It executes as a daemon set on the existing Linux server's top for
providing that service. This component is specified in Python. It uses several external libraries
of Python such as SQL toolkit and object-relational mapper (SQLAlchemy), AMQP messaging
framework (Kombu), and concurrent networking libraries (Eventlet). Nova is created to be
scalable horizontally. We procure many servers and install configured services identically,
instead of switching to any large server.
Neutron (Networking)
Neutron is the OpenStack project that provides "network connectivity as a service" between interface devices (such as vNICs) managed by other OpenStack services (such as Nova). It implements the OpenStack Networking API. It handles all networking facets of the VNI (Virtual Networking Infrastructure) and the access-layer aspects of the PNI (Physical Networking Infrastructure) in an OpenStack platform. OpenStack Networking allows projects to build advanced virtual network topologies, which may include services such as a VPN (Virtual Private Network) and a firewall. Neutron permits dedicated static IP addresses or DHCP, and it permits Floating IP addresses that allow traffic to be rerouted.
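A hedged openstacksdk sketch of asking Neutron for network connectivity follows; the network and subnet names and the CIDR are illustrative only.

# Create a virtual network and subnet through the Neutron (networking) API using openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud entry in clouds.yaml

network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    network_id=network.id,
    name="demo-subnet",
    ip_version=4,
    cidr="192.168.10.0/24",  # example address range handed out by DHCP
)
print(network.id, subnet.cidr)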
Cinder (Block Storage)
Cinder is a service of OpenStack block storage that is used to provide volumes to Nova
VMs, containers, ironic bare-metal hosts, and more. A few objectives of cinder are as follows:
Open-standard: It is a reference implementation for community-driven APIs.
Recoverable: Failures should be easy to diagnose, debug, and rectify.
Fault-Tolerant: Isolated processes avoid cascading failures.
Highly available: It scales to very serious workloads.
Component-based architecture: New behaviors can be added quickly.
Cinder volumes facilitate persistent storage for guest VMs which are called instances.
These are handled by OpenStack compute software. Also, cinder can be used separately from
other services of OpenStack as software-defined stand-alone storage. This block storage system
handles detaching, attaching, replication, creation, and snapshot management of many block
devices to the servers.
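The short openstacksdk sketch below creates a Cinder volume that could later be attached to an instance; the size and name are examples, and the cloud name is a placeholder.

# Create a block-storage volume through the Cinder API using openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud entry in clouds.yaml

volume = conn.block_storage.create_volume(size=1, name="demo-volume")  # size in GiB
print(volume.id, volume.status)  # the volume provides persistent storage for instances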
Keystone (Identity)
Keystone is a service of OpenStack that offers shared multi-tenant authorization,
service discovery, and API client authentication by implementing Identity API of OpenStack.
Commonly, it is an authentication system around the cloud OS. Keystone could integrate with
various directory services such as LDAP. It also supports standard username and password credentials, Amazon Web Services (AWS)-style logins, and token-based system logins. The Keystone service catalog allows API clients to dynamically discover and navigate to the available cloud services.
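A hedged sketch of authenticating against Keystone with explicit username and password credentials is shown below; every value is a placeholder for the corresponding setting of a real cloud.

# Authenticate to Keystone with username/password credentials using openstacksdk.
import openstack

conn = openstack.connect(
    auth_url="https://keystone.example.com:5000/v3",  # hypothetical Identity API endpoint
    project_name="demo-project",
    username="demo-user",
    password="demo-password",
    user_domain_name="Default",
    project_domain_name="Default",
)

# If authentication succeeds, other services can be used through the same connection.
for server in conn.compute.servers():
    print(server.name)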
Glance (Image)
The Glance (image) service project offers a service in which users can upload and discover data assets that are meant to be used with other services. Currently, this includes image and metadata definitions.
Images
The Glance image services include discovering, registering, and retrieving virtual machine (VM) images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image. VM images made available through Glance can be stored in a variety of locations, from simple filesystems to object-storage systems such as the OpenStack Swift project.
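As a small hedged illustration, the snippet below queries Glance's RESTful API through openstacksdk to list the registered images and some of their metadata; the cloud name is a placeholder.

# List VM images and some of their metadata through the Glance (image) API using openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud entry in clouds.yaml

for image in conn.image.images():
    print(image.name, image.disk_format, image.size)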
Metadata Definitions: Glance hosts a metadefs catalog. It provides the OpenStack community with a way to determine various valid metadata key names and values that can be used on OpenStack resources.
Swift (Object Storage)
Swift is a distributed, eventually consistent object/blob store. The OpenStack object store project, called Swift, offers cloud storage software so that we can store and retrieve large amounts of data with a simple API. It is built for scale and optimized for durability, availability, and concurrency across the entire data set. Object storage is ideal for storing unstructured data that can grow without limit.
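A hedged openstacksdk sketch of storing an object in Swift and listing a container follows; the container and object names are examples and the cloud name is a placeholder.

# Store an object through the Swift (object storage) API using openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud entry in clouds.yaml

conn.object_store.create_container(name="demo-container")
conn.object_store.upload_object(
    container="demo-container",
    name="hello.txt",
    data=b"Hello from Swift",
)

# List the objects stored in the container.
for obj in conn.object_store.objects("demo-container"):
    print(obj.name)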
Horizon (Dashboard)
Horizon is the canonical implementation of the OpenStack Dashboard, which offers a web-based UI to various OpenStack services such as Keystone, Swift, Nova, etc. The Dashboard ships with a few central dashboards, such as a "Settings Dashboard", a "System Dashboard", and a "User Dashboard", which cover the core OpenStack support. The Horizon application also ships with a set of API abstractions for the core OpenStack projects in order to provide a consistent and stable collection of reusable methods for developers. With these abstractions, developers working on OpenStack Horizon do not need to be intimately familiar with the APIs of every OpenStack project.
Heat (Orchestration)
Heat is a service for orchestrating multiple composite cloud applications using templates, through both a CloudFormation-compatible Query API and an OpenStack-native REST API.
Mistral (Workflow)
Mistral is the OpenStack service that manages workflows. Typically, a user writes a workflow using the YAML-based workflow language and uploads the workflow definition to Mistral via its REST API. The user can then start the workflow manually via the same API, or configure a trigger to start the workflow on some event.
Ceilometer (Telemetry)
OpenStack Ceilometer (Telemetry) offers a single point of contact for billing systems, providing all the counters they require to establish customer billing across every current and future OpenStack component. The delivery of counters is traceable and auditable, the counters must be easily extensible to support new projects, and the agents performing data collection should be independent of the overall system.
Trove (Database)
Trove is database-as-a-service, used to provision relational and non-relational database engines.
Sahara (Elastic map-reduce)
Sahara is a component for rapidly and easily provisioning Hadoop clusters. Users define various parameters such as the Hadoop version number, node flavor details (RAM and CPU settings, disk space), cluster topology type, and more. After a user provides these parameters, Sahara deploys the cluster in a short time.
for orchestrating an operating system image that includes Kubernetes and Docker and executes
that particular image in bare metal or virtual machine inside the cluster configuration.
Barbican (Key manager)
Barbican is a REST API designed for the secure storage, provisioning, and management of secrets. It is aimed at being useful for all environments, including large ephemeral clouds.
Vitrage (Root Cause Analysis)
Vitrage is the OpenStack Root Cause Analysis (RCA) service for organizing, analyzing, and expanding OpenStack alarms and events, yielding insights regarding the root cause of problems and deducing their existence before they are directly detected.
Aodh (Rule-based alarm actions)
This alarming service enables the ability to trigger actions based on defined rules against metric or event data collected by Ceilometer or Gnocchi.
Benefits of OpenStack
1. Open Source: As we know, using the open-source environment, we can create a truly
defined data center. OpenStack is the largest open-source platform. It offers the networking,
computing, and storage subsystems in a single platform. Some vendors (such as RedHat) have
developed and continue to support their own OpenStack distributions.
2. Scalability: Scalability is the major key component of cloud computing. OpenStack offers
better scalability for businesses. Through this feature, it allows enterprises to spin up and spin
down servers on-demand.
3. Security: One of the significant features of OpenStack is security, and this is the key reason
why OpenStack is so popular in the cloud computing world.
4. Automation: Automation is one of the main selling points of OpenStack when compared to other options. The ease with which you can automate tasks makes OpenStack efficient. OpenStack comes with a lot of built-in tools that make cloud management much faster and easier.
5. Easy to Access and Manage
We can easily access and manage OpenStack, which is a big benefit for you. OpenStack is easy to access and manage because of the following features:
Command Line Tools - We can access OpenStack using command-line tools.
Dashboard - OpenStack allows users and administrators to access and manage various aspects of OpenStack through a GUI (graphical user interface) based dashboard component. It is available as a web UI.
APIs - There are a lot of APIs (Application Program Interfaces) that are used to manage OpenStack.
6. Services: OpenStack provides many services required for several different tasks for your
public, private, and hybrid cloud.
List of services - OpenStack offers a list of services or components such as the Nova, Cinder,
Glance, Keystone, Neutron, Ceilometer, Sahara, Manila, Searchlight, Heat, Ironic, Swift,
Trove, Horizon, etc. Each component is used for different tasks; for example, Nova provides computing services, Neutron provides networking services, Horizon provides a dashboard interface, etc.
7. Strong Community: OpenStack has many experts, developers, and users who love to come
together to work on the product of OpenStack and enhance the feature of OpenStack.
8. Compatibility: Public cloud systems like AWS (Amazon Web Services) are compatible
with OpenStack.
OpenStack Vs AWS
1. OpenStack is categorized as a Cloud Management Platform and Infrastructure as a Service (IaaS); AWS Lambda is categorized as a Cloud Platform as a Service (PaaS).
2. In OpenStack, Glance handles the images; in AWS, AMI (Amazon Machine Image) handles the images.
3. The LBaaS of OpenStack handles load-balancing traffic; in AWS, the ELB (Elastic Load Balancer) automatically distributes the incoming traffic from the services to the EC2 instances.
4. In OpenStack, each virtual instance is automatically allocated an IP address, handled by DHCP; AWS allocates a private IP address to every new instance using DHCP.
5. In OpenStack, identity authentication services are handled by Keystone; in AWS, they are handled by IAM (Identity and Access Management).
6. In OpenStack, Swift handles object storage; in AWS, object storage is managed by the S3 (Simple Storage Service) bucket.
7. In OpenStack, the Cinder component manages block storage; in AWS, block storage is managed by EBS (Elastic Block Store).
8. OpenStack provides MySQL and PostgreSQL for relational databases; users of AWS use an instance of MySQL or Oracle 11g.
9. OpenStack uses MongoDB, Cassandra, or Couchbase for a non-relational database; for a non-relational database, AWS uses EMR (Elastic MapReduce).
10. For networking, OpenStack uses Neutron; AWS uses VPC (Virtual Private Cloud).
11. Machine Learning (ML) and NLP (Natural Language Processing) are not readily available in OpenStack; they are possible in AWS.
12. OpenStack has no speech or voice recognition solution; AWS uses Lex for speech or voice recognition solutions.
13. OpenStack has Mistral, the Workflow Service; AWS follows the Simple Workflow Service (SWF).
14. OpenStack has Ceilometer for telemetry-based billing, resource tracking, etc.; AWS has the AWS Usage and Billing Report.
15. OpenStack has no serverless framework; AWS Lambda is a serverless framework.