Cloud Computing Fundamentals: Cloud Security
1.
IT security is a concern for most modern organizations and moving to the cloud can heighten those concerns.
Hi, I'm Dan Lachance and in this course, I'll discuss some of the key risk areas when it comes to security and
cloud computing. I'll also discuss control assessment frameworks and models, basic guidelines for security in
an XaaS environment including key areas of the IT infrastructure, and issues related to data transfer and
storage.
Information Security Objectives
Learning Objective
After completing this topic, you should be able to
describe the objectives of information security and how they relate to the cloud
1.
In this video, I'll discuss information security objectives. The Cloud Security Alliance, or CSA, is concerned with
providing security guidance in cloud computing environments. CSA consists of governments, industry
practitioners, corporate and private members, and so on. Collectively, these members offer cloud security
recommendations, education, and certification. And we've got to remember that when we deploy cloud IT
services and data, we're placing a lot of trust in a third party. So there are inherent security risks in doing this.
What we're talking about here then is security controls that are put in place so that those risks are mitigated or
at least are manageable. When we secure cloud assets, we need to think about data as well as applications. On
the data side, we need to think about the integrity or trustworthiness of that data, authentication to systems that
lead to access to the data. That also includes physical access to the compute infrastructure where the data is
stored. We also need to make sure that data is available whenever it's needed. So if there's a failure of
some kind, we have to have a contingency plan in place. Let's examine each of those in a bit more detail
starting with data integrity.
[Heading: Information Security Objectives. An organization's security posture is characterized by the maturity,
effectiveness, and completeness of the risk-adjusted security controls implemented. These controls are
implemented in one or more layers ranging from the facilities (Physical security), to the network infrastructure
(network security), to the IT system (system security), all the way to the Information and applications
(application security). Additionally, controls are implemented at the people and process levels, such as
separation of duties and change management, respectively.]
We need to make sure data hasn't been tampered with and that it's not corrupt either while it's being stored or
while in transit. We want to make sure that we have frequent backups that are not corrupt. Data needs to be
recoverable in a timely fashion when it's needed. Access to that data needs to be controlled whether it's by
permissions assigned to users and groups or even through role-based access control. Eventually, data reaches the end of its
useful life to a point where it can actually be deleted. In the cloud, we need to ensure that data is deleted in an
acceptable way to ensure that other cloud tenants won't be able to gain access to any data remnants that might
reside on disk. You also need to make sure that data doesn't get corrupted or stolen. We need to put the correct
measures in place. Part of that is access control. It also includes frequent backups. Access to the data needs to
be audited. Now that should happen on the cloud provider side, but we might also have control as cloud
customers. We might have a web interface where we can configure auditing or view captured audit data. And for data
integrity, of course, there needs to be isolation provided between cloud tenants. That might be done at the
virtual network level to keep the network traffic of one cloud customer separate from another cloud customer.
At the same time, we want to make sure that virtual machines themselves running in the cloud are kept isolated.
On the access and authentication level, we need to make sure that we deploy interdevice trust. So in a secured
environment where we have sensitive data being accessed through a cloud app, we might deploy PKI security
certificates, which are associated with public and private key pairs that are unique to the user or device they were
issued to. This way only trusted devices, for example, are allowed to even begin a connection to a cloud app.
Secure user identification might include using strong passwords, frequent password changes, and even
multifactor authentication. Security certificates need to be managed properly because they do have an expiry
date, and once they expire, the keys associated with those certificates are no longer valid. So we
want to make sure that certificates are renewed before they expire if the entity to which they were issued is still
considered valid and trustworthy. Of course, we need to make sure that audit logs are retained related to access
to data and authentication.
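Certificate lifecycle management like this can be partially automated. The following is a minimal Python sketch, using only the standard library, that checks how many days remain before a server's TLS certificate expires; the hostname shown is just an illustrative placeholder.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(hostname: str, port: int = 443) -> float:
    """Connect over TLS and return the number of days until the server certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # parsed certificate fields, including 'notAfter'
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

if __name__ == "__main__":
    # Hypothetical endpoint; substitute a real cloud app hostname.
    days = cert_days_remaining("example.com")
    print(f"Certificate expires in {days:.0f} days; renew well before that date.")
```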
We also should ensure that we're using some kind of a centralized identity provider for authentication so that we
don't need to account for authentication in each and every cloud-based application. This can be achieved with
identity federation. Physical access to the compute infrastructure in a cloud data center needs to be strictly
controlled. We want to make sure that even staff physically present in the data center don't have access to client data
that's stored on that cloud infrastructure. This means that there should be thorough background checks
performed on any cloud-provider data center employee. And there should be closed circuit TV security in place
to monitor the users as well as the equipment at the cloud-provider network. We need to have a plan B, a
contingency plan so that data is continuously available whenever we need it. We could deploy failover
mechanisms in the cloud, or the cloud provider might be doing this as well, such as failover clustering, where if
one node or server that's offering network services fails, those services are failed over to a remaining node
that's still running.
We should also make sure that we have a multi-location topology. For example, if there's a problem with an
entire data center maybe due to a flood, ideally, the cloud provider would be replicating that data to other data
centers under their control. We should also have an adequate backup schedule as a cloud customer, and in
some cases, according to the service-level agreement, the cloud provider might also do this on our behalf. It's
important that we look at third-party security assessments on the cloud provider's systems, their processes, and
their equipment so that we can determine whether or not they are trustworthy. We should also make sure that we
have a smooth transition when we migrate services to the cloud. Ideally, we won't be caught in vendor lock-in.
We want to make sure we're using standardized applications or if that's not possible, we at least have the option
of exporting data from those applications in a standardized format so that if needed, we can easily migrate that
to a different cloud provider.
We also want to make sure that cloud IT services are protected against denial of service attacks, so we could
ask the cloud provider about that or we might even be able to configure that with cloud-based firewall solutions.
At the application function and service levels, we need to make sure that security vulnerabilities are identified
within specific cloud applications that we will be using. Sometimes patches are available to mitigate those
issues. There might also be emphasis on Platform as a Service where developers are responsible for security
within the application that is then hosted as a Software as a Service offering. Now bear in mind that doesn't mean
that the developer builds in authentication into each app. What we want is the opposite approach, where they
don't do that and instead rely on trusted identity providers through identity federation. In this video, we
discussed information security objectives.
[Heading: Information Security Objectives. There should be emphasis on Software as a Service or SaaS
platform and its inherent security requirements. This responsibility falls on the SaaS provider.]
Cloud Security Challenges
Learning Objective
After completing this topic, you should be able to
describe the challenges associated with cloud security
1.
Outsourcing IT workloads and data to a third-party provider introduces some security risks, some of which are
similar to what we would face even if everything were hosted on-premises. In this video, I'll talk about cloud
security challenges. In the cloud, we need to think about what our assets are. An asset in the cloud could be
something of value to the organization such as database records or files that might contain customer
transactions, private information for customers or patients, or it could include trade secrets, and so on. Threats
are potential malicious or accidental events that could occur against those assets. A vulnerability is a weakness.
That weakness could be in our authentication mechanisms, in an application's API, or in the way that data is
transmitted or stored. An attack is an action taken to exploit a vulnerability and activate a threat. We need to think
about countermeasures that can be put in place to reduce the possibility of those threats being realized.
Security challenges in the cloud include data breaches where there is unauthorized access to data. That could
be from customers, from other users within an organization, from the cloud provider's personnel, or even from
malicious users on the Internet. Data loss comes in the form of theft, leakage, and data corruption. What we
might put in place as a countermeasure for data leakage is digital rights management or DRM whereby we
strictly control how data can be copied, forwarded through e-mail, printed, and so on. We might also reduce the
possibility of losing data due to corruption by conducting frequent backups and verifying that the backups
themselves are valid and not corrupted. An authorization challenge comes in the form of accounts that get
hacked by malicious users, or credentials, or sensitive data that gets intercepted over the network. Poorly
designed interfaces and APIs within software can definitely lead to problems in terms of confidentiality, integrity,
availability, or accountability. Denial of service, or DoS, attacks attempt to render a service unusable for
legitimate purposes. This might be done by flooding a network with useless network traffic or by crashing a
virtual machine host running a service in the cloud.
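One common countermeasure to that kind of request flooding is rate limiting at the service edge. The sketch below is a generic token-bucket limiter in Python, not tied to any particular cloud provider's firewall; the capacity and refill rate are arbitrary illustrative values.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit; drop or delay the request

# Example: permit bursts of 20 requests, sustained 5 requests per second per client.
limiter = TokenBucket(capacity=20, rate=5.0)
if not limiter.allow():
    print("429 Too Many Requests")
```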
There also could be malicious insider activity at the cloud service provider. We need to make sure that thorough
background checks and third-party audits are made at the cloud service provider level and that that data is
made available to us, the cloud customer. Poor planning always results in security issues. For example, if
there's insufficient research when planning a cloud deployment, or poor due diligence of the cloud service
provider's certifications, then the mechanisms and processes in use will also suffer. And we have to think about
planning for running IT workloads in the cloud, how we're going to authenticate users, how data is going to be
backed up. We might also consider migrating services or data from on-premises to the cloud. All of these
aspects of cloud computing need to be planned thoroughly ahead of time. Multi-tenancy means that we have
multiple customers sharing the same physical infrastructure on the cloud provider network. Isolation is normally
provided through virtual networks or virtual machines. However, if isolation isn't properly implemented, for
example, within virtual machines owned by different cloud tenants, then a security breach within one virtual
machine might lead to access to other virtual machines in the cloud provider's data center. In this video, we
discussed cloud security challenges.
Cloud Security Models
Learning Objective
After completing this topic, you should be able to
describe the three models for public cloud security responsibilities
1.
In this video, I'll discuss cloud security models. Cloud models define security requirements, strategies, and
responsibilities in terms of whether the security responsibility falls on the cloud consumer or the cloud provider.
The Cloud Security Alliance, or CSA, defines this issue by outlining the physical location of assets, resources, and
information, and by whom they are being consumed, which defines who is responsible for their governance.
There are three documented models, the first of which is the Cloud Risk Accumulation Model of the Cloud
Security Alliance; then there's the Jericho Forum's Cloud Cube Model. Finally, we have the NIST Model, which
includes multi-tenancy. With the Cloud Risk Accumulation Model of the Cloud Security Alliance, there needs to
be an understanding of the layer dependency with the various cloud service models. This is important when
identifying security risks and who is responsible for their control.
[Heading: Cloud Security Models. The CSA Cloud Reference Model is displayed. It includes the following three
service models: IaaS, PaaS and SaaS. The Infrastructure as a Service or IaaS model is the bottom layer. It is
also known as the foundation layer, which is used for computing and storage. In this model, the bottom-most
layer is the Facilities. On top of the Facilities layer is the Hardware layer. On top of the Hardware layer are the
Abstraction and Core connectivity and delivery layers. On top of these layers is the APIs layer. The Platform as
a Service or PaaS model is the middle layer. It is a rapid application model and acts as a middleware to the
IaaS. This model contains the Integration and Middleware layers. The Software as a Service or SaaS is the top
layer. It represents the complete applications on top of the PaaS service model. This model includes the Data,
Metadata, and Content layers at the bottom of the SaaS model. On top of these layers is the Applications layer.
On top of the Applications layer is the APIs layer. The Presentation mobility and Presentation platform are the
top-most layers.]
So the cloud security alliance then defines this interdependency between the three major service models. The
three major service models we're really talking about include Infrastructure as a Service, Platform as a Service,
and Software as a Service. Now with Infrastructure as a Service, we, the cloud customer, are creating
infrastructure on cloud provider equipment, so we might be building virtual networks, virtual machines, and
configuring cloud storage. Because we're doing most of the configuring, the security burden then falls on us, the
cloud customer. But if we go to the other end of the scale where we look at Software as a Service, most of the
security responsibility falls on the cloud provider and this is because we are using an offered IT solution that's
already built and made available through that cloud service provider. With on-premises IT solutions, because we
completely manage them, we are then completely responsible for their security. But again, if we go all the way
to Software as a Service hosted through a cloud provider, they would then be responsible for the vast majority
of the security of their offerings at that service model level.
[Heading: Cloud Security Models. A graph displaying how the security burden shifts with the deployed service
model is displayed. This graphic consists of three models: the IaaS model, the PaaS model, and the SaaS
model. In this graph, the provider's security responsibility is least at IaaS, where it falls mostly on the cloud consumer, greater at PaaS, and
highest at SaaS, where it falls mostly on the cloud provider. On the left is the IaaS model. In this model, the bottom layer is the
Facilities. On top of the Facilities layer is the Hardware layer. On top of the Hardware layer are the Abstraction
and Core connectivity and delivery layers. On top of these layers is the APIs layer. Next to the IaaS model is the
PaaS model. This model contains the Integration and Middleware layers along with the layers in the IaaS model. The
Integration and Middleware layers sit on top of the layers in the IaaS model. On the right is the SaaS model. The
SaaS model is on top of the PaaS model. This model includes the Data, Metadata, and Content layers at the
bottom of the SaaS model. On top of these layers is the Applications layer. On top of the Applications layer is
the APIs layer. The Presentation mobility and Presentation platform are the top-most layers. A graphic
displaying the areas of responsibility in cloud security is displayed. It contains the following models: On-
Premises, IaaS, PaaS, and SaaS. All the models include the following layers: Applications, Data, Runtime,
Middleware, O/S, Virtualization, Servers, Storage, and Networking. In the On-Premises model, all the layers are
labeled managed by you. The IaaS model includes the following layers: Applications, Data, Runtime,
Middleware, O/S, Virtualization, Servers, Storage, and Networking. The following layers are labeled managed
by you: Applications, Data, Runtime, Middleware, and O/S. The following layers are labeled Other manages:
Virtualization, Servers, Storage, and Networking. The PaaS model includes the following layers: Applications,
Data, Runtime, Middleware, O/S, Virtualization, Servers, Storage, and Networking. The following layers are
labeled managed by you: Applications and Data. The following layers are labeled Other manages: Runtime,
Middleware, O/S, Virtualization, Servers, Storage, and Networking. The SaaS model includes the following
layers: Applications, Data, Runtime, Middleware, O/S, Virtualization, Servers, Storage, and Networking. In this
model, all the layers are labeled Other manages.]
The cloud cube model consists of four dimensions of deployment. The first of which defines whether the cloud
data is stored internally or externally. We could have data stored on-premises, in the cloud, or perhaps even both
if we're synchronizing data to keep it up to date between on-premises and the cloud. The second
dimension is whether we are using proprietary versus open source cloud solutions. OpenStack is an example of
open source cloud software whereas Microsoft Azure is proprietary to Microsoft Corporation. The third
dimension is perimeterized versus de-perimeterized architectures. Here we're talking about the cloud security
perimeter that is either in our zone, under our control, or in another external zone. Then we've got insourced
versus outsourced as another dimension, where outsourced means that the cloud service is provided by a third
party. Insourced means the cloud service is provided and controlled internally; in other words, a private cloud
solution.
With the cloud cube model, the four criteria and dimensions give rise to numerous permutations for cloud
deployment. And we need to examine each of them to understand and then develop security requirements,
strategies, and responsibilities. So we have to consider whether we've got IT services being outsourced or
insourced, whether we're using a proprietary versus an open cloud solution, and so on. The National Institute of
Standards and Technology, or NIST, also outlines cloud deployment models in publication 800-145. Here in
publication 800-145, we can see there is a discussion of various deployment models including a private cloud, a
community cloud, a public cloud, as well as a hybrid cloud. All of the security issues related to that are also
outlined in this publication.
[Heading: Cloud Security Models. The Jericho Forum model from the Cloud Cube Model is displayed. It displays
a cube inside a cloud. The x-axis of the cube contains the Proprietary dimension at the left and the Open
dimension at the right. The y-axis contains the Internal dimension at the bottom and the External dimension at
the top. The z-axis contains the Perimeterized dimension and the De-Perimeterized dimension. This cube is
further divided into eight cubes. The Insourced cube contains the following dimensions: Internal, Proprietary, and
Perimeterized. The Outsourced cube includes the following dimensions: External, Open, and De-
Perimeterized. The home page of the csrc.nist.gov web site is displayed in the Google Chrome browser
window. The URL of page displayed is: csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf The home
page includes information about deployment models and their definitions. The four different deployment models
are as follows: Private cloud, Community cloud, Public cloud, and Hybrid cloud. In the Private cloud, the cloud
infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers, for
example, business units. It may be owned, managed, and operated by the organization, a third party, or some
combination of them, and it may exist on or off premises. In the Community cloud, the cloud infrastructure is
provisioned for exclusive use by a specific community of consumers from organizations that have shared
concerns, for example, mission, security requirements, policy, and compliance considerations. It may be owned,
managed, and operated by one or more of the organizations in the community, a third party, or some
combination of them, and it may exist on or off premises. In the Public cloud, the cloud infrastructure is
provisioned for open use by the general public. It may be owned, managed, and operated by a business,
academic, or government organization, or some combination of them. It exists on the premises of the cloud
provider. In the Hybrid cloud, the cloud infrastructure is a combination of two or more distinct cloud
infrastructures like private, community, or public that remain unique entities, but are bound together by
standardized or proprietary technology that enables data and application portability, for example, cloud bursting
for load balancing between clouds.]
Then there's the issue of multi-tenancy and security. This implies the sharing of resources and applications with
multiple users from different companies. You know, that's all happening on a cloud provider's infrastructure. So
this implies the requirement for a rigid security policy that guarantees consumer data segregation, isolation,
service levels, and governance. So we need to have defined isolation or security boundaries between different
cloud tenants. That could be achieved at the network level to keep network traffic isolated between cloud tenants,
or even at the application instance or virtual machine level, to make sure that the many customers using the same
applications in the cloud each have a separate instance to keep their data and running processes separated.
In this video, we discussed cloud security models.
[Heading: Cloud Security Models. A graphic displaying multi-tenancy in a Virtualized Public Cloud in an Off-
Premise Datacenter is displayed. A Public Cloud Provider with three business customers, A, B, and C are
displayed. They have different security, SLA, governance, and billing policies on shared infrastructure. There are
two VMMs. The first VMM contains five VMs and the second VMM contains two VMs. Customer A and
Customer B use the first VMM and the Customer C uses the second VMM. Customer A uses three VMs in the
first VMM. Customer B uses two VMs on the first VMM. Customer C uses the two VMs in the second VMM.]
Information Security Standards
Learning Objective
After completing this topic, you should be able to
describe relevant ISO standards for information security
1.
In this video, I'll discuss information security standards. The ISO web page lists a number of information security
management publications where they discuss ISMS, Information Security Management System. Many of the
publications are available here for reading as well as articles related to security governance. When we
outsource IT workloads and our data to a third-party cloud provider, we're introducing a new element of risk that
wasn't there before. As such, it can be very important that we follow information security standards. The
ISO/IEC has adapted a number of its publications so that they relate to cloud services. ISO/IEC 27001:2013 deals
with ISMS, Information Security Management System. This is designed to help small, medium, and large
businesses in any sector to keep information assets secure.
[Heading: Information Security Standards. The ISO/IEC 27001 – Information security management tabbed page
is displayed in the Google Chrome browser window. The address bar of this window displays the following URL:
www.iso.org/iso/home.standards/management-standards/iso27001.htm This page includes the login, Members
area, and a cart icons at the top-left corner of the page. This page also includes the following tabs: Standards,
About us, Standards Development, News, and Store and the Search ISO search field. The Standards tab is
selected by default and it contains the following tabs: Benefits, Certification, and Management system
standards. The Management system standards tab is displayed by default. This page is divided into two panes,
the left pane and the right pane. The left pane contains information about the ISO/IEC 27001 – Information
security management. It contains information for the following sections: What is an ISMS, Preview ISO/IEC
27001:2013, Certification to ISO/IEC 27001, Useful articles, and News. The right pane includes the Online
collection: Information Security Management Systems and ISO Store sections. The presenter now explains the
following points on ISMS: Some of the important ISO Information Security Standards are: ISO/IEC 27001:2013
and ISO/IEC 20000-9:2015 – Part 9, which provides guidance on the application of ISO/IEC 20000-1 to
cloud services. ISO/IEC 27001:2013 specifies the requirements for establishing, implementing,
maintaining and continually improving an information security management system within the context of the
organization. It also includes requirements for the assessment and treatment of information security risks
tailored to the needs of the organization.]
ISO/IEC 27002:2013 is a code of practice for information security controls. So this works well for organizations
with intentions of selecting security controls within the process of implementing an ISMS. Also there's the
implementation of commonly accepted information security controls. For example, firewalls, antivirus scanners,
demilitarized zones, and so on. In some cases, organizations could also develop or customize their own
information security management policies. The ISO/IEC documents provide general guidelines, which often
need to be tailored to specific organizational needs. ISO 31000:2009 focuses on risk management, and
principles, and guidelines. This way we can deal with the risk that is inherently introduced with working, for
example, with cloud providers and put security controls in place to mitigate those security issues. In this video,
we discussed information security standards.
[Heading: Information Security Standards. The ISO 31000:2009 provides principles and generic guidelines on
risk management. It can be used by any public, private, or community enterprise, association, group, or individual
and is not industry- or sector-specific. It can also be applied throughout the life of an organization and to a wide
range of activities including strategies and decisions, operations, processes, functions, projects, products,
services, and assets.]
Security as a Service
Learning Objective
After completing this topic, you should be able to
describe the Security as a Service model
1.
In this video, I'll discuss Security as a Service. Security as a Service, or SecaaS, is defined as an outsourcing
model for security management to a trusted third party. And this would include applications, such as antivirus
software, Single Sign-On configurations, e-mail protection, firewalls, and so on. So these services are made
available on a cloud provider's infrastructure but delivered over the Internet. SecaaS is defined as a
subgrouping underneath Software as a Service. The CSA, the Cloud Security Alliance, defines SecaaS as,
"Security as a Service is the delegation of detection, remediation, and governance of security infrastructure to a
trusted third party...". Let's take a look at the CSA publication related to this. The Cloud Security Alliance makes
their publications available to anybody over the Internet. Here I'm going to search for secaas, and we can see in
our PDF document that we have a SecaaS Implementation Guidance where we can get details related to
working with this as a service through the cloud.
[Heading: Security as a Service. The presenter navigates to the Google Chrome browser window. It displays a
tabbed page with the following URL: https://downloads.cloudsecurityalliance.org/initiatives/secaas/SecaaS_Cat_3_Web_Security_Implementation_Guidance.pdf
The presenter opens the Find text box and
enters secaas. The page finds the first SecaaS, which is the SecaaS Implementation Guidance.]
But what are the benefits of using this model? Well, there are a number of benefits to SecaaS. The first is
constant and user-independent virus definition updates if that's the service that we're using in the cloud. The
second is that there is more advanced security expertise available at the cloud service provider than would
probably be available within our organizations or government agencies. There is efficient user provisioning and
setup because one of the benefits of cloud-provided services is rapid elasticity, and that is possible also with
SecaaS. Logging and administration are the responsibility of the cloud service provider, where we would have
a browser interface for self-administration and activity monitoring that relate to our Security as a Service
offerings. There's also the issue of on-demand cost basis. Essentially, we consume the services as required and
pay, generally speaking, only for what we've used.
Some examples of Security as a Service offerings include virus protection, data loss prevention, cryptography,
network monitoring, web gateways, log management, e-mail security, intrusion detection, and so on. So instead
of us hosting this on-premises, we could have this delivered over the Internet from a cloud provider, and it could
still apply to localized on-premises data. The Cloud Security Alliance publishes a comprehensive list of
recommended practices for each of the Security as a Service offerings. There are many vendors that offer
Security as a Service including McAfee, Symantec, and Trend Micro. There are many public cloud providers
that offer Security as a Service for items, such as Web and Email, and there are also options for firewalls, anti-
malware, log inspection, integrity monitoring, all delivered over the Internet from a cloud provider. In this video,
we discussed Security as a Service.
[Heading: Security as a Service. One of the SecaaS service offerings is identity management. The presenter
then navigates to the Google Chrome browser window, which displays the following tabbed pages: ds-saas-
web-and-email-protection.pdf and Deep Security as a Service. The ds-saas-web-and-email-protection.pdf
tabbed page is open by default. The address bar of this tabbed page displays the following URL:
www.mcafee.com/us/resources/data-sheet/ds-saas-web-and-email-protection.pdf This page displays
information about McAfee SaaS Web and Email Protection. He then switches to the Deep Security as a Service
tabbed page. It displays a Deep Security as a Service from TREND MICRO page. This page contains the
following tabs: For Home, For Business, Security Intelligence, Why Trend Micro, and Support. The For Business
tab is selected by default. The path for this page is Home > For Business > Enterprise > Cloud and Data Center
Security > Deep Security as a Service. This page displays information about the Advanced security service built
for the cloud. Some information of this section is as follows: Deep Security as a Service is built on the proven
capabilities of the Deep Security platform providing intrusion detection and prevention, firewall, anti-malware,
web reputation, log inspection, and integrity monitoring.]
Cloud Security Risk Areas
Learning Objective
After completing this topic, you should be able to
describe the security risk areas for cloud computing
1.
In this video, I'll go over cloud security risk areas. One of the issues that arises with the adoption of cloud
services is the loss of governance. But there needs to be a clear definition of security requirements for services
and data that are hosted in the cloud. We also need to make sure that our cloud solution adheres to laws and
regulations as applicable to our organization. There also needs to be a clear allocation of responsibilities
between the cloud consumer and the cloud provider. Part of this could be listed within a service-level
agreement. We have to consider the failure of isolation between cloud tenants. With multitenancy, resources are
shared on the cloud provider network. So we need to ensure that the cloud provider is taking the appropriate
steps to ensure isolation of network traffic, application instances, and virtual machines.
Another risk that we face with cloud adoption is vendor lock-in. This is where we have a dependency on a cloud
provider. So we need to have a plan B, an exit strategy, so that if a vendor goes out of business or if some of
the data hosted at the vendor is subpoenaed by a foreign government law enforcement agency, then we need
to be able to go to another cloud provider or to host services once again on-premises. This has to be accounted
for so that when we need to do it, we have a plan in place, and we can do it quickly with a minimal disruption to
the business. Then there's the handling of security incidents, which is largely out of our control. The detection,
reporting, and subsequent management of security incidents falls, at least in part, on the cloud service provider. We need
to be made aware of how this will be done. And if there is an interface where we can at least view detected
security incidents related to our cloud tenancy, we need to know what that is. For visibility, we need to ensure
that the cloud service provider is transparent about their governance and operational issues related to cloud
tenants.
The management interface that we use for cloud services, and reporting, and so on, needs to be secured. So
we need to make sure that it doesn't use a plugin in the browser, for example, that has known vulnerabilities.
We also need to make sure that the communication happens over HTTPS and not just HTTP. Data needs to be
protected. Sensitive data perhaps might be labeled so that users with matching labels would have access to
that type of data. We have to think about data loss or data that's inaccessible due to a network outage or a
problem at the cloud provider data center. So service continuity is very important. A lot of this type of information
will be available in the service-level agreement between the cloud customer and the cloud provider. There's also
the possibility of malicious behavior at the cloud service provider. Ideally, thorough background checks will be
conducted for employees that work at cloud provider data centers. And in some cases, we might be able to
request the credentials and information about cloud service provider data center personnel. We need to
consider how data gets destroyed when it's hosted in the cloud. Often with many public cloud providers, once
data is deleted, that area is marked as immediately available to be written over, and it could even be allocated
to other cloud tenants. So we need to learn of these details for a given and chosen cloud provider.
[Heading: Cloud Security Risk Areas. Management interface vulnerabilities include remote access methods and
web browser vulnerabilities. Data destruction includes insecure or incomplete data deletion.]
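As a quick sanity check of a management interface, the sketch below (assuming the third-party requests library is installed, and using a placeholder URL) rejects plain HTTP and confirms the HTTPS endpoint answers with certificate verification left enabled.

```python
import requests
from urllib.parse import urlparse

def check_management_endpoint(url: str) -> None:
    """Refuse plain-HTTP management URLs and confirm the HTTPS endpoint responds over verified TLS."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Management interface must use HTTPS, not: {url}")
    # verify=True is the requests default, so certificate validation stays on.
    response = requests.get(url, timeout=10, verify=True)
    response.raise_for_status()
    print(f"{url} reachable over TLS, HTTP status {response.status_code}")

# Hypothetical cloud management portal URL for illustration only.
check_management_endpoint("https://portal.example-cloud.com/")
```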
The risk assessment approach begins with identifying assets within the cloud deployment that have value to the
organization. Then we need to categorize them in terms of their relative importance. Then we map the assets to
the proposed cloud deployment model. We evaluate potential cloud providers and their cloud service models.
And finally, we can then map data flow to applications and processes, and this will reveal security gaps that we
need to address with security controls. The Cloud Security Alliance highlights 13 security domains,
such as Governance and Enterprise Risk management, legal issues related to cloud computing, such as
jurisdiction and privacy laws. Then there's Compliance and Audit as the third of the 13 domains. Then there's
Information Management and Data Security that deals with data hosted in the cloud or being migrated from on-
premises to the cloud. Also what's important here is who is responsible for security.
[Heading: Cloud Security Risk Areas. A table explaining some of the domains of Cloud Security Alliance is
displayed. The table includes four rows and two columns. The columns are as follows: DOMAIN and
GUIDANCE DEALING WITH. The row details are as follows: For DOMAIN Governance and Enterprise Risk
management, GUIDANCE DEALING WITH is The ability of the organization to govern and measure enterprise
risk introduced by cloud computing. Items such as legal precedence for agreement breaches, Ability of user
organizations to adequately assess risk of a cloud provider, responsibility to protect sensitive data when both
user and provider may be at fault, and how international boundaries may affect these issues. For DOMAIN
Legal Issues: Contracts and Electronic Discovery, GUIDANCE DEALING WITH is Potential legal issues when using
cloud computing. Issues touched on in this section include protection requirements for information and
computer systems, security breach disclosure laws, regulatory requirements, privacy requirements, international
laws, etc. For DOMAIN Compliance and Audit, GUIDANCE DEALING WITH is Maintaining and providing
compliance when using cloud computing. Issues dealing with evaluating how cloud computing affects
compliance with internal security policies, as well as various compliance requirements (regulatory, legislative,
and otherwise) are discussed here. This domain includes some direction on providing compliance during an
audit. For DOMAIN Information Management and Data Security, GUIDANCE DEALING WITH is Managing data
that is placed in the cloud. Items surrounding the identification and control of data in the cloud, as well as
compensating controls that can be used to deal with the loss of physical control when moving data to the cloud,
are discussed here. Other items, such as who is responsible for data confidentiality, integrity, and availability are
mentioned.]
Portability and Interoperability means that we have to have an exit strategy so that, if we need to, we can move
our services and data to another cloud provider. This has to be planned ahead of time. Next, we have
Traditional Security, Business continuity & Disaster Recovery. Part of that will be on us, the cloud customer, but
also we have to think about the cloud provider. What are their contingency plans if there's a failure of an entire
data center? So we have to think about data center operations also so that as cloud customers, we can identify
data center characteristics, and this could be something that determines which cloud provider we go with.
Next, we have to think about Incident Response, Notification and Remediation. Now we might have a
management interface provided through a web browser, whereby we can take a look at these types of incidents
that relate to our cloud tenancy. We have to look at application-specific security. So this means securing
application software and usage running in the cloud. This might be outside of our control as it might be an entire
cloud offering in terms of Software as a Service. However, we might be able to control things like the way users
are authenticated to those applications in the cloud.
[Heading: Cloud Security Risk Areas. A table explaining some of the domains of Cloud Security Alliance is
displayed. The table includes four rows and two columns. The columns are as follows: DOMAIN and
GUIDANCE DEALING WITH. The row details are as follows: For DOMAIN Portability and Interoperability,
GUIDANCE DEALING WITH is The ability to move data or services from one provider to another, or bring it
entirely back in-house. Together with issues surrounding interoperability between providers. For DOMAIN
Traditional Security, Business continuity & Disaster Recovery, GUIDANCE DEALING WITH is How cloud
computing affects the operational processes and procedures currently used to implement security, business
continuity, and disaster recovery. The focus is to discuss and examine possible risks of cloud computing, in hopes
of increasing dialogue and debate on the overwhelming demand for better enterprise risk management models.
Further, the section touches on helping people to identify where cloud computing may assist in diminishing
certain security risks, or entails increases in other areas. For DOMAIN Data Center Operations, GUIDANCE
DEALING WITH is How to evaluate a provider's data center architecture and operations. This is primarily
focused on helping users identify common data center characteristics that could be detrimental to on-going
services, as well as characteristics that are fundamental to long term stability. For DOMAIN Incident Response,
Notification and Remediation, GUIDANCE DEALING WITH is Proper and adequate incident detection,
response, notification, and remediation. This attempts to address items that should be in place at both provider
and user levels to enable proper incident handling and forensics. This domain will help you understand the
complexities the cloud brings to your current incident-handling program. For DOMAIN Application Security,
GUIDANCE DEALING WITH is Securing application software that is running on or being developed in the
Cloud. This includes items such as whether it's appropriate to migrate or design an application to run in the
cloud, and if so, what type of cloud platform is most appropriate (SaaS, PaaS, or IaaS).]
Some cloud services will allow us to use keys. These keys might be used for authenticating devices to cloud
services or identifying specific users to specific applications. The keys could also be used to encrypt data. Next,
we've got Identity and Access Management. This is where we would consider whether or not we can use a
centralized identity provider, so identity federation, to allow for Single Sign-On to multiple cloud-based
applications. The next domain is Virtualization. You can't have cloud computing without virtualization. So we
have to think about whether we would be provisioning virtual machines ourselves in the cloud and how isolated
those virtual machines are from other cloud tenant virtual machines. The last domain is Security as a Service.
This means that we could outsource things like antivirus scanning or things like firewalls to a cloud provider. So
we have to evaluate these offerings from various cloud providers to find something that fits our specific business
requirements. In this video, we discussed cloud security risk areas.
[Heading: Cloud Security Risk Areas. A table explaining some more domains of Cloud Security Alliance is
displayed. The table includes four rows and two columns. The columns are as follows: DOMAIN and
GUIDANCE DEALING WITH. Some of the rows in the table are as follows: For DOMAIN Encryption and Key
Management, GUIDANCE DEALING WITH is Identifying proper encryption usage and scalable key
management. This section is not prescriptive, but is more informational in discussing why they are needed and
identifying issues that arise in use, both for protecting access to resources as well as for protecting data. For
DOMAIN Identity and Access Management, GUIDANCE DEALING WITH is Managing identities and
leveraging directory services to provide access control. The focus is on issues encountered when extending an
organization's identity into the cloud. This section provides insight into assessing an organization's readiness to
conduct cloud-based identity, Entitlement, and Access Management (IdEA). For DOMAIN Virtualization,
GUIDANCE DEALING WITH is The use of virtualization technology in cloud computing. The domain addresses
items such as risks associated with multi-tenancy, VM isolation, VM co-residence, hypervisor vulnerabilities, etc.
This domain focuses on the security issues surrounding system/hardware virtualization, rather than a more
general survey of all forms of virtualization. For DOMAIN Security as a Service, GUIDANCE DEALING WITH is
Providing third party facilitated security assurance, incident management, Compliance attestation, and identity
and access oversight. Security as a Service is the delegation of detection, remediation, and governance of security
infrastructure to a trusted third party with the proper tools and expertise. Users of this service gain the benefit of
dedicated expertise and cutting-edge technology in the fight to secure and harden sensitive business
operations.]
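To illustrate the encryption and key management point above, here is a minimal sketch using the third-party cryptography library's Fernet recipe for symmetric encryption; in practice the key would come from a managed key store rather than being generated inline, and the plaintext shown is a made-up example.

```python
from cryptography.fernet import Fernet

# In a real deployment this key would be issued and stored by a key management service.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Customer record: account 12345, balance 99.10"
ciphertext = cipher.encrypt(plaintext)   # safe to store in cloud object storage
recovered = cipher.decrypt(ciphertext)   # requires possession of the key

assert recovered == plaintext
print("Ciphertext length:", len(ciphertext))
```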
Assessing Cloud Service Security Offerings
Learning Objective
After completing this topic, you should be able to
describe how to assess security offerings for cloud services
1.
In this video, I'll discuss how to assess cloud service security offerings. It's important that we take a structured
approach when evaluating offerings from various cloud service providers. We need to make sure that we match
their security offerings with our specific organizational requirements. A cloud risk assessment assists
organizations when they evaluate the suitability of migrating services to cloud environments. They can be
applied to assess potential and inherent risks in cloud deployments. Now there's never going to be zero risk.
There's always some level of risk when we think about moving or creating IT services in the cloud. What's
important is how we can manage those risks. The risk assessment might be carried out in conjunction with a
third-party partner or even a consultant. Here's the process overview for conducting a cloud risk assessment.
First, assets need to be cataloged, categorized, and classified. We need to make sure we know which assets
have the most value to a specific organization. We then would start to look at selecting a cloud platform, an
appropriate platform that meets our specific business needs. For example, one cloud provider might offer
software solutions in the cloud that we need while another cloud provider may not have those available.
We should then map our defined assets to compliance and security. Now that will be specific to an organization
or any specific industry in some cases. We should also then map the security requirements that we've now
identified against the cloud service provider capabilities. Do they fit the bill? Can they meet our requirements for
security? Then we need to define security responsibilities that could be broken between the cloud customer and
the cloud service provider, depending upon the cloud service model being used. We can then integrate security
agreements and mechanisms into the service-level agreement. The service-level agreement is the document
between a cloud customer and a provider that outlines things like whose responsibility various aspects of
security fall under, guaranteed uptime, response time, and so on. So in other words, the SLA can be negotiated.
It's not carved in stone, and it's not the same for every cloud tenant. Finally, throughout the deployment we can
monitor and audit against our cloud risk assessment, based on the organization's needs, the
identified assets, and the risks to those assets.
[Heading: Assessing Cloud Service Security Offerings. In the process of Cloud Risk Assessment, in addition to
monitoring and auditing through the deployment it is important to monitor alerts and attack history.]
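The mapping and gap-finding steps in this process can be pictured as simple set arithmetic between the controls an asset's classification requires and the controls a candidate provider offers. The Python sketch below is purely hypothetical; the control names, classifications, and assets are made up for the example.

```python
# Hypothetical control requirements per data classification.
required_controls = {
    "confidential": {"encryption-at-rest", "encryption-in-transit", "mfa", "audit-logging"},
    "internal": {"encryption-in-transit", "audit-logging"},
    "public": {"audit-logging"},
}

# Hypothetical capabilities advertised by a candidate cloud provider.
provider_controls = {"encryption-in-transit", "audit-logging", "mfa"}

# Cataloged assets mapped to their classification.
assets = {"customer-db": "confidential", "marketing-site": "public"}

for asset, classification in assets.items():
    gaps = required_controls[classification] - provider_controls
    if gaps:
        print(f"{asset}: security gaps to address in the SLA -> {sorted(gaps)}")
    else:
        print(f"{asset}: provider meets the required controls")
```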
The Cloud Security Alliance model for mapping compliance to security controls is listed here, whereby we can
look at our Compliance Model. For example, we might look at Code Review if we're doing application
development in the cloud. If we're using Firewalls through Security as a Service, we would look at that. We
would look at Encryption, Anti-Virus, Monitoring/IDS/IPS, Patch/Vulnerability Management. There are many
different things we might be running through the cloud service provider. We then need to find any security gaps.
Then that can be applied to our service-level agreement where we define security responsibility. The Cloud
Security Alliance's STAR Self-Assessment program is free to all cloud providers, and it lets them submit self-
assessment reports that document their compliance with CSA-published best practices for cloud services. The
registry includes such luminaries as Amazon Web Services, Box.com, HP, Microsoft, Ping Identity, and many
others. Cloud providers can submit two different types of reports to indicate their compliance with CSA best
practices. The first is the Consensus Assessments Initiative Questionnaire, and then there's the Cloud Controls
Matrix.
[Heading: Assessing Cloud Service Security Offerings. The Cloud Security Alliance model is displayed. This
model explains how mapping compliance to security controls and then to the deployed cloud model assesses security risk levels
and reveals security gaps. This model includes the Compliance Model, Security Control Model, and Cloud Model.
The Compliance Model is connected to the Security Control Model, which is then connected to the Cloud Model.
The Compliance Model includes a list of items such as Code Review, Firewalls, WAF, and Encryption that might
be running through the cloud service provider. The Security Model includes the possible areas where security
gaps can occur. This model helps to apply the information procured in that stage to the next stage. In this stage,
the Cloud model is used and includes the foundation, middleware, and complete applications. The foundation is
known as IaaS which includes the items such as APIs, Hardware, and Facilities. The middleware is known as
PaaS and is on top of the IaaS. The middleware includes the items such as Integration & Middleware. The
complete applications are known as SaaS and are on top of PaaS. The complete applications include the items
such as APIs, Applications, and Metadata. The Consensus Assessments Initiative Questionnaire is also known
as CAIQ. The Cloud Controls Matrix is also known as CCM.]
The CSA web site provides an overview of the STAR program where cloud providers can offer their own
assessments to determine their compliance with the standards listed here by the Cloud Security Alliance.
Assuming a cloud provider will do the assessment, what gets assessed? The first is privileged access, so the
hiring practices and oversight of privileged administrators working in the data center owned by the cloud service
provider. Then there is compliance with standards and specific certifications, including external audits that might
be conducted against the cloud service provider. Data segregation takes a look at the cloud service provider's
mechanisms for isolation, encryption, and data transit security. Data location deals with international boundaries
and associated legal issues. So we might require, for example, with our adoption of cloud services that data
centers reside within national boundaries.
[Heading: Assessing Cloud Service Security Offerings. The Google Chrome browser window is displayed. It
displays the CSA Security, Trust & Assurance Registry (STAR) tabbed page by default. The tabbed page
displays the following URL in the address bar: https://cloudsecurityalliance.org/star/. This tabbed page includes
information on CSA Security, Trust and Assurance Registry, or STAR. The CSA's STAR Self Assessment
program includes assessments that test for compliance. Compliance includes Cloud Service Provider or CSP
standards, certifications and external auditing, alert mechanisms, and CSA STAR status. Data transit security
refers to the use of SSL.]
Furthermore, we can assess the availability in terms of the uptime, failover, and scalability capabilities of the
cloud service provider. For backup, we would look at physical storage and backup hardware at the cloud service
provider end. Then there's recovery, multilocation backups, business continuity capabilities that are put in place
by the cloud service provider. For investigative purposes, we need to find out what access there is to business
records at the cloud service provider if they are needed. We have to look at vendor lock-in: what's the ability for
cloud tenants to move their data and applications to a different provider? That's going to be harder to do if offerings
from a cloud service provider are proprietary. At the protection level, we then have to look at system and data
protection mechanisms at the cloud service provider end. So when these types of assessments are conducted
and reported back to CSA from a cloud provider, they are then made available for cloud consumers when they
are assessing various solutions. In this video, we discussed how to assess cloud service security offerings.
SaaS Security Challenges
Learning Objective
After completing this topic, you should be able to
describe the challenges associated with security in a Software as a Service or SaaS
offering
1.
In this video, I'll talk about Software as a Service security challenges. With Software as a Service, or SaaS, we
need to consider security, both from the cloud consumer standpoint but also from the cloud service provider
standpoint. Let's start with a bit of a refresher on SaaS. With SaaS, applications are consumed over the Internet
using a thin client computing device. The SaaS infrastructure is such that software and the data are primarily
the responsibility of the provider, and the cloud service consumer has little control over that stored data
and the application code. But there are exceptions. For example, a cloud consumer might have data stored in
the cloud, but that data might be synchronized to their local device so that they still have offline access if they
don't have an Internet connection. The cloud service consumer is usually unaware of the location or format of
the data storage. With SaaS, when a security incident occurs, the cloud service consumer is reliant upon the
cloud service provider for analysis of the incident, an explanation, and ultimately a resolution. So there's a lot of
trust here in the third party. In this scenario, the consumer needs to validate somehow that the provider has
instituted the proper security measures. One way to do this is to look at any third-party audit results that were
conducted against the cloud service provider.
From the cloud service provider perspective, there are multiple areas that need to be looked at when it comes to
security, one of which is network security. We want to make sure that each tenant has a separate way to
transmit their own network traffic; that could be done through virtual networks. But at the same time, we need to
control who is allowed access to one of those virtual networks in the cloud in the first place. Resource locality
deals with where data is actually stored. That could be in a certain data center or even a different geographical
area of the world. In some cases, it's hard to pinpoint exactly where our data is actually being stored. Ideally, the
cloud service provider will have adopted some kind of security standard that they adhere to in the cloud. There
needs to be data segregation and tenant isolation. This occurs both at the network level in the cloud, at the
virtual machine level, even at the application level. We might have multiple cloud tenants using the exact same
SaaS application through the cloud, but we would want to have multiple instances of that application running,
one for each tenant, for security purposes.
How we authenticate users and devices to our SaaS offerings will depend upon the specific software and the
cloud provider. We might use identity federation with a centralized identity provider. We might require multifactor
authentication for enhanced security and so on. For web application security, we need to make sure that the
application developers have thought of security through each phase of the system development life cycle.
Sometimes as a cloud service consumer, there's no way that we can know that, but we want to make sure that
web applications are using some kind of a secured transport, such as HTTPS, as opposed to the less secure
HTTP. Access control on data can be applied in multiple ways, including by user, by group, even using role-
based access control. Data confidentiality can be achieved with encryption. We might do that, for example, with
data stored in a database hosted in the cloud. Integrity verifies the trustworthiness of data in that it's valid, not
corrupt, and hasn't been tampered with. That can be done through a hashing mechanism, or it might be done
when sending transmissions with digital signatures. Finally, cloud service providers need to audit activity for
their SaaS offerings. They might have to do that in some cases for compliance with regulations.
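To make the hashing idea for integrity concrete, here is a minimal Python sketch that uses the standard hashlib module to record and later verify a file's SHA-256 digest; the file name is only an example.

import hashlib

def sha256_of_file(path, chunk_size=65536):
    # Read the file in chunks so large files don't have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when the data is stored or sent...
expected = sha256_of_file("customer_export.csv")

# ...and verify it later to confirm the copy hasn't been altered or corrupted.
if sha256_of_file("customer_export.csv") != expected:
    raise ValueError("Integrity check failed: file contents have changed")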
There are a number of security attacks and vulnerabilities related to Software as a Service, including injection
attacks against SQL databases, operating systems, or LDAP servers running in a cloud environment. Injection
attacks allow an attacker to inject data that wasn't intended to be accepted by the application, and sometimes
that can result in the attacker gaining elevated privileges. Insecure authentication and session management
could also be a problem. Even if we're using identity federation, especially if we've set it up on our own premises, we might not have configured it correctly, so we want to make sure that authentication occurs as intended. We might also have to adhere to specific rules depending upon the SaaS application that
users need to be authorized against. Poorly designed data access control can also result in security breaches.
There should be periodic audits, not just on use of SaaS offerings but also on accessed data. We should take a
look at how data access control has been applied. We want to make sure there's no multitenant leakage. There
needs to be strict isolation between multiple cloud tenants to keep their data and software separate from one
another.
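To illustrate the injection point above, here is a small Python sketch using the built-in sqlite3 module as a stand-in database; the table and column names are hypothetical. The key idea is that a parameterized query keeps user input out of the SQL statement itself.

import sqlite3

def find_orders(conn, customer_id):
    # The placeholder (?) makes the driver treat customer_id strictly as data,
    # so input such as "1; DROP TABLE orders" cannot alter the statement.
    cur = conn.execute(
        "SELECT id, total FROM orders WHERE customer_id = ?",
        (customer_id,),
    )
    return cur.fetchall()

# By contrast, building the statement with string formatting is vulnerable:
# conn.execute(f"SELECT id, total FROM orders WHERE customer_id = '{customer_id}'")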
Cross-site scripting attacks take advantage of the fact that, for example, web sites or input fields on web sites
don't properly check what people are submitting, particularly through forms. So this is really an issue with the
developers not properly validating data, and it could result in an attacker injecting malicious code into a site that
is executed by another user. We also want to make sure that sensitive data like user IDs, passwords, and so on
are kept secured. We want to prevent data leakage, such as sharing through social media, through e-mail, or by printing.
But at the same time, we want to make it convenient to gain access to this information. There can also be
component vulnerabilities that could come in the form of plugins for a web browser that's required for us to
access a SaaS offering. Unchecked redirects and forwards can also be problematic, whereby we might have a
user who unwittingly clicks a link that redirects them to a malicious web site. On the consumer side, from the security perspective, we need to determine whether an application was developed on a third-party platform, such as Amazon Web Services, or whether the vendor is hosting, on its own infrastructure, an application built by another party. We also have to consider how the cloud service provider monitors the application, not only for security but for performance reasons as well.
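As a minimal illustration of the cross-site scripting issue described above, the following Python sketch escapes user-supplied input before it is placed into an HTML page; in practice a templating framework usually performs this encoding automatically.

import html

def render_comment(comment):
    # html.escape converts <, >, &, and quotes into HTML entities, so a payload
    # like <script>alert(1)</script> is displayed as text rather than executed.
    return "<p class='comment'>" + html.escape(comment) + "</p>"

print(render_comment("<script>alert('xss')</script>"))
# <p class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>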
[Heading: SaaS Security Challenges. One of the Software as a Service or SaaS security vulnerabilities is poorly
configured access security, or role-based secure access (also known as RBSA), to databases, middleware, and
operating systems.]
We should have an idea of what type of event logging is carried out by the provider and, of course, very
important, do we as a consumer have access to that logging data? What protection mechanisms and
techniques might be utilized at the cloud service provider data center? Often cloud service providers will have
videos published on YouTube or documentation that you can acquire that explain their mechanisms and
techniques for their data centers. It is in their interest to assure customers that their IT workloads and data are safe in those data centers. We have to consider whether or not data retention and ownership policies are met
from a given cloud service provider with their SaaS offerings. This might be a showstopper. We might have to
go with a different cloud provider if we have certain data retention rules we must adhere to. Finally, we want to
make sure cloud service providers protect our contact and general information. Continuing our security issues
related to cloud consumers, we have to think about how the cloud service provider's hardware is protected from
disasters, such as flooding. Do they have raised floors? Are their servers on shelves? What's their business
continuity policy? And this is perfectly valid information that we can ask when we are evaluating various cloud
service providers.
We should also get information related to the cloud service provider's uptime and availability statistics. If I'm
going to be running a mission-critical app where uptime is of the essence and it always needs to be available, then we're going to want to do our homework carefully here. We also need to know what the consequences are of
the provider not adhering to the service-level agreement. So if they don't deliver the promised uptime,
availability, response time, and so on, what is the impact? Perhaps the cloud service provider will provide the
service for free for a period of time if they don't adhere to the service-level agreement. We should also have an
idea of our support options related to that provider, help desk, hours of support, quality of the support staff, and
so on. Cloud service providers have to periodically introduce new or upgraded features. So we should find out if
there's some kind of a notification before that happens. The last thing we want is for our large user base in our
organization to come into work one morning only to see that the entire interface for a SaaS offering has
changed overnight.
Authentication could also be integrated with our existing on-premises identity system. We might do that for
Single Sign-On with identity federation. Finally, we should think about multiple tenants that are being served and
making sure that the data isolation and performance are accounted for by the cloud service provider because
the more cloud tenants they have simultaneously using cloud services, the more that potentially could
negatively impact performance for our cloud services. Of course, we have to think about how we're paying for
our cloud services. What is the subscription method? Is it monthly? Is it based on consumption or perhaps it's a
variation of both? Finally, can the cloud service provider give us customer references? And can we contact them
to see what their experience has been? In this video, we discussed SaaS security challenges.
[Heading: SaaS Security Challenges. One of the SaaS security challenges is that if the application is serving
multiple tenants, find out what measures are in place to protect the safety, integrity, and access of data.]
SaaS Security Best Practices
Learning Objective
After completing this topic, you should be able to
describe the best practices for securing a Software-as-a-Service or SaaS offering
1.
In this video, I'll discuss Software as a Service security best practices. Cloud service providers have the option
to adhere to best practices for their SaaS offerings. This includes identity and authentication to their offerings.
This could include Single Sign-On, or SSO, whereby we can use identity federation or a centralized identity
provider. And once a user has successfully authenticated initially, they won't have to keep logging in to SaaS
applications as they keep accessing different applications. Instead, their existing credentials will be passed to
the applications to authorize user access. That could be done through a security token that gets issued to the user
device upon initial successful authentication. We can also use role-based access control, or RBAC, whereby
users that are placed as occupants within specific roles are granted specific access to resources, such as SaaS
applications.
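As a rough sketch of the role-based access control idea, the following Python example maps roles to sets of permissions and checks a user's roles before granting access; the role and permission names are purely illustrative.

# Each role carries a set of permissions; users occupy one or more roles.
ROLE_PERMISSIONS = {
    "sales_user": {"crm:read"},
    "sales_manager": {"crm:read", "crm:write", "reports:read"},
    "administrator": {"crm:read", "crm:write", "reports:read", "users:manage"},
}

def is_authorized(user_roles, required_permission):
    # Grant access if any of the user's roles includes the required permission.
    return any(required_permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_authorized({"sales_manager"}, "reports:read"))   # True
print(is_authorized({"sales_manager"}, "users:manage"))   # False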
At the application security and vulnerability testing level, we should ensure that cloud service providers are
being audited through third parties. Those results should be made available to cloud customers. There needs to
be continuous cloud service provider incident, performance and alert data. That needs to be provided by the
provider and accessible by cloud customers so that they can determine if any portions of the cloud provider's
infrastructure are available versus unavailable, and in the case that they are unavailable, what the reason for it
is and for how long. Ideally, cloud service providers will have been audited against standards such as SAS 70. SAS stands for Statement on Auditing Standards, and a SAS 70 audit assesses a service organization's internal controls, including controls that protect data when it's stored as well as when it's being transmitted. There should also be a clearly defined patch management strategy at the
cloud service provider so that we know when updates or maintenance will take place because it could
negatively impact our business as a cloud consumer.
Additional best practices for the cloud service providers include a developed backup and recovery plan that
works. Now this is important because we might be paying specifically for cloud backup or archiving services.
We want to make sure that, for example, if there's a problem in a single data center that our data perhaps is
available in other data centers where it's been replicated. The provider should also provide details of active
security controls at their data centers. This might be provided through literature or through videos that are
published on their web site or through YouTube. In some cases, if you're physically close to a data center
owned by a specific provider, you might be able to arrange a physical tour. That means making sure that their
servers, their staff, access to the facility, all of these things are secured properly. The provider should also be
able to demonstrate measures that support data isolation as well as security and integrity for cloud tenant
services and data. Now sometimes, this one is very difficult to prove. For example, how can we easily prove that
our virtual machines are kept completely isolated from other cloud tenants? There's really no way to tell that
from within a virtual machine, because allowing that visibility would defeat the purpose of restricting access from that virtual machine to the host on which it's running or to other virtual machines.
Providers should also adopt a security-based development framework if they build the software for their SaaS applications. These frameworks insist upon addressing security at each phase of the system development life cycle and reassessing it as each phase completes. Cloud service providers
should also consider following OWASP guidance on web application security. OWASP stands for Open Web
Application Security Project. OWASP is concerned with web application security, and SaaS offerings are offered
over the Internet usually through some kind of a web interface. We also have the option of looking at the
OWASP Top 10 web application security risks. This is something that cloud providers as well as we cloud consumers should take a look at so that we can educate ourselves about some of the inherent risks we might be taking on. In this video, we discussed SaaS security best practices.
[The Google Chrome browser window is displayed and includes the following partially displayed tabs:
Category:OWASP Top Ten and OWASP Top 10 - 2013.pdf. The Category:OWASP Top Ten tab is selected by
default and this tabbed page includes information about Category:OWASP Top Ten Project. The presenter
clicks the OWASP Top 10 - 2013.pdf tab, as a result, the OWASP Top 10 - 2013.pdf tabbed page is displayed.
This tabbed page includes information on OWASP Top 10 - 2013, the ten most critical web application security
risks.]
Secure Software Development
Learning Objective
After completing this topic, you should be able to
describe secure software development practices
1.
In this video, I'll talk about secure software development. Some cloud service providers build their own software
offerings, whereas others simply host software built by another party. Either way, whoever develops those software applications needs to be security-minded. They should adopt a security-minded development framework that applies to their work. Applications should be developed against this background of security considerations, and
they should include security items, such as authentication, authorization, auditing, confidentiality, isolation, and
availability. Some of these items could be built directly into an application, whereas others, such as
authentication, really shouldn't be. We don't want specific applications doing their own authentication. We want
that done elsewhere through a centralized identity provider. We should also understand potential threats against
our application and then include the countermeasures or controls to limit those threats and to mitigate the risk.
For example, if we've got an application that accepts input on a web form, we want to make sure that we
properly validate fields on that form, so that reduces the possibility of things like injection attacks.
A security-minded application development framework will account for auditing and logging where security-
related events get recorded and, in some cases, made available to cloud customers. Authentication means the
proving of one's identity, whether that's a user, a device, or perhaps even another web service. Authorization
occurs only after successful authentication, and it means that we are giving access to a resource for
successfully authenticated entities. A development framework should also include communication of data or
how data that results from our application is stored. Now we might not necessarily build in secured
communication and storage into our application. Our application simply might use what's already out there. For
example, it might already use things like HTTPS transmissions, which secure transmissions over the Internet.
Configuration management deals with how applications distribute configuration and admin changes in a security
context. Some cloud service providers will do this, or they will give their cloud customers the ability to govern
those changes over their user base.
[Heading: Secure Software Development. A security-minded application development framework will account
for auditing and logging. In auditing and logging, security-related events are recorded, monitored, and audited.]
A security-minded application development framework should also account for cryptography, whereby we might
enforce data confidentiality through encryption and integrity through perhaps file hashing or digital signatures.
Software developers also have to carefully take into account input and data validation. If we are allowing input
from a user, or from another service, or a device, we want to make sure that we check the data that's sent to the
application to make sure that it's formatted correctly, to make sure it falls within specific boundaries. And we
should also make sure we test our applications through fuzzing where we feed large amounts of randomized
data to input fields to verify that the application doesn't crash, or give elevated access, and so on. Application
developers also need to account for exceptions. There needs to be proper error handling. It's fine to test an
application when we feed it the correct data, and everything works as it's designed to work. But what if there are
erroneous conditions of some kind? We want to make sure that those errors are trapped, and developers
should take care that error messages don't reveal too much information.
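Here is a hedged Python sketch of the input validation and error handling points above; the SKU format, quantity range, and function name are invented for illustration. Note how the caller sees only a generic error while the detail goes to a log.

import logging
import re

logger = logging.getLogger("orders")
SKU_PATTERN = re.compile(r"^[A-Z]{3}-\d{4}$")   # expected format, e.g. ABC-0123

def add_order_line(sku, quantity):
    # Validate untrusted input before it reaches business logic or storage.
    try:
        if not SKU_PATTERN.fullmatch(sku):
            raise ValueError("SKU does not match the expected format")
        qty = int(quantity)                      # raises ValueError if not numeric
        if not 1 <= qty <= 1000:
            raise ValueError("quantity is outside the allowed range")
        return {"sku": sku, "quantity": qty}
    except ValueError as exc:
        # Log the detail for developers, but surface only a generic message so
        # the error doesn't reveal internal rules or stack traces to the caller.
        logger.warning("Rejected order line: %s", exc)
        raise ValueError("Invalid order data") from None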
Sensitive data, whether it is held in memory, transmitted, or stored on disk, should be dealt with appropriately. For example, memory-resident sensitive data can be scrubbed from memory after it has been used, and data can be transmitted and stored in an encrypted format. Session management deals with the
establishment, management, and teardown of sessions with client devices, for example, in a web application. We might use a web application on a web server that requires HTTPS, in which case the web server itself would require a PKI security certificate in order to enable HTTPS. The Cloud Security Alliance covers Software as a
Service and cloud applications within Domain 10 of the guide, specifically Version 3 of the guide, where it
discusses the adoption of a secure system or software development life cycle methodology. This means that
security is used through each phase of software development and not just added on at the end as an
afterthought.
[Heading: Secure Software Development. A security-minded application development framework will account
for session management, which will ensure the security and integrity of user sessions.]
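To illustrate storing sensitive data in an encrypted format, as mentioned above, here is a minimal sketch using the third-party Python cryptography package's Fernet interface; in a real deployment the key would come from a key management service rather than being generated inline.

# Requires: pip install cryptography
from cryptography.fernet import Fernet

# In a real system the key would be retrieved from a key management service,
# never hard-coded, and rotated on a schedule.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a sensitive value before it is written to storage or transmitted.
ciphertext = cipher.encrypt(b"4111-1111-1111-1111")

# Decrypt it only where it is legitimately needed, then discard the plaintext.
assert cipher.decrypt(ciphertext) == b"4111-1111-1111-1111"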
It also specifies that we should verify and validate security mechanisms. There needs to be a periodic code
review, ideally by someone other than the developers so that they can identify any security issues in the code.
There needs to be thorough testing and so on. Authentication and authorization should also be administered in
a central way outside of the application itself. Penetration tests could be conducted against the application to
make sure that there's no way for malicious users to gain elevated permissions or to crash the service. And of
course, application-related events should be logged and monitored on a periodic basis. In this video, we
discussed secure software development.
[Heading: Secure Software Development. The Cloud Security Alliance covers Software as a Service and cloud
applications within Domain 10 of the guide, specifically Version 3 of the guide, where it discusses the adoption
of a secure system or software development life cycle methodology. It requires that code review, testing, and interoperability testing are conducted during the application construction phase. It also requires that authentication, authorization, administration, auditing, and policy be administered in a central way
outside of the application itself.]
The Cloud Cube Model
Learning Objective
After completing this topic, you should be able to
describe the Jericho Forum Cloud Cube Model for defining cloud characteristics
1.
In this video, I'll discuss the Cloud Cube Model. The Jericho Forum, a security thought-leadership group, was founded in the UK in 2004 and concluded its work in 2013. Here we can see the Jericho Forum has documentation available online that talks about selecting different cloud formations for secure collaboration. The
purpose of the Jericho Forum was to define and promote the breaking down of security barriers, otherwise
called de-perimeterization, and to promote enterprise security collaboration. The Jericho Forum consisted of
user and vendor members and also collaborated with the Cloud Security Alliance, CSA, who are still active
today. The Cloud Cube Model defines a number of cloud formations. One of its objectives is to highlight that not everything is best implemented in the cloud. Sometimes, it makes more sense to operate some business functions using a traditional non-cloud approach, and sometimes you don't have a choice due to regulations; we might have to store data and run services on-premises. Another objective of the Cloud Cube Model is to explain the different cloud formations that the Jericho Forum has identified and to describe the key characteristics, benefits, and risks of each of them.
[Heading: The Cloud Cube Model. In Macintosh OS, the cloud_cube_model_v1.0.pdf is open in the Opera
browser window. In this pdf, information related to Cloud Cube Model: Selecting Cloud Formations for Secure
Collaboration is displayed. The white paper on Cloud Cube Model – Selecting Cloud Formations for Secure
Collaboration published in April, 2009 provides the definition of available cloud computing formations and the
guidance on selecting the most appropriate model.]
Finally, the last objective is to provide a framework for exploring the nature of these different cloud formations and the issues that need to be solved to make them safe and secure to use. We can see that
with the Cloud Cube Model, we have to determine whether we are outsourcing specific activities related to IT or
insourcing them. We have to determine whether we're going to be running solutions that are proprietary or open
source. Let's take a look at the cloud formations. The first is internal and external, where we determine the
location of cloud data. Sometimes, it's not black and white. We could have cloud data, but we might be
synchronizing it locally, so we still have access to it if we lose our Internet connection. Once the connection
comes back up, then any changes we've made locally in offline mode would be synced back up to the cloud.
The second cloud formation is proprietary and open, where we determine whether we're using proprietary or
open source cloud software. For example, OpenStack is open source cloud software, whereas Microsoft Azure
is proprietary.
The third cloud formation deals with perimeterized or de-perimeterized architectures. Now remember, with de-
perimeterization, we're really talking about removing boundaries between an organization and the outside world,
such as a connection to a cloud service provider. Now at the same time, we need to make sure that the
organization's assets are protected, so we have to have a secure solution even though we're moving that
boundary between a local network owned by an organization and a public cloud provider. The fourth cloud
formation is insourced or outsourced, whereby with outsourced, we look at whether or not the cloud service is
provided by a third party. We might even have outsourced services from various third parties. It doesn't have to
be all from the same cloud service provider. With insourced, the cloud service in question is provided and controlled internally. It runs on assets owned by the organization, such as compute infrastructure, and it's in the complete control of the organization. In other words, that would be a private cloud. In this
video, we discussed the Cloud Cube Model.
Cloud Network Infrastructure Security
Learning Objective
After completing this topic, you should be able to
describe the considerations for infrastructure security in cloud computing
1.
In this video, I'll discuss cloud network infrastructure security. One of the main issues with cloud network
infrastructure security is to protect consumer assets that are hosted in a multi-tenant cloud infrastructure. So
we've got multiple customers that are using the same physical compute infrastructure. Now data isolation can
be put in place by the cloud service provider, but we still need to consider these issues. The cloud infrastructure
consists of compute infrastructure; network infrastructure, both of which could be physical or virtual; and storage
infrastructure, which could be dedicated to a specific cloud tenant or shared among tenants. SaaS applications
are also vulnerable to attack. We don't have control, as cloud service consumers, of how SaaS applications are
implemented. We simply use those applications that are hosted by the cloud service provider. Now the one
thing we might be able to do is control authentication to those SaaS applications. SaaS applications include
items, such as Dropbox, Facebook, Twitter, Office 365, Google Apps, and many others. Other important security
elements include whether data integrity and confidentiality are maintained. Integrity assures us that data hasn't
been tampered with and that it's authentic. Confidentiality makes sure that data is seen only by authorized
users.
[Heading: Cloud Network Infrastructure. The cloud infrastructure consists of compute infrastructure, network
infrastructure, and storage infrastructure, all of which must remain secure to achieve required control.]
With authentication, we might consider the use of a central identity store through identity federation, which we
could also use to enable single sign-on to multiple SaaS applications. Then authorization is seamless and the
user doesn't have to keep entering in credentials. It's also important that applications are available. The uptime
is listed in the service-level agreement. On the Internet, OWASP has multiple publications dealing with what we
should look for with web application security. However, if we aren't developing or testing the web applications ourselves, we have little control. Finally, there's virtualization security, where, in the case of virtual machines, we need to make sure one tenant's VMs are kept isolated from another tenant's VMs; one way of achieving this is sometimes referred to as shielded virtual machines. Then there's network-level security. Ideally, all data traversing cloud provider networks, be
they physical or virtual, will be secured either through Secure Sockets Layer, SSL, or its successor, Transport
Layer Security, TLS. At the same time, all networks must remain available. This is also true at the client or
customer's side, where they should have redundant Internet connections, so that if one Internet connection fails,
they still have a way to get to the public cloud provider.
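As a small illustration of transport-level security, the following Python sketch uses the standard ssl module to open a connection that verifies the server's certificate chain and hostname; the hostname shown is just an example.

import socket
import ssl

def certificate_subject(host, port=443):
    # create_default_context() verifies the certificate chain and the hostname;
    # the connection fails with SSLCertVerificationError if either check fails.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return dict(entry[0] for entry in cert["subject"])

print(certificate_subject("cloudsecurityalliance.org"))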
The cloud provider should also look at hardware solutions that will deal with failover, if we have failed servers or
failed components; port protection; and also traffic management. For Platform as a Service and Software as a
Service, the cloud service consumer has really no control over the network infrastructure including its design
and security. So the cloud service provider should ensure via the service-level agreement that all network
protection security measures are in place. With Infrastructure as a Service, the cloud service consumer does
have some control because they are configuring infrastructure, such as network firewalls, virtual networks, even
public facing network configurations. Additional network-level security issues are related to attacks, such as
denial of service attacks, DNS hacking, routing table and ARP cache poisoning, XML denial of service attacks,
quality of service issues, man-in-the-middle attacks, and finally spoofing, which is the forging of some kind of
transmission, whether it's a packet on the network or an e-mail message sent to an inbox. Now if you're asking
yourself wouldn't I have to consider these in terms of network security even on my own on-premises network?
And the answer, of course, is yes. We just want to make sure we extend that thought process to the cloud
provider network.
The cloud service provider needs to ensure individual client asset integrity. This is done through data isolation.
There are other network-level security issues, such as physical access to network equipment in the datacenter.
There also needs to be alternate sources of power, in case of a power outage. Normally, in datacenters, that's
done with generators. We then have to account for sources of damage to equipment within a physical
datacenter, such as water, fire, HVAC, rodents, and so on. Anything that could impede a service running in the
datacenter needs to be accounted for. The cloud service consumer should and can request certifications for the
cloud service provider related to network security or adopted standards. The cloud service consumer is completely reliant, remember, on network traffic and incident reporting data from the cloud service provider. It's out of the control of the cloud service consumer. We should also consider the use of Security as a Service as per the Cloud Security Alliance. Security as a Service offers security mechanisms hosted by the cloud provider.
Some of those include things like firewalls, intrusion detection systems, file integrity monitoring, denial of service
protection, antivirus protection, traffic flow, and so on. In this video, we talked about cloud network infrastructure
security.
[Heading: Cloud Network Infrastructure. Some of the uses of Security as a Service per the Cloud Security Alliance are displayed. The providers of Network Security SecaaS must provide cloud customers with details of
data threats, details of access control threats, access and authentication controls, and security gateway such as
firewalls, WAF, and SOA/API. The providers of Network Security SecaaS must also provide cloud customers
with security products such as IDS/IPS, Server Tier Firewall, File Integrity Monitoring, DLP, and Anti-Spam, and
security monitoring and incident response. In addition, the providers of Network Security SecaaS must provide
cloud customers with DoS protection/mitigation, Secure "base services" such as DNSSEC, NTP, Oauth, SNMP,
management network segmentation and security, traffic/netflow monitoring, and integration with Hypervisor
layer.]
Host-level Security
Learning Objective
After completing this topic, you should be able to
describe the host-level security considerations in cloud computing
1.
In this video, I'll discuss host-level security. When we talk about a host in the cloud context, we're talking about
the hypervisor host, the physical machine, on which multiple virtual machines can run concurrently. Cloud
platforms all require physical hosts, storage, and wired networks. But all of those can fail. To mitigate loss and
downtime related to their failure, the cloud service provider can implement physical redundancy and security for
hardware to make sure that we've got, for example, multiple physical cluster nodes running virtual machines,
such that if we have a failure of one physical cluster node or hypervisor host, the virtual machines that it was
running can fail over to a remaining cluster node, thus minimizing downtime for those virtual machines. The cloud service provider should also ensure that there is a valid backup policy for data and even a replication
schedule. For example, the provider might replicate data for all or some cloud customers between datacenters.
The provider should also deploy failover techniques not just for hardware, but for running services and
databases. They could even deploy load balancing. In some cases, this is offered as a configurable service to
cloud customers. Load balancing allows us to distribute network traffic for busy network services.
Finally, the cloud service provider should deploy access control best practices to those host servers. One of the
things that would be considered in that case would be the principle of least privilege, whereby we only grant
administrators and users privileges they need to complete their job tasks and no more. On the cloud host, there
are also hardware and service issues, such as data loss due to power failure. Cloud service provider
datacenters need to have alternate ways of getting power. This is usually through backup generators. Then we
need to think about data loss due to the lack of physical security. It's important that datacenters have strict
physical security to control access into and out of datacenter facilities as well as access to racks of physical
computing equipment. At the same time datacenter administrators should not have the ability to go into a
customer virtual machine and see the data or its running processes. There's also data loss due to poor
redundancy. If we have a failed server or only a single backup of data, we want to make sure that we have an alternate copy to go to if one is corrupt. So having a single instance of backup information, for example, is a
poor practice, both for cloud service providers as well as for cloud consumers.
There could also be data loss due to network outages. Sometimes, if we're in the midst of writing something over a network to the cloud or to an on-premises location, or if we're in the middle of synchronizing something, we could either lose the data entirely or it could end up corrupt or truncated. So again, we fall back on reliable backup
mechanisms. Regardless of the operating system and its operation, the following can assist in establishing
security. We need strong authentication policies that are applied to cloud hosts. This usually means the use of
multifactor authentication. So instead of allowing authentication to cloud host servers simply through username
and password, there might also be biometric authentication through fingerprint scanners, or the use of smart
cards, or PKI certificates. After successful authentication, personnel in the datacenter would be authorized
to access cloud host servers. There should also be a strong password policy, if passwords are being used.
Firewalls can also be used to allow or restrict certain types of traffic to and from different locations.
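As a simple sketch of a password policy check, the following Python function tests length and character-class rules; the thresholds are illustrative, and a real policy would also cover lockout, history, breach-list screening, and multifactor authentication.

import re

def meets_password_policy(password, min_length=14):
    # Length plus at least one lowercase, uppercase, digit, and symbol character.
    checks = [
        len(password) >= min_length,
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"\d", password),
        re.search(r"[^\w\s]", password),
    ]
    return all(checks)

print(meets_password_policy("Sunny-Day-2024!"))   # True
print(meets_password_policy("password1"))         # False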
Host servers should also have frequent backups and there should be multiple copies of backups stored in
alternate locations. This is where cloud service providers will often replicate to other datacenters. Operating
system updates will periodically need to be applied to the cloud host servers. Often what happens is that any resources running on those host servers are drained or moved over to other running servers so that we can update and potentially reboot the cloud host that requires updates; the services that were running there are then placed back on that host. And we can use shielded virtual machines, which allow isolation between different
tenant virtual machines. One way to do this is using SELinux. If the cloud service provider is using the Linux
operating system to run virtual machines, then they might consider the use of SELinux, which stands for
Security-Enhanced Linux. Here at the command prompt, on my Linux host, I've typed getenforce and it
returns "Enforcing", which means that SELinux is active. This is relevant because SELinux can allow for
isolation between different resources, such as virtual machines. Here in Linux, I can issue the virsh, as in virtual
shell, space, list, space, --all command to list virtual machines. Here we can see, we've got two of them,
[The root@rhel71-2:~ terminal command prompt window is displayed in the Linux application. At the bottom of
the command prompt window, root@rhel71-2:/vm_disks, root@rhel71-2:~, and root@rhel71-2:~ tabs are
displayed. The third tabbed page, root@rhel71-2:~, is open by default and at the [root@rhel71-2 ~] command
prompt, the getenforce command is entered and the result, Enforcing, is displayed. The presenter navigates to
the second tabbed page, root@rhel71-2:~, and at the [root@rhel71-2:~] command prompt, the virsh list --all
command is displayed. As a result, a table with two rows and three columns is displayed. The columns are ID,
Name, and State. The rows are as follows: For Id 2, Name is rhel7.0 and State is running; and for Id 3, Name is
dbserv and State is running. Then the virsh dumpxml dbserv|grep svirt command is displayed at the
[root@rhel71-2 ~] command prompt.]
and I can also use the virsh dumpxml command against a certain virtual machine, here dbserv, where I pipe the results to grep to filter the output, looking for svirt. What I would see here is that the running dbserv virtual machine has specific labels, such as c900 and c984. This is part of SELinux and its mandatory access
control labeling. If I look at the virtual hard disks in use by that virtual machine, I will see that it's using the same
labeling, in this case, c900 and c984. So what does this all mean? We can see the relationship with the 900 and
the 984 for the virtual machine hard disk file and the running virtual machine. Well, the relationship is that the
virtual machine can only access resources that are labeled in the same way. And this allows for virtual machine
shielding or isolation among multiple cloud tenants. In this video, we discussed host-level security.
[The root@rhel71-2:~ terminal command prompt window is displayed and the root@rhel71-2:~ tabbed page is
open. On this tabbed page, at the [root@rhel71-2 ~] command prompt, the virsh dumpxml dbserv|grep svirt
command is displayed. The presenter runs this command and the following is displayed:
<label>system_u:system_r:svirt_tcg_t:s0:c900,c984</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c900,c984</imagelabel> Then in the command prompt window, the presenter navigates to the root@rhel71-2:/vm_disks tabbed page. On this tabbed page, at the [root@rhel71-2 vm_disks] command prompt, the following command is displayed: ls -laZ The output that is displayed for this command is as follows: drwxr-xr-x. root root system_u:object_r:default_t:s0 dr-xr-xr-x. root root system_u:object_r:root_t:s0 -rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c900,c984 dbserv.qcow2 drwx------. root root system_u:object_r:default_t:s0 lost+found -rw-------. qemu qemu system_u:object_r:svirt_image_t:s0:c741,c768 webserv.qcow2 From this output, he highlights the following:
c900,c984 dbserv.qcow2. He then navigates to the root@rhel71-2:~ tabbed page and highlights the label values
c900,c984.]
Virtualization Host Security
Learning Objective
After completing this topic, you should be able to
describe considerations for security virtualization hosts in a cloud environment
1.
In this video, I'll discuss virtualization host security. Virtualization has allowed for the rapid growth of cloud
computing through its adoption by cloud service providers. We can virtualize many items including virtual
machines and virtual networks. The benefits to virtualization include elastic and rapid resource provisioning. If
we need, let's say, to fire up a number of virtual machines for a project, that can be done in minutes through the cloud, whereas, in the past, we would have had to order hardware for those servers, wait for it to arrive, then install the operating systems, and so on. At the same time, it's very quick and easy to deprovision
virtual machines when we no longer need them and we're only paying for what we use. So there's also that cost
benefit. With virtualization, we can share physical resources. Many different cloud tenants could have virtual
machines all running on a single set of hardware. At the same time, it's also important that service and
application data be kept isolated between cloud customers, which is entirely possible. Virtualization is
considered a green solution because instead of having 100 separate physical servers, for example, they could
all be running as virtual machines on one set of hardware at the same time. So it's less power consumption,
less cooling required, and so on.
Virtualization can also offer improved disaster recovery or failover management benefits. We could have
virtualization hosts as cluster nodes. So if one fails, the virtual machines it was running will be failed over to a
remaining cluster node. In terms of disaster recovery, we have the option of using virtual machine snapshots or checkpoints, which are two names for the same thing, and we can apply those checkpoints or snapshots to return the virtual machine to an earlier point in time. With virtualization, there is a reduction in vendor lock-in. And that's because
virtualization is pretty much standardized. If cloud service provider A is allowing us to run virtual machines in
their cloud infrastructure, we should be able to migrate those virtual machines to cloud provider B. And if they're
not the same format, there are many tools available on the Internet to convert between various virtual machine formats. But like all good things, there are also drawbacks. With virtualization, we've got a lot of eggs
in a single basket if we're not clustering. That means we could have many virtual machines running at the same
time on one physical server. What happens then if that physical server fails in some way, even if its network
connectivity fails? We have a problem because none of the virtual machines will be available. It's crucial with
virtualization that we have some kind of a failover clustering solution, so that if one physical host or a
component on one host fails, virtual machines can be failed over to another one.
Now there is a double-edged sword in that. We have great recovery options with virtual machines in terms of
the fact that we've got snapshots and checkpoints, but at the same time that could also be a drawback, whereby
we might have to configure backup agents within each virtual machine to recover lost information as well as set up backups for the virtualization host itself. So it's almost like we're backing things up twice. But that might
be necessary in some cases. In terms of application support, some apps will not work correctly in a virtualized
environment. They may, for example, need to talk directly to a specific type of hardware that isn't supported in
the virtualization environment. And also, a single misbehaving virtual machine could negatively impact other
virtual machines running on that same host. Other areas of concern are related to the Cloud Security Alliance
Domain 13, where they list virtual machine guest operating system security. Just because we have a virtual
machine running an operating system, doesn't mean it doesn't need to be hardened or secured as if it were
running on a physical host. It still needs to be updated. It needs to be hardened. We need to follow security best
practices and so on. This can be done through the use of templates in a large-scale environment, such as a
cloud service provider network.
We also have to think about the hypervisor running on the host physical computer. It also needs to be hardened.
Then there are client asset vulnerabilities, such as stored data, in which case, we might consider using
encryption to protect that information. With Domain 13, the Cloud Security Alliance also provides many
recommendations to limit security issues inherent in a virtualized environment. Part of that would include doing
things like applying updates and so on. So we need to harden each virtual machine just as we would if it were
physically running on physical hardware. We need to harden the hypervisor. We need to make sure virtual
machines are isolated at least at the tenant level. This can be done using mandatory access control
mechanisms built into the hypervisor operating system or through other tools. Virtual machine sprawl is an
interesting problem. Because it's so easy and quick to provision virtual machines, what can happen over time is that we've got plenty of virtual machines running that perhaps we've forgotten about and that are no longer required.
So it's a waste of performance resources as well as paying for something that you're not using. So often cloud service providers will have tools where we can run a query to determine which virtual machines have been accessed within a given period of time. We can also consider encrypting virtual machines. In essence, virtual machines
are collections of files, each of which can be encrypted to protect it from other tenants in a cloud computing
environment. Finally, we want to make sure that when virtual machines are removed or destroyed that there are
no remnants that remain, so that other cloud tenants or malicious users could get access to the data in them.
Think about the fact that a virtual machine will use at least one virtual hard disk file. It's just a file. And often,
those files can be mounted at the command line or in GUI tools, so that people could poke around and see what
was in the virtual hard disk. So when we talk about virtual machine destruction, really, we're talking about the
secure deletion of files related to virtual machines. In this video, we discussed virtualization host security.
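To illustrate the idea of removing data remnants, here is a hedged Python sketch that overwrites a file before deleting it; the path is hypothetical, and on SSDs, copy-on-write filesystems, and cloud block storage, overwriting in place is not sufficient on its own, which is why providers typically rely on cryptographic erasure.

import os
import secrets

def overwrite_and_delete(path, chunk=1024 * 1024):
    # Overwrite the file with random data, flush it to disk, then remove it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            block = secrets.token_bytes(min(chunk, remaining))
            f.write(block)
            remaining -= len(block)
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# overwrite_and_delete("/vm_disks/old_tenant_disk.qcow2")   # hypothetical path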
Application-level Security
Learning Objective
After completing this topic, you should be able to
describe application-level security in cloud computing
1.
In this video, I'll talk about application-level security. Application-level threats include unauthorized use of
applications and unauthorized access to the underlying data that results from those applications. This could
even take the form of, for example, a stolen smartphone that has an app installed, whereby there is access
granted to sensitive data stored in the cloud. Application-level security threats include Cross-site scripting
attacks, cookie poisoning, hidden field manipulation, SQL injection attacks, denial of service attacks, even
Google hacking, where data that wasn't intended to be made public is made available publicly and can be used
to get into an application or to access sensitive data. We have the option of migrating legacy applications to the
cloud. If this is the case, it means that the cloud consumer loses some control over the application. The reason is that it's no longer running on equipment or infrastructure owned by the consumer. Instead,
it's running in the cloud on the cloud provider's infrastructure. So therefore, there would be a reliance on the
cloud service provider to protect the application and the resultant data. Now the consumer can do some things
to protect the data, such as encrypting it out of band, using a third-party tool to encrypt data that results from
that application.
Inadequate identity and authorization control is also problematic in the cloud. It can diminish asset integrity
because the data may not be trustworthy if we can't assure it's not been tampered with. When developers build applications on a Platform as a Service offering, it is the client that is responsible for application security, not the cloud service provider, because the client is building the application; it's just being hosted in the cloud environment. Passive data, that is, data that is stored but not currently being used, often called data at rest, needs to be encrypted, as does data in transit. And we need to have data isolation in a multi-tenant environment,
such that the data from one cloud tenant cannot be accessed by other cloud tenants. One way to do this is to
ensure that we are enabling encryption, for example, even at the virtual machine hard disk level.
Additional application-level security considerations include issues, such as spoofing or forging something, such
as a packet or a transmission to a web application; tampering with data; and repudiation, where someone can deny that something was done, even though it was logged, because we can't prove where a transmission, for example, came from. There's also the issue of privilege elevation. If applications aren't coded properly with the correct
checks and balances put in place, then it's possible for a malicious user to get privilege elevation. Unauthorized
disclosure of data or data leakage is also possible. And, of course, denial of service could make an application
unusable for legitimate purposes. This could be done by flooding a network with useless traffic or even by
crashing a web application. Well-designed identity, entitlement, and access management mechanisms, also
known as IdEA, will mitigate most of the threats listed above. Domain 12 of the Cloud Security Alliance guide
deals with IdEA. If you're looking for more details related to IdEA, bear in mind that the Cloud Security Alliance
makes their publications freely available to anybody that's interested on the Internet. Here we can see the Cloud
Security Alliance publication related to Domain 12: Guidance for Identity & Access management.
[Heading: Application-level Security. The Google Chrome browser window is displayed and includes the CSA
tabbed page. The URL of this tabbed page is as follows: https://cloudsecurityalliance.org/guidance/csaguide-
dom12-v2.10.pdf. On this tabbed page, the CSA guide for Domain 12 is displayed.]
The Cloud Security Alliance and the application layer are discussed in Domain 10. This deals with the adoption
of a secure software development life cycle, whereby security is dealt with through each phase of system
development. Additional application layer security considerations include the lack of the client's control over
security because it's being hosted by the cloud provider. The lack of visibility over application security policy is
also potentially problematic. In some cases, cloud providers will allow customers to build policies to control
security aspects of software that they're delivering through the cloud. With manageability, a lack of access to
auditing and access policy could also be problematic. Again, some cloud providers are good about this and will
give a web interface to customers, so that they can take a look at audit logs related to application and data
access and even the option of configuring access policies. But in the end, there is a lack of client control over the infrastructure and its inherent security. In other words, we place a lot of trust in the cloud service provider and the security mechanisms they say are in place.
There are also compliance risks. The cloud consumer has no influence over applied standards, asset security,
privacy measures, and so on. However, bear in mind that cloud consumers can ask providers for audit results or
compliance frameworks that the cloud provider is adhering to. It's always important to isolate application instances running in the cloud between different tenants, as well as data in memory, data at rest, and data in transit. It's important that deletions are also done in a secured manner. We want to make
sure other cloud tenants can't retrieve data that was removed from other cloud tenants. It's important that we
have secure interfaces, so that we have a secured way to get into our management tools, whereby we can
control some of our cloud services. Ideally, this would be done through a web interface over HTTPS. Finally, we
can also use Access Control Lists, or ACLs, to determine resource access for groups of users.
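As a minimal sketch of the Access Control List idea just mentioned, the following Python example maps each resource to the groups allowed to perform an action on it; the resource and group names are invented for illustration.

# Each resource lists the groups allowed to perform a given action on it.
ACL = {
    "payroll-reports": {"read": {"hr", "finance"}, "write": {"finance"}},
    "marketing-assets": {"read": {"marketing", "sales"}, "write": {"marketing"}},
}

def is_allowed(user_groups, resource, action):
    # Permit the action only if the user belongs to a group named in the ACL.
    allowed = ACL.get(resource, {}).get(action, set())
    return bool(user_groups & allowed)

print(is_allowed({"sales"}, "marketing-assets", "read"))   # True
print(is_allowed({"sales"}, "payroll-reports", "read"))    # False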
OWASP has publications on the Internet related to application security for web applications. There are a number of testing recommendations, including Business Logic Testing, Authentication and Authorization Testing, Session Management Testing, and Data Validation Testing, which is important. We want to
make sure if we allow users to input data into an application of some kind that that data is valid and is formatted
correctly. OWASP also covers Denial of Service Testing and Ajax Testing. Ajax is
Asynchronous JavaScript and XML, which developers could use to build various web services. As is the case
with Cloud Security Alliance publications on the Internet, OWASP makes all of their publications freely available
to everybody on the Internet. Here we can see the OWASP Testing Guide v4 and we're looking at the Table of
Contents, where we could scroll through and learn about the proper techniques used for testing web application
security. In this video, we discussed application-level security.
[Heading: Application-level Security. OWASP has publications on the Internet for application security of Web
applications and various testing recommendations, such as Configuration Management Testing, Business Logic
Testing, and Authentication Testing. The presenter navigates to the Google Chrome browser window and the
table of contents for OWASP Testing Guide v4 is displayed on the OWASP Testing Guide v4 tabbed page that
includes the following URL: https://www.owasp.org/index.php/OWASP_Testing_Guide_v4_Table_of_Contents]
Securing Data at Rest and Data in Transit
Learning Objective
After completing this topic, you should be able to
describe the measures to secure data at rest and data in transit
1.
In this video, I'll talk about securing data at rest and data in transit. Data has various states. Passive means that
a file or data, such as a record in a database, is currently not being used. This is also called data at rest. Data
that's in process is a file or a record that is currently being edited or data that's being altered in some way. Data
in transit is files or some kind of data being uploaded, downloaded, or synchronized over a network. The
objective in the cloud is to protect this data regardless of its state. So whether we are opening, reading, writing, or sharing files, we need to verify the integrity of files or database records to make sure they haven't been tampered with and to make sure they're safe from deletion by unauthorized users or from malicious use. We
also need to protect data when it's being uploaded, downloaded, synchronized, backed up, or even restored in
the cloud. There are various data protection methodologies. One way to protect data is to use Access Control
Lists, or ACLs. This way, administrators can use groups of users that have specific permissions granted to a
resource, such as files or records within a database.
Within databases, there are often special object permissions that can be configured specific to a database. We
can also implement authentication and key management methodologies. For example, we might use identity federation with a centralized identity provider that is trusted by different storage locations or databases in the cloud, so that we wouldn't have to store user credentials in each location. They're stored once,
centrally. We might even use keys, whether those keys are part of PKI certificates or they could be proprietary
keys that are used to encrypt and decrypt data stored in the cloud. Storage encryption will require some kind of
a key for encryption and decryption. We can also protect data by backing it up on a reasonable basis and
verifying that the backups work and are not corrupted. It's also important to make sure we have more than just a
single backup. It's also important that we have restore policies in place, so that we know what needs to be done
in the event of a catastrophe. We also protect data by auditing access to it, especially sensitive data. Here we'll
be able to identify malicious users gaining access to data or even legitimate users abusing their privileges.
There's also a transport-level encryption, whereby when we transmit data over a network, it's protected. This
could be done using SSL, or TLS, or we might even have a point-to-point VPN tunnel between two different
networks, such as an on-premises network owned by a private organization, connected to a cloud provider.
Data sent through that tunnel is protected because the tunnel would be encrypted. Firewalls can also be used to
verify that the appropriate data is being allowed to get to a specific network or to a host on a network. Hardening
of servers is important for both physical as well as virtual servers. Server hardening means we reduce the
attack surface within an operating system. We might do this by using strong passwords, disabling unnecessary user accounts and services, and patching the operating system, to name just a few measures. Physical
security is always important because we want to make sure in the cloud that even cloud datacenter staff do not
have access to sensitive data.
One way to do this is to make sure that the cloud provider adheres to acceptable standards for physical
security. But at the same time, we can also encrypt data in the cloud, so that even cloud datacenter staff would
not gain access to that data. There should also be physical hardware failover topologies at datacenters. This
could take the form of failover clustering, whereby one failed physical host and its services would be failed over
to another host that remains running. At the same time, we should also think about failover topologies at the
entire datacenter or regional level. Cloud providers have various ways by which we can protect data at rest.
Here we can see that we have the option in the cloud of enabling server-side encryption, where we can use keys
that are managed by the cloud service provider, but at the same time, we could also use customer-provided
keys. Keys are important because they are used, in this context, for the encryption and decryption of the data
that would be stored in the cloud. In this video, we discussed securing data at rest and data in transit.
[Heading: Securing Data at Rest and Data in Transit. The Protecting Data Using Ser tabbed page with the
following URL doc.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html is displayed in the Google
Chrome web browser window. The web page displays information about protecting data using Server-Side
Encryption. The presenter refers to the following title: Protecting Data Using Server-Side Encryption He then
refers to the types of keys displayed in the web page that is available for Server-Side Encryption such as Use
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) and Use Server-Side Encryption with
Customer-Provided Keys (SSE-C).]
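As a hedged sketch of the options shown on that page, the following Python example uses the boto3 AWS SDK to upload objects with provider-managed keys (SSE-S3) and with a customer-provided key (SSE-C); the bucket name, object keys, and key material are placeholders, and valid AWS credentials would be required to run it.

# Requires: pip install boto3
import boto3

s3 = boto3.client("s3")

# SSE-S3: the provider manages the encryption keys on our behalf.
s3.put_object(
    Bucket="example-tenant-bucket",            # placeholder bucket name
    Key="reports/q1.csv",
    Body=b"confidential,report,data",
    ServerSideEncryption="AES256",
)

# SSE-C: we supply our own 256-bit key with each request; the provider uses it
# to encrypt the object but does not store the key.
customer_key = b"0" * 32                        # placeholder; use real random key material
s3.put_object(
    Bucket="example-tenant-bucket",
    Key="reports/q2.csv",
    Body=b"confidential,report,data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)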
Cloud Security Risk Assessment
Learning Objective
After completing this topic, you should be able to
describe how to perform risk assessment in a cloud environment
1.
When individuals or organizations look to using IT services in the cloud, there is some trust they place in the
cloud service provider and there are some inherent risks. In this video, we'll discuss some of those cloud
security risks and how to conduct a risk assessment. There's plenty of documentation available over the
Internet related to cloud security risk assessments. There is the Cloud Standards Customer Council, or CSCC,
where they discuss security for cloud computing. There's also ENISA, which stands for European Network and
Information Security Agency, where they discuss cloud computing risk assessments. And there's the NIST,
Special Publication 800-146. Special Publication 800-146 is entitled Cloud Computing Synopsis and
Recommendations. And if we were to search for the word "risk" throughout this document, we would see that
there are many discussions about risk remediation when using cloud services, "Risk of Business Continuity",
"Risk of Unintended Data Disclosure" in the cloud, and so on.
[Heading: Cloud Security Risk Assessment. The Cloud Standards Customer Council, or CSCC, discusses security for cloud computing, including 10 steps to ensure success. The NIST Special Publication 800-146 document is displayed in the Google Chrome web browser window with the following URL: csrc.nist.gov/publications/nistpubs/800-146/sp800-146.pdf. The presenter enters the text 'risk' in the search box
of the document. As a result, 56 instances of the text 'risk' are highlighted. He clicks the next button on the
search text box and refers to some of them.]
Additional documentation includes the ISO/IEC 25010:2011 publications. This deals with Systems and software
Quality Requirements and Evaluation, otherwise known as SQuaRE, as applied to Infrastructure as a Service, Platform as a Service, and Software as a Service offerings. There's also the
Information Systems Audit and Control Association, or ISACA, as well as the Cloud Security Alliance's Cloud
Control Guidance and Matrix publication. When assessing risk, we need to think about the fact that the cloud
service consumer loses some control. There is a loss of governance and management because we are trusting
a third-party to host things that we depend upon for business. There's also the risk of vendor lock-in. We want to
ensure, when we evaluate various cloud providers, that the format in which data is stored, as well as applications and virtual machines, is not overly proprietary. We need to make sure if we need to migrate to a different cloud
provider that we can still reuse things like virtual machines and data. We also need to make sure that data, and
files, and virtual machines, and network traffic are isolated from other cloud tenants.
[Heading: Cloud Security Risk Assessment. Additional documentation related to cloud security risk
assessments includes the ISO/IEC 25010:2011 Systems and software engineering publications. It also includes
the Cloud Security Alliance's or CSA's Cloud Control Guidance and Matrix publication of version 3.0.1.]
With data access, there needs to be a valid authentication mechanism that's being used and also a valid key
management system. We could even use our own organizational keys, for example, to encrypt and decrypt data
stored in the cloud. Many cloud service providers support this. We also need to think about how data is wiped or
deleted when we no longer need it. That could take the form of databases, records within databases, or files stored in the cloud, including virtual machine hard disks. We should also ensure that the cloud service provider
is compliant with various certifications that are relevant to our industry. It might also be important that the
provider allows us, the cloud service consumer, to perform audits. The cloud provider has cloud services that
we depend upon for business processes and we should treat those as an extension of our internal IT
department. This way, we can develop our security policies and build that also into the service-level agreement.
In some cases, legacy applications that are hosted on-premises could work well in the cloud. So we would have
to analyze and determine which ones we are currently running might work well that way. However, sometimes regulations prevent us from running applications or storing the resultant data in the cloud.
[Heading: Cloud Security Risk Assessment. With data access, there needs to be a valid trust system, valid
authentication mechanism that's being used, and a valid key management system.]
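The point about using our own organizational keys can be illustrated with a small sketch. This example is not from the course; it assumes the third-party Python cryptography package and placeholder file names, and it simply shows that data encrypted on-premises before upload stays opaque to the provider.

```python
# Minimal sketch: client-side encryption with an organization-held key, so the
# cloud provider and other tenants only ever receive ciphertext.
from cryptography.fernet import Fernet

# In practice this key would come from the organization's key management system.
org_key = Fernet.generate_key()
fernet = Fernet(org_key)

with open("payroll.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("payroll.csv.enc", "wb") as f:
    f.write(ciphertext)  # this encrypted copy is what gets uploaded to the cloud

# Decryption happens wherever the key is authorized to be used.
plaintext = fernet.decrypt(ciphertext)
```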
We should also surface the provider's disaster recovery plan and build some of those details into the service-
level agreement, or SLA. Remember that the SLA is a contractual document between the cloud service provider
and the cloud service consumer. Other recommendations include understanding the provider's backup and
restore mechanisms, whether that occurs within the datacenter or even between datacenters. We can also
request orderly security-event alerts from the provider and associate those with known assets that we depend
upon in the cloud, such as applications and sensitive data. Often, we can go to a web page and configure this
type of alert and notification mechanism. We could also request from the cloud service provider industry-specific
certifications, such as ISO certifications or the Payment Card Industry Data Security Standard (PCI DSS) certification. It really depends on the nature of our business. With security risk planning, we should adopt steps and
guidance templates from known bodies that specialize in this, such as CSCC, ENISA, and so on.
[Heading: Cloud Security Risk Assessment. Request industry-specific certifications of cloud service providers
such as ISO 27001:2013, ISO 27002:2013, and ISO 31000:2009.]
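As a concrete, provider-specific illustration of the earlier point about requesting security-event alerts and associating them with known assets, here is a minimal sketch, not from the course, that uses boto3 to raise an alarm on a security-related metric and send it to a notification topic. The alarm name, metric, namespace, and topic ARN are placeholder assumptions.

```python
# Minimal sketch: alerting on a security-related metric and notifying a topic.
# Assumes a CloudTrail metric filter already publishes UnauthorizedAttemptCount.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="CloudTrailMetrics",
    MetricName="UnauthorizedAttemptCount",
    Statistic="Sum",
    Period=300,                      # evaluate in five-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:security-alerts"],
)
```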
When we adopt those steps and templates, we'll be able to clearly identify areas of risk within each of the cloud models: Infrastructure as a Service, Platform as a Service, and Software as a Service. We need to ensure that there is
continuous testing and auditing for our IT workloads running in the cloud. We need to make sure that they meet
business needs and that they're delivered in a cost effective and timely fashion. At the same time, we need to
ensure that data and processes are secure. When we deploy IT workloads to the cloud, there are some detailed
items we should think about in terms of risk. We need to ensure that effective governance, risk and compliance
processes exist. One way to do that is to request industry-specific audits or certifications from the cloud
provider. We need to define and manage the people, roles, and identities that will be using the IT services running in
the cloud. We need to define trust and authentication methodologies. We might be using centralized
authentication. In other words, identity federation.
We also need to define and enforce privacy policies. Many cloud providers will give a web interface where we
can configure privacy policies and those need to be aligned with our specific business needs. Cloud networks
as well as our connections to them need to be secured, whether that's through a secured VPN connection that's
encrypted or using some kind of transport encryption, such as IPSec, or SSL, or TLS. On the cloud provider
side, we need to make sure they have the correct security controls in place on their physical infrastructure and
facilities. Often, we will depend upon third-party audits that have checked that for us. We can request those also
from the cloud service provider. Finally, we can develop agreed upon security terms with the provider and build
these into the service-level agreement. In this video, we discussed cloud security risk assessments.
[Heading: Cloud Security Risk Assessment. For security risk management, one of the cloud deployment
activities includes assessing the security provisions for cloud applications.]
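One way a cloud service consumer can verify the transport-encryption point above is to check that connections to a cloud endpoint negotiate a modern TLS version against a validated certificate. The following is a minimal sketch using only the Python standard library; the hostname is a placeholder.

```python
# Minimal sketch: confirm a cloud endpoint presents a certificate that validates
# against trusted CAs and negotiates at least TLS 1.2.
import socket
import ssl

host = "tenant-app.example.com"  # placeholder hostname
context = ssl.create_default_context()            # verifies certificate and hostname
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocol versions

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls.cipher()[0])
        print("Certificate subject:", tls.getpeercert()["subject"])
```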
Cloud Security SLAs
Learning Objective
After completing this topic, you should be able to
describe the service-level agreements for cloud security
1.
In this video, I'll discuss cloud security SLAs. The SLA, or the service level agreement, is a legally binding
document between a cloud customer and a cloud provider. It defines the agreed expectations for service
between them. All interested parties should sign up to the service-level agreement. These parties include the
cloud provider, the cloud consumer, cloud carriers, cloud brokers, and any other relevant third-party system
auditors or monitors. A standard service-level agreement will describe and define levels of expected service,
which include items, such as availability, serviceability, performance, operations, billing, and any penalties that
might be imposed if any of the previous mentioned items are not met. The service-level agreement is founded
on mutual agreement between the customer and the cloud service provider. They get implemented to manage
and minimize conflict and to support proactive issue resolution.
To assure an agreed service level, service providers must be capable of measuring and monitoring relevant
metrics. Now those metrics could be, for example, a way to measure and prove that we have a specific amount
of uptime, expressed perhaps as a percentage, or maybe a response time within two seconds for loading of
hosted web server pages, and so on. The consumer must also be capable of testing these service levels
themselves. Service-level agreements must be service-specific, for example, the services being used related to
Infrastructure as a Service, such as storage or virtual machine performance; or Software as a Service, such as
the guaranteed uptime for a cloud-based e-mail system; or Platform as a Service used by developers. The
Cloud Standards Customer Council, CSCC, has a guide to cloud service-level agreements, which provides guidance on what to expect and what to be aware of when you're shopping for and evaluating SLAs from
prospective cloud providers.
[Heading: Cloud Security SLAs. Practical Guide to Cloud SLAs Version 1.0 (2012) includes the following text:
The aim of this guide is to provide a practical reference to help enterprise information technology (IT) and
business decision makers as they analyze and consider service level agreements (SLA) from different cloud
service Providers.]
The guide gets into details, such as understanding the roles and responsibilities. So for example, who is
responsible for securing data stored in the cloud? Does that fall on the customer or the cloud provider? There's
also evaluating business level policies to make sure that our IT services in the cloud meet our business needs.
Then there's understanding service and deployment model differences. That way, we can determine whether we
can deploy new IT services in the cloud and how that's done or even migrate on-premises services to the cloud.
Then there's identifying critical performance objectives. For example, if we depend on an e-commerce web site,
it might be crucial that we are guaranteed that our web page loads in less than two seconds from anywhere in
the world. We should also evaluate security and privacy requirements to make sure that they are met and meet
the standards required by our organization.
[Heading: Cloud Security SLAs. In the Practical Guide to Cloud SLAs Version 1.0 (2012), after evaluating
security and privacy requirements, identify service management requirements. Next prepare for service failure
management. Then understand the disaster recovery plan and develop an effective management process. Next
understand the exit process.]
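Critical performance objectives like the two-second page load and guaranteed uptime are easier to hold a provider to when the consumer measures them independently, as the guide suggests. Here is a minimal sketch, not from the course, that probes a placeholder URL and reports an availability and response-time sample.

```python
# Minimal sketch: independently sample availability and response time for an
# SLA-style check. The URL, probe count, and pacing are placeholder choices.
import time
import urllib.request

URL = "https://app.example.com/health"
PROBES = 50
successes = within_two_seconds = 0

for _ in range(PROBES):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            if resp.status == 200:
                successes += 1
                if time.monotonic() - start <= 2.0:
                    within_two_seconds += 1
    except OSError:
        pass  # treat network, TLS, and HTTP errors as a failed probe
    time.sleep(5)  # space probes out rather than hammering the service

print(f"Availability sample: {100 * successes / PROBES:.1f}%")
print(f"Responses within 2 seconds: {100 * within_two_seconds / PROBES:.1f}%")
```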
In the end, the cloud security service-level agreement is not engraved in stone, and it can be negotiated with
the provider. Cloud service providers always categorize the SLA for a specific service that a consumer might
pay for in the cloud. And as you read through the SLA, you'll see any guarantees for availability or uptime, and
in some cases, you might realize that if you're testing or running a trial edition of a cloud service that there is no
SLA provided for the free tiers of various services. But regardless of the cloud service provider, there is always a service-level agreement, and even though its details might be stipulated on a standard web page, remember that especially for larger organizations or government agencies, there is some
wiggle room. There can be changes made that are specific to that customer. In this video, we discussed cloud
security SLAs.
[Heading: Cloud Security SLAs. The Microsoft Azure Support tabbed page is displayed in the Google Chrome
browser window with the following URL: azure.microsoft.com/en-us/support/legal/sla/. The web page contains
options such as Why Azure, Documentation, Downloads, and Support. The web page includes the service level
agreements and includes the SLA for most Azure Services button. The presenter refers to the SLA for most
Azure Services button and then refers to the following text in the Azure Active Directory section: We guarantee
at least 99.9% availability of the Azure Active Directory Basic and Premium services. The services are
considered available in the following scenarios: users are able to login to the service, login to the Access Panel,
access applications on the Access Panels and reset passwords, and IT administrators are able to create, read,
write, and delete entries in the directory or provision or de-provision users to applications in the directory. Then
he scrolls down and refers to the following text: No SLA is provided for the Free tier of Azure Active Directory.
Next he navigates to the Amazon EC2 SLA tabbed page in the Google Chrome browser window that includes
the following URL: aws.amazon.com/ec2/sla/. He then refers to the Amazon EC2 Service Level Agreement
displayed on the web page.]
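To put figures like the 99.9% Azure Active Directory guarantee in perspective, a quick back-of-the-envelope calculation translates availability percentages into allowable downtime per month.

```python
# Back-of-the-envelope: allowable downtime per 30-day month for common SLA tiers.
minutes_per_month = 30 * 24 * 60   # 43,200 minutes
for sla in (99.0, 99.9, 99.95, 99.99):
    allowed = minutes_per_month * (1 - sla / 100)
    print(f"{sla}% uptime allows about {allowed:.1f} minutes of downtime per month")
```

At 99.9%, that works out to roughly 43 minutes of downtime per month, which is one reason it matters that free tiers often carry no SLA at all.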
Exercise: Managing Cloud Data Security
Learning Objective
After completing this topic, you should be able to
describe the measures to secure data and connection in a cloud environment
1.
Exercise Overview
In this practice, we will focus on cloud security. In this scenario, you are an IT manager charged with ensuring
that the new cloud service provider has adequate security when hosting your data and that data is protected
while in transit. So pause this video and list out the items you think a checklist should include. When you're
done, resume the video and I'll go over my checklist of items.
Solution
So the following checklist would form the foundation of the task at hand. The first item is that all physical data
hosting areas are secure. This is physical security at the cloud provider level and we can be assured of this
through third-party audits. It might include things such as swipe card access to controlled areas, controlled visitor and contractor access, firewalls and adequate fire protection, as well as closed-circuit television monitoring. At
the cloud service provider level, there should also be employee and contractor background checks. There
should be access and change control for third-party auditing, ongoing vulnerability testing and incident
reporting, which can be made available to cloud customers as well.
There should also be consideration of data residence, persistence, backups, and replication, even between different cloud provider datacenters. We need to know what the cloud service provider's plan is for business continuity and failover in the
event of some kind of a disaster such as a flood or a fire. The service level agreement or the SLA is the
contractual document that guarantees items such as up-time and response time. We should also think about
authenticating to services in the cloud, whether we are using strong password controls or centralized identity
federation or we are even using keys, so there must be a key management strategy in place where the keys
might be used for things like encryption and decryption of cloud stored data. We should determine whether
encryption applies to data at rest, and whether access control lists are being applied to files and database objects. We
should also determine if data in transit is being protected perhaps through HTTPS and SSL or whether firewalls
control access to certain virtual networks or hosts on those networks. We should also know about the router,
network switch, and wireless configurations. Finally, we should get from the cloud service provider any cloud
service certifications and third-party audit results. In this practice, we focused on cloud security.
[Heading: Exercise: Managing Cloud Data Security. The service-level agreement or SLA guarantees items such
as up-time of 100% and response time.]