Infosec Notes
Compiled by Rammanohar Das

Contents

SECURITY OPERATIONS
1. PREVENTION
• Data Protection
  Encryption
  PKI - Public Key Infrastructure
  Transport Layer Security (TLS)
  Data Loss Prevention (DLP)
  User Behavior Analytics (UBA)
  Email Security
  Cloud Access Security Broker (CASB)
• Network Security
  Firewall
  IPS (Intrusion Prevention System) and IDS (Intrusion Detection Systems)
  Proxy Server
  VPN (Virtual Private Network)
  Secure Web Gateway
• Application Security
  Threat Modeling
  Runtime Application Self-Protection (RASP)
  Web Application Firewall (WAF)
• Endpoint Security
  HIDS and HIPS
  Zero Trust
2. DETECTION
  Security Information and Event Management (SIEM)
  Continuous Monitoring
  Network Security Monitoring
  NetFlow Analysis
  Vulnerability Assessment & Penetration Testing
  Web Application Scanning
  Bug Bounty
  Security Operation Center
  Threat Intelligence

LEGAL AND REGULATORY
1. Compliance
  Payment Card Industry Data Security Standard (PCI DSS)
  Sarbanes–Oxley Act (SOX)
  HIPAA and HITECH
  Federal Financial Institutions Examination Council (FFIEC)
  Family Educational Rights and Privacy Act (FERPA)
  NERC CIP
  Federal Information Security Management Act (FISMA)
  FedRAMP
2. Privacy
  EU-U.S. Privacy Shield Framework
  General Data Protection Regulation (GDPR)
  California Consumer Privacy Act (CCPA)
3. Audit
  SSAE 16 / SSAE 18 / SOC 1 / SOC 2 / SOC 3
  ISO 27001
  COSO Framework

RISK MANAGEMENT
1. Risk Frameworks
  Factor Analysis of Information Risk (FAIR)
  NIST RMF
  OCTAVE
  TARA
Risk Assessment
Vulnerability Management
Disaster Recovery
Business Continuity Planning (BCP)

ENCRYPTION
What is Encryption?
Encryption is a process that encodes a message or file so that it can only be read by certain
people. Encryption uses an algorithm to scramble, or encrypt, data and then uses a key for the
receiving party to unscramble, or decrypt, the information. The original, readable form of the
message is referred to as plaintext. In its encrypted, unreadable form it is referred to as
ciphertext.

How Encryption Works


Encryption uses algorithms to scramble your information. It is then transmitted to the receiving
party, who is able to decode the message with a key. There are many types of algorithms, which
all involve different ways of scrambling and then decrypting information.

How are Encryption Keys Generated?

Keys are usually generated with random number generators, or computer algorithms that mimic
random number generators. A more complex way for computers to create keys is to use user
mouse movement to create unique seeds. Modern systems that provide forward secrecy generate
a fresh key for every session, adding another layer of security.

Encryption Terms
Key: A random string of bits created specifically for scrambling and unscrambling data. Keys
are used to encrypt and/or decrypt data. Each key is unique and created via an algorithm to make
sure it is unpredictable. Longer keys are harder to crack. Common key lengths are 128 bits for
symmetric-key algorithms and 2048 bits for public-key algorithms.

• Private Key (or Symmetric Key): This means that the encryption and decryption keys
are the same. The two parties must have the same key before they can achieve secure
communication.
• Public Key: This means that the encryption key is published and available for anyone
to use. Only the receiving party has access to the decryption key that enables them to
read the message.
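
To make the symmetric case concrete, here is a minimal sketch in Python using the third-party cryptography package (an assumption; any comparable library would do). Fernet is an authenticated symmetric scheme, so the same key both encrypts and decrypts:

```python
# Minimal symmetric-encryption sketch (requires the third-party
# "cryptography" package: pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the single shared (symmetric) key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"attack at dawn")  # plaintext -> ciphertext
plaintext = cipher.decrypt(ciphertext)          # ciphertext -> plaintext
assert plaintext == b"attack at dawn"
```

Both parties must hold the same key, which is exactly the key-distribution problem that public-key cryptography was designed to solve.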

Cipher: An algorithm used for encryption or decryption. It is a set of steps that are followed
as a procedure to encrypt information. There are two main types of ciphers, block ciphers and
stream ciphers.

Algorithm: An algorithm is the procedure that the encryption process follows. The specific
algorithm is called the cipher, or code. There are many types of encryption algorithms; the
encryption's goal and required level of security determine the most effective solution. Triple DES,
RSA, and Blowfish are some examples of encryption algorithms, or ciphers.

Decryption: The process of switching unreadable cipher text to readable information.

Cryptanalysis: The study of ciphers and cryptosystems to find weaknesses in them that would
allow access to the information without knowing the key or algorithm.

Frequency Analysis: A technique used to crack a cipher. Those trying to decrypt a message
will study the frequency of letters or groups of letters in a ciphertext. Because some letters
occur more often than others, the frequency of letters can reveal parts of the encrypted message.
While this method was effective in cracking old encryption methods, it is ineffective against
modern encryption.
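
As a toy illustration of the idea, the sketch below (plain Python, with a hypothetical Caesar-shifted ciphertext) counts letter frequencies; against a modern cipher the counts would be essentially uniform:

```python
# Toy frequency analysis of a Caesar cipher (shift 3).
from collections import Counter

ciphertext = "WKLV LV D VHFUHW PHVVDJH"
freqs = Counter(c for c in ciphertext if c.isalpha())
print(freqs.most_common(2))  # [('V', 5), ('H', 4)]
# Undoing a shift of 3 maps 'V' -> 'S' and 'H' -> 'E', common English letters.
```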

PKI - Public Key Infrastructure

Public Key Infrastructure (PKI) is a technology for authenticating users and devices in the digital world.
The basic idea is to have one or more trusted parties digitally sign documents certifying that a
particular cryptographic key belongs to a particular user or device. The key can then be used as an
identity for the user in digital networks.
The users and devices that have keys are often just called entities. In general, anything can be
associated with a key that it can use as its identity. Besides a user or device, it could be a program,
process, manufacturer, component, or something else. The purpose of a PKI is to securely associate a
key with an entity.
The trusted party signing the document associating the key with the device is called a certificate
authority (CA). The certificate authority also has a cryptographic key that it uses for signing these
documents. These documents are called certificates.

X.509 Standard
Most public key infrastructures use a standardized, machine-readable certificate format for the
certificate documents, called X.509v3. X.509 originated as an ITU-T/ISO standard; the profile used
on the Internet is maintained by the Internet Engineering Task Force as RFC 5280.

Common Uses of Certificates


Secure Web Sites – HTTPS
The most familiar use of PKI is in SSL/TLS certificates. SSL (Secure Sockets Layer) is the security
protocol used on the web when you fetch a page whose address begins with https:; TLS (Transport
Layer Security) is the newer version of the protocol, and in practice most websites now use it. With
HTTPS, certificates serve to identify the web site you are connecting to, to ensure that no one can
eavesdrop on your connection or, for example, inject fraudulent wire transfers or steal credit card
numbers.
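
As a small illustration (standard library only; example.com is just a placeholder host), you can fetch the PEM-encoded X.509 certificate a live HTTPS server presents:

```python
# Fetch the X.509 certificate presented by an HTTPS server
# (standard library only; needs network access).
import ssl

pem = ssl.get_server_certificate(("example.com", 443))
print(pem[:80])  # "-----BEGIN CERTIFICATE-----" followed by base64 data
```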
The Secure Shell protocol also supports certificates for authenticating hosts and users. Tectia SSH
uses standards-based X.509 certificates, whereas OpenSSH uses its own certificate format.

Email Signing and Encryption


Certificates are also used for secure email in corporations. The S/MIME standard specifies a message
format for signed and encrypted messaging, using the X.509 certificate formats.

PGP (Pretty Good Privacy) and its free implementation, GNU Privacy Guard (GPG), use their own
certificate format and a somewhat different trust model. However, they still offer email encryption
and are quite popular.

Certificates and cryptographic authentication of the server prevent man-in-the-middle attacks.

Transport Layer Security (TLS)

What is Transport Layer Security (TLS)?

Transport Layer Security, or TLS, is a widely adopted security protocol designed to facilitate privacy
and data security for communications over the Internet. A primary use case of TLS is encrypting the
communication between web applications and servers, such as web browsers loading a website. TLS
can also be used to encrypt other communications such as email, messaging, and voice over IP (VOIP).

What’s the difference between TLS and HTTPS?

HTTPS is an implementation of TLS encryption on top of the HTTP protocol, which is used by all
websites as well as some other web services. Any website that uses HTTPS is therefore employing TLS
encryption.

Why should you use TLS?

TLS encryption can help protect web applications from attacks such as data breaches and DDoS
attacks. Additionally, TLS-protected HTTPS is quickly becoming a standard practice for websites. For
example, the Google Chrome browser is cracking down on non-HTTPS sites, and everyday Internet
users are becoming more wary of websites that don't feature the HTTPS padlock icon.

How does TLS work?

TLS runs on top of a reliable transport-layer protocol such as TCP. There are three main
components to TLS: Encryption, Authentication, and Integrity.

• Encryption: hides the data being transferred from third parties.


• Authentication: ensures that the parties exchanging information are who they claim to be.
• Integrity: verifies that the data has not been forged or tampered with.

A TLS connection is initiated using a sequence known as the TLS handshake. The TLS handshake
establishes a cipher suite for each communication session. The cipher suite is a set of algorithms that
specifies details such as which shared encryption keys, or session keys, will be used for that particular
session. TLS is able to set the matching session keys over an unencrypted channel thanks to a
technology known as public key cryptography.
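
A minimal sketch of this in Python's standard library (example.com again a placeholder): the wrap_socket call performs the handshake, after which the negotiated protocol version and cipher suite can be inspected:

```python
# Minimal TLS client: the handshake negotiates protocol version,
# cipher suite and session keys, and authenticates the server.
import socket, ssl

ctx = ssl.create_default_context()  # verifies the server certificate chain
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # the negotiated cipher suite tuple
```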

The handshake also handles authentication, which usually consists of the server proving its identity to
the client. This is done using public-key cryptography: the key pair provides a one-way operation, so
anyone holding the public key can verify data signed with the corresponding private key, but only the
holder of the private key can produce it.

Once data is encrypted and authenticated, it is then signed with a message authentication code (MAC).
The recipient can then verify the MAC to ensure the integrity of the data. This is kind of like the tamper-
proof foil found on a bottle of aspirin; the consumer knows no one has tampered with their medicine
because the foil is intact when they purchase it.
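
The MAC idea can be sketched with Python's standard hmac module (the key and message here are made up for illustration):

```python
# Sketch of message integrity with an HMAC (standard library).
import hmac, hashlib

key = b"shared-session-key"           # hypothetical shared secret
msg = b"transfer $100 to Alice"
tag = hmac.new(key, msg, hashlib.sha256).digest()

# The recipient recomputes the tag over the received message and
# compares in constant time; any tampering changes the tag.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
assert ok
```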

Data Loss Prevention (DLP)
Data loss prevention, or DLP, is a set of technologies, products, and techniques that are designed to
stop sensitive information from leaving an organization.

Data can end up in the wrong hands whether it’s sent through email or instant messaging, website
forms, file transfers, or other means. DLP strategies must include solutions that monitor for, detect,
and block the unauthorized flow of information.

How does DLP work?

DLP technologies use rules to look for sensitive information that may be included in electronic
communications or to detect abnormal data transfers. The goal is to stop information such as
intellectual property, financial data, and employee or customer details from being sent, either
accidentally or intentionally, outside the corporate network.
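
A real DLP engine is far more sophisticated, but a single rule can be sketched as a pattern match, here a made-up check for credit-card-like numbers in outbound text:

```python
# Toy DLP rule: flag outbound text containing credit-card-like numbers.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(text: str) -> bool:
    return bool(CARD_PATTERN.search(text))

print(contains_card_number("Card: 4111 1111 1111 1111"))  # True -> block/alert
print(contains_card_number("Meeting at 3pm"))             # False -> allow
```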

Why do organizations need DLP solutions?

The proliferation of business communications has given many more people access to corporate data.
Some of these users can be negligent or malicious. The result: a multitude of insider threats that can
expose confidential data with a single click. Many government and industry regulations have made
DLP a requirement.

Types of DLP technologies


DLP for data in use
One class of DLP technologies secures data in use, defined as data that is being actively processed by
an application or an endpoint. These safeguards usually involve authenticating users and controlling
their access to resources.

DLP for data in motion


When confidential data is in transit across a network, DLP technologies are needed to make sure it is
not routed outside the organization or to insecure storage areas. Encryption plays a large role in this
step. Email security is also critical since so much business communication goes through this channel.

DLP for data at rest


Even data that is not moving or in use needs safeguards. DLP technologies protect data residing in a
variety of storage mediums, including the cloud. DLP can place controls to make sure that only
authorized users are accessing the data and to track their access in case it is leaked or stolen.

User Behavior Analytics (UBA)
User behavior analytics (UBA), as defined by Gartner, is a cybersecurity process for detecting
insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human
behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies in
those patterns that indicate potential threats. Instead of tracking devices or security
events, UBA tracks a system's users. Big data platforms like Apache Hadoop are increasing UBA
functionality by allowing solutions to analyze petabytes' worth of data to detect insider threats and
advanced persistent threats.
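
As a toy sketch of the statistical idea (the login history below is invented), a solution might baseline a user's habits and flag large deviations:

```python
# Toy behavioral baseline: flag logins far outside a user's normal hours.
from statistics import mean, stdev

login_hours = [9, 9, 10, 8, 9, 10, 9, 8]       # hypothetical login history
baseline, spread = mean(login_hours), stdev(login_hours)

def is_anomalous(hour: int, threshold: float = 3.0) -> bool:
    """Flag hours more than `threshold` standard deviations from baseline."""
    return abs(hour - baseline) / spread > threshold

print(is_anomalous(9))  # False: fits the established pattern
print(is_anomalous(3))  # True: a 3 a.m. login is a meaningful anomaly
```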

Advanced Persistent Threat (APT)


An advanced persistent threat (APT) is a stealthy computer network threat actor, typically a nation
state or state-sponsored group, which gains unauthorized access to a computer network and remains
undetected for an extended period. In recent times, the term may also refer to non-state sponsored
groups conducting large-scale targeted intrusions for specific goals.
Such threat actors' motivations are typically political or economic. To date, every major business
sector has recorded instances of attacks by advanced actors with specific goals seeking to steal, spy
or disrupt. These include government, defense, financial services, legal services, industrial, telecoms,
consumer goods, and many more.

Email Security
Email security is a broad term that encompasses multiple techniques used to secure an email service.
From an individual/end user standpoint, proactive email security measures include:

• Strong passwords
• Password rotations
• Spam filters
• Desktop-based anti-virus/anti-spam applications

Similarly, a service provider ensures email security by using strong password and access control
mechanisms on an email server; encrypting and digitally signing email messages when in the inbox or
in transit to or from a subscriber email address. It also implements firewalls and software-based
spam-filtering applications to prevent unsolicited, untrustworthy, and malicious email messages from
being delivered to a user's inbox.

Cloud Access Security Broker (CASB)


A cloud access security broker (CASB) (sometimes pronounced cas-bee) is on-premises or cloud
based software that sits between cloud service users and cloud applications, and monitors all activity
and enforces security policies. A CASB can offer a variety of services such as monitoring user activity,
warning administrators about potentially hazardous actions, enforcing security policy compliance, and
automatically preventing malware.

Firewall
A firewall is a system designed to prevent unauthorized access to or from a private network. You can
implement a firewall in either hardware or software form, or a combination of both. Firewalls prevent
unauthorized internet users from accessing private networks connected to the internet, especially
intranets. All messages entering or leaving the intranet (the local network to which you are connected)
must pass through the firewall, which examines each message and blocks those that do not meet the
specified security criteria.

Several types of firewalls exist:


• Packet filtering: The system examines each packet entering or leaving the network and
accepts or rejects it based on user-defined rules (a toy rule set is sketched after this list).
Packet filtering is fairly effective and transparent to users, but it is difficult to configure. In
addition, it is susceptible to IP spoofing.

• Circuit-level gateway implementation: This process applies security mechanisms when a TCP
or UDP connection is established. Once the connection has been made, packets can flow
between the hosts without further checking.

• Acting as a proxy server: A proxy server is a type of gateway that hides the true network
address of the computer(s) connecting through it. A proxy server connects to the internet,
makes the requests for pages, connections to servers, etc., and receives the data on behalf of
the computer(s) behind it. The firewall capabilities lie in the fact that a proxy can be configured
to allow only certain types of traffic to pass (for example, HTTP files, or web pages). A proxy
server has the potential drawback of slowing network performance, since it has to actively
analyze and manipulate traffic passing through it.

• Web application firewall: A web application firewall is a hardware appliance, server plug-in,
or some other software filter that applies a set of rules to a HTTP conversation. Such rules are
generally customized to the application so that many attacks can be identified and blocked.
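
Returning to the first type above, here is a toy packet-filtering rule table in Python (the rules and ports are invented; real filters operate on full packet headers):

```python
# Toy packet filter: first matching user-defined rule wins.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443},  # permit HTTPS
    {"action": "allow", "proto": "tcp", "dst_port": 22},   # permit SSH
    {"action": "deny",  "proto": None,  "dst_port": None}, # default deny
]

def filter_packet(proto: str, dst_port: int) -> str:
    for rule in RULES:
        if rule["proto"] in (None, proto) and rule["dst_port"] in (None, dst_port):
            return rule["action"]

print(filter_packet("tcp", 443))  # allow
print(filter_packet("tcp", 23))   # deny (Telnet hits the default rule)
```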

IPS (Intrusion Prevention System) and IDS (Intrusion Detection
Systems)
IPS and IDS systems look for intrusions and their symptoms within traffic. IPS/IDS systems monitor
for unusual behavior, abnormal traffic, malicious code, and anything else that looks like an attempted
intrusion.

IPS (Intrusion Prevention System) systems are deployed inline and actually take action by blocking the
attack, as well as logging the attack and adding the source IP address to the block list for a limited
amount of time, or even permanently blocking the address depending on the defined settings. Hackers
carry out many port scans and address scans, intending to find loopholes within organizations. IPS
systems recognize these types of scans and take actions such as blocking, dropping, quarantining, and
logging traffic. This, however, is only the basic functionality of IPS; IPS systems have many advanced
capabilities for sensing and stopping such attacks.

IDS vs IPS

IDS (Intrusion Detection System) systems only detect an intrusion, log the attack and send an alert to
the administrator. IDS systems do not slow networks down like IPS as they are not inline.

You may wonder why a company would purchase an IDS over an IPS. Surely a company would want a
system that takes action and blocks such attacks rather than letting them pass and only logging and
alerting the administrator. There are a few reasons, but two stand out. First, an IDS that is not
fine-tuned, just like an IPS, will produce false positives; however, false positives from an IPS are far
more disruptive, because legitimate network traffic gets blocked, whereas an IDS just sends alerts and
logs the false attack. Second, some administrators and managers do not want a system to take over
and make decisions on their behalf; they would rather receive an alert, look into the problem, and
take action themselves.

That said, today you will find solutions with both IDS and IPS capabilities built in. IDS mode can
be used initially to see how the system behaves without actually blocking anything. Then, once fine-
tuned, IPS mode can be turned on and the system deployed inline to provide full protection.

IPS and IDS vs Firewalls

Not having an IPS system results in attacks going unnoticed. Remember that a firewall does the
filtering, blocking, and allowing of addresses, ports, and services, but it also lets some of this traffic
through the network. This means the allowed access is simply let through, and firewalls have no clever
way of telling whether that traffic is legitimate and normal. This is where IPS and IDS systems come
into play.

So, where firewalls block and allow traffic through, IDS/IPS detect and inspect that traffic in close detail
to see if it is an attack. IDS/IPS systems are made up of sensors, analyzers, and GUIs in order to do
their specialized job. The most common attack types that IPS and IDS systems are used for are:

• Policy Violations - Rules, protocols, and packet designs that are violated. An example would
be an IP packet that is incorrect in length.
• Exploits - Attempts to exploit a vulnerability of a system, application, or protocol. An example
would be a buffer overflow attack.
• Reconnaissance - Attempts to gain information about a system or network, such as using
port scanners to see which ports are open (a toy detector is sketched after this list).
• DoS/DDoS - Attacks that attempt to bring down your system by sending a vast
number of requests to it, such as SYN flood attacks.
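
A toy version of the reconnaissance detection mentioned above might track how many distinct ports each source probes (the threshold is arbitrary; real sensors use far richer signatures and time windows):

```python
# Toy port-scan detector: many distinct destination ports from one
# source is a classic reconnaissance signature.
from collections import defaultdict

seen_ports: dict[str, set[int]] = defaultdict(set)

def observe(src_ip: str, dst_port: int, threshold: int = 100) -> bool:
    """Record a connection attempt; True means src looks like a scanner."""
    seen_ports[src_ip].add(dst_port)
    return len(seen_ports[src_ip]) > threshold

for port in range(1, 150):                    # simulate a sweep
    scanning = observe("203.0.113.9", port)
print(scanning)  # True: an IPS would block/quarantine, an IDS would alert
```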

Proxy Server
A proxy server acts as a gateway between you and the internet. It’s an intermediary server separating
end users from the websites they browse. Proxy servers provide varying levels of functionality,
security, and privacy depending on your use case, needs, or company policy.

If you’re using a proxy server, internet traffic flows through the proxy server on its way to the address
you requested. The request then comes back through that same proxy server (there are exceptions to
this rule), and then the proxy server forwards the data received from the website to you.

If that’s all it does, why bother with a proxy server? Why not just go straight to the website and
back?

Modern proxy servers do much more than forward web requests, all in the name of data security
and network performance. Proxy servers act as a firewall and web filter, provide shared network
connections, and cache data to speed up common requests. A good proxy server keeps users and the
internal network protected from the bad stuff that lives out on the wild internet. Lastly, proxy servers
can provide a high level of privacy.

How Does a Proxy Server Operate?

Every computer on the internet needs to have a unique Internet Protocol (IP) Address. Think of this IP
address as your computer’s street address. Just as the post office knows to deliver your mail to your
street address, the internet knows how to send the correct data to the correct computer by the IP
address.

A proxy server is basically a computer on the internet with its own IP address that your computer
knows. When you send a web request, your request goes to the proxy server first. The proxy server
then makes your web request on your behalf, collects the response from the web server, and forwards
you the web page data so you can see the page in your browser.
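
In Python's standard library the client side looks roughly like this (the proxy address is hypothetical; you would substitute a real one):

```python
# Route an HTTP request through a forward proxy (standard library only).
import urllib.request

proxy = urllib.request.ProxyHandler({"http": "http://10.0.0.5:3128"})  # hypothetical proxy
opener = urllib.request.build_opener(proxy)

# The request goes to the proxy, which fetches the page on our behalf.
resp = opener.open("http://example.com/")
print(resp.status)  # 200 if the proxy relayed the response successfully
```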

When the proxy server forwards your web requests, it can make changes to the data you send and
still get you the information that you expect to see. A proxy server can change your IP address, so the
web server doesn’t know exactly where you are in the world. It can encrypt your data, so your data is
unreadable in transit. And lastly, a proxy server can block access to certain web pages, based on IP
address.

Why Should You Use a Proxy Server?

There are several reasons organizations and individuals use a proxy server.

• To control internet usage of employees and children: Organizations and parents set up proxy
servers to control and monitor how their employees or kids use the internet.
• Bandwidth savings and improved speeds: Organizations can also get better overall network
performance with a good proxy server, since proxy servers can cache popular websites (save
a copy locally) and serve repeated requests faster.
• Privacy benefits: Individuals and organizations alike use proxy servers to browse the internet
more privately.
• Improved security: Proxy servers provide security benefits on top of the privacy benefits. You
can configure your proxy server to encrypt your web requests to keep prying eyes from
reading your transactions.

VPN (Virtual Private Network)
A virtual private network, or VPN, is an encrypted connection over the Internet from a device to a
network. The encrypted connection helps ensure that sensitive data is safely transmitted. It prevents
unauthorized people from eavesdropping on the traffic and allows the user to conduct work remotely.
VPN technology is widely used in corporate environments.

It works in a corporate network through encrypted connections made over the Internet. Because the
traffic is encrypted between the device and the network, traffic remains private as it travels. An
employee can work outside the office and still securely connect to the corporate network. Even
smartphones and tablets can connect through a VPN.

Difference between Proxy and VPN

The basic difference between a VPN and a proxy is that a proxy server lets you hide your IP address
and make your network identity anonymous. It provides features like firewalling and network data
filtering, network connection sharing, and data caching. Proxies first became popular in countries
that tried to limit their citizens’ Internet access.

On the other hand, a VPN goes beyond a proxy by creating a tunnel over the public Internet
between computers or hosts. A tunnel is formed by encapsulating the packets using an
encryption protocol.

Encryption protocols such as OpenVPN, IPsec, PPTP, L2TP, SSL, and TLS encrypt the data and add a
new header. This has helped companies cut the cost of leased lines by using the high-speed routing
of the public internet to transfer data more securely.

Secure Web Gateway
A Secure Web Gateway/Security Gateway is an advanced, cloud-delivered or on-premises network
security service. It enforces consistent internet security and compliance policies for all users regardless
of their location or the type of computer or device they are using. These gateway security tools also
provide protection against threats to users who are accessing the internet via the web or are using
any number of web-based applications. They allow organizations to enforce acceptable use policy for
web access, enforce compliance with regulations and prevent data leakage.

As a result, secure web gateways offer a way to keep networks from falling victim to incursions
through internet traffic and malicious websites. They prevent data from such places from entering the
network and causing a malware infection or intrusion.

This form of gateway security is accomplished through malware detection, URL filtering, and other
means. A gateway effectively blocks malware from calling home and acts as a barrier against sensitive
intellectual property being stolen or sensitive data such as social security numbers, credit card
numbers, and medical information getting into the wrong hands. The web gateway secures people,
processes or programs from downloading or accessing external sites, software, or data that could
harm them, or the organization. Additionally, they stand in the way of untoward, unauthorized access
from the outside.

A secure web gateway, then, is a solution that filters unwanted software or malware from user-
initiated web and internet traffic while enforcing corporate and regulatory policy compliance. These
gateways must, at a minimum, include URL filtering, malicious-code detection and filtering, and
application controls for popular web-based applications, such as instant messaging (IM) and Skype.
Native or integrated data leak prevention is also increasingly being included in these products.
Similarly, analysts note convergence with other security technologies such as endpoint protection,
network firewalls, and threat detection.

What does a secure web gateway do?

How does a secure web gateway work? As a web proxy, a secure web gateway terminates and proxies
web traffic (ports 80 and 443), inspects that traffic via a number of security checks, including URL
filtering, advanced machine learning (AML), anti-virus (AV) scanning, sandboxing, data loss prevention
(DLP), cloud access security brokers (CASBs), web isolation and other integrated technologies. Web
gateways apply policies and enforce threat prevention and information security rules based on user,
location, content, and a variety of other factors. This form of gateway security can stop known and
unknown threats in their tracks. This includes zero day and other forms of advanced threats.
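
URL filtering, the simplest of those checks, can be sketched in a few lines (the blocklist entries are made up; real gateways use categorized feeds of millions of URLs):

```python
# Toy URL filter: one of the many checks a secure web gateway applies.
from urllib.parse import urlparse

BLOCKLIST = {"malware.example.net", "phishing.example.org"}  # hypothetical feed

def allowed(url: str) -> bool:
    return urlparse(url).hostname not in BLOCKLIST

print(allowed("https://phishing.example.org/login"))  # False -> blocked
print(allowed("https://example.com/"))                # True  -> proxied onward
```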

Secure Web Gateways vs. Firewalls

Some people have confused secure web gateways with firewalls. So, what is the difference? Secure
web gateways are dedicated cloud services or appliances for web and application security. They are
proxies (meaning they terminate and emulate network traffic). Because of specialization, they can
detect and protect against much more sophisticated and targeted attacks that use the web.

Firewalls have a different function. Firewalls are great at packet-level security, but are not as
sophisticated on the application layer for security. Firewalls typically do not terminate or inspect entire
objects, and many are reliant on stream-based AV scanning as a defense against malware. That's why
evasive threats operating on an application level can easily bypass some firewall defenses. But the
clear distinction between secure web gateways and firewalls is beginning to blur.

Some cloud-delivered secure web gateway services now offer an optional cloud firewall service to
enforce controls on non-web internet traffic.

Threat modeling

Threat modeling is a process by which potential threats, such as structural vulnerabilities or the
absence of appropriate safeguards, can be identified and enumerated, and mitigations can be prioritized.
The purpose of threat modeling is to provide defenders with a systematic analysis of what controls or
defenses need to be included, given the nature of the system, the probable attacker's profile, the most
likely attack vectors, and the assets most desired by an attacker. Threat modeling answers questions
like “Where am I most vulnerable to attack?”, “What are the most relevant threats?”, and “What do I
need to do to safeguard against these threats?”.

Threat modeling methodologies for IT purposes

STRIDE methodology
The STRIDE approach to threat modeling was introduced in 1999 at Microsoft, providing a mnemonic
for developers to find 'threats to our products'. STRIDE, Patterns and Practices, and Asset/entry point
were amongst the threat modeling approaches developed and published by Microsoft. References to
"the" Microsoft methodology commonly mean STRIDE and Data Flow Diagrams.

P.A.S.T.A.
The Process for Attack Simulation and Threat Analysis (PASTA) is a seven-step, risk-centric
methodology for aligning business objectives and technical requirements, taking into account
compliance issues and business analysis. The intent of the method is to provide a dynamic threat
identification, enumeration, and scoring process. Once the threat model is completed, security
subject matter experts develop a detailed analysis of the identified threats.
Finally, appropriate security controls can be enumerated. This methodology is intended to provide an
attacker-centric view of the application and infrastructure from which defenders can develop an asset-
centric mitigation strategy.

Trike
The focus of the Trike methodology is using threat models as a risk-management tool. Within this
framework, threat models are used to satisfy the security auditing process. Threat models are based
on a “requirements model.” The requirements model establishes the stakeholder-defined
“acceptable” level of risk assigned to each asset class. Analysis of the requirements model yields a
threat model from which threats are enumerated and assigned risk values. The completed threat
model is used to construct a risk model based on assets, roles, actions, and calculated risk exposure.

VAST
VAST is an acronym for Visual, Agile, and Simple Threat modeling. The underlying principle of this
methodology is the necessity of scaling the threat modeling process across the infrastructure and
entire SDLC, and integrating it seamlessly into an Agile software development methodology. The
methodology seeks to provide actionable outputs for the unique needs of various stakeholders:
application architects and developers, cybersecurity personnel, and senior executives. The
methodology provides a unique application and infrastructure visualization scheme such that the
creation and use of threat models do not require specific security subject matter expertise.

Threat modeling tools


• IriusRisk
• Microsoft’s free threat modeling tool
• MyAppSecurity
• PyTM
• securiCAD
• SD Elements
• Tutamantic

Runtime Application Self-Protection (RASP)

What is RASP?

RASP is a technology that runs on a server and kicks in when an application runs. It's designed to detect
attacks on an application in real time. When an application begins to run, RASP can protect it from
malicious input or behavior by analyzing both the app's behavior and the context of that behavior. By
using the app to continuously monitor its own behavior, attacks can be identified and mitigated
immediately without human intervention.

RASP incorporates security into a running application wherever it resides on a server. It intercepts all
calls from the app to a system, making sure they're secure, and validates data requests directly inside
the app. Both web and non-web apps can be protected by RASP. The technology doesn't affect the
design of the app because RASP's detection and protection features operate on the server the app's
running on.

Web Application Firewall (WAF)

What is WAF?
A WAF, or Web Application Firewall, helps protect web applications by filtering and monitoring HTTP
traffic between a web application and the Internet. It typically protects web applications from attacks
such as cross-site request forgery (CSRF), cross-site scripting (XSS), file inclusion, and SQL injection,
among others. A WAF is a protocol layer 7 defense (in the OSI model) and is not designed to defend
against all types of attacks. This method of attack mitigation is usually part of a suite of tools which
together create a holistic defense against a range of attack vectors.

By deploying a WAF in front of a web application, a shield is placed between the web application and
the Internet. While a proxy server protects a client machine’s identity by using an intermediary, a WAF
is a type of reverse-proxy, protecting the server from exposure by having clients pass through the WAF
before reaching the server.

A WAF operates through a set of rules often called policies. These policies aim to protect against
vulnerabilities in the application by filtering out malicious traffic. The value of a WAF comes in part
from the speed and ease with which policy modification can be implemented, allowing for faster
response to varying attack vectors; during a DDoS attack, rate limiting can be quickly implemented by
modifying WAF policies.
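
A single WAF policy can be sketched as a signature applied to layer-7 input (the pattern below is deliberately crude; production rule sets such as the OWASP Core Rule Set are far more thorough):

```python
# Toy WAF policy: block requests whose query string matches a
# crude SQL-injection signature.
import re

SQLI_SIGNATURE = re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

def inspect(query_string: str) -> str:
    return "block" if SQLI_SIGNATURE.search(query_string) else "allow"

print(inspect("id=42"))            # allow
print(inspect("id=42 OR 1=1 --"))  # block
```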

HIDS and HIPS

HIDS (Host-Based Intrusion Detection System)

An intrusion detection system (IDS) is a software application that analyzes a network for malicious
activities or policy violations and forwards a report to management. An IDS is used to make
security personnel aware of packets entering and leaving the monitored network. There are two
general types of systems: a host-based IDS (HIDS) and a network-based IDS (NIDS).

A NIDS is often a standalone hardware appliance that includes network detection capabilities. It will
usually consist of hardware sensors located at various points along the network, and may also include
software installed on various computers connected to the network. The NIDS analyzes data
packets both inbound and outbound and offers real-time detection.

A HIDS analyzes the traffic to and from the specific computer on which the intrusion detection
software is installed. A host-based system also has the ability to monitor key system files and any
attempt to overwrite these files.

Depending on the size of the network, either HIDS or NIDS is deployed. For instance, if the
network is small, a NIDS is usually cheaper to implement and requires less
administration and training than HIDS. However, a HIDS is generally more versatile than a NIDS.

Difference between HIDS and HIPS


• The 'D' stands for "Detection". It means that the protection system will be able to detect and
alert upon a possible security event, but it will not attempt to block anything.
• The 'P' stands for "Prevention". This means that when the protection system detects a
possible security event, it will automatically try to block it.

Since an anti-virus's main use is to actively block access to files detected as malicious, it is
nearer to a HIPS than a HIDS.

Are they the same thing? This is a good question, especially since Wikipedia states that "The lines
become very blurred here, as many of the tools overlap in functionality."

Historically speaking: no. An anti-virus's primary goal is to detect and block access to malicious files,
while a HIPS solution has a broader goal: it may track changes on the file system (to detect changes
not necessarily implying any malicious code, like an unexpected settings change, for instance), analyze
log files (system and application logs), check system components to detect any irregularities, and
indeed also try to detect potential malware.

A HIPS solution may be composed of several different pieces of software, with the anti-virus being
only one of them, or one may go toward an all-in-one solution where a single tool bundles all these
functions. The fact is that nowadays end-user anti-virus products are a bit more than simple anti-virus:
over time they have accumulated a very large panel of features, turning them into security suites
which can indeed be perceived as end-user HIPS solutions.

Zero Trust
Zero Trust is a security concept centered on the belief that organizations should not automatically
trust anything inside or outside its perimeters and instead must verify anything and everything trying
to connect to its systems before granting access.

Security Information and Event Management (SIEM)


SIEM stands for Security Information and Event Management. SIEM products provide real-time
analysis of security alerts generated by applications and network hardware.

This term is somewhat of an umbrella for security software packages ranging from Log Management
Systems to Security Log / Event Management, Security Information Management, and Security Event
correlation. More often than not these features are combined for 360-degree protection.
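
At its core, the event-correlation part can be pictured as rules over aggregated logs. A toy sketch (invented log records, arbitrary threshold):

```python
# Toy SIEM correlation rule: alert when one source produces
# repeated failed logins (a brute-force indicator).
from collections import Counter

events = [  # hypothetical normalized log records
    {"src": "10.0.0.7", "type": "login_failed"},
    {"src": "10.0.0.7", "type": "login_failed"},
    {"src": "10.0.0.7", "type": "login_failed"},
    {"src": "10.0.0.9", "type": "login_ok"},
]

failures = Counter(e["src"] for e in events if e["type"] == "login_failed")
alerts = [ip for ip, count in failures.items() if count >= 3]
print(alerts)  # ['10.0.0.7'] -> raise an alert for the analyst
```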

Security Information Management (SIM): Collection and analysis of security-related data from
computer logs. Easy to deploy, with strong log management capabilities. Example tool: OSSIM.

Security Event Management (SEM): Real-time threat analysis, visualization, and incident response.
More complex to deploy, but superior at real-time monitoring. Example tool: NetIQ Sentinel.

Security Information and Event Management (SIEM): As the name suggests, combines SIM and SEM
capabilities. More complex to deploy, with complete functionality. Example tool: SolarWinds Log &
Event Manager.

The best SIEM tools

1. SolarWinds Security Event Manager
2. ManageEngine EventLog Analyzer
3. Splunk Enterprise Security
4. OSSEC
5. LogRhythm NextGen SIEM Platform
6. AT&T Cybersecurity AlienVault Unified Security Management
7. RSA NetWitness Platform
8. IBM QRadar SIEM
9. McAfee Enterprise Security Manager

Continuous Monitoring

Continuous monitoring is a technology and process that IT organizations may implement to enable
rapid detection of compliance issues and security risks within the IT infrastructure. Continuous
monitoring is one of the most important tools available for enterprise IT organizations, empowering
SecOps teams with real-time information from throughout public and hybrid cloud environments and
supporting critical security processes like threat intelligence, forensics, root cause analysis, and
incident response.

Network Security Monitoring


What is network security monitoring?

Network security monitoring is a service that monitors your network (traffic and devices) for security
threats, vulnerabilities, and suspicious behavior. It is an automated process that collects and analyzes
many indicators of potential threats in real time. It alerts you to these threats so you and/or an
emergency response team can take action to resolve them.

As it is automated and continuous, network security monitoring is a key tool for quickly detecting and
responding to threats. And quick response time is crucial to dealing with security threats.

What are the benefits and uses?

The demand for this sort of security monitoring is only increasing as companies need protection
against ransomware, zero-day threats, and other malicious attacks. It’s also important for compliance
reasons – allowing businesses to detect data breaches and gain information and reports about security
threats.

In the normal course of business, there can be security threats around the clock you have to deal with.
This type of monitoring detects those threats and lets you know they exist so you can protect yourself
and mitigate problems.

Here are some uses and benefits:

• Detect threats and breaches that would otherwise go undetected
• Help detect zero-day (new, previously unknown) threats
• Hunt down suspicious behavior
• Get a big-picture view of your business's security events
• Streamline compliance reporting
• Find solutions to address detected security threats
• Provide a 24/7 skilled emergency response team

NetFlow Analysis
The ability to characterize IP traffic and understand how and where it flows is critical for assuring
network availability, performance, and security. NetFlow analysis is the practice of using tools to
perform monitoring, troubleshooting and in-depth inspection, interpretation, and synthesis of traffic
flow data. Analyzing NetFlow traffic data facilitates more accurate capacity planning and ensures
that resources are used appropriately in support of organizational goals.

NetFlow analysis helps network operators determine where to apply Quality of Service (QoS) policies
as well as how to optimize resource usage, and it plays a vital role in network security to detect
Distributed Denial-of-Service (DDoS) attacks and other undesirable network events and activity.
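
Flow records are essentially (source, destination, byte-count) tuples, so a basic analysis such as finding top talkers reduces to aggregation. A sketch with invented records:

```python
# Toy NetFlow-style aggregation: top talkers by bytes sent.
from collections import Counter

flows = [  # hypothetical (src, dst, bytes) flow records
    ("10.0.0.5", "93.184.216.34", 120_000),
    ("10.0.0.5", "93.184.216.34", 80_000),
    ("10.0.0.8", "10.0.0.1", 2_000),
]

bytes_by_src = Counter()
for src, dst, nbytes in flows:
    bytes_by_src[src] += nbytes

print(bytes_by_src.most_common(1))  # [('10.0.0.5', 200000)]
```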

Vulnerability Assessment & Penetration Testing


The Difference Between a Pentest and a Vulnerability Assessment

Companies and people are often misinformed or misguided as to the differences between a
penetration test and a vulnerability assessment. In many cases, upper-level executives ask for a
penetration test but really want a vulnerability assessment, and vice versa. In these scenarios, the
assessment is sometimes improperly labeled, which can be very misleading.

If someone asks for a pentest but really wants a vulnerability assessment, the scope of the
engagement is limited to scanning and enumeration without exploitation – which in my opinion is an
unfortunate and misguided situation for testers.

The idea here is to finally set apart the main differences between the two so that you and your
organization will know what makes most sense for your needs and requirements.

What is a Penetration Test?

A penetration test, or pentest, is the manual process where an ethical hacker conducts an assessment
on a target to uncover vulnerabilities by exploiting them. The goal is to gain unauthorized access
through exploitation which can be used to emulate the intent of a malicious hacker.

A pentest is often broken down into the following phases (a minimal scanning sketch follows the list):

• Reconnaissance
• Scanning and enumeration
• Exploitation (gaining access)
• Post-exploitation (maintaining access)
• Covering tracks
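
As a taste of the scanning-and-enumeration phase, here is a minimal TCP connect scan in Python (localhost only as an example; only scan hosts you are authorized to test):

```python
# Minimal TCP connect scan for the scanning/enumeration phase.
import socket

def scan(host: str, ports: range) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket() as s:               # IPv4 TCP socket
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1", range(20, 30)))  # e.g. [22] if SSH is listening
```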

What is a Vulnerability Assessment?

A vulnerability assessment, or VA, is the process of identifying threats and vulnerabilities on a target
by using automated vulnerability scanners. This sometimes includes a range of manual testing with
additional tools to further evaluate the security of applications or networks and to verify
vulnerabilities discovered by the scanning applications.

Web Application Scanning
Organizations need a Web application scanning solution that can scan for security loopholes in Web-
based applications to prevent would-be hackers from gaining unauthorized access to corporate
information and data. Web applications are proving to be the weakest link in overall corporate
security, even though companies have left no stone unturned in installing the better-known network
security and anti-virus solutions. Quick to take advantage of this vulnerability, hackers have now
begun to use Web applications as a platform for gaining access to corporate data; consequently, the
regular use of a web application scanner is essential.

Bug Bounty
A bug bounty program is a deal offered by many websites, organizations and software developers by
which individuals can receive recognition and compensation for reporting bugs, especially those
pertaining to security exploits and vulnerabilities.

These programs allow the developers to discover and resolve bugs before the general public is aware
of them, preventing incidents of widespread abuse. Bug bounty programs have been implemented by
a large number of organizations, including Mozilla, Facebook, Yahoo!, Google, Reddit, Square,
Microsoft, and the Internet Bug Bounty.

Companies outside the technology industry, including traditionally conservative organizations like the
United States Department of Defense, have started using bug bounty programs. The Pentagon’s use
of bug bounty programs is part of a posture shift that has seen several US Government Agencies
reverse course from threatening white hat hackers with legal recourse to inviting them to participate
as part of a comprehensive vulnerability disclosure framework or policy.

Security Operation Center
What is a Security Operation Center (SOC)?

A Security Operation Center (SOC) is a centralized function within an organization employing people,
processes, and technology to continuously monitor and improve an organization's security posture
while preventing, detecting, analyzing, and responding to cybersecurity incidents.

A SOC acts like the hub or central command post, taking in telemetry from across an organization's IT
infrastructure, including its networks, devices, appliances, and information stores, wherever those
assets reside. The proliferation of advanced threats places a premium on collecting context from
diverse sources. Essentially, the SOC is the correlation point for every event logged within the
organization that is being monitored. For each of these events, the SOC must decide how they will be
managed and acted upon.

Security operations staffing and organizational structure

The function of a security operations team and, frequently, of a security operations center (SOC), is to
monitor, detect, investigate, and respond to cyberthreats around the clock. Security operations teams
are charged with monitoring and protecting many assets, such as intellectual property, personnel
data, business systems, and brand integrity. As the implementation component of an organization's
overall cybersecurity framework, security operations teams act as the central point of collaboration
in coordinated efforts to monitor, assess, and defend against cyberattacks.

SOCs have been typically built around a hub-and-spoke architecture, where a security information and
event management (SIEM) system aggregates and correlates data from security feeds. Spokes of this
model can incorporate a variety of systems, such as vulnerability assessment solutions, governance,
risk and compliance (GRC) systems, application and database scanners, intrusion prevention systems
(IPS), user and entity behavior analytics (UEBA), endpoint detection and remediation (EDR), and threat
intelligence platforms (TIP).

The SOC is usually led by a SOC manager, and may include incident responders, SOC Analysts (levels
1, 2 and 3), threat hunters and incident response manager(s). The SOC reports to the CISO, who in
turn reports to either the CIO or directly to the CEO.

10 key functions performed by the SOC

1. Take Stock of Available Resources


The SOC is responsible for two types of assets—the various devices, processes and applications
they’re charged with safeguarding, and the defensive tools at their disposal to help ensure this
protection.

• What the SOC Protects


The SOC can’t safeguard devices and data they can’t see. Without visibility and control from device
to the cloud, there are likely to be blind spots in the network security posture that can be found and
exploited. So, the SOC’s goal is to gain a complete view of the business’ threat landscape, including
not only the various types of endpoints, servers and software on premises, but also third-party
services and traffic flowing between these assets.

• How the SOC Protects


The SOC should also have a complete understanding of all cybersecurity tools on hand and all
workflows in use within the SOC. This increases agility and allows the SOC to run at peak efficiency.

2. Preparation and Preventative Maintenance


Even the most well-equipped and agile response processes are no substitute for preventing problems
from occurring in the first place. To help keep attackers at bay, the SOC implements preventative
measures, which can be divided into two main categories.

• Preparation
Team members should stay informed on the newest security innovations, the latest trends in
cybercrime, and the development of new threats on the horizon. This research can help inform the
creation of a security roadmap that will provide direction for the company's cybersecurity efforts going
forward, and a disaster recovery plan that will serve as ready guidance in a worst-case scenario.

• Preventative Maintenance
This step includes all actions taken to make successful attacks more difficult, including regularly
maintaining and updating existing systems; updating firewall policies; patching vulnerabilities; and
whitelisting, blacklisting and securing applications.

3. Continuous Proactive Monitoring


Tools used by the SOC scan the network 24/7 to flag any abnormalities or suspicious activities.
Monitoring the network around the clock allows the SOC to be notified immediately of emerging
threats, giving them the best chance to prevent or mitigate harm. Monitoring tools can include a
SIEM or an EDR, the most advanced of which can use behavioral analysis to “teach” systems the
difference between regular day-to-day operations and actual threat behavior, minimizing the
amount of triage and analysis that must be done by humans.

4. Alert Ranking and Management


When monitoring tools issue alerts, it is the responsibility of the SOC to look closely at each one,
discard any false positives, and determine how aggressive any actual threats are and what they
could be targeting. This allows them to triage emerging threats appropriately, handling the most
urgent issues first.

5. Threat Response
These are the actions most people think of when they think of the SOC. As soon as an incident is
confirmed, the SOC acts as first responder, performing actions like shutting down or isolating
endpoints, terminating harmful processes (or preventing them from executing), deleting files, and
more. The goal is to respond to the extent necessary while having as small an impact on business
continuity as possible.

6. Recovery and Remediation


In the aftermath of an incident, the SOC will work to restore systems and recover any lost or
compromised data. This may include wiping and restarting endpoints, reconfiguring systems or, in
the case of ransomware attacks, deploying viable backups in order to circumvent the ransomware.
When successful, this step will return the network to the state it was in prior to the incident.

7. Log Management
The SOC is responsible for collecting, maintaining, and regularly reviewing the log of all network
activity and communications for the entire organization. This data helps define a baseline for
“normal” network activity, can reveal the existence of threats, and can be used for remediation and
forensics in the aftermath of an incident. Many SOCs use a SIEM to aggregate and correlate the data
feeds from applications, firewalls, operating systems and endpoints, all of which produce their own
internal logs.

8. Root Cause Investigation


In the aftermath of an incident, the SOC is responsible for figuring out exactly what happened when,
how and why. During this investigation, the SOC uses log data and other information to trace the
problem to its source, which will help them prevent similar problems from occurring in the future.

9. Security Refinement and Improvement


Cybercriminals are constantly refining their tools and tactics—and in order to stay ahead of them,
the SOC needs to implement improvements on a continuous basis. During this step, the plans
outlined in the security roadmap come to life, but this refinement can also include hands-on
practices such as red-teaming and purple-teaming.

10. Compliance Management


Many of the SOC’s processes are guided by established best practices, but some are governed by
compliance requirements. The SOC is responsible for regularly auditing their systems to ensure
compliance with such regulations, which may be issued by their organization, by their industry, or by
governing bodies. Examples of these regulations include GDPR, HIPAA, and PCI DSS. Acting in
accordance with these regulations not only helps safeguard the sensitive data that the company has
been entrusted with—it can also shield the organization from reputational damage and legal
challenges resulting from a breach.

Threat Intelligence
What Is Threat Intelligence?

Threat intelligence is knowledge that allows you to prevent or mitigate cyberattacks. Rooted in data,
threat intelligence gives you context that helps you make informed decisions about your security by
answering questions like who is attacking you, what their motivations and capabilities are, and what
indicators of compromise in your systems to look for.

Here’s how Gartner defines it:

Threat intelligence is evidence-based knowledge, including context, mechanisms, indicators,
implications, and action-oriented advice about an existing or emerging menace or hazard to assets.
This intelligence can be used to inform decisions regarding the subject’s response to that menace or
hazard.

The best threat intelligence solutions use machine learning to automate data collection and
processing, integrate with your existing solutions, take in unstructured data from disparate sources,
and then connect the dots by providing context on indicators of compromise (IOCs) and the tactics,
techniques, and procedures (TTPs) of threat actors.
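
As a minimal illustration of “connecting the dots,” the sketch below checks outbound connection records against an IOC feed. The feed contents and log fields are hypothetical (the addresses come from documentation ranges); real platforms enrich and score matches rather than simply flagging them.

bad_ips = {"203.0.113.7", "198.51.100.23"}  # IOC feed entries (hypothetical)
bad_domains = {"malicious.example"}

connections = [  # simplified outbound connection log
    {"dst_ip": "203.0.113.7", "domain": "cdn.example"},
    {"dst_ip": "192.0.2.10", "domain": "malicious.example"},
    {"dst_ip": "192.0.2.11", "domain": "intranet.example"},
]

for conn in connections:
    if conn["dst_ip"] in bad_ips or conn["domain"] in bad_domains:
        print(f"IOC hit: {conn}")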

Threat intelligence is often broken down into three subcategories:

1. Strategic: Broader trends typically meant for a non-technical audience
2. Tactical: Outlines of the tactics, techniques, and procedures of threat actors for a more
technical audience
3. Operational: Technical details about specific attacks and campaigns

Why Is Threat Intelligence Important?

At Recorded Future, all of our work is motivated by three core beliefs:

1. Threat intelligence is only useful when it gives you the context you need to make informed
decisions and take action.
2. Threat intelligence is for everyone.
3. People and machines work better together.

Payment Card Industry Data Security Standard (PCI DSS)
The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for
organizations that handle branded credit cards from the major card schemes.

The PCI Standard is mandated by the card brands but administered by the Payment Card Industry
Security Standards Council. The standard was created to increase controls around cardholder data to
reduce credit card fraud. Validation of compliance is performed annually or quarterly: by an
external Qualified Security Assessor (QSA) or a firm-specific Internal Security Assessor (ISA) that
creates a Report on Compliance for organizations handling large volumes of transactions, or by a
Self-Assessment Questionnaire (SAQ) for companies handling smaller volumes.

Requirements

The PCI Data Security Standard specifies twelve requirements for compliance, organized into six
logically related groups called "control objectives". The six groups are:

1. Build and Maintain a Secure Network and Systems
2. Protect Cardholder Data
3. Maintain a Vulnerability Management Program
4. Implement Strong Access Control Measures
5. Regularly Monitor and Test Networks
6. Maintain an Information Security Policy

Each version of PCI DSS (Payment Card Industry Data Security Standard) has divided these six
requirements into a number of sub-requirements differently, but the twelve high-level requirements
have not changed since the inception of the standard. Each requirement/sub-requirement is
additionally elaborated into three sections.

1. Requirement Declaration: It defines the main description of the requirement. PCI DSS
endorsement depends on the proper implementation of the requirements.
2. Testing Processes: The processes and methodologies carried out by the assessor for the
confirmation of proper implementation.
3. Guidance: It explains the core purpose of the requirement and the corresponding content
which can assist in the proper definition of the requirement.

The twelve requirements for building and maintaining a secure network and systems can be
summarized as follows:

1. Installing and maintaining a firewall configuration to protect cardholder data. The purpose of
a firewall is to scan all network traffic and block untrusted networks from accessing the system.
2. Changing vendor-supplied defaults for system passwords and other security parameters.
These passwords are easily discovered through public information and can be used by
malicious individuals to gain unauthorized access to systems.
3. Protecting stored cardholder data. Encryption, hashing, masking and truncation are methods
used to protect cardholder data (a small sketch of these methods follows this list).
4. Encrypting transmission of cardholder data over open, public networks. Strong encryption,
including using only trusted keys and certificates, reduces the risk of being targeted by malicious
individuals through hacking.
5. Protecting all systems against malware and performing regular updates of anti-virus software.
Malware can enter a network through numerous ways, including Internet use, employee
email, mobile devices or storage devices. Up-to-date anti-virus software or supplemental anti-
malware software will reduce the risk of exploitation via malware.
6. Developing and maintaining secure systems and applications. Vulnerabilities in systems and
applications allow unscrupulous individuals to gain privileged access. Security patches should
be installed promptly to fix vulnerabilities and prevent exploitation and compromise of
cardholder data.
7. Restricting access to cardholder data to only authorized personnel. Systems and processes
must be used to restrict access to cardholder data on a “need to know” basis.
8. Identifying and authenticating access to system components. Each person with access to
system components should be assigned a unique identification (ID) that allows accountability
of access to critical data systems.
9. Restricting physical access to cardholder data. Physical access to cardholder data or systems
that hold this data must be secure to prevent the unauthorized access or removal of data.
10. Tracking and monitoring all access to cardholder data and network resources. Logging
mechanisms should be in place to track user activities that are critical to prevent, detect or
minimize impact of data compromises.
11. Testing security systems and processes regularly. New vulnerabilities are continuously
discovered. Systems, processes and software need to be tested frequently to uncover
vulnerabilities that could be used by malicious individuals.
12. Maintaining an information security policy for all personnel. A strong security policy includes
making personnel understand the sensitivity of data and their responsibility to protect it.
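
The sketch referenced in requirement 3: masking, truncation, and one-way hashing applied to a well-known test primary account number (PAN) in Python. This is illustrative only; which digits may be displayed or retained, and which cryptographic methods are acceptable, is dictated by the current PCI DSS text, not by this example.

import hashlib
import secrets

pan = "4111111111111111"  # well-known test card number

masked = pan[:6] + "*" * (len(pan) - 10) + pan[-4:]  # masking for display
truncated = pan[-4:]                                 # truncation for storage
salt = secrets.token_hex(16)
hashed = hashlib.sha256((salt + pan).encode()).hexdigest()  # salted one-way hash

print(masked)     # 411111******1111
print(truncated)  # 1111
print(hashed)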

Compliance levels

All companies that are subject to PCI DSS must be PCI compliant. There are four levels of PCI
compliance, based on how many transactions a company processes per year, as well as other details
about the level of risk assessed by payment brands.

At a high level, the levels are as follows:

• Level 1 – Over 6 million transactions annually
• Level 2 – Between 1 and 6 million transactions annually
• Level 3 – Between 20,000 and 1 million transactions annually
• Level 4 – Less than 20,000 transactions annually

Each card issuer maintains their own table of compliance levels.

Sarbanes–Oxley Act (SOX)
What is SOX Compliance?

The United States Congress passed the Sarbanes-Oxley Act in 2002 and established rules to protect
the public from fraudulent or erroneous practices by corporations and other business entities. The
goal of the legislation is to increase transparency in the financial reporting by corporations and to
require a formalized system of checks and balances in each company.

SOX compliance is not just a legal obligation but also a good business practice. Of course, companies
should behave ethically and limit access to internal financial systems. But implementing SOX financial
security controls has the side benefit of also helping to protect the company from data theft by insider
threat or cyberattack. SOX compliance can encompass many of the same practices as any data security
initiative.

History of SOX Compliance

Senator Paul Sarbanes (D-MD) and Representative Michael G. Oxley (R-OH-4) wrote this bill in
response to several high-profile corporate scandals – Enron, WorldCom, and Tyco in particular.

The stated goal of SOX is “to protect investors by improving the accuracy and reliability of corporate
disclosures.” The bill established responsibilities for Boards and officers of publicly traded companies
and set criminal penalties for failure to comply. The bill passed by overwhelming majorities in both
the House and Senate – only three members voted to oppose.

Who Must Comply with SOX?

SOX applies to all publicly traded companies in the United States as well as wholly-owned subsidiaries
and foreign companies that are publicly traded and do business in the United States. SOX also
regulates accounting firms that audit companies that must comply with SOX.

Private companies, charities, and non-profits are generally not required to comply with all of SOX.
Private organizations shouldn’t knowingly destroy or falsify financial data, and SOX does have
language to penalize those companies that do. Private companies that are planning an Initial Public
Offering (IPO) should prepare to comply with SOX before they go public.

SOX Compliance Requirements

Here are the most important SOX requirements:

• CEOs and CFOs are directly responsible for the accuracy, documentation, and submission of
all financial reports as well as the internal control structure to the SEC. Officers risk jail time
and monetary penalties for compliance failures – intentional or not.
• SOX requires an Internal Control Report that states management is responsible for an
adequate internal control structure for their financial records. Any shortcomings must be
reported up the chain as quickly as possible for transparency.
• SOX requires formal data security policies, communication of data security policies, and
consistent enforcement of data security policies. Companies should develop and implement
a comprehensive data security strategy that protects and secures all financial data stored and
utilized during normal operations.
• SOX requires that companies maintain and provide documentation proving they are compliant
and that they are continuously monitoring and measuring SOX compliance objectives.

SOX Compliance Audits

SOX mandates companies complete yearly audits and make those results easily available to any
stakeholders. Companies hire independent auditors to complete the SOX audits, which must be
separate from any other audits to prevent a conflict of interest.

The primary purpose of the SOX compliance audit is the verification of the company’s financial
statements. Auditors compare past statements to the current year and determine if everything is
copasetic. Auditors can also interview personnel and verify that compliance controls are sufficient to
maintain SOX compliance standards.

Preparing for a SOX Compliance Audit

Make sure to update your reporting and internal auditing systems so you can pull any report the
auditor requests quickly. Verify that your SOX compliance software systems are currently working as
intended so there will be no surprises with those systems.

SOX Internal Controls Audit

Your SOX auditor will investigate four internal controls as part of the yearly audit. To be SOX compliant,
it is crucial to demonstrate your capability in the following controls:

• Access: Access means both physical controls (doors, badges, locks on file cabinets) and
electronic controls (login policies, least privileged access, and permissions audits).
Maintaining a least permissive access model means each user only has the access necessary
to do their jobs and is a requirement of SOX compliance.

• Security: Security in this context means that you can demonstrate protections against data
breaches. How you choose to implement this control is up to you.

• Data Backup: Maintain SOX compliant off-site backups of all of your financial records.

• Change Management: Have defined processes to add and maintain users, install new
software, and make any changes to databases or applications that manage your company
financials.

Section 302 and 404 of the Sarbanes-Oxley Act of 2002

• Section 302 of the SOX Act of 2002 is a mandate that requires senior management to
certify the accuracy of the reported financial statement.
• Section 404 of the SOX Act of 2002 is a requirement that management and auditors
establish internal controls and reporting methods on the adequacy of those controls. The
resulting internal control report must include:
• A statement of management’s responsibility for establishing and maintaining
adequate internal control over financial reporting;
• A statement identifying the framework used by management to evaluate the
effectiveness of internal control;
• Management’s assessment of the effectiveness of internal control as of the end of
the company’s most recent fiscal year end; and
• A statement that the company’s external auditor has issued an attestation report
on management’s assessment

Sarbanes-Oxley Act: Key Provisions

SOX Section 302 - Corporate Responsibility for Financial Reports


a. CEO and CFO must review all financial reports.
b. Financial report does not contain any misrepresentations.
c. Information in the financial report is "fairly presented".
d. CEO and CFO are responsible for the internal accounting controls.
e. CEO and CFO must report any deficiencies in internal accounting controls, or any fraud
involving the management of the audit committee.
f. CEO and CFO must indicate any material changes in internal accounting controls.

SOX Section 401: Disclosures in Periodic Reports: All financial statements are required to be accurate
and presented in a manner that neither contains incorrect statements nor omits material information.
Such financial statements should also include all material off-balance sheet liabilities, obligations,
and transactions.

SOX Section 404: Management Assessment of Internal Controls: All annual financial reports must
include an Internal Control Report stating that management is responsible for an "adequate" internal
control structure, and an assessment by management of the effectiveness of the
control structure. Any shortcomings in these controls must also be reported. In addition, registered
external auditors must attest to the accuracy of the company management’s assertion that internal
accounting controls are in place, operational and effective.

SOX Section 409 - Real Time Issuer Disclosures: Companies are required to disclose, on an almost real-
time basis, information concerning material changes in their financial condition or operations.

SOX Section 802 - Criminal Penalties for Altering Documents: This section specifies the penalties for
knowingly altering documents in an ongoing legal investigation, audit, or bankruptcy proceeding.

SOX Section 806 - Protection for Employees of Publicly Traded Companies Who Provide Evidence of
Fraud: This section deals with whistleblower protection.

SOX Section 902 - Attempts & Conspiracies to Commit Fraud Offenses: It is a crime for any person to
corruptly alter, destroy, mutilate, or conceal any document with the intent to impair the object's
integrity or availability for use in an official proceeding.

SOX Section 906 - Corporate Responsibility for Financial Reports: Section 906 addresses criminal
penalties for certifying a misleading or fraudulent financial report. Under SOX 906, penalties can be
upwards of $5 million in fines and 20 years in prison.

Health Insurance Portability and Accountability Act (HIPAA)
&
Health Information Technology for Economic and Clinical
Health (HITECH)

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) was enacted by the United
States Congress and signed by President Bill Clinton in 1996

• It was created primarily to modernize the flow of healthcare information, stipulate how Personally
Identifiable Information maintained by the healthcare and healthcare insurance industries should be
protected from fraud and theft, and address limitations on healthcare insurance coverage

• The Privacy Rule of the HIPAA Act establishes comprehensive protections for
medical privacy

HIPAA is a federal law enacted to:


• Ensure the privacy of an individual’s protected health information (PHI)
• Provide security for electronic and physical exchange of PHI
• Provide for individual rights regarding PHI

• Personal identifiers coupled with a broad range of health, health care or health care
payment information create PHI
There are 18 elements of PHI (a minimal redaction sketch follows this list):
1. Name
2. Address
3. Dates related to an individual
4. Telephone numbers
5. Fax number
6. Email address
7. Social Security number (USA)
8. Medical record number
9. Health plan beneficiary number
10. Account number
11. Certificate/license number
12. Any vehicle or other device serial number
13. Device identifiers or serial numbers
14. Web URL
15. Internet Protocol (IP) address
16. Finger or voice prints
17. Photographic images
18. Any other characteristic that would uniquely identify the individual
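
The redaction sketch promised above covers just three of the 18 identifier classes (email, phone, SSN) with regular expressions. These patterns are deliberately simplistic and nowhere near sufficient for HIPAA de-identification, which requires either the full Safe Harbor treatment of all 18 elements or expert determination.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Call 555-867-5309 or mail jane@example.org; SSN 123-45-6789."
print(redact(note))  # Call [PHONE] or mail [EMAIL]; SSN [SSN].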

HITRUST – Health Information Trust Alliance [www.hitrustalliance.net], is a privately held company
located in Frisco, Texas, United States that, in collaboration with healthcare, technology and
information security leaders, has established the HITRUST CSF (Common Security Framework), a
comprehensive, prescriptive, and certifiable framework, that can be used by all organizations towards
HIPAA Compliance.

• HITRUST, integrated and harmonized the requirements by using ISO/IEC 27001 as the basis for the
CSF structure and adding in ISO/IEC 27002, HIPAA, NIST SP 800-53 and other requirements.

• HITRUST CSF contains 149 security and privacy controls parsed amongst 46 control objectives within
14 broad control categories.

HIPAA Compliance Steps

HIPAA: Violations and Requirements


Most common causes of a data breach that can lead to a HIPAA violation:
• Theft of equipment that stores PHI
• Hacking/ malware/ ransomware
• Office break-in
• Sending PHI to the wrong person or business partner
• Discussing PHI in public
• Posting PHI to social media

Fine Levels for HIPAA Compliance Violations


• “Did Not Know” – Fines range from $100–$50,000 per incident with a yearly maximum of
$1,500,000.
• “Reasonable Cause” – Fines range from $1,000–$50,000 per incident with the same yearly
maximum.
• If the company took steps to correct their negligent compliance behaviors, the fine is
$10,000 – $50,000 per incident.
• If the Compliance Auditor rules that the company did not take corrective action, they will
fine the company $50,000 per incident.

HITECH

HITECH is the acronym behind the Health Information Technology for Economic and Clinical Health
Act of 2009. The legislation, signed into law by President Obama on February 17, 2009, was intended to
accelerate the transition to electronic health records (EHR). It was actually included within the
American Recovery and Reinvestment Act of 2009 (ARRA), which was geared toward stimulating the
economy.

Another result of HITECH has to do with the Office of the National Coordinator for Health Information
Technology (ONC), which has been part of the HHS Department since 2004. The ONC became
responsible for the administration and creation of standards related to HITECH.

“HITECH stipulated that beginning in 2011, healthcare providers would be offered financial incentives
for demonstrating ‘meaningful use’ of EHRs until 2015,” noted Scot Petersen in TechTarget, “after
which time penalties may be levied for failing to demonstrate such use.”

As you can see, the HITECH law is geared more toward the adoption of electronic health records itself
than it is toward specific security rules for digital data. That’s why HIPAA is typically more a point of
focus when looking for digital systems. However, many HIPAA hosting providers and similar entities
get certified for compliance with HITECH as well as HIPAA to demonstrate their knowledge of and
adherence to all federal healthcare law.

There is an overlap between these two laws. However, HITECH serves as somewhat of an addendum
to HIPAA. It mandates that any standards for technology arising from HITECH must meet the HIPAA
Privacy and Security Rules (described above).

Federal Financial Institutions Examination Council (FFIEC)
The Federal Financial Institutions Examination Council (FFIEC) is a formal U.S. government
interagency body composed of five banking regulators that is "empowered to prescribe uniform
principles, standards, and report forms to promote uniformity in the supervision of financial
institutions". It also oversees real estate appraisal in the United States. Its regulations are contained
in title 12 of the Code of Federal Regulations.

Composition

FFIEC includes five banking regulators—the Federal Reserve Board of Governors (FRB), the Federal
Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), the Office of
the Comptroller of the Currency (OCC), and the Consumer Financial Protection Bureau (CFPB).

History

FFIEC was established March 10, 1979, pursuant to title X of the Financial Institutions Regulatory and
Interest Rate Control Act of 1978 (FIRA).

Housing and real estate

The FFIEC was given additional statutory responsibilities by section 340 of the Housing and Community
Development Act of 1980 to facilitate public access to data that depository institutions must disclose
under the Home Mortgage Disclosure Act of 1975 (HMDA) and the aggregation of annual HMDA data,
by census tract, for each metropolitan statistical area (MSA). In accordance with HMDA, the FFIEC
established an advisory State Liaison Committee composed of five representatives of state supervisory
agencies. The HMDA requires “most lenders to identify the race, sex, and income of loan applicants
and borrowers”, so the FFIEC is able to deduce things like “the number of mortgages issued to black
and Hispanic borrowers rose sharply”, as it did in 1993. In 2006, the State Liaison Committee was
added to the Council as a voting member.

The Appraisal Subcommittee (ASC) was established within the FFIEC pursuant to title XI of the
Financial Institutions Reform, Recovery and Enforcement Act of 1989 (FIRREA). The ASC oversees The
Appraisal Foundation, whose work is accomplished by three independent boards—the Appraiser
Qualifications Board (AQB), the Appraisal Standards Board (ASB), and the Appraisal Practices Board
(APB), who collectively regulate real estate appraisal in the United States.

Cybersecurity

Comptroller of the Currency and FFIEC Chair Thomas J. Curry stated on May 8, 2014, that "helping to
make banks less vulnerable and more resilient to cyber-attacks" has been one of his top priorities. In
June 2014 FFIEC launched a new webpage on cybersecurity and announced that it was initiating a pilot
for 500 member institutions that will focus on how these institutions manage cybersecurity and how
prepared they are to mitigate cyber risks.

On June 30, 2015 the FFIEC released the FFIEC Cybersecurity Assessment Tool to enable regulated
financial institutions to assess their cybersecurity readiness. This tool may be used as a self-
assessment. Regulators may also review the completed assessment during their examination.

Family Educational Rights and Privacy Act (FERPA)
What is FERPA Compliance?

The Family Educational Rights and Privacy Act (FERPA) is a set of standards in place to help protect
the personal information of students and their families. It applies mainly to educational organizations
that receive certain kinds of funding from the U.S. Department of Education. If your online store works
with these kinds of institutions, it may be necessary for you to meet FERPA compliance standards so
the organization stays in good standing with the law.

What is the purpose of FERPA?

Created in 1974, FERPA is a set of regulations enacted to keep student records secure and private
while also giving students access to their own records. Educational institutions that retain education
records must give ultimate control of these records to the students they concern.

Most relevant for online stores that may have access to these kinds of records is the security piece. As
an online store, you’re expected to have the right security in place to protect the personally
identifiable information (PII) of your customers.

Why would you need to be FERPA compliant?

FERPA compliance applies to institutions and relevant vendors, which means if you sell textbooks,
food, or other goods within the purview of a school, you’ll need to meet the requirements as set out
by FERPA.

For most stores, complying with these data protection regulations should require little extra effort:
the same safeguards you’d put in place for FERPA compliance should already be in place for PCI
compliance. Protecting your customer’s data is so important to the health of your business that
complying with FERPA regulations should be a no-brainer.

How do you achieve FERPA compliance?

To be compliant with FERPA, you should have a strong security protocol in place. Here are a few places
you can focus your efforts to make sure your store is FERPA compliant.

• Data encryption and SSL certificates should be used to protect information storage and
transmission.
• Your network should have a firewall and use network access security measures meant to stop
hacking or other attacks.
• You should have an administrator who enacts access control policies and has the power to
give or restrict employee access to certain types of data based on need.
• Hardware that stores data should be protected by antivirus and other third-party software as
necessary. You should also have the power to wipe data from hardware like mobile devices or
laptops remotely.
• Clearly communicate to people what kinds of information you’re storing about them and give
them the ability to opt out of data collection for things like content personalization.

NERC CIP
What Is NERC CIP Compliance?

In 2007, the North American Electric Reliability Corporation (NERC) was named the electric reliability
organization for North America. NERC develops and enforces reliability standards for the supply of
power in both the United States and Canada, as well as northern Baja California, Mexico. NERC’s
programs impact more than 1,900 bulk power system owners and operators, and focus on reliability,
assurance, learning, and risk-based approaches to improve the reliability of the electricity grid across
the continent.

NERC administers a Critical Infrastructure Protection (CIP) program, encompassed in CIP standards
001 to 014. These standards address the security of cyber assets that are critical to the operation of
the North American electricity grid. CIP compliance is mandatory.

What’s Covered in the Standards?

The various CIP standards cover everything from identifying and categorizing assets, to reporting
sabotage, to ensuring security plans that limit physical and electronic access are in place. CIP-008
covers reporting cyber security incidents, and CIP-009 focuses on recovery plans and techniques
following breaches. CIP-010 and CIP-011, focusing on change and vulnerability management and
information protection, are also enforceable.

In 2013, Version 5 of the CIP standards was approved, and implementation began in 2014. NERC
assisted industries transitioning from the older Version 3 to the new Version 5. This implementation
is ongoing, as many of the standards have been revised.

Federal Information Security Management Act (FISMA)
The Federal Information Security Management Act (FISMA) is a United States federal law passed in
2002 that made it a requirement for federal agencies to develop, document, and implement an
information security and protection program. FISMA is part of the larger E-Government Act of 2002
introduced to improve the management of electronic government services and processes.

FISMA is one of the most important regulations for federal data security standards and guidelines. It
was introduced to reduce the security risk to federal information and data while managing federal
spending on information security. To achieve these aims, FISMA established a set of guidelines and
security standards that federal agencies have to meet. The scope of FISMA has since increased to
include state agencies administering federal programs like Medicare. FISMA requirements also apply
to any private businesses that are involved in a contractual relationship with the government.

FISMA Compliance Requirements

The National Institute of Standards and Technology (NIST) plays an important role in the FISMA
Implementation Project launched in January 2003, which produced the key security standards and
guidelines required by FISMA. These publications include FIPS 199, FIPS 200, and the NIST 800 series.
The top FISMA requirements include:

• Information System Inventory: Every federal agency or contractor working with the
government must keep an inventory of all the information systems utilized within the
organization. In addition, the organization must identify the integrations between these
information systems and other systems within their network.

• Risk Categorization: Organizations must categorize their information and information
systems in order of risk to ensure that sensitive information and the systems that use it are
given the highest level of security. FIPS 199 “Standards for Security Categorization of Federal
Information and Information Systems” defines a range of risk levels within which
organizations can place their various information systems (a minimal categorization sketch
follows this list).

• System Security Plan: FISMA requires agencies to create a security plan which is regularly
maintained and kept up to date. The plan should cover things like the security controls
implemented within the organization, security policies, and a timetable for the introduction
of further controls.

• Security Controls: NIST SP 800-53 outlines an extensive catalog of suggested security controls
for FISMA compliance. FISMA does not require an agency to implement every single control;
instead, they are instructed to implement the controls that are relevant to their organization
and systems. Once the appropriate controls are selected and the security requirements have
been satisfied, the organizations must document the selected controls in their system security
plan.

• Risk Assessments: Risk assessments are a key element of FISMA’s information security
requirements. NIST SP 800-30 offers some guidance on how agencies should conduct risk
assessments. According to the NIST guidelines, risk assessments should be three-tiered to
identify security risks at the organizational level, the business process level, and the
information system level.

• Certification and Accreditation: FISMA requires program officials and agency heads to
conduct annual security reviews to ensure risks are kept to a minimum level. Agencies can
achieve FISMA Certification and Accreditation (C&A) through a four-phased process which
includes initiation and planning, certification, accreditation, and continuous monitoring.
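
The categorization sketch referenced under Risk Categorization above: FIPS 199 rates each information type for confidentiality, integrity, and availability, and a common simplification (the “high water mark” used when selecting a FIPS 200 baseline) takes the highest impact level present. The information types below are hypothetical.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(impact_triples):
    """impact_triples: (confidentiality, integrity, availability) per info type."""
    worst = max(LEVELS[level] for triple in impact_triples for level in triple)
    return next(name for name, rank in LEVELS.items() if rank == worst)

# A hypothetical system processing two information types:
print(categorize([("low", "moderate", "low"),
                  ("moderate", "moderate", "low")]))  # -> moderate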

FedRAMP
The Federal Risk and Authorization Management Program, or FedRAMP, is a program by which the
U.S. federal government determines whether cloud products and services are secure enough to be
used by federal agencies. While the process for getting the FedRAMP seal of approval is complex, it
can ultimately be lucrative for companies that are certified, not least because it signals a commitment
to security to non-government customers as well.

FedRAMP levels and FedRAMP controls

Levels and controls are two crucial concepts for understanding how FedRAMP works. Controls are the
specific technologies and techniques used to ensure the security and privacy of data stored in the
cloud; the different controls are outlined in detail in NIST Special Publication 800-53, and there's a
top-level overview on the website of the General Services Administration (GSA).

CSPs can choose, based on which controls they implement, to offer different levels of security: low,
moderate or high. The levels in turn determine what kinds of data can be stored or accessed on those
systems. Standard Fusion has a good overview of what the different levels mean and what controls
are required for each.

EU-U.S. Privacy Shield Framework
The EU-U.S. Privacy Shield Framework was designed by the U.S. Department of Commerce and
European Commission to provide companies on both sides of the Atlantic with a mechanism to
comply with EU data protection requirements when transferring personal data from the European
Union to the United States in support of transatlantic commerce.

The Privacy Shield Framework provides a set of robust and enforceable protections for the personal
data of EU individuals. The Framework provides transparency regarding how participating companies
use personal data, strong U.S. government oversight, and increased cooperation with EU data
protection authorities (DPAs).

The European Commission deemed the Privacy Shield Framework adequate to enable data
transfers under EU law. Commerce will allow companies time to review the Framework and
update their compliance programs and then, on August 1, will begin accepting certifications.

To join the Privacy Shield Framework, a U.S.-based company will be required to self-certify to the
Department of Commerce and publicly commit to comply with the Framework’s requirements.
While joining the Privacy Shield Framework will be voluntary, once an eligible company makes the
public commitment to comply with the Framework’s requirements, the commitment will become
enforceable under U.S. law. All companies interested in joining the Privacy Shield Framework
should review its requirements in their entirety. To assist in that effort, key new requirements for
participating companies are outlined here.

Key New Requirements for Participating Companies


Informing individuals about data processing
• A Privacy Shield participant must include in its privacy policy a declaration of the
organization’s commitment to comply with the Privacy Shield Principles, so that the
commitment becomes enforceable under U.S. law.
• When a participant’s privacy policy is available online, it must include a link to the
Department of Commerce’s Privacy Shield website and a link to the website or complaint
submission form of the independent recourse mechanism that is available to investigate
individual complaints.
• A participant must inform individuals of their rights to access their personal data, the
requirement to disclose personal information in response to lawful requests by public
authorities, which enforcement authority has jurisdiction over the organization’s
compliance with the Framework, and the organization’s liability in cases of onward
transfer of data to third parties.
Providing free and accessible dispute resolution
• Individuals may bring a complaint directly to a Privacy Shield participant, and the
participant must respond to the individual within 45 days.
• Privacy Shield participants must provide, at no cost to the individual, an independent
recourse mechanism by which each individual’s complaints and disputes can be
investigated and expeditiously resolved.
• If an individual submits a complaint to a data protection authority (DPA) in the EU, the
Department of Commerce has committed to receive, review and undertake best efforts to
facilitate resolution of the complaint and to respond to the DPA within 90 days.
• Privacy Shield participants must also commit to binding arbitration at the request of the
individual to address any complaint that has not been resolved by other recourse and
enforcement mechanisms.

General Data Protection Regulation GDPR

What does GDPR want?


• Protection of personal data and privacy of EU citizens
• Restriction on export of personal data outside the EU

When?
• The regulation was adopted on 27th April, 2016
• Companies must be able to show compliance by 25th May, 2018

What does GDPR protect?


• Personally Identifiable Information (PII) is any data that can be used to identify a specific
individual such as
• Basic identity information – name, address and ID numbers, and email addresses.
• Web data – location, IP address, cookie data, RFID tags, login IDs, social media posts
or digital images, geolocation, biometric and behavioral data.
• Health and Genetic Data
• Racial or Ethnic Data
• Political Opinions
• Sexual Orientation

The Rights of a Data Subject


Any resident of EU can demand the following:
• Right to Access – Find out what information about him/her you hold, where did it come from,
when it was used and who all used it.
• Right to be Forgotten – Ask for all records and all traces of him/her to be removed (a minimal
erasure sketch follows this list). This applies when
• The personal data is no longer necessary in relation to the purpose for which it was
collected.
• The individual specifically withdraws consent to processing
• Personal data has been unlawfully processed
• The data must be erased in order for a controller to comply with a legal obligation (for
example, the deletion of certain data after a set period of time)
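
A minimal sketch of servicing an erasure request under the conditions just listed. The data model, the grounds checked, and the legal-hold flag are hypothetical simplifications; a real implementation must also erase backups, downstream copies, and processor-held data.

records = {
    "user-42": {"email": "a@example.org", "consent_withdrawn": True,
                "legal_hold": False},
}

def erase(subject_id: str) -> bool:
    record = records.get(subject_id)
    if record is None:
        return False
    if record["legal_hold"]:         # retention required by a legal obligation
        return False
    if record["consent_withdrawn"]:  # one of the erasure grounds above
        del records[subject_id]
        return True
    return False

print(erase("user-42"), records)  # True {}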

Who will be responsible for Compliance?


• Data Controller – Is the user/consumer of the personal data – a company that wants to act
on it.
• Data Processor – The company or outsourced partner who seeks and works on the data as a
service provider to the Data Controller
• Data Protection Officer – An appointed officer responsible for responding to all queries and
ensuring compliance. Could be an internal officer or an external officer.

Which Company does this apply to?


Any company that stores or processes personal information about EU citizens within EU states must
comply if it has:
• A presence in an EU country
• No presence in the EU but it processes personal data of EU residents.
• More than 250 employees
• Fewer than 250 employees but its data processing impacts the right and freedom of data
subjects

What if you are not GDPR Compliant?
• Steep penalties of up to €20 million or 4% of global turnover, whichever is higher, for non-
compliance.

Steps for GDPR Compliance

California Consumer Privacy Act (CCPA)

The California Consumer Privacy Act (CCPA) is a state statute intended to enhance privacy rights and
consumer protection for residents of California, United States. The bill was passed by the California
State Legislature and signed into law by Jerry Brown, Governor of California, on June 28, 2018, to
amend Part 4 of Division 3 of the California Civil Code. Officially called AB-375, the act was introduced
by Ed Chau, member of the California State Assembly, and State Senator Robert Hertzberg.

Amendments to the CCPA, in the form of Senate Bill 1121, were passed on September 13, 2018.
Additional substantive amendments were signed into law on October 11, 2019. The CCPA became
effective on January 1, 2020.

Intentions of the Act

The intentions of the Act are to provide California residents with the right to:

1. Know what personal data is being collected about them.
2. Know whether their personal data is sold or disclosed and to whom.
3. Say no to the sale of personal data.
4. Access their personal data.
5. Request a business to delete any personal information about a consumer collected from that
consumer.
6. Not be discriminated against for exercising their privacy rights.

Compliance

The CCPA applies to any business, including any for-profit entity that collects consumers' personal
data, which does business in California, and satisfies at least one of the following thresholds (a
minimal applicability check follows this list):

• Has annual gross revenues in excess of $25 million;
• Buys, receives, or sells the personal information of 50,000 or more consumers or households; or
• Earns more than half of its annual revenue from selling consumers' personal information.
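
A minimal sketch of that threshold test in Python. The function signature is hypothetical; in practice, the hard part is reliably determining the inputs (revenue, record counts, and the share of revenue from selling personal information).

def ccpa_applies(revenue_usd: float, consumer_records: int,
                 revenue_share_from_selling_pi: float) -> bool:
    """True if a for-profit business doing business in California is covered."""
    return (revenue_usd > 25_000_000
            or consumer_records >= 50_000
            or revenue_share_from_selling_pi > 0.5)

print(ccpa_applies(3_000_000, 80_000, 0.1))  # True: record threshold met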

Organizations are required to "implement and maintain reasonable security procedures and practices
in protecting consumer data."

Responsibility and accountability

• Implement processes to obtain parental or guardian consent for minors under 13 years and
the affirmative consent of minors between 13 and 16 years before sharing their data
• “Do Not Sell My Personal Information” link on the home page of the website of the business,
that will direct users to a web page enabling them, or someone they authorize, to opt out of
the sale of the resident's personal information
• Designate methods for submitting data access requests, including, at a minimum, a toll-free
telephone number
• Update privacy policies with newly required information, including a description of California
residents' rights
• Avoid requesting opt-in consent for 12 months after a California resident opts out

Sanctions and remedies

The following sanctions and remedies can be imposed:

• Companies, activists, associations, and others can be authorized to exercise opt-out rights on
behalf of California residents
• Companies that become victims of data theft or other data security breaches can be ordered
in civil class action lawsuits to pay statutory damages between $100 to $750 per California
resident and incident, or actual damages, whichever is greater, and any other relief a court
deems proper, subject to an option of the California Attorney General's Office to prosecute
the company instead of allowing civil suits to be brought against it
• A fine of up to $7,500 for each intentional violation and $2,500 for each unintentional violation
• Privacy notices must be accessible and have alternative format access clearly called out.

Definition of personal data

CCPA defines personal information as information that identifies, relates to, describes, is reasonably
capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular
consumer or household such as a real name, alias, postal address, unique personal identifier, online
identifier, Internet Protocol address, email address, account name, social security number, driver's
license number, passport number, or other similar identifiers.

An additional definition covers information that identifies, relates to, describes, or is capable of being
associated with a particular individual, including, but not limited to, their name, signature, Social
Security number, physical characteristics or description, address, telephone number, passport number,
driver's license or state identification card number, insurance policy number, education, employment,
employment history, bank account number, credit card number, debit card number, or any other
financial information, medical information, or health insurance information.

The CCPA does not consider publicly available information to be personal.

Key differences between CCPA and the European Union's GDPR include the scope and territorial reach
of each, definitions related to protected information, levels of specificity, and an opt-out right for sales
of personal information. CCPA differs in definition of personal information from GDPR as in some cases
the CCPA only considers data that was provided by a consumer and excludes personal data that was
purchased by, or acquired through, third parties. The GDPR does not make that distinction and covers
all personal data regardless of source (in the case of sensitive personal information, this does
not apply if the information was manifestly made public by the data subject themselves, under the
exception in Art. 9(2)(e)). As such, the definition in the GDPR is much broader than that in the CCPA.

SSAE 16/ SSAE 18/ SOC 1/ SOC 2/ SOC 3
By the Sarbanes–Oxley Act of 2002, public companies are made responsible for the maintenance of
an effective system of controls over financial reporting. This government emphasis on mitigating risk
in financial auditing and controls is the primary reason companies avoid vendors that might
negatively impact their compliance status.

As such, organizations are making their vendors obtain System and Organization Controls (SOC)
attestation reports, as mandated by SSAE 16 and SSAE 18.

A SOC report is a verifiable auditing report performed by a Certified Public Accountant (CPA)
designated by the American Institute of Certified Public Accountants (AICPA). It reports on a CPA's
examination of the system of controls at a service organization. A SOC report tells us whether
financial audits are performed; whether audits are done as per the controls defined by the serviced
company; and how effective the audits performed are.

In brief, a SOC report is a compendium of the safeguards built into an organization's control base,
and a check on whether those safeguards actually work.

If you are an organization that is regulated by law, you should be asking your vendors to provide a
SOC report; this becomes even more critical for vendors that deal with the high-risk operations of
your business.

Some of the vendors provide a SOC 1 report, while some give SOC 2. Sometimes it might also happen
that some of the vendors provide a combination of both. Not just this, but SOC 3 reports too exist.
The differences are vast and are not evident to those people for whom Systems and Organizational
Control is an unfamiliar domain.

What does a SOC require, and should I pursue one?

There used to be SAS 70, that is, Statement on Auditing Standards (SAS) Number 70 for service
organizations. It was a broadly accepted auditing standard developed by the American Institute of
Certified Public Accountants (AICPA). There was a need for a more comprehensive system of
evaluation to be conducted, which would be more than just an audit of financial statements.

So SSAE 16 - the Statement on Standards for Attestation Engagements Number 16 - was issued by
AICPA in April of 2010 and became effective in May of 2011. The Service Auditor's Examination that
used to be conducted by CPAs under SAS 70 was then replaced with System and Organization
Controls reports under SSAE 16.

The older SAS 70 and SSAE 16 are very similar in many aspects, but SSAE 16 also made numerous
upgrades to the previous standard. The upgrades include the attestation issued by the
company that confirms that the described controls are in place and fully functional.

Public companies are also accountable to the Sarbanes–Oxley Act of 2002; a record-keeping and
financial information disclosure standards law. SOC reporting, as mandated by SSAE 16, also helps
companies comply with Sarbanes–Oxley Act’s section 404 to demonstrate successful internal controls
regarding financial auditing and reporting. In May 2017, AICPA superseded the SSAE 16 by the SSAE
18. SSAE 18 mandates a series of augmentations to increment the quality and application of SOC
reports. This superseded version also contained the principles, regulations, and standards for the
reporting of SOC. Along the way, it also drafted the functions of the vendors as provided by the
serviced organization. These minor but dominant changes made the SSAE 16 necessitate organizations
to take up more and more ownership and control of their own controlling mechanizations. These
controlling mechanizations proved instrumental in the identification, further classification, and
management of the risks involved in vendor relationships with third-parties.

What are SOC 1, SOC 2, and SOC 3 reports?

SOC 1 reports address a company's internal control over financial reporting, which pertains to the
application of checks and limits. By its very definition, as mandated by SSAE 18, SOC 1 is the audit of
a third-party vendor's accounting and financial controls. It is the metric of how well the vendor keeps
its books of accounts.

There are two types of SOC 1 reports — SOC 1 Type I and SOC 1 Type II. Type I pertains to an audit
performed at a particular point in time, that is, a specific single date, while a Type II report is more
rigorous and is based on the testing of controls over a period of time. Type II reports are judged
more reliable because they pertain to the effectiveness of controls over a more extended period of
time.

SOC 2 is the most sought-after report in this domain and a must if you are dealing with an IT vendor.
It is quite common for people to believe that SOC 2 is an upgrade over SOC 1, which is entirely
untrue. SOC 2 deals with the examination of a service organization's controls over one or more of
the following Trust Service Criteria (TSC):

• Privacy
• Confidentiality
• Processing Integrity
• Availability
• Security

SOC 2 is built around the definition of a consistent set of parameters for the IT services which a
third party provides to you. If you need a measure of how well a vendor provides private,
confidential, available, and secure IT services — then you need to ask for an independently audited
and assessed SOC 2 report. Like SOC 1, SOC 2 too has two types — SOC 2 Type I and SOC 2 Type II.

Type I confirms that the controls exist, while Type II affirms that the controls are not just in place but
actually work as well. Of course, SOC 2 Type II is a better representation of how well the vendor is
doing at protecting and managing your data. The serviced party must be clear that a SOC 2 Type II
report is to be audited by an independent CPA.

SOC 3 is not an upgrade over the SOC 2 report. It may share some components with SOC 2, but it is
entirely a different ball game. SOC 3 is a summarized version of the SOC 2 Type II report. It is not as
detailed as a SOC 2 Type I or SOC 2 Type II report; rather, a SOC 3 report is designed to be a less
technical and less detailed audit report with a seal of approval that can be put up on the vendor's
website.

Because it is less detailed and less technical, it might not contain the same level of vital intricacies of
the business auditing which you might require.

A business must request and analyze the SOC reports from its prospective vendors. They are an
invaluable source of information for making sure that adequate controls are in place and that the
controls actually work in an effective manner.

Not just this, SOC reports — be it SOC 1, SOC 2, or SOC 3 — prove very helpful in ensuring that your
compliance with regulatory expectations is up to the mark.

ISO 27001

Objective: to align information security management with business compliance and risk reduction
objectives

• Focuses on the availability, confidentiality and integrity of organizational information; and
only on those risks relevant to the business, justified financially & commercially through a risk
assessment
• ISO 27001 is a management standard not a technical standard; a key pillar of corporate
governance & best practice
• ISO 27001 is the standard for ISMS (Information Security Management System) and helps
identify, manage and reduce the range of risks to which information is regularly subjected.
• Leading International Standard for ISMS. Specifies the requirements for establishing,
implementing, maintaining, monitoring, reviewing and continually improving the ISMS within
the context of the organization. Includes assessment and treatment of InfoSec risks.
• Best framework for complying with information security legislation.
• Not a technical standard that describes the ISMS in technical detail.
• Does not focus on information technology alone, but also other important business assets,
resources, and processes in the organization.

ISO 27001: All 14 Domains – Bird's-Eye View

ISO 27001:2013 Domains and Controls

ISO 27001:2013 – Main Clauses

Clause 4: Context of the organization


• Understanding the organization and its context
• Understanding the needs and expectation of interested parties.
• Determining the scope of the information security management system
• Information security management system

Clause 5: Leadership
• Leadership and Commitment
• Policy
• Organization, roles, responsibilities and authorities

Clause 6: Planning
• Action to address Risk and Opportunities
• Information security objectives and Planning to achieve them

Clause 7: Support
• Resource
• Competence
• Awareness
• Communication
• Documented Information

Clause 8: Operation
• Operation planning and control
• Information security risk assessment
• Information security risk treatment (a minimal risk-scoring sketch follows the clause list)

Clause 9: Performance evaluation


• Monitoring, measurement, analysis and evaluation
• Internal Audit
• Management Review

Clause 10: Improvement


• Non-conformity and corrective action
• Continual improvement
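
The risk-scoring sketch referenced under Clause 8: a toy likelihood-times-impact assessment with a simple acceptance threshold. The scales, threshold, and treatment options are illustrative; ISO 27001 requires each organization to define its own risk criteria and treatment plan.

RISKS = [
    {"name": "Laptop theft", "likelihood": 3, "impact": 4},
    {"name": "Data center flood", "likelihood": 1, "impact": 5},
    {"name": "Phishing compromise", "likelihood": 4, "impact": 4},
]
ACCEPTANCE_THRESHOLD = 9  # scores above this require treatment

for risk in sorted(RISKS, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = risk["likelihood"] * risk["impact"]
    action = "treat (mitigate/transfer/avoid)" if score > ACCEPTANCE_THRESHOLD else "accept"
    print(f"{risk['name']}: score {score} -> {action}")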

ISO 27000 Family

ISO 27001 Implementation

ISO 27001: Certification Process


Desktop Audit
• Accredited Certification Body Auditor
- Examines a Firm’s Relevant Documents Like its Statement of Applicability (SoA) and Risk
Treatment Plan (RTP)
On-Site Audit
• Certification Body
- Sends an Audit Team to Perform an In-Depth Assessment of a Firm’s Information Security
System’s Implementation
Firm Agrees to Surveillance Schedule
• Certification Body Periodically Checks Firm’s ISMS Every 6-9 Months
Issuance of Certificate
• Certificate Only Lasts for 3 years after Initial Certification

An external audit of the organization’s ISMS (Information Security Management System) is conducted
in three main phases:

• Pre-audit - having engaged an accredited certification body, they will request copies of your ISMS
documentation, your policy manual etc. and may request a short on-site visit to introduce themselves
and identify contacts for the next phase. When you are ready, they will schedule the certification audit
itself by mutual agreement.

• Certification audit - this is the formal audit itself. One or more auditors from the accredited
certification body will come on site, work their way systematically through their audit checklists,
checking things. They will check your ISMS policies, standards and procedures against the
requirements identified in ISO/IEC 27001, and also seek evidence that people follow the
documentation in practice (i.e. the auditors’ favorite “Show me!”). They will gather and assess
evidence including artifacts produced by the ISMS processes (such as records authorizing certain users

to have certain access rights to certain systems, or minutes of management meetings confirming
approval of policies) or by directly observing ISMS processes in action.

• Post-audit - the results of the audit will be reported formally back to management. Depending on
how the audit went and on the auditors' standard audit processes, they will typically raise the
following (in increasing order of severity):

• Observation - information on minor concerns or potential future issues that management is
well advised to consider

• Minor noncompliance - these are more significant concerns that the organization has to
address at some point as a condition of the certificate being granted. The certification body is
essentially saying that the organization does not follow ISO/IEC 27001 in some way, but they
do not consider that to be a significant weakness in the ISMS. The certification body may or
may not make recommendations on how to fix them. They may or may not check formally
that minor non-compliances are resolved, perhaps relying instead on self-reporting by the
organization. They may also be willing to agree a timescale for resolution that continues
beyond the point of issue of the certificate, but either way they will almost certainly want to
confirm that everything was resolved by the time of the next certification visit.

• Major noncompliance - these are the show-stoppers, significant issues that mean the ISO/IEC
27001 certificate cannot be awarded until they are resolved. The certification body may
recommend how to resolve them and will require positive proof that such major issues have
been fully resolved before granting the certificate. The audit may be suspended if a major
noncompliance is identified, in order to give the organization a chance to fix the issue before
continuing.

COSO Framework
WHAT DOES COSO STAND FOR?

In 1992, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) developed
a model for evaluating internal controls. This model has been adopted as the generally accepted
framework for internal control and is widely recognized as the definitive standard against which
organizations measure the effectiveness of their systems of internal control.

WHAT IS THE COSO FRAMEWORK?

The COSO model defines internal control as “a process effected by an entity’s board of directors,
management and other personnel designed to provide reasonable assurance of the achievement of
objectives” in the following categories:

• Operational Effectiveness and Efficiency
• Financial Reporting Reliability
• Applicable Laws and Regulations Compliance

In an effective internal control system, the following five components work to support the
achievement of an entity’s mission, strategies and related business objectives:

1. Control Environment

• Exercise integrity and ethical values.
• Make a commitment to competence.
• Use the board of directors and audit committee.
• Facilitate management’s philosophy and operating style.
• Create organizational structure.
• Issue assignment of authority and responsibility.
• Utilize human resources policies and procedures.

2. Risk Assessment

• Create companywide objectives.
• Incorporate process-level objectives.
• Perform risk identification and analysis.
• Manage change.

3. Control Activities

• Follow policies and procedures.
• Improve security (application and network).
• Conduct application change management.
• Plan business continuity/backups.
• Manage outsourcing arrangements.

4. Information and Communication

• Measure quality of information.
• Measure effectiveness of communication.

5. Monitoring

• Perform ongoing monitoring.
• Conduct separate evaluations.
• Report deficiencies.

These components work to establish the foundation for sound internal control within the company
through directed leadership, shared values and a culture that emphasizes accountability for control.
The various risks facing the company are identified and assessed routinely at all levels and within all
functions in the organization. Control activities and other mechanisms are proactively designed to
address and mitigate the significant risks. Information critical to identifying risks and meeting business
objectives is communicated through established channels across the company. The entire system of
internal control is monitored continuously, and problems are addressed in a timely manner.

Factor Analysis of Information Risk (FAIR)

Factor Analysis of Information Risk (FAIR) is a taxonomy of the factors that contribute to risk and how
they affect each other. It is primarily concerned with establishing accurate probabilities for the
frequency and magnitude of data loss events. It is not a methodology for performing an enterprise (or
individual) risk assessment.

FAIR is also a risk management framework developed by Jack A. Jones, and it can help organizations
understand, analyze, and measure information risk according to Whitman & Mattord (2013).

A number of methodologies deal with risk management in an IT environment or IT risk, related to
information security management systems and standards like the ISO/IEC 27000-series.

FAIR seeks to provide a foundation and framework for performing risk analyses. Much of the FAIR
framework can be used to strengthen, rather than replace, existing risk analysis processes like those
mentioned above. It is not another methodology to deal with risk management; rather than competing
directly with other risk assessment frameworks, it complements many of them.

Although the basic taxonomy and methods have been made available for non-commercial use under
a Creative Commons license, FAIR itself is proprietary. Using FAIR to analyze someone else’s risk for
commercial gain (e.g. through consulting or as part of a software application) requires a license from
RMI (Risk Management Insight).

Main concepts

FAIR underlines that risk is an uncertain event: one should not focus on what is possible, but on
how probable a given event is. This probabilistic approach is applied to every factor that is analyzed.
The risk is the probability of a loss tied to an asset. In FAIR, risk is defined as the “probable frequency
and probable magnitude of future loss.” FAIR further decomposes risk by breaking down the factors
that make up probable frequency and probable loss into quantities that can be measured. These
factors include: Threat Event Frequency, Contact Frequency, Probability of Action, Vulnerability,
Threat Capability, Difficulty, Loss Event Frequency, Primary Loss Magnitude, Secondary Loss Event
Frequency, Secondary Loss Magnitude, and Secondary Risk.
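
Because FAIR defines risk as probable frequency times probable magnitude, analysts often
operationalize it with a Monte Carlo simulation over those two factors. The sketch below is
illustrative only and is not part of the FAIR standard: the distribution choices (Poisson for loss
event frequency, lognormal for loss magnitude) and all parameters are assumptions picked for
demonstration.

    # Illustrative Monte Carlo sketch of FAIR-style quantification (not part of
    # the FAIR standard). Distribution choices and parameters are assumptions.
    import math
    import random

    def poisson(rng, lam):
        """Draw one Poisson-distributed sample via Knuth's algorithm."""
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    def simulate_annual_loss(lef_mean=2.0, loss_median=50_000, loss_sigma=1.0,
                             trials=100_000, seed=42):
        """Estimate annualized loss from loss event frequency and magnitude.

        lef_mean    -- mean loss event frequency (events/year), modeled as Poisson
        loss_median -- median single-event loss, modeled as lognormal
        loss_sigma  -- sigma of the underlying normal for loss magnitude
        """
        rng = random.Random(seed)
        mu = math.log(loss_median)
        totals = []
        for _ in range(trials):
            events = poisson(rng, lef_mean)  # number of loss events this year
            totals.append(sum(rng.lognormvariate(mu, loss_sigma)
                              for _ in range(events)))
        totals.sort()
        return {"mean_annual_loss": round(sum(totals) / trials),
                "p90_annual_loss": round(totals[int(0.90 * trials)])}

    print(simulate_annual_loss())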

Asset

An asset’s loss potential stems from the value it represents and/or the liability it introduces to an
organization. For example, customer information provides value through its role in generating
revenue for a commercial organization. That same information also can introduce liability to the
organization if a legal duty exists to protect it, or if customers have an expectation that the information
about them will be appropriately protected.

FAIR defines six kinds of loss:

1. Productivity – a reduction in the organization’s ability to effectively produce goods or services
in order to generate value
2. Response – the resources spent while acting following an adverse event
3. Replacement – the expense to substitute/repair an affected asset
4. Fines and judgments (F/J) – the cost of the overall legal procedure deriving from the adverse
event
5. Competitive advantage (CA) – missed opportunities due to the security incident
6. Reputation – missed opportunities or sales due to the diminished corporate image following
the event

FAIR defines value/liability as:

1. Criticality – the effect on the organization’s productivity
2. Cost – the bare cost of the asset, i.e. the cost of replacing a compromised asset
3. Sensitivity – the cost associated with the disclosure of the information, further divided into:

   1. Embarrassment – the disclosure reveals inappropriate behavior by the company’s
      management
   2. Competitive advantage – the loss of competitive advantage tied to the disclosure
   3. Legal/regulatory – the cost associated with possible law violations
   4. General – other losses tied to the sensitivity of the data

Threat

Threat agents can be grouped by Threat Communities, subsets of the overall threat agent population
that share key characteristics. Threat communities must be precisely defined in order to effectively
evaluate effect (loss magnitude).

Threat agents can act differently on an asset:

• Access – read the data without proper authorization
• Misuse – use the asset without authorization and/or differently from the intended usage
• Disclose – the agent lets other people access the data
• Modify – change the asset (data or configuration modification)
• Deny access – the threat agent does not let the legitimate, intended users access the asset

These actions can affect different assets in different ways: the effect varies in relation to the
characteristics of the asset and its usage. Some assets have high criticality but low sensitivity: denial
of access has a much higher effect than disclosure on such assets. On the other hand, an asset with
highly sensitive data can have a low productivity effect if not available, but a high embarrassment and
legal effect if that data is disclosed: for example, the availability of former patient health data does not
affect a healthcare organization’s productivity but can cost millions of dollars if disclosed. A single
event can involve different assets: a laptop theft affects the availability of the laptop itself but can
lead to the potential disclosure of the information stored on it.

It is the combination of an asset’s characteristics and the type of action against that asset that
determines the fundamental nature and degree of loss.

NIST RMF
The Risk Management Framework (RMF) is a set of criteria that dictate how United States
government IT systems must be architected, secured, and monitored. Originally developed by the
Department of Defense (DoD), the RMF was adopted by the rest of the US federal information systems
in 2010.

Today, the RMF is maintained by the National Institute of Standards and Technology (NIST), and
provides a solid foundation for any data security strategy.

What is the Risk Management Framework (RMF)?

The elegantly titled “NIST SP 800-37 Rev.1” defines the RMF as a 6-step process to architect and
engineer a data security process for new IT systems, and suggests best practices and procedures each
federal agency must follow when enabling a new system. In addition to the primary document SP 800-
37, the RMF uses supplemental documents SP 800-30, SP 800-53, SP 800-53A, and SP 800-137.

Risk Management Framework (RMF) Steps

The six RMF steps are described in detail below.

Step 1: Categorize Information System

The Information System Owner assigns a security role to the new IT system based on mission and
business objectives. The security role must be consistent with the organization’s risk management
strategy.

Step 2: Select Security Controls

The security controls for the project are selected and approved by leadership from the common
controls, and supplemented by hybrid or system-specific controls. Security controls are the hardware,
software, and technical processes required to fulfill the minimum assurance requirements as stated
in the risk assessment. Additionally, the agency must develop plans for continuous monitoring of the
new system during this step.

Step 3: Implement Security Controls

Simply put, put step 2 into action. By the end of this step, the agency should have documented and
proven that they have achieved the minimum assurance requirements and demonstrated the correct
use of information system and security engineering methodologies.

Step 4: Assess Security Controls

An independent assessor reviews and approves the security controls as implemented in step 3. If
necessary, the agency will need to address and remediate any weaknesses or deficiencies the assessor
finds, and then document the security plan accordingly.

Step 5: Authorize Information System

The agency must present an authorization package for risk assessment and risk determination. The
authorizing agent then submits the authorization decision to all necessary parties.

Step 6: Monitor Security Controls

The agency continues to monitor the current security controls and update security controls based on
changes to the system or the environment. The agency regularly reports on the security status of the
system and remediates any weaknesses as necessary.

OCTAVE

OCTAVE is a risk assessment methodology to identify, manage and evaluate information security risks.
This methodology serves to help an organization to:

• develop qualitative risk evaluation criteria that describe the organization’s operational risk
tolerances
• identify assets that are important to the mission of the organization
• identify vulnerabilities and threats to those assets
• determine and evaluate the potential consequences to the organization if threats are realized
• initiate continuous improvement actions to mitigate risks

OCTAVE methodology is primarily directed toward individuals who are responsible for managing an
organization’s operational risks. This can include personnel in an organization’s business units, persons
involved in information security or conformity within an organization, risk managers, information
technology department, and all staff participating in the activities of risk assessment with the OCTAVE
method.
In response to continuing issues with risk management, especially risk assessment, SEI developed
the first OCTAVE Framework approach in 1999. This framework was intended mainly for large
corporations with more than 300 employees which have a multi-layered hierarchy and are responsible
for their own software infrastructure. The evaluation criteria of this framework are based on a
three-phased approach that includes Organizational View, Technological View, and Risk Analysis.

In 2003, an update of the original framework was developed and named the OCTAVE-S approach. This
approach was intended for small organizations with fewer than 100 employees that have a flexible
hierarchy and more specialized team members. The same three-phased approach is used, dedicated
to the small teams within organizations that would work with it.

Finally, in 2007, the Computer Emergency Response Team (CERT), a program of SEI, developed
the latest update of OCTAVE, named the OCTAVE Allegro approach. It is intended for all organizations
that focus primarily on information assets in the context of how they are used, where they are stored,
transported, and processed, and how they are exposed to threats, vulnerabilities, and disruptions as
a result. The Allegro version dropped many requirements and process steps to make it easier to use. In
other words, Allegro shifted the OCTAVE approach from a technology asset-centric to an
information asset-centric risk assessment.

Octave Allegro

OCTAVE Allegro is a methodology to restructure and optimize the measurement process of
information security risks so that an organization can achieve the necessary results with a small
investment in time, people and other resources. Through this methodology, organizations come to
consider people, technology, and facilities in terms of their correlation with information, business
processes, and the services they support.

OCTAVE Allegro defines the critical components of a systematic information security risk assessment
framework by relating risk to the confidentiality, integrity and availability of assets. Through
this approach, organizations no longer face the problem, left unsolved by other methodologies, of
defining the critical assets from which risk may arise. Allegro gives clear instructions on how to
identify critical assets while at the same time connecting organizational goals and objectives to
information security goals and objectives. This means that information security teams will work
together with the operational teams to address information security needs in order to properly
protect critical data. Thus, critical decisions are no longer made by IT departments alone, but rather
jointly by the departments involved.

Organizations that use the OCTAVE Allegro methodology are required to create information asset
profiles so as to have better, unambiguous definitions of asset boundaries. These profiles enable
organizations to define security requirements, assign ownership, and set the value of each asset.
Once the profiles are created, they can be updated and modified for future assessments according to
the organization’s needs.

Roadmap steps

To make the whole process easier to use, Allegro has reduced many of the requirements and
complications that were included in the previous versions of OCTAVE. Allegro instead contains an
eight-step process divided into four categories that enables organizations to identify, analyze, assess,
and mitigate potential risks. The relationship between the activity areas and the actual steps of the
methodology is summarized below.

I. Establish drivers – This area contains only the first step through which the organization develops
risk measurement criteria that are consistent with organizational drivers. These drivers will be used to
evaluate the risk effects to an organization’s mission and its objectives.

II. Profile assets – This area contains step 2 and 3, through which the information asset profiles are
created. After the profiles are identified and created, the assets’ containers are identified and the
profile for each asset is captured on a single worksheet. A profile represents an information asset that
describes its unique features, qualities, characteristics, and value.

III. Identify threats – This area includes steps 4 and 5 where threats to the information assets are
identified and documented through a structured process. In this category, areas of concern present
real-world scenarios that can happen to organizations, and threat scenarios that contain additional
threats are identified.

IV. Identify and mitigate risks – This is the last stage of risk assessment. In this category risks are
identified and analyzed based on threat information, and mitigation strategies developed to address
those risks. In this step, the threats identified in the previous category will be analyzed and mitigated.

What makes this process unique is that the outputs from each step in the process are captured on
worksheets which are then used as inputs to the next step in the process.

How to use Octave?

The information security community has accepted the OCTAVE methodologies as a de facto standard
for conducting risk assessments. To effectively manage operational risk, the OCTAVE methodologies
are a sound choice for any organization that wants to implement a successful risk assessment strategy.
To properly implement the OCTAVE methodologies, organizations need to take two major steps:
preparing for OCTAVE, and performing an assessment.

Preparing for Octave

One of the most critical factors in successfully performing OCTAVE methodologies is getting the
sponsorship from the organization’s top management. Senior management should be convinced that
OCTAVE is what an organization needs, and they will also require actions from the implementer to
show continuous improvement such as:

• to support OCTAVE activities
• to encourage staff participation
• to delegate roles and responsibilities to the analysis team
• to commit to allocating resources
• to present ideas on how to continually improve

After approval from senior management is received, organizational resources should be
allocated in order to implement OCTAVE. A team of one to seven professionals (depending
on the organization’s hierarchical structure) should be created as the analysis team to support
the implementation. Furthermore, best industry practice has shown that organizations whose
assessment team has received training have managed to successfully implement OCTAVE
methodologies.

Performing an assessment
The OCTAVE Allegro methodology developed by CERT includes guidance, worksheets, and
questionnaires that are necessary to perform an OCTAVE Allegro assessment. Organizations wanting
to perform an assessment with OCTAVE will have to go through all the materials that will prepare and
help them to successfully implement the assessment.

Before the assessment team renders its judgment, it is very important that organizations identify and
select the information assets that will be the basis of the implementation, set risk measurement
criteria that reflect management’s risk tolerance, and repeat an assessment every time there is a
significant change in an information asset.

Benefits of risk assessment with Octave

OCTAVE methodologies bring a unique perspective that involves collaboration between risk
identification, assessment and mitigation. By conveying the importance and sensitivity of data to the
IT teams, and communicating it properly to top management, OCTAVE provides the “organizational
connection” within companies that was previously absent. As a result of this collaboration, many gaps
that reduce the ability to mitigate risks are exposed, such as gaps in organizational communication
and gaps in practice. By exposing these gaps, organizations gain a diversity of understanding, opinions
and experiences which strengthens the quality of the risk assessment.

Other benefits of using OCTAVE methodologies for risk assessment are listed below:

• OCTAVE methodologies favor qualitative considerations and descriptions of risk over purely
quantitative ones.

• OCTAVE brings a formal and systematic process to analyzing the risks an organization faces,
which is easier for organizations to adopt.

• A formal risk assessment process enables organizations to implement controls only where
they are needed, rather than opinion-based controls. Moreover, such a process is more cost-
effective, since only risks that fall outside the risk measurement criteria (unacceptable risks)
will be addressed, and fewer incident-related expenses will occur.

• Allows senior management to perform due diligence and understand the actual state of the
organization whilst being informed about the whole assessment strategy as well.

• Helps organizations in shifting the organizational culture into a more risk-based and qualitative
culture.

TARA
TARA framework for risk management

The TARA framework provides a simple view of risk response strategies. TARA stands for transference,
avoidance, reduction and acceptance. Each strategy is suitable for different risks, and the analyst’s
job is to recommend a suitable strategy for each risk by taking into account the
information given.

Transfer
This means sharing the risk with another party. A common example is buying insurance to share
part of the risk of loss with an insurance company. This strategy is suitable when the risk has a
significant impact on the company but a low probability of occurring. It is better to transfer
rather than reduce in this case because the risk may never occur. However, transfer is limited if there
are no alternative arrangements for bearing the risk.

Avoid
This means avoiding the activity that causes the risk. This strategy is suitable when the risk is
likely to occur and has a significant impact. However, if it is strategically vital to undertake the
activity, then transference or reduction may have to be undertaken instead.

Reduce
This means reducing the risk exposure, possibly by carrying out the activity in a different way. This
strategy is suitable when the risk does not have a significant impact but is likely to occur: a different
method of carrying out the activity reduces the likelihood of occurrence.
However, if reduction cannot be achieved, the company may have to accept the risk if its impact is
insignificant, or avoid it otherwise.

Accept
This means accepting the risk and doing nothing. This strategy is suitable when the risk has a low
impact and a low probability of occurrence, because such a risk is of little consequence even if
it materializes.
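
The impact/probability logic above maps naturally onto a two-by-two decision table. The sketch
below is a minimal illustration of that mapping, not part of any TARA specification; the coarse
boolean inputs are an assumed simplification of real risk ratings.

    # Minimal sketch of the TARA impact/probability decision table.
    # The two boolean inputs are an assumed simplification of real ratings.
    def tara_strategy(high_impact: bool, high_probability: bool) -> str:
        """Suggest a TARA risk response from a coarse impact/probability rating."""
        if high_impact and high_probability:
            return "Avoid"      # significant and likely: avoid the activity
        if high_impact:
            return "Transfer"   # significant but unlikely: share it (e.g. insurance)
        if high_probability:
            return "Reduce"     # likely but minor: change how the activity is done
        return "Accept"         # minor and unlikely: do nothing

    for impact in (True, False):
        for prob in (True, False):
            print(f"impact={'high' if impact else 'low'}, "
                  f"probability={'high' if prob else 'low'} -> "
                  f"{tara_strategy(impact, prob)}")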

Risk Assessment
What is IT risk?

Information technology or IT risk is basically any threat to your business data, critical systems and
business processes. It is the risk associated with the use, ownership, operation, involvement, influence
and adoption of IT within an organization.

IT risks have the potential to damage business value and often come from poor management of
processes and events.

Categories of IT risks

IT risk spans a range of business-critical areas, such as:

• Security - e.g. compromised business data due to unauthorized access or use
• Availability - e.g. inability to access your IT systems needed for business operations
• Performance - e.g. reduced productivity due to slow or delayed access to IT systems
• Compliance - e.g. failure to follow laws and regulations (e.g. data protection)

IT risks vary in range and nature. It's important to be aware of all the different types of IT risk
potentially affecting your business.

Potential impact of IT failure in business

For businesses that rely on technology, events or incidents that compromise IT can cause many
problems. For example, a security breach can lead to:

• Identity fraud and theft
• Financial fraud or theft
• Damage to reputation
• Damage to brand
• Damage to your business’s physical assets

Failure of IT systems due to downtime or outages can result in other damaging and diverse
consequences, such as:

• Lost sales and customers
• Reduced staff or business productivity
• Reduced customer loyalty and satisfaction
• Damaged relationships with partners and suppliers

If IT failure affects your ability to comply with laws and regulations, then it could also lead to:

• Breach of legal duties


• Breach of client confidentiality
• Penalties, fines and litigation
• Reputational damage

If technology is enabling your connection to customers, suppliers, partners and business information,
managing IT risks in your business should always be a core concern.

Different types of IT risk

Your IT systems and the information that you hold on them face a wide range of risks. If your
business relies on technology for key operations and activities, you need to be aware of the range
and nature of those threats.

Types of risks in IT systems

Threats to your IT systems can be external, internal, deliberate and unintentional. Most IT risks affect
one or more of the following:

• Business or project goals
• Service continuity
• Bottom line results
• Business reputation
• Security
• Infrastructure

Examples of IT risks

Looking at the nature of risks, it is possible to differentiate between:

• Physical threats - resulting from physical access or damage to IT resources such as the servers.
These could include theft, damage from fire or flood, or unauthorized access to confidential
data by an employee or outsider.
• Electronic threats - aiming to compromise your business information - e.g. a hacker could get
access to your website, your IT system could become infected by a computer virus, or you
could fall victim to a fraudulent email or website. These are commonly of a criminal nature.
• Technical failures - such as software bugs, a computer crash or the complete failure of a
computer component. A technical failure can be catastrophic if, for example, you cannot
retrieve data on a failed hard drive and no backup copy is available.
• Infrastructure failures - such as the loss of your internet connection can interrupt your
business - e.g. you could miss an important purchase order.
• Human error - a major threat - e.g. someone might accidentally delete important data, or
fail to follow security procedures properly.

How to manage IT risks?

Managing various types of IT risks begins with identifying exactly:

• the type of threats affecting your business
• the assets that may be at risk
• the ways of securing your IT systems

IT risk assessment methodology

IT risk assessment is a process of analyzing potential threats and vulnerabilities to your IT systems to
establish what loss you might expect to incur if certain events happen. Its objective is to help you
achieve optimal security at a reasonable cost.

There are two prevailing methodologies for assessing the different types of IT risk: quantitative and
qualitative risk analysis.

Quantitative IT risk assessment

Quantitative assessment measures risk using monetary amounts. It uses mathematical formulas to
give you the value of expected losses associated with a particular risk, based on:

• the asset values
• the frequency of risk occurrence
• the probability of associated loss

In an example of server failure, a quantitative assessment would involve looking at:

• the cost of the server or the revenue it generates
• how often the server crashes
• the estimated loss incurred each time it crashes

From these values, you can work out several key calculations:

• Single loss expectancy - costs you would incur if the incident occurs once
• Annual rate of occurrence - how many times a year you can expect this risk to occur
• Annual loss expectancy - the total risk value over the course of a year

The formula for annual loss expectancy is: ALE = SLE × ARO.
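
A short worked example of these calculations follows, with figures invented purely for illustration:

    # Worked example of quantitative risk calculations.
    # The figures below are invented for illustration only.
    single_loss_expectancy = 25_000     # SLE: cost of one server-failure incident ($)
    annual_rate_of_occurrence = 0.4     # ARO: expected incidents per year (1 per 2.5 yrs)

    # Annual loss expectancy: the total risk value over the course of a year.
    annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
    print(f"ALE = ${annual_loss_expectancy:,.0f} per year")   # ALE = $10,000 per year

    # A control costing less than the ALE reduction it delivers is worth considering.
    control_cost = 4_000                # assumed yearly cost of a mitigation
    residual_aro = 0.1                  # assumed ARO after the control is in place
    savings = (annual_rate_of_occurrence - residual_aro) * single_loss_expectancy
    print(f"Expected savings ${savings:,.0f} vs control cost ${control_cost:,}")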

These monetary results could help you avoid spending too much time and money on reducing
negligible risks. For example, if a threat is unlikely to happen or costs little or nothing to remedy, it
probably presents low risk to your business.

However, if a threat to your key IT systems is likely to happen, and could be expensive to fix or likely
to affect your business adversely, you should consider it high risk.

You may want to use this risk information to carry out a cost/benefit analysis to determine what level
of investment would make risk treatment worthwhile.

Keep in mind that quantitative measures of risk are only meaningful when you have good data. You
may not always have the necessary historical data to work out probability and cost estimates on IT-
related risks, since they can change very quickly.

Qualitative IT risk assessment

Qualitative risk assessment is opinion-based. It relies on judgment to categorize risks based on
probability and impact, and uses a rating scale to describe the risks as:

• Low - unlikely to occur or impact your business
• Medium - possible to occur and impact your business
• High - likely to occur and impact your business significantly

For example, you might classify as 'high probability' something that you expect to happen several
times a year. You do the same for cost/impact in whatever terms seem useful, for example:

• Low - would lose up to half an hour of production
• Medium - would cause complete shutdown for at least three days
• High - would cause irrevocable loss to the business

With your ratings determined, you can then create a risk assessment matrix to help you categorize
the risk level for each risk event. This can, ultimately, help you decide which risks to mitigate using
controls, and which to accept or transfer.
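
As an illustration, the sketch below crosses the two rating scales just described into a simple
matrix. The cell assignments are assumptions, since any real matrix should reflect your own risk
tolerance rather than a fixed standard.

    # Illustrative 3x3 risk assessment matrix; the cell values are assumptions -
    # calibrate them to your own organization's risk tolerance.
    LEVELS = ("low", "medium", "high")

    RISK_MATRIX = {  # (probability, impact) -> risk level
        ("low", "low"): "low",      ("low", "medium"): "low",       ("low", "high"): "medium",
        ("medium", "low"): "low",   ("medium", "medium"): "medium", ("medium", "high"): "high",
        ("high", "low"): "medium",  ("high", "medium"): "high",     ("high", "high"): "high",
    }

    def rate_risk(probability: str, impact: str) -> str:
        """Look up the qualitative risk level for a probability/impact pair."""
        if probability not in LEVELS or impact not in LEVELS:
            raise ValueError("ratings must be one of: " + ", ".join(LEVELS))
        return RISK_MATRIX[(probability, impact)]

    # e.g. an event that is likely to occur and would cause a multi-day shutdown:
    print(rate_risk("high", "medium"))  # -> high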

Use different types of information in IT risk assessments

Sometimes, it may be best to use a mixed approach to IT risk assessments, combining elements of
both quantitative and qualitative analysis.

You can use the quantitative data to assess the value of assets and loss expectancy, but also involve
people in your business to gain their expert insight. This may take time and effort, but it can also result
in a greater understanding of the risks and better data than each method would provide alone.

Vulnerability Management

A vulnerability is defined in the ISO 27002 standard as "A weakness of an asset or group of assets that
can be exploited by one or more threats" (International Organization for Standardization, 2005).

What is Vulnerability Management and Scanning?

Vulnerability management is the process of identifying, evaluating, treating, and reporting on security
vulnerabilities in systems and the software that runs on them. Implemented alongside other security
tactics, it is vital for helping organizations prioritize possible threats and minimize their “attack
surface.”

Security vulnerabilities, in turn, refer to technological weaknesses that allow attackers to compromise
a product and the information it holds. This process needs to be performed continuously in order to
keep up with new systems being added to networks, changes that are made to systems, and the
discovery of new vulnerabilities over time.

Vulnerability management software can help automate this process. Such tools use a vulnerability
scanner and sometimes endpoint agents to inventory a variety of systems on a network and find
vulnerabilities on them. Once vulnerabilities are identified, the risk they pose needs to be evaluated
in different contexts so decisions can be made about how to best treat them. For example,
vulnerability validation can be an effective way to contextualize the real severity of a vulnerability.

What is the difference between Vulnerability Management and Vulnerability Assessment?

Generally, a Vulnerability Assessment is one portion of the complete Vulnerability Management system.
Organizations will likely run multiple Vulnerability Assessments to get more information for their
Vulnerability Management action plan.

The vulnerability management process can be broken down into the following
four steps:

• Identifying Vulnerabilities
• Evaluating Vulnerabilities
• Treating Vulnerabilities
• Reporting Vulnerabilities

Step 1: Identifying Vulnerabilities

At the heart of a typical vulnerability management solution is a vulnerability scanner. The scan consists
of four stages:

1. Scan network-accessible systems by pinging them or sending them TCP/UDP packets
2. Identify open ports and services running on scanned systems (see the sketch after this list)
3. If possible, remotely log in to systems to gather detailed system information
4. Correlate system information with known vulnerabilities
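
Stage 2, identifying open ports, can be approximated with nothing more than plain TCP connect
attempts. The sketch below shows that idea in its simplest form; real scanners are far more
sophisticated (SYN scans, service fingerprinting, UDP probes), and the host and port list here are
placeholders. Only scan systems you are authorized to test.

    # Minimal TCP connect "scan" illustrating stage 2 (open-port identification).
    # Host and ports are placeholders; only scan systems you may legally test.
    import socket

    def check_open_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
        """Return the subset of ports accepting TCP connections on the host."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                # connect_ex returns 0 when the connection succeeds (port open)
                if sock.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print(check_open_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))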

Vulnerability scanners are able to identify a variety of systems running on a network, such as laptops
and desktops, virtual and physical servers, databases, firewalls, switches, printers, etc. Identified
systems are probed for different attributes: operating system, open ports, installed software, user
accounts, file system structure, system configurations, and more. This information is then used to

associate known vulnerabilities to scanned systems. In order to perform this association, vulnerability
scanners will use a vulnerability database that contains a list of publicly known vulnerabilities.

Properly configuring vulnerability scans is an essential component of a vulnerability management
solution. Vulnerability scanners can sometimes disrupt the networks and systems that they scan. If
available network bandwidth becomes very limited during an organization’s peak hours, then
vulnerability scans should be scheduled to run during off hours.

If some systems on a network become unstable or behave erratically when scanned, they might need
to be excluded from vulnerability scans, or the scans may need to be fine-tuned to be less disruptive.
Adaptive scanning is a new approach to further automating and streamlining vulnerability scans based
on changes in a network. For example, when a new system connects to a network for the first time, a
vulnerability scanner will scan just that system as soon as possible instead of waiting for a weekly or
monthly scan to start scanning that entire network.

Vulnerability scanners aren’t the only way to gather system vulnerability data anymore, though.
Endpoint agents allow vulnerability management solutions to continuously gather vulnerability data
from systems without performing network scans. This helps organizations maintain up-to-date system
vulnerability data whether or not, for example, employees’ laptops are connected to the
organization’s network or an employee’s home network.

Regardless of how a vulnerability management solution gathers this data, it can be used to create
reports, metrics, and dashboards for a variety of audiences.

Step 2: Evaluating Vulnerabilities

After vulnerabilities are identified, they need to be evaluated so the risks posed by them are dealt
with appropriately and in accordance with an organization’s risk management strategy. Vulnerability
management solutions will provide different risk ratings and scores for vulnerabilities, such as
Common Vulnerability Scoring System (CVSS) scores. These scores are helpful in telling organizations
which vulnerabilities they should focus on first, but the true risk posed by any given vulnerability
depends on some other factors beyond these out-of-the-box risk ratings and scores.

Here are some examples of additional factors to consider when evaluating vulnerabilities:

• Is this vulnerability a true or false positive?
• Could someone directly exploit this vulnerability from the Internet?
• How difficult is it to exploit this vulnerability?
• Is there known, published exploit code for this vulnerability?
• What would be the impact to the business if this vulnerability were exploited?
• Are there any other security controls in place that reduce the likelihood and/or impact of this
vulnerability being exploited?
• How old is the vulnerability/how long has it been on the network?
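
Several of these factors can be folded into a simple triage score on top of the scanner’s CVSS
rating. The weights below are assumptions chosen for illustration, not an industry formula; real
programs calibrate such scoring to their own environment.

    # Illustrative triage scoring on top of CVSS; the weights are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        name: str
        cvss: float              # base score from the scanner (0.0-10.0)
        internet_facing: bool    # directly exploitable from the Internet?
        exploit_published: bool  # known public exploit code?
        age_days: int            # how long it has been on the network

    def triage_score(f: Finding) -> float:
        """Combine CVSS with contextual factors into a sortable priority score."""
        score = f.cvss
        if f.internet_facing:
            score += 2.0         # assumed bump for external exposure
        if f.exploit_published:
            score += 1.5         # assumed bump for weaponized vulnerabilities
        score += min(f.age_days / 90, 1.0)  # small penalty for lingering findings
        return score

    findings = [
        Finding("internal-db-patch-missing", 7.5, False, False, 10),
        Finding("web-server-rce", 6.8, True, True, 120),
    ]
    for f in sorted(findings, key=triage_score, reverse=True):
        print(f"{triage_score(f):5.2f}  {f.name}")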

Like any security tool, vulnerability scanners aren’t perfect. Their vulnerability detection false-positive
rates, while low, are still greater than zero. Performing vulnerability validation with penetration
testing tools and techniques helps weed out false-positives so organizations can focus their attention
on dealing with real vulnerabilities. The results of vulnerability validation exercises or full-blown
penetration tests can often be an eye-opening experience for organizations that thought they were
secure enough or that the vulnerability wasn’t that risky.

Step 3: Treating Vulnerabilities

Once a vulnerability has been validated and deemed a risk, the next step is prioritizing how to treat
that vulnerability together with the relevant stakeholders of the business or network. There are
different ways to treat vulnerabilities, including:

• Remediation: Fully fixing or patching a vulnerability so it can’t be exploited. This is the ideal
treatment option that organizations strive for.
• Mitigation: Lessening the likelihood and/or impact of a vulnerability being exploited. This is
sometimes necessary when a proper fix or patch isn’t yet available for an identified
vulnerability. This option should ideally be used to buy time for an organization to eventually
remediate a vulnerability.
• Acceptance: Taking no action to fix or otherwise lessen the likelihood/impact of a vulnerability
being exploited. This is typically justified when a vulnerability is deemed a low risk, and the
cost of fixing the vulnerability is substantially greater than the cost incurred by an organization
if the vulnerability were to be exploited.

Vulnerability management solutions provide recommended remediation techniques for
vulnerabilities. Occasionally a remediation recommendation isn’t the optimal way to remediate a
vulnerability; in those cases, the right remediation approach needs to be determined by an
organization’s security team, system owners, and system administrators. Remediation can be as
simple as applying a readily available software patch or as complex as replacing a fleet of physical
servers across an organization’s network.

When remediation activities are completed, it’s best to run another vulnerability scan to confirm that
the vulnerability has been fully resolved.

However, not all vulnerabilities need to be fixed. For example, if an organization’s vulnerability
scanner has identified vulnerabilities in Adobe Flash Player on their computers, but they completely
disabled Adobe Flash Player from being used in web browsers and other client applications, then those
vulnerabilities could be considered sufficiently mitigated by a compensating control.

Step 4: Reporting Vulnerabilities

Performing regular and continuous vulnerability assessments enables organizations to understand the
speed and efficiency of their vulnerability management program over time. Vulnerability management
solutions typically have different options for exporting and visualizing vulnerability scan data with a
variety of customizable reports and dashboards. Not only does this help IT teams easily understand
which remediation techniques will help them fix the most vulnerabilities with the least amount of
effort, or help security teams monitor vulnerability trends over time in different parts of their network,
but it also helps support organizations’ compliance and regulatory requirements.

Third Party Risk Management/ Vendor Risk Management
What Is Third Party Risk Management?

• Third party risk management (TPRM) is the process of analyzing and controlling risks
presented to your company, your data, your operations and your finances by parties other
than your own company.

• “A vendor is a third party that supplies products or services to an enterprise. These products
or services may be outsourcing, hardware, software, services, commodities, etc. Vendor
management is a strategic process that is dedicated to the sourcing and management of
vendor relationships so that value creation is maximized and risk to the enterprise is
minimized” (vendor management using COBIT® 5).

Third Party Risks

A third party may pose risks if it:

• Has insufficient experience and controls to protect the company’s and customers’ information
from unauthorized access, disclosure, modification or destruction
• Cannot continuously maintain its services due to business disruption
• Lacks the proper security to prevent unauthorized access to its facilities, equipment and
resources, and to protect them from damage or harm
• Provides products, services, or systems that are not consistent with your policies and
procedures, agreed service levels, applicable laws, regulations, and ethical standards
• Does not possess the necessary licenses to operate or the expertise to enable the company to
remain compliant with domestic and international laws and regulations

In addition, a fourth party contracted by the third party to service the client poses an additional risk
for meeting contractual requirements, along with the other risks outlined above.

Managing Third-Party Risks
• Due Diligence
• Contract Structuring
• Oversight
• Risk Assessment

Due Diligence
• Due diligence is the investigative process by which a company or other third party is reviewed
to determine its suitability for a given task. Due diligence is an ongoing activity, including
review, monitoring, and management communication over the entire vendor lifecycle.
• Due diligence assessment criteria:
• Audited financial statements
• Business reputation.
• Qualifications, backgrounds, and reputation of the company’s officials, etc.
• Experience and ability in implementing and monitoring the proposed activity.
• Existence of any significant complaints or litigation, or regulatory actions against the
company.
• Use of other parties or subcontractors by the third party.

Contract Structuring
• Senior management should ensure that the expectations and obligations of each critical third
party are clearly defined, understood, and enforceable.
• Things to include in contract:
• Time frame covered by the contract.
• Performance measures or benchmarks
• Responsibilities
• Providing/receiving info
• Regulatory compliance requirements
• Costs and compensation
• Limits on liability
• Business contingency
• Default and termination
• A service-level agreement (SLA) is the part of a contract where a service is formally defined.
• Common service level metrics include:
• Average speed of answer (ASA) – how long it takes to answer calls.
• Turnaround time (TAT) – the length of time it takes to complete a task.
• System availability – the percentage of time the vendor’s IT systems are available.
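
These metrics reduce to simple arithmetic over operational logs. The sketch below shows
illustrative calculations; the sample data and the 99.9% availability target are assumptions.

    # Illustrative SLA metric calculations; sample data and target are assumptions.
    answer_times_sec = [12, 45, 8, 30, 22]   # time to answer each call
    downtime_minutes = 54                    # vendor system downtime this month
    minutes_in_month = 30 * 24 * 60

    # Average speed of answer (ASA): mean time to answer calls.
    asa = sum(answer_times_sec) / len(answer_times_sec)

    # System availability: uptime as a percentage of the whole period.
    availability = 100 * (minutes_in_month - downtime_minutes) / minutes_in_month

    print(f"ASA: {asa:.1f} s")                    # ASA: 23.4 s
    print(f"Availability: {availability:.3f} %")  # Availability: 99.875 %
    print("SLA met" if availability >= 99.9 else "SLA breached (target 99.9 %)")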

Oversight
• The company's board of directors (or a board committee) and senior management should
oversee the company’s overall risk management processes. There should be distinct roles and
responsibilities set and escalation procedures communicated.

Risk Assessment

➢ Pre-Assessment
• Obtain all information regarding the scope of work
• Find out the data that will be “CSTUPID’ed” (a mnemonic for the list below):
▪ Collect
▪ Store
▪ Transmit
▪ Use
▪ Process
▪ Interface
▪ Destroy
• Converse with the assigned BU (business unit) and/or the vendor contacts to fully
understand the what, where, and how
• If applicable, determine whether the assessment will be handled by an internal or external
assessor
• Send the vendor the questionnaire to be completed
➢ Assessment Phase
• Have a meeting with the BU and vendor to discuss contacts, deliverables, and
timelines
• Request/review pertinent documentation from:
▪ The BU - contracts, SOWs (statements of work), NDAs (non-disclosure
agreements), BAAs (business associate agreements)
▪ The vendor - SSAE-16 Type II reports, ISO 27001/2 certificates, etc.
• Review the returned questionnaire responses
• Note “contingent items” (non-compliant items, findings, etc.)
• Update BU and vendor management
• Track contingent items
• Compose the assessment report
• File BU/vendor documents
• Track all contingent items through remediation
➢ Post Assessment
Contingent items (aka issues, findings, observations, etc.):
• You can accept the risk associated with a particular item, or…
• You can require remediation of the item by the vendor or business unit:
▪ Risk-rate and prioritize the items accordingly
▪ Actively monitor the items until they are closed
▪ Escalate to appropriate levels of management if timelines are not met
▪ Adjust the timelines if the vendor cannot reasonably meet the target dates
➢ Re-Assessment
Start planning by determining the re-assessment criteria:
• Based on the type of data (PCI, PHI, etc.)? Suggestions include:
▪ PCI = annual
▪ PHI = annual
▪ PII = annual
▪ Company confidential (i.e., strategic)
• Based on the geographic location?
▪ Onshore
▪ Offshore
▪ Offshore but with safe harbor agreements
• Based on a scoring system?
▪ Risk rating (“scholastic score”)
▪ SIG (Standardized Information Gathering questionnaire)
▪ Other GRC tool
▪ In-house tool
• A combination of the above
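
The data-type criterion above can be encoded directly as a lookup table that drives re-assessment
scheduling. This is a minimal sketch: the annual intervals follow the suggestions above, while the
interval for company-confidential data is a placeholder assumption.

    # Minimal re-assessment scheduler; annual intervals follow the suggestions
    # above, the confidential-data interval is a placeholder assumption.
    from datetime import date, timedelta

    REASSESS_INTERVAL_DAYS = {
        "PCI": 365,            # annual
        "PHI": 365,            # annual
        "PII": 365,            # annual
        "CONFIDENTIAL": 730,   # assumed two-year cycle for strategic data
    }

    def next_assessment(data_types: list[str], last_assessed: date) -> date:
        """Schedule the next vendor assessment from the most sensitive data type."""
        interval = min(REASSESS_INTERVAL_DAYS[t] for t in data_types)
        return last_assessed + timedelta(days=interval)

    # A vendor handling both cardholder and merely confidential data is
    # re-assessed on the stricter (annual) cycle.
    print(next_assessment(["PCI", "CONFIDENTIAL"], date(2024, 3, 1)))  # 2025-03-01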

Physical Security
Protecting important data, confidential information, networks, software, equipment, facilities, a
company’s assets, and personnel is what physical security is about. Security can be affected by two
kinds of events. The first is an act of nature such as a flood, fire, or power fluctuation; although the
information is not misused, it can be very hard to retrieve, and the event may cause permanent loss
of data. The second is an attack by a malicious party, which includes terrorism, vandalism, and theft.
Every organization faces different kinds of physical security threats.

Physical security is very important, but it is usually overlooked by most organizations. It is necessary
if you do not want anyone to snatch away your information or destroy it in case of natural calamity.
The attacker’s motive could be anything: personal gain, financial gain, revenge, or simply that you
were the vulnerable target available. If physical security is not maintained properly, all other safety
measures become useless once the attacker gains physical access. Physical security is also proving
more challenging than in previous decades, as there are more sensitive devices available (such as USB
drives, laptops, smartphones, and tablets) that make stealing data easy and smooth.

As mentioned before, few measures are applied to physical security, and no one pays heed to it, as
attention is mostly on technology-oriented security. This slip-up gives attackers a chance to exploit
data or open ports; they devise plans to penetrate the network through unauthorized means. There
are internal threats too: for example, employees that have access to all the areas of the company can
steal assets with ease.

Physical security encouraged by PCI to be implemented in the workplace

PCI DSS (Payment Card Industry Data Security Standard) is a security standard created to make sure
that all the organizations and companies that deal with any cardholder data maintain a secure
environment. The PCI requirements for physical security are conceptually simple, but meeting them
still takes a great deal of effort. PCI DSS has 12 requirements for compliance.

• Install and maintain a firewall configuration that protects cardholder data.
• Protect and secure the stored cardholder data.
• Do not use vendor-supplied default passwords and other default security parameters.
• Encrypt transmission of cardholder data across open networks.
• Use anti-virus software and update it frequently to remove any malicious software that
can threaten the security of the cardholder data environment.
• Develop and maintain secure systems and applications.
• Restrict access to cardholder data, including physical cardholder data.
• Assign a unique user ID to everyone with access.
• Track and supervise network access.
• Test security systems and processes regularly.
• Maintain a policy that addresses information security for all personnel.
• Use cameras to monitor vulnerable areas; classify media in order to protect sensitive data;
and secure Sensitive Authentication Data.

Physical security encouraged by ISO to be implemented in the workplace

ISO (the International Organization for Standardization) publishes a code of practice for information
security. It consists of a number of sections that cover a large range of security issues.

Risk treatment and assessment covers the fundamentals of security risk analysis. An organized
infrastructure should be maintained to control how the company implements information security.
Asset management includes proper protection of organizational assets and making sure that
information is rightly secured. Personnel security management means ensuring suitable employment
practices for employees, contractors and third parties, and preventing them from misusing
information processing facilities. The organization should use perimeters and barriers to protect
secure areas, and entry controls should give access to important areas to authorized people only.
Secure areas should be designed to withstand a natural disaster. The use of delivery and loading areas
should be supervised and carefully carried out in holding areas. Equipment should be safeguarded
and protected from hazards, power supplies and cables should be secured, and safe access to
information and property should be ensured.

Advantages of physical security

There are many facilities provided for physical security, with a good number of advantages. First is
perimeter security, which includes mantraps, fences, electric fences, gates and turnstiles. Use safe
locks with keys that are hard to duplicate. Badges are necessary for verifying the identity of any
employee. Set up surveillance, and at places that won’t expose it or let an attacker tamper with it.
Safeguard any vulnerable device and protect the portables. Secure backups in a safe place where
access is not easily gained. In case of explosion, fire or electrical complications, the correct control
method can help save some of the important things in the workplace. A strong setup holds firm and
lowers the loss of the majority of assets, data, and equipment.

The great advantage is that criminals or attackers have to bypass many layers of security to reach
their objective. As a result, it gets harder for them to accomplish their mission. There are many
methods and pieces of equipment that are difficult for an intruder to defeat, are inexpensive to set
up, and reduce the security threat.
Things that help to maintain good, strong physical security include: intrusion detectors, CCTV, smart
cards, fire extinguishers, guards, suppression systems, intrusion alarms, motion detectors, physical
access controls, chain-link fences, RFID tags, barbed wire and much more.
Access control (AC) is administered across multiple operators; it includes authorization, access
approval, multiple identity verifications, authentication, and audit.

Disadvantages

There are some loopholes, though. Some of the methods might harm or injure animals or an intruder.
Protective fences may be jumped over by an attacker. Validity can be compromised in authentication
or in access control (AC). Smart cards or keys can be stolen, and a misplaced USB drive can give a
hacker easy access to your computer. Today’s security systems and installations are highly complex
and leave users to figure out on their own how to operate them.

There are new updates and development plans in security technology every year, so changing and
keeping up with the new tech can be tiring. Many facilities are available, but employees rarely know
how to use them; for example, fire extinguishers are found at every corner of the organization, but
not many workers know how to handle one. Each employee in the workplace usually has an access
card, but problems arise when the card is blocked. Sometimes CCTV cameras are installed in places
that capture bathrooms or private areas and infringe on the privacy of employees.

Disaster Recovery
Disaster recovery is the process that ensures a company can restore its operations after an
interruption caused by an uncommon and damaging event. The best a company can do is to prepare
for it through a strong recovery plan, so that it can be ready to face the consequences when a disaster
happens.

What Is a Recovery Plan and What Does It Contain?

Recovery plans, also known as business recovery plans (BRPs), business continuity plans, or business
contingency plans (BCPs), are the plans used by a business to maintain or bring back to normal a
function or functions lost due to an unscheduled event. Every business unit or department, as well as
the business as a whole, should have its own recovery plan, but all the plans should be in accordance
with one another. For instance, the IT department refers to the BRP first to reactivate its operations
and activate the IT continuity plan. Similarly, the BRP gives information to other severely affected
departments so that they can activate their own recovery plans.

In general, a recovery plan should include, but is not limited to:

• the organizational unit, its scope, and the links of the plan to other plans
• roles and responsibilities, described thoroughly for contact persons in crisis situations
• incident assessment procedures
• the emergency room contact person
• invocation and escalation information
• the business continuity action plan
• a recovery profile for each endangered activity
• logistics information (equipment, maps and directions)
• a communication matrix
• the recovery completion procedure

As a part of the BRP, the disaster recovery plan (DRP) is a specific recovery plan that is concerned
particularly with damaged or lost software, data, and/or hardware on the one hand, and with
overcoming the consequences of such losses on the other. It aims to minimize, as much as possible,
the potential functional damage caused by a disaster.

Disaster Recovery Plan Development

The DRP development is the first phase of the disaster recovery management cycle after the project
initiation and risk impact assessment. It is an ongoing process of planning, developing, testing, and
implementing procedures and processes to ensure that the organization can quickly restart its basic
activity after an unplanned interruption due to a disaster. It has the same components as any recovery
plan, but with a particular emphasis on the IT department, personnel, equipment, facilities, and
function.

Disaster Recovery Plan Components

The DRP has many particular areas of focus that the CISSP-certified professional should be aware of:

Emergency Response

Emergency response consists of the first actions undertaken immediately after the disaster.

Nothing is more important than human life in such circumstances. That is why the first measures are
to ensure personnel safety by providing first aid and accounting for all personnel; this should be
followed by ensuring everyone’s evacuation with the appropriate procedures, avoiding any risk to
personnel, and supplying necessary basic needs such as food, water, blankets, etc.

After securing human life comes securing business assets. This includes not only infrastructure but
also important logistics, such as vehicles and equipment, particularly IT equipment because of its
cost and its necessity for business functions. At this stage, damage can be assessed by external
engineers.

Then comes the emergency notification, the responsibility for which is assigned to the response
team, which needs to keep the personnel calm and the management updated. Objectivity is the
rule to keep in mind.

Personnel and Communications

Having the right person in the right place when a disaster occurs is crucial for the business in order to
respond fast and effectively and minimize damages in its workflow. Identifying the right person
implies knowing the characteristics of the company’s workers; for instance, it is better to select a
person living relatively close to the workplace. A skilled, experienced, and willing person would
also be more useful in an emergency situation.

The DRP should be very comprehensive in describing the hierarchy of key personnel involved in
disaster management, in each department and for the business as a whole. It should describe in
detail the responsibilities of each person, how and when he or she can be contacted, with every
available phone number, and the communication channels. These channels should be diversified,
by using radios and satellite phones, for instance, and should differ from the ones usually used, in
case there is a service interruption. All personnel should be aware of the plan and prepared for any
unplanned event that may happen.

In addition to training (through simulations, for instance), good preparation relies on a well-
informed DRP. For that, it may include contact information for any potential stakeholders who may
be helpful or should be contacted in a disastrous situation. A hardware provider can supply urgent IT
needs, a customer whose data security is threatened can be informed about that, and so forth.

It is also important to decide who should be contacted if the designated emergency contact does
not respond to the emergency call; alternate team members should therefore be identified before
a disaster occurs.
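
One practical way to make the escalation order unambiguous is to keep the call tree in a machine-readable form so it can be reviewed and exercised. The sketch below is a minimal, hypothetical example in Python; all names, numbers, and role labels are invented for illustration.

```python
# Hypothetical DRP call tree: each role lists a primary contact followed by
# ordered alternates to try if the primary does not respond.
CALL_TREE = {
    "incident_commander": [
        {"name": "A. Rivera", "phone": "+1-555-0101", "channel": "satellite phone"},
        {"name": "B. Chen",   "phone": "+1-555-0102", "channel": "radio"},
    ],
    "it_recovery_lead": [
        {"name": "C. Okafor", "phone": "+1-555-0103", "channel": "mobile"},
        {"name": "D. Patel",  "phone": "+1-555-0104", "channel": "satellite phone"},
    ],
}

def escalation_order(role: str) -> list:
    """Return the ordered list of contacts to try for a given role."""
    return CALL_TREE.get(role, [])

# Walk the escalation chain for one role, in order.
for contact in escalation_order("incident_commander"):
    print(f"Try {contact['name']} via {contact['channel']}: {contact['phone']}")
```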

Assessment

Compared to the emergency response assessment, this step follows the same principle except that it
is more complete and more detailed. It involves internal experts but also external ones, such as civil
engineers who can confirm that the building is safe.

The traditional way to assess damages qualitatively is to use questionnaires that elicit information
from top management as well as end users, whether completed independently, administered by an
interviewer, or discussed in a debriefing meeting. This method makes it possible to categorize
damages as low, medium, high, or even critical. The quantitative assessment determines a monetary
value for losses and builds on the risk analysis performed before the disaster.
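
To illustrate how the two approaches complement each other, the following minimal sketch (all asset names, loss figures, and category thresholds are invented; in practice they come from the pre-disaster risk analysis) buckets each damaged asset qualitatively by its estimated monetary loss and totals the quantitative figure.

```python
# Hypothetical post-disaster damage estimates, in currency units.
damage_estimates = {
    "primary database server": 120_000,
    "office furniture":          8_000,
    "branch network links":     35_000,
}

def severity(loss: int) -> str:
    """Map an estimated monetary loss to a qualitative damage category.
    The thresholds are purely illustrative."""
    if loss >= 100_000:
        return "critical"
    if loss >= 25_000:
        return "high"
    if loss >= 5_000:
        return "medium"
    return "low"

for asset, loss in damage_estimates.items():
    print(f"{asset}: {severity(loss)} (~{loss:,})")
print(f"Total estimated loss: {sum(damage_estimates.values()):,}")
```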

Backups/Offsite Storage

Data backup is the regular process through which business data are saved on physical media (tapes,
CDs, etc.) or through cloud computing, so that if a disaster occurs and information is lost, it can be
restored from what has been backed up.

For security reasons, businesses that safeguard data on physical media generally store the saved
copies in offsite rooms. For more security, businesses are advised to use more than one offsite
storage location, but cloud computing is even more secure, since it is virtual, ensures the integrity
of the data, and does not require an additional, distant location. Consequently, it is less costly,
because it eliminates the costs related to offsite storage (additional personnel and equipment,
transport, maintenance, integrity checks, etc.) as well as the potential costs of data loss below the
recovery point objective (RPO) caused by loss of data integrity. Moreover, the cloud can more easily
fulfil the business needs for RPO and recovery time objective (RTO), although the smaller these
objectives are, the costlier they become.
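
To make the RPO trade-off concrete, here is a minimal sketch, assuming an invented four-hour RPO target and invented backup timestamps, that checks whether the newest backup still satisfies the objective and shows what a given backup interval risks losing.

```python
from datetime import datetime, timedelta

# Illustrative target only; real RPO/RTO figures come from the business
# impact analysis.
RPO = timedelta(hours=4)  # tolerate losing at most 4 hours of data

def meets_rpo(last_backup: datetime, now: datetime) -> bool:
    """Data written since the last backup would be lost in a disaster,
    so the age of the newest backup must stay within the RPO."""
    return now - last_backup <= RPO

def worst_case_data_loss(backup_interval: timedelta) -> timedelta:
    """With periodic backups, a disaster striking just before the next
    run loses up to one full interval of data."""
    return backup_interval

now = datetime(2024, 1, 15, 14, 0)
last_backup = datetime(2024, 1, 15, 9, 30)
print(meets_rpo(last_backup, now))                # False: 4.5 h of exposure
print(worst_case_data_loss(timedelta(hours=2)))   # 2:00:00
```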

If the offsite storage will serve as the IT operations emergency location following a disaster, it should
be well equipped with suitable ventilation and power supply.

External Communications

The business stakeholders should be notified about the state of the organization and the
consequences that the unplanned event had on its operations. Any business’s official communication
channel can be used: the official website, social networks, media, phone, etc.

Utilities

Utilities such as electricity, water, and gas often become unavailable in a disaster situation, and this
inaccessibility should be managed by taking measures such as activating a generator to restore
electric power, or closing the building if it is on fire and water is not available, or if the waste water
system no longer works.

Logistics/Supplies

The emergency team should think of providing the necessary logistics for personnel safety and
comfort. They can be categorized as follows:

• Vital human needs: food, water, blankets, camp beds, and sanitation
• Important technical equipment: tools and spare parts, waste bins, extinguishers, sprinklers,
fire/smoke alarms
• Information and communications: radios, satellite phones, and contact person information

The DRP should contain any information related to the logistics and their supply, whether they are
available constantly in the business infrastructure, such as fire alarms, or they need to be available
quickly in an emergency situation, with potential suppliers’ information for emergency
care, food, and so forth. It is also important to specify quantities when possible; knowing how many
camp beds are available will help to forecast the needs in the case of mass recruitment, for
instance, and to update the DRP.

Recovery vs. Restoration

Recovery is an umbrella term covering all of the processes that help a business return to normal after
a disaster, while restoration has two main dimensions: the repair and/or replacement of equipment,
utilities, and business facilities. Restoration is the step following the assessment and prioritization of
what needs to be restored first, based on the importance for the business functions in general and the
IT operations in particular.

Business Continuity Planning (BCP)
What is BCP?

Business continuity planning (BCP) is the process of ensuring the continuous operation of your
business before, during, and after a disaster event. The focus of BCP is squarely on business
continuation: it ensures that all services the business provides, and all critical functions it performs,
are still carried out in the wake of a disaster. To keep those critical services and functions operable,
the organization needs to take into account the most common threats to them and also consider
any associated vulnerabilities.

The Business Continuity Planning Process

The purpose of business continuity planning is to respond to disruption, activate recovery teams,
handle tactical disaster status communication, assess damage caused by disruption, and recover
critical assets and processes.

Developing a BCP plan is vital for an organization. It helps to minimize an interruption in normal
business functions for any event, from small to catastrophic. BCP has a specific set of requirements
for review and implementation to ensure that all planning has been considered.

Following are the steps for BCP:

1. Project initiation
2. Scope
3. Business impact analysis
4. Identify preventive control
5. Recovery strategy
6. Designing and development
7. Implementation, training, and testing
8. BCP maintenance

NIST SP800-34 provides a guideline for developing a logical BCP. It can be found at:

http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf.

1. Project Initiation

The scope of the project must be defined and agreed upon before developing a BCP. There are seven
milestones involved:

1. Develop a contingency planning policy statement.
2. Conduct a business impact analysis (BIA).
3. Identify preventive controls.
4. Develop strategies for recovery.
5. Develop an IT contingency plan.
6. Plan testing, training, and exercises.
7. Plan for maintenance.

Project Requirements

Management Support

Upper-level management support is very important in BCP planning and implementation. C-level
management must agree to the plan set forth and must also support the plan’s action items. C-level
management is an important resource in case of a disruption because they have the power to speak
to the entire organization and the external media. Also, they have the power to commit the resources
necessary to move from disaster to recovery.

Project Managers

The BCP project manager is the main point of contact; he or she ensures that the BCP is updated and
tested periodically. The project manager should have business skills, should be knowledgeable with
regard to the organization’s mission, and of course must have good managerial and leadership skills
to handle the tumultuous events that call for BCP measures.

The BCP Team

The BCP team has the sole responsibility to handle emergency situations and carry out the BCP plans.
Before establishing the BCP team, the continuity planning project team (CPPT) must be assembled.
This CPPT should represent all the stakeholders in the organization, such as HR, the IT department,
the physical security department, public relations, and all other personnel responsible for effective
business operations. The focus of the CPPT is on identifying the resources that will play a part in handling a
disastrous event.

2. Scope

The scope of BCP is very difficult but crucial to define. BCP scoping requires defining the exact
assets that are covered and protected by the plan, the types of emergency events the plan will be
able to address, and the resources necessary to create and implement the plan. Many key players
in the organization will have to be involved in the scoping of the BCP to ensure that all aspects of
organizational function are represented. It is also crucial to assess criticality, which can be difficult
because determining which pieces of IT infrastructure are critical is not always straightforward
without consultation with key users. It is recommended that you use a qualitative approach when
documenting the assets, groups, impacts, and processes.

Executive management support will be needed for the following three steps:

• Initiation of the plan
• Final approval of the plan
• Demonstration of due care and due diligence to the satisfaction of management

3. Business Impact Analysis

Business impact analysis (BIA) is a formal methodology for determining how a disruption to an
organization’s IT systems will impact the organization’s processes, requirements, and
interdependencies with respect to its business mission. The analysis determines and prioritizes the
critical IT systems, allowing the project manager to fully delineate the IT contingency priorities, and
it correlates the IT system components with the critical services they support. It also aims to
quantify the possible damage a disaster can do to those components. The primary goal of BIA is
to calculate the maximum tolerable downtime (MTD) for an IT asset. Other benefits of BIA include
improvements in business processes and procedures, as it will highlight inefficiencies in these areas.

The main components of BIA are as follows:

• Identify critical assets
• Conduct a risk assessment
• Determine the maximum tolerable downtime (MTD)
• Define failure and recovery metrics (see the sketch below)
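
These failure and recovery metrics are commonly related as MTD = RTO + WRT, where the recovery time objective (RTO) covers rebuilding the system and the work recovery time (WRT) covers verifying data and resuming normal work. The following minimal sketch uses invented asset names and figures to check that relation and to order recovery by time-criticality.

```python
from dataclasses import dataclass

@dataclass
class AssetMetrics:
    name: str
    mtd_hours: float  # maximum tolerable downtime, from the BIA
    rto_hours: float  # time to rebuild/restart the system
    wrt_hours: float  # time to verify data and resume normal work

    def feasible(self) -> bool:
        # Rule of thumb: RTO + WRT must fit within the MTD.
        return self.rto_hours + self.wrt_hours <= self.mtd_hours

# Invented example figures; a real BIA supplies these numbers.
assets = [
    AssetMetrics("payment processing", mtd_hours=8,  rto_hours=4,  wrt_hours=2),
    AssetMetrics("internal wiki",      mtd_hours=72, rto_hours=48, wrt_hours=30),
]

# Recover the most time-critical assets first (smallest MTD).
for a in sorted(assets, key=lambda a: a.mtd_hours):
    print(f"{a.name}: recovery plan feasible = {a.feasible()}")
```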

4. Identify Preventive Controls

Preventive controls are used to stop disruptive events before they start. An example is an HVAC
system that prevents equipment from overheating. Your BIA can also identify risks that can be
mitigated immediately, which improves security.

5. Recovery Strategy

Once your BIA is performed successfully, it will help in devising a recovery strategy. Metrics like
maximum tolerable downtime, recovery point objective, and recovery time objective are used to
determine the strategy for disaster recovery. Technical, physical, and administrative controls must
be maintained whichever recovery option is used. Recovery options include:

• Supply chain management (acquisition of computer equipment is assured during a disaster)
• Telecommunications management (availability of electronic communication during a disaster)
• Utility management (availability of utilities such as power, gas, and water)

Redundant Site

A redundant site is a duplicate of the production site that can operate seamlessly without loss of
services. The redundant site should have live data backup replication, so no user data is lost.

Hot Site

A hot site is a location to which an organization may relocate in case of a major disaster. The hot site
will have all necessary hardware and applications installed and real-time data mirrored. This will allow
the organization to resume operations in a very short period of time.

Warm Site

As you might expect, a warm site has some of the same aspects as a hot site—for instance, readily
available hardware and communication capabilities. However, it will rely on backup data to
reconstruct operations. Because of the cost involved in maintaining a redundant or hot site,
many organizations go for warm site solutions.

Cold Site

This is the least expensive solution to implement. A cold site doesn’t contain any readily available
hardware or copies of data backups, so it takes the longest to set up after a disaster occurs.

Mobile Site

This can be described as a data center on wheels: towable trailers containing racks of computer
equipment, HVAC, physical security, and fire suppression mechanisms.

Subscription Services

BCP planning and/or implementation can sometimes be outsourced to another organization, thus
transferring part of the risk to the service provider. Various organizations build their profit models
around offering BCP services to customers.

6. Plan Design, Development, and Approval

Once the BCP is designed, developed, and ready for approval, senior management, whose
responsibility it is to protect the organization’s critical personnel and assets, must thoroughly
understand the plan, own the plan, and ensure that they will take the steps necessary to make it
a success.

7. Implementation, Training, and Testing

Training, testing, and awareness must be performed for the disaster portion of the BCP; skipping
these is one of the most common mistakes. It must be emphasized that a BCP is never complete:
rather, it is a continuous process that ensures the organization’s ability to recover effectively.
Furthermore, even when the most experienced individuals carry out the planning, mistakes can
happen in the process. Finally, each member of the disaster recovery team should be exceedingly
familiar with his or her role in the BCP; this is where training comes into play. Awareness is
imperative for general users, along with awareness of the organizational emphasis on ensuring the
safety of operations and personnel.

8. BCP Maintenance

Once the plan is completed, tested, and implemented, it must be kept up to date. Business and IT
systems change quickly, so your BCP must keep pace with them. BCP maintenance should contain
the following components:
the following components:

• Change management
• Version control
• Accounting for mistakes

The change management process includes tracking and documenting changes, approvals, and the
results of completed changes. Version control is the process of managing updates to the BCP to
ensure that everyone is using the most up-to-date version. Common mistakes in the BCP include
lack of support from management, lack of stakeholder involvement, improper supply chain
management, lack of testing, and lack of training.
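
As one way to support both change management and version control, each plan update can be captured as a structured, approved record. The sketch below is hypothetical; the fields and entries are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BCPChange:
    """One tracked change to the BCP (fields are illustrative)."""
    version: str             # plan version after the update, e.g. "2.4"
    changed_on: date
    summary: str
    approved_by: str         # change management requires a recorded approval
    result: str = "pending"  # outcome noted once the change is verified

changelog = [
    BCPChange("2.3", date(2024, 3, 1), "Added satellite phone contacts",
              approved_by="COO", result="verified in annual exercise"),
    BCPChange("2.4", date(2024, 6, 5), "Updated warm site provider details",
              approved_by="CIO"),
]

# The newest entry tells everyone which version of the plan is current.
latest = changelog[-1]
print(f"Current BCP version: {latest.version} (changed {latest.changed_on})")
```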
