Notes ISC2 CC
Confidentiality
• Confidentiality protects information from unauthorized disclosure.
Confidentiality Concerns
1. Snooping
o Snooping is gathering information that is left out in the open.
o "Clean desk policies" protect against snooping.
2. Dumpster Diving
o Dumpster diving is searching for sensitive materials in the trash.
o "Shredding" protects against dumpster diving.
3. Eavesdropping
o Eavesdropping is secretly listening in on sensitive conversations.
o "Rules about sensitive conversations" prevent eavesdropping.
4. Wiretapping
o Electronic eavesdropping - listening in over a wire or the internet.
o "Encryption" protects against wiretapping.
5. Social Engineering
o The attacker uses psychological tricks to persuade an employee to give
them sensitive information or access to internal systems.
o The best defense is "educating users".
Integrity
• Integrity protects information from unauthorized changes.
Integrity Concerns
1. Unauthorized modification
o Attacks make changes without permission.
o "Least priviege" protects against integrity attacks
2. Impersonation
o Attacks pretend to be someone else.
o "User education" protects against impersonation attacks.
3. Man-in-the-middle (MITM)
o Attacks place the attacker in the middle of a communications session.
o "Encryption" protects against MITM attacks
4. Replay
o Attacks eavesdrop on logins and reuse the captured credentials.
o "Encryption" protects against Replay attacks
Availability
• Availability protects authorized access to systems and data.
Availability Concerns
Password security
Password mechanisms:
• Password length requirements set a minimum number of characters.
• Password complexity requirements describe the types of characters that must be included.
• Password expiration requirements force password changes.
• Password history requirements prevent password reuse.
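A minimal sketch of how these four mechanisms might be enforced, with illustrative policy values (a real system would compare hashed password history, not plaintext):

```python
import re
from datetime import datetime, timedelta

MIN_LENGTH = 12                 # length requirement (illustrative value)
MAX_AGE = timedelta(days=90)    # expiration requirement (illustrative value)

def check_password(candidate: str, history: list[str], last_changed: datetime) -> list[str]:
    """Return a list of policy violations for a proposed password."""
    violations = []
    # Length: a minimum number of characters.
    if len(candidate) < MIN_LENGTH:
        violations.append(f"fewer than {MIN_LENGTH} characters")
    # Complexity: the types of characters that must be included.
    if not (re.search(r"[a-z]", candidate) and re.search(r"[A-Z]", candidate)
            and re.search(r"\d", candidate) and re.search(r"[^\w\s]", candidate)):
        violations.append("needs lowercase, uppercase, digit and symbol")
    # Reuse: the new password must not match a previous one.
    if candidate in history:
        violations.append("password was used before")
    # Expiration: force a change once the current password is too old.
    if datetime.now() - last_changed > MAX_AGE:
        violations.append("current password has expired")
    return violations
```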
Multifactor authentication
Multifactor authentication combines two or more different authentication factors.
There are three authentication factors: something you know, something you have and
something you are.
Single sign-on (SSO) shares authenticated sessions across systems. In a single sign-on
approach, users log on to the first SSO-enabled system they encounter, and that login
session then persists across other systems until it expires. If the organization sets the
expiration period to the length of a business day, users only need to log in once a day,
and their single sign-on session lasts the entire day.
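A toy model of that idea, with an in-memory token store standing in for a real SSO provider (token format and lifetime are illustrative):

```python
import secrets
import time

SESSION_LIFETIME = 8 * 60 * 60  # seconds; roughly one business day, as above

sessions: dict[str, float] = {}  # token -> expiry timestamp

def sso_login(user: str) -> str:
    """First login of the day: issue a session token."""
    token = secrets.token_urlsafe(16)   # real SSO uses signed assertions (e.g. SAML)
    sessions[token] = time.time() + SESSION_LIFETIME
    return token

def accept_session(token: str) -> bool:
    """Any SSO-enabled system honors the token until it expires."""
    return sessions.get(token, 0.0) > time.time()
```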
Non-repudiation
Non-repudiation prevents someone from denying that they performed an action.
The issue is solved with:
1. Signed contracts
2. Digital signatures
3. Video surveillance
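Digital signatures are the cryptographic mechanism here: only the private-key holder can produce a valid signature, so they cannot plausibly deny having signed. A minimal sketch using the third-party cryptography package (the message text is illustrative):

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # shared with anyone who must verify

message = b"I authorize this purchase."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if message or signature changed
    print("valid signature: the signer cannot repudiate this message")
except InvalidSignature:
    print("invalid signature")
```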
Privacy
Privacy Concerns
Private information may come in many forms. Two of the most common elements of
private information are "Personally identifiable information" and "Protected health
information".
Risk Management
Understanding risks
• Internal Risks: Arise from within the organization.
• External Risks: Arise from outside the organization.
• Multiparty Risks: Affect more than one organization.
• Intellectual property theft: poses a risk to knowledge-based organizations.
• Software license compliance: issues risk fines and legal action.
Risk assessment
Risk assessment identifies and triages risks.
We have two different categories of techniques that we can use to assess the likelihood
and impact of a risk: qualitative and quantitative.
Risk treatment
Risk treatment analyzes and implements possible responses to control risk.
1. Risk avoidance
o Risk avoidance changes business practices to make a risk irrelevant.
2. Risk transference
o Risk transference shifts the impact of a risk to another party that accepts
it in exchange for payment, typically an insurance policy.
3. Risk mitigation
o Risk mitigation reduces the likelihood or impact of a risk.
4. Risk acceptance
o Risk acceptance is the choice to continue operations in the face of a risk.
1. Control Purpose
i. Preventive
▪ Preventive controls stop a security issue from occurring.
ii. Detective
▪ Detective controls identify security issues requiring investigation.
iii. Corrective
▪ Corrective controls remediate security issues that have occurred.
2. Control Mechanism
i. Technical
▪ use technology to achieve control objectives.
ii. Administrative
▪ use processes to achieve control objectives.
iii. Physical
▪ Impact the physical world.
Configuration management
Tracks specific device settings
Confidentiality
Data that needs protections is also known as PII or PHI. PII stands for Personally
Identifiable Information and it is related to the area of confidentiality and it means any
data that could be used to identify an individual. PHI stands for Protected Health
Information, which covers information about one's health status. A related category is
classified or sensitive information, which includes trade secrets, research, business
plans and intellectual property.
1. Snooping involves gathering information that is left out in the open. Clean desk policies
protect against snooping.
2. Dumpster diving also involves looking for sensitive materials, but in the trash;
paper shredding protects against it.
3. Eavesdropping occurs when someone secretly listens to a conversation, and it can be
prevented with rules about sensitive conversations.
4. Wiretapping is the electronic version of eavesdropping, the best way against that is
using encryption to protect the communication.
5. For social engineering, the best defense is to educate users to protect them against
social engineering.
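Since encryption is the stated defense against wiretapping (and against MITM attacks, below), here is a minimal symmetric-encryption sketch using the third-party cryptography package's Fernet recipe; the key exchange is assumed to happen out of band, and real network traffic would typically use a negotiated protocol such as TLS instead:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared secret between the two parties
cipher = Fernet(key)

token = cipher.encrypt(b"quarterly numbers: do not forward")
print(token)                  # a wiretapper sees only this opaque ciphertext
print(cipher.decrypt(token))  # the key holder recovers the plaintext
```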
Integrity
Consistency is another concept related to integrity and requires that all instances of the
data be identical in form, content and meaning. When related to system integrity, it
refers to the maintenance of a known good configuration and expected operational
function as the system processes the information. Ensuring integrity begins with an
awareness of state, which is the current condition of the system. Specifically, this
awareness concerns the ability to document and understand the state of data or a
system at a certain point, creating a baseline. A baseline, which means a documented,
lowest level of security configuration allowed by a standard or organization, can refer
to the current state of the information—whether it is protected.
To preserve that state, the information must always continue to be protected through a
transaction. Going forward from that baseline, the integrity of the data or the system
can always be ascertained by comparing the baseline with the current state. If the two
match, then the integrity of the data or the system is intact; if the two do not match,
then the integrity of the data or the system has been compromised. Integrity is
a primary factor in the reliability of information and systems. The need to safeguard
information and system integrity may be dictated by laws and regulations. Often, it is
dictated by the needs of the organization to access and use reliable, accurate
information.
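A common way to compare a baseline against the current state, as described above, is to hash the data in its known good state and re-hash it later; any change to the content changes the digest. A minimal sketch (the file name is hypothetical):

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

baseline = fingerprint("config.ini")   # recorded while in a known good state

# ...later, ascertain integrity by comparing the current state to the baseline:
if fingerprint("config.ini") == baseline:
    print("integrity intact")
else:
    print("integrity compromised")
```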
1. Unauthorized modification attacks make changes without permission. The best way to
protect against that is the least privilege principle.
2. Impersonation attacks pretend to be someone else. User education protects against
impersonation attack.
3. Man-In-The-Middle (MITM) attacks place the attacker in the middle of a
communication session, monitoring everything that's occurring.
4. Replay attacks eavesdrop on logins and reuse the captured credentials.
Availability
It means that systems and data are accessible at the time users need them. It can be
defined as timely and reliable access to information and the ability to use it, and for
authorized users, timely and reliable access to data and information services. The core
concept of availability is that data is accessible to authorized users when and where it is
needed and in the form and format required. This does not mean that data or systems
are available 100% of the time. Instead, the systems and data meet the requirements of
the business for timely and reliable access. Some systems and data are far more critical
than others, so the security professional must ensure that the appropriate levels of
availability are provided. This requires consultation with the involved business to ensure
that critical systems are identified and available. Availability is often associated with the
term criticality, which means a measure of the degree to which an organization
depends on the information or information system for the success of a mission or of a
business function (NIST SP 800-60), because it represents the importance an
organization gives to data or an information system in performing its operations or
achieving its mission
Identification
Authentication
When users have stated their identity, it is necessary to validate that they are the
rightful owners of that identity. This process of verifying or proving the user's
identification is known as authentication. In other terms, it is the access control
process of validating that the identity being claimed by a user or entity is known to
the system, by comparing one (single-factor authentication, or SFA) or more (multi-factor
authentication, or MFA) factors of authentication. Simply put, authentication is a
process to prove the identity of the requestor.
Methods of Authentication
There are two types of authentication. Using only one of the methods of authentication
listed below is known as single-factor authentication (SFA). Granting users access
only after successfully demonstrating or displaying two or more of these methods is
known as multi-factor authentication (MFA).
Common best practice is to implement at least two of the three common techniques
for authentication:
• Knowledge-based
• Token-based
• Characteristic-based
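As a sketch of combining two of those techniques, the code below implements an RFC 6238 time-based one-time password (a token-based factor) and pairs it with a knowledge-based factor; the base32 secret shown is a placeholder:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: the code an authenticator app would display."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def mfa_login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    # Knowledge factor (password) AND possession factor (TOTP device).
    return password_ok and hmac.compare_digest(submitted_code, totp(secret_b32))

print(mfa_login(True, totp("JBSWY3DPEHPK3PXP"), "JBSWY3DPEHPK3PXP"))  # True
```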
Password
Authorization
Accounting
Non-repudiation
In today’s world of e-commerce and electronic transactions, there are opportunities for
the impersonation of others or denial of an action, such as making a purchase online
and later denying it. It is important that all participants trust online transactions. Non-
repudiation methodologies ensure that people are held responsible for transactions
they conducted.
Base Concepts
Information security risk reflects the potential adverse impacts that result from the
possibility of unauthorized access, use, disclosure, disruption, modification or
destruction of information and/or information systems. This definition represents that
risk is associated with threats, impact and likelihood, and it also indicates that IT risk is
a subset of business risk.
Finally, the security team will consider the likely results if a threat is realized and an
event occurs. Impact is the magnitude of harm that can be expected to result from the
consequences of unauthorized disclosure of information, unauthorized modification of
information, unauthorized destruction of information, or loss of information or
information system availability.
Think about the impact and the chain of reaction that can result when an event occurs
by revisiting the pickpocket scenario. Risk comes from the intersection of those three
concepts: threats, likelihood and impact.
Risk Identification
In the world of cyber, identifying risks is not a one-and-done activity. It’s a recurring
process of identifying different possible risks, characterizing them and then estimating
their potential for disrupting the organization.
As a security professional, you are likely to assist in risk assessment at a system level,
focusing on process, control, monitoring or incident response and recovery activities. If
you’re working with a smaller organization, or one that lacks any kind of risk
management and mitigation plan and program, you might have the opportunity to help
fill that planning void.
Risk Assessment
Risk assessment is defined as the process of identifying, estimating and prioritizing risks
to an organization’s operations (including its mission, functions, image and reputation),
assets, individuals, other organizations and even the nation. Risk assessment should
result in aligning (or associating) each identified risk resulting from the operation of an
information system with the goals, objectives, assets or processes that the organization
uses, which in turn aligns with or directly supports achieving the organization’s goals
and objectives. A risk assessment can prioritize items for management to determine
the method of mitigation that best suits the assets being protected. The result of the
risk assessment process is often documented as a report or presentation given to
management for their use in prioritizing the identified risk(s). This report is provided
to management for review and approval. In some cases, management may indicate a
need for a more in-depth or detailed risk assessment performed by internal or external
resources.
Risk Treatment
Risk treatment relates to making decisions about the best actions to take regarding the
identified and prioritized risk. The decisions made are dependent on the attitude of
management toward risk and the availability — and cost — of risk mitigation. The
options commonly used to respond to risk are:
• Avoidance: It is the decision to attempt to eliminate the risk entirely. This could
include ceasing operation for some or all of the activities of the organization
that are exposed to a particular risk. Organization leadership may choose risk
avoidance when the potential impact of a given risk is too high or if the
likelihood of the risk being realized is simply too great.
• Mitigation: Risk mitigation is the most common type of risk management and
includes taking actions to prevent or reduce the possibility of a risk event or its
impact. Mitigation can involve remediation measures, or controls, such as
security controls, establishing policies, procedures, and standards to minimize
adverse risk. Risk cannot always be mitigated, but mitigations such as safety
measures should always be in place.
• Transfer: Risk transference is the practice of passing the risk to another party,
who will accept the financial impact of the harm resulting from a risk being
realized in exchange for payment. Typically, this is an insurance policy.
Risk Priorities
When risks have been identified, it is time to prioritize and analyze core risks through
qualitative risk analysis and/or quantitative risk analysis. This is necessary to determine
root cause and narrow down apparent risks and core risks. Security professionals work
with their teams to conduct both qualitative and quantitative analysis.
Understanding the organization’s overall mission and the functions that support the
mission helps to place risks in context, determine the root causes and prioritize the
assessment and analysis of these items. In most cases, management will provide
direction for using the findings of the risk assessment to determine a prioritized set of
risk-response actions.
One effective method to prioritize risk is to use a risk matrix, which helps identify
priority as the intersection of likelihood of occurrence and impact. It also gives the
team a common language to use with management when determining the final
priorities. For example, a low likelihood and a low impact might result in a low priority,
while an incident with a high likelihood and high impact will result in a high priority.
Assignment of priority may relate to business priorities, the cost of mitigating a risk or
the potential for loss if an incident occurs.
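A minimal sketch of such a matrix, using illustrative three-level scales for likelihood and impact:

```python
LEVELS = ["low", "medium", "high"]

def priority(likelihood: str, impact: str) -> str:
    """Priority as the intersection of likelihood and impact."""
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    return "low" if score <= 1 else "medium" if score == 2 else "high"

print(priority("low", "low"))     # low priority
print(priority("high", "high"))   # high priority
print(priority("low", "high"))    # medium priority
```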
When making decisions based on risk priorities, organizations must evaluate the
likelihood and impact of the risk as well as their tolerance for different sorts of risk. A
company in Hawaii is more concerned about the risk of volcanic eruptions than a
company in Chicago, but the Chicago company will have to plan for blizzards. In those
cases, determining risk tolerance is up to the executive management and board of
directors. If a company chooses to ignore or accept risk, exposing workers to asbestos,
for example, it puts the company in a position of tremendous liability.
Risk Tolerance
The perception management takes toward risk is often likened to the entity’s appetite
for risk. How much risk are they willing to take? Does management welcome risk or
want to avoid it? The level of risk tolerance varies across organizations, and even
internally: Different departments may have different attitudes toward what is acceptable
or unacceptable risk.
Security controls pertain to the physical, technical and administrative mechanisms that
act as safeguards or countermeasures prescribed for an information system to protect
the confidentiality, integrity and availability of the system and its information. The
implementation of controls should reduce risk, hopefully to an acceptable level.
Governance Elements
When leaders and management implement the systems and structures that the
organization will use to achieve its goals, they are guided by laws and regulations
created by governments to enact public policy. Laws and regulations guide the
development of standards, which cultivate policies, which result in procedures.
• Procedures are the detailed steps to complete a task that support departmental or
organizational policies.
• Policies are put in place by organizational governance, such as executive management,
to provide guidance in all activities to ensure that the organization supports industry
standards and regulations.
• Standards are often used by governance teams to provide a framework to introduce
policies and procedures in support of regulations.
• Regulations are commonly issued in the form of laws, usually from government (not to
be confused with governance) and typically carry financial penalties for noncompliance.
Module Objective: L1.5.1 Analyze appropriate outcomes according to the canons of the
ISC2 Code of Ethics when given examples.
All information security professionals who are certified by ISC2 recognize that
certification is a privilege that must be both earned and maintained. Every ISC2
member is required to commit to fully support the ISC2 Code of Ethics.
ISC2 Code of Ethics Preamble states the purpose and intent of the ISC2 Code of
Ethics.
• The safety and welfare of society and the common good, duty to our principals, and to
each other, requires that we adhere, and be seen to adhere, to the highest ethical
standards of behavior.
• Therefore, strict adherence to this Code is a condition of certification.
ISC2 Code of Ethics Canons represent the important beliefs held in common by
the members of ISC2. Cybersecurity professionals who are members of ISC2 have
a duty to the following four entities in the Canons.
• Protect society, the common good, necessary public trust and confidence, and the
infrastructure.
• Act honorably, honestly, justly, responsibly and legally.
• Provide diligent and competent service to principals.
• Advance and protect the profession.
Here is an example of an ethical question that might come up for cyber
security professionals. An organization handling Top Secret and other sensitive
information was hiring new employees. At its facility, it used a retinal scanner to grant
access to high-security areas, including where prospective employees were
interviewed. Retinal scanners, unbeknownst to most people, can not only match blood
vessels on an individual’s retina, but they can also tell the difference between males
and females. Further, they can tell whether a female is pregnant.
The organization used this information gathered by its access control system to
discriminate against female candidates for the positions it was seeking to fill. Allowing
this data to be accessed by those making hiring decisions was indisputably in violation
of the ISC2 Code of Ethics, which states that information security professionals must
act honorably, honestly, justly, responsibly and legally.
Here is another example: The security manager for an organization heard from a
network administrator who reported another user for violating the organization’s
acceptable use policy. When the security manager investigated the matter, he
discovered several pertinent facts:
In many jurisdictions, the organization can use any information, regardless of source, to
make labor decisions. So yes, the organization could use this information against the
user. The user violated the policy but did not break the law. Depending on how
egregious the infraction was, the organization may choose to punish the user for the
violation.
Because the administrator would not explain why he was monitoring the user, it makes
his actions suspect at best, and nefarious at worst. The administrator violated the trust
given to him by the organization; as an IT professional, the administrator was expected
to use authority and permissions in an adult and objective manner. This situation is
almost certainly an example of the administrator using authority to settle a personal
grievance. The administrator should be punished much more severely than the user
(firing the administrator is not untoward; this person may have opened the
organization up to a lawsuit for creating a hostile work environment, which may have
an impact/risk that exceeds whatever policy violation the user committed).
Whether the administrator was terminated or not, his actions were in clear
contradiction of the Code of Ethics.
2. Chapter - Incident Response, Business
Continuity and Disaster Recovery
Concepts (L2)
Introduction
When we're talking about IR, BC and DR, we're focused on availability, which is
accomplished through those concepts.
Incident Terminology
• Breach (NIST SP 800-53 Rev. 5): The loss of control, compromise, unauthorized
disclosure, unauthorized acquisition, or any similar occurrence where: a person
other than an authorized user accesses or potentially accesses personally
identifiable information; or an authorized user accesses personally identifiable
information for other than an authorized purpose.
• Exploit: A particular attack. It is named this way because these attacks exploit
system vulnerabilities.
• Threat (NIST SP 800-30 Rev 1): Any circumstance or event with the potential to
adversely impact organizational operations (including mission, functions, image
or reputation), organizational assets, individuals, other organizations or the
nation through an information system via unauthorized access, destruction,
disclosure, modification of information and/or denial of service.
The priority of any incident response is to protect life, health and safety. When any
decision related to priorities is to be made, always choose safety first. The primary goal
of incident management is to be prepared. Preparation requires having a policy and a
response plan that will lead the organization through the crisis. Some organizations
use the term “crisis management” to describe this process, so you might hear this term
as well. An event is any measurable occurrence, and most events are harmless.
However, if the event has the potential to disrupt the business’s mission, then it is
called an incident. Every organization must have an incident response plan that will
help preserve business viability and survival. The incident response process is aimed at
reducing the impact of an incident so the organization can resume the interrupted
operations as soon as possible. Note that incident response planning is a subset of the
greater discipline of business continuity management (BCM).
The incident response policy should reference an incident response plan that all
employees will follow, depending on their role in the process. The plan may contain
several procedures and standards related to incident response. It is a living
representation of an organization’s incident response policy. The organization’s vision,
strategy and mission should shape the incident response process. Procedures to
implement the plan should define the technical processes, techniques, checklists and
other tools that teams will use when responding to an incident.
• Preparation: Develop a policy approved by management; Identify critical data
and systems, single points of failure; Train staff on incident response; Implement
an incident response team. (covered in subsequent topic); Practice Incident
Identification. (First Response); Identify Roles and Responsibilities; Plan the
coordination of communication between stakeholders; Consider the possibility
that a primary method of communication may not be available.
• Detection and Analysis: Monitor all possible attack vectors; Analyze incident
using known data and threat intelligence; Prioritize incident response;
Standardize incident documentation.
Along with the organizational need to establish a Security Operations Center (SOC) is
the need to create a suitable incident response team. A typical incident response team
is a cross-functional group of individuals who represent the management, technical
and functional areas of responsibility most directly impacted by a security incident.
Potential team members include the following:
Team members should have training on incident response and the organization’s
incident response plan. Typically, team members assist with investigating the incident,
assessing the damage, collecting evidence, reporting the incident and initiating
recovery procedures. They would also participate in the remediation and lessons
learned stages and help with root cause analysis.
Many organizations now have a dedicated team responsible for investigating any
computer security incidents that take place. These teams are commonly known as
computer incident response teams (CIRTs) or computer security incident response
teams (CSIRTs). When an incident occurs, the response team has four primary
responsibilities:
• Determine the amount and scope of damage caused by the incident.
• Determine whether any confidential information was compromised during the incident.
• Implement any necessary recovery procedures to restore security and recover from
incident-related damage.
• Supervise the implementation of any additional security measures necessary to improve
security and prevent recurrence of the incident.
• List of the BCP team members, including multiple contact methods and backup
members
• Immediate response procedures and checklists (security and safety procedures, fire
suppression procedures, notification of appropriate emergency-response agencies, etc.)
• Notification systems and call trees for alerting personnel that the BCP is being enacted
• Guidance for management, including designation of authority for specific managers
• How/when to enact the plan. It's important to include when and how the plan will be
used.
• Contact numbers for critical members of the supply chain (vendors, customers, possible
external emergency providers, third-party partners)
How often should the BCP be tested? Routinely. Each individual organization must
determine how often to test its BCP, but it
should be tested at predefined intervals as well as when significant changes happen
within the business environment.
Disaster recovery planning steps in where BC leaves off. When a disaster strikes or an
interruption of business activities occurs, the Disaster recovery plan (DRP) guides the
actions of emergency response personnel until the end goal is reached—which is to
see the business restored to full last-known reliable operations. Disaster recovery refers
specifically to restoring the information technology and communications services and
systems needed by an organization, both during the period of disruption caused by
any event and during restoration of normal services. The recovery of a business
function may be done independently of the recovery of IT and
communications services; however, the recovery of IT is often crucial to the recovery
and sustainment of business operations. Whereas business continuity planning is about
maintaining critical business functions, disaster recovery planning is about restoring IT
and communications back to full operations after a disruption.
3. Chapter - Access Control Concepts (L3)
Access control involves limiting what objects can be available to what subjects
according to what rules.
Controls Overview
Access controls are not just about restricting access to information systems and data,
but also about allowing access. It is about granting the appropriate level of access to
authorized personnel and processes and denying access to unauthorized functions or
individuals.
• Subjects: any entity that requests access to our assets. The entity requesting access may
be a user, a client, a process or a program, for example. A subject is the initiator of a
request for service; therefore, a subject is referred to as “active.” A subject:
o Is a user, a process, a procedure, a client (or a server), a program, a device such
as an endpoint, workstation, smartphone or removable storage device with
onboard firmware.
o Is active: It initiates a request for access to resources or services.
o Requests a service from an object.
o Should have a level of clearance (permissions) that relates to its ability to
successfully access services or resources.
Controls Assessments
Risk reduction depends on the effectiveness of the control. It must apply to the current
situation and adapt to a changing environment.
Defense in Depth
We are looking at all access permissions including building access, access to server
rooms, access to networks and applications and utilities. These are all implementations
of access control and are part of a layered defense strategy, also known as defense in
depth, developed by an organization.
Another example of multiple technical layers is when additional firewalls are used to
separate untrusted networks with differing security requirements, such as the internet
from trusted networks that house servers with sensitive data in the organization. When
a company has information at multiple sensitivity levels, it might require the network
traffic to be validated by rules on more than one firewall, with the most sensitive
information being stored behind multiple firewalls.
For a non-technical example, consider the multiple layers of access required to get to
the actual data in a data center. First, a lock on the door provides a physical barrier to
access the data storage devices. Second, a technical access rule prevents access to the
data via the network. Finally, a policy, or administrative control defines the rules that
assign access to authorized individuals.
Principle of Least Privilege
For example, only individuals working in billing will be allowed to view consumer
financial data, and even fewer individuals will have the authority to change or delete
that data. This maintains confidentiality and integrity while also allowing availability by
providing administrative access with an appropriate password or sign-on that proves
the user has the appropriate permissions to access that data.
Systems often monitor access to private information, and if logs indicate that someone
has attempted to access a database without the proper permissions, that will
automatically trigger an alarm. The security administrator will then record the incident
and alert the appropriate people to take action.
The more critical information a person has access to, the greater the security should be
around that access. They should definitely have multi-factor authentication, for
instance.
Privileged access management provides the first and perhaps most familiar use case.
Consider a human user identity that is granted various create, read, update, and delete
privileges on a database. Without privileged access management, the system’s access
control would have those privileges assigned to the administrative user in a static way,
effectively “on” 24 hours a day, every day. Security would be dependent upon the login
process to prevent misuse of that identity. Just-in-time privileged access management,
by contrast, includes role-based specific subsets of privileges that only become active
in real time when the identity is requesting the use of a resource or service.
Privileged Accounts
Privileged accounts are those with permissions beyond those of normal users, such as
managers and administrators. Broadly speaking, these accounts have elevated
privileges and are used by many different classes of users, including:
• Systems administrators, who have the principal responsibilities for operating systems,
applications deployment and performance management.
• Help desk or IT support staff, who often need to view or manipulate endpoints, servers
and applications platforms by using privileged or restricted operations.
• Security analysts, who may require rapid access to the entire IT infrastructure, systems,
endpoints and data environment of the organization.
Typical measures used for moderating the potential for elevated risks from misuse or
abuse of privileged accounts include the following:
* More extensive and detailed logging than regular user accounts. The record
of privileged actions is vitally important, as both a deterrent (for
privileged account holders that might be tempted to engage in untoward
activity) and an administrative control (the logs can be audited and
reviewed to detect and respond to malicious activity).
* More stringent access control than regular user accounts. As we will see
emphasized in this course, even nonprivileged users should be required to
use MFA methods to gain access to organizational systems and networks.
Privileged users—or more accurately, highly trusted users with access to
privileged accounts—should be required to go through additional or more
rigorous authentication prior to using those privileges. Just-in-time identity
should also be considered as a way to restrict the use of these privileges
to specific tasks and the times in which the user is executing them.
* Deeper trust verification than regular user accounts. Privileged account
holders should be subject to more detailed background checks, stricter
nondisclosure agreements and acceptable use policies, and be willing to be
subject to financial investigation. Periodic or event-triggered updates to
these background checks may also be in order, depending on the nature of the
organization’s activities and the risks it faces.
* More auditing than regular user accounts. Privileged account activity
should be monitored and audited at a greater rate and extent than regular
usage.
Segregation of Duties
These steps can prevent fraud or detect an error in the process before implementation.
It could be that the same employee might be authorized to originally submit invoices
regarding one set of activities, but not approve them, and yet also have approval
authority but not the right to submit invoices on another. It is possible, of course, that
two individuals can willfully work together to bypass the segregation of duties, so that
they could jointly commit fraud. This is called collusion.
The two-person rule is a security strategy that requires a minimum of two people to be
in an area together, making it impossible for a person to be in the area alone. Many
access control systems prevent an individual cardholder from entering a selected high-
security area unless accompanied by at least one other person. Use of the two-person
rule can help reduce insider threats to critical areas by requiring at least two individuals
to be present at any time. It is also used for life safety within a security area; if one
person has a medical emergency, there will be assistance present.
Other situations that call for provisioning new user accounts or changing privileges
include:
• A new employee: When a new employee is hired, the hiring manager sends a
request to the security administrator to create a new user ID. This request
authorizes creation of the new ID and provides instructions on appropriate
access levels. Additional authorization may be required by company policy for
elevated permissions.
• Change of position: When an employee has been promoted, their permissions
and access rights might change as defined by the new role, which will dictate
any added privileges and updates to access. At the same time, any access that is
no longer needed in the new job will be removed.
Physical access controls are items you can physically touch, which include physical
mechanisms deployed to prevent, monitor, or detect direct contact with systems or
areas within a facility. Examples of physical access controls include security guards,
fences, motion detectors, locked doors/gates, sealed windows, lights, cable protection,
laptop locks, badges, swipe cards, guard dogs, cameras, mantraps/turnstiles, and
alarms.
Physical access controls are necessary to protect the assets of a company, including its
most important asset, people. When considering physical access controls, the security
of the personnel always comes first, followed by securing other physical assets.
Physical access controls include fences, barriers, turnstiles, locks and other features that
prevent unauthorized individuals from entering a physical site, such as a workplace.
This is to protect not only physical assets such as computers from being stolen, but
also to protect the health and safety of the personnel inside.
Physical security controls for human traffic are often done with technologies such as
turnstiles, mantraps and remotely or system-controlled door locks. For the system to
identify an authorized employee, an access control system needs to have some form of
enrollment station used to assign and activate an access control device. Most often, a
badge is produced and issued with the employee’s identifiers, with the enrollment
station giving the employee specific areas that will be accessible. In high-security
environments, enrollment may also include biometric characteristics. In general, an
access control system compares an individual’s badge against a verified database. If
authenticated, the access control system sends output signals allowing authorized
personnel to pass through a gate or a door to a controlled area. The systems are
typically integrated with the organization’s logging systems to document access
activity (authorized and unauthorized).
A range of card types allow the system to be used in a variety of environments. These
cards include: Bar code, Magnetic stripe, Proximity, Smart, Hybrid
Environmental Design
Crime Prevention Through Environmental Design (CPTED) provides direction to solve the
challenges of crime with organizational (people),
mechanical (technology and hardware) and natural design (architectural and circulation
flow) methods. By directing the flow of people, using passive techniques to signal who
should and should not be in a space and providing visibility to otherwise hidden
spaces, the likelihood that someone will commit a crime in that area decreases.
Biometrics
Even though the biometric data may not be secret, it is personally identifiable
information, and the protocol should not reveal it without the user’s consent.
Biometrics takes two primary forms, physiological and behavioral.
Biometric systems are considered highly accurate, but they can be expensive to
implement and maintain because of the cost of purchasing equipment and registering
all users. Users may also be uncomfortable with the use of biometrics, considering
them to be an invasion of privacy or presenting a risk of disclosure of medical
information (since retina scans can disclose medical conditions). A further drawback is
the challenge of sanitization of the devices.
Monitoring
The use of physical access controls and monitoring personnel and equipment entering
and leaving as well as auditing/logging all physical events are primary elements in
maintaining overall organizational security.
Cameras
Cameras are normally integrated into the overall security program and centrally
monitored. Cameras provide a flexible method of surveillance and monitoring. They
can be a deterrent to criminal activity, can detect activities if combined with other
sensors and, if recorded, can provide evidence after the activity. They are often used in
locations where access is difficult or there is a need for a forensic record. While cameras
provide one tool for monitoring the external perimeter of facilities, other technologies
augment their detection capabilities. A variety of motion sensor technologies can be
effective in exterior locations. These include infrared, microwave and lasers trained on
tuned receivers. Other sensors can be integrated into doors, gates and turnstiles, and
strain-sensitive cables and other vibration sensors can detect if someone attempts to
scale a fence. Proper integration of exterior or perimeter sensors will alert an
organization to any intruders attempting to gain access across open space or
attempting to breach the fence line.
Logs
In this section, we are concentrating on the use of physical logs, such as a sign-in sheet
maintained by a security guard, or even a log created by an electronic system that
manages physical access. Electronic systems that capture system and security logs
within software will be covered in another section.
A log is a record of events that have occurred. Physical security logs are essential to
support business requirements. They should capture and retain information as long as
necessary for legal or business reasons. Because logs may be needed to prove
compliance with regulations and assist in a forensic investigation, the logs must be
protected from manipulation. Logs may also contain sensitive data about customers or
users and should be protected from unauthorized disclosure.
The organization should have a policy to review logs regularly as part of their
organization’s security program. As part of the organization’s log processes, guidelines
for log retention must be established and followed. If the organizational policy states
to retain standard log files for only six months, that is all the organization should have.
A log anomaly is anything out of the ordinary. Identifying log anomalies is often the
first step in identifying security-related issues, both during an audit and during routine
monitoring. Some anomalies will be glaringly obvious: for example, gaps in date/time
stamps or account lockouts. Others will be harder to detect, such as someone trying to
write data to a protected directory. Although it may seem that logging everything so
you would not miss any important data is the best approach, most organizations would
soon drown under the amount of data collected.
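As a small example of automated anomaly spotting, the sketch below scans consecutive log timestamps for gaps, one of the "glaringly obvious" anomalies mentioned above (the threshold is illustrative):

```python
from datetime import datetime, timedelta

def find_time_gaps(timestamps: list[datetime],
                   max_gap: timedelta = timedelta(minutes=5)) -> list[tuple[datetime, datetime]]:
    """Return (before, after) pairs where logging went silent for too long."""
    return [(a, b) for a, b in zip(timestamps, timestamps[1:]) if b - a > max_gap]

ts = [datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 9, 1),
      datetime(2024, 1, 1, 9, 42)]   # a 41-minute silence
print(find_time_gaps(ts))            # flags the 9:01 -> 9:42 gap
```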
Business and legal requirements for log retention will vary among economies,
countries and industries. Some businesses will have no requirements for data retention.
Others are mandated by the nature of their business or by business partners to comply
with certain retention data. For example, the Payment Card Industry Data Security
Standard (PCI DSS) requires that businesses retain one year of log data in support of
PCI. Some federal regulations include requirements for data retention as well.
If a business has no business or legal requirements to retain log data, how long should
the organization keep it? The first people to ask should be the legal department. Most
legal departments have very specific guidelines for data retention, and those guidelines
may drive the log retention policy.
Security Guards
Security guards are an effective physical security control. No matter what form of
physical access control is used, a security guard or other monitoring system will
discourage a person from masquerading as someone else or following closely on the
heels of another to gain access. This helps prevent theft and abuse of equipment or
information.
Alarm Systems
Alarm systems are commonly found on doors and windows in homes and office
buildings. In their simplest form, they are designed to alert the appropriate personnel
when a door or window is opened unexpectedly.
For example, an employee may enter a code and/or swipe a badge to open a door, and
that action would not trigger an alarm. Alternatively, if that same door was opened by
brute force without someone entering the correct code or using an authorized badge,
an alarm would be activated.
Another alarm system is a fire alarm, which may be activated by heat or smoke at a
sensor and will likely sound an audible warning to protect human lives in the vicinity. It
will likely also contact local response personnel as well as the closest fire department.
Finally, another common type of alarm system is in the form of a panic button. Once
activated, a panic button will alert the appropriate police or security personnel.
Whereas physical access controls are tangible methods or mechanisms that limit
someone from getting access to an area or asset, logical access controls are electronic
methods that limit someone from getting access to systems, and sometimes even to
tangible assets or areas. Types of logical access controls include:
• Passwords
• Biometrics (implemented on a system, such as a smartphone or laptop)
• Badge/token readers connected to a system
These types of electronic tools limit who can get logical access to an asset, even if the
person already has physical access.
Discretionary Access Control (DAC)
Discretionary access control (DAC) is a specific type of access control policy that is
enforced over all subjects and objects in an information system. In DAC, the policy
specifies that a subject who has been granted access to information can do one or
more of the following:
Most information systems in the world are DAC systems. In a DAC system, a user who
has access to a file is usually able to share that file with or pass it to someone else. This
grants the user almost the same level of access as the original owner of the file. Rule-
based access control systems are usually a form of DAC.
This methodology relies on the discretion of the owner of the access control object to
determine the access control subject’s specific rights. Hence, security of the object is
literally up to the discretion of the object owner. DACs are not very scalable; they rely
on the access control decisions made by each individual object owner, and it can be
difficult to find the source of access control issues when problems occur.
A mandatory access control (MAC) policy is one that is uniformly enforced across all
subjects and objects within the boundary of an information system. In simplest terms,
this means that only properly designated security administrators, as trusted subjects,
can modify any of the security rules that are established for subjects and objects within
the system. This also means that for all subjects defined by the organization (that is,
known to its integrated identity management and access control system), the
organization assigns a subset of total privileges for a subset of objects, such that the
subject is constrained from doing any of the following:
Although MAC sounds very similar to DAC, the primary difference is who can control
access. With Mandatory Access Control, it is mandatory for security administrators to
assign access rights or permissions; with Discretionary Access Control, it is up to the
object owner’s discretion.
Role-based access control (RBAC), as the name suggests, sets up user permissions
based on roles. Each role represents users with similar or identical permissions.
Role-based access control provides each worker privileges based on what role they
have in the organization. Only Human Resources staff have access to personnel files,
for example; only Finance has access to bank accounts; each manager has access to
their own direct reports and their own department. Very high-level system
administrators may have access to everything; new employees would have very limited
access, the minimum required to do their jobs.
Having multiple roles with different combinations of permissions can require close
monitoring to make sure everyone has the access they need to do their jobs and
nothing more. In this world where jobs are ever-changing, this can sometimes be a
challenge to keep track of, especially with extremely granular roles and permissions.
Upon hiring or changing roles, a best practice is to not copy user profiles to new users.
It is recommended that standard roles are established, and new users are created
based on those standards rather than an actual user. That way, new employees start
with the appropriate roles and permissions.
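A minimal sketch of role-based permission checks matching the examples above; role and permission names are illustrative:

```python
ROLES = {
    "hr":       {"read:personnel_files"},
    "finance":  {"read:bank_accounts", "write:bank_accounts"},
    "sysadmin": {"read:*", "write:*"},      # very high-level access
}

USER_ROLES = {"alice": {"hr"}, "bob": {"finance"}}

def allowed(user: str, permission: str) -> bool:
    """Union the user's role permissions, honoring action wildcards."""
    perms = set().union(*(ROLES[r] for r in USER_ROLES.get(user, set())))
    action = permission.split(":")[0]
    return permission in perms or f"{action}:*" in perms

print(allowed("alice", "read:personnel_files"))   # True: HR role
print(allowed("bob", "read:personnel_files"))     # False: wrong role
```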
4. Chapter - Network Security (L4)
Module 1: Understand Computer Networking
Domain D4.1.1, D4.1.2
What is Networking
A network is simply two or more computers linked together to share data, information
or resources.
Types of Networks
• Local area network (LAN) - A local area network (LAN) is a network typically spanning a
single floor or building. This is commonly a limited geographical area.
• Wide area network (WAN) - Wide area network (WAN) is the term usually assigned to
the long-distance connections between geographically remote networks.
Network Devices
• Hubs are used to connect multiple devices in a network. They’re less likely to be
seen in business or corporate networks than in home networks. Hubs are wired
devices and are not as smart as switches or routers.
• You might consider using a switch, or what is also known as an intelligent hub.
Switches are wired devices that know the addresses of the devices connected to
them and route traffic to that port/device rather than retransmitting to all
devices. Offering greater efficiency for traffic delivery and improving the overall
throughput of data, switches are smarter than hubs, but not as smart as routers.
Switches can also create separate broadcast domains when used to create
VLANs, which will be discussed later.
• Routers are used to control traffic flow on networks and are often used to
connect similar networks and control traffic flow between them. Routers can be
wired or wireless and can connect multiple switches. Smarter than hubs and
switches, routers determine the most efficient “route” for the traffic to flow
across the network.
• Firewalls are essential tools in managing and controlling network traffic and
protecting the network. A firewall is a network device used to filter traffic. It is
typically deployed between a private network and the internet, but it can also be
deployed between departments (segmented networks) within an organization
(overall network). Firewalls filter traffic based on a defined set of rules, also
called filters or access control lists.
• Endpoints are the ends of a network communication link. One end is often at a
server where a resource resides, and the other end is often a client making a
request to use a network resource. An endpoint can be another server, desktop
workstation, laptop, tablet, mobile phone or any other end user device.
• Internet Protocol (IP) Address - While MAC addresses are generally assigned in
the firmware of the interface, IP hosts associate that address with a unique
logical address. This logical IP address represents the network interface within
the network and can be useful to maintain communications when a physical
device is swapped with new hardware. Examples are 192.168.1.1 and
2001:db8::ffff:0:1.
Networking Models
Many different models, architectures and standards exist that provide ways to
interconnect different hardware and software systems with each other for the purposes
of sharing information, coordinating their activities and accomplishing joint or shared
tasks.
Translating the organization’s security needs into safe, reliable and effective network
systems needs to start with a simple premise. The purpose of all communications is to
exchange information and ideas between people and organizations so that they can
get work done.
Those simple goals can be re-expressed in network (and security) terms such as:
In the most basic form, a network model has at least two layers:
• UPPER LAYER APPLICATION: also known as the host or application layer, is responsible
for managing the integrity of a connection and controlling the session as well as
establishing, maintaining and terminating communication sessions between two
computers. It is also responsible for transforming data received from the Application
Layer into a format that any system can understand. And finally, it allows applications to
communicate and determines whether a remote communication partner is available
and accessible.
o APPLICATION
▪ APPLICATION 7
▪ PRESENTATION 6
▪ SESSION 5
• LOWER LAYER: it is often referred to as the media or transport layer and is responsible
for receiving bits from the physical connection medium and converting them into a
frame. Frames are grouped into standardized sizes. Think of frames as a bucket and the
bits as water. If the buckets are sized similarly and the water is contained within the
buckets, the data can be transported in a controlled manner. Route data is added to the
frames of data to create packets. In other words, a destination address is added to the
bucket. Once we have the buckets sorted and ready to go, the host layer takes over.
o DATA TRANSPORT
▪ TRANSPORT 4
▪ NETWORK 3
▪ DATA LINK 2
▪ PHYSICAL 1
The OSI Model was developed to establish a common way to describe the
communication structure for interconnected computer systems. The OSI model serves
as an abstract framework, or theoretical model, for how protocols should function in an
ideal world, on ideal hardware. Thus, the OSI model has become a common conceptual
reference that is used to understand the communication of various hierarchical
components from software interfaces to physical hardware.
The OSI model divides networking tasks into seven distinct layers. Each layer is
responsible for performing specific tasks or operations with the goal of supporting
data exchange (in other words, network communication) between two computers. The
layers are interchangeably referenced by name or layer number. For example, Layer 3 is
also known as the Network Layer. The layers are ordered specifically to indicate how
information flows through the various levels of communication. Each layer
communicates directly with the layer above and the layer below it. For example, Layer 3
communicates with both the Data Link (2) and Transport (4) layers.
The Application, Presentation, and Session Layers (5-7) are commonly referred to
simply as data. However, each layer has the potential to perform encapsulation
(enforcement of data hiding and code hiding during all phases of software
development and operational use. Bundling together data and methods is the process
of encapsulation; its opposite process may be called unpacking, revealing, or using
other terms. Also used to refer to taking any set of data and packaging it or hiding it in
another data structure, as is common in network protocols and encryption.).
Encapsulation is the addition of header and possibly a footer (trailer) data by a
protocol used at that layer of the OSI model. Encapsulation is particularly important
when discussing Transport, Network and Data Link layers (2-4), which all generally
include some form of header. At the Physical Layer (1), the data unit is converted into
binary, i.e., 01010111, and sent across physical wires such as an Ethernet cable.
It's worth mapping some common networking terminology to the OSI Model so you
can see the value in the conceptual model.
• When someone references an image file like a JPEG or PNG, we are talking about the
Presentation Layer (6).
• When discussing logical ports such as NetBIOS, we are discussing the Session Layer (5).
• When discussing TCP/UDP, we are discussing the Transport Layer (4).
• When discussing routers sending packets, we are discussing the Network Layer (3).
• When discussing switches, bridges or WAPs sending frames, we are discussing the Data
Link Layer (2).
Encapsulation occurs as the data moves down the OSI model from Application to
Physical. As data is encapsulated at each descending layer, the previous layer’s header,
payload and footer are all treated as the next layer’s payload. The data unit size
increases as we move down the conceptual model and the contents continue to
encapsulate.
The inverse action occurs as data moves up the OSI model layers from Physical to
Application. This process is known as de-encapsulation (or decapsulation). The header
and footer are used to properly interpret the data payload and are then discarded. As
we move up the OSI model, the data unit becomes smaller. The encapsulation/de-
encapsulation process is best depicted visually below:
LAYER            DATA UNIT
7 Application    Data
6 Presentation   Data
5 Session        Data
4 Transport      Segment
3 Network        Packet
2 Data Link      Frame
1 Physical       Bits
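A toy rendering of the same flow in code: each lower layer treats everything above it as its payload and adds its own header (and, at the Data Link layer, a trailer). The header strings are placeholders, not real wire formats:

```python
payload = b"GET /index.html"                  # layers 5-7: application data
segment = b"TCP_HDR|" + payload               # layer 4: transport header added
packet  = b"IP_HDR|" + segment                # layer 3: routing (network) header added
frame   = b"ETH_HDR|" + packet + b"|ETH_FCS"  # layer 2: header and trailer added

print(frame)   # layer 1 would transmit this as raw bits
# De-encapsulation on the receiving host strips each wrapper in reverse order.
```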
The OSI model wasn’t the first or only attempt to streamline networking protocols or
establish a common communications standard. In fact, the most widely used protocol
today, TCP/IP, was developed in the early 1970s. The OSI model was not developed
until the late 1970s. The TCP/IP protocol stack focuses on the core functions of
networking.
TCP/IP Protocol Architecture Layers
The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather,
it is a protocol stack comprising dozens of individual protocols. TCP/IP is a platform-
independent protocol based on open standards. However, this is both a benefit and a
drawback. TCP/IP can be found in just about every available operating system, but it
consumes a significant amount of resources and is relatively easy to hack into because
it was designed for ease of use rather than for security.
At the Application Layer, TCP/IP protocols include Telnet, File Transfer Protocol (FTP),
Simple Mail Transfer Protocol (SMTP), and Domain Name System (DNS). The two
primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-duplex
connection-oriented protocol, whereas UDP is a simplex connectionless protocol. In
the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine the
health of a network or a specific link. ICMP is utilized by ping, traceroute and other
network management tools. The ping utility employs ICMP echo packets and bounces
them off remote systems. Thus, you can use ping to determine whether the remote
system is online, whether the remote system is responding promptly, whether the
intermediary systems are supporting communications, and the level of performance
efficiency at which the intermediary systems are communicating.
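As a rough illustration of the TCP/UDP distinction above, the following Python socket
sketch contrasts the two (example.com and the ports are placeholders chosen for
illustration):

import socket

# TCP is connection-oriented: a three-way handshake must succeed
# before any application data can be exchanged.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(3)
tcp.connect(("example.com", 80))                 # handshake happens here
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))                              # reliable, ordered byte stream
tcp.close()

# UDP is connectionless: the datagram is simply sent, with no handshake
# and no guarantee of delivery or ordering.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("example.com", 9))         # port 9 is the discard service
udp.close()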
Base concepts
IPv4 provides a 32-bit address space; IPv6 provides a 128-bit address space. The IPv4
address space is exhausted nowadays, but IPv4 is still widely used because of NAT
(network address translation) technology. 32 bits means 4 octets of 8 bits each,
represented in dotted decimal notation such as 192.168.0.1, which in binary notation is
11000000 10101000 00000000 00000001.
To ease network administration, networks are typically divided into subnets. Because
subnets cannot be distinguished with the addressing scheme discussed so far, a
separate mechanism, the subnet mask, is used to define the part of the address used
for the subnet. The mask is usually written in dotted decimal notation, such as 255.255.255.0.
With the ever-increasing number of computers and networked devices, it is clear that
IPv4 does not provide enough addresses for our needs. To overcome this shortcoming,
IPv4 was sub-divided into public and private address ranges. Public addresses are
limited with IPv4, but this issue was addressed in part with private addressing. Private
addresses can be shared by anyone, and it is highly likely that everyone on your street
is using the same address scheme.
The nature of the addressing scheme established by IPv4 meant that network designers
had to start thinking in terms of IP address reuse. IPv4 facilitated this in several ways,
such as its creation of the private address groups; this allows every LAN in every SOHO
(small office, home office) situation to use addresses such as 192.168.2.xxx for its
internal network addresses, without fear that some other system can intercept traffic
on their LAN. This table shows the private addresses available for anyone to use:
RANGE
10.0.0.0 to 10.255.255.254
172.16.0.0 to 172.31.255.254
192.168.0.0 to 192.168.255.254
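Python's standard ipaddress module can confirm the notation and ranges discussed
above; a small sketch:

import ipaddress

addr = ipaddress.ip_address("192.168.0.1")
print(f"{int(addr):032b}")        # 11000000101010000000000000000001 (4 octets)

# A /24 subnet mask (255.255.255.0) marks the first 24 bits as the network part.
net = ipaddress.ip_network("192.168.0.0/24")
print(net.netmask)                # 255.255.255.0
print(addr in net)                # True

# The RFC 1918 private ranges from the table above are recognized directly:
for a in ("10.1.2.3", "172.16.0.5", "192.168.0.1", "8.8.8.8"):
    print(a, ipaddress.ip_address(a).is_private)   # the last one is public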
The first octet of 127 is reserved for a computer’s loopback address. Usually, the
address 127.0.0.1 is used. The loopback address is used to provide a mechanism for
self-diagnosis and troubleshooting at the machine level. This mechanism allows a
network administrator to treat a local machine as if it were a remote machine and ping
the network interface to establish whether it is operational.
IPv6 brings several improvements:
* A much larger address field: IPv6 addresses are 128 bits, which supports
2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This
ensures that we will not run out of addresses.
* Improved security: IPsec is an optional part of IPv4 networks, but a
mandatory component of IPv6 networks. This will help ensure the integrity and
confidentiality of IP packets and allow communicating partners to
authenticate with each other.
* Improved quality of service (QoS): This will help services obtain an
appropriate share of a network’s bandwidth.
An IPv6 address is shown as 8 groups of four digits. Instead of numeric (0-9) digits like
IPv4, IPv6 addresses use the hexadecimal range (0000-ffff) and are separated by colons
(:) rather than periods (.). An example IPv6 address is
2001:0db8:0000:0000:0000:ffff:0000:0001. To make it easier for humans to read and
type, it can be shortened by removing the leading zeros at the beginning of each field
and substituting two colons (::) for the longest consecutive zero fields. All fields must
retain at least one digit. After shortening, the example address above is rendered as
2001:db8::ffff:0:1, which is much easier to type. As in IPv4, there are some addresses
and ranges that are reserved for special uses:
* ::1 is the local loopback address, used the same way as 127.0.0.1 in IPv4.
* The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved
for documentation use, just like in the examples above.
* fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved
for internal network use and are not routable on the internet.
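The shortening rules above can be checked with the same ipaddress module; a quick
sketch:

import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:ffff:0000:0001")
print(addr.compressed)   # 2001:db8::ffff:0:1 (leading zeros and longest zero run removed)
print(addr.exploded)     # 2001:0db8:0000:0000:0000:ffff:0000:0001

print(ipaddress.ip_address("::1").is_loopback)   # True, like 127.0.0.1 in IPv4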
What is WiFi?
Wi-Fi is wireless networking based on the IEEE 802.11 family of standards. Wi-Fi
range is generally wide enough for most homes or small offices, and range
extenders may be placed strategically to extend the signal for larger campuses or
homes. Over time the Wi-Fi standard has evolved, with each updated version faster
than the last.
In a LAN, threat actors need to enter the physical space or immediate vicinity of the
physical media itself. For wired networks, this can be done by placing sniffer taps onto
cables, plugging in USB devices, or using other tools that require physical access to the
network. By contrast, wireless media intrusions can happen at a distance.
• Physical Ports: Physical ports are the ports on the routers, switches, servers,
computers, etc. that you connect the wires, e.g., fiber optic cables, Cat5 cables,
etc., to create a network.
Secure Ports
Some network protocols transmit information in clear text, meaning it is not encrypted
and should not be used. Clear text information is subject to network sniffing. This tactic
uses software to inspect packets of data as they travel across the network and extract
text such as usernames and passwords. Network sniffing could also reveal the content
of documents and other files if they are sent via insecure protocols. The table below
shows some of the insecure protocols along with recommended secure alternatives.
Insecure Protocol (Port) → Secure Alternative (Port): Description
• HTTP (80) → HTTPS (443): Unencrypted HTTP traffic is not considered secure. It is
now recommended for web servers and clients to use Transport Layer Security (TLS)
1.3 or higher for the best protection.
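As a sketch of what the secure alternative provides, the snippet below uses Python's
standard ssl module to open a TLS-protected connection and report the negotiated
protocol version (example.com is a placeholder host):

import socket
import ssl

context = ssl.create_default_context()           # verifies certificates by default
with socket.create_connection(("example.com", 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())                     # e.g. 'TLSv1.3'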
Types of Threats
• Spoofing: an attack with the goal of gaining access to a target system through
the use of a falsified identity. Spoofing can be used against IP addresses, MAC
addresses, usernames, system names, wireless network SSIDs, email addresses,
and many other types of logical identification.
• Virus: The computer virus is perhaps the earliest form of malicious code to
plague security administrators. As with biological viruses, computer viruses have
two main functions—propagation and destruction. A virus is a self-replicating
piece of code that spreads without the consent of a user, but frequently with
their assistance (a user has to click on a link or open a file).
• Worm: Worms pose a significant risk to network security. They contain the same
destructive potential as other malicious code objects with an added twist—they
propagate themselves without requiring any human intervention.
• Trojan: the Trojan is a software program that appears benevolent but carries a
malicious, behind-the-scenes payload that has the potential to wreak havoc on
a system or network. For example, ransomware often uses a Trojan to infect a
target machine and then uses encryption technology to encrypt documents,
spreadsheets and other files stored on the system with a key known only to the
malware creator.
• Insider Threat: Insider threats are threats that arise from individuals who are
trusted by the organization. These could be disgruntled employees or
employees involved in espionage. Insider threats are not always willing
participants. A trusted user who falls victim to a scam could be an unwilling
insider threat.
• Malware: A program that is inserted into a system, usually covertly, with the
intent of compromising the confidentiality, integrity or availability of the victim’s
data, applications or operating system or otherwise annoying or disrupting the
victim.
Here are some examples of steps that can be taken to protect networks.
IDS types are commonly classified as host-based and network-based. A host-based IDS
(HIDS) monitors a single computer or host. A network-based IDS (NIDS) monitors a
network by observing network traffic patterns.
Network Intrusion Detection System (NIDS): A NIDS monitors and evaluates network
activity to detect attacks or event anomalies. It cannot monitor the content of
encrypted traffic but can monitor other packet details. A single NIDS can monitor a
large network by using remote sensors to collect data at key network locations that
send data to a central management console. These sensors can monitor traffic at
routers, firewalls, network switches that support port mirroring, and other types of
network taps. A NIDS has very little negative effect on the overall network
performance, and when it is deployed on a single-purpose system, it doesn’t adversely
affect performance on any other computer. A NIDS is usually able to detect the
initiation of an attack or an ongoing attack, but it cannot always provide information
about the attack's success; it will not know whether an attack affected specific
systems, user accounts, files or applications.
Security Information and Event Management (SIEM): Security management involves the
use of tools that collect information about the IT environment from many disparate
sources to better examine the overall security of the organization and streamline
security efforts. These tools are generally known as security information and event
management (or S-I-E-M, pronounced “SIM”) solutions. The general idea of a SIEM
solution is to gather log data from various sources across the enterprise to better
understand potential security concerns and apportion resources accordingly. SIEM
systems can be used along with other components (defense-in-depth) as part of an
overall information security program.
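As a toy illustration of the SIEM idea, the sketch below correlates collected log lines
and raises an alert on repeated failures; the log format and alert threshold are
invented for illustration:

import re
from collections import Counter

logs = [
    "sshd: Failed password for root from 203.0.113.9",
    "sshd: Failed password for admin from 203.0.113.9",
    "sshd: Accepted password for alice from 198.51.100.7",
    "sshd: Failed password for root from 203.0.113.9",
]

# Count failed logins per source IP across the collected events.
failures = Counter()
for line in logs:
    match = re.search(r"Failed password for \S+ from (\S+)", line)
    if match:
        failures[match.group(1)] += 1

for source, count in failures.items():
    if count >= 3:                          # arbitrary alert threshold
        print(f"ALERT: {count} failed logins from {source}")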
Preventing Threats
• Keep systems and applications up to date. Vendors regularly release patches to correct
bugs and security flaws, but these only help when they are applied. Patch management
ensures that systems and applications are kept up to date with relevant patches.
• Remove or disable unneeded services and protocols. If a system doesn’t need a service
or protocol, it should not be running. Attackers cannot exploit a vulnerability in a
service or protocol that isn’t running on a system. As an extreme contrast, imagine a
web server is running every available service and protocol. It is vulnerable to potential
attacks on any of these services and protocols.
• Use intrusion detection and prevention systems. As discussed, intrusion detection and
prevention systems observe activity, attempt to detect threats and provide alerts. They
can often block or stop attacks.
• Use up-to-date anti-malware software. We have already covered the various types of
malicious code such as viruses and worms. A primary countermeasure is anti-malware
software.
• Use firewalls. Firewalls can prevent many different types of threats. Network-based
firewalls protect entire networks, and host-based firewalls protect individual systems.
This chapter included a section describing how firewalls can prevent attacks.
Antivirus: it is a requirement for compliance with the Payment Card Industry Data
Security Standard (PCI DSS). Antivirus systems try to identify malware based on the
signature of known malware or by detecting abnormal activity on a system. This
identification is done with various types of scanners, pattern recognition and advanced
machine learning algorithms. Anti-malware now goes beyond just virus protection;
modern solutions take a more holistic approach, detecting rootkits, ransomware and
spyware. Many endpoint solutions also include software firewalls and
IDS or IPS systems.
Scans: Regular vulnerability and port scans are a good way to evaluate the
effectiveness of security controls used within an organization. They may reveal areas
where patches or security settings are insufficient, where new vulnerabilities have
developed or become exposed, and where security policies are either ineffective or not
being followed. Attackers can exploit any of these vulnerabilities.
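A minimal TCP connect-scan sketch in Python (the target address is a placeholder from
the TEST-NET documentation range; only scan systems you are authorized to test):

import socket

HOST = "192.0.2.10"                        # placeholder address (TEST-NET-1)
for port in (21, 22, 23, 25, 80, 443):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        is_open = s.connect_ex((HOST, port)) == 0   # 0 means connect succeeded
        print(f"{HOST}:{port} {'open' if is_open else 'closed/filtered'}")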
Firewalls: Early computer security engineers borrowed the name from the physical
barriers that slow the spread of fire in buildings, applying it to the devices and
services that isolate network segments from each other as a security measure. As a
result, firewalling refers to the process of designing, using or operating different
processes in ways that isolate high-risk activities from lower-risk ones. Firewalls enforce
policies by filtering network traffic based on a set of rules. While a firewall should
always be placed at internet gateways, other internal network considerations and
conditions determine where a firewall would be employed, such as network zoning or
segregation of different levels of sensitivity. Firewalls have rapidly evolved over time to
provide enhanced security capabilities. The next-generation firewall integrates a variety
of threat management capabilities into a single framework, including proxy services,
intrusion prevention services (IPS) and tight integration with the identity and access
management (IAM) environment to ensure only authorized users are permitted to pass
traffic across the infrastructure. While firewalls can manage traffic at Layers 2 (MAC
addresses), 3 (IP ranges) and 7 (application programming interface (API) and
application firewalls), the traditional implementation has been to control traffic at
Layer 4. Traditional firewalls typically offer ports and IP address filtering, IDS/IPS, an
antivirus gateway, a web proxy and VPN support; next-generation firewalls add IAM
attributes, anti-bot protection and FaaS to that feature set.
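As a toy model of the Layer 3/4 filtering described above, the sketch below matches
packets against an ordered rule set where the first match wins; the rules and addresses
are invented, and real firewalls are far more capable:

# Each rule matches on classic 5-tuple-style fields; "" and 0 act as wildcards.
RULES = [
    # (source prefix, destination port, protocol, action)
    ("10.0.0.",  22,  "tcp", "allow"),   # admin subnet may use SSH
    ("",         443, "tcp", "allow"),   # anyone may reach HTTPS
    ("",         0,   "",    "deny"),    # default deny
]

def filter_packet(src_ip, dst_port, proto):
    for prefix, port, protocol, action in RULES:
        if (src_ip.startswith(prefix) and port in (0, dst_port)
                and protocol in ("", proto)):
            return action                # first matching rule wins
    return "deny"

print(filter_packet("10.0.0.5", 22, "tcp"))      # allow
print(filter_packet("203.0.113.9", 22, "tcp"))   # deny
print(filter_packet("203.0.113.9", 443, "tcp"))  # allow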
Intrusion Prevention System (IPS): An intrusion prevention system (IPS) is a special type
of active IDS that automatically attempts to detect and block attacks before they reach
target systems. A distinguishing difference between an IDS and an IPS is that the IPS is
placed in line with the traffic. In other words, all traffic must pass through the IPS and
the IPS can choose what traffic to forward and what traffic to block after analyzing it.
This allows the IPS to prevent an attack from reaching a target. Since IPS systems are
most effective at preventing network-based attacks, it is common to see the IPS
function integrated into firewalls. Just like IDS, there are Network-based IPS (NIPS) and
Host-based IPS (HIPS).
When it comes to data centers, there are two primary options: organizations can
outsource the data center or own the data center. If the data center is owned, it will
likely be built on premises. A place, such as a building, is needed for the data center,
along with power, HVAC, fire suppression and redundancy.
Which of the following is typically associated with an on-premises data center? Fire
suppression, HVAC and power are all typically associated with an on-premises data
center.
Which of the following is not a source of redundant power? HVAC is not a source of
redundant power; rather, it is something that needs to be protected by a redundant
power supply, which is what the other options provide. What happens if the HVAC
system breaks and equipment gets too hot? If the temperature in the data center gets
too hot, there is a risk that servers will shut down or fail sooner than expected, which
presents a risk of data loss. HVAC is therefore another system that requires
redundancy in order to reduce the risk of data loss, but it is not itself a source of
redundant power.
Redundancy
If the organization requires full redundancy, devices should have two power supplies
connected to diverse power sources. Those power sources would be backed up by
batteries and generators. In a high-availability environment, even generators would be
redundant and fed by different fuel types.
The service level agreement goes down to the granular level. For example, if I'm
outsourcing the IT services, then I will need to have two full-time technicians readily
available, at least from Monday through Friday from eight to five. With cloud
computing, I need to have access to the information in my backup systems within 10
minutes. An SLA specifies the more intricate aspects of the services.
We must be very cautious when outsourcing with cloud-based services, because we
have to make sure that we understand exactly what we are agreeing to. If the SLA
promises 100 percent accessibility to information, is the access directly to you at the
moment, or is it access to their website or through their portal when they open on
Monday? That's where you'll rely on your legal team, who can supervise and review the
conditions carefully before you sign on the dotted line.
Cloud
Cloud Characteristics
Cloud-based assets include any resources that an organization accesses using cloud
computing. Cloud computing refers to on-demand access to computing resources
available from almost anywhere, and cloud computing resources are highly available
and easily scalable. Organizations typically lease cloud-based resources from outside
the organization. Cloud computing has many benefits for organizations, which include
but are not limited to:
• Resource Pooling
• Broad Network Access
• Rapid Elasticity
• Measured Service
• On-Demand Self-Service
• Usage is metered and priced according to units (or instances) consumed. This
can also be billed back to specific departments or functions.
• Reduced cost of ownership. There is no need to buy any assets for everyday use,
no loss of asset value over time and a reduction of other related costs of
maintenance and support.
• Reduced energy and cooling costs, along with “green IT” environment effect
with optimum use of IT resources and systems.
Service Models
Some cloud-based services only provide data storage and access. When storing data in
the cloud, organizations must ensure that security controls are in place to prevent
unauthorized access to the data. There are varying levels of responsibility for assets
depending on the service model. This includes maintaining the assets, ensuring they
remain functional, and keeping the systems and applications up to date with current
patches. In some cases, the cloud service provider is responsible for these steps. In
other cases, the consumer is responsible for these steps.
• Service models include software as a service (SaaS), platform as a service (PaaS)
and infrastructure as a service (IaaS), each with a different split of responsibilities
between provider and consumer.
Deployment Models
* Public: what we commonly refer to as the cloud for the public user. There is
no real mechanism, other than applying for and paying for the cloud service. It is open
to the public and is, therefore, a shared resource that many people will be able to use
as part of a resource pool. A public cloud deployment model includes assets available
for any consumers to rent or lease and is hosted by an external cloud service provider
(CSP). Service level agreements can be effective at ensuring the CSP provides the
cloud-based services at a level acceptable to the organization.
* Private: it begins with the same technical concept as public clouds,
except that instead of being shared with the public, they are generally
developed and deployed for a private organization that builds its own cloud.
Organizations can create and host private clouds using their own resources.
Therefore, this deployment model includes cloud-based assets for a single
organization. As such, the organization is responsible for all maintenance.
However, an organization can also rent resources from a third party and
split maintenance requirements based on the service model (SaaS, PaaS or
IaaS). Private clouds provide organizations and their departments private
access to the computing, storage, networking and software assets that are
available in the private cloud.
Some other common managed service provider (MSP) implementations are: Augment in-house staff for projects;
Utilize expertise for implementation of a product or service; Provide payroll services;
Provide Help Desk service management; Monitor and respond to security incidents;
Manage all in-house IT infrastructure.
Service-Level Agreement (SLA)
Think of a rule book and legal contract—that combination is what you have in a
service-level agreement (SLA). Let us not underestimate or downplay the importance of
this document/ agreement. In it, the minimum level of service, availability, security,
controls, processes, communications, support and many other crucial business
elements are stated and agreed to by both parties.
The purpose of an SLA is to document specific parameters, minimum service levels and
remedies for any failure to meet the specified requirements. It should also affirm data
ownership and specify data return and destruction details. Other important SLA points
to consider include the following: Cloud system infrastructure details and security
standards; Customer right to audit legal and regulatory compliance by the CSP; Rights
and costs associated with continuing and discontinuing service use; Service availability;
Service performance; Data security and privacy; Disaster recovery processes; Data
location; Data access; Data portability; Problem identification and resolution
expectations; Change management processes; Dispute mediation processes; Exit
strategy;
Network Design
• A DMZ, which stands for Demilitarized Zone, is a network area that is designed
to be accessed by outside visitors but is still isolated from the private network of
the organization. The DMZ is often the host of public web, email, file and other
resource servers.
• VLANs, which stands for Virtual Local Area Networks, are created by switches to
logically segment a network without altering its physical topology.
• A virtual private network (VPN) is a communication tunnel that provides point-
to-point transmission of both authentication and data traffic over an untrusted
network.
Defense in Depth
Defense in depth uses a layered approach when designing the security posture of an
organization. Think about a castle that holds the crown jewels. The jewels will be
placed in a vaulted chamber in a central location guarded by security guards. The
castle is built around the vault with additional layers of security—soldiers, walls, a
moat. The same approach is true when designing the logical security of a facility or
system. Using layers of security will deter many attackers and encourage them to focus
on other, easier targets.
Defense in depth provides more of a starting point for considering all types of
controls—administrative, technological, and physical—that empower insiders and
operators to work together to protect their organization and its systems.
• Data: Controls that protect the actual data with technologies such as encryption, data
leak prevention, identity and access management and data controls.
• Application: Controls that protect the application itself with technologies such as data
leak prevention, application firewalls and database monitors.
• Host: Every control that is placed at the endpoint level, such as antivirus, endpoint
firewall, configuration and patch management.
• Internal network: Controls that are in place to protect uncontrolled data flow and user
access across the organizational network. Relevant technologies include intrusion
detection systems, intrusion prevention systems, internal firewalls and network access
controls.
• Perimeter: Controls that protect against unauthorized access to the network. This level
includes the use of technologies such as gateway firewalls, honeypots, malware analysis
and secure demilitarized zones (DMZs).
• Physical: Controls that provide a physical barrier, such as locks, walls or access control.
• Policies, procedures and awareness: Administrative controls that reduce insider threats
(intentional and unintentional) and identify risks as soon as they appear.
Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly every
connecting point. Zero trust encapsulates information assets, the services that apply to
them and their security properties. This concept recognizes that once inside a trust-
but-verify environment, a user has perhaps unlimited capabilities to roam around,
identify assets and systems and potentially find exploitable vulnerabilities. Placing a
greater number of firewalls or other security boundary control devices throughout the
network increases the number of opportunities to detect a troublemaker before harm
is done. Many enterprise architectures are pushing this to the extreme of
microsegmenting their internal networks, which enforces frequent re-authentication of
a user ID.
Zero trust is an evolving design approach which recognizes that even the most robust
access control systems have their weaknesses. It adds defenses at the user, asset and
data level, rather than relying on perimeter defense. In the extreme, it insists that every
process or action a user attempts to take must be authenticated and authorized; the
window of trust becomes vanishingly small.
While microsegmentation adds internal perimeters, zero trust places the focus on the
assets, or data, rather than the perimeter. Zero trust builds more effective gates to
protect the assets directly rather than building additional or higher walls.
We need to be able to see who and what is attempting to make a network connection.
At one time, network access was limited to internal devices. Gradually, that was
extended to remote connections, although initially those were the exceptions rather
than the norm. This started to change with the concepts of bring your own device
(BYOD) and Internet of Things (IoT).
Considering just IoT for a moment, it is important to understand the range of devices
that might be found within an organization.
The organization’s access control policies and associated security policies should be
enforced via the network access control (NAC) device(s). Remember, of course, that an
access control device only enforces a policy and doesn’t create one.
The NAC device will provide the network visibility needed for access security and may
later be used for incident response. Aside from identifying connections, it should also
be able to provide isolation for noncompliant devices within a quarantined network
and provide a mechanism to “fix” the noncompliant elements, such as turning on
endpoint protection. In short, the goal is to ensure that all devices wishing to join the
network do so only when they comply with the requirements laid out in the
organization policies. This visibility will encompass internal users as well as any
temporary users such as guests or contractors, etc., and any devices they may bring
with them into the organization.
Let’s consider some possible use cases for NAC deployment: Medical devices; IoT
devices; BYOD/mobile devices (laptops, tablets, smartphones); Guest users and
contractors;
It is critically important that all mobile devices, regardless of their owner, go through
an onboarding process, ideally each time a network connection is made, and that the
device is identified and interrogated to ensure the organization’s policies are being
met.
Network-enabled devices are any type of portable or nonportable device that has
native network capabilities. This generally assumes the network in question is a wireless
type of network, typically provided by a mobile telecommunications company.
Network-enabled devices include smartphones, mobile phones, tablets, smart TVs or
streaming media players, network-attached printers, game systems, and much more.
The Internet of Things (IoT) is the collection of devices that can communicate over the
internet with one another or with a control console in order to affect and monitor the
real world. IoT devices might be labeled as smart devices or smart-home equipment.
Many of the ideas of industrial environmental control found in office buildings are
finding their way into more consumer-available solutions for small offices or personal
homes.
Embedded systems and network-enabled devices that communicate with the internet
are considered IoT devices and need special attention to ensure that communication is
not used in a malicious manner. Because an embedded system is often in control of a
mechanism in the physical world, a security breach could cause harm to people and
property. Since many of these devices have multiple access routes, such as ethernet,
wireless, Bluetooth, etc., special care should be taken to isolate them from other
devices on the network. You can impose logical network segmentation with switches
using VLANs, or through other traffic-control means, including MAC addresses, IP
addresses, physical ports, protocols, or application filtering, routing, and access control
management. Network segmentation can be used to isolate IoT environments.
Microsegmentation
The toolsets of current adversaries are polymorphic in nature and allow threats to
bypass static security controls. Modern cyberattacks take advantage of traditional
security models to move easily between systems within a data center.
Microsegmentation aids in protecting against these threats. A fundamental design
requirement of microsegmentation is to understand the protection requirements for
traffic flows within the data center as well as to and from the internet.
When organizations avoid infrastructure-centric design paradigms, they are more likely
to become more efficient at service delivery in the data center and more adept at
detecting and preventing advanced persistent threats.
Virtual local area networks (VLANs) allow network administrators to use switches to
create software-based LAN segments, which can segregate or consolidate traffic across
multiple switch ports. Devices that share a VLAN communicate through switches as if
they were on the same Layer 2 network. Since VLANs act as discrete networks,
communications between VLANs must be enabled. Broadcast traffic is limited to the
VLAN, reducing congestion and reducing the effectiveness of some attacks.
Administration of the environment is simplified, as the VLANs can be reconfigured
when individuals change their physical location or need access to different services.
VLANs can be configured based on switch port, IP subnet, MAC address and protocols.
VLANs do not guarantee a network’s security. At first glance, it may seem that traffic
cannot be intercepted because communication within a VLAN is restricted to member
devices. However, there are attacks that allow a malicious user to see traffic from other
VLANs (so-called VLAN hopping). The VLAN technology is only one tool that can
improve the overall security of the network environment.
Hardening is the process of applying secure configurations (to reduce the attack
surface) and locking down various hardware, communications systems and software,
including the operating system, web server, application server and applications, etc.
This module introduces configuration management practices that will ensure systems
are installed and maintained according to industry and organizational security
standards.
Data Handling
Data itself goes through its own life cycle as users create, use, share and modify it. The
data security life cycle model is useful because it can align easily with the different
roles that people and organizations perform during the evolution of data from creation
to destruction (or disposal). It also helps put the different data states of in use, at rest
and in motion, into context.
All ideas, data, information or knowledge can be thought of as going through six major
sets of activities throughout their lifetime: create, store, use, share, archive and
destroy. Related handling practices include:
o Data Sensitivity Levels and Labels: unless otherwise mandated, organizations are
free to create classification systems that best meet their own needs. In
professional practice, it is typically best if the organization has enough
classifications to distinguish between sets of assets with differing
sensitivity/value, but not so many classifications that the distinction between
them is confusing to individuals. Typically, two or three classifications are
manageable, and more than four tend to be difficult.
o Highly restricted: Compromise of data with this sensitivity label could possibly
put the organization’s future existence at risk. Compromise could lead to
substantial loss of life, injury or property damage, and the litigation and claims
that would follow.
o Moderately restricted: Compromise of data with this sensitivity label could lead
to loss of temporary competitive advantage, loss of revenue or disruption of
planned investments or activities.
o Low sensitivity (sometimes called “internal use only”): Compromise of data with
this sensitivity label could cause minor disruptions, delays or impacts.
o Unrestricted public data: As this data is already published, no harm can come
from further dissemination or disclosure.
o Clearing the device or system, which usually involves writing multiple patterns of
random values throughout all storage media. This is sometimes called
“overwriting” or “zeroizing” the system, although writing zeros has the risk that
a missed block or storage extent may still contain recoverable, sensitive
information after the process is completed. (A simplified overwrite sketch
follows this list.)
o Purging the device or system, which eliminates (or greatly reduces) the chance
that residual physical effects from the writing of the original data values may still
be recovered, even after the system is cleared. Some magnetic disk storage
technologies, for example, can still have residual “ghosts” of data on their
surfaces even after being overwritten multiple times. Magnetic media, for
example, can often be altered sufficiently to meet security requirements; in
more stringent cases, degaussing may not be sufficient.
o Physical destruction of the device or system is the ultimate remedy to data
remanence. Magnetic or optical disks and some flash drive technologies may
require being mechanically shredded, chopped or broken up, etched in acid or
burned; their remains may be buried in protected landfills, in some cases.
o In many routine operational environments, security considerations may accept
that clearing a system is sufficient. But when systems elements are to be
removed and replaced, either as part of maintenance upgrades or for disposal,
purging or destruction may be required to protect sensitive information from
being compromised by an attacker.
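A simplified Python sketch of the clearing-by-overwriting approach mentioned in the
list above. Real sanitization should follow a recognized standard (such as NIST SP
800-88), and on SSDs wear leveling and spare blocks mean file-level overwriting
cannot reach every copy of the data:

import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random data, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))      # random pattern over every byte
            f.flush()
            os.fsync(f.fileno())           # push the write to the device
    os.remove(path)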
Log reviews are an essential function not only for security assessment and testing but
also for identifying security incidents, policy violations, fraudulent activities and
operational problems near the time of occurrence. Log reviews support audits –
forensic analysis related to internal and external investigations – and provide support
for organizational security baselines. Review of historic audit logs can determine if a
vulnerability identified in a system has been previously exploited.
Different tools are used depending on whether the risk from the attack is from traffic
coming into or leaving the infrastructure.
Encryption Overview
Almost every action we take in our modern digital world involves cryptography.
Encryption protects our personal and business transactions; digitally signed software
updates verify their creator’s or supplier’s claim to authenticity. Digitally signed
contracts, binding on all parties, are routinely exchanged via email without fear of
being repudiated later by the sender.
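As a sketch of how a digital signature supports non-repudiation, assuming the
third-party cryptography package is installed (pip install cryptography); the message
is invented:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

contract = b"I agree to the terms of service."
signature = private_key.sign(contract)      # only the private-key holder can sign

try:
    public_key.verify(signature, contract)  # anyone with the public key can verify
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# Any change to the message breaks verification, which supports integrity
# and non-repudiation:
try:
    public_key.verify(signature, contract + b" (modified)")
except InvalidSignature:
    print("tampered message rejected")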
Configuration management is a process and discipline used to ensure that the only
changes made to a system are those that have been authorized and validated. It is
both a decision-making process and a set of control processes. If we look closer at this
definition, the basic configuration management process includes components such as
identification, baselines, updates and patches.
• Configuration Management
i. Identification: baseline identification of a system and all its components,
interfaces and documentation.
ii. Baseline: a security baseline is a minimum level of protection that can be used
as a reference point. Baselines provide a way to ensure that updates to
technology and architectures are subjected to the minimum understood and
acceptable level of security requirements.
iii. Change Control: An update process for requesting changes to a baseline, by
means of making changes to one or more components in that baseline. A
review and approval process for all changes. This includes updates and patches.
iv. Verification & Audit: A regression and validation process, which may involve
testing and analysis, to verify that nothing in the system was broken by a
newly applied set of changes. An audit process can validate that the currently in-
use baseline matches the sum total of its initial baseline plus all approved
changes applied in sequence.
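A toy sketch of the verification and audit step above: compare a system's current
settings against its approved baseline plus all approved changes; the setting names
and values are invented:

# Expected state = baseline plus every change approved through change control.
baseline = {"ssh_root_login": "no", "tls_min_version": "1.2", "telnet": "disabled"}
approved_changes = {"tls_min_version": "1.3"}        # passed change control
expected = {**baseline, **approved_changes}

current = {"ssh_root_login": "yes", "tls_min_version": "1.3", "telnet": "disabled"}

for setting, required in expected.items():
    actual = current.get(setting)
    if actual != required:
        print(f"DRIFT: {setting} is {actual!r}, baseline requires {required!r}")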
• Patches: The challenge for the security professional is maintaining all patches.
Some patches are critical and should be deployed quickly, while others may not
be as critical but should still be deployed because subsequent patches may be
dependent on them. Standards such as the PCI DSS require organizations to
deploy security patches within a certain time frame. An organization should test
the patch before rolling it out across the organization. If the patch does not
work or has unacceptable effects, it might be necessary to roll back to a
previous (pre-patch) state. Typically, the criteria for rollback are previously
documented and would automatically be performed when the rollback criteria
were met. The risk of using unattended patching should be weighed against the
risk of having unpatched systems in the organization’s network. Unattended (or
automated) patching might result in unscheduled outages as production
systems are taken offline or rebooted as part of the patch process.
All policies must support any regulatory and contractual obligations of the
organization. Sometimes it can be challenging to ensure the policy encompasses all
requirements while remaining simple enough for users to understand.
Here are six common security-related policies that exist in most organizations.
• Data Handling Policy: Appropriate use of data: This aspect of the policy defines
whether data is for use within the company, is restricted for use by only certain
roles or can be made public to anyone outside the organization. In addition,
some data has associated legal usage definitions. The organization’s policy
should spell out any such restrictions or refer to the legal definitions as required.
Proper data classification also helps the organization comply with pertinent laws
and regulations. For example, classifying credit card data as confidential can
help ensure compliance with the PCI DSS. One of the requirements of this
standard is to encrypt credit card information. Data owners who correctly
defined the encryption aspect of their organization’s data classification policy
will require that the data be encrypted according to the specifications defined in
this standard.
• Password Policy: Every organization should have a password policy in place that
defines expectations of systems and users. The password policy should describe
senior leadership's commitment to ensuring secure access to data, outline any
standards that the organization has selected for password formulation, and
identify who is designated to enforce and validate the policy.
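A sketch of validating passwords against such a standard; the length, character-class
and history requirements below are hypothetical policy choices, not values mandated
by any particular standard:

import re

MIN_LENGTH = 12
HISTORY = {"Spring2024!", "Winter2023!"}   # previously used passwords

def check_password(pw: str) -> list[str]:
    """Return a list of policy violations (empty list means compliant)."""
    problems = []
    if len(pw) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("missing an uppercase letter")
    if not re.search(r"[a-z]", pw):
        problems.append("missing a lowercase letter")
    if not re.search(r"[0-9]", pw):
        problems.append("missing a digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("missing a symbol")
    if pw in HISTORY:
        problems.append("reuses a previous password")
    return problems

print(check_password("Spring2024!"))             # too short and reused
print(check_password("correct-Horse7-battery"))  # [] (compliant)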
• Acceptable Use Policy (AUP): The acceptable use policy (AUP) defines acceptable
use of the organization’s network and computer systems and can help protect
the organization from legal action. It should detail the appropriate and
approved usage of the organization’s assets, including the IT environment,
devices and data. Each employee (or anyone having access to the organization’s
assets) should be required to sign a copy of the AUP, preferably in the presence
of another employee of the organization, and both parties should keep a copy
of the signed AUP.
Policy aspects commonly included in AUPs: Data access, System access, Data
disclosure, Passwords, Data retention, Internet usage, Company device usage
• Bring Your Own Device (BYOD): An organization may allow workers to acquire
equipment of their choosing and use personally owned equipment for business (and
personal) use. This is sometimes called bring your own device (BYOD). Another option is
to present the teleworker or employee with a list of approved equipment and require
the employee to select one of the products on the trusted list.
Letting employees choose the device that is most comfortable for them may be good
for employee morale, but it presents additional challenges for the security professional
because it means the organization loses some control over standardization and
privacy. If employees are allowed to use their phones and laptops for both personal
and business use, this can pose a challenge if, for example, the device has to be
examined for a forensic audit. It can be hard to ensure that the device is configured
securely and does not have any backdoors or other vulnerabilities that could be used
to access organizational data or systems.
All employees must read and agree to adhere to this policy before any access to the
systems, network and/or data is allowed. If and when the workforce grows, so too will
the problems with BYOD. Certainly, the appropriate tools are going to be necessary to
manage the use of and security around BYOD devices and usage. The organization
needs to establish clear user expectations and set the appropriate business rules.
• Privacy Policy: Often, personnel have access to personally identifiable information (PII)
(also referred to as electronic protected health information [ePHI] in the health
industry). It is imperative that the organization documents that the personnel
understand and acknowledge the organization’s policies and procedures for handling
of that type of information and are made aware of the legal repercussions of handling
such sensitive data. This type of documentation is similar to the AUP but is specific to
privacy-related data.
The organization should also create a public document that explains how private
information is used, both internally and externally. For example, it may be required that
a medical provider present patients with a description of how the provider will protect
their information (or a reference to where they can find this description, such as the
provider’s website).
Throughout the system life cycle, changes made to the system, its individual
components and its operating environment all have the capability to introduce new
vulnerabilities and thus undermine the security of the enterprise. Change management
requires a process to implement the necessary changes so they do not adversely affect
business operations.
Policies will be set according to the needs of the organization and its vision and
mission. Each of these policies should have a penalty or a consequence attached in
case of noncompliance. The first time may be a warning; the next might be a forced
leave of absence or suspension without pay, and a critical violation could even result in
an employee’s termination. All of this should be outlined clearly during onboarding,
particularly for information security personnel. It should be made clear who is
responsible for enforcing these policies, and the employee must sign off on them and
have documentation saying they have done so. This process could even include a few
questions in a survey or quiz to confirm that the employees truly understand the
policy. These policies are part of the baseline security posture of any organization. Any
security or data handling procedures should be backed up by the appropriate policies.
Documentation: All of the major change management practices address a common set
of core activities that start with a request for change (RFC) and move through various
development and test stages until the change is released to the end users. From first to
last, each step is subject to some form of formalized management and decision-
making; each step produces accounting or log entries to document its results.
Approval: These processes typically include: Evaluating the RFCs for completeness,
Assignment to the proper change authorization process based on risk and
organizational practices, Stakeholder reviews, resource identification and allocation,
Appropriate approvals or rejections, and Documentation of approval or rejection.
Rollback: Depending upon the nature of the change, a variety of activities may need to
be completed. These generally include: Scheduling the change, Testing the change,
Verifying the rollback procedures, Implementing the change, Evaluating the change for
proper and effective operation, and Documenting the change in the production
environment. Rollback authority would generally be defined in the rollback plan, which
might be immediate or scheduled as a subsequent change if monitoring of the change
suggests inadequate performance.
Purpose
The purpose of awareness training is to make sure everyone knows what is expected of
them, based on responsibilities and accountabilities, and to find out if there is any
carelessness or complacency that may pose a risk to the organization. We will be able
to align the information security goals with the organization’s missions and vision and
have a better sense of what the environment is.
Let’s start with a clear understanding of the three different types of learning activities
that organizations use, whether for information security or for any other purpose:
• Awareness: These are activities that attract and engage the learner’s attention by
acquainting them with aspects of an issue, concern, problem or need.
• Training: These are activities that provide learners with specific, often task-related
skills, along with practice applying them correctly.
• Education: These are activities that build deeper knowledge and understanding,
preparing learners to analyze new situations and devise their own responses.
You’ll notice that none of these have an expressed or implied degree of formality,
location or target audience. (Think of a newly hired senior executive with little or no
exposure to the specific compliance needs your organization faces; first, someone has
to get their attention and make them aware of the need to understand. The rest can
follow.)
Security Awareness Training Examples
Education may help workers in a secure server room understand the interaction of the
various fire and smoke detectors, suppression systems, alarms and their interactions
with electrical power, lighting and ventilation systems. Training would provide those
workers with task-specific, detailed learning about the proper actions each should take
in the event of an alarm, a suppression system going off without an alarm, a ventilation
system failure or other contingency. This training would build on the learning acquired
via the educational activities. Awareness activities would include not only posting the
appropriate signage, floor or doorway markings, but also other indicators to help
workers detect an anomaly, respond to an alarm and take appropriate action. In this
case, awareness is a constantly available reminder of what to do when the alarms go
off.
Education may be used to help select groups of users better understand the ways in
which social engineering attacks are conducted and engage those users in creating and
testing their own strategies for improving their defensive techniques. Training will help
users increase their proficiency in recognizing a potential phishing or similar attempt,
while also helping them practice the correct responses to such events. Training may
include simulated phishing emails sent to users on a network to test their ability to
identify a phishing email. Awareness activities raise users’ overall understanding of the
threat posed by phishing, vishing, SMS phishing (also called “smishing”) and other
social engineering tactics, and can also alert selected users to new or novel approaches
that such attacks might be taking. Let’s look at some common risks and why it’s
important to include them in your security awareness training programs.
Phishing
The use of phishing attacks to target individuals, entire departments and even
companies is a significant threat that the security professional needs to be aware of
and be prepared to defend against. Countless variations on the basic phishing
attack have been developed in recent years, leading to a variety of attacks that are
deployed relentlessly against individuals and networks in a never-ending stream of
emails, phone calls, spam, instant messages, videos, file attachments and many other
delivery mechanisms.
Phishing attacks that attempt to trick highly placed officials or private individuals with
sizable assets into authorizing large fund wire transfers to previously unknown entities
are known as whaling attacks.
Social Engineering
Social engineering is an important part of any security awareness training program for
one very simple reason: bad actors know that it works. For the cyberattackers, social
engineering is an inexpensive investment with a potentially very high payoff. Social
engineering, applied over time, can extract significant insider knowledge about almost
any organization or individual.
Most social engineering techniques are not new. Many have even been taught as basic
fieldcraft for espionage agencies and are part of the repertoire of investigative
techniques used by real and fictional police detectives. A short list of the tactics that we
see across cyberspace currently includes:
Phone phishing or vishing: Using a rogue interactive voice response (IVR) system to re-
create a legitimate-sounding copy of a bank or other institution’s IVR system. The
victim is prompted through a phishing email to call in to the “bank” via a provided
phone number to verify information such as account numbers, account access codes or
a PIN and to confirm answers to security questions, contact information and addresses.
A typical vishing system will reject logins continually, ensuring the victim enters PINs or
passwords multiple times, often disclosing several different passwords. More advanced
systems may be used to transfer the victim to a human posing as a customer service
agent for further questioning.
Password Protection
We use many different passwords and systems. Many password managers will store a
user’s passwords for them so the user does not have to remember all their passwords
for multiple systems. The greatest disadvantage of these solutions is the risk of
compromise of the password manager.
Organizations should encourage the use of different passwords for different systems
and should provide a recommended password management solution for their users.
Poor practices to avoid include: reusing passwords across multiple systems, especially
using the same password for business and personal use; writing down passwords and
leaving them in unsecured areas; and sharing a password with tech support or a
co-worker.