Cybersecurity Internship Guide
1. Introduction to Cybersecurity
Core business applications are now commonly installed alongside Web 2.0 apps on a variety
of endpoints. Networks that were originally designed to share files and printers are now used
to collect massive volumes of data, exchange real-time information, transact online business,
and enable global collaboration. Many Web 2.0 apps are available as software-as-a-service
(SaaS), web-based, or mobile apps that can be easily installed by end users or that can be run
without installing any local programs or services on the endpoint. The use of Web 2.0 apps in
the enterprise is sometimes referred to as Enterprise 2.0. Many organizations are recognizing
significant benefits from the use of Enterprise 2.0 applications and technologies, including
better collaboration, increased knowledge sharing, and reduced expenses. Many common Web 2.0
apps and services are also delivered as SaaS apps.
Web 3.0
The vision of Web 3.0 is to return the power of the internet to individual users, in much the
same way that the original Web 1.0 was envisioned. To some extent, Web 2.0 has become
shaped and characterized, if not controlled, by governments and large corporations dictating
the content that is made available to individuals and raising many concerns about individual
security, privacy, and liberty.
Managed Security Services
Managed security service providers (MSSPs) typically operate fully staffed 24/7 security operations
centres (SOCs) and offer a variety of services such as log collection and aggregation in a
security information and event management (SIEM) platform, event detection and alerting,
vulnerability scanning and patch management, threat intelligence, and incident response and
forensic investigation, among others.
Work-from-Home (WFH) and Work-from-Anywhere (WFA)
In the wake of the global pandemic, many organizations have implemented remote working
models that include WFH and WFA. In many cases, these organizations have realized
additional benefits from these models, including increased operational efficiencies, higher
employee productivity and morale, and greater access to a diverse talent pool that extends far
beyond their local geographic areas.
Exploiting vulnerabilities in core business applications has long been a predominant attack
vector, but threat actors are constantly developing new tactics, techniques, and procedures
(TTPs).
Protect Networks and Cloud Environments
To effectively protect their networks and cloud environments, enterprise security teams must
manage the risks associated with a relatively limited, known set of core applications, as well
as the risks associated with an ever-increasing number of known and unknown cloud-based
applications. The cloud-based application consumption model has revolutionized the way
organizations do business, and applications such as Microsoft Office 365 and Salesforce are
being consumed and updated entirely in the cloud.
Tactics, Techniques, and Procedures (TTPs)
Port Hopping
Port hopping allows adversaries to randomly change ports and protocols during a session.
Use of Non-Standard Ports
An example of using non-standard ports is running Yahoo! Messenger over TCP port 80
(HTTP) instead of the standard TCP port for Yahoo! Messenger (5050).
Tunnelling
Another method is tunnelling within commonly used services, such as running peer-to-peer
(P2P) file sharing or an IM client such as Meebo over HTTP.
Hiding Within SSL Encryption
Hiding in SSL encryption masks the application traffic, for example, over TCP port 443
(HTTPS). More than half of all web traffic is now encrypted.
Malware
Malware (short for “malicious software”) is a file or code that typically takes control of, collects
information from, or damages an infected endpoint. Malware usually has one or more of the
following objectives: to provide remote control for an attacker to use an infected machine, to
send spam from the infected machine to unsuspecting targets, to investigate the infected user’s
local network, and to steal sensitive data.
Advanced/Modern Malware
Advanced or modern malware generally refers to new or unknown malware. These types of
malware are highly sophisticated and often have specialized targets. Advanced malware
typically can bypass traditional defences.
Malware Types
Malware is varied in type and capabilities. Let's review several malware types.
Logic Bombs
A logic bomb is malware that is triggered by a specified condition, such as a given date or a
particular user account being disabled.
Spyware and Adware
Spyware and adware are types of malware that collect information, such as internet surfing
behaviour, login credentials, and financial account information, on an infected endpoint.
Spyware often changes browser and other software settings and slows computer and internet
speeds on an infected endpoint. Adware is spyware that displays annoying advertisements on
an infected endpoint, often as pop-up banners.
Bootkits
A bootkit is malware that is a kernel-mode variant of a rootkit, commonly used to attack
computers that are protected by full-disk encryption.
Rootkits
A rootkit is malware that provides privileged (root-level) access to a computer. Rootkits are
installed below the operating system, as low as the BIOS of a machine, which means operating
system-level security tools cannot easily detect them.
Backdoors
A backdoor is malware that allows an attacker to bypass normal authentication and gain remote
access to a compromised endpoint, often so that additional malware can be installed later.
Anti-AV
Anti-AV is malware that disables legitimately installed antivirus software on the compromised
endpoint, thereby preventing automatic detection and removal of other malware.
Ransomware
Ransomware is malware that locks a computer or device (Locker ransomware) or encrypts data
(Crypto ransomware) on an infected endpoint with an encryption key that only the attacker
knows, thereby making the data unusable until the victim pays a ransom (usually with
cryptocurrency, such as Bitcoin). Reveton and Locker are two examples of Locker
ransomware. Locky, TeslaCrypt, CryptoLocker, and CryptoWall are examples of
Crypto ransomware.
Trojan Horses
A Trojan horse is malware that is disguised as a harmless program but actually gives an attacker
full control of and elevated privileges on an endpoint when installed. Unlike other types of
malware, Trojan horses are typically not self-replicating.
Worms
A worm is malware that typically targets a computer network by replicating itself to spread
rapidly. Unlike viruses, worms do not need to infect other programs and do not need to be
executed by a user or process.
Virus
A virus is malware that is self-replicating but must first infect a host program and be executed
by a user or process.
Modern malware is stealthy and evasive. It plays a central role in a coordinated attack against
a target.
Advanced or modern malware leverages networks to gain power and resilience. Modern
malware can be updated—just like any other software application—so that an attacker can
change course and dig deeper into the network or make changes and enact countermeasures.
Vulnerabilities and exploits can be leveraged to force software to act in ways it’s not intended
to, such as gleaning information about the current security defences in place.
Vulnerability
A vulnerability is a flaw or weakness in software (or its configuration) that can be exploited to
compromise the security of a system.
Exploit
An exploit is code or a technique that takes advantage of a vulnerability to make software behave
in ways it was not intended to.
Patching Vulnerabilities
Security patches are developed by software vendors as quickly as possible after a vulnerability
has been discovered in their software.
1. Discovery
An attacker may learn of a vulnerability and begin exploiting it before the software vendor is
aware of the vulnerability or has an opportunity to develop a patch.
2. Development of Patch
The delay between the discovery of a vulnerability and development and release of a patch is
known as a zero-day threat (or exploit).
It may be months or years before a vulnerability is announced publicly. After a security patch
becomes available, time inevitably is required for organizations to properly test and deploy the
patch on all affected systems. During this time, a system running the vulnerable software is at
risk of being exploited by an attacker.
Exploits can be embedded in seemingly innocuous data files (such as Microsoft Word
documents, PDF files, and webpages), or they can target vulnerable network services. Exploits
are particularly dangerous because they are often packaged in legitimate files that do not trigger
anti-malware (or antivirus) software and are therefore not easily detected.
Creation of an exploit data file is a two-step process. The first step is to embed a small
piece of malicious code within the data file. However, the attacker still must trick the
application into running the malicious code. Thus, the second part of the exploit typically
involves memory corruption techniques that allow the attacker’s code to be inserted into the
execution flow of the vulnerable software.
Vulnerabilities can be exploited from the time software is deployed until it is patched.
Business email compromise (BEC) is one of the most prevalent types of cyberattacks that
organizations face today. The FBI Internet Crime Complaint Center (IC3) estimates that "in
aggregate" BEC attacks cost organizations three times more than any other cybercrime, and
BEC incidents represented nearly a third of the incidents investigated by Palo Alto Networks
Unit 42 Incident Response Team in 2021. According to the Verizon 2021 Data Breach
Investigations Report (DBIR), BEC is the second most common form of social engineering
today.
Spam and phishing emails are the most common delivery methods for malware. The volume
of spam email as a percentage of total global email traffic fluctuates widely from month to
month – typically 45 to 75 percent. Although most end users today are readily able to identify
spam emails and are savvier about not clicking links, opening attachments, or replying to spam
emails, spam remains a popular and effective infection vector for the spread of malware.
Phishing attacks, in contrast to spam, are becoming more sophisticated and difficult to identify.
Phishing Attacks
We often think of spamming and phishing as the same thing, but they are actually separate
processes, and they each require their own mitigations and defences.
Spear Phishing
Spear phishing is a targeted phishing campaign that appears more credible to its victims by
gathering specific information about the target, giving it a higher probability of success.
A spear phishing email may spoof an organization (such as a financial institution) or individual
that the recipient actually knows and does business with. It may also contain very specific
information (such as the recipient’s first name, rather than just an email address).
Spear phishing, and phishing attacks in general, are not always conducted via email. A
link is all that is required, such as a link on Facebook or a message board or a shortened URL
on Twitter. These methods are particularly effective in spear phishing attacks because they
allow the attacker to gather a great deal of information about the targets and then lure them
through dangerous links into a place where the users feel comfortable.
Whaling
Whaling is a type of spear phishing attack that is specifically directed at senior executives or
other high-profile targets within an organization. A whaling email typically purports to be a
legal subpoena, customer complaint, or other serious matter.
Watering Hole
Watering hole attacks compromise websites that are likely to be visited by a targeted victim-
for example, an insurance company website that may be frequently visited by healthcare
providers. The compromised website will typically infect unsuspecting visitors with malware
(known as a “drive-by download”).
Pharming
A pharming attack redirects a legitimate website’s traffic to a fake site, typically by modifying
an endpoint’s local hosts file or by compromising a DNS server (DNS poisoning).
Bots and botnets are notoriously difficult for organizations to detect and defend against using
traditional anti-malware solutions.
Disabling a Botnet
Botnets themselves are dubious sources of income for cybercriminals. Botnets are created by
cybercriminals to harvest computing resources (bots). Control of botnets (through C2 servers)
can then be sold or rented out to other cybercriminals.
The key to “taking down” or “decapitating” a botnet is to separate the bots (infected endpoints)
from their brains (C2 servers). If the bots cannot get to their servers, they cannot get new
instructions, upload stolen data, or do anything that makes botnets so unique and dangerous.
Although this approach may seem straightforward, disabling a botnet presents many
challenges.
Extensive resources are typically required to map the distributed C2 infrastructure of a botnet.
Mapping a botnet's infrastructure almost always requires an enormous amount of investigation,
expertise, and coordination between numerous industry, security, and law enforcement
organizations worldwide.
The following are actions for disabling a botnet. Note: Effectively deterring a botnet infection
may be an ongoing process.
Disabling internet access is a highly recommended first action, along with aggressively
monitoring local network activity to identify the infected devices. The first response to
discovery of infected devices is to remove them from the network, thus severing any
connections to a C2 server and keeping the infection from spreading.
The next response is to ensure that current patches and updates are applied. If infected
endpoints are still persistently attempting to connect to a C2 service or an attack target, then
the endpoints should be imaged and cleansed.
Effective deterrence of a botnet infection may be an ongoing process. Devices may return to a
dormant state and appear to be clean of infection for prolonged periods of time, only to one
day be “awakened” by a signal from a C2 server.
The Internet Service Provider (ISP) community has a commitment to securing internet
backbones and core services known as the Shared Responsibility Model. Adhering to this
model does not ensure that ISPs can fully identify and disable C2 service clusters.
Full termination of C2 architecture can be extremely difficult.
Spamming Botnets
The largest botnets are often dedicated to sending spam. The premise is straightforward: The
attacker attempts to infect as many endpoints as possible, and the endpoints can then be used
to send out spam email messages without the end users’ knowledge.
Productivity
The relative impact of this type of bot on an organization may seem low initially, but an infected
endpoint sending spam could consume additional bandwidth and ultimately reduce the
productivity of the users and even the network itself.
Reputation
Perhaps more consequential is the fact that the organization’s email domain and IP addresses
could easily become listed by various real-time blackhole lists (RBLs), causing legitimate
emails to be labelled as spam and blocked by other organizations and damaging the reputation
of the organization.
The Rustock botnet is an example of a spamming botnet. Rustock could send up to 25,000
spam email messages per hour from an individual bot. At its peak, it sent an average of 192
spam emails per minute per bot. Rustock is estimated to have infected more than 2.4 million
computers worldwide. In March 2011, the U.S. Federal Bureau of Investigation (FBI), working
with Microsoft and others, was able to take down the Rustock botnet. By then, the botnet had
operated for more than five years. At the time, it was responsible for sending up to 60 percent
of the world’s spam.
A DDoS attack is a type of cyberattack in which extremely high volumes of network traffic
such as packets, data, or transactions are sent to the target victim’s network to make their
network and systems (such as an e-commerce website or other web application) unavailable or
unusable.
Use of Bots
A DDoS botnet uses bots as part of a DDoS attack, overwhelming a target server or network
with traffic from a large number of bots. In such attacks, the bots themselves are not the target
of the attack. Instead, the bots are used to flood some other remote target with traffic. The
attacker leverages the massive scale of the botnet to generate traffic that overwhelms the
network and server resources of the target.
Financial Botnets
Financial botnets, such as Zeus and SpyEye, are responsible for the direct theft of funds from
all types of enterprises. These types of botnets are typically not as large as spamming or DDoS
botnets, which grow as large as possible for a single attacker.
With the explosive growth in fixed and mobile devices over the past decade, wireless (Wi-Fi)
networks are growing exponentially—and so is the attack surface for advanced persistent
threats (APTs). This lesson describes Wi-Fi vulnerabilities and attacks and APTs.
Example: Lazarus
Attacks against nation-states and corporations are common, and the group of cybercriminals
that may have done the most damage is Lazarus. The Lazarus group is known as an APT. The
Lazarus group has been known to operate under different names, including Burnproof and
Hidden Cobra. They were initially known for launching numerous attacks against government
and financial institutions in South Korea and Asia. In more recent years, the Lazarus group has
been targeting banks, casinos, financial investment software developers, and crypto-currency
businesses. The malware attributed to this group recently has been found in 18 countries around
the world.
Wi-Fi Challenges
A security professional's first concern may be whether a Wi-Fi network is secure. However,
for the average user, the unfortunate reality is that Wi-Fi connectivity is more about
convenience than security.
Security professionals must secure Wi-Fi networks—but they must also protect the mobile
devices their organization’s employees use to perform work and access potentially sensitive
data, no matter where they are or whose network they’re on.
Public Airwaves
Wi-Fi is conducted over public airwaves. The 2.4GHz and 5GHz frequency ranges that are set
aside for Wi-Fi communications are also shared with other technologies, such as Bluetooth. As
a result, Wi-Fi is extremely vulnerable to congestion and collisions.
Wi-Fi Network
Additional problems exist because Wi-Fi device settings and configurations are well known,
published openly, shared, and even broadcast. To begin securing a WLAN network, you should
disable the Service Set Identifier Broadcast configuration. If the SSID is configured to
broadcast, it is easier for an attacker to define simple attack targets and postures because the
network is already discoverable.
Wireless Security
Wi-Fi and wireless connected devices present additional challenges that might not be
considered with wired networks.
Complications with Wi-Fi and wireless network participation are defined by the devices
themselves. Whenever possible, apply mobile device management to ensure devices are
properly hardened and to limit the types of applications (particularly sharing and social
networking services) that end users can install and use.
Authentication and encryption should ensure that only authorized users and devices are allowed
to connect to the network.
Although wired network boundaries use cabling and segmentation, wireless networks are
conducted over open airwaves. A primary paradigm of wireless networking security is to limit
network availability and discovery, which can be accomplished by not broadcasting the
wireless network presence and availability, or SSID. Placement of wireless access points must
be considered carefully, with the goal of limiting the range of a wireless network.
WLAN networks that do not subscribe to an 802.1x model may still experience authentication
challenges.
Security Protocols
The Wi-Fi Protected Access (WPA) security standard was published as an interim standard in
2003, quickly followed by WPA2 in 2004. WPA and WPA2 contain improvements to protect
against the inherent flaws in the Wired Equivalent Privacy (WEP) standard, including changes
to the encryption.
WEP
The WEP encryption standard is no longer secure enough for Wi-Fi networks. WPA2 and the
emerging WPA3 standards provide strong encryption capabilities and manage secure
authentication via the 802.1x standard.
As mobile device processors have advanced to handle 64-bit computing, AES as a scalable
symmetric encryption algorithm solves the problems of managing secure, encrypted content on
mobile devices.
WPA2
Because requiring users to enter a 64-hexadecimal character key is impractical, WPA2 includes
a function that generates a 256-bit key based on a much shorter passphrase created by the
administrator of the Wi-Fi network and the SSID of the AP used as a salt (random data) for the
one-way hash function.
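To make the key derivation concrete, the following short Python sketch (not part of the original course material) derives a WPA2 pairwise master key with PBKDF2-HMAC-SHA1, using the SSID as the salt and 4,096 iterations; the passphrase and SSID shown are purely illustrative.

    import hashlib

    # WPA2-PSK expands a short passphrase into a 256-bit pairwise master key (PMK)
    # using PBKDF2-HMAC-SHA1 with the SSID as the salt and 4,096 iterations.
    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    pmk = derive_pmk("example-passphrase", "Airport Wi-Fi")   # illustrative values
    print(pmk.hex())  # 64 hexadecimal characters (256 bits)

Because the SSID acts as the salt, two networks that use the same passphrase but different SSIDs still produce different keys.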
WPA3
WPA3 was published in 2018. Its security enhancements include more robust brute force attack
protection, improved hotspot and guest access security, simpler integration with devices that
have limited or no user interface (such as IoT devices), and a 192-bit security suite. Newer Wi-
Fi routers and client devices will likely support both WPA2 and WPA3 to ensure backward
compatibility in mixed environments.
According to the Wi-Fi Alliance, WPA3 features include improved security for IoT devices
such as smart bulbs, wireless appliances, smart speakers, and other screen-free gadgets that
make everyday tasks easier.
Evil Twin
Perhaps the easiest way for an attacker to find a victim to exploit is to set up a wireless access
point that serves as a bridge to a real network. An attacker can inevitably bait a few victims
with “free Wi-Fi access.”
Baiting a victim with free Wi-Fi access requires a potential victim to stumble on the access
point and connect. The attacker can’t easily target a specific victim, because the attack depends
on the victim initiating the connection. Attackers now try to use a specific name that mimics a
real access point.
A variation on this approach is to use a more specific name that mimics a real access point
normally found at a particular location–the Evil Twin. For example, if a local airport provides
Wi-Fi service and calls it “Airport Wi-Fi,” the attacker might create an access point with the
same name using an access point that has two radios.
Average users cannot easily discern when they are connected to a real access point or a fake
one, so this approach would catch a greater number of users than a method that tries to attract
victims at random. Still, the user has to select the network, so a bit of chance is involved in
trying to reach a particular target.
The main limitation of the Evil Twin attack is that the attacker can’t choose the victim. In a
crowded location, the attacker will be able to get a large number of people connecting to the
wireless network to unknowingly expose their account names and passwords. However, it’s
not an effective approach if the goal is to target employees in a specific organization.
Jasager
To understand a more targeted approach than the Evil Twin attack, think about what happens
when you bring your wireless device back to a location that you’ve previously visited.
When you bring your laptop home, you don’t have to choose which access point to use, because
your device remembers the details of wireless networks to which it has previously connected.
The same goes for visiting the office or your favourite coffee shop.
In a Jasager (“yes man”) attack, the attacker’s access point answers these automatic probe
requests, pretending to be whatever remembered network the device is looking for, so the device
connects to the attacker without the user choosing anything.
SSL strip
After a user connects to a Wi-Fi network that’s been compromised–or to an attacker’s Wi-Fi
network masquerading as a legitimate network–the attacker can control the content that the
victim sees. The attacker simply intercepts the victim’s web traffic, redirects the victim’s
browser to a web server that it controls, and serves up whatever content the attacker desires.
SSL strip strips SSL encryption from a “secure” session. When a user connected to a
compromised Wi-Fi network attempts to initiate an SSL session, the modified access point
intercepts the SSL request.
With SSL strip, the modified access point displays a fake padlock in the victim’s web
browser. Webpages can display a small icon called a favicon next to a website address in the
browser’s address bar. SSL strip replaces the favicon with a padlock that looks like SSL to an
unsuspecting user.
Emotet
Emotet is a Trojan, first identified in 2014, that has long been used in spam botnets and
ransomware attacks. Recently, it was discovered that a new Emotet variant is using a Wi-Fi
spreader module to scan Wi-Fi networks looking for vulnerable devices to infect. The Wi-Fi
spreader module scans nearby Wi-Fi networks on an infected device and then attempts to
connect to vulnerable Wi-Fi networks via a brute-force attack. After successfully connecting
to a Wi-Fi network, Emotet then scans for non-hidden shares and attempts another brute-force
attack to guess usernames and passwords on other devices connected to the network. It then
installs its malware payload and establishes C2 communications on newly infected devices.
Wi-Fi Attacks
There are different types of Wi-Fi attacks that hackers use to eavesdrop on wireless network
connections to obtain credentials and spread malware.
Doppelganger
Doppelganger is an insider attack that targets WPA3-Personal protected Wi-Fi networks. The
attacker spoofs the source MAC address of a device that is already connected to the Wi-Fi
network and attempts to associate with the same wireless access point.
Perimeter-based network security models date back to the early mainframe era (circa late
1950s), when large mainframe computers were located in physically secure “machine rooms.”
These rooms could be accessed by a limited number of remote job entry (RJE) terminals
directly connected to the mainframe in physically secure areas.
Today’s data centres are the modern equivalent of machine rooms, but perimeter-based
physical security is no longer sufficient, for several obvious but important reasons.
Mainframe Computers
Mainframe computers predate the internet. In fact, mainframe computers predate ARPANET,
which predates the internet. Today, an attacker uses the internet to remotely gain access, instead
of physically breaching the data centre perimeter.
Processing Power
The primary value of the mainframe computer was its processing power. The relatively limited
data that was produced was typically stored on near-line media, such as tape. Today, data is
the target. Data is stored online in data centres and in the cloud, and it is a high-value target for
any attacker.
Data Centre
Data centres today are remotely accessed by millions of remote endpoint devices from
anywhere and at any time. Unlike the RJEs of the mainframe era, modern endpoints (including
mobile devices) are far more powerful than many of the early mainframe computers and are
themselves targets.
The primary issue with a perimeter-based network security strategy, which deploys
countermeasures at a handful of well-defined entrance and exit points to the network, is that
the strategy relies on the assumption that everything on the internal network can be trusted.
Several modern business conditions and computing environments that perimeter-based
strategies fail to address are described below.
Remote employees, mobile users, and cloud computing solutions blur the distinction between
“internal” and “external.”
Wireless Technologies
Wireless technologies, partner connections, and guest users introduce countless additional
pathways into the network. Branch offices may be located in untrusted countries or regions.
Insiders
Insiders, whether intentionally malicious or just careless, may present a very real security
threat.
Cyberthreats
Sophisticated cyberthreats could penetrate perimeter defences and gain free access to the
internal network.
Stolen Credentials
Malicious users can gain access to the internal network and sensitive resources by using the
stolen credentials of trusted users.
Internal Networks
Internal networks are rarely homogeneous. They include pockets of users and resources with
different levels of trust or sensitivity, and these pockets should ideally be separated (for
example, research and development and financial systems versus print or file servers).
A broken trust model is not the only issue with perimeter-centric approaches to network
security. Another contributing factor is that traditional security devices and technologies (such
as port-based firewalls) commonly used to build network perimeters let too much unwanted
traffic through.
Net Result
The net result is that these devices cannot definitively distinguish good applications from bad
ones, which leads to overly permissive access control settings.
The Zero Trust security model addresses some of the limitations of perimeter-based network
security strategies by removing the assumption of trust from the equation.
With a Zero Trust model, essential security capabilities are deployed in a way that provides
policy enforcement and protection for all users, devices, applications, and data resources, as
well as the communications traffic between them, regardless of location.
No Default Trust
With Zero Trust there is no default trust for any entity – including users, devices, applications,
and packets – regardless of what it is and its location on or relative to the enterprise network.
The need to "always verify" requires ongoing monitoring and inspection of associated
communication traffic for subversive activities (such as threats).
Compartmentalize
Zero Trust models establish trust boundaries that effectively compartmentalize the various
segments of the internal computing environment. The general idea is to move security
functionality closer to the pockets of resources that require protection. In this way, security can
always be enforced regardless of the point of origin of associated communications traffic.
In a Zero Trust model, verification that authorized entities are always doing only what they’re
allowed to do is not optional: It's mandatory. Implementing a Zero Trust network provides
several benefits.
Improved Effectiveness
Clearly improved effectiveness in mitigating data loss with visibility and safe enablement of
applications, plus detection and prevention of cyberthreats
Greater Efficiency
Improved Ability
The principle of least privilege in network security requires that only the permission or access
rights necessary to perform an authorized task are granted.
Security profiles are defined based on an initial security audit performed according to Zero
Trust inspection policies. Discovery is performed to determine which privileges are essential
for a device or user to perform a specific function.
Ensure that all resources are accessed securely, regardless of location. This principle suggests
the need for multiple trust boundaries and increased use of secure access for communication to
or from resources, even when sessions are confined to the “internal” network. It also means
ensuring that the only devices allowed access to the network have the correct status and
settings, have an approved VPN client and proper passcodes, and are not running malware.
Adopt a least privilege strategy and strictly enforce access control. The goal is to minimize
allowed access to resources to reduce the pathways available for malware and attackers to gain
unauthorized access.
This principle reiterates the need to “always verify” while also reinforcing that adequate
protection requires more than just strict enforcement of access control. Close and continuous
attention must also be given to exactly what “allowed” applications are actually doing, and the
only way to accomplish these goals is to inspect the content for threats.
The Zero Trust segmentation platform (also called a network segmentation gateway by
Forrester Research) is the component used to define internal trust boundaries. That is, the
platform provides the majority of the security functionality needed to deliver on the Zero Trust
operational objectives.
Conceptual Architecture
With the protect surface identified, security teams can identify how traffic moves across the
organization in relation to the protect surface. Understanding who the users are, which
applications they are using, and how they are connecting is the only way to determine and
enforce policy that ensures secure access to data.
Fundamental Assertions
Policies must be dynamic and calculated from as many sources of data as possible.
Single Component
Although the Zero Trust segmentation platform is described as a single component in a single
physical location, in practice, because of performance, scalability, and physical limitations, an effective
implementation is more likely to entail multiple instances distributed throughout an
organization’s network. The solution also is called a “platform” to reflect that it is made up of
multiple distinct (and potentially distributed) security technologies that operate as part of a
holistic threat protection framework to reduce the attack surface and correlate information
about discovered threats.
The core of any Zero Trust network security architecture is the Zero Trust Segmentation
Platform, so you must choose the correct solution. Key criteria and capabilities to consider
when selecting a Zero Trust Segmentation Platform include the following:
Secure Access
Consistent secure IPsec and SSL VPN connectivity is provided for all employees, partners,
customers, and guests wherever they’re located (for example, at remote or branch offices, on
the local network, or over the internet). Policies to determine which users and devices can
access sensitive applications and data can be defined based on application, user, content,
device, device state, and other criteria.
Application identification accurately identifies and classifies all traffic, regardless of ports and
protocols, and evasive tactics, such as port hopping or encryption. Application identification
eliminates methods that malware may use to hide from detection and provides complete context
into applications, associated content, and threats.
The combination of application, user, and content identification delivers a positive control
model that allows organizations to control interactions with resources based on an extensive
range of business-relevant attributes, including the specific application and individual
functions being used, user and group identity, and the specific types or pieces of data being
accessed (such as credit card or Social Security numbers). The result is truly granular access
control that safely enables the correct applications for the correct sets of users while
automatically preventing unwanted, unauthorized, and potentially harmful traffic from gaining
access to the network.
Cyberthreat Protection
Virtual and hardware appliances establish consistent and cost-effective trust boundaries
throughout an organization’s network, including in remote or branch offices, for mobile users,
at the internet perimeter, in the cloud, at ingress points throughout the data centre, and for
individual areas wherever they might exist.
Implementation of a Zero Trust network security model doesn’t require a major overhaul of an
organization’s network and security infrastructure.
A Zero Trust design architecture can be implemented with only incremental modifications to
the existing network, and implementation can be completely transparent to users. Advantages
of such a flexible, non-disruptive deployment approach include minimizing the potential
impact on operations and being able to spread the required investment and work effort over time.
The Net
In the 1960s, the U.S. Defense Advanced Research Projects Agency (DARPA) created
ARPANET, the precursor to the modern internet. ARPANET was the first packet-switched
network. A packet-switched network breaks data into small blocks (packets), transmits each
individual packet from node to node toward its destination, and then reassembles the individual
packets in the correct order at the destination.
The ARPANET evolved into the internet (often referred to as the network of networks) because
the internet connects multiple local area networks (LAN) to a worldwide wide area network
(WAN) backbone.
Today billions of devices worldwide are connected to the internet and use the Transmission
Control Protocol/Internet Protocol (TCP/IP) to communicate with each other over packet-
switched networks. Specialized devices and technologies such as routers, routing protocols, SD-
WAN, the domain name system (DNS) and the world wide web (WWW) facilitate
communications between connected devices.
The basic operations of computer networks and the internet include common networking
devices.
Routers
Routers are physical or virtual devices that send data packets to destination networks along a
network path using logical addresses. Routers use various routing protocols to determine the
best path to a destination, based on variables such as bandwidth, cost, delay, and distance. A
wireless router combines the functionality of a router and a wireless access point (AP) to
provide routing between a wired and wireless network.
Access Point
An access point is a network device that connects to a router or wired network and transmits a
Wi-Fi signal so that wireless devices can connect to a wireless (or Wi-Fi) network. A wireless
repeater rebroadcasts the wireless signal from a wireless router or AP to extend the range of a
Wi-Fi network.
Hub
A hub (or concentrator) is a network device that connects multiple devices such as desktop
computers, laptop docking stations, and printers on a LAN. Network traffic that is sent to a hub
is broadcast out of all ports on the hub, which can create network congestion and introduces
potential security risks. Any device connected to a hub can listen and receive unicast and
broadcast traffic from all devices connected to the same hub. Unicast traffic is traffic sent from
one device to another device. Broadcast traffic is traffic sent from one device to all devices.
Switches
A switch is essentially an intelligent hub that uses physical addresses to forward data packets
to devices on a network. Unlike a hub, a switch is designed to forward data packets only to the
port that corresponds to the destination device. This transmission method (referred to as micro-
segmentation) creates separate network segments and effectively increases the data
transmission rates available on the individual network segments. Switches transmit data
between connected devices more securely than hubs because of micro-segmentation. A switch
can also be used to implement virtual LANs (VLANs), which logically segregate a network
and limit broadcast domains and collision domains.
Routed protocols, such as IP, manage packets with routing information that enables those
packets to be transported across networks using routing protocols.
Routing protocols are defined at the Network layer of the OSI model and specify how routers
communicate with one another on a network. Routing protocols can either be static or
dynamic.
Static Routing
A static routing protocol requires that routes be created and updated manually on a router or
other network device. If a static route is down, traffic can’t be automatically rerouted unless an
alternate route has been configured. Also, if the route is congested, traffic can’t be
automatically rerouted over the less congested alternate route. Static routing is practical only
in very small networks or for very limited, special-case routing scenarios (for example, a
destination that’s used as a backup route or is reachable only via a single router).
Dynamic Routing
A dynamic routing protocol can automatically learn new (or alternate) routes and determine
the best route to a destination. The routing table is updated periodically with current routing
information.
Dynamic routing protocols can be classified further as distance vector, link state, and path
vector. A distance-vector protocol makes routing decisions based on two factors: the distance
(hop count or another metric) and vector (the exit router interface). It periodically informs its
peers and/or neighbours of topology changes.
Convergence (the time required for all routers in a network to update their routing tables with
the most current information such as link status changes) can be a significant problem for
distance-vector protocols.
Without Convergence
Without convergence, some routers in a network may be unaware of topology changes, which
causes the router to send traffic to an invalid destination.
During Convergence
During convergence, routing information is exchanged between routers, and the network slows
down considerably. Convergence can take several minutes in networks that use distance-vector
protocols.
Split Horizon
Prevents a router from advertising a route back out through the same interface from which the
route was learned
Triggered Updates
When a change is detected, the update gets sent immediately instead of waiting 30 seconds to
send a RIP update.
Route Poisoning
Sets the hop count on a bad route to 16, which effectively advertises the route as unreachable
Hold-Down Timers
Causes a router to start a timer when the router first receives information that a destination is
unreachable. Subsequent updates about that destination will not be accepted until the timer
expires. This timer also helps avoid problems associated with flapping. Flapping occurs when
a route (or interface) repeatedly changes state (Up, Down, Up, Down) over a short period of
time.
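The loop-prevention features above all operate on the simple hop-count table that a distance-vector router maintains. The following toy Python sketch (an illustration, not any vendor's implementation) shows a router merging a neighbour's advertisement into its table, with a hop count of 16 treated as unreachable, as in route poisoning.

    INFINITY = 16  # in RIP, 16 hops means "unreachable"

    my_table = {"10.1.0.0/16": 2, "10.2.0.0/16": 5}               # prefix -> hop count
    neighbor_advert = {"10.2.0.0/16": 3, "10.3.0.0/16": INFINITY}

    for prefix, neighbor_hops in neighbor_advert.items():
        candidate = min(neighbor_hops + 1, INFINITY)              # one extra hop via the neighbour
        if candidate < my_table.get(prefix, INFINITY):
            my_table[prefix] = candidate                          # better route learned

    print(my_table)  # 10.2.0.0/16 improves to 4 hops; the poisoned 10.3.0.0/16 is not installed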
Link State
A link-state protocol requires every router to calculate and maintain a complete map, or routing
table, of the entire network. Routers that use a link-state protocol periodically transmit updates
that contain information about adjacent connections, or link states, to all other routers in the
network.
Path Vector
A path-vector protocol is similar to a distance-vector protocol but without the scalability issues
associated with limited hop counts in distance-vector protocols. Each routing table entry in a
path-vector protocol contains path information that gets dynamically updated.
BGP
Border Gateway Protocol (BGP) is an example of a path-vector protocol used between separate
autonomous systems.
Providers
BGP is the core protocol used by internet service providers (ISPs) and network service
providers (NSPs), as well as on very large private IP networks.
LANs
A LAN is a computer network that connects end-user devices such as laptop and desktop
computers, servers, printers, and other devices so that applications, databases, files, file storage,
and other networked resources can be shared among authorized users on the LAN. A LAN can
be wired, wireless, or a combination of wired and wireless. Examples of networking equipment
commonly used in LANs include bridges, hubs, repeaters, switches, and wireless APs. Two
basic network topologies (with many variations) commonly used in LANs are the star topology
and the mesh topology. Other once-popular network topologies such as ring and bus are rarely
found in modern networks.
Star
Each node on the network is directly connected to a switch, hub, or concentrator, and all data
communications must pass through the switch, hub, or concentrator. The switch, hub, or
concentrator can thus become a performance bottleneck or single point of failure in the
network. A star topology is ideal for practically any size environment and is the most
commonly used basic LAN topology.
Mesh
All nodes are interconnected to provide multiple paths to all other resources. A mesh topology
may be used throughout the network or only for the most critical network components such as
routers, switches, and servers to eliminate performance bottlenecks and single points of failure.
WANs
A WAN is a computer network that connects multiple LANs or other WANs across a relatively
large geographic area such as a small city, a region or country, a global enterprise network, or
the entire planet (as is the case for the internet).
Examples of networking equipment commonly used in WANs include access servers, firewalls,
modems, routers, virtual private network (VPN) gateways, and WAN switches.
Traditional WANs rely on physical routers to connect remote or branch users to applications
hosted on data centres. Each router has a data plane, which holds the information, and a control
plane, which tells the data where to go. Where data flows is typically determined by a network
engineer or administrator who writes rules and policies, often manually, for each router on the
network. This process can be time-consuming and prone to error.
SD-WAN
A software-defined WAN (SD-WAN) separates the control and management processes from
the underlying networking hardware, making them available as software that can be easily
configured and deployed. A centralized control plane means network administrators can write
new rules and policies, and then configure and deploy them across an entire network at once.
SD-WAN Benefits
SD-WAN makes management and direction of traffic across a network easier. SD-WAN offers
many benefits to geographically distributed organizations.
Reduced Costs
Because each device is centrally managed, with routing based on application policies, WAN
managers can create and update security rules in real time as network requirements change.
The combination of SD-WAN with zero-touch provisioning, which is a feature that helps
automate the deployment and configuration processes, also helps organizations further reduce
the complexity, resources, and operating expenses required to turn up new sites.
In addition to LANs and WANs, many other types of area networks are used for different
purposes.
CANs and WCANs connect multiple buildings in a high-speed network (for example, across a
corporate or university campus).
MANs and WMANs extend networks across a relatively large area, such as a city.
PANs and WPANs connect an individual’s electronic devices such as laptop computers,
smartphones, tablets, virtual personal assistants (for example, Amazon Alexa, Apple Siri,
Google Assistant, and Microsoft Cortana), and wearable technology to each other or to a larger
network.
VANs are a type of extranet that allows businesses within an industry to share information or
integrate shared business processes.
VLANs segment broadcast domains in a LAN, typically into logical groups (such as business
departments). VLANs are created on network switches.
WLANs, also known as Wi-Fi networks, use wireless APs to connect wireless-enabled devices
to a wired LAN. Wireless wide-area networks (WWANs) extend wireless network coverage
over a large area, such as a region or country, typically using mobile cellular technology.
SANs connect servers to a separate physical storage device (typically a disk array).
Domain Name System (DNS) is a protocol that translates (resolves) a user-friendly domain
name to an IP address so that users can access computers, websites, services, or other resources
on the internet or private networks.
The Domain Name System (DNS) is a distributed, hierarchical internet database that maps
fully qualified domain names (FQDNs) for computers, services, and other resources such as a website address (also known as
a URL) to IP addresses, similar to how a contact list on a smartphone maps the names of
businesses and individuals to phone numbers. A root name server is the authoritative name
server for a DNS root zone.
The following is more information about DNS and root name servers.
To create a new domain name that will be accessible via the internet, you must register your
unique domain name with a domain name registrar, such as GoDaddy or Network Solutions.
This registration is similar to listing a new phone number in a phone directory. DNS is critical
to the operation of the internet.
Thirteen root name servers (actually, 13 networks comprising hundreds of root name servers)
are configured worldwide. They are named a.root-servers.net through m.root-servers.net. DNS
servers are typically configured with a root hints file that contains the names and IP addresses
of the root servers.
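As a quick illustration of DNS resolution in practice, the following Python sketch (not part of the original material) asks the operating system's configured resolvers to translate a hostname into IP addresses; www.example.com is used purely as a placeholder, and the addresses returned will vary.

    import socket

    # Forward DNS lookup: resolve an FQDN to its IPv4/IPv6 addresses via the
    # system's configured DNS servers.
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443,
                                                        proto=socket.IPPROTO_TCP):
        print(family.name, sockaddr[0])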
Cellular IoT Connectivity Options
2G/2.5G: Due to the low cost of 2G modules, relatively long battery life, and large installed
base of 2G sensors and M2M applications, 2G connectivity remains a prevalent and viable IoT
connectivity option.
3G: IoT devices with 3G modules use either Wideband Code Division Multiple Access (W-
CDMA) or Evolved High Speed Packet Access (HSPA+ and Advanced HSPA+) to achieve
data transfer rates of between 384Kbps and 168Mbps.
4G/Long-Term Evolution (LTE): 4G/LTE networks enable real-time IoT use cases, such as
autonomous vehicles, with 4G LTE Advanced Pro delivering speeds in excess of 3Gbps and
less than 2 milliseconds of latency.
According to research conducted by the Palo Alto Networks Unit 42 threat intelligence team,
the general security posture of IoT devices is declining, leaving organizations vulnerable to
new IoT-targeted malware and older attack techniques that IT teams have long forgotten.
Palo Alto Networks IoT security enables security teams to rapidly identify and protect all
unmanaged IoT devices with a machine learning-based, signature-less approach. Palo Alto
Networks created the industry’s first turnkey IoT security offering, delivering visibility,
prevention, risk assessment, and enforcement in combination with our ML-powered next-
generation firewall. There is no need to deploy any new network infrastructure or change
existing operational processes.
Ninety-eight percent of all IoT device traffic is unencrypted, exposing personal and
confidential data on the network. Attackers who’ve successfully bypassed the first line of
defence (most frequently via phishing attacks) and established C2 can listen to unencrypted
network traffic, collect personal or confidential information, and then exploit that data for profit
on the dark web. Fifty-seven percent of IoT devices are vulnerable to medium- or high-severity
attacks, making IoT the low-hanging fruit for attackers. Because of the generally low patch
level of IoT assets, the most frequent attacks are exploits via long-known vulnerabilities and
password attacks using default device passwords.
Eighty-three percent of medical imaging devices run on unsupported operating systems, which
is a 56 percent jump from 2018, as a result of the Windows 7 operating system reaching its end
of life. This general decline in security posture opens the door for new attacks, such as
cryptojacking (which increased from 0 percent in 2017 to 5 percent in 2019), and brings back long-
forgotten attacks such as Conficker, which IT environments had previously been immune to for
a long time. The IoMT devices with the most security issues are imaging systems, which
represent a critical part of the clinical workflow. For healthcare organizations, 51 percent of
threats involve imaging devices, disrupting the quality of care and allowing attackers to
exfiltrate patient data stored on these devices.
Seventy-two percent of healthcare VLANs mix IoT and IT assets, allowing malware to spread
from users’ computers to vulnerable IoT devices on the same network. There is a 41 percent
rate of attacks exploiting device vulnerabilities, as IT-borne attacks scan through network-
connected devices in an attempt to exploit known weaknesses. We’re seeing a shift from IoT
botnets conducting denial-of-service attacks to more sophisticated attacks targeting patient
identities, corporate data, and monetary profit via ransomware.
There is an evolution of threats targeting IoT devices using new techniques, such as peer-to-
peer C2 communications and wormlike features for self-propagation. Attackers recognize the
vulnerability of decades-old legacy operational technology (OT) protocols, such as Digital
Imaging and Communications in Medicine (DICOM), and can disrupt critical business
functions in the organization.
The TCP stack places the block of data into an output buffer on the server and determines the
maximum segment size of individual TCP blocks permitted by the server operating system.
The TCP stack then divides the data blocks into appropriately sized segments, adds a TCP
header, and sends the segment to the IP stack on the server.
The IP stack adds source and destination IP addresses to the TCP segment and notifies the
server operating system that it has an outgoing message that is ready to be sent across the
network. When the server operating system is ready, the IP packet is sent to the network
adapter, which converts the IP packet to bits and sends the message across the network.
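The send path described above is handled almost entirely by the operating system. A minimal Python sketch (the peer address and port are hypothetical placeholders) shows that the application only hands a block of data to the socket; the TCP stack segments it and the IP stack addresses and transmits the packets.

    import socket

    # The application writes one large block; the OS TCP/IP stack splits it into
    # MSS-sized segments, adds TCP and IP headers, and hands frames to the NIC.
    with socket.create_connection(("198.51.100.10", 8080)) as s:  # placeholder peer
        s.sendall(b"A" * 100_000)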
Numbering Systems
You must understand how network systems are addressed before following the path data takes
across internetworks. Physical, logical, and virtual addressing in computer networks require a
basic understanding of decimal (base 10), hexadecimal (base 16), and binary (base
2) numbering.
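For a quick feel for these numbering systems, the short Python snippet below prints the decimal, binary, and hexadecimal forms of the four octets of the private IPv4 address 192.168.1.1 (chosen only as an example).

    # The same octet values shown in decimal (base 10), binary (base 2), and hexadecimal (base 16).
    for octet in (192, 168, 1, 1):
        print(f"decimal {octet:3d}   binary {octet:08b}   hex {octet:02x}")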
Leading zeros in an individual hextet can be omitted, but each hextet must have at least one
hexadecimal digit, except as noted in the next rule. Application of this rule to IPv6 address:
2001:0db8:0000:0000:0008:0800:200c:417a yields this result: 2001:db8:0:0:8:800:200c:417a.
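Python's standard ipaddress module applies this leading-zero rule (together with the :: zero-compression rule of IPv6 notation) automatically, as the short sketch below shows.

    import ipaddress

    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0008:0800:200c:417a")
    print(addr.compressed)  # 2001:db8::8:800:200c:417a  (leading zeros omitted, :: applied)
    print(addr.exploded)    # 2001:0db8:0000:0000:0008:0800:200c:417a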
Introduction to Subnetting
Subnetting is a technique used to divide a large network into smaller, multiple subnetworks by
segmenting an IP address into two parts: the network portion of the address and the host portion
of the address.
Network Classes
Subnetting can be used to limit network traffic or limit the number of devices that are visible
to, or can connect to, each other.
Routers examine IP addresses and subnet values (called masks) to determine the best
forwarding path for packets across the network. The subnet mask is a required element in IPv4.
Class A and Class B IPv4 addresses use smaller mask values and support larger numbers of
nodes than Class C IPv4 addresses for their default address assignments. Class A networks use
a default 8-bit (255.0.0.0) subnet mask, which provides a total of more than 16 million (2^24 -
2) available IPv4 node addresses. Class B networks use a default 16-bit (255.255.0.0) subnet
mask, which provides more than 65 thousand (2^16-2) available IPv4 node addresses. You
need to reserve 2 addresses, one for the network address and one for the broadcast address, so
that is the reason why you need to subtract 2 addresses from the total number of node addresses
available for these classful networking examples.
Class C Subnets
For a Class C IPv4 address, there are 254 possible node (or host) addresses (2^8, or 256, potential
addresses, but you lose two addresses for each network: one for the base network address and
the other for the broadcast address). A typical Class C network uses a default 24-bit subnet
mask (255.255.255.0). This subnet mask value identifies the network portion of an IPv4
address, with the first three octets being all ones (11111111 in binary notation, 255 in decimal
notation). The mask displays the last octet as zero (00000000 in binary notation). For a Class
C IPv4 address with the default subnet mask, the last octet is where the node-specific values of
the IPv4 address are assigned.
For example, in a network with an IPv4 address of 192.168.1.0 and a mask value of
255.255.255.0, the network portion of the address is 192.168.1, and the node portion of the
address or the last 8 bits provide 254 available node addresses (2^8 - 2). Just as in the Class A
and B examples, you need to reserve 2 addresses, one for the network address and one for the
broadcast address, so that is the reason why you need to subtract 2 addresses from the total
number of node addresses available.
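The host-count arithmetic for the default Class A, B, and C masks can be checked with Python's ipaddress module; the prefixes below are common example networks rather than values from the course.

    import ipaddress

    # Total addresses per default classful mask, minus the network and broadcast addresses.
    for prefix in ("10.0.0.0/8", "172.16.0.0/16", "192.168.1.0/24"):
        net = ipaddress.ip_network(prefix)
        print(prefix, net.num_addresses - 2)   # 16777214, 65534, 254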
CIDR
Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and IP routing
that replaces classful IP addressing (for example, Class A, B, and C networks) with classless
IP addressing.
Unlike subnetting, which divides an IPv4 address along an arbitrary (default) classful 8-bit
boundary (8 bits for a Class A network, 16 bits for a Class B network, 24 bits for a Class C
network), CIDR allocates address space on any address bit boundary (known as variable-length
subnet masking, or VLSM).
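As an illustration of VLSM, the sketch below (again using Python's ipaddress module, with arbitrary example prefixes) splits a /24 block into four /26 subnets instead of treating it as a single classful network.

    import ipaddress

    block = ipaddress.ip_network("192.168.1.0/24")

    # Split the /24 into /26 subnets; each /26 provides 62 usable host addresses.
    for subnet in block.subnets(new_prefix=26):
        print(subnet, subnet.num_addresses - 2)
    # 192.168.1.0/26 62
    # 192.168.1.64/26 62
    # 192.168.1.128/26 62
    # 192.168.1.192/26 62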
Supernetting
CIDR is used to reduce the size of routing tables on internet routers by aggregating multiple
contiguous network prefixes (known as supernetting), and it also helps slow the depletion of
public IPv4 addresses.
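Aggregation in the other direction can be sketched the same way: the example below (with hypothetical contiguous prefixes) collapses four /24 networks into the single /22 supernet a router could advertise in their place.

    import ipaddress

    # Four contiguous /24 prefixes (hypothetical private networks).
    prefixes = [
        ipaddress.ip_network("10.1.0.0/24"),
        ipaddress.ip_network("10.1.1.0/24"),
        ipaddress.ip_network("10.1.2.0/24"),
        ipaddress.ip_network("10.1.3.0/24"),
    ]

    # collapse_addresses aggregates contiguous prefixes into the shortest
    # covering prefix -- the supernet a router would advertise instead.
    print(list(ipaddress.collapse_addresses(prefixes)))  # [IPv4Network('10.1.0.0/22')]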
OSI and TCP/IP Models
The Open Systems Interconnection (OSI) and Transmission Control Protocol/Internet Protocol
(TCP/IP) models define standard protocols for network communication and interoperability.
Layered Approach
The OSI and TCP/IP models use a layered approach to provide more clarity and efficiency in
different areas.
The OSI model is defined by the International Organization for Standardization and consists
of seven layers. This model is a theoretical model used to logically describe networking
processes.
The TCP/IP model was originally developed by the U.S. Department of Defense (DoD) and
actually preceded the OSI model. This model defines actual networking requirements, for
example, for frame construction.
The TCP/IP model consists of four layers:
Application Layer
This layer consists of network applications and processes, and it loosely corresponds to Layers
5 through 7 of the OSI model.
Transport Layer
This layer provides end-to-end delivery, and it corresponds to Layer 4 of the OSI model.
Internet Layer
This layer defines the IP datagram and routing, and it corresponds to Layer 3 of the OSI model.
Network Access Layer
This layer also is referred to as the Link layer. It contains routines for accessing physical
networks, and it corresponds to Layers 1 and 2 of the OSI model.
Packet Lifecycle
We will discuss two components of the packet lifecycle: a circuit-switched network and a
packet-switched network.
The following describes the differences between circuit switching and packet switching.
Circuit Switching
Legacy Firewalls
Firewalls have been central to network security since the early days of the internet. A firewall
is a hardware platform, a software platform, or both, that controls the flow of traffic between a
trusted network (such as a corporate LAN) and an untrusted network (such as the internet).
First-generation packet filtering (also known as port-based) firewalls have the following
characteristics:
Operation
Packet filtering firewalls operate up to Layer 4 (Transport layer) of the OSI model and inspect
individual packet headers to determine source and destination IP address, protocol (TCP, UDP,
ICMP), and port number.
Match
Packet filtering firewalls match source and destination IP address, protocol, and port number
information contained within each packet header to a corresponding rule on the firewall that
designates whether the packet should be allowed, blocked, or dropped.
Inspection
Packet filtering firewalls inspect and handle each packet individually, with no information
about context or session.
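The rule-matching behaviour described above can be sketched in a few lines of Python. The rule format, field names, and addresses below are invented for illustration and do not represent any particular firewall's policy language:

    # Each rule matches on source/destination IP, protocol, and destination port;
    # "*" acts as a wildcard. The first matching rule decides the action.
    RULES = [
        {"src": "*", "dst": "203.0.113.10", "proto": "tcp", "dport": 443, "action": "allow"},
        {"src": "*", "dst": "*",            "proto": "*",   "dport": "*", "action": "drop"},
    ]

    def filter_packet(src, dst, proto, dport):
        for rule in RULES:
            if all(rule[k] in ("*", v) for k, v in
                   (("src", src), ("dst", dst), ("proto", proto), ("dport", dport))):
                return rule["action"]
        return "drop"  # default-deny if no rule matches

    print(filter_packet("198.51.100.7", "203.0.113.10", "tcp", 443))  # allow
    print(filter_packet("198.51.100.7", "203.0.113.10", "tcp", 23))   # drop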
Stateful Packet Inspection Firewalls
Stateful packet inspection firewalls operate up to Layer 4 (Transport layer) of the OSI model
and maintain state information about the communication sessions that have been established
between hosts on two different networks. These firewalls inspect individual packet headers to
determine source and destination IP address, protocol (TCP, UDP, and ICMP), and port number
(during session establishment only). The firewalls compare header information to firewall rules
to determine if each session should be allowed, blocked, or dropped. After a permitted
connection is established between two hosts, the firewall allows traffic to flow between the two
hosts without further inspection of individual packets during the session.
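Conceptually, the firewall keeps a state table keyed by each session's addressing information; the rulebase is consulted only for new sessions, and packets belonging to an established session are allowed without another rule lookup. A rough, illustrative sketch (the function and field names are invented for the example):

    # Established sessions, keyed by (src, dst, proto, dport).
    sessions = set()

    def stateful_inspect(src, dst, proto, dport, rule_allows):
        key = (src, dst, proto, dport)
        if key in sessions:
            return "allow"        # established session: no further rule lookup
        if rule_allows:           # rule_allows stands in for a rulebase lookup
            sessions.add(key)     # record the new session in the state table
            return "allow"
        return "drop"

    print(stateful_inspect("10.0.0.5", "203.0.113.10", "tcp", 443, rule_allows=True))   # allow (new session)
    print(stateful_inspect("10.0.0.5", "203.0.113.10", "tcp", 443, rule_allows=False))  # allow (already established)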
Application Firewalls
Application firewalls inspect application-layer traffic, so they can identify and block specified
content, malware, exploits, websites, and applications or services that use hiding techniques
such as encryption and non-standard ports. Proxy servers can also be used to implement strong
user authentication and web application filtering and to mask the internal network from
untrusted networks. However, proxy servers have a significant negative impact on the overall
performance of the network.
Intrusion detection systems (IDSs) and intrusion prevention systems (IPSs) provide real-time
monitoring of network traffic and perform deep-packet inspection and analysis of network
activity and data.
Classifications
IDSs and IPSs are broadly classified as knowledge-based (signature-based) systems or
behaviour-based (anomaly-based) systems.
Unlike traditional packet filtering firewalls and stateful packet inspection firewalls, which
examine only packet header information, IDSs and IPSs examine both the packet header and
the payload of network traffic. IDSs and IPSs attempt to match known-bad, or malicious,
patterns (or signatures) in inspected packets. IDSs and IPSs are typically deployed to detect
and block exploits of software vulnerabilities on target networks.
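Signature matching against packet payloads can be illustrated with a trivial sketch. Real IDS/IPS engines use far richer rule languages and detection logic, so the signature format below is purely illustrative:

    # Toy signatures: a name and a byte pattern to look for in the payload.
    SIGNATURES = [
        ("suspicious-shell-command", b"/bin/sh"),
        ("directory-traversal",      b"../../"),
    ]

    def inspect_payload(payload: bytes):
        # Return the names of all signatures whose pattern appears in the payload.
        return [name for name, pattern in SIGNATURES if pattern in payload]

    print(inspect_payload(b"GET /../../etc/passwd HTTP/1.1"))  # ['directory-traversal']
    print(inspect_payload(b"GET /index.html HTTP/1.1"))        # []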
Web content filters restrict the internet activity of users on a network. Web content filters match
a web address (URL) against a database of websites, which is typically maintained by the
individual security vendor that sells the web content filters and is provided as a subscription-
based service.
Web content filters classify websites into broad categories. These categories are then used to
control user access to websites.
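Conceptually, a web content filter looks up the category of each requested URL and applies a per-category policy, roughly as in the sketch below (the categories, domains, and policy are all invented for illustration):

    from urllib.parse import urlparse

    # Vendor-maintained URL database, reduced here to a tiny hard-coded mapping.
    URL_CATEGORIES = {
        "news.example.com":     "news",
        "casino.example.net":   "gambling",
        "intranet.example.org": "business",
    }

    # Per-category policy set by the organization.
    POLICY = {"news": "allow", "gambling": "block", "business": "allow"}

    def check_url(url: str) -> str:
        host = urlparse(url).hostname
        category = URL_CATEGORIES.get(host, "unknown")
        return POLICY.get(category, "block")  # unknown categories blocked by default

    print(check_url("https://news.example.com/today"))    # allow
    print(check_url("https://casino.example.net/poker"))  # block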
VPN client software is typically installed on mobile endpoints, such as laptop computers and
smartphones, to extend a network beyond the physical boundaries of the organization.
The VPN client connects to a VPN server, such as a firewall, router, or VPN appliance (or
concentrator). After a VPN tunnel is established, a remote user can access network resources,
such as file servers, printers, and Voice over IP (VoIP) phones, as if they were physically in
the office.
OpenVPN
OpenVPN is a highly secure, open-source VPN implementation that uses SSL/TLS encryption
for key exchange. OpenVPN uses up to 256-bit encryption and can run over TCP or UDP.
Although OpenVPN is not natively supported by most major operating systems, it has been
ported to most of them, including mobile device operating systems.
MPPE
Microsoft Point-to-Point Encryption (MPPE) encrypts data in PPP-based dial-up connections
and PPTP VPN connections. MPPE
uses the RSA RC4 encryption algorithm to provide data confidentiality and supports 40-bit and
128-bit session keys.
PPTP
Point-to-Point Tunneling Protocol (PPTP) is a basic VPN protocol that uses TCP port 1723 to establish communication with the
VPN peer. PPTP then creates a Generic Routing Encapsulation (GRE) tunnel that transports
encapsulated Point-to-Point Protocol (PPP) packets between the VPN peers.
Easy Setup
PPTP is easy to set up and fast. However, PPTP is perhaps the least secure VPN protocol, so it
is now seldom used.
SSL VPNs
Deployment
An agentless SSL VPN requires only that users launch a web browser, use HTTPS to open a
VPN portal or webpage, and log in to the network with their user credentials.
An agent-based SSL VPN connection creates a secure tunnel between an SSL VPN client
installed on a host computer/laptop and a VPN concentrator device in an organization's
network. Agent-based SSL VPNs are often used to securely connect remote users to an
organization's network.
Use Case
SSL VPN technology is the standard method of connecting remote endpoint devices back to
the enterprise network. IPsec is most commonly used in site-to-site or device-to-device VPN
connections, such as connecting a branch office network to a headquarters network or data
centre.
Network data loss prevention (DLP) solutions inspect data that is leaving, or egressing, a
network, such as data that is sent via email or file transfer. DLP prevents sensitive data
(based on defined policies) from leaving the network.
A DLP security solution prevents sensitive data from being transmitted outside the network by
a user, either inadvertently or maliciously. Sensitive data typically includes:
Personally identifiable information (PII) such as names, addresses, birthdates, Social Security
numbers, health records (including electronic medical records, or EMRs, and electronic health
records, or EHRs), and financial data (such as bank account numbers and credit card numbers)
Intellectual property, trade secrets, and other confidential or proprietary company information
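Much of this detection is pattern-based. The sketch below illustrates the idea with simple regular-expression checks for Social Security number and payment-card-like patterns in outbound text; production DLP engines add validation, context, and fingerprinting, so this is only a toy example:

    import re

    # Illustrative data patterns: SSN-like and 16-digit card-like strings.
    PATTERNS = {
        "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card_number": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
    }

    def scan_outbound(text: str):
        # Return the names of the sensitive-data patterns found in the text.
        return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

    print(scan_outbound("Customer SSN is 123-45-6789"))           # ['ssn']
    print(scan_outbound("Card: 4111 1111 1111 1111, exp 12/29"))  # ['card_number']
    print(scan_outbound("Meeting notes attached"))                # []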
Unified threat management (UTM) combines multiple cybersecurity functions into one
appliance. The UTM appliance executes these cybersecurity functions sequentially to examine
traffic, which adds latency to network traffic.
Security Functions
Many organizations have replaced UTM appliances with next-generation firewalls (NGFWs)
to reduce traffic inspection latency. The Palo Alto Networks next-generation firewall uses a
single pass parallel processing architecture to quickly inspect all traffic crossing the firewall's
data plane.
UTM devices combine numerous security functions into a single appliance, including anti-
malware, anti-spam, content filtering, DLP, firewall (stateful inspection), IDS/IPS, and VPN.
Data Encapsulation
Data encapsulation (or data hiding) wraps protocol information from the layer immediately
above (in the OSI or TCP/IP model) in the data section of the layer below.
In the OSI model and TCP/IP protocol, data is passed from the highest layer (Layer 7 in the
OSI model, Layer 4 in the TCP/IP model) downward through each layer to the lowest layer
(Layer 1 in the OSI model and the TCP/IP model). It is then transmitted across the network
medium to the destination node, where it is passed upward from the lowest layer to the highest
layer. Each layer communicates only with the adjacent layer immediately above and below it.
This communication is achieved through the data encapsulation (data hiding) process described
above.
A protocol data unit (PDU) describes a unit of data at a particular layer of a protocol. For
example, in the OSI model, a Layer 1 PDU is known as a bit, a Layer 2 PDU is known as a
frame, a Layer 3 PDU is known as a packet, and a Layer 4 PDU is known as a segment or
datagram.
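The nesting of PDUs can be pictured with a toy sketch in which each layer prepends a mock header to the data handed down from the layer above. Real headers are binary structures defined by each protocol, so the string "headers" below are purely illustrative:

    # Toy encapsulation: each layer prepends its own header to the PDU it
    # receives from the layer above (and the frame also gets a trailer).
    app_data = "GET /index.html"                        # Layer 7 data
    segment  = "[TCP hdr]" + app_data                   # Layer 4 PDU: segment
    packet   = "[IP hdr]"  + segment                    # Layer 3 PDU: packet
    frame    = "[Eth hdr]" + packet + "[Eth trailer]"   # Layer 2 PDU: frame

    print(frame)
    # [Eth hdr][IP hdr][TCP hdr]GET /index.html[Eth trailer]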