CISSP
LAST MINUTE STUDY GUIDE
DOMAIN 3
SECURITY ARCHITECTURE
AND ENGINEERING
AUGUST 2025
CISSP DOMAIN 3: SECURITY
ARCHITECTURE AND ENGINEERING
Overview
Domain 3 focuses on the foundational concepts, secure system architecture,
engineering processes, and cryptographic systems necessary to design and implement
secure systems. It emphasizes principles of secure design, trusted computing,
cryptographic protections, and physical safeguards that underpin enterprise security.
Security architecture is the structured framework used to design and implement
security measures in systems and networks. It ensures confidentiality, integrity, and
availability (CIA) of information across IT environments. The principles guide how
components such as hardware, software, and networks are securely integrated.
Section 1: Security Models and Concepts
Introduction to Security Models
Security models provide structured methods for specifying and enforcing security
policies. They define the rules, assumptions, and mathematical logic to ensure
confidentiality, integrity, and/or availability in systems.
These models are the blueprints for designing secure operating systems, access control
mechanisms, and secure architecture frameworks.
Fundamental Concepts
1. Confidentiality, Integrity, and Availability (CIA Triad)
o Confidentiality ensures that information is only accessible to those with
proper authorization.
o Integrity ensures that data remains accurate, consistent, and unaltered
except by authorized individuals.
o Availability ensures that systems and data are accessible to authorized
users when needed.
2. Security Governance
Governance refers to the high-level management framework that defines
security objectives, roles, and responsibilities. It includes defining organizational
structure, accountability, and oversight for security functions.
3. Security Boundaries
Security boundaries define the scope of the system, including its external
interfaces and internal subsystems. Boundary protections help reduce the
attack surface and clearly delineate zones of trust.
4. System Lifecycle
o Security must be incorporated at every stage of the System
Development Life Cycle (SDLC): planning, design, implementation,
testing, deployment, and maintenance.
o Failing to address security early leads to vulnerabilities that are more
costly to fix later.
1.2 Secure Design Principles
Security design principles are foundational practices for building and maintaining
secure systems:
• Least Privilege: Users should be given only the minimum permissions necessary
to perform their duties.
• Need to Know: Access should only be granted to data necessary for job
functions.
• Defense in Depth: Use multiple layers of defense (technical, administrative,
physical).
• Fail-Safe Defaults: Deny access by default; permit access only when explicitly
allowed.
• Separation of Duties: Divide critical tasks among multiple individuals to prevent
abuse or fraud.
• Economy of Mechanism: Keep designs simple and small to reduce risk of errors
or misconfigurations.
• Complete Mediation: Every access request must be verified.
• Open Design: The system should not rely on secrecy of its design for security
(Kerckhoffs's principle).
• Least Common Mechanism: Avoid shared mechanisms across users or
processes that might become shared points of failure.
Key Design Principles in Secure Systems
• Least Privilege: Grant users the minimum access necessary to perform their tasks.
• Defense in Depth: Apply multiple layers of controls to reduce overall risk.
• Fail-Safe Defaults: Deny access by default; allow only when explicitly authorized.
• Economy of Mechanism: Keep the design as simple and minimal as possible.
• Complete Mediation: Every access to every object must be checked against the access control policy.
• Separation of Duties: Ensure that no single individual has control over all critical functions.
• Least Common Mechanism: Reduce sharing of mechanisms across users and processes.
• Open Design: Security should not depend on secrecy of the design (the opposite of security through obscurity).
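To make fail-safe defaults and least privilege concrete, here is a minimal Python sketch (the role names and permission strings are hypothetical) in which any request not explicitly granted is denied:

```python
# Minimal sketch of fail-safe defaults and least privilege.
# Role names and permission strings are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr_clerk": {"read:payroll"},                       # only what the job requires
    "payroll_admin": {"read:payroll", "write:payroll"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Fail-safe default: anything not explicitly granted is denied."""
    return permission in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("hr_clerk", "read:payroll"))      # True  - explicitly granted
    print(is_allowed("hr_clerk", "write:payroll"))     # False - not granted
    print(is_allowed("unknown_role", "read:payroll"))  # False - deny by default
```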
Threat Modeling and Attack Surface
• Threat Modeling involves identifying potential threats and vulnerabilities during
system design.
Common frameworks include:
o STRIDE: Spoofing, Tampering, Repudiation, Information Disclosure,
Denial of Service, Elevation of Privilege.
o PASTA, OCTAVE, and Trike are other popular methodologies.
• Attack Surface: The total set of points in a system that are exposed to potential
attackers. Reducing the attack surface means minimizing the number of entry
points and exposure of system components.
Comparison of Common Security Models
• Bell-LaPadula – Focus: Confidentiality. Rules: Simple Security Rule (no read up to a higher classification), Star Property (no write down). Use case: Military systems, classified data.
• Biba – Focus: Integrity. Rules: Simple Integrity (no read down), Star Integrity (no write up). Use case: Financial data, accounting systems.
• Clark-Wilson – Focus: Integrity in business. Rules: Ensures well-formed transactions, enforces separation of duties. Use case: Commercial environments.
• Brewer-Nash – Focus: Conflict of interest. Rules: Access controls change dynamically based on prior access. Use case: Consulting firms, law firms.
• Graham-Denning – Focus: Access control. Rules: Defines 8 primitive operations for managing access rights. Use case: Secure OS and database design.
• HRU Model – Focus: Access rights safety. Rules: Based on the access matrix and subject-object relationships. Use case: Formal analysis of system safety.
Section 2: Secure System Architecture Concepts (Expanded)
2.1 Trusted Computing Base (TCB)
The Trusted Computing Base is the foundation of a system's security. It includes all
components—both hardware and software—that are critical to enforcing the system’s
security policy.
• Why is TCB important?
If any part of the TCB fails or is compromised, the system can no longer be
considered secure. For this reason, minimizing the TCB (in terms of size and
complexity) increases assurance and makes it easier to validate.
• Components of TCB may include:
o Security kernel (OS-level mechanisms)
o BIOS/UEFI (pre-boot firmware)
o Hypervisors (in virtualized systems)
o Hardware-based roots of trust (e.g., TPM)
o Access control modules
• TCB Characteristics:
o Defined boundary
o Can be verified
o Must not rely on untrusted components
2.2 Security Perimeter
A security perimeter defines the logical or physical boundary separating trusted from
untrusted elements. It's the barrier between the TCB and external components.
• Examples:
o In an operating system, the kernel boundary is the perimeter.
o In a network, firewalls and gateways create perimeters.
o In cloud, it's often defined via virtual network segregation.
Perimeters must have controlled interfaces—no backdoors, no undocumented access
paths.
2.3 Reference Monitor
The Reference Monitor is a concept in secure system design introduced by James
Anderson in the 1970s. It acts as a gatekeeper that mediates all access requests from
subjects (e.g., users, applications) to objects (e.g., files, databases).
• Three Main Properties of a Reference Monitor:
1. Tamperproof – cannot be modified or disabled
2. Always Invoked – cannot be bypassed; it mediates every access attempt
3. Verifiable – small and simple enough to be analyzed and tested for correctness
• The Reference Monitor is not a product—it’s a design philosophy.
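As an illustration of the reference monitor concept, the following Python sketch (the subjects, objects, and in-memory access matrix are made up for the example) routes every access request through one mediation function that also produces an audit record:

```python
# Conceptual reference monitor sketch: every access is mediated and audited.
# Subjects, objects, and rights here are illustrative placeholders.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

ACCESS_MATRIX = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def mediate(subject: str, obj: str, right: str) -> bool:
    """Single choke point ("always invoked") that checks and logs every request."""
    allowed = right in ACCESS_MATRIX.get((subject, obj), set())
    logging.info("subject=%s object=%s right=%s allowed=%s", subject, obj, right, allowed)
    return allowed

if __name__ == "__main__":
    mediate("alice", "payroll.db", "read")    # allowed
    mediate("alice", "payroll.db", "write")   # denied - not in the matrix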
2.4 Security Kernel
The Security Kernel is the actual implementation of the Reference Monitor in a real
system. It’s the heart of the system’s enforcement mechanisms.
• Responsibilities:
o Manage authentication and authorization
o Enforce access controls
o Audit security-relevant events
o Handle process isolation
Security kernels are commonly found in military-grade systems and are often subject to
formal verification.
2.5 Protection Rings
Protection rings are a hierarchical model of privilege levels used by CPUs to isolate
processes.
• Most common in x86 architecture:
o Ring 0: Kernel mode — full hardware access
o Ring 3: User mode — limited access
o Rings 1 and 2 may be unused or reserved for device drivers
Why it matters:
• A compromise in Ring 0 leads to total control over the system.
• Applications run in Ring 3, isolated from critical processes.
This architecture supports process isolation, an essential part of modern OS security.
2.6 Processor Modes and CPU States
Processors support different modes to segregate code execution:
• User Mode: Limited privileges. Cannot directly access hardware or kernel
memory.
• Kernel Mode: Full control over the system.
Mode transitions (via system calls or interrupts) are tightly controlled to prevent
privilege escalation.
2.7 Multilevel Security (MLS) Systems
MLS systems support users with different security clearance levels accessing data with
varied sensitivity levels.
• Example: A classified government system where users are cleared for Top
Secret, Secret, or Confidential data.
• Access depends on:
o Security Clearance of user (subject)
o Classification of data (object)
o Need-to-know
This model is often implemented using Bell-LaPadula (confidentiality) or Biba (integrity)
frameworks.
2.8 Modes of System Operation
• Dedicated Mode: The user has clearance and need-to-know for all information in the system.
• System High Mode: The user has clearance for all data but may not have need-to-know for all of it.
• Compartmented Mode: Access is restricted to compartments even with high clearance.
• Multilevel Mode: Multiple users with different clearances access the system concurrently.
These modes ensure that access is properly segmented in accordance with clearance
and sensitivity.
2.9 Isolation, Confinement & Bounds
• Isolation: Ensuring processes and users run in separate environments,
preventing interference.
• Confinement: Keeping the influence of a program limited to only the data and
resources it should touch.
• Bounds: Memory and resource boundaries set for a process.
2.10 Layering, Abstraction, and Modularity
• Layering: Building security controls in tiers (e.g., physical > network >
application).
• Abstraction: Hiding complex implementation details through interfaces or APIs.
• Modularity: Designing systems in small, interchangeable components that are
easier to secure and test.
These three principles reduce complexity and enhance system security posture.
Section 3: System Evaluation Models and Assurance Frameworks
System evaluation models are structured methods used to assess the security posture
of information systems. These models define standardized approaches to testing,
verifying, and validating how well a system implements security controls and policies.
They are essential for comparing different products, setting benchmarks, and gaining
certifications.
3.1 Importance of System Evaluation
Security evaluation helps:
• Ensure that systems enforce security policies correctly
• Determine levels of assurance for procurement or certification
• Provide common standards across nations, agencies, and vendors
Evaluation addresses:
• Functionality: What security features are implemented?
• Assurance: How well are those features implemented?
• Effectiveness: How do those features hold up under test?
3.2 Trusted Computer System Evaluation Criteria (TCSEC – Orange Book)
Developed by the U.S. Department of Defense in the 1980s, the TCSEC focuses heavily
on confidentiality and was the earliest formalized evaluation model.
• Main Purpose: Classify systems based on how well they protect classified data.
• Security Functionality Categories:
o Security Policy
o Accountability
o Assurance
o Documentation
• Classes of Systems in TCSEC:
o D – Minimal Protection: Systems evaluated but failed to meet higher standards
o C1 – Discretionary Security: Basic identification and DAC
o C2 – Controlled Access: Stronger accountability, individual logins, object reuse protection
o B1 – Labeled Security: Mandatory Access Control (MAC), security labels
o B2 – Structured Protection: More formal TCB structure, security policy model
o B3 – Security Domains: High assurance, tamper-proof TCB, auditing, reference monitor enforcement
o A1 – Verified Design: Formal design, verification, and rigorous testing
Limitations:
• Focused only on confidentiality (ignored integrity and availability)
• U.S.-centric, not widely adopted internationally
• Primarily useful for military/government systems
3.3 ITSEC (Information Technology Security Evaluation Criteria)
Developed in Europe, ITSEC was created to address limitations in TCSEC. It separates
functionality and assurance, allowing more flexible and realistic evaluation.
• Introduced Security Targets (ST) to define system-specific objectives
• Supports evaluation for confidentiality, integrity, and availability
• Allows commercial systems to be evaluated without mandatory access control
ITSEC Ratings:
• Functionality: F1 to F10 (scope of security features)
• Assurance: E0 to E6 (level of confidence based on testing, documentation, etc.)
Example: F4/E3 means a system with moderate functionality and moderate assurance.
Advantages over TCSEC:
• Evaluates all three components of the CIA triad
• Not tied to mandatory access controls
• Supports commercial software
3.4 Common Criteria (ISO/IEC 15408)
Common Criteria (CC) is the modern, international standard for evaluating IT product
security. It builds on lessons from both TCSEC and ITSEC.
• Adopted as ISO/IEC 15408
• Used globally by governments and organizations for procurement and
certification
• Allows vendors to define their own security needs via Protection Profiles (PP) and
Security Targets (ST)
Key Terms in Common Criteria:
1. Protection Profile (PP):
A generic set of security requirements for a category of products (e.g., firewalls,
smart cards)
2. Security Target (ST):
The vendor's tailored claim of security features in the product being evaluated.
3. Target of Evaluation (TOE):
The specific system or product being tested.
Evaluation Assurance Levels (EALs)
• EAL1 – Functionally Tested: Minimal confidence
• EAL2 – Structurally Tested: Some design understanding
• EAL3 – Methodically Tested and Checked: Moderate assurance
• EAL4 – Methodically Designed, Tested, and Reviewed: Commercial-level security
• EAL5 – Semi-formally Designed and Tested: High security requirement
• EAL6 – Semi-formally Verified Design and Tested: Very high assurance
• EAL7 – Formally Verified Design and Tested: Highest formal assurance
Note: As assurance level increases, cost and complexity of evaluation also rise.
How Evaluation Works in Common Criteria
1. Vendor defines Security Target (ST) for their product.
2. An accredited lab tests the product based on ST and/or PP.
3. Certification body issues an official report and label (e.g., EAL4-certified
firewall).
4. Organizations can use the certification to make procurement decisions.
3.5 Certification vs Accreditation
• Certification:
A technical evaluation verifying that the product meets security specifications.
• Accreditation:
A formal decision by a senior official to authorize operation of the system, based
on risk acceptance.
Example: A product may be certified at EAL4, but an organization still needs to accredit
it for operational use.
3.6 Assurance in System Design
Assurance is the confidence that a system’s security features are implemented and
operate correctly.
• Technical assurance: Comes from code reviews, testing, verification, etc.
• Operational assurance: Comes from logging, monitoring, security processes.
Assurance can be:
• Built-in (design and development phase)
• Tested (evaluation and verification phase)
Section 4: Cryptographic Systems and Principles
Cryptography is the science of securing information through encoding techniques that
ensure confidentiality, integrity, authentication, and non-repudiation. It's a cornerstone
of information security and plays a major role in secure communications, data
protection, and system assurance.
4.1 Core Cryptographic Goals (CIAAN)
• Confidentiality: Preventing unauthorized disclosure of information
• Integrity: Ensuring data has not been altered in transit or storage
• Authentication: Verifying the identity of the sender or receiver
• Authorization: Determining whether the authenticated party is allowed to access the resource
• Non-Repudiation: Preventing denial of actions (e.g., a sender cannot deny sending a message)
4.2 Types of Cryptographic Algorithms
Cryptographic algorithms fall into several broad categories:
1. Symmetric Key Cryptography
• Also called private key cryptography.
• Uses the same key for encryption and decryption.
• Extremely fast and suited for bulk encryption.
Common symmetric algorithms:
• DES (Data Encryption Standard) – outdated, 56-bit key
• 3DES (Triple DES) – applies DES three times for more security
• AES (Advanced Encryption Standard) – secure and widely adopted; supports
128, 192, 256-bit keys
• RC4 (stream cipher), RC5 and RC6 (block ciphers)
• Blowfish and Twofish – block cipher alternatives to DES and AES, respectively
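A hedged example of symmetric bulk encryption: the sketch below uses AES-256 in GCM mode via the third-party cryptography package (assumed to be installed); the message and key handling are simplified for illustration.

```python
# AES-256-GCM encryption/decryption sketch using the `cryptography` package
# (pip install cryptography). Key and nonce handling is simplified for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the same key encrypts and decrypts
nonce = os.urandom(12)                      # 96-bit nonce; must be unique per key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"wire transfer: $100", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"wire transfer: $100"
print("decrypted:", plaintext)
```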
2. Asymmetric Key Cryptography
• Uses two keys: a public key for encryption, and a private key for decryption.
• Solves the key distribution problem of symmetric encryption.
Common asymmetric algorithms:
• RSA – Rivest-Shamir-Adleman; based on factoring large primes
• ECC – Elliptic Curve Cryptography; faster and lighter than RSA
• Diffie-Hellman – key exchange protocol, not used for encryption directly
• ElGamal – built on Diffie-Hellman for encryption
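To illustrate how Diffie-Hellman establishes a shared secret without ever transmitting it, here is a toy Python sketch with deliberately tiny, insecure parameters; real deployments use standardized large-prime or elliptic-curve groups.

```python
# Toy Diffie-Hellman key agreement (educational only - parameters are far too small).
import secrets

p = 4294967291   # small prime standing in for a large safe prime
g = 5            # generator

a = secrets.randbelow(p - 2) + 2        # Alice's private value (never sent)
b = secrets.randbelow(p - 2) + 2        # Bob's private value (never sent)

A = pow(g, a, p)                        # Alice sends A over the open channel
B = pow(g, b, p)                        # Bob sends B over the open channel

shared_alice = pow(B, a, p)             # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)               # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob       # both derive the same shared secret
print("shared secret:", hex(shared_alice))
```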
4.3 Hash Functions
Hash functions produce a fixed-length digest from variable input and are one-way (non-
reversible).
Uses:
• Data integrity checks
• Password storage
• Digital signatures
Common hash algorithms:
• MD5 (128-bit) – outdated due to collision issues
• SHA-1 (160-bit) – broken, no longer secure
• SHA-2 family (SHA-256, SHA-512) – industry standard
• SHA-3 – latest hash function designed using Keccak algorithm
Collision Resistance: It should be computationally infeasible to find two different inputs that produce the same hash output.
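A short example with Python's standard hashlib module shows the fixed-length output and the drastic change in digest caused by a one-character change in the input:

```python
# SHA-256 digests with Python's standard hashlib module.
import hashlib

msg1 = b"Transfer $100 to account 12345"
msg2 = b"Transfer $900 to account 12345"   # one character changed

digest1 = hashlib.sha256(msg1).hexdigest()
digest2 = hashlib.sha256(msg2).hexdigest()

print(digest1)            # always 64 hex characters (256 bits), regardless of input size
print(digest2)            # completely different digest ("avalanche effect")
print(digest1 == digest2) # False - useful for integrity checks
```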
4.4 Digital Signatures
Digital signatures are created using asymmetric encryption. The sender signs the
message with their private key, and the recipient verifies it using the sender’s public key.
Provides:
• Authentication
• Integrity
• Non-repudiation
Signature Process:
1. Message is hashed.
2. Hash is encrypted with sender’s private key.
3. Encrypted hash is sent as the signature.
4. Receiver decrypts it with sender’s public key and compares it to a new hash of
the message.
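The sign-and-verify flow can be sketched with the third-party cryptography package (assumed available); the RSA key pair and message below are placeholders, and the library hashes the message internally before signing:

```python
# Digital signature sketch (RSA-PSS + SHA-256) using the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I authorize payment of $500"

# Sender: sign the message with the private key (SHA-256 is applied internally).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiver: verify with the sender's public key; raises InvalidSignature on tampering.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```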
4.5 Digital Certificates and Public Key Infrastructure (PKI)
PKI is the framework for managing public keys using digital certificates.
• Certificate Authority (CA): Trusted entity that issues certificates
• Registration Authority (RA): Verifies identities before CA issues certificates
• Certificate Revocation List (CRL): List of revoked certificates
• Online Certificate Status Protocol (OCSP): Real-time certificate status check
Digital Certificate:
• Binds a public key to an entity (user, system, device)
• Follows X.509 format
• Contains subject info, issuer info, validity period, serial number, and public key
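To see these X.509 fields in practice, the sketch below builds a self-signed certificate with the cryptography package (in a real PKI the certificate would be issued by a CA); the common name and validity period are placeholder values:

```python
# Build a self-signed X.509 certificate and print its core fields
# (illustration only - production certificates are issued by a trusted CA).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo.example.internal")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: subject == issuer
    .public_key(key.public_key())           # binds the public key to the subject
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())             # signed with the issuer's private key
)

print("subject:", cert.subject.rfc4514_string())
print("issuer:", cert.issuer.rfc4514_string())
print("serial:", cert.serial_number)
print("valid:", cert.not_valid_before, "->", cert.not_valid_after)
```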
4.6 Key Management Lifecycle
Cryptographic key management includes all procedures for generating, distributing,
storing, rotating, revoking, and destroying keys.
Key lifecycle phases:
1. Generation
2. Distribution
3. Storage
4. Use
5. Archival
6. Destruction
Secure key management ensures confidentiality and integrity of cryptographic systems.
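One slice of the lifecycle, rotating stored data from an old key to a new one, can be sketched with the cryptography package's Fernet and MultiFernet helpers (assumed installed); the keys here are generated in memory purely for illustration:

```python
# Key rotation sketch using Fernet/MultiFernet from the `cryptography` package.
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet.generate_key()            # key currently protecting stored data
token = Fernet(old_key).encrypt(b"customer record")

new_key = Fernet.generate_key()            # newly generated replacement key
rotator = MultiFernet([Fernet(new_key), Fernet(old_key)])  # new key listed first

rotated_token = rotator.rotate(token)      # re-encrypts the data under the new key
print(Fernet(new_key).decrypt(rotated_token))  # b'customer record'
# After all data has been rotated, the old key can be archived and then destroyed.
```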
4.7 Modes of Operation (Symmetric Block Ciphers)
Block ciphers like AES operate on fixed-size blocks, and modes determine how each
block is processed.
• ECB (Electronic Codebook): Each block is encrypted separately. Not recommended – repeating plaintext patterns leak into the ciphertext.
• CBC (Cipher Block Chaining): Chains blocks together using an IV. General-purpose encryption.
• CFB (Cipher Feedback): Converts the block cipher into a stream cipher. Network encryption.
• OFB (Output Feedback): Similar to CFB, with keystream pre-processing. Used where error propagation is unacceptable.
• CTR (Counter): Counter mode allows parallel processing. Fast and secure when properly implemented.
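The pattern leakage of ECB noted above can be demonstrated directly. In this sketch (using the cryptography package, with throwaway key and IV values), a plaintext made of repeated 16-byte blocks produces repeated ciphertext blocks under ECB but not under CBC:

```python
# Demonstrating why ECB leaks patterns while CBC does not
# (uses the `cryptography` package; key/IV are throwaway demo values).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
iv = os.urandom(16)
plaintext = b"SIXTEEN BYTE BLK" * 4          # four identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb_ct = encrypt(modes.ECB())
cbc_ct = encrypt(modes.CBC(iv))

ecb_blocks = [ecb_ct[i:i + 16] for i in range(0, len(ecb_ct), 16)]
cbc_blocks = [cbc_ct[i:i + 16] for i in range(0, len(cbc_ct), 16)]

print("distinct ECB blocks:", len(set(ecb_blocks)))  # 1 - identical blocks leak structure
print("distinct CBC blocks:", len(set(cbc_blocks)))  # 4 - chaining hides repetition
```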
4.8 Cryptographic Attacks
• Brute-force: Try all possible keys
• Dictionary attack: Use common passwords or phrases
• Birthday attack: Exploits hash collisions
• Replay attack: Resending captured data to gain unauthorized access
• Chosen plaintext: The attacker can encrypt arbitrary data to analyze patterns
• Side-channel attack: Uses physical information such as power consumption or timing to infer the key
• Man-in-the-middle: The attacker intercepts and possibly alters communication
• Downgrade attack: Forces parties to use weaker protocols or algorithms
Countermeasures:
• Use strong keys and algorithms
• Enable mutual authentication
• Use hashing with salts
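As an example of the hashing-with-salts countermeasure, this standard-library sketch derives password hashes with PBKDF2 and a random per-password salt, which defeats precomputed dictionary and rainbow-table attacks:

```python
# Salted, iterated password hashing with the standard library (PBKDF2-HMAC-SHA256).
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = b"") -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)                       # unique salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```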
Section 5: Physical Security (Facility and Environmental Controls)
Physical security refers to the protection of personnel, hardware, software, networks,
and data from physical actions and events that could cause serious loss or damage.
These include natural disasters, fire, theft, sabotage, and terrorism.
While logical security protects data and systems electronically, physical security is the
first layer of defense in safeguarding assets.
5.1 Goals of Physical Security
The key goals are:
• Deter unauthorized physical access (e.g., guards, lighting, signage)
• Detect intrusion or tampering (e.g., alarms, CCTV, motion sensors)
• Delay intruders long enough for response (e.g., locks, barriers)
• Respond to incidents effectively (e.g., emergency response protocols)
This approach is often referred to as deter–detect–delay–respond.
5.2 Site and Facility Design
Security begins at the site selection stage, where organizations assess threats based on
location, environment, and accessibility.
Factors to consider:
1. Geographic Risks:
o Proximity to flood zones, earthquake zones, political instability
o Neighboring facilities that may be targeted
2. Infrastructure Dependencies:
o Access to power, telecommunications, water, and transportation
o Availability of emergency services (fire, police, hospitals)
3. Visibility and Traffic Flow:
o Clear lines of sight from the road or nearby buildings
o Controlled pedestrian and vehicle traffic
4. Zoning and Local Laws:
o Compliance with safety, environmental, and construction regulations
5.3 Facility Security Zones
Facilities should be segmented into zones based on risk and sensitivity. Access
becomes more restricted the deeper you go.
• Public Zone – Open to general public (lobby, reception)
• Restricted Zone – Staff-only areas (workstations, meeting rooms)
• Secure Zone – Controlled with authentication (server rooms, data centers)
• Sensitive Compartmented Zone – Highest level; access based on strict
clearance and need-to-know (classified rooms, vaults)
Each zone should have:
• Clearly marked boundaries
• Controlled entry and exit points
• Audit trail of movement and access
5.4 Physical Access Controls
Physical access should be managed using a combination of preventive, detective, and
corrective controls:
1. Authentication Devices
• Smart cards, keypads, biometric readers (fingerprint, iris, facial recognition)
2. Entry and Exit Logging
• Use of access control systems (ACS) to log who entered, when, and where
3. Door and Gate Locks
• Mechanical: deadbolts, padlocks
• Electronic: magnetic locks, electronic strikes
• Smart locks integrated with central systems
4. Guards and Escorts
• Trained personnel monitoring access and escorting visitors
• Can challenge tailgating or suspicious activity
5. CCTV and Surveillance
• Strategically placed for blind spot coverage
• Must store video securely with defined retention policies
6. Mantraps
• Two-door access system where the second door unlocks only when the first is
closed
• Prevents piggybacking and tailgating
7. Turnstiles and Security Portals
• Allow entry of one person at a time; often used in secure buildings
5.5 Environmental and Safety Controls
Protecting infrastructure includes managing environmental hazards like fire, water,
temperature, and power fluctuations.
1. Fire Prevention and Suppression
• Fire Classes:
o A – Ordinary combustibles (wood, paper)
o B – Flammable liquids (oil, gasoline)
o C – Electrical equipment
o D – Combustible metals
o K – Kitchen oils and fats
• Detection:
o Smoke detectors (ionization and photoelectric)
o Heat sensors
o Manual pull alarms
• Suppression:
o Water sprinklers (wet/dry pipe systems)
o Gas-based (FM-200, CO₂)
o Clean agents for data centers (non-conductive, non-residual)
2. HVAC Controls
• Maintain optimal temperature/humidity for electronic equipment
• Prevent overheating, condensation, and static electricity
3. Power Protection
• Uninterruptible Power Supply (UPS):
o Provides temporary power during outages
o Filters voltage spikes and noise
• Backup Generators:
o Long-term power support during prolonged outages
o Requires fuel storage and regular testing
• Surge Protectors:
o Prevent voltage surges from damaging devices
• Grounding:
o Safely discharges stray voltages into the earth to prevent shock or
damage
5.6 Equipment Security
Protecting individual devices and systems is critical to physical security.
• Cable Locks – Prevent laptop or device theft
• Screen Filters – Prevent shoulder surfing
• Drive Locks – Prevent removal of hard disks from servers
• BIOS/UEFI Passwords – Prevent unauthorized boot-level access
• Secure Device Disposal – Use data destruction (degaussing, shredding)
5.7 Media Security
Media (paper, tapes, disks) must be protected during use, transport, storage, and
disposal.
• Labeling: Sensitivity labels for easy classification
• Storage: Locked cabinets, vaults for sensitive materials
• Transport: Secure couriers, tamper-evident packaging
• Destruction:
o Paper: Shredding, incineration
o Electronic: Degaussing, overwriting, physical destruction
5.8 Personnel Safety
Facility design must protect staff from hazards and support emergency procedures.
• Emergency exits and evacuation plans
• Alarms and intercom systems
• Emergency lighting and signage
• Assembly points and safety drills
• Panic buttons or duress alarms
Training employees on security awareness, evacuation, and emergency procedures is
critical.
Section 6: Security Capabilities of Information Systems
This section covers the technical mechanisms and features built into systems
(hardware, software, and firmware) to enforce security policies, isolate processes, and
prevent unauthorized access or tampering.
Understanding these capabilities is vital to selecting, configuring, and managing secure
systems in enterprise environments.
6.1 Memory Protection
Memory protection ensures that processes do not access memory areas outside their
assigned boundaries. It’s crucial to:
• Prevent one process from reading or modifying another’s data
• Prevent system-level code from being exposed to user applications
• Enable safe multitasking and enforce separation between user and kernel
spaces
Key memory protection mechanisms:
• Base and Limit Registers: Define memory boundaries for each process
• Segmentation: Memory divided into segments (code, data, stack), each with
specific access rights
• Paging: Breaks memory into fixed-size pages; helps implement virtual memory
and isolates processes
• No-execute (NX) Bit: Prevents execution of code in certain memory areas (e.g.,
stack)
• Address Space Layout Randomization (ASLR): Randomizes memory address
locations to prevent exploitation
6.2 Process Isolation
Each process must operate independently and be restricted from accessing others. This
isolation ensures data confidentiality and stability across user sessions.
Isolation is achieved through:
• Separate memory address spaces
• Dedicated process identifiers (PIDs)
• OS-enforced permissions and resource quotas
• Virtual memory systems
Effective isolation is a foundation for container security (e.g., Docker) and sandboxing
techniques.
6.3 Virtualization Security
Virtualization allows multiple OS instances to run on a single hardware platform using
hypervisors. This provides cost savings, scalability, and flexibility—but also introduces
unique risks.
Types of Hypervisors:
• Type 1 (Bare-Metal): Runs directly on hardware (e.g., VMware ESXi, Microsoft
Hyper-V)
• Type 2 (Hosted): Runs within a host OS (e.g., Oracle VirtualBox, VMware
Workstation)
Virtualization threats:
• Escape attacks (VM accessing host or other VMs)
• Hypervisor compromise
• Improper isolation
• Snapshot leakage (saved states contain sensitive data)
Controls:
• Patch hypervisors regularly
• Limit administrative access
• Use secure boot and hardware-assisted virtualization (e.g., Intel VT-x, AMD-V)
6.4 Hardware Security Mechanisms
Trusted Platform Module (TPM):
• A hardware chip embedded in motherboards
• Provides secure generation and storage of cryptographic keys
• Supports secure boot, full disk encryption (BitLocker), and system integrity
checks
Features:
• Secure key storage
• Remote attestation (proof of integrity to a remote party)
• Platform integrity measurement via cryptographic hash
Hardware Security Module (HSM):
• Dedicated appliance or card used for cryptographic key generation, encryption,
and signing
• Tamper-resistant and isolated from general-purpose OS
Used for:
• Certificate Authorities (CA)
• Payment processing
• Government systems
6.5 Trusted Computing and Secure Boot
Trusted computing ensures that a system starts in a known secure state and maintains
that trust throughout its operation.
Secure Boot:
• A security standard in UEFI that ensures only signed bootloaders and OS
components can load
• Prevents bootkits, rootkits, and malicious firmware
Measured Boot:
• Measures each component during startup and stores results in TPM for
attestation
• Detects unauthorized changes in boot sequence
Chain of Trust:
• Each component in the boot process verifies the integrity of the next
• Starts from a Root of Trust (usually embedded in TPM or firmware)
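A chain of trust can be sketched in a few lines of Python: each stage carries the expected hash of the next, and boot halts if any measured hash does not match. The stage names and images below are hypothetical stand-ins, and real measured boot anchors these values in the TPM:

```python
# Simplified chain-of-trust sketch: each stage is measured before control is handed off.
# Stage contents are hypothetical stand-ins for real firmware/bootloader/kernel images.
import hashlib

def measure(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

BOOT_STAGES = [b"firmware image", b"bootloader image", b"kernel image"]

# Expected measurements, provisioned in advance (e.g., vendor-signed reference values).
EXPECTED = [measure(stage) for stage in BOOT_STAGES]

def boot(stages, expected):
    for index, stage in enumerate(stages):
        if measure(stage) != expected[index]:
            raise RuntimeError(f"integrity check failed at stage {index}")
        print(f"stage {index} verified, handing off control")

boot(BOOT_STAGES, EXPECTED)  # all measurements match
try:
    boot([b"firmware image", b"tampered bootloader", b"kernel image"], EXPECTED)
except RuntimeError as err:
    print("boot halted:", err)
```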
6.6 Sandboxing and Containers
Sandboxing runs applications in isolated environments to prevent them from affecting
the rest of the system.
Used in:
• Web browsers (e.g., Chrome tabs)
• Antivirus analysis
• Malware containment
Containers (e.g., Docker, Kubernetes):
• Lightweight isolated environments using shared OS kernel
• More efficient than virtual machines but offer less isolation
Security Concerns:
• Container escape attacks
• Misconfigured privileges (running as root)
• Unverified container images
Controls:
• Use signed container images
• Apply least privilege in container runtime
• Leverage container security tools like AppArmor or SELinux
6.7 Firmware Security
Firmware is low-level code stored on chips (e.g., BIOS, UEFI, SSD firmware, network
card firmware). It initializes hardware and hands off control to the OS.
Threats:
• Firmware malware (e.g., UEFI rootkits)
• Unauthorized updates
• Persistent backdoors
Controls:
• Use digitally signed firmware updates
• Restrict firmware update permissions
• Enable Secure Boot and firmware integrity checks
• Monitor for anomalies using tools like CHIPSEC
6.8 Input Validation and Software Controls
Input validation ensures that data provided by users or external systems is properly
verified before processing.
Improper input handling leads to:
• Buffer overflows
• SQL injection
• Command injection
• Cross-site scripting (XSS)
Controls include:
• Whitelisting valid input
• Proper encoding and escaping
• Using secure APIs and frameworks
• Performing boundary checks and size validation
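A minimal example of allowlist (whitelist) input validation in Python; the field and pattern are hypothetical, and anything that does not match the expected format and length is rejected before it reaches back-end processing:

```python
# Allowlist input validation: accept only the expected format, reject everything else.
import re

# Hypothetical rule: usernames are 3-20 characters of letters, digits, dots, or underscores.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9._]{3,20}$")

def validate_username(value: str) -> str:
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")  # fail closed before the value is used
    return value

for candidate in ["alice_01", "bob'; DROP TABLE users;--", "<script>alert(1)</script>"]:
    try:
        print("accepted:", validate_username(candidate))
    except ValueError:
        print("rejected:", candidate)
```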
6.9 Execution Control and Application Whitelisting
Restricting what software and code can run on systems helps reduce exposure.
• Execution Control: Prevents unauthorized code from being executed (e.g.,
AppLocker, Software Restriction Policies)
• Whitelisting: Only pre-approved apps can run; all others are blocked
• Blacklisting: Known bad apps are blocked, but others are allowed (less secure)
Use Cases:
• Preventing ransomware
• Securing kiosk systems
• Locking down high-security environments
Section 7: Security Models
Security models are conceptual frameworks used to define and enforce security
policies in information systems. These models ensure that data is protected according
to organizational goals—such as confidentiality, integrity, and conflict-of-interest
prevention.
These models often form the basis for designing operating systems, access control
mechanisms, and database security.
7.1 Bell-LaPadula Model (Confidentiality-Focused)
The Bell-LaPadula (BLP) model was designed for military and government systems
where confidentiality is critical. It prevents unauthorized disclosure of information by
controlling access to data based on security levels.
Key Rules:
1. Simple Security Property ("no read up"):
A subject at a lower classification level cannot read data at a higher
classification level.
2. Star (*) Property ("no write down"):
A subject cannot write data to a lower classification level.
Focus: Confidentiality
Weakness: Does not address integrity or availability
Use Case: Military environments with classified information
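A minimal sketch of the two Bell-LaPadula rules over an ordered list of classification levels (the labels and example calls are illustrative):

```python
# Bell-LaPadula sketch: "no read up" and "no write down" over ordered labels.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Security Property: a subject may not read above its clearance."""
    return RANK[subject_level] >= RANK[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star Property: a subject may not write below its clearance."""
    return RANK[subject_level] <= RANK[object_level]

print(can_read("Secret", "Top Secret"))     # False - no read up
print(can_read("Secret", "Confidential"))   # True
print(can_write("Secret", "Confidential"))  # False - no write down
print(can_write("Secret", "Top Secret"))    # True
```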
7.2 Biba Model (Integrity-Focused)
The Biba Model ensures data integrity by preventing unauthorized or untrusted
modification of data.
Key Rules:
1. Simple Integrity Axiom ("no read down"):
A subject at a higher integrity level cannot read data at a lower integrity level.
2. Star Integrity Axiom ("no write up"):
A subject cannot write data to a higher integrity level.
Focus: Integrity
Use Case: Financial systems where maintaining accurate records is vital
7.3 Clark-Wilson Model (Commercial Integrity)
The Clark-Wilson model is designed for commercial environments where data integrity
is enforced through well-structured transactions.
Key Concepts:
• Uses certified transactions (well-formed procedures)
• Enforces separation of duties between roles
• Introduces:
o Transformation Procedures (TP): Procedures that manipulate data
o Constrained Data Items (CDI): Protected data
o Integrity Verification Procedures (IVP): Ensure data consistency
Strength: Controls who can do what to data through processes
Use Case: Banking systems, ERP systems
7.4 Brewer-Nash Model (Conflict of Interest)
Also known as the Chinese Wall model or the dynamic security model, Brewer-Nash is
designed to avoid conflicts of interest — primarily in consulting or law firms.
Principle:
Access is determined based on what data a user has previously accessed.
If a consultant accesses data from Company A, they cannot access Company B's
conflicting data during the same session or role context.
Strength: Dynamic adjustment of access control based on user behavior
Use Case: Legal and consulting environments
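The dynamic behavior of Brewer-Nash can be sketched as follows: once a consultant touches data from one company in a conflict-of-interest class, requests for competing companies in the same class are refused. The company names and classes are made up for the example:

```python
# Brewer-Nash (Chinese Wall) sketch: access depends on what was accessed before.
# Conflict-of-interest classes and company names are illustrative.
CONFLICT_CLASSES = {
    "BankA": "banking", "BankB": "banking",
    "OilCo": "energy", "GasCo": "energy",
}

history: dict[str, set[str]] = {}   # datasets each consultant has already accessed

def request_access(consultant: str, company: str) -> bool:
    accessed = history.setdefault(consultant, set())
    for prior in accessed:
        same_class = CONFLICT_CLASSES[prior] == CONFLICT_CLASSES[company]
        if same_class and prior != company:
            return False            # would create a conflict of interest
    accessed.add(company)
    return True

print(request_access("dana", "BankA"))  # True  - first access in the banking class
print(request_access("dana", "BankB"))  # False - conflicts with BankA
print(request_access("dana", "OilCo"))  # True  - different conflict class
```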
7.5 Graham-Denning Model (Secure Object Management)
The Graham-Denning model defines a set of secure operations for managing subjects
and objects in a system.
It uses an access control matrix and allows:
• Creating/deleting subjects and objects
• Granting/revoking access rights
• Transferring rights between subjects
Focus: Formal structure for object access
Use Case: Operating system security, object lifecycle management
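The kind of operations Graham-Denning formalizes can be pictured with a tiny access-control-matrix sketch supporting create, grant, and revoke; the subjects, objects, and rights are placeholders, and the eight formal rules are reduced to three for brevity:

```python
# Tiny access control matrix with create/grant/revoke operations (Graham-Denning style).
# Subjects, objects, and rights are placeholders; the 8 formal rules are reduced to 3.
from collections import defaultdict

matrix: dict[tuple[str, str], set[str]] = defaultdict(set)

def create_object(subject: str, obj: str) -> None:
    matrix[(subject, obj)].add("owner")              # the creator becomes the owner

def grant_right(granter: str, subject: str, obj: str, right: str) -> None:
    if "owner" not in matrix[(granter, obj)]:
        raise PermissionError(f"{granter} does not own {obj}")
    matrix[(subject, obj)].add(right)

def revoke_right(granter: str, subject: str, obj: str, right: str) -> None:
    if "owner" not in matrix[(granter, obj)]:
        raise PermissionError(f"{granter} does not own {obj}")
    matrix[(subject, obj)].discard(right)

create_object("alice", "report.docx")
grant_right("alice", "bob", "report.docx", "read")
print(matrix[("bob", "report.docx")])                # {'read'}
revoke_right("alice", "bob", "report.docx", "read")
print(matrix[("bob", "report.docx")])                # set()
```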
7.6 Harrison-Ruzzo-Ullman (HRU) Model
The HRU Model is a formal access control model that expands on the Graham-Denning
Model. It’s designed to analyze whether a system can reach an unsafe state, i.e., where
unauthorized access becomes possible.
Core Idea:
• Uses access matrix
• Can determine if a subject can gain access to an object over time
• The "safety problem" (determining if a state can be unsafe) is undecidable in
general
Use Case: Formal analysis of system safety
7.7 Take-Grant Model
This model simplifies access rights through a graph-based structure.
• Subjects and objects are nodes
• Edges represent access rights like "take" or "grant"
• Take allows a subject to take rights from another
• Grant allows a subject to grant rights to others
Useful for analyzing how rights propagate through a system.
7.8 Information Flow Model
This model focuses on how data flows between different levels of classification or trust.
• Data must not flow from high to low (confidentiality)
• Or from low to high (integrity)
Used in:
• Secure Operating Systems
• Multilevel secure databases
It is also implemented in mandatory access control (MAC) environments.
7.9 Non-Interference Model
This model ensures that actions at higher security levels do not influence or interfere
with the behavior of users at lower levels.
Goal: Prevent covert channels and data leakage
Used in:
• Multilevel secure systems
• Defense systems where confidentiality must be strictly enforced
7.10 Lattice-Based Model
A lattice model represents security labels as a set of levels and categories forming a
mathematical lattice structure.
• Subjects and objects are assigned labels
• Access decisions are made by comparing lattice positions
• Supports fine-grained and hierarchical control
Used in:
• Mandatory Access Control (MAC) systems
• Multilevel government systems
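A lattice comparison can be sketched by pairing a hierarchical level with a set of categories: a subject's label dominates an object's label only when its level is at least as high and its category set is a superset (the labels below are illustrative):

```python
# Lattice-based label comparison: (level, categories) dominance check.
LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(subject_label, object_label) -> bool:
    """Subject label dominates object label if level >= and categories form a superset."""
    s_level, s_cats = subject_label
    o_level, o_cats = object_label
    return LEVELS[s_level] >= LEVELS[o_level] and s_cats >= o_cats

subject = ("Secret", {"NUCLEAR", "CRYPTO"})
print(dominates(subject, ("Confidential", {"CRYPTO"})))       # True  - access permitted
print(dominates(subject, ("Secret", {"NUCLEAR", "SIGINT"})))  # False - missing SIGINT category
```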
Section 8: Evaluation Criteria and Certification Frameworks
Security evaluation frameworks provide standardized methods to assess, compare, and
certify the security features and assurance levels of information systems, software, and
hardware.
These models are vital for procurement decisions, regulatory compliance, and
international trade of security-related products.
8.1 Trusted Computer System Evaluation Criteria (TCSEC) – Orange Book
Developed by the U.S. Department of Defense, TCSEC was the first widely used formal
evaluation system for computer security.
Focus: Confidentiality
Does not address: Integrity or availability
Main Categories (with increasing assurance):
• D – Minimal Protection: No meaningful security controls
• C1 – Discretionary Security Protection: Basic DAC and user IDs
• C2 – Controlled Access Protection: Login controls, audit trails, finer-grained DAC
• B1 – Labeled Security Protection: MAC, classification labels
• B2 – Structured Protection: Stronger TCB, covert channel controls
• B3 – Security Domains: Dedicated system functions, TCB isolation
• A1 – Verified Design: Formal verification of the security design
Limitations:
• U.S.-centric
• Focused mainly on confidentiality
• Assumes standalone systems, not networks
8.2 Information Technology Security Evaluation Criteria (ITSEC)
Developed in Europe as a response to TCSEC’s U.S. limitations, ITSEC was multi-
national (UK, Germany, France, Netherlands).
Differences from TCSEC:
• Separates functionality and assurance
• Covers confidentiality, integrity, and availability
• Supports commercial systems
Functionality classes (F1–F10): Define what security features are present
Assurance levels (E0–E6): Define how well these features are implemented and
verified
8.3 Common Criteria (CC)
Common Criteria is an international standard (ISO/IEC 15408) for evaluating IT
products and systems.
It unified TCSEC, ITSEC, and others into a globally recognized framework for security
certification.
Key Components:
• Protection Profiles (PP): Consumer-defined requirements for a type of product
(e.g., firewalls)
• Security Targets (ST): Vendor-defined implementation of security features
• Evaluation Assurance Levels (EAL): Assurance ratings from EAL1 (lowest) to
EAL7 (highest)
• EAL1: Functionally tested
• EAL2: Structurally tested
• EAL3: Methodically tested and checked
• EAL4: Methodically designed, tested, and reviewed
• EAL5: Semi-formally designed and tested
• EAL6: Semi-formally verified design and tested
• EAL7: Formally verified design and tested
Benefits:
• International recognition
• Tailored evaluation
• Suitable for both government and commercial use
Limitation:
• Evaluation can be slow and costly
• Only evaluates what's claimed in the ST (doesn’t find hidden functionality)
8.4 Certification vs Accreditation
• Certification:
A technical evaluation of security controls to ensure the system meets security
requirements.
o Typically done by technical assessors or auditors.
o Focuses on vulnerabilities, configurations, design, etc.
• Accreditation:
A management decision to authorize operation of the system based on
certification results and residual risks.
o Made by an authorizing official.
o Balances risk with business need.
Both are required in formal environments (e.g., government, defense).
8.5 Security Assurance and Security Functionality
• Security Assurance: The confidence that the security controls are properly
designed and implemented.
o Related to how thoroughly a system has been evaluated and tested.
• Security Functionality: The features and mechanisms the system provides (e.g.,
encryption, access control, auditing).
Evaluation frameworks typically assess both dimensions.
8.6 ISO/IEC 15408 and International Adoption
The Common Criteria standard is recognized by many countries under the Common
Criteria Recognition Arrangement (CCRA).
• Allows for mutual recognition of security evaluations.
• Products certified in one country are accepted in others under defined
conditions.
Participating countries include the USA, UK, Germany, Canada, Japan, Australia, South
Korea, and others.
Summary of Domain 3: Security Architecture and Engineering
CISSP Domain 3 encompasses:
• Security design principles (least privilege, defense in depth)
• Architecture components (TCB, security kernel, reference monitor)
• Cryptographic systems and attacks
• Trusted computing (TPM, secure boot)
• Secure system evaluation models (TCSEC, ITSEC, Common Criteria)
• Physical and environmental security
• Formal models (Bell-LaPadula, Biba, Clark-Wilson)
• Secure hardware/software/firmware features
Understanding these concepts provides a strong foundation for building, evaluating, and maintaining secure systems in modern IT environments.
THANK YOU
Enroll with MoS – CISSP
Training @ ₹4,999!
WWW.MINISTRYOFSECURITY.CO