OS Unit - 1 Notes
Computer-System Organization
A modern general-purpose computer system consists of one or more CPUs and a
number of device controllers connected through a common bus that provides access
between components and shared memory.
Each device controller is in charge of a specific type of device (for example, a disk
drive, audio device, or graphics display).
Depending on the controller, more than one device may be attached. For instance, one
system USB port can connect to a USB hub, to which several devices can connect.
A device controller maintains some local buffer storage and a set of special-purpose registers.
The device controller is responsible for moving the data between the peripheral devices
that it controls and its local buffer storage.
Typically, operating systems have a device driver for each device controller.
This device driver understands the device controller and provides the rest of the
operating system with a uniform interface to the device.
The CPU and the device controllers can execute in parallel, competing for memory cycles.
To ensure orderly access to the shared memory, a memory controller synchronizes
access to the memory.
Interrupts:
Consider a typical computer operation: a program performing I/O.
To start an I/O operation, the device driver loads the appropriate registers in the device
controller.
The device controller, in turn, examines the contents of these registers to determine
what action to take (such as “read a character from the keyboard”).
The controller starts the transfer of data from the device to its local buffer.
Once the transfer of data is complete, the device controller informs the device driver that it has finished its operation.
Figure 1.3 Interrupt timeline for a single program doing output.
Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually
by way of the system bus.
(There may be many buses within a computer system, but the system bus is the main
communications path between the major components.)
Interrupts are used for many other purposes as well and are a key part of how operating
systems and hardware interact.
When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location.
The fixed location usually contains the starting address where the service routine for the
interrupt is located.
Implementation:
The basic interrupt mechanism works as follows. The CPU hardware has a wire called
the interrupt-request line that the CPU senses after executing every instruction.
When the CPU detects that a controller has asserted a signal on the interrupt-request
line, it reads the interrupt number and jumps to the interrupt-handler routine by using
that interrupt number as an index into the interrupt vector.
It then starts execution at the address associated with that index.
The interrupt handler saves any state it will be changing during its operation,
determines the cause of the interrupt, performs the necessary processing, performs a
state restore, and executes a return from interrupt instruction to return the CPU to the
execution state prior to the interrupt.
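As a rough sketch of this dispatch step (not from the text; the handler names and vector numbers are invented for illustration), the interrupt vector can be pictured as an array of handler function pointers indexed by the interrupt number:

    #include <stdio.h>

    #define NUM_VECTORS 256

    typedef void (*interrupt_handler)(void);

    /* Hypothetical interrupt vector: one handler address per interrupt number. */
    static interrupt_handler interrupt_vector[NUM_VECTORS];

    static void keyboard_handler(void) {
        /* Save state, read the character from the controller, restore state... */
        printf("keyboard interrupt serviced\n");
    }

    /* Called (conceptually) when the interrupt-request line is asserted. */
    static void dispatch_interrupt(int interrupt_number) {
        interrupt_handler handler = interrupt_vector[interrupt_number];
        if (handler != NULL)
            handler();   /* jump to the handler registered for this interrupt number */
    }

    int main(void) {
        interrupt_vector[33] = keyboard_handler;  /* register a handler at vector entry 33 */
        dispatch_interrupt(33);                   /* simulate the hardware raising interrupt 33 */
        return 0;
    }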
The basic interrupt mechanism just described enables the CPU to respond to an
asynchronous event, as when a device controller becomes ready for service.
In a modern operating system, however, we need more sophisticated interrupt-handling features: the ability to defer interrupt handling during critical processing, an efficient way to dispatch to the proper interrupt handler for a device, and multilevel interrupts, so that the operating system can distinguish between high- and low-priority interrupts and can respond with the appropriate degree of urgency.
Most CPUs have two interrupt request lines. One is the non-maskable interrupt,
which is reserved for events such as unrecoverable memory errors.
The second interrupt line is maskable: it can be turned off by the CPU before the
execution of critical instruction sequences that must not be interrupted.
The maskable interrupt is used by device controllers to request service.
A common way to solve the problem that there are often more devices (and hence interrupt handlers) than there are entries in the interrupt vector is to use interrupt chaining, in which each element in the interrupt vector points to the head of a list of interrupt handlers.
The interrupt mechanism also implements a system of interrupt priority levels.
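A small sketch (not from the text; names invented, error handling omitted) of the interrupt chaining just described: each vector entry holds the head of a linked list of handlers, and each handler in the chain is polled until one claims the interrupt:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* A chained handler returns true if its device caused the interrupt. */
    typedef bool (*chained_handler)(void);

    struct handler_node {
        chained_handler handler;
        struct handler_node *next;
    };

    /* One list head per interrupt vector entry (only one entry shown). */
    static struct handler_node *vector_entry_5;

    static void register_handler(struct handler_node **head, chained_handler h) {
        struct handler_node *node = malloc(sizeof *node);   /* allocation check omitted */
        node->handler = h;
        node->next = *head;          /* add to the head of the chain */
        *head = node;
    }

    static bool disk_handler(void) { printf("disk: not mine\n"); return false; }
    static bool net_handler(void)  { printf("net: handled\n");   return true;  }

    /* Walk the chain until some handler claims the interrupt. */
    static void dispatch_chained(struct handler_node *head) {
        for (struct handler_node *n = head; n != NULL; n = n->next)
            if (n->handler())
                return;
    }

    int main(void) {
        register_handler(&vector_entry_5, disk_handler);
        register_handler(&vector_entry_5, net_handler);
        dispatch_chained(vector_entry_5);
        return 0;
    }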
Storage Structure:
The CPU can load instructions only from memory, so any programs must first be
loaded into memory to run.
General-purpose computers run most of their programs from rewritable memory, called
main memory (also called random-access memory, or RAM).
Main memory commonly is implemented in a semiconductor technology called
dynamic random-access memory (DRAM).
Computers use other forms of memory as well. For example, the first program to run on
computer power-on is a bootstrap program, which then loads the operating system.
Since RAM is volatile—loses its content when power is turned off or otherwise lost—
we cannot trust it to hold the bootstrap program.
Instead, for this and some other purposes, the computer uses electrically erasable programmable read-only memory (EEPROM) and other forms of firmware—storage that is infrequently written to and is non-volatile.
All forms of memory provide an array of bytes. Each byte has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses.
The load instruction moves a byte or word from main memory to an internal register
within the CPU, whereas the store instruction moves the content of a register to main
memory.
Aside from explicit loads and stores, the CPU automatically loads instructions from
main memory for execution from the location stored in the program counter.
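The following toy model (not from the text; the instruction encoding is invented) illustrates load, store, and instruction fetch driven by a program counter over memory treated as an array of words:

    #include <stdio.h>

    enum { LOAD = 0, STORE = 1, HALT = 2 };

    int main(void) {
        int memory[16] = {0};
        int reg = 0;          /* one CPU register */
        int pc  = 0;          /* program counter: address of the next instruction */

        /* A tiny "program": LOAD from address 10, STORE to address 11, HALT.
           Each instruction occupies two words: opcode, operand address. */
        memory[0] = LOAD;  memory[1] = 10;
        memory[2] = STORE; memory[3] = 11;
        memory[4] = HALT;
        memory[10] = 42;      /* data */

        for (;;) {
            int opcode  = memory[pc];        /* instruction fetch from the location in pc */
            int operand = memory[pc + 1];
            pc += 2;
            if (opcode == LOAD)       reg = memory[operand];   /* memory -> register */
            else if (opcode == STORE) memory[operand] = reg;   /* register -> memory */
            else break;                                        /* HALT */
        }
        printf("memory[11] = %d\n", memory[11]);   /* prints 42 */
        return 0;
    }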
I/O Structure:
A large portion of operating system code is dedicated to managing I/O, both because of
its importance to the reliability and performance of a system and because of the varying
nature of the devices.
Recall from the beginning of this section that a general-purpose computer system consists of multiple devices, all of which exchange data via a common bus.
Computer-System Architecture
A computer system can be organized in a number of different ways, which we can
categorize roughly according to the number of general-purpose processors used.
1. Single-Processor Systems
2. Multiprocessor Systems
3. Clustered Systems
Single-Processor Systems:
Many years ago, most computer systems used a single processor containing one CPU with a single processing core.
The core is the component that executes instructions and contains registers for storing data locally.
The one main CPU with its core is capable of executing a general-purpose instruction
set, including instructions from processes.
Such systems may also contain special-purpose processors, in the form of device-specific processors such as disk, keyboard, and graphics controllers.
All of these special-purpose processors run a limited instruction set and do not run processes. Sometimes they are managed by the operating system, in that the operating system sends them information about their next task and monitors their status.
Multiprocessor Systems:
Multiprocessor systems now dominate the landscape of computing.
Traditionally, such systems have two (or more) processors, each with a single-core
CPU.
The processors share the computer bus and sometimes the clock, memory, and
peripheral devices.
The primary advantage of multiprocessor systems is increased throughput.
That is, by increasing the number of processors, we expect to get more work done in
less time.
The most common multiprocessor systems use symmetric multiprocessing (SMP), in
which each peer CPU processor performs all tasks, including operating-system
functions and user processes.
The benefit of this model is that many processes can run simultaneously —N processes
can run if there are N CPUs—without causing performance to deteriorate significantly.
However, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies; these inefficiencies can be avoided if the processors share certain data structures.
A multiprocessor system of this form will allow processes and resources—such as
memory—to be shared dynamically among the various processors and can lower the
workload variance among the processors.
The definition of multiprocessor has evolved over time and now includes multicore
systems, in which multiple computing cores reside on a single chip.
Multicore systems can be more efficient than multiple chips with single cores because
on-chip communication is faster than between-chip communication.
In one design, each CPU (or group of CPUs) has its own local memory that is accessed via a small, fast local bus, and the CPUs are connected by a shared system interconnect, so that all CPUs share one physical address space.
This approach—known as non-uniform memory access, or NUMA—is illustrated in Figure 1.10.
The advantage is that, when a CPU accesses its local memory, not only is it fast, but
there is also no contention over the system interconnect.
Thus, NUMA systems can scale more effectively as more processors are added.
A potential drawback with a NUMA system is increased latency when a CPU must
access remote memory across the system interconnect, creating a possible performance
penalty.
Finally, blade servers are systems in which multiple processor boards, I/O boards, and
networking boards are placed in the same chassis.
The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system.
Some blade-server boards are multiprocessor as well, which blurs the lines between types of computers.
Clustered Systems
Another type of multiprocessor system is a clustered system, which gathers together
multiple CPUs.
Clustered systems differ from the multiprocessor systems described in Section 1.3.2 in
that they are composed of two or more individual systems—or nodes—joined together;
each node is typically a multicore system.
Such systems are considered loosely coupled.
We should note that the definition of clustered is not concrete; many commercial and
open source packages wrestle to define what a clustered system is and why one form is
better than another.
The generally accepted definition is that clustered computers share storage and are closely linked via a local-area network (LAN).
Access to the shared storage must be coordinated so that the nodes do not conflict; this function, commonly known as a distributed lock manager (DLM), is included in some cluster technology.
Operating-System Operations
Now that we have discussed basic information about computer-system organization and
architecture, we are ready to talk about operating systems.
An operating system provides the environment within which programs are executed.
Internally, operating systems vary greatly, since they are organized along many
different lines.
HADOOP
Hadoop is an open-source software framework that is used for distributed processing of
large data sets (known as big data) in a clustered system containing simple, low-cost
hardware components.
Hadoop is designed to scale from a single system to a cluster containing thousands of
computing nodes.
Tasks are assigned to a node in the cluster, and Hadoop arranges communication
between nodes to manage parallel computations to process and coalesce results.
Hadoop also detects and manages failures in nodes, providing an efficient and highly
reliable distributed computing service.
Another form of interrupt is a trap (or an exception), which is a software-generated
interrupt caused either by an error (for example, division by zero or invalid memory
access) or by a specific request from a user program that an operating-system service be
performed by executing a special operation called a system call.
Multiprogramming and Multitasking
One of the most important aspects of operating systems is the ability to run multiple programs,
as a single program cannot, in general, keep either the CPU or the I/O devices busy at all times.
Furthermore, users typically want to run more than one program at a time as well.
Multiprogramming increases CPU utilization, as well as keeping users satisfied, by
organizing programs so that the CPU always has one to execute.
In a multiprogrammed system, a program in execution is termed a process.
Multitasking is a logical extension of multiprogramming. In multitasking systems, the CPU
executes multiple processes by switching among them, but the switches occur frequently,
providing the user with a fast response time.
Consider that when a process executes, it typically executes for only a short time before it
either finishes or needs to perform I/O. I/O may be interactive; that is, output goes to a display
for the user, and input comes from a user keyboard, mouse, or touch screen.
Multitasking therefore relies on related mechanisms such as CPU scheduling, which decides which process the CPU runs next, and virtual memory, which separates logical memory as viewed by the user from physical memory.
Dual-Mode and Multimode Operation
Since the operating system and its users share the hardware and software resources of the computer system, a properly designed operating system must ensure that an incorrect (or malicious) program cannot cause other programs—or the operating system itself—to execute incorrectly.
In order to ensure the proper execution of the system, we must be able to distinguish
between the execution of operating-system code and user-defined code.
At the very least, we need two separate modes of operation: user mode and kernel
mode (also called supervisor mode, system mode, or privileged mode).
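Purely as a conceptual sketch (not hardware code; the mode flag and function names are invented), the idea can be modelled as a single mode bit that privileged work checks before proceeding, with a system call temporarily switching the mode:

    #include <stdio.h>

    /* Illustrative only: a "mode bit" distinguishes user mode from kernel mode. */
    enum mode { KERNEL_MODE = 0, USER_MODE = 1 };

    static enum mode current_mode = USER_MODE;

    static void privileged_operation(void) {
        if (current_mode != KERNEL_MODE) {
            printf("trap: privileged instruction attempted in user mode\n");
            return;
        }
        printf("privileged operation performed\n");
    }

    /* A system call conceptually switches to kernel mode, does the work,
       and switches back to user mode before returning to the user program. */
    static void system_call(void) {
        current_mode = KERNEL_MODE;
        privileged_operation();
        current_mode = USER_MODE;
    }

    int main(void) {
        privileged_operation();   /* rejected: still in user mode */
        system_call();            /* performed on the program's behalf */
        return 0;
    }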
Resource Management
A program can do nothing unless its instructions are executed by a CPU.
A program in execution, as mentioned, is a process.
A program such as a compiler is a process, and a word-processing program being run
by an individual user on a PC is a process. Similarly, a social media app on a mobile
device is a process.
A process needs certain resources—including CPU time, memory, files, and I/O
devices—to accomplish its task.
A process is the unit of work in a system. A system consists of a collection of
processes, some of which are operating-system processes (those that execute system
code) and the rest of which are user processes (those that execute user code). All these
processes can potentially execute concurrently—by multiplexing on a single CPU core
—or in parallel across multiple CPU cores.
The operating system is responsible for the following activities in connection with process management:
Creating and deleting both user and system processes
Scheduling processes and threads on the CPUs
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Memory Management
Main memory is a large array of bytes, ranging in size from hundreds of thousands to
billions. Each byte has its own address.
Main memory is a repository of quickly accessible data shared by the CPU and I/O
devices.
The CPU reads instructions from main memory during the instruction-fetch cycle and
both reads and writes data from main memory during the data-fetch cycle (on a von
Neumann architecture).
The operating system is responsible for the following activities in connection with memory management:
Keeping track of which parts of memory are currently being used and which
process is using them
Allocating and deallocating memory space as needed
Deciding which processes (or parts of processes) and data to move into and
out of memory
File-System Management
The operating system abstracts from the physical properties of its storage devices to
define a logical storage unit, the file.
File management is one of the most visible components of an operating system.
Computers can store information on several different types of physical media.
Secondary storage is the most common, but tertiary storage is also possible.
The operating system is responsible for the following activities in connection with file
management:
Creating and deleting files
Creating and deleting directories to organize files
Supporting primitives for manipulating files and directories
Mapping files onto mass storage
Backing up files on stable (non-volatile) storage media
Mass-Storage Management
As we have already seen, the computer system must provide secondary storage to back up main memory.
Most modern computer systems use HDDs and NVM devices as the principal on-line
storage media for both programs and data.
Most programs—including compilers, web browsers, word processors, and games—are
stored on these devices until loaded into memory.
The operating system is responsible for the following activities in connection with secondary
storage management:
Mounting and unmounting
Free-space management
Storage allocation
Disk scheduling
Partitioning
Protection
Cache Management
Caching is an important principle of computer systems. Here’s how it works.
Information is normally kept in some storage system (such as main memory).
As it is used, it is copied into a faster storage system—the cache—on a temporary basis.
When we need a particular piece of information, we first check whether it is in the cache. If it is, we use the information directly from the cache; if it is not, we use the information from the source, putting a copy in the cache under the assumption that we will need it again soon.
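A minimal sketch of this check-the-cache-first logic (the direct-mapped layout and sizes are assumptions made for illustration, not from the text):

    #include <stdio.h>

    /* Toy direct-mapped cache over a "slow" backing array: check the cache first,
       use the cached copy on a hit, otherwise fetch from memory and keep a copy. */
    #define CACHE_SLOTS 4

    struct cache_line { int valid; int address; int value; };

    static int main_memory[64];
    static struct cache_line cache[CACHE_SLOTS];

    static int read_with_cache(int address) {
        struct cache_line *line = &cache[address % CACHE_SLOTS];
        if (line->valid && line->address == address)
            return line->value;                 /* cache hit: use the copy */
        line->valid   = 1;                      /* cache miss: fetch and cache it */
        line->address = address;
        line->value   = main_memory[address];
        return line->value;
    }

    int main(void) {
        main_memory[7] = 99;
        printf("%d (miss)\n", read_with_cache(7));
        printf("%d (hit)\n",  read_with_cache(7));
        return 0;
    }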
Stacks
A stack is a sequentially ordered data structure that uses the last-in, first-out (LIFO) principle for adding and removing items: the last item placed on the stack is the first item removed.
The operations for inserting and removing items from a stack are known as push and pop, respectively. An operating system often uses a stack when invoking function calls.
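A minimal array-based sketch of push and pop (capacity and the empty-stack return value are arbitrary choices for the example):

    #include <stdio.h>

    #define STACK_MAX 16

    struct stack { int items[STACK_MAX]; int top; };

    /* push adds an item at the top of the stack */
    static void push(struct stack *s, int value) {
        if (s->top < STACK_MAX)
            s->items[s->top++] = value;
    }

    /* pop removes and returns the most recently pushed item (last in, first out) */
    static int pop(struct stack *s) {
        return (s->top > 0) ? s->items[--s->top] : -1;   /* -1 signals "empty" here */
    }

    int main(void) {
        struct stack s = { .top = 0 };
        push(&s, 1); push(&s, 2); push(&s, 3);
        printf("%d\n", pop(&s));   /* 3: the last value pushed comes off first */
        printf("%d\n", pop(&s));   /* 2 */
        printf("%d\n", pop(&s));   /* 1 */
        return 0;
    }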
Trees
A tree is a data structure that can be used to represent data hierarchically.
Data values in a tree structure are linked through parent–child relationships.
In a general tree, a parent may have an unlimited number of children. In a binary tree,
a parent may have at most two children, which we term the left child and the right
child.
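A short sketch of a binary tree node with left and right child links (the integer payload and in-order traversal are illustrative choices):

    #include <stdio.h>
    #include <stdlib.h>

    /* A binary tree node: each parent links to at most a left and a right child. */
    struct tree_node {
        int value;
        struct tree_node *left;
        struct tree_node *right;
    };

    static struct tree_node *make_node(int value) {
        struct tree_node *n = malloc(sizeof *n);   /* allocation check omitted */
        n->value = value;
        n->left = n->right = NULL;
        return n;
    }

    /* In-order traversal: left subtree, then parent, then right subtree. */
    static void in_order(const struct tree_node *n) {
        if (n == NULL) return;
        in_order(n->left);
        printf("%d ", n->value);
        in_order(n->right);
    }

    int main(void) {
        struct tree_node *root = make_node(2);
        root->left  = make_node(1);
        root->right = make_node(3);
        in_order(root);            /* prints: 1 2 3 */
        printf("\n");
        return 0;
    }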
Hash Functions and Maps
A hash function takes data as its input, performs a numeric operation on the data, and
returns a numeric value.
This numeric value can then be used as an index into a table (typically an array) to
quickly retrieve the data.
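A tiny sketch of hashing a key into a table index (the hash function and table size are arbitrary illustrations; collisions are ignored for brevity):

    #include <stdio.h>

    #define TABLE_SIZE 8

    /* A simple hash function over a string key: sum the characters and take the
       remainder by the table size; the result indexes directly into the table. */
    static unsigned hash(const char *key) {
        unsigned sum = 0;
        for (int i = 0; key[i] != '\0'; i++)
            sum += (unsigned char)key[i];
        return sum % TABLE_SIZE;
    }

    int main(void) {
        int table[TABLE_SIZE] = {0};
        table[hash("pid")] = 1234;                           /* store under the hashed index */
        printf("value for \"pid\": %d\n", table[hash("pid")]); /* retrieve via the same index */
        return 0;
    }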
Operating-System Services
The run-time environment of a programming language provides a system-call interface that serves as the link to system calls made available by the operating system.
The system-call interface intercepts function calls in the API and invokes the necessary
system calls within the operating system.
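As a concrete illustration (a standard POSIX example, not one named by the text), the C library routine write() is a thin wrapper: the program calls the library function, and the library issues the corresponding system call on its behalf:

    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello from a system call\n";
        /* file descriptor 1 (STDOUT_FILENO) is standard output */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }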
Types of System Calls
process control
A running program needs to be able to halt its execution either normally
(end()) or abnormally (abort()).
If a system call is made to terminate the currently running program
abnormally, or if the program runs into a problem and causes an error
trap, a dump of memory is sometimes taken and an error message
generated.
create process, terminate process
load, execute
get process attributes, set process attributes
wait event, signal event
allocate and free memory
file management
We first need to be able to create() and delete() files.
Either system call requires the name of the file and perhaps some of the
file’s attributes.
Once the file is created, we need to open() it and to use it. We may also
read(), write(), or reposition() (rewind or skip to the end of the file, for
example).
Finally, we need to close() the file, indicating that we are no longer using it (a short POSIX sketch of these calls follows the list below).
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
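As referenced above, a minimal POSIX sketch of these file-management calls (the file name is invented and error handling is kept to a minimum):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[16];

        /* create and open a file, then write to it */
        int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }
        write(fd, "hello", 5);

        /* reposition to the beginning and read the data back */
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read %zd bytes\n", n);

        /* close the file when no longer using it, then delete it */
        close(fd);
        unlink("demo.txt");
        return 0;
    }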
device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
information maintenance
get time or date, set time or date
get system data, set system data
get process, file, or device attributes
set process, file, or device attributes
communications
create, delete communication connection
send, receive messages
transfer status information
attach or detach remote devices
protection
get file permissions
set file permissions
System Services
System services, also known as system utilities, provide a convenient environment for
program development and execution.
Some of them are simply user interfaces to system calls. Others are considerably more
complex. They can be divided into these categories:
File management. These programs create, delete, copy, rename, print, list, and generally
access and manipulate files and directories.
Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are
more complex, providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal or other output devices or
files or display it in a window of the GUI. Some systems also support a registry, which is used
to store and retrieve configuration information.
File modification. Several text editors may be available to create and modify the content of
files stored on disk or other storage devices. There may also be special commands to search
contents of files or perform transformations of the text.
Programming-language support. Compilers, assemblers, debuggers, and interpreters for
common programming languages (such as C, C++, Java, and Python) are often provided with
the operating system or available as a separate download.
Program loading and execution. Once a program is assembled or compiled, it must be loaded
into memory to be executed. The system may provide absolute loaders, relocatable loaders,
linkage editors, and overlay loaders. Debugging systems for either higher-level languages or
machine language are needed as well.
Communications. These programs provide the mechanism for creating virtual connections
among processes, users, and computer systems. They allow users to send messages to one
another’s screens, to browse web pages, to send e-mail messages, to log in remotely, or to
transfer files from one machine to another.
Background services. All general-purpose systems have methods for launching certain
system-program processes at boot time. Some of these processes terminate after completing
their tasks, while others continue to run until the system is halted; such constantly running
processes are commonly known as services, subsystems, or daemons.
Layered Approach
The monolithic approach is often known as a tightly coupled system because changes to one
part of the system can have wide-ranging effects on other parts. Alternatively, we could design
a loosely coupled system.
A system can be made modular in many ways. One method is the layered approach, in which
the operating system is broken into a number of layers (levels).
Microkernels
We have already seen that the original UNIX system had a monolithic structure. As
UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s,
researchers at Carnegie Mellon University developed an
operating system called Mach that modularized the kernel using the microkernel
approach.
This method structures the operating system by removing all nonessential components
from the kernel and implementing them as user level programs that reside in separate
address spaces.
The result is a smaller kernel. There is little consensus regarding which services should
remain in the kernel and which should be implemented in user space.
The main function of the microkernel is to provide communication between the client
program and the various services that are also running in user space.
Modules
Perhaps the best current methodology for operating-system design involves using
loadable kernel modules (LKMs).
Here, the kernel has a set of core components and can link in additional services via
modules, either at boot time or during run time.
This type of design is common in modern implementations of UNIX, such as Linux, macOS, and Solaris, as well as Windows.
The idea of the design is for the kernel to provide core services, while other services are
implemented dynamically, as the kernel is running.
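For reference, a generic minimal Linux loadable kernel module looks roughly like the sketch below (this is not a module from the text; it must be built with the kernel's own build system, loaded with insmod, and removed with rmmod):

    /* Minimal Linux loadable kernel module sketch. */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello module loaded\n");   /* runs when the module is linked in */
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello module removed\n");  /* runs when the module is unloaded */
    }

    module_init(hello_init);
    module_exit(hello_exit);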
Hybrid Systems
In practice, very few operating systems adopt a single, strictly defined structure. Instead, they
combine different structures, resulting in hybrid systems that address performance, security,
and usability issues.
In the remainder of this section, we explore the structure of three hybrid systems: the Apple macOS operating system and the two most prominent mobile operating systems—iOS and Android.
This access matrix model presents a problem for secure systems: untrusted
processes can tamper with the protection system.
Using protection state operations, untrusted user processes can modify the
access matrix by adding new subjects, objects, or operations assigned to cells.
Unfortunately, the protection approach underlying the access matrix protection
state is naïve in today’s world of malware and connectivity to ubiquitous
network attackers.
Protection systems that can enforce secrecy and integrity goals must enforce the
requirement of security: where a system’s security mechanisms can enforce system
security goals even when any of the software outside the trusted computing base may
be malicious.
A mandatory protection state is a protection state where subjects and objects are
represented by labels where the state describes the operations that subject labels
may take upon object labels;
A labeling state for mapping processes and system resource objects to labels;
A transition state that describes the legal ways that processes and system
resource objects may be relabeled.
Reference Monitor
A reference monitor is the classical access enforcement mechanism.
It takes a request as input, and returns a binary response indicating whether
the request is authorized by the reference monitor’s access control policy.
We identify three distinct components of a reference monitor:
(1) its interface
(2) its authorization module
(3) its policy store.
The interface defines where the authorization module needs to be invoked to
perform an authorization query to the protection state, a labeling query to the
labeling state, or a transition query to the transition state.
The authorization module determines the exact queries that are to be made to
the policy store.
The policy store responds to authorization, labeling, and transition queries
based on the protection system that it maintains.
Reference Monitor Interface
The reference monitor interface defines where protection system queries are made to the reference monitor.
In particular, it ensures that all security-sensitive operations are authorized by the access
enforcement mechanism.
By a security-sensitive operation, we mean an operation on a particular object (e.g., file,
socket, etc.) whose execution may violate the system’s security requirements.
For example, an operating system implements file access operations that would allow one
user to read another’s secret data (e.g., private key) if not controlled by the operating system.
Labeling and transitions may be executed for authorized operations.
Authorization Module
The core of the reference monitor is its authorization module.
The authorization module takes the interface's inputs (e.g., process identity, object references, and system call name) and converts these to a query for the reference monitor's policy store.
The challenge for the authorization module is to map the process identity to a subject
label, the object references to an object label, and determine the actual operations to
authorize (e.g., there may be multiple operations per interface).
The protection system determines the choices of labels and operations, but the
authorization module must develop a means for performing the mapping to execute
the “right” query.
Policy Store
The policy store is a database for the protection state, labeling state, and transition
state.
An authorization query from the authorization module is answered by the policy store.
These queries are of the form {subject_label, object_label, operation_set} and return a
binary authorization reply.
Labeling queries are of the form {subject_label, resource} where the combination of
the subject and, optionally, some system resource attributes determine the resultant
resource label returned by the query.
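The sketch below illustrates the authorization-query side of such a policy store; the labels, rules, and the use of a single operation per query (rather than an operation set) are simplifications invented for the example:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* An authorization query is (subject_label, object_label, operation);
       the reply is a binary allow/deny. */
    struct rule { const char *subject; const char *object; const char *operation; };

    static const struct rule policy[] = {
        { "user",  "user_file",   "read"  },
        { "user",  "user_file",   "write" },
        { "admin", "system_file", "write" },
    };

    static bool authorize(const char *subject, const char *object, const char *operation) {
        for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++)
            if (strcmp(policy[i].subject, subject) == 0 &&
                strcmp(policy[i].object, object) == 0 &&
                strcmp(policy[i].operation, operation) == 0)
                return true;        /* a matching rule authorizes the operation */
        return false;               /* default deny */
    }

    int main(void) {
        printf("%d\n", authorize("user", "user_file", "read"));     /* 1: allowed */
        printf("%d\n", authorize("user", "system_file", "write"));  /* 0: denied  */
        return 0;
    }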
A secure operating system is an operating system whose access enforcement satisfies the reference monitor concept.
The reference monitor concept defines the necessary and sufficient properties of any system
that securely enforces a mandatory protection system, consisting of three guarantees:
Complete Mediation: The system ensures that its access enforcement mechanism mediates
all security-sensitive operations.
Tamperproof: The system ensures that its access enforcement mechanism, including its
protection system, cannot be modified by untrusted processes.
Verifiable: The access enforcement mechanism, including its protection system, “must be small enough to be subject to analysis and tests, the completeness of which can be assured.” That is, we must be able to prove that the system enforces its security goals correctly.
The reference monitor concept defines the necessary and sufficient requirements for access control in a secure operating system.
First, a secure operating system must provide complete mediation of all security-
sensitive operations.
If all these operations are not mediated, then a security requirement may not be
enforced (i.e., a secret may be leaked or trusted data may be modified by an untrusted
process).
Second, the reference monitor system, which includes its implementation and the
protection system, must all be tamperproof.
Tamperproof:
Verifying that a reference monitor is tamperproof requires verifying that all the
reference monitor components, the reference monitor interface, authorization
module, and policy store, cannot be modified by processes outside the system’s
trusted computing base (TCB) .
This also implies that the TCB itself is high integrity, so we ultimately must verify that
the entire
TCB cannot be modified by processes outside the TCB.
Thus, we must identify all the ways that the TCB can be modified, and verify that no
untrusted processes (i.e., those outside the TCB) can perform such modifications.
First, this involves verifying that the TCB binaries and data files are unmodified. This
can be accomplished by a multiple means, such as file system protections and binary
verification programs.
Verifiable
Finally, we must be able to verify that a reference monitor and its policy really enforce
the system security goals.
This requires verifying the correctness of the interface, module, and policy store
software, and evaluating whether the mandatory protection system truly enforces the
intended goals.
First, verifying the correctness of software automatically is an unsolved problem.
Tools have been developed that enable proofs of correctness for small amounts of
code and limited properties, but the problem of verifying a large set of correctness
properties for large codebases appears intractable.
In practice, correctness is evaluated with a combination of formal and manual
techniques which adds significant cost and time to development.
As a result, few systems have been developed with the aim of proving correctness,
and any comprehensive correctness claims are based on some informal analysis (i.e.,
they have some risk of being wrong).
Assessment Criteria
We must specify precisely how each system enforces the reference monitor guarantees in order to determine how an operating system aims to satisfy these guarantees.
In doing this, it turns out to be easy to expose an insecure operating system, but it is
difficult to define how close to “secure” an operating system is.
Based on the analysis of reference monitor guarantees above, we list a set of
dimensions that we use to evaluate the extent to which an operating system satisfies
these reference monitor guarantees.
1. Complete Mediation
How does the reference monitor interface ensure that all security-sensitive operations are mediated correctly?
In this answer, we describe how the system ensures that the subjects, objects, and operations being mediated are the ones that will be used in the security-sensitive operation. This can be a problem for some approaches (e.g., system call interposition [3, 6, 44, 84, 102, 115, 171, 250]), in which the reference monitor does not have access to the objects used by the operating system.
2. Complete Mediation
Does the reference monitor interface mediate security-sensitive
operations on all system resources?
We describe how the mediation interface described above mediates all
security-sensitive operations.
3. Complete Mediation
How do we verify that the reference monitor interface provides
complete mediation?
We describe any formal means for verifying the complete mediation
described above.
4. Tamperproof
How does the system protect the reference monitor, including its protection
system, from modification?
In modern systems, the reference monitor and its protection system are
protected by the operating system in which they run.
The operating system must ensure that the reference monitor cannot
be modified and the protection state can only be modified by trusted
computing base processes.
5. Tamperproof
Does the system’s protection system protect the trusted computing
base programs?
The reference monitor’s tamper proofing depends on the integrity of
the entire trusted computing base, so we examine how the trusted
computing base is defined and protected.
6. Verifiable
What is the basis for the correctness of the system's trusted computing base?
We outline the approach that is used to justify the correctness of the
implementation of all trusted computing base code.
7. Verifiable
Does the protection system enforce the system’s security goals?
Finally, we examine how the system’s policy correctly justifies the
enforcement of the system’s security goals.
The security goals should be based on the models in Chapter 5,
such that it is possible to test the access control policy formally.
Information Flow
Secure operating systems use information flow as the basis for specifying secrecy and
integrity security requirements. Conceptually, information flow is quite simple.
Information flow represents how data moves among subjects and objects in a system.
When a subject (e.g., process) reads from an object (e.g., a file), the data from the
object flows into the subject’s memory.
If there are secrets in the object, then information flow shows that these secrets may
flow to the subject when the subject reads the object. However, if the subject holds
the secrets, then information flow also can show that the subject may leak these
secrets if the subject writes to the object.
In an information flow model, each subject and object is assigned a security class.
Security classes are labels in the mandatory protection system defined in Definition 2.4, and both subjects and objects may share security classes.
Since processes in the lower security classes do not even know the names of objects in higher security classes, such writing is implemented by a polyinstantiated file system, where the files have instances at each security level, so the high-security process can read the lower data and update the higher-secrecy version without leaking whether there is a higher-secrecy version of the file.
Bell-LaPadula Model
The most common information flow model in secure operating systems for enforcing
secrecy requirements is the Bell-LaPadula (BLP) model.
There are a variety of models associated with Bell and LaPadula, but we describe a
common variant here, known as the Multics interpretation.
This BLP model is a finite lattice model where the security classes represent two
dimensions of secrecy: sensitivity level and need-to-know.
The sensitivity level of data is a total order indicating secrecy regardless of the type of data. In the BLP model, these levels consist of the four governmental security classes mentioned previously: top-secret, secret, confidential, and unclassified.
The simple-security property solves the obvious problem that subjects should not read
data that is above their security class.
That is, the BLP policy identifies unauthorized subjects for data as subjects whose
security class is dominated by the object’s security class.
Thus, the simple-security property prevents unauthorized subjects from receiving
data.
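The sketch below illustrates the simple-security check using only the total order of sensitivity levels (need-to-know categories are omitted, and the ⋆-property shown is the standard BLP companion rule, not spelled out in the notes above):

    #include <stdbool.h>
    #include <stdio.h>

    enum level { UNCLASSIFIED = 0, CONFIDENTIAL = 1, SECRET = 2, TOP_SECRET = 3 };

    /* simple-security property: a subject may read an object only if the
       subject's level dominates (is at least) the object's level. */
    static bool blp_can_read(enum level subject, enum level object) {
        return subject >= object;
    }

    /* *-property (standard BLP): a subject may write an object only if the
       object's level dominates the subject's level (no write down). */
    static bool blp_can_write(enum level subject, enum level object) {
        return subject <= object;
    }

    int main(void) {
        printf("secret reads unclassified:  %d\n", blp_can_read(SECRET, UNCLASSIFIED));  /* 1 */
        printf("secret reads top-secret:    %d\n", blp_can_read(SECRET, TOP_SECRET));    /* 0 */
        printf("secret writes unclassified: %d\n", blp_can_write(SECRET, UNCLASSIFIED)); /* 0 */
        return 0;
    }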
A multilevel security (MLS) model is a lattice model consisting of multiple sensitivity
levels.
While the BLP models are simply instances of MLS models, they are MLS models used
predominantly in practice.
Thus, the BLP models are synonymous with MLS models.
Second, the Bell-LaPadula model defines a labeling state where subjects and objects
are labelled based on the label of the process that created them.
At create time, a subject or object may be labelled at a security class that dominates
the security class of the creating process.
Once the subject or object is created and labeled, its label is static.
Information Flow Integrity Models
Secure operating systems sometimes include policies that explicitly protect the
integrity of the
system. Integrity protection is more subtle than confidentiality protection, however.
The integrity of a system is often described in more informal terms, such as “it
behaves as expected.”
A common practical view of integrity in the security community is: a process is said to
be high integrity if it does not depend on any low integrity inputs.
That is, if the process’s code and data originate from known, high integrity sources,
then we may assume that the process is running in a high integrity manner (e.g.,
as we would expect).
Biba Integrity Model
First, the simple-integrity property states that subject s can read an object o only if SC(s) ≤ SC(o). Thus, a subject can only read data that is at its security class or is of higher integrity.
Second, the ⋆-integrity property states that subject s can write an object o only if SC(s) ≥ SC(o). Thus, a subject can only write data that is at its security class or is of lower integrity.
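A short sketch of the two Biba checks just stated (the integrity class names and their ordering, low < user < trusted, are assumptions made for the example):

    #include <stdbool.h>
    #include <stdio.h>

    enum integrity { LOW = 0, USER = 1, TRUSTED = 2 };

    /* simple-integrity: s may read o only if SC(s) <= SC(o)
       (read only equal or higher integrity data). */
    static bool biba_can_read(enum integrity subject, enum integrity object) {
        return subject <= object;
    }

    /* *-integrity: s may write o only if SC(s) >= SC(o)
       (write only equal or lower integrity data). */
    static bool biba_can_write(enum integrity subject, enum integrity object) {
        return subject >= object;
    }

    int main(void) {
        printf("user reads trusted: %d\n", biba_can_read(USER, TRUSTED));   /* 1 */
        printf("user reads low:     %d\n", biba_can_read(USER, LOW));       /* 0 */
        printf("user writes low:    %d\n", biba_can_write(USER, LOW));      /* 1 */
        return 0;
    }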
Suppose that an object is a top-secret, user object (i.e., secrecy class, then integrity
class).
Only subjects that are authorized to read both top-secret objects according to the Bell-
LaPadula policy and user objects according to the Biba policy are permitted to read the
object.
For example, neither a (secret, low) subject nor a (top-secret, appl) subject is allowed to read this object, because the Biba and Bell-LaPadula requirements are not both satisfied for these subjects.
A subject must be able to both read the object in Bell-LaPadula (i.e., be top-secret) and read the object in Biba (i.e., be low or user).
As for reading, a subject’s integrity and secrecy classes must individually permit the
subject to write to the object for writes to be authorized.
Low-Water Mark Integrity
An alternative view of integrity is the Low-Water Mark integrity or LOMAC model [27, 101].
LOMAC differs from Biba in that the integrity of a subject or object is set equal to the lowest
integrity class input.
For example, a subject’s integrity starts at the highest integrity class, but as code, libraries, and
data are input, its integrity class drops to the lowest class of any of these inputs.
Similarly, a file’s integrity class is determined by the lowest integrity class of a subject that has
written data to the file.
LOMAC differs from BLP and Biba in that the integrity class of a subject or object may
change as the system runs. That is, a LOMAC transition state is nonnull, as the
protection state is not tranquil
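The low-water mark update can be sketched as follows (the three integrity classes and their ordering are assumptions for the example; only the subject-side update on read is shown):

    #include <stdio.h>

    enum integrity { LOW = 0, MEDIUM = 1, HIGH = 2 };

    /* The subject starts at the highest integrity class... */
    static enum integrity subject_integrity = HIGH;

    /* ...and reading an input lowers its class to the minimum of the two. */
    static void lomac_read(enum integrity input_class) {
        if (input_class < subject_integrity)
            subject_integrity = input_class;
    }

    int main(void) {
        lomac_read(HIGH);    /* still HIGH */
        lomac_read(LOW);     /* the subject drops to LOW */
        printf("subject integrity class: %d\n", subject_integrity);   /* 0 = LOW */
        return 0;
    }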
Clark- Wilson Integrity
Ten years after the Biba model, Clark and Wilson aimed to bring integrity back into the focus of security enforcement.
Clark-Wilson specified that high integrity data, called constrained data items (CDIs),
must be validated as high integrity by special processes, called integrity verification
procedures (IVPs), and could only be modified by high integrity processes, called
transformation procedures (TPs).
IVPs ensure that CDIs satisfy some known requirements for integrity (analogously to
double-bookkeeping in accounting), so that the system can be sure that it starts with
data that meets its integrity requirements.
TPs are analogous to high integrity processes in Biba in that only they may modify high
integrity data.
That is, low integrity processes may not write data of a higher integrity level (i.e., CDI
data). These two requirements are defined in two certification rules of the model, CR1
and CR2
UNIT V: SECURITY IN OPERATING SYSTEMS
UNIX Security – UNIX Protection System – UNIX Authorization – UNIX Security Analysis – UNIX Vulnerabilities – Windows Security – Windows Protection System – Windows Authorization – Windows Security Analysis – Windows Vulnerabilities – Address Space Layout Randomizations – Retrofitting Security into a Commercial Operating System – Introduction to Security Kernels