
OPERATING SYSTEMS

(21CS44)
Acknowledgment
The contents used in this presentation are courtesy of the textbook “Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, Operating System Principles, 7th edition, Wiley-India, 2006”
Syllabus – Module 1
Introduction to operating systems, System structures: What operating systems do; Computer System organization; Computer System architecture; Operating System structure; Operating System operations; Process management; Memory management; Storage management; Protection and Security; Distributed systems; Special-purpose systems; Computing environments.

Operating System Services; User–Operating System interface; System calls; Types of system calls; System programs; Operating system design and implementation; Operating System structure; Virtual machines; Operating System generation; System boot.

Process Management: Process concept; Process scheduling; Operations on processes; Inter-process communication
Text Books:
1. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, Operating System Principles, 7th edition, Wiley-India, 2006.

Reference Books
1. Ann McHoes, Ida M. Flynn, Understanding Operating Systems, Cengage Learning, 6th Edition.
2. D. M. Dhamdhere, Operating Systems: A Concept Based Approach, 3rd Ed, McGraw-Hill, 2013.
3. P. C. P. Bhatt, An Introduction to Operating Systems: Concepts and Practice, 4th Edition, PHI(EEE), 2014.
4. William Stallings, Operating Systems: Internals and Design Principles, 6th
Chapter 1: Introduction
 1.1 What Operating Systems Do
 1.2 Computer-System Organization
 1.3 Computer-System Architecture
 1.4 Operating-System Structure
 1.5 Operating-System Operations
 1.6 Process Management
 1.7 Memory Management
 1.8 Storage Management
 1.9 Protection and Security
 1.10 Distributed Systems
What is an Operating System?
 An operating system is a program that manages a computer’s hardware
 A program that acts as an intermediary between a user of a computer and the computer hardware
 Operating system goals:
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner
1.1 What Operating Systems Do
 A computer system can be divided roughly
into four components:

o The hardware,
o The operating system,
o The application programs,
o The users
Four Components of a Computer System
Computer System Structure
 Computer system can be divided into four components:
 Hardware – provides basic computing resources
 CPU, memory, I/O devices
 Application programs – define the ways in which the system resources are used to solve the computing problems of the users
 Word processors, compilers, web browsers, database systems, video games
 Operating system
 Controls and coordinates use of hardware among various applications and users
 Users
1.1.2 What Operating Systems Do
 Depends on the point of view
 User View
 Users want convenience, ease of use and good performance
 Don’t care about resource utilization
 But a shared computer such as a mainframe or minicomputer must keep all users happy
 Users of dedicated systems such as workstations have dedicated resources but frequently use shared resources from servers
 Handheld computers are resource poor, optimized for usability and battery life
System View
 OS is a resource allocator
 Manages all resources
 CPU time, memory space, file-storage space, I/O devices, and so on
 Decides between conflicting requests for efficient and fair resource use
 OS is a control program
 Controls execution of programs to prevent errors and improper use of the computer
1.2 Computer System Organization
 1.2.1 Computer-system operation
 One or more CPUs and device controllers connect through a common bus providing access to shared memory
 Concurrent execution of CPUs and devices competing for memory cycles
 To ensure orderly access to the shared memory, a memory controller synchronizes access to the memory
Computer Startup
 bootstrap program is loaded at power-up or reboot
 Typically stored in ROM or EPROM, generally known as firmware
 Initializes all aspects of system – from CPU registers to device controllers to memory contents
 Loads operating system kernel and starts execution
 Once the kernel is loaded and executing, it can start providing services to the system and its users
 Some services are provided outside of the kernel, by system programs that are loaded into memory at boot time to become system processes, or system daemons
Computer-System Operation
 I/O devices and the CPU can execute concurrently
 Each device controller is in charge of a particular device type
 Each device controller has a local buffer
 CPU moves data from/to main memory to/from local buffers
 I/O is from the device to local buffer of controller
 Device controller informs CPU that it has finished its operation by causing an interrupt
 The occurrence of an event is usually signaled by an interrupt from either the hardware or the software
 Hardware – through signals
 Software – system call
 When the CPU is interrupted, it stops what it is doing
 Immediately transfers execution to a fixed location
 The fixed location usually contains the starting address where the service routine for the interrupt is located
 The interrupt service routine executes; on completion, the CPU resumes the interrupted computation
• Each computer design has its own interrupt mechanism, but several functions are common
• The interrupt must transfer control to the appropriate interrupt service routine
• The straightforward method for handling this transfer would be to invoke a generic routine to examine the interrupt information
• The routine, in turn, would call the interrupt-specific handler
• However, interrupts must be handled quickly
• Since only a predefined number of interrupts is possible, a table of pointers to interrupt routines can be used instead to provide the necessary speed – the interrupt vector
• The interrupt routine is called indirectly through the table, with no intermediate routine needed
1.2.2 Storage Structure
• The CPU can load instructions only from memory
• Any programs to run must be stored there
• General-purpose computers run most of their programs from rewritable memory, called main memory (also called random-access memory, or RAM)
• Main memory commonly is implemented in a semiconductor technology called dynamic random-access memory (DRAM), organized as an array of memory words
• Load and store instructions (main memory to register – load; register to memory – store)
• Instruction-execution cycle
Storage Structure
 Main memory –
 Main memory is usually too small to store all needed programs and data permanently
 Typically volatile
 Secondary storage – extension of main memory that provides large nonvolatile storage capacity
 Hard disks – rigid metal or glass platters covered with magnetic recording material
 Disk surface is logically divided into tracks, which are subdivided into sectors
 The disk controller determines the logical interaction between the device and the computer
 Solid-state disks – faster than hard disks, nonvolatile
Storage Hierarchy
 Storage systems organized in a hierarchy by
 Speed
 Cost
 Volatility
Storage-Device Hierarchy
1.2.3 I/O Structure
 CPUs and multiple device controllers
 Device controller is in charge of a specific type of device
 Device controller has a local buffer and a set of special-purpose registers
 Data moves from device to local buffer
 OS has a device driver for each device controller
 To start an I/O operation,
 Device driver loads the appropriate registers within the device controller
 Device controller checks the contents of the registers to determine what action to take
 Controller starts the data movement between device and local buffer
 On completion, controller informs the driver through an interrupt
 Driver returns control to the OS
 Interrupt-driven I/O
Direct Memory Access Structure
 Used for high-speed I/O devices able to transmit information at close to memory speeds
 Device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention
 Only one interrupt is generated per block, rather than one interrupt per byte
How a Modern Computer Works

A von Neumann architecture
1.3 Computer-System Architecture
 Single-processor systems
 Multiprocessor systems
 Clustered systems
1.3.1 Single-Processor Systems
 On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes
 Almost all single-processor systems have other special-purpose processors as well, such as disk and keyboard controllers
 All of these special-purpose processors run a limited instruction set and do not run user processes
 Sometimes, they are managed by the operating system
 For example, a disk-controller microprocessor receives a sequence of requests from the main CPU and implements its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling
 PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU
1.3.2 Multiprocessor Systems
 Multiprocessor systems growing in use and importance
 Also known as parallel systems or tightly-coupled systems
 Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
 Two types:
1. Asymmetric Multiprocessing – each processor is assigned a specific task
2. Symmetric Multiprocessing – each processor performs all tasks
Symmetric Multiprocessing Architecture
A Dual-Core Design
 Multi-chip and multicore
 Systems containing all chips
 Chassis containing multiple separate systems
1.3.3 Clustered Systems
 Like multiprocessor systems, but multiple systems working together
 Usually sharing storage via a storage-area network (SAN)
 Provides a high-availability service which survives failures
 Asymmetric clustering has one machine in hot-standby mode
 Symmetric clustering has multiple nodes running applications, monitoring each other
 Some clusters are for high-performance computing (HPC)
 Applications must be written to use parallelization
 Some have distributed lock manager (DLM) to avoid conflicting operations
Clustered Systems
1.4 Operating System Structure
 Multiprogramming (batch system) needed for efficiency
 Single user cannot keep CPU and I/O devices busy at all times
 Multiprogramming organizes jobs (code and data) so CPU always has one to execute
 A subset of total jobs in system is kept in memory
 One job selected and run via job scheduling
 When it has to wait (for I/O, for example), OS switches to another job
Memory Layout for Multiprogrammed System
Operating System Structure
 Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that users can interact with each job while it is running, creating interactive computing
 Response time should be < 1 second
 Each user has at least one program executing in memory  process
 If several jobs are ready to run at the same time  CPU scheduling
 If processes don’t fit in memory, swapping moves them in and out to run
 Virtual memory allows execution of processes not completely in memory
1.5 Operating-System Operations
 Interrupt driven (hardware and software)
 Hardware interrupt by one of the devices
 Software interrupt (exception or trap):
 Software error (e.g., division by zero)
 Request for operating system service
 Other process problems include infinite loops, processes modifying each other or the operating system
 Since the operating system and the users share the hardware and software resources of the computer system, we need to make sure that an error in a user program can cause problems only for the one program running
 Without protection against these sorts of errors, either the computer must execute only one process at a time or all output must be suspect
 A properly designed operating system must ensure that an incorrect (or malicious) program cannot cause other programs to execute incorrectly
Operating-System Operations (cont.)
 1.5.1 Dual-mode operation allows OS to protect itself and other system components
 User mode (1) and kernel mode (0)
 Mode bit provided by hardware
 Provides ability to distinguish when system is running user code or kernel code
 Some instructions designated as privileged, only executable in kernel mode
 System call changes mode to kernel, return from call resets it to user
 Increasingly, CPUs support multi-mode operations
 i.e. virtual machine manager (VMM) mode for guest VMs
Transition from User to Kernel Mode
1.5.2 Timer
 OS should have control over the CPU
 Cannot allow a user program to get stuck
 A timer can be set to interrupt the computer after a specified period
 Period can be fixed or variable
 The operating system sets the counter
 Every time the clock ticks, the counter is decremented
 When the counter reaches 0, an interrupt occurs
1.6 Process Management
 A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity
 Process needs resources to accomplish its task
 CPU, memory, I/O, files
 Initialization data
 Process termination requires reclaiming of any reusable resources
 Single-threaded process has one program counter specifying location of next instruction to execute
 Process executes instructions sequentially, one at a time, until completion
Process Management Activities
The operating system is responsible for the following activities in connection with process management:
 Creating and deleting both user and system processes
 Scheduling processes and threads on the CPUs
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling
1.7 Memory Management
 To execute a program, all (or part) of the instructions must be in memory
 All (or part) of the data that is needed by the program must be in memory
 Memory management determines what is in memory and when
 Optimizing CPU utilization and computer response to users
 Memory management activities
 Keeping track of which parts of memory are currently being used and by whom
 Deciding which processes (or parts thereof) and data to move into and out of memory
1.8 Storage Management
 OS provides uniform, logical view of information storage
 Abstracts physical properties to logical storage unit – file
 Each medium is controlled by a device (i.e., disk drive, tape drive)
 Varying properties include access speed, capacity, data-transfer rate, access method (sequential or random)
 1.8.1 File-System Management
 Files usually organized into directories
 Access control on most systems to determine who can access what
 OS activities include
 Creating and deleting files and directories
 Primitives to manipulate files and directories
 Mapping files onto secondary storage


1.8.2 Mass-Storage Management
 Usually disks used to store data that does not fit in main memory or data that must be kept for a “long” period of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and its algorithms
 OS activities
 Free-space management
 Storage allocation
 Disk scheduling
 Some storage need not be fast
 Tertiary storage includes optical storage, magnetic tape
 Still must be managed – by OS or applications
 Varies between WORM (write-once, read-many-times) and RW (read-write)
1.8.3 Caching
 Cache management
 Cache size
 Replacement policy
Performance of Various Levels of Storage
 Movement between levels of storage hierarchy can be explicit or implicit
Migration of data “A” from Disk to Register
 Multitasking environments must be careful to use most recent value, no matter where it is stored in the storage hierarchy
 Multiprocessor environment must provide cache coherency in hardware such that all CPUs have the most recent value in their cache
 Distributed environment situation even more complex
 Several copies of a datum can exist
1.8.4 I/O Systems
 One purpose of an OS is to hide the peculiarities of specific hardware devices from the user
 I/O subsystem components:
 Memory management components: buffering, caching and spooling
 General device-driver interface
 Drivers for specific hardware devices
 Only the device driver knows the peculiarities of the specific device to which it is assigned
1.9 Protection and Security
 Protection – any mechanism for controlling access of processes or users to resources defined by the OS
 Security – defense of the system against internal and external attacks
 Huge range, including denial-of-service, worms, viruses, identity theft, theft of service
 Systems generally first distinguish among users, to determine who can do what
 User identities (user IDs, security IDs) include name and associated number, one per user
 User ID then associated with all files and processes of that user to determine access control
 Group identifier (group ID) allows a set of users to be defined and controls managed, then also associated with each process, file
1.10 Distributed Systems
 Distributed computing
 Collection of separate, possibly heterogeneous, systems networked together
 Network is a communications path, TCP/IP most common
- Local Area Network (LAN)
- Wide Area Network (WAN)
- Metropolitan Area Network (MAN)
- Personal Area Network (PAN)
 Network Operating System provides features between systems across network
 Communication scheme allows systems to exchange messages
 Illusion of a single system
1.11 Special-Purpose Systems
1.11.1 Real-Time Embedded Systems
 Real-time embedded systems are the most prevalent form of computers
 Vary considerably; special purpose, limited purpose OS, real-time OS
 Use expanding
 Many other special computing environments as well
 Some have OSes, some perform tasks without an OS
 Real-time OS has well-defined fixed time constraints
 Processing must be done within the constraint
1.11.2 Multimedia Systems
 Audio and video
1.11.3 Handheld Systems
 PDAs, cellphones, etc
1.12.1 Computing Environments - Traditional
 Stand-alone general purpose machines
 But blurred as most systems interconnect with others (i.e., the Internet)
 Portals provide web access to internal systems
 Network computers (thin clients) are like Web terminals
 Mobile computers interconnect via wireless networks
 Networking becoming ubiquitous – even home systems use firewalls to protect home computers from Internet attacks
1.12.2 Computing Environments – Client-Server
 Client-Server Computing
 Dumb terminals supplanted by smart PCs
 Many systems now servers, responding to requests generated by clients
 Compute-server system provides an interface to client to request services (i.e., database)
 File-server system provides interface for clients to store and retrieve files
1.12.3 Computing Environments - Peer-to-Peer
 Another model of distributed system
 P2P does not distinguish clients and servers
 Instead all nodes are considered peers
 Each node may act as client, server or both
 Node must join P2P network
 Registers its service with central lookup service on network, or
 Broadcasts request for service and responds to requests for service via discovery protocol
 Examples include Napster and Gnutella, Voice over IP (VoIP) such as Skype
Web-Based Computing
 Has become ubiquitous
 PCs, workstations, handheld PDAs, cell phones provide access
 Increases emphasis on networking
 Devices include wired, wireless; faster network connectivity provided by networking technology
 Web-based computing has given rise to load balancing, which distributes network connections among a pool of similar servers
 Web has increased the complexity of devices
End of Chapter 1
Chapter 2: Operating-System Structures
 2.1 Operating System Services
 2.2 User Operating System Interface
 2.3 System Calls
 2.4 Types of System Calls
 2.5 System Programs
 2.6 Operating System Design and
Implementation
 2.7 Operating System Structure
 2.8 Virtual Machines
 2.9 Operating System Generation
 2.10 System Boot
Objectives
 To describe the services an operating system provides to users, processes, and other systems
 To discuss the various ways of structuring an operating system
 To explain how operating systems are installed and customized and how they boot
2.1 Operating System Services
 Operating systems provide an environment for execution of programs and services to programs and users
 One set of operating-system services provides functions that are helpful to the user:
 User interface - Almost all operating systems have a user interface (UI)
 Varies between Command-Line (CLI), Graphical User Interface (GUI), Batch
 Program execution - The system must be able to load a program into memory and to run that program, end execution, either normally or abnormally (indicating error)
Operating System Services (Cont.)
 One set of operating-system services provides functions that are helpful to the user (Cont.):
 File-system manipulation - The file system is of particular interest. Programs need to read and write files and directories, create and delete them, search them, list file information, permission management
 Communications – Processes may exchange information, on the same computer or between computers over a network
 Communications may be via shared memory or through message passing (packets moved by the OS)
 Error detection – OS needs to be constantly aware of possible errors
 May occur in the CPU and memory hardware, in I/O devices, in user programs
 For each type of error, OS should take the appropriate action to ensure correct and consistent computing
Operating System Services (Cont.)
 Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing
 Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them
 Many types of resources - CPU cycles, main memory, file storage, I/O devices
 Accounting - To keep track of which users use how much and what kinds of computer resources
 Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information; concurrent processes should not interfere with each other
 Protection involves ensuring that all access to system resources is controlled
 Security of the system from outsiders requires user authentication, extends to defending external I/O devices
A View of Operating System Services
2.2.1 User Operating System Interface - CLI
CLI or command interpreter allows direct command entry
 Sometimes implemented in kernel, sometimes by systems program
 Sometimes multiple flavors implemented – shells: Bourne shell, C shell, Korn shell, etc.
 Primarily fetches a command from user and executes it
 Sometimes commands built-in, sometimes just names of programs like rm file.txt
 If the latter, adding new features doesn’t require shell modification
2.2.2 User Operating System Interface - GUI
 User-friendly desktop metaphor interface
 Usually mouse, keyboard, and monitor
 Icons represent files, programs, actions, etc.
 Various mouse buttons over objects in the interface cause various actions (provide information, options, execute function, open directory (known as a folder))
 Invented at Xerox PARC
 Many systems now include both CLI and GUI interfaces
 Microsoft Windows is GUI with CLI “command” shell
 Apple Mac OS X is “Aqua” GUI interface with UNIX kernel underneath
2.3 System Calls
 Programming interface to the services provided by the OS
 Typically written in a high-level language (C or C++, and also assembly-language instructions)
Example of System Calls
 System call sequence to copy the contents of one file to another file
System Calls…
 Mostly accessed by programs via a high-level Application Programming Interface (API) rather than direct system call use
 API – functions, parameters and return values – e.g. read()
 Reasons to use an API rather than system calls:
 Supports portability
 Reduces complexity
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM)
Example of Standard API
System Call Implementation
 System call interface – link between system call and function name in the API
 Typically, a number associated with each system call
 System-call interface maintains a table indexed according to these numbers
 The system call interface invokes the intended system call in OS kernel and returns status of the system call and any return values
 The caller need know nothing about how the system call is implemented
 Just needs to obey API and understand what OS will do as a result of the call
 Most details of OS interface hidden from programmer by API
API – System Call – OS Relationship
System Call Parameter Passing
 Often, more information is required than simply the identity of the desired system call
 Exact type and amount of information vary according to OS and call
 Three general methods used to pass parameters to the OS
 Simplest: pass the parameters in registers
 In some cases, may be more parameters than registers
 Parameters stored in a block, or table, in memory, and address of block passed as a parameter in a register
 This approach taken by Linux and Solaris
 Parameters placed, or pushed, onto the stack by the program and popped off the stack by the operating system
Parameter Passing via Table
2.4 Types of System Calls
 1. Process control
 create process, terminate process
 end, abort
 load, execute
 get process attributes, set process attributes
 wait for time
 wait event, signal event
 allocate and free memory
 Dump memory if error
 Debugger for determining bugs, single step execution
 Locks for managing access to shared data
between processes
Standard C Library Example
 C program invoking printf() library
call, which calls write() system call
Example: MS-DOS
 Single-tasking
 Shell invoked when system booted
 Simple method to run a program
 No process created
 Single memory space
 Loads program into memory, overwriting all but the kernel
 Program exit -> shell reloaded
(a) At system startup (b) running a program
Example: FreeBSD
 Unix variant
 Multitasking
 User login -> invoke user’s shell of choice
 Shell executes fork() system call to create process
 Executes exec() to load program into process
 Shell waits for process to terminate or continues with user commands
 Process exits with:
 code = 0 – no error
Types of System Calls
 2. File management
 create file, delete file
 open, close file
 read, write, reposition
 get and set file attributes
 3. Device management
 request device, release device
 read, write, reposition
 get device attributes, set device
attributes
 logically attach or detach devices
Types of System Calls (Cont.)
 4. Information maintenance
 get time or date, set time or date
 get system data, set system data
 get and set process, file, or device attributes
 5. Communications
 create, delete communication connection
 send, receive messages (if message-passing model) to host name or process name, from client to server
 Shared-memory model: create and gain access to memory regions
 transfer status information
 attach and detach remote devices
Types of System Calls (Cont.)
 6. Protection
 Control access to resources
 Get and set permissions
 Allow and deny user access
Examples of Windows and Unix System Calls
2.5 System Programs
 System programs provide a convenient environment for program development and execution
 Most users’ view of the operating system is defined by system programs, not the actual system calls
System Programs
 Provide a convenient environment for program development and execution
 Some of them are simply user interfaces to system calls; others are considerably more complex
 1. File management - Create, delete, copy, rename, print, dump, list, and generally manipulate files and directories
 2. Status information
 Some ask the system for info - date, time, amount of available memory, disk space, number of users
 Others provide detailed performance, logging, and debugging information
 Typically, these programs format and print the output to the terminal or other output devices
 Some systems implement a registry - used to store and retrieve configuration information
System Programs (Cont.)
 3. File modification
 Text editors to create and modify files
 Special commands to search contents of files or perform transformations of the text
 4. Programming-language support – Compilers, assemblers, debuggers and interpreters sometimes provided
 5. Program loading and execution
 Absolute loaders, relocatable loaders, linkage editors, and overlay loaders; debugging systems for higher-level and machine language
 6. Communications - Provide the mechanism for creating virtual connections among processes, users, and computer systems
 Allow users to send messages to one another’s screens, browse web pages, send electronic-mail messages, log in remotely, transfer files
2.6 Operating System Design and Implementation
 Design and Implementation of OS not “solvable”, but some approaches have proven successful
 Internal structure of different Operating Systems can vary widely
 Start the design by defining goals and specifications
 Affected by choice of hardware, type of system (batch, time-shared, single user, multi-user, distributed, real-time or general purpose)
 User goals and System goals
 User goals – operating system should be convenient to use, easy to learn, reliable, safe, and fast
 System goals – operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient
Operating System Design and Implementation (Cont.)
 Important principle to separate
 Policy: What will be done?
 Mechanism: How to do it?
 Mechanisms determine how to do something, policies decide what will be done
 The separation of policy from mechanism is a very important principle; it allows maximum flexibility if policy decisions are to be changed later (example – timer)
 Specifying and designing an OS is a highly creative task of software engineering
Implementation
 Much variation
 Early OSes in assembly language
 Then system programming languages like Algol, PL/1
 Now C, C++
 Actually usually a mix of languages
 Lowest levels in assembly
 Main body in C
 Systems programs in C, C++, scripting languages like PERL, Python, shell scripts
 More high-level language: easier to port to other hardware
 But slower, more storage requirements
2.7 Operating System Structure
 General-purpose OS is a very large program
 Various ways to structure one:
 Simple structure – MS-DOS
 More complex – UNIX
 Layered – an abstraction
 Microkernel – Mach
 Modules
 Hybrid
2.7.1 Simple Structure – MS-DOS
 MS-DOS – written to provide the most functionality in the least space
 Not divided into modules
 Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated
Non Simple Structure -- UNIX
 UNIX – limited by hardware functionality, the original UNIX
operating system had limited structuring. The UNIX OS consists
of two separable parts:
 Systems programs
 The kernel
 Consists of everything below the system-call interface and
above the physical hardware
 Provides the file system, CPU scheduling, memory
management, and other operating-system functions; a large
number of functions for one level
Traditional UNIX System Structure
 Beyond simple but not fully layered
2.7.2 Layered
Approach
 The operating system is divided
into a number of layers (levels),
each built on top of lower layers.
The bottom layer (layer 0), is the
hardware; the highest (layer N) is
the user interface.
 With modularity, layers are selected
such that each uses functions
(operations) and services of only
lower-level layers
2.7.3 Microkernel System Structure
 Moves as much as possible from the kernel into user space
 Mach is an example of a microkernel
 Mac OS X kernel (Darwin) partly based on Mach
 Communication takes place between user modules using
message passing
 Benefits:
 Easier to extend a microkernel
 Easier to port the operating system to new architectures
 More reliable (less code is running in kernel mode)
 More secure
 Detriments:
 Performance overhead of user space to kernel
space communication
Microkernel System Structure (figure)
 User mode: application program, file system, and device driver, exchanging messages
 Kernel mode: the microkernel (interprocess communication, memory management, CPU scheduling)
 Hardware lies below the microkernel
2.7.4 Modules
 Many modern operating systems implement loadable kernel
modules
 Uses object-oriented approach
 Each core component is separate
 Each talks to the others over known interfaces
 Each is loadable as needed within the kernel
 Overall, similar to layers but more flexible
 Linux, Solaris, etc
Solaris Modular Approach
2.7.5 Hybrid Systems
 Most OSes today do not strictly adhere to one architecture, but are
hybrids of several.

Mac OS X
 The Mac OS X architecture relies on the Mach microkernel for basic
system management services, and the BSD kernel for additional
services.
 Application services and dynamically loadable modules ( kernel
extensions ) provide the rest of the OS functionality:
2.8 Virtual Machines
 To abstract the hardware of a single computer into several different
execution environment.
 The concept of a virtual machine is to provide an interface that looks
like independent hardware to multiple different OSes running
simultaneously on the same physical hardware.
 Each OS believes that it has access to and control over its own CPU,
RAM, I/O devices, hard drives, etc.
 One obvious use for this system is for the development and testing of
software that must run on multiple platforms and/or OSes.
 One obvious difficulty involves the sharing of hard drives, which are
generally partitioned into separate smaller virtual disks (minidisks) for
each operating system.
 Virtual machines first appeared as the VM Operating System for IBM
mainframes in 1972.
2.8.1 Implementation
 Implementation may be challenging, partially due to the
consequences of user versus kernel mode.
 Each of the simultaneously running kernels needs to operate in
kernel mode at some point, but the virtual machine actually runs
in user mode.
 So the kernel mode has to be simulated for each of the loaded
OSes, and kernel system calls passed through the virtual
machine into a true kernel mode for eventual HW access.
 The virtual machines may run slower, due to the increased levels of
code between applications and the HW, or they may run faster, due
to the benefits of caching or spooling. ( And virtual devices may
also be faster than real devices, such as RAM disks which are
faster than physical disks. )
2.8.2 Benefits
 Each OS runs independently of all the others, offering protection
and security benefits.
 Sharing of physical resources is not commonly implemented, but
may be done by:
 1. sharing a minidisk (and thus files)
 2. defining a network of virtual machines
 Virtual machines are a very useful tool for OS development, as
they allow a user full access to and control over a virtual machine,
without affecting other users operating the real machine.
 This approach can also be useful for product development and
testing of SW that must run on multiple OSes / HW platforms.
2.8.3 Examples
1. VMware
 Abstracts the 80x86 hardware platform, allowing simultaneous
operation of multiple Windows and Linux OSes
2. The Java Virtual Machine
 Java was designed to be platform independent, by running Java
only on a Java Virtual Machine (JVM), of which different
implementations have been developed for numerous different
underlying HW platforms.
 Java source code is compiled into Java byte code in .class files.
 The JVM implements memory management and garbage collection.
 Java byte code may be interpreted as it runs, or compiled to native
system binary code using just-in-time ( JIT ) compilation.
 The first native binary code is cached, so that the next time that
piece of code is encountered it can be used directly.
 Some hardware chips have been developed to run Java byte code
directly, which is an interesting application of a real machine being
developed to emulate the services of a virtual one!
2.9 Operating System Generation
 Operating systems are designed to run on any of a class of machines;
the system must be configured for each specific computer site
 SYSGEN program obtains information concerning the specific
configuration of the hardware system
 Used to build system-specific compiled kernel or system-tuned
 Can generate more efficient code than one general kernel
 OSes may be designed and built for a specific HW configuration at a
specific site, but more commonly they are designed with a number of
variable parameters and components, which are then configured for a
particular operating environment.
 The operating system is normally distributed on disk or CD-ROM. A
special program is used to generate the system.
2.9 Operating System Generation (Cont.)
 SYSGEN program reads from a given file. Information that
is needed to configure an OS includes:
 What CPU(s) are installed on the system, and what
optional characteristics does each have?
 How much RAM is installed? (This may be determined
automatically, either at install or boot time.)
 What devices are present? The OS needs to determine
which device drivers to include, as well as some device-specific
characteristics and parameters.
 What OS options are desired, and what values to set for particular
OS parameters. The latter may include the size of the open file
table, the number of buffers to use, process scheduling ( priority )
parameters, disk scheduling algorithms, number of slots in the
process table, etc.
 At one extreme, a system administrator can use it to modify the OS
source code, which can be edited, re-compiled, and linked into a new
kernel.
 More commonly, configuration tables determine which modules to
link into the new kernel, and what values to set for some key
parameters. This approach may require the configuration
of complicated makefiles.
 At the other extreme a system configuration may be entirely
defined by table driven data. All code is always part of system, and
selection occurs at execution time.
 Once a system has been regenerated, it is usually required to reboot
the system to activate the new kernel.
2.10 System Boot
 When power is initialized on the system, execution starts at a fixed memory
location
 Firmware ROM used to hold initial boot code
 Operating system must be made available to hardware, so hardware
can start it
 Small piece of code – bootstrap loader, stored in ROM or
EEPROM, locates the kernel, loads it into memory, and starts it
 Sometimes a two-step process where a boot block at a fixed
location is loaded by ROM code, which loads the bootstrap loader from
disk
 Common bootstrap loader, GRUB (GRand Unified Bootloader),
allows selection of kernel from multiple disks, versions, kernel options
 Kernel loads and system is then running
End of Chapter 2
Chapter 3: Processes
Chapter 3:
Processes
 3.1 Process Concept
 3.2 Process Scheduling
 3.3 Operations on Processes
 3.4 Interprocess Communication
Objectives
 To introduce the notion of a process -- a program in
execution, which forms the basis of all computation
 To describe the various features of processes, including scheduling,
creation and termination, and communication
 To explore interprocess communication using shared memory
and message passing
3.1 Process Concept
 An operating system executes a variety of programs:
 Batch system – jobs
 Time-shared systems – user programs or tasks
 Textbook uses the terms job and process almost interchangeably
 Process – a program in execution; process execution must
progress in sequential fashion
 Multiple parts
 The program code, also called text section
 Current activity including program counter, processor
registers
 Stack containing temporary data
Function parameters, return addresses, local variables
 Data section containing global variables
Process Concept (Cont.)
 Program is a passive entity stored on disk (executable file); process is
active
 Program becomes a process when the executable file is loaded into
memory
 Execution of a program started via GUI mouse clicks, command
line entry of its name, etc.
 One program can be several processes
 Consider multiple users executing the same program
Process in Memory
3.1.2 Process State
 As a process executes, it changes state
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution
 Suspend ready
 Suspend wait
Diagram of Process State
3.1.3 Process Control Block (PCB)
Information associated with each
process (also called task control block)
 Process state – running, waiting, etc
 Program counter – location of instruction to
next execute
 CPU registers – contents of all process-
centric registers
 CPU scheduling information- priorities,
scheduling queue pointers
 Memory-management information –
memory allocated to the process
 Accounting information – CPU used, clock
time
elapsed since start, time limits
 I/O status information – I/O devices allocated
to process, list of open files
CPU Switch From Process to Process
3.1.4 Threads
 So far, process has a single thread of execution
 Consider having multiple program counters per process
 Multiple locations can execute at once
 Multiple threads of control -> threads
 Must then have storage for thread details, multiple program
counters in PCB
3.2 Process Scheduling
 Maximize CPU use, quickly switch processes onto CPU for
time sharing
 Process scheduler selects among available processes for next
execution on CPU
 Maintains scheduling queues of processes
 Job queue – set of all processes in the system
 Ready queue – set of all processes residing in main memory,
ready and waiting to execute
 Device queues – set of processes waiting for an I/O device
 Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling
 Queueing diagram represents queues, resources, flows
3.2.2 Schedulers
 Process – migrates among various queues
 OS must select processes from these queues in some fashion
 The selection process is carried out by the appropriate scheduler
 Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
 Long-term scheduler is invoked infrequently (seconds,
minutes) (may be slow)
 The long-term scheduler controls the degree of
multiprogramming
 Short-term scheduler (or CPU scheduler) – selects which
process should be executed next and allocates CPU
 Sometimes the only scheduler in a system
 Short-term scheduler is invoked frequently (milliseconds)
Schedulers
 Long term scheduler should make a careful selection
 Processes can be described as either:
 I/O-bound process – spends more time doing I/O
than computations, many short CPU bursts
 CPU-bound process – spends more time doing computations; few
very long CPU bursts
 Long-term scheduler strives for a good process mix
 In some systems the long-term scheduler may be absent
 Example – time-sharing systems like UNIX and Microsoft Windows
Addition of Medium Term Scheduling
 Medium-term scheduler can be added if the degree of
multiprogramming needs to decrease
 Remove process from memory, store on disk, bring back in
from disk to continue execution: swapping
3.2.3 Context Switch
 When CPU switches to another process, the system must save the
state of the old process and load the saved state ( state restore)
for the new process via a context switch
 Context of a process represented in the PCB
 Context-switch time is overhead; the system does no useful work
while switching
 The more complex the OS and the PCB, the longer the
context switch
 Time depends on the memory speed, the number of registers
that must be copied, and the existence of special instructions
 Also dependent on hardware support
 Some hardware provides multiple sets of registers per CPU:
a context switch then simply requires changing the pointer to the
current register set
3.3 Operations on Processes
 System must provide mechanisms for:
 process creation,
 process termination,
 and so on as detailed next
3.3.1 Process Creation
 Parent process creates children processes, which, in turn,
create other processes, forming a tree of processes
 Generally, a process is identified and managed via a
process identifier (pid)
 Resource sharing options
 Parent and children share all resources
 Children share subset of parent’s
resources
 Parent and child share no resources
 Execution options
 Parent and children execute concurrently
 Parent waits until children terminate
A Tree of Processes in Linux

init (pid = 1)
    login (pid = 8415)
        bash (pid = 8416)
            ps (pid = 9298)
            emacs (pid = 9204)
    kthreadd (pid = 2)
        khelper (pid = 6)
        pdflush (pid = 200)
    sshd (pid = 3028)
        sshd (pid = 3610)
            tcsch (pid = 4005)
Process Creation (Cont.)
 Address space
 Child duplicate of parent
 Child has a program loaded into it
 UNIX examples
 fork() system call creates new process
 exec() system call used after a fork() to replace the
process’ memory space with a new program
C Program Forking Separate Process
Creating a Separate Process via Windows API
3.3.2 Process Termination
 Process executes last statement and then asks the
operating system to delete it using the exit() system call.
 Returns status data from child to parent (via wait())
 Process' resources are deallocated by operating system
 Parent may terminate the execution of children
processes using the abort() system call. Some reasons for
doing so:
 Child has exceeded allocated resources
 Task assigned to child is no longer required
 The parent is exiting, and the operating system does
not allow a child to continue if its parent terminates
Process Termination
 Some operating systems do not allow a child to exist if its parent has
terminated. If a process terminates, then all its children must also be
terminated.
 cascading termination. All children, grandchildren, etc. are
terminated.
 The termination is initiated by the operating system.
 The parent process may wait for termination of a child process by
using the wait()system call. The call returns status information and
the pid of the terminated process
pid = wait(&status);
 If no parent waiting (did not invoke wait()) process is a zombie
 If parent terminated without invoking wait , process is an orphan
3.4 Interprocess Communication
 Processes within a system may be independent or cooperating
 Independent - if it cannot affect or be affected by the other processes
executing in the system – do not share data with any
 Cooperating process can affect or be affected by other processes,
including sharing data
 Reasons for cooperating processes:
 Information sharing
 Computation speedup
 Modularity
 Convenience
 Cooperating processes need interprocess communication (IPC)
 Two models of IPC
 Shared memory
 Message passing
Communications Models
(a) Message passing. (b) Shared memory.
 Message passing is good for exchanging smaller amounts of
data
 Easier to implement
 Shared memory allows maximum speed
 Faster than message passing because message passing
requires system calls
3.4.1 Interprocess Communication – Shared Memory
 An area of memory shared among the processes that wish to
communicate
 A shared-memory region resides in the address space of
the process creating the shared-memory segment
 The other processes must attach it to their address space
 The communication is under the control of the user processes,
not the operating system.
 Major issue is to provide a mechanism that will allow the user
processes to synchronize their actions when they access
shared memory.
 Synchronization is discussed in great detail in Chapter 5.
Producer-Consumer Problem
 Example for cooperating processes
 Producer process produces information that is consumed by a consumer
process
 Example: a compiler may produce assembly code, which is consumed by
an assembler
 One solution to this is shared memory
 To allow producer and consumer to run concurrently we must have
available a buffer that can be filled by the producer and emptied by the consumer
 A producer can produce one item while the consumer is consuming
another item
 Synchronization is important
 Two types of buffer can be used
 unbounded buffer – places no practical limit on the size of the buffer
 bounded buffer – assumes a fixed buffer size
Bounded-Buffer – Shared-Memory Solution
 Shared data

#define BUFFER_SIZE 10
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

 Solution is correct, but can only use BUFFER_SIZE-1 elements
 If in == out, the buffer is empty
 If ((in+1)%BUFFER_SIZE) == out, the buffer is full
Bounded-Buffer – Producer

item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded-Buffer – Consumer

item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
3.4.2 Interprocess Communication – Message Passing
 Mechanism for processes to communicate and to synchronize
their actions without sharing the address space
 Useful for distributed systems where processes reside on
different systems
 IPC facility provides two operations:
 send(message)
 receive(message)
 The message size is either fixed or variable
 Fixed size: easy to implement, but programming becomes more difficult
 Variable size: harder to implement, but easier for programming
Message Passing (Cont.)
 If processes P and Q wish to communicate, they need to:
 Establish a communication link between them
 Exchange messages via send/receive
 Implementation issues:
 How are links established?
 Can a link be associated with more than two processes?
 How many links can there be between every pair of
communicating processes?
 What is the capacity of a link?
 Is the size of a message that the link can accommodate fixed
or variable?
Message Passing (Cont.)
 Implementation of communication link
 Physical:
 Shared memory
 Hardware bus
 Network
 Logical:
 Direct or indirect
 Synchronous or asynchronous
 Automatic or explicit buffering
Direct Communication
 Processes must name each other explicitly:
 send (P, message) – send a message to process P
 receive(Q, message) – receive a message from process Q
 Properties of communication link
 Links are established automatically
 A link is associated with exactly one pair of communicating processes
 Between each pair there exists exactly one link
 The link may be unidirectional, but is usually bi-directional
 This scheme exhibits symmetry in addressing: both sender and receiver
must name each other
 With asymmetry in addressing, only the sender names the recipient:
 send(P, message) – send a message to process P
 receive(id, message) – receive a message from any process; the id is set
to the name of the process with which communication has taken place
Indirect Communication
 Messages are directed and received from mailboxes (also
referred to as ports)
 Each mailbox has a unique id
 Processes can communicate only if they share a mailbox
 Properties of communication link
 Link established only if processes share a common mailbox
A link may be associated with many processes
 Each pair of processes may share several communication
links
 Link may be unidirectional or bi-directional
Indirect Communication
 Operations
 create a new mailbox (port)
 send and receive messages through mailbox
 destroy a mailbox
 Primitives are defined as:
 send(A, message) – send a message to mailbox A
 receive(A, message) – receive a message from mailbox A
Indirect Communication
 Mailbox sharing
 P1, P2, and P3 share mailboxA
 P1, sends; P2 and P3 receive
 Who gets the message?
 Solutions
 Allow a link to be associated with at most two processes
 Allow only one process at a time to execute a receive operation
 Allowthe system to select arbitrarily the receiver.
Sender is notified who the receiver was.
 Mailbox may be owned by a process or by the OS
 If mailbox is owned by a process, we can distinguish between the
owner (who can only receive messages) and the user (who can only
send messages)
 If owned by the OS, the mailbox is independent and is not attached to
any particular process
 OS must allow a process to do the following:
 Create a mailbox
 Send and receive messages through the mailbox
 Delete the mailbox
Synchronization
 Message passing may be either blocking or non-blocking
 Blocking is considered synchronous
 Blocking send -- the sender is blocked until the message is received
 Blocking receive -- the receiver is blocked until a message is
available
 Non-blocking is considered asynchronous
 Non-blocking send -- the sender sends the message and continues
 Non-blocking receive -- the receiver receives:
 A valid message, or
 Null message
 Different combinations possible
 If both send and receive are blocking, we have a rendezvous
Synchronization (Cont.)
 With blocking send and receive, the producer-consumer problem
becomes trivial

message next_produced;
while (true) {
    /* produce an item in next_produced */
    send(next_produced);
}

message next_consumed;
while (true) {
    receive(next_consumed);
    /* consume the item in next_consumed */
}
Buffering
 Queue of messages attached to the link
 Implemented in one of three ways:
1. Zero capacity – no messages are queued on a link.
Sender must wait for the receiver (rendezvous)
2. Bounded capacity – finite length of n messages. Sender
must wait if link full
3. Unbounded capacity – infinite length. Sender never waits
End of Chapter 3