PROCESS MANAGEMENT
Key notes
•Early computers allowed only one program to be
executed at a time
•Modern operating systems allow multiple programs to
run at the same time
•With this came the need for stricter control of program
execution, leading to the birth of processes
PROCESS
•In simple terms, a process is a program in
execution. It is the unit of work in a computing
system, e.g. a Chrome process.
•A process is more than just the program code.
•A process needs resources (CPU time, memory, files,
I/O devices) to operate until it ends; these are
allocated at process creation or during operation
PROCESS
•A system is made up of a collection of processes:
✓operating-system processes execute system code
✓user processes execute user code
•All these processes may execute concurrently
A process is represented in memory as the structure
shown below:
•Stack: temporary data such as function parameters, return addresses, and local variables
•Heap: portion of memory that is dynamically allocated during run time
•Data section: contains global variables
•Text section: the program code
PROCESS
•A program by itself is not a process
•A program in itself is a passive entity (a file stored on disk)
•A process is an active entity
•Although two processes may be associated with
the same program, they are not considered one
process
PROCESS STATE
•New: the process is being created.
•Running: instructions are being executed.
•Waiting: the process is waiting for some event to
occur (such as an i/o completion or reception of a
signal).
•Ready: the process is waiting to be assigned to a
processor (CPU time).
•Terminated: the process has finished execution
It is important to realize that only one process can be
running on any processor at any instant.
PROCESS CONTROL BLOCK
•This is a block of data (a data structure) which
contains the data necessary to
control, manage, and track a process
•It is also referred to as a task control block.
THREADS
•A thread is the smallest unit of a process that can exist in an OS
•A single process may contain multiple threads
•It comprises a thread ID, a program counter, a register set, and a
stack
•It shares with other threads belonging to the same process its
code section, data section, and other operating-system resources,
such as open files and signals.
MULTITHREADING
•Multithreading is a situation where multiple threads
are running concurrently
•You can imagine multitasking as something that
allows processes to run concurrently, while
multithreading allows the threads within a process
to run concurrently.
Benefits of multithreading include;
•Responsiveness
•Resource sharing
•Economy
•Scalability
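The resource-sharing benefit above can be illustrated with a small sketch: several threads of one process update the same global variable (the shared data section), guarded by a lock. The counter name, thread count, and iteration count are illustrative.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Each thread increments the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:          # threads share the data section, so guard updates
            counter += 1

# Four threads of the same process, all touching the same variable
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the increments from different threads could interleave and some updates would be lost, which is exactly why shared data needs synchronization.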
PROCESS SCHEDULING
•Multiprogramming aims to keep the processor busy at all
times
•The process scheduler chooses one available process for
execution on the CPU in order to achieve this goal
•Other processes must wait until the CPU is free and they
are rescheduled
Scheduling Queues
•All processes in the system are gathered into a job queue
•There are various types of queues, for example the ready
queue and device queues
•In most cases, a linked list is used to hold these queues
•Pointers to the first and last PCBs in the list are contained
in the particular queue header
•The process charged with the duty of arranging and making sure
all processes get some CPU time is referred to as a scheduler.
•The scheduler supervises process migration among the various
scheduling queues
•The long-term scheduler (job scheduler) selects processes from a
pool and loads them into memory from disk
•The short-term scheduler (CPU scheduler) selects from among the
processes that are ready to execute and allocates the CPU to them
Medium-Term Scheduler
•The medium-term scheduler swaps processes out of memory (to
disk) to reduce the degree of multiprogramming, and later swaps
them back in
CONTEXT SWITCH
•State save
•State restore
•Saving the state of the currently running process and
restoring the saved state of the next process to run is
referred to as a context switch.
•The context is represented in the PCB of the process
PROCESS CREATION
•A process may create several new processes
•The creating process is called a parent process, and the
new processes are called the children of that process
•Each process is identified by a unique process identifier (PID)
PROCESS CREATION CONT’D
•When a child process is created, that process will need
certain resources such as CPU time, memory, files, I/O
devices, etc.
•A child may obtain resources:
✓directly from the OS
✓by sharing (a subset of) the parent's resources; this
constrains how many child processes can be created
PROCESS CREATION CONT’D
•In addition to supplying various physical and logical resources,
the parent process may pass along initialization data as input to
the child process
•When a process creates a new process, two possibilities for
execution exist:
✓The parent continues to execute concurrently with its children.
✓The parent waits until some or all of its children have
terminated
PROCESS CREATION CONT’D
•Process system calls:
✓fork()/CreateProcess() system call – creates a new process
✓exec() system call – loads a new program into the process's
memory space and starts running it
✓wait() system call – the parent leaves the ready queue until
a child terminates
PROCESS TERMINATION
•A process terminates when it finishes executing its final statement and asks
the operating system to delete it by using the exit( )/TerminateProcess( )
system call
•The operating system deallocates (reclaims) all of the process's resources
•Also, one process can make another process terminate
•Reasons a parent may request the termination of a child process:
✓child has exceeded its usage of resources
✓task assigned to the child is no longer needed
✓parent is exiting
PROCESS TERMINATION CONT’D
•Cascading termination: the situation whereby the termination of a
parent process results in the termination of all children and grand
children process
•Zombie/defunct process: a process that has completed its execution,
made an exit system call but still has an entry in the process table.
•Orphan process: a process that is still in execution but whose parent
has already terminated. These processes are adopted by the init process.
INTERPROCESS COMMUNICATION
•Processes executing concurrently in the operating system may be
either independent processes or cooperating processes
•Any process that shares data with other processes is a cooperating
process
•Reasons for cooperating processes may include information sharing
or computation speedup
•Interprocess communication provides an environment that
allows processes to cooperate.
INTERPROCESS COMMUNICATION
•The mechanism that allows the exchange of data and information
between cooperating processes is referred to as interprocess
communication.
•There are two modes: Message Passing and Shared Memory
INTERPROCESS COMMUNICATION
•Message passing is:
✓useful for exchanging small amounts of data
✓easier to implement in distributed systems
INTERPROCESS COMMUNICATION
Synchronization
•Communication between processes takes place through calls to send()
and receive() primitives.
•Message passing may be either blocking or nonblocking— also
known as synchronous and asynchronous
INTERPROCESS COMMUNICATION
Synchronization Variations:
•Blocking send: The sending process is blocked until the message is
received by the receiving process
•Nonblocking send: The sending process sends the message and resumes
operation.
•Blocking receive: The receiver blocks until a message is available.
•Nonblocking receive: The receiver retrieves either a valid message or
a null (when no message is available).
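These variations can be sketched with Python's queue module standing in for a message-passing link (an assumption for illustration; real IPC crosses process boundaries): get() is a blocking receive, and get(block=False) a nonblocking one.

```python
import queue

link = queue.Queue()              # stand-in for a message-passing link

# Nonblocking receive on an empty link: returns immediately,
# here signalled by the Empty exception rather than a null.
try:
    link.get(block=False)
    received = "message"
except queue.Empty:
    received = "nothing"

link.put("hello")                 # nonblocking send on an unbounded link
msg = link.get()                  # blocking receive: a message is waiting,
                                  # so the call returns at once
print(received, msg)
```

Had the queue stayed empty, the blocking get() would have suspended the receiver until a sender delivered a message.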
INTERPROCESS COMMUNICATION
When both send() and receive() are blocking, the sender
and receiver must wait for each other; if the matching call
never occurs, both can remain blocked indefinitely
BUFFERING
Whether a process communication is direct or indirect,
messages exchanged by communicating processes reside in
a temporary queue. Such queues can be implemented in
three ways:
•Zero capacity: The queue has a maximum length of zero;
thus, the link cannot have any messages waiting in it.
Sender blocks until message is delivered
BUFFERING
•Bounded capacity: The queue has a finite length of n;
thus, at most n messages can reside in it. Sender
only blocks when the channel is full
•Unbounded capacity: The queue’s length is
potentially infinite; thus, any number of messages
can wait in it. The sender never blocks.
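The bounded case above can be tried with queue.Queue, whose maxsize parameter models a finite link (note that in this library maxsize=0 means unbounded, not zero capacity; a true zero-capacity rendezvous needs two synchronized threads). The capacity of 2 is illustrative.

```python
import queue

mailbox = queue.Queue(maxsize=2)   # bounded capacity: at most 2 messages
mailbox.put("m1")
mailbox.put("m2")

# A third blocking put() would block the sender until space frees up;
# put_nowait() raises Full instead, so the sketch stays non-blocking.
try:
    mailbox.put_nowait("m3")
    outcome = "accepted"
except queue.Full:
    outcome = "link full"
print(outcome, mailbox.qsize())
```

Once a receiver calls mailbox.get(), a slot opens and the blocked sender would be allowed to proceed.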
DEADLOCKS
This is a situation where a waiting process is never
able to change state, because the resources it has
requested are held by other waiting processes.
Ideal sequence of resource usage:
Request → Use → Release
DEADLOCK PREVENTION
•Each process must request all its required resources at
once and cannot proceed until all have been granted.
•If a process holding certain resources is denied a
further request, it must release its original
resources and then request them again
together with the additional resources.
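The first rule (request everything at once, all or nothing) can be sketched with threading locks standing in for resources; the resource names and the helper acquire_all are hypothetical.

```python
import threading

printer, disk = threading.Lock(), threading.Lock()   # two illustrative resources

def acquire_all(*locks):
    """All-or-nothing request: take every lock, or release whatever
    was obtained and report failure so the caller can retry later."""
    held = []
    for lk in locks:
        if lk.acquire(blocking=False):
            held.append(lk)
        else:
            for h in held:       # partial grant: give everything back
                h.release()
            return False
    return True

granted = acquire_all(printer, disk)   # both free, so the request succeeds
print(granted)
```

Because a process never holds some resources while waiting for others, the hold-and-wait condition for deadlock cannot arise.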
CPU SCHEDULING
INTRODUCTION TO CPU SCHEDULING
•CPU Scheduling is the process by which the
operating system decides which process in the
ready queue should be executed next.
•It is a core function of a multitasking operating
system and is critical for ensuring efficient
utilization of the CPU.
REASONS FOR CPU SCHEDULING
•Maximize CPU utilization
•Maximize throughput
•Minimize turnaround time
•Minimize waiting time
•Minimize response time
•Provide fairness among all processes
SCHEDULING CONCEPTS
•CPU–I/O Burst Cycle: Processes alternate between
CPU execution and I/O wait. A typical process
execution consists of:
❖CPU burst: Time spent in executing on the CPU
❖I/O burst: Time spent waiting for I/O operations.
SCHEDULING CONCEPTS
•Preemptive vs. Non-Preemptive
Scheduling
Type           | Description                                       | Algorithms
Preemptive     | CPU can be taken away from a process              | Round Robin, SRTF
Non-Preemptive | Process keeps the CPU until it finishes or blocks | FCFS, SJF (non-preemptive)
CPU SCHEDULER
•The CPU scheduler, also known as the short-term
scheduler, is responsible for selecting a process for
execution each time the CPU is idle.
•These selection decisions are made based on a
scheduling algorithm.
DISPATCHER
•The dispatcher is responsible for the transfer of control of the CPU to
a selected process.
•Its functions include:
✓Switching context
✓Switching to user mode
✓Jumping to the proper location in the user program
•Dispatch latency: the time taken by the dispatcher to stop one process
and start another.
SCHEDULING CRITERIA
Criterion       | Description
CPU Utilization | Keep the CPU as busy as possible (40%–90%)
Throughput      | Number of processes completed per time unit
Turnaround Time | Time from process submission to completion
Waiting Time    | Time spent waiting in the ready queue
Response Time   | Time from submission until the first response is produced
Fairness        | Equal opportunity for all processes
SCHEDULING CRITERIA
•It is desirable to maximize CPU utilization
and throughput and to minimize
turnaround time, waiting time, and
response time. In most cases, we optimize
the average measure.
SCHEDULING ALGORITHMS
•First-Come, First-Served Scheduling
•Shortest-Job-First Scheduling
•Priority Scheduling
•Round-Robin Scheduling
•Multilevel Queue Scheduling
•Multilevel Feedback Queue Scheduling
FIRST-COME, FIRST-SERVED SCHEDULING (FCFS)
•Processes are scheduled according to the order
in which they arrive.
•It is a non-preemptive algorithm
•The disadvantage is that short processes may wait
too long if they arrive behind a long
process.
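A minimal sketch of FCFS, assuming all processes arrive at time 0; the burst times 24, 3 and 3 are illustrative:

```python
def fcfs(bursts):
    """Waiting time of each process when served in arrival order
    (all processes assumed to arrive at time 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # a process waits until everyone before it is done
        clock += b
    return waits

waits = fcfs([24, 3, 3])      # one long process arrives ahead of two short ones
avg_wait = sum(waits) / len(waits)
print(waits, avg_wait)        # [0, 24, 27] 17.0
```

Had the two short processes arrived first, the waits would be [0, 3, 6] for an average of 3, which is the "short jobs stuck behind a long one" problem in numbers.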
SHORTEST JOB FIRST (SJF)
•The process with the shortest CPU burst
time is scheduled first
•Can be non-preemptive or preemptive (SRTF)
•Can starve long processes when many short
processes keep arriving
PRIORITY SCHEDULING
•Each process is assigned a priority number.
•Higher priority processes are scheduled first
•Can be preemptive or non-preemptive
•Has the problem of starvation of low-priority
processes
•Problem can be solved using aging – gradually
increase priority of waiting processes
ROUND ROBIN (RR)
•Each process gets a small unit of CPU time
(quantum).
•A preemptive algorithm
•It has better response time but performance is
dependent on the time quantum.
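A sketch of round-robin, again assuming simultaneous arrival; the quantum of 4 and bursts of 24, 3 and 3 are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each process under round-robin with the
    given time quantum (all processes assumed to arrive at time 0)."""
    ready = deque((i, b) for i, b in enumerate(bursts))
    clock, finish = 0, [0] * len(bursts)
    while ready:
        i, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((i, remaining - run))   # preempted: back of the queue
        else:
            finish[i] = clock
    return finish

print(round_robin([24, 3, 3], quantum=4))        # [30, 7, 10]
```

The short processes finish at times 7 and 10 instead of waiting behind the long one as under FCFS, which is the response-time benefit; a very small quantum, however, would multiply the number of context switches.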
MULTILEVEL QUEUE SCHEDULING
•The ready queue is divided into multiple queues
based on priority. Examples are system
processes, interactive processes, I/O-
intensive processes, etc.
•Each queue can have its own scheduling
algorithm.
MULTILEVEL FEEDBACK QUEUE
•Similar to multilevel queue, but processes can
move between queues.
•Aging is naturally implemented.
•This is ideal for general-purpose OS like
Windows or Linux.
MEMORY MANAGEMENT
WHAT IS MEMORY MANAGEMENT?
•Memory management refers to the functionality
of an operating system (OS) that handles or
manages primary memory (RAM).
•It keeps track and manages allocation and
deallocation of memory spaces as needed by
different programs.
PURPOSE OF MEMORY MANAGEMENT
•Manage memory allocation and deallocation
efficiently.
•Protect memory space among processes.
•Enable sharing of memory when necessary.
•Ensure efficient memory utilization and
performance.
TYPES OF MEMORY IN A SYSTEM
•Primary Memory (RAM): Volatile memory
directly accessible by the CPU.
•Cache Memory: High-speed memory close to
the CPU.
•Virtual Memory: Logical memory abstraction
providing an “illusion” of large memory space.
SWAPPING
Temporarily transferring a process (or its pages) from
memory to disk and back, to free up space.
PROCESS(PAGE) REPLACEMENT ALGORITHMS
•FIFO (First-In First-Out): Removes oldest page in
memory
•LRU (Least Recently Used): Removes the least
recently accessed page
•Optimal Page Replacement: Replaces page not
used for the longest time in future (theoretical)
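FIFO and LRU can be compared by counting page faults on a reference string; the string and the 3-frame memory size below are illustrative:

```python
from collections import deque, OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, order, faults = set(), deque(), 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())   # evict the oldest page
            memory.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)                 # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)        # evict least recently used
            memory[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 10 9
```

On this string LRU saves one fault over FIFO because it keeps the pages that were touched recently, at the cost of tracking every access.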
MEMORY ALLOCATION TECHNIQUES
•Contiguous Memory Allocation
•Non-Contiguous Memory Allocation
1. CONTIGUOUS MEMORY ALLOCATION
Here memory is allocated in single contiguous
blocks. Approaches include:
•Single Partition Allocation: the OS occupies one part
while a user process occupies the rest; simple but
inflexible.
•Multiple Partition Allocation: this uses fixed-
sized or variable-sized memory blocks
Multiple Partition Allocation approaches:
•Fixed Partitioning: Memory is divided into fixed
sizes. This is easy to implement, but causes
internal fragmentation.
•Variable Partitioning: memory is allocated
exactly as needed. This can cause
external fragmentation.
2. NON-CONTIGUOUS MEMORY ALLOCATION
Here memory is divided and assigned in non-
contiguous blocks. Approaches include:
•Paging: divides logical memory into fixed-size pages
and physical memory into frames; page tables
map logical addresses to physical addresses.
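The page-table mapping can be sketched as pure arithmetic; the 4 KB page size and the page-to-frame entries below are assumptions for illustration:

```python
PAGE_SIZE = 4096   # assumed page size (4 KB)

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    """Map a logical address to a physical address via the page table."""
    page = logical_addr // PAGE_SIZE       # which page the address falls in
    offset = logical_addr % PAGE_SIZE      # position within that page
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```

Because only the page number is translated and the offset is carried over unchanged, the hardware can do this lookup on every memory access.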
2. NON-CONTIGUOUS MEMORY ALLOCATION
•Segmentation: This divides memory based on
logical divisions called segments of a program
like code, stack, data, etc. Segment to physical
memory mapping is done using segment tables
FRAGMENTATION IN MEMORY MANAGEMENT
•Internal Fragmentation: this is unused memory
within an allocated block.
•External Fragmentation: these are scattered free
memory blocks left after repeated allocations and
deallocations.
•Solution: Compaction, paging, buddy system
MEMORY ALLOCATION ALGORITHMS
Algorithm | Speed | Memory Utilization | Fragmentation
First Fit | Fast  | Moderate           | Medium
Best Fit  | Slow  | Good               | High
Worst Fit | Slow  | Poor               | High
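The three algorithms in the table differ only in which free hole they pick for a request; a sketch, with illustrative hole sizes in KB:

```python
def first_fit(holes, request):
    """Index of the first hole large enough for the request."""
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole that still fits the request."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Index of the largest hole (leaves the biggest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free block sizes in KB (illustrative)
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
# 1 3 4: first fit stops at the 500 KB hole, best fit picks 300 KB, worst fit 600 KB
```

Best fit minimizes the leftover sliver per allocation but must scan every hole, which is why the table rates it slow with high fragmentation of small unusable fragments.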
QUESTIONS
CLASS DISCUSSION
A system has 1 GB RAM for user
processes. If three processes request
400 MB, 300 MB, and 350 MB, detail
how the OS will manage memory.