
Operating Systems

BCS303
MODULE-4: Virtual Memory Management
• Virtual memory is a technique that allows the
execution of processes that are not completely in
memory.
• One major advantage of this scheme is that
programs can be larger than physical memory.

In practice, most real processes do not need all their pages, or at least not all at once, for several reasons:
• 1. Error-handling code is not needed unless a specific error occurs, and some errors are quite rare.
• 2. Certain features of certain programs are rarely used.
The ability to load only the portions of
processes that are actually needed has several
benefits:
• Programs could be written for a much larger
address space (virtual memory space ) than
physically exists on the computer.
• Because each process uses only a fraction of its total address space, more memory is left for other programs, improving CPU utilization and system throughput.
• Less I/O is needed for swapping processes in
and out of RAM, speeding things up.
The figure below shows the general layout of virtual
memory, which can be much larger than physical memory:
The figure below shows virtual address space, which is the
programmer’s logical view of process memory storage. The actual
physical layout is controlled by the process's page table.
• Note that the address space shown in Figure 9.2 is
sparse - A great hole in the middle of the address space
is never used, unless the stack and/or the heap grow to
fill the hole.

Virtual memory also allows the sharing of files and memory by multiple processes, with several benefits:
• System libraries can be shared by mapping them into
the virtual address space of more than one process.
• Processes can also share virtual memory by mapping
the same block of memory to more than one process.
• Process pages can be shared during a fork( ) system
call, eliminating the need to copy all of the pages of the
original ( parent ) process.
Demand Paging
The basic idea behind demand paging is that when a process is
swapped in, its pages are not swapped in all at once; rather, the
pager loads into memory only those pages that are presently
needed.
• Pages that are not loaded into memory are
marked as invalid in the page table, using the
invalid bit. Pages loaded in memory are
marked as valid.
• If the process only ever accesses pages that
are loaded in memory ( memory resident
pages ), then the process runs exactly as if
all the pages were loaded in to memory.
On the other hand, if a page is needed that was not originally loaded,
a page-fault trap is generated, which must be handled in a series of
steps (a sketch in code follows the list):
• 1. We check an internal table (kept with the PCB) to determine
whether the requested memory address was a valid memory
access.
• 2. If the reference is to an invalid page, the process is terminated.
Otherwise, if the page is not present in memory, it must be paged
in.
• 3. A free frame is located, possibly from a free-frame list.
• 4. A disk operation is scheduled to bring in the necessary page
from disk.
• 5. After the page is loaded to memory, the process's page table is
updated with the new frame number, and the invalid bit is changed
to indicate that this is now a valid page reference.
• 6. The instruction that caused the page fault must now be
restarted from the beginning.
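A toy Python sketch of this handling sequence, under stated assumptions: it is a pure in-memory simulation, and the Process class, page size, and frame count below are illustrative, not any real kernel's API.

PAGE_SIZE = 4096

class Process:
    def __init__(self, n_pages):
        self.valid_pages = set(range(n_pages))   # pages that legally exist
        self.page_table = {}                     # page -> frame (resident pages only)

free_frames = list(range(8))                     # a small free-frame list

def access(proc, vaddr):
    page = vaddr // PAGE_SIZE
    if page not in proc.valid_pages:             # steps 1-2: invalid reference
        raise MemoryError(f"invalid access to page {page}")
    if page not in proc.page_table:              # page fault: page not resident
        frame = free_frames.pop()                # step 3: locate a free frame
        # step 4 would schedule disk I/O here; this sketch "loads" instantly
        proc.page_table[page] = frame            # step 5: update the page table
        print(f"page fault: loaded page {page} into frame {frame}")
    return proc.page_table[page]                 # step 6: the access now succeeds

p = Process(n_pages=4)
access(p, 0)      # page fault: page 0 is loaded
access(p, 100)    # same page: no fault (memory resident)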
In an extreme case, the program starts
execution with zero pages in memory. Here NO
pages are swapped in for a process until they
are requested by page faults. This is known as
pure demand paging.
• The hardware necessary to support demand
paging is the same as for paging and
swapping: A page table and secondary
memory.
Performance of Demand Paging
There is a slowdown and performance hit whenever a page fault
occurs (the required page is not available in memory) and the
system has to fetch it from secondary storage.
• There are many steps that occur when servicing a
page fault and some of the steps are optional or
variable.
• But just for the sake of discussion, suppose that a
normal memory access requires 200
nanoseconds, and that servicing a page fault takes
8 milliseconds. ( 8,000,000 nanoseconds, or
40,000 times a normal memory access. )
• With a page-fault rate p (on a scale from 0 to 1), the effective
access time is now:
Effective access time = p × (page-fault service time) + (1 − p) × (memory access time)
                      = p × 8,000,000 + (1 − p) × 200
                      = 200 + 7,999,800 × p nanoseconds
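A quick check of this arithmetic in Python, using the 200 ns and 8 ms figures assumed above:

def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    # Effective access time (ns) for a page-fault rate p in [0, 1].
    return p * fault_ns + (1 - p) * mem_ns

# Even one fault per 1,000 accesses slows memory down by a factor of ~40:
print(effective_access_time(0.001))   # 8199.8 ns, vs. 200 ns with no faults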
Copy-on-Write
The idea behind copy-on-write is that the pages of a parent
process are shared by the child process until one or the other of
the processes changes a page. Only when a process modifies a
page's contents is a copy of that page created for the modifying
process.
• Only pages that can be modified need to be labeled as
copy-on-write. Code segments can simply be shared.
• Some systems provide an alternative to the fork( )
system call called a virtual memory fork, vfork( ). In
this case the parent is suspended, and the child uses the
parent's memory pages.
• This is very fast for process creation, but requires that
the child not modify any of the shared memory pages
before performing the exec( ) system call.
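A small Python sketch of the copy-on-write semantics a program actually observes after fork(): the child's write stays private, while the physical page sharing underneath is invisible to the program. (POSIX-only; os.fork is not available on Windows.)

import os

data = [0] * 1000            # occupies pages that are shared after fork()

pid = os.fork()
if pid == 0:                 # child process
    data[0] = 42             # this write triggers a private copy of the page
    os._exit(0)

os.waitpid(pid, 0)
print(data[0])               # prints 0: the parent's page was never modified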
Page Replacement
• In order to make the most use of virtual memory, we
load several processes into memory at the same
time.
• Since we only load the pages that are actually needed by each
process at any given time, there are enough free frames to load
many more processes into memory.
• If some process suddenly needs more pages and there aren't
any free frames available, there are several possible solutions to
consider:
1. Adjust the memory used by I/O buffering, etc., to
free up some frames for user processes.
2. Put the process requesting more pages into a wait
queue until some free frames become available.
3. Swap some process out of memory completely,
freeing up its page frames.
4. Find some page in memory that isn't being used
right now, and swap that page only out to disk, freeing
up a frame that can be allocated to the process
requesting it.
This is known as page replacement, and is the most
common solution. There are many different algorithms
for page replacement.
• The previously discussed page-fault processing assumed that
there would be free frames available on the free-frame list.
Now the page-fault handling must be modified to free up a frame
if necessary, as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
– a. If there is a free frame, use it.
– b. If there is no free frame, use a page-replacement
algorithm to select an existing frame to be replaced,
known as the victim frame.
– c. Write the victim frame to disk. Change all related page
tables to indicate that this page is no longer in memory.
3. Read in the desired page and store it in the frame. Change
the entries in the page table.
4. Restart the process that was waiting for this page.
• The overall goal in selecting and tuning these
algorithms is to generate the fewest number of
overall page faults.
• Because disk access is so slow relative to
memory access, even slight improvements to
these algorithms can yield large improvements
in overall system performance.
• Algorithms are evaluated using a given string
of page accesses known as a reference
string.
A few page-replacement algorithms:
a) FIFO Page Replacement
• A simple and obvious page replacement strategy is FIFO, i.e.
first-in-first-out.
• This algorithm associates with each page the time when that
page was brought into memory. When a page must be
replaced, the oldest page is chosen.
• Or a FIFO queue can be created to hold all pages in memory.
As new pages are brought in, they are added to the tail of a
queue, and the page at the head of the queue is the next
victim.
• In the following example, a reference string is given and there
are 3 free frames. There are 20 page requests, which result
in 15 page faults.
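A short simulation of FIFO replacement. The reference string itself is not reproduced in the text above; the one below is the classic textbook string, which does yield 15 faults with 3 frames:

from collections import deque

def fifo_faults(refs, n_frames):
    # Count page faults under FIFO replacement.
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue                           # hit: nothing to do
        faults += 1
        if len(resident) == n_frames:
            resident.discard(order.popleft())  # evict the oldest page
        resident.add(page)
        order.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15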
• Although FIFO is simple and easy to understand, it is not always
optimal, or even efficient.
• Belady's anomaly tells us that for some page-replacement
algorithms, the page-fault rate may increase as the number of
allocated frames increases.
b) Optimal Page Replacement
• The discovery of Belady's anomaly led to the search for an
optimal page-replacement algorithm: one that yields the lowest
of all possible page-fault rates and does not suffer from
Belady's anomaly.
• Such an algorithm does exist, and is called OPT. The rule is:
"Replace the page that will not be used for the longest time in
the future."
• The same reference string used for the FIFO example is used in
the example below; here the minimum number of possible page
faults is 9.
• Unfortunately, OPT cannot be implemented in practice, because
it requires knowledge of the future reference string, but it makes
a nice benchmark for the comparison and evaluation of real
proposed new algorithms.
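A simulation of OPT for comparison, again assuming the classic textbook reference string; note that it looks ahead in the string, which is exactly what a real system cannot do:

def opt_faults(refs, n_frames):
    # Count page faults under OPT: evict the page used farthest in the future.
    resident, faults = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == n_frames:
            def next_use(q):
                try:
                    return refs.index(q, i + 1)   # next position where q is used
                except ValueError:
                    return float('inf')           # never used again
            resident.discard(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9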
c) LRU Page Replacement
• The LRU (Least Recently Used) algorithm predicts that the
page that has not been used in the longest time is the one that
will not be used again in the near future.
• Some view LRU as analogous to OPT, but here we
look backwards in time instead of forwards.
• Figure 9.15 illustrates LRU for our sample string,
yielding 12 page faults, ( as compared to 15 for
FIFO and 9 for OPT. )
LRU is considered a good replacement policy, and is
often used. There are two simple approaches
commonly used to implement this:
1. Counters. With each page-table entry a time-of-
use field is associated.
• Whenever a reference to a page is made, the
contents of the clock register are copied to the
time-of-use field in the page-table entry for that
page.
• In this way, we always have the "time" of the last
reference to each page.
• This scheme requires a search of the page table to
find the LRU page and a write to memory for each
memory access.
2. Stack. Another approach is to use a stack, and
whenever a page is accessed, pull that page from the
middle of the stack and place it on the top.
• The LRU page will always be at the bottom of the
stack. Because this requires removing objects from
the middle of the stack, a doubly linked list is the
recommended data structure.
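The stack approach maps naturally onto Python's OrderedDict, which is backed by a doubly linked list; a sketch, once more assuming the textbook reference string:

from collections import OrderedDict

def lru_faults(refs, n_frames):
    # Count page faults under LRU, using an ordered dict as the "stack".
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)       # move to the top of the stack
            continue
        faults += 1
        if len(resident) == n_frames:
            resident.popitem(last=False)     # evict the bottom: the LRU page
        resident[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12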
• Neither LRU nor OPT exhibits Belady's anomaly. Both belong
to a class of page-replacement algorithms called stack
algorithms, which can never exhibit Belady's anomaly.
Allocation of Frames
Minimum Number of Frames

• We must allocate at least a minimum number of frames. One
reason for this is performance.
• As the number of frames allocated to each
process decreases, the page-fault rate increases,
slowing process execution.
• In addition, when a page-fault occurs before an
executing instruction is complete, the instruction
must be restarted. The minimum number of
frames is defined by the computer architecture.
Allocation Algorithms
1. Equal Allocation
• The easiest way to split m frames among n processes is to give
everyone an equal share, m/n frames. (For example: if there are
93 frames and five processes, each process will get 18 frames;
the three leftover frames can be used as a free-frame buffer pool.)
2. Proportional Allocation
– We can allocate available memory to each process according
to its size.
• In both 1 & 2, the allocation may vary according to the
multiprogramming level.
• If the multiprogramming level is increased, each process will lose
some frames to provide the memory needed for the new process.
Both schemes are sketched in code below.
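A minimal sketch of both allocation schemes; the 93-frame figure echoes the example above, while the 62-frame, two-process figures are illustrative:

def equal_allocation(m, n):
    # Split m frames equally among n processes; leftovers feed a buffer pool.
    return m // n, m % n                         # (frames per process, leftovers)

def proportional_allocation(sizes, m):
    # Allocate m frames in proportion to each process's virtual-memory size.
    total = sum(sizes)
    return [s * m // total for s in sizes]

print(equal_allocation(93, 5))                   # (18, 3), as above
print(proportional_allocation([10, 127], 62))    # [4, 57]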
• Classification of page-replacement algorithms:

Global Replacement:
– Allows a process to select a replacement frame from the set of
all frames.
– A process may select frames allocated to other processes, thus
increasing the number of frames allocated to it.
– Disadvantage: a process cannot control its own page-fault rate.
– Advantage: results in greater system throughput.

Local Replacement:
– Each process selects from only its own set of allocated frames.
– The number of frames allocated to a process does not change.
– Disadvantage: the set of pages in memory for a process is
affected by the paging behavior of only that process, which
might hinder the process.
Thrashing
A process is thrashing if it is spending more time in
paging than executing.
Cause of Thrashing
• Thrashing results in severe performance problems.
• The operating system monitors CPU utilization. If it is low, we
increase the degree of multiprogramming by introducing a new
process to the system.
• Suppose a global page-replacement algorithm is used. A
process that needs more frames starts faulting and takes frames
away from other processes, which then begin to fault as well.
• These faulting processes must queue up for the paging device to
swap pages in and out, so CPU utilization drops further.
• The CPU scheduler sees the decreasing CPU utilization and
increases the degree of multiprogramming still further, making
the thrashing worse.
Methods to avoid thrashing:
1.Use Local Replacement
– If one process starts thrashing, it cannot
→ steal frames from another process and
→ cause the latter to thrash as well.
2.We must provide a process with as many frames as
it needs. This approach defines the locality model of
process execution.
– The locality model states that, as a process executes,
it moves from locality to locality.
– A locality is a set of pages that are actively used
together.
– A program may consist of several different
localities, which may overlap.
Module V: Secondary-Storage Structure
Magnetic disks:
• Provide the bulk of secondary storage.
• Disks come in various sizes and speeds.
• Information is stored on them magnetically.
• Each disk platter has a flat circular shape, like a CD.
• The two surfaces of a platter are covered with a
magnetic material.
• The surface of a platter is logically divided into
circular tracks, which are subdivided into sectors.
• The sector is the basic unit of storage.
• The set of tracks that are at one arm position
makes up a cylinder.
• The number of cylinders in the
disk drive equals the number of
tracks in each platter.
• There may be thousands of
concentric cylinders in a disk
drive, and each track may
contain hundreds of sectors.
• The storage capacity of disk
drives is measured in gigabytes.
• The head moves from the inner
track of the disk to the outer
track.
• When the disk drive is operating,
the disk rotates at a constant
speed.
• To read or write the head must be
positioned at the desired track
and at the beginning of the
desired sector on that track.
• Seek Time: the time required to move the disk arm to the
required track.
• Rotational Latency (Rotational Delay): the time taken for the
disk to rotate so that the required sector comes under the
read/write head.
• Positioning Time (random-access time): the sum of the seek
time and the rotational delay.
• Disk Bandwidth: the total number of bytes transferred, divided
by the total time between the first request for service and the
completion of the last transfer.
• Transfer Rate: the rate at which data flow between the drive
and the computer.
DISK STRUCTURE
• Each disk platter is divided into a number of tracks,
and each track is divided into a number of sectors.
• The sector is the basic unit of a read or write operation
on the disk.
• Sector 0 is the first sector of the first track on the
outermost cylinder.
• The mapping proceeds in order through that track,
then through the rest of the tracks in that cylinder,
and then through the rest of the cylinders from
outermost to innermost.
The disk structure (architecture) can be of two
types –
i) Constant Linear Velocity (CLV)
ii) Constant Angular Velocity (CAV)
CLV - The density of bits per track is uniform.
The farther a track is from the center of the disk, the
greater its length, so the more sectors it can hold.
• As we move from outer zones to inner zones, the
number of sectors per track decreases.
• This architecture is used in CD-ROM and DVD-
ROM.

CAV – There is the same number of sectors in each track.
• The sectors are densely packed in the inner tracks.
• The density of bits decreases from the inner tracks to the
outer tracks, to keep the data rate constant.
DISK ATTACHMENT
Computers can access disk storage in three ways:
i) via I/O ports (host-attached storage)
ii) via a remote host in a distributed file system
(network-attached storage)
iii) via a storage-area network (SAN)

Host-attached storage is storage accessed through local I/O ports.
• This architecture supports a maximum of two drives per I/O
bus.
• The other cabling systems are SATA (Serial Advanced
Technology Attachment), SCSI (Small Computer System
Interface), and Fibre Channel (FC).
• A network-attached storage (NAS) device is a
special-purpose storage system that is accessed
remotely over a network.
• Clients access network-attached storage via a remote-
procedure-call interface.
• The remote procedure calls (RPCs) are carried via TCP or
UDP over an IP network—-usually the same local-area
network (LAN) carries all data traffic to the clients.
• A storage-area network (SAN) is a private network
connecting servers and storage units.
• The power of a SAN lies in its flexibility. Multiple hosts and
multiple storage arrays can attach to the same SAN, and
storage can be dynamically allocated to hosts.
• A SAN switch allows or prohibits access between the hosts
and the storage.
• Fibre Channel is the most common SAN interconnect.
DISK SCHEDULING
Different types of disk-scheduling algorithms are as follows:
• FCFS (First Come First Served)
• SSTF (Shortest Seek Time First)
• SCAN (Elevator)
• C-SCAN
• LOOK
• C-LOOK
FCFS scheduling algorithm:
• This is the simplest form of disk-scheduling algorithm.
• It services requests in the order they are received.
• This algorithm is fair but does not provide the fastest
service.
• It takes no special care to minimize the overall seek time.
Consider a disk queue with requests for I/O to blocks on
cylinders 98, 183, 37, 122, 14, 124, 65, 67.
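A sketch of the FCFS seek-distance calculation. A starting head position of cylinder 53 is assumed here; the text above does not state one, but 53 is the usual companion to this request queue:

def fcfs_seek(requests, head):
    # Total head movement when requests are serviced in arrival order.
    total = 0
    for cyl in requests:
        total += abs(cyl - head)   # seek distance for this request
        head = cyl
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(queue, head=53))   # 640 cylinders of total head movement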
SSTF (Shortest Seek Time First) algorithm:
• This selects the request with the minimum seek time
from the current head position.
• SSTF chooses the pending request closest to the
current head position.
Consider a disk queue with requests for I/O to blocks on
cylinders 98, 183, 37, 122, 14, 124, 65, 67.
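The same queue under SSTF, again assuming the head starts at cylinder 53:

def sstf_seek(requests, head):
    # Total head movement when the closest pending request is served first.
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 236 cylinders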
SCAN algorithm:
• In this, the disk arm starts moving toward one end,
servicing requests as it reaches each cylinder,
until it gets to the other end of the disk.
• At the other end, the direction of head movement
is reversed and servicing continues.
• The initial direction is chosen depending upon the
direction of the head.
• SCAN is also called the elevator algorithm.
• Consider a disk queue with requests for I/O to blocks
on cylinders 98, 183, 37, 122, 14, 124, 65, 67.
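A sketch of SCAN's head movement for this queue, assuming the head starts at cylinder 53 and moves toward cylinder 0 first (both assumptions, not stated above):

def scan_seek(requests, head, low=0):
    # SCAN heading toward cylinder `low` first, then sweeping back up.
    above = [c for c in requests if c >= head]
    distance = head - low              # sweep all the way down to the edge
    if above:
        distance += max(above) - low   # reverse and sweep up to the last request
    return distance

print(scan_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 236 cylinders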
C-SCAN (Circular SCAN) algorithm:
• C-SCAN is a variant of SCAN designed to provide a
more uniform wait time.
• Like SCAN, C-SCAN moves the head from one end of
the disk to the other, servicing requests along the way.
• When the head reaches the other end, it immediately
returns to the beginning of the disk, without servicing
any requests on the return trip.
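A C-SCAN sketch for the same queue, with the same assumed starting position (cylinder 53), a 200-cylinder disk, and the return sweep counted in the total; whether that return seek is counted varies between treatments:

def cscan_seek(requests, head, low=0, high=199):
    # C-SCAN sweeping upward, jumping back to `low`, then sweeping up again.
    below = [c for c in requests if c < head]
    distance = high - head            # sweep up to the top edge
    if below:
        distance += high - low        # full-stroke return seek
        distance += max(below) - low  # continue up to the last request
    return distance

print(cscan_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 382 cylinders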
LOOK scheduling algorithm:
• LOOK and C-LOOK scheduling are variants of SCAN and
C-SCAN, respectively. Here the arm goes only as far as the
final request in each direction.
• Then it reverses, without going all the way to the end of the
disk.
• LOOK and C-LOOK scheduling look for a request before
continuing to move in a given direction.
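A LOOK sketch for the same queue, assuming the head starts at cylinder 53 and moves toward higher-numbered cylinders first:

def look_seek(requests, head):
    # LOOK moving upward first: reverse at the last request, not the disk edge.
    above = [c for c in requests if c >= head]
    below = [c for c in requests if c < head]
    distance = 0
    if above:
        distance += max(above) - head
        head = max(above)
    if below:
        distance += head - min(below)
    return distance

print(look_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 299 cylinders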
Implementation of File System
File Concepts
• A file system consists of a collection of files, each storing
related data, and a directory structure, which organizes
and provides information about all the files in the system.
• A file is a collection of related information that
is recorded on secondary storage.
• A file is a sequence of bits, bytes, lines, or
records, the meaning of which is defined by
the file's creator and user.
File Attributes
• Name – The symbolic file name is the only information kept
in human-readable form. Some special significance is given to
names, and particularly to extensions (.exe, .txt, etc.).
• Identifier – A unique number that identifies the file within
the file system.
• Type – The type of the file: text, executable, other binary, etc.
• Location – The location of the file on its device.
• Size – The current size of the file (in bytes, words, or blocks).
• Protection – Access-control information (reading, writing,
executing).
• Time, date, and user identification – These data can be
useful for protection, security, and usage monitoring.
File Operations
Creating a file – Two steps are necessary to create a file:
▪ Find space in the file system for the file.
▪ Make an entry for the new file in the directory.
Writing a file – To write a file, we make a system call specifying
both the name of the file and the information to be written to
the file.
– The system must keep a write pointer to the location in
the file where the next write is to take place.
– The write pointer must be updated whenever a write
occurs.
Reading a file – To read from a file, we use a system call that
specifies the name of the file and where the next block of the
file should be put.
– The directory is searched for the file, and the system
keeps a read pointer to the location in the file where the
next read is to take place.
– Once the read has taken place, the read pointer is
updated.

Repositioning within a file – The directory is searched for the
file, and the file pointer is repositioned to a given value.
– This file operation is also known as a file seek.
Deleting a file – To delete a file, search the directory for the file.
– Release all file space, so that it can be reused by other
files, and erase the directory entry.
Truncating a file – The user may want to erase the contents
of a file but keep its attributes.
– The file size is reset to zero.
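These operations map directly onto ordinary Python file calls; a small sketch (the filename example.txt is arbitrary):

import os

f = open('example.txt', 'w+')   # create: space is found, directory entry made
f.write('hello')                # write: advances the write pointer
f.seek(0)                       # reposition within the file (a file seek)
print(f.read())                 # read: from the repositioned read pointer
f.truncate(0)                   # truncate: erase contents, keep attributes
f.close()
os.remove('example.txt')        # delete: release space, erase directory entry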
Information about currently open files is stored in an open-file
table.
It contains information such as:
• File pointer – Records the current position in the file, for the
next read or write access.
• File-open count – Counts how many times the file has been
opened (and not yet closed) by different processes at the same
time. When this counter reaches zero, the file's entry can be
removed from the table.
– Disk location of the file – The information
needed to locate the file on disk is kept in
memory so that the system does not have to
read it from disk for each operation.
– Access rights – The file-access permissions are
stored in the per-process table so that the
operating system can allow or deny subsequent
I/O requests.
