
Summary: Memory Management & Virtual Memory

9.1 Background

 Previous sections discussed dividing process memory into smaller pages and storing
them non-contiguously in memory. However, not all pages are needed at once.

 Reasons for partial usage:

1. Error handling code is rarely used.

2. Arrays are often over-sized and not fully utilized.

3. Some program features are seldom used.

 Benefits of loading only necessary pages:

o Programs can use a larger address space than physical memory.

o More memory available for other programs, improving CPU utilization.

o Less I/O needed for swapping, which speeds up the system.

 Virtual memory allows sharing of files and memory between processes:

o System libraries can be shared by mapping them into multiple processes' virtual
spaces.

o Processes can share memory by mapping the same block for multiple processes.

o During a fork() system call, pages can be shared without copying them entirely.

9.2 Demand Paging

 Demand Paging: Instead of swapping all pages at once, only the pages needed by a
process are swapped in (lazy swapping).

o Pages not loaded into memory are marked invalid in the page table.

o If a page not in memory is requested, a page fault occurs and the system handles
it by:

1. Checking if the memory request is valid.

2. If valid, locating a free frame.

3. Scheduling a disk operation to load the page.

4. Updating the page table and marking the page as valid.

5. Restarting the instruction that caused the page fault.

 Pure Demand Paging: No pages are swapped in until required by page faults.

 The performance hit of a page fault is significant, but it is mitigated by locality of reference: programs tend to access a small set of pages repeatedly over any given period, so faults stay rare once that set is resident.

 Hardware support needed: a page table with a valid-invalid bit, and secondary memory (swap space).

9.2.1 Performance of Demand Paging

 Servicing a page fault can be much slower than a normal memory access (e.g., 8
milliseconds vs. 200 nanoseconds).

 The effective access time depends on the page-fault rate p. For example:

o If p = 0.001 (1 fault per 1,000 accesses), the effective access time is about 8.2 microseconds (worked out below).

 Swap space I/O can be faster than file-system I/O because it is allocated in larger blocks and avoids directory lookups and indirect block allocation.
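
For reference, here is the arithmetic behind the 8.2-microsecond figure, assuming the 200 ns memory-access time and 8 ms fault-service time quoted above:

```latex
\begin{aligned}
\text{EAT} &= (1 - p)\,t_{\text{mem}} + p\,t_{\text{fault}} \\
           &= 0.999 \times 200\,\text{ns} + 0.001 \times 8{,}000{,}000\,\text{ns} \\
           &\approx 199.8\,\text{ns} + 8{,}000\,\text{ns} \approx 8.2\,\mu\text{s}
\end{aligned}
```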

9.3 Copy-on-Write

 Copy-on-Write: after a fork() system call, the child process shares the parent's pages until one of them modifies a page; only then is that page copied (demonstrated below). This avoids copying data until necessary.

 Only pages that can be modified are marked for copy-on-write.

 Some systems use vfork(), where the parent is suspended and the child uses the parent’s
memory until exec() is called, which is fast for process creation.
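
A minimal sketch of copy-on-write semantics as observed from user space: after fork(), the kernel may share physical pages between parent and child, and the child's first write triggers a private copy. The observable effect below (the parent still sees 42) is guaranteed by fork() semantics regardless of whether the kernel actually shares pages:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int value = 42;

    pid_t pid = fork();          /* child initially shares the parent's pages */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {
        value = 99;              /* first write faults; kernel copies the page */
        printf("child sees  %d\n", value);
        exit(0);
    }

    waitpid(pid, NULL, 0);
    printf("parent sees %d\n", value);   /* still 42: the child's copy was private */
    return 0;
}
```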

9.4 Page Replacement

 Multiple processes are loaded into memory, but if there are no free frames available,
page replacement is needed.

 Solutions:

1. Adjust memory allocation for I/O buffering.

2. Wait for free frames to become available.

3. Swap a process out of memory.

4. Swap out a page (page replacement) to free up space.

 Basic Page Replacement: If no free frames are available, the system uses a page replacement algorithm to select a victim page, write it to disk, and load the needed page into memory.

 Performance Optimization: The goal is to minimize page faults, as disk access is much slower than memory access. This requires efficient frame allocation and page replacement algorithms.

9.4.1 Evaluating Page Replacement

 Algorithms are evaluated using reference strings, which are sequences of memory
accesses.

 Three common methods for generating reference strings:

1. Random generation (easy but may not reflect real behavior).

2. Designed sequences (useful for illustrating algorithm properties).

3. Recorded memory references (best for real systems, but may produce large data
sets).

In general, as the number of frames allocated to a process increases, the number of page faults decreases.

 FIFO Page Replacement:

 FIFO replaces the page that has been in memory the longest.

 Simple but can result in inefficiencies like Belady's anomaly, where increasing the
number of frames can increase page faults.

 Example: running a reference string through FIFO shows how it can incur more page faults than an optimal algorithm (see the simulation below).
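
A small, self-contained simulation of FIFO replacement (an illustrative sketch, not code from the text). On the 20-reference string commonly used in OS textbooks, 3 frames produce 15 faults:

```c
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

/* Count page faults under FIFO replacement: the frame filled longest
   ago is the next victim, regardless of how recently it was used. */
int fifo_faults(const int *refs, int n) {
    int frames[NFRAMES];
    int next = 0, faults = 0;
    memset(frames, -1, sizeof frames);       /* -1 marks an empty frame */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];          /* evict oldest resident page */
            next = (next + 1) % NFRAMES;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("FIFO faults: %d\n", fifo_faults(refs, 20));   /* prints 15 */
    return 0;
}
```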

 Optimal Page Replacement (OPT):

 OPT minimizes page faults by replacing the page that will not be used for the longest
time in the future.

 OPT cannot be implemented in practice because it requires knowledge of future references, but it serves as a benchmark for evaluating other algorithms.

 In practice, algorithms aim to approximate OPT’s performance.

 LRU Page Replacement:

 Least Recently Used (LRU) assumes that the page that hasn't been used for the longest
time is least likely to be used again soon.

 Two common implementations:

o Counters: each page-table entry holds a time-of-use field; a logical clock is incremented on every memory access and copied into the field of the referenced page. On a fault, the page with the smallest time value is the least recently used and is replaced (sketched below).

o Stack: pages are kept in a stack; on each reference the page is moved to the top, so the least recently used page is always at the bottom.

 LRU is generally efficient and avoids Belady's anomaly.
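
A matching sketch of the counter-based implementation, under the same assumptions as the FIFO simulation above; on the same reference string with 3 frames, LRU incurs 12 faults:

```c
#include <stdio.h>
#include <string.h>

#define NFRAMES 3

/* Counter-based LRU: a logical clock is bumped on every access and
   copied into the referenced page's time-of-use stamp; on a fault,
   the frame with the smallest stamp is the victim. */
int lru_faults(const int *refs, int n) {
    int frames[NFRAMES], stamp[NFRAMES];
    int clock = 0, faults = 0;
    memset(frames, -1, sizeof frames);
    memset(stamp, 0, sizeof stamp);          /* empty frames lose first */
    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < NFRAMES; j++)
            if (frames[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) {
            stamp[hit] = ++clock;            /* refresh time of use */
        } else {
            int victim = 0;
            for (int j = 1; j < NFRAMES; j++) /* oldest stamp is evicted */
                if (stamp[j] < stamp[victim]) victim = j;
            frames[victim] = refs[i];
            stamp[victim] = ++clock;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("LRU faults: %d\n", lru_faults(refs, 20));   /* prints 12 */
    return 0;
}
```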

 LRU Approximation:

 A full implementation of LRU requires hardware support, but approximations are possible using reference bits and algorithms such as Second-Chance and Enhanced Second-Chance.

 Counting-Based Algorithms:

 LFU (Least Frequently Used) and MFU (Most Frequently Used) are based on counting
page references.

 LFU can be problematic when a page is used frequently at first and then never used
again.

 These algorithms are expensive to implement and do not approximate OPT well, so they are not widely used.

 Page Buffering Algorithms:

 Strategies include keeping a pool of free frames so a faulting process need not wait for a victim to be written out, writing modified pages to disk whenever the paging device is idle, and remembering the old contents of freed frames so a page can be reused without re-reading it from disk.

 Frame Allocation:

 Methods for allocating memory frames include equal allocation, proportional allocation, and variations based on process priority.

 Global allocation allows any page to be replaced, while local allocation restricts
replacement to pages belonging to the process in question.

 Non-Uniform Memory Access (NUMA):

 NUMA systems have memory access latencies that depend on the physical location of
the memory relative to the CPU.

 Memory and CPU scheduling are optimized by trying to keep processes on CPUs close to
their memory, minimizing latency.

 Thrashing:
 Thrashing occurs when a process spends more time paging than executing due to
insufficient memory.

 The working-set model helps prevent thrashing by ensuring that processes have enough
frames to work with the current locality of references.

 Page-Fault Frequency:

 A strategy to control page-fault rates directly by adjusting the number of frames allocated to a process based on its current page-fault rate (see the sketch below).

 If a process’s page-fault rate is too high, it gets more frames; if it's too low, it gives up
some frames.
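
A hypothetical control loop illustrating the policy; the struct, thresholds, and field names are invented for illustration and do not correspond to any real kernel interface:

```c
#include <stdio.h>

/* All names and threshold values here are made up for illustration. */
struct process { long faults; long refs; int frames; };

#define PFF_UPPER 0.10   /* fault rate above this: process needs frames */
#define PFF_LOWER 0.01   /* fault rate below this: frames to spare */

void pff_adjust(struct process *p) {
    double rate = (double)p->faults / (double)p->refs;
    if (rate > PFF_UPPER)
        p->frames++;                         /* grant an extra frame */
    else if (rate < PFF_LOWER && p->frames > 1)
        p->frames--;                         /* reclaim a frame */
}

int main(void) {
    struct process p = { .faults = 150, .refs = 1000, .frames = 4 };
    pff_adjust(&p);                          /* rate 0.15 > 0.10 */
    printf("frames after adjustment: %d\n", p.frames);   /* prints 5 */
    return 0;
}
```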

 Memory-Mapped Files:

 Files can be mapped to a process's virtual address space, allowing data to be paged into
memory for faster access.

 Memory-mapped files can be used for shared memory, enabling multiple processes to
access the same file or memory region.

 Writes go to the in-memory pages first, so modified data must be flushed to disk to ensure consistency (see the sketch below).
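
A minimal POSIX sketch of the idea: map a file, modify it with an ordinary store, then flush with msync(). The file name is a placeholder and error handling is kept to a minimum:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDWR);        /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    off_t len = lseek(fd, 0, SEEK_END);       /* map the whole file */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = '#';                  /* an ordinary store, not a write() call */
    msync(p, len, MS_SYNC);      /* flush the dirty page back to the file */

    munmap(p, len);
    close(fd);
    return 0;
}
```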

 Kernel Memory Allocation:

 Special care must be taken when allocating kernel memory: much of it cannot be paged out, and some of it (e.g., buffers used for device DMA) must be physically contiguous.

 Two common algorithms for managing kernel memory:

o Buddy System: Allocates memory in power-of-two sizes, splitting larger blocks when necessary. This makes coalescing adjacent free blocks simple (see the sketch after this list).

o Slab Allocation: Allocates memory in slabs for specific data structures, reducing
fragmentation and improving allocation speed.
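
A sketch of the address arithmetic that makes buddy coalescing cheap: for a block of size 2^k whose offset is aligned to its size, XOR-ing the offset with the size yields the buddy's offset. The helper name is invented:

```c
#include <stdio.h>

/* Flipping the bit corresponding to the block size toggles between
   the two halves of the parent block, so a single XOR finds the buddy. */
unsigned long buddy_of(unsigned long offset, unsigned long size) {
    return offset ^ size;
}

int main(void) {
    /* 64 KB blocks at offsets 0x00000 and 0x10000 are buddies; if both
       become free, they coalesce into one 128 KB block at 0x00000. */
    printf("buddy of 0x00000: 0x%05lx\n", buddy_of(0x00000UL, 0x10000UL));
    printf("buddy of 0x10000: 0x%05lx\n", buddy_of(0x10000UL, 0x10000UL));
    return 0;
}
```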

 Prepaging:

 Predicting which pages will be needed soon and preloading them into memory can
reduce page faults.

 However, incorrect predictions can cause unnecessary overhead.

 Page Size:

 The choice between small and large page sizes involves trade-offs, including memory waste, page fault frequency, and disk access efficiency.

 Larger pages reduce the overhead of page tables but increase the risk of internal fragmentation.

 TLB Reach:

 TLB (Translation Lookaside Buffer) reach is the amount of memory accessible through the entries cached in the TLB: the number of entries times the page size (see the worked example below).

 Increasing the number of TLB entries or using larger page sizes increases TLB reach, but larger pages may increase internal fragmentation.
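
A worked example under common textbook assumptions (a 64-entry TLB and 4 KB pages):

```latex
\text{TLB reach} = (\text{number of TLB entries}) \times (\text{page size})
                 = 64 \times 4\,\text{KB} = 256\,\text{KB}
```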

 Inverted Page Tables:

 Inverted page tables store one entry per physical frame, reducing the memory required
for page tables but complicating virtual memory management.

 Program Structure and Memory Access:

 The way an array is accessed in code (e.g., row-major vs. column-major order) can
impact page fault behavior significantly.

 Different programming languages store arrays in different layouts, and understanding this can help optimize memory access patterns (compare the two loop orders below).
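
A minimal illustration in C, which stores arrays in row-major order. Assuming 4 KB pages and 4-byte ints, each row of the array below fills exactly one page, so the two loop orders differ sharply in how often they change pages:

```c
#include <stdio.h>

#define N 1024

static int a[N][N];   /* C is row-major: a[i][0..N-1] are contiguous */

int main(void) {
    long sum = 0;

    /* Row-by-row: each page is faulted in once, then reused for the
       next 1023 accesses before the loop moves to the next page. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Column-by-column: consecutive accesses are 4 KB apart, so every
       access lands on a different page; with too few frames, each of
       the N*N accesses can fault. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}
```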

 I/O Interlock and Page Locking:

 Some kernel operations or I/O operations require pages to remain in memory and not be
swapped out, which may involve locking pages in memory.

 Operating System Examples:

 Windows: Uses demand paging with clustering and adjusts memory allocation based on
page faults and available free pages.

 Solaris: Uses the clock algorithm for page replacement and has mechanisms to manage
free memory, including adjusting the speed of the page scanner based on free memory
thresholds.

Memory Management & Virtual Memory Quiz

1. What is the primary benefit of using virtual memory?

a) Reduces the need for disk storage
b) Allows programs to use more memory than physically available
c) Increases the physical size of the RAM
d) Improves the quality of CPU processing

2. In demand paging, when is a page loaded into memory?

a) When the process starts
b) When the operating system schedules the process
c) When the page is accessed by the process
d) When the process terminates

3. What happens if a page fault occurs?

a) The process is terminated immediately
b) A page is loaded into memory from the disk
c) The operating system allocates more memory
d) The page is copied to the hard disk

4. What is the function of the invalid bit in the page table?

a) To track whether the page is currently being used
b) To mark a page as valid or invalid
c) To store the address of the page on the disk
d) To prioritize which pages should be swapped in next

5. What is pure demand paging?

a) All pages are preloaded into memory
b) Pages are loaded into memory only when needed
c) The entire process is loaded into memory at once
d) No page faults are allowed

6. What is the performance impact of a page fault?

a) No impact, it is handled instantly
b) It can cause a significant slowdown as the page is fetched from disk
c) It increases the CPU’s processing power
d) It eliminates the need for further paging

7. What is copy-on-write?

a) When pages are copied from the disk to memory only when modified
b) When pages are copied for a child process during a fork, even if not modified
c) When pages are shared by processes until one modifies them
d) A method to write data to disk after a process ends

8. How does the vfork() system call differ from fork()?

a) It creates a new process without allocating memory for it
b) The parent process is suspended, allowing the child process to run first
c) The child process is not allowed to execute
d) It allocates new memory for both parent and child

9. In page replacement, what happens when no free frames are available?

a) The process is terminated
b) The operating system allocates memory for the process
c) A page is selected for replacement and written to disk
d) The page is not loaded until memory is freed

10. Which of the following can help reduce the number of page faults?

a) Increasing the size of the page table
b) Reducing the page size
c) Using more memory frames for processes
d) Increasing the frequency of process swaps

11. What does the "dirty bit" indicate in page replacement?

a) Whether the page has been loaded into memory
b) Whether the page has been modified since it was loaded into memory
c) Whether the page is clean and ready to be swapped
d) Whether the page is accessible by other processes

12. Why might some systems transfer an entire process to swap space before starting
it?

a) To save time by reducing disk read operations during paging
b) To ensure the process runs faster
c) To avoid memory fragmentation
d) To prevent the page fault rate from increasing

Answer Key:

1. b) Allows programs to use more memory than physically available

2. c) When the page is accessed by the process

3. b) A page is loaded into memory from the disk

4. b) To mark a page as valid or invalid

5. b) Pages are loaded into memory only when needed

6. b) It can cause a significant slowdown as the page is fetched from disk

7. c) When pages are shared by processes until one modifies them

8. b) The parent process is suspended, allowing the child process to run first

9. c) A page is selected for replacement and written to disk

10. c) Using more memory frames for processes

11. b) Whether the page has been modified since it was loaded into memory

12. a) To save time by reducing disk read operations during paging

Quiz on Page-Fault Frequency and Memory Management

1. What is the purpose of the page-fault frequency approach in memory management?

 Answer: The purpose is to control the page-fault rate by dynamically allocating frames. If
the page-fault rate exceeds a certain upper bound, the process is allocated more frames,
and if it is below a lower bound, it can afford to release some frames.

2. How does page-fault frequency relate to the working-set model?

 Answer: There is a direct relationship between the page-fault rate and the working-set,
as a process moves between localities. A high page-fault rate indicates that the working
set may need more frames to minimize page faults.

3. Explain memory-mapped files.

 Answer: Memory-mapped files are files that are mapped to an address range within a
process’s virtual address space. Pages are brought into memory as needed using a
demand paging system, and writes are made to memory page frames rather than
directly to disk until the file is flushed.

4. What system call is typically used to ensure data written to a memory-mapped file is safely
saved to disk?

 Answer: A flush operation, such as the POSIX msync() system call, ensures that data written to a memory-mapped file is safely written to disk.

5. What is the difference between shared memory and memory-mapped files in terms of
implementation?
 Answer: Shared memory can be implemented via shared memory-mapped files (as in Windows) or via a separate shared-memory mechanism such as the System V shmget()/shmat() calls (as in UNIX/Linux). Both approaches allow multiple processes to access the same memory region.

6. What is the advantage of memory-mapped I/O for devices like video controllers?

 Answer: Memory-mapped I/O allows devices like video controllers to have their registers
mapped directly to an address in a process’s virtual address space, making device I/O as
fast and simple as any other memory access.

7. What are the two mechanisms for transferring bytes between the CPU and I/O devices?

 Answer: The two mechanisms are Programmed I/O (PIO), where the CPU periodically
checks the device’s control bit, and Interrupt Driven I/O, where the device generates an
interrupt when it is ready to send or receive data.

8. Explain the Buddy System for kernel memory allocation.

 Answer: The Buddy System allocates memory in power-of-two sizes. If the required
block size is not available, larger blocks are split into smaller blocks. It allows for fast
coalescing of free blocks back into larger blocks using the XOR operation on the block's
address.

9. What is Slab Allocation in kernel memory management?

 Answer: Slab Allocation divides memory into slabs, each containing one or more
contiguous pages. The kernel creates separate caches for different data structures,
allocating memory from these caches as needed. It avoids internal fragmentation and
provides fast allocation of structures.

10. What is prepaging, and when is it useful?

 Answer: Prepaging is the process of pre-loading pages into memory based on predictions of future page accesses. It is useful if the prediction is accurate, reducing page faults, but can slow down the system if the prediction is wrong.

11. What is the trade-off between small and large page sizes in memory management?

 Answer: Small pages reduce internal fragmentation but can increase the number of page faults. Large pages reduce the number of page faults and require smaller page tables, but they increase internal fragmentation and are less efficient for small, scattered data accesses.

12. What is the "TLB reach," and how is it affected by page size?
 Answer: TLB reach is the amount of memory that can be accessed through the entries in
the Translation Lookaside Buffer (TLB). Increasing the page size increases the TLB reach
but can also lead to greater fragmentation.

13. What is the purpose of inverted page tables?

 Answer: Inverted page tables store one entry per physical frame, rather than per virtual
page, to reduce memory usage for the page table. However, it requires additional
mechanisms to manage virtual memory paging, often using a separate page table for
each process stored on disk.

14. In a row-major storage format, how does array access pattern affect page faults?

 Answer: If the loops access elements row by row (outer loop iterating over rows, inner
loop iterating over columns), there will be fewer page faults. If the loops access
elements column by column, each access could cause a page fault, leading to a large
number of page faults.

15. What is I/O interlock and why is it necessary?

 Answer: I/O interlock ensures that pages involved in direct I/O operations are not
swapped out during the operation. This is critical to prevent data corruption or errors in
I/O operations, particularly when using direct memory access (DMA).

16. What happens in Windows when a process exceeds its maximum working set size?

 Answer: When a process exceeds its maximum working set size, pages from the process
are replaced using a local page-replacement algorithm, and the system may trim the
working set if necessary to free memory for other processes.

17. In Solaris, how does the system maintain memory availability for processes?

 Answer: Solaris maintains a free page list and uses parameters like lotsfree to trigger
page-out operations when free memory falls below a certain threshold. If memory gets
very low, processes may be swapped out to free up memory.
