1. Memory Management in a Bare Machine and Resident Monitor
Bare Machine: No OS, direct access to physical memory.
Resident Monitor: Basic OS in memory, includes simple memory management like fixed partitioning.
2. Memory Organization and Access
Bare Machine: Direct physical addressing.
Resident Monitor: Uses simple partitions with bounds checking.
3. Multiprogramming and Fixed Partitions
Multiprogramming: Run multiple programs concurrently.
Fixed Partitions: Memory divided into fixed-size blocks.
4. Allocation of Fixed Partitions
Each program is placed in the smallest free partition large enough to hold it; the unused space inside that partition is wasted (internal fragmentation).
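A minimal sketch of smallest-fit allocation into fixed partitions; the partition and job sizes below are made-up illustrative values, not from any specific system.

```python
# Smallest-fit allocation of jobs into fixed partitions (illustrative sizes).
PARTITIONS = [100, 200, 300, 500]            # fixed partition sizes in KB (assumed)
free = {i: size for i, size in enumerate(PARTITIONS)}

def allocate(job_kb):
    """Place the job in the smallest free partition that can hold it."""
    fits = [(size, i) for i, size in free.items() if size >= job_kb]
    if not fits:
        return None                          # no free partition is large enough: job waits
    size, i = min(fits)
    del free[i]                              # partition is now occupied
    internal_frag = size - job_kb            # wasted space inside the partition
    return {"partition": i, "internal_fragmentation_kb": internal_frag}

for job in (120, 90, 450):
    print(f"job {job} KB ->", allocate(job))
```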
5. Advantages and Limitations of Fixed Partitions
Advantages: Simple, low overhead.
Limitations: Internal fragmentation, inflexible.
6. Variable vs Fixed Partitions
Fixed: Predefined partition sizes; wastes space inside partitions (internal fragmentation).
Variable: Partitions sized to fit each process; leaves scattered holes between allocations (external fragmentation).
7. Challenges and Benefits of Variable Partitions
Benefits: Efficient memory use, accommodates different sizes.
Challenges: External fragmentation, complex allocation.
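A rough sketch of first-fit allocation over variable partitions; the hole layout and request sizes are invented for illustration. It shows why external fragmentation is a challenge: the total free space can be large while no single hole fits the request.

```python
# First-fit allocation over a list of free holes (variable partitions).
# Hole layout and request sizes are illustrative only.
holes = [(0, 150), (400, 100), (700, 300)]   # (start address, size in KB)

def first_fit(size):
    """Allocate from the first hole large enough; shrink or remove that hole."""
    for idx, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size == size:
                holes.pop(idx)
            else:
                holes[idx] = (start + size, hole_size - size)
            return start
    return None                              # no single hole is big enough

print(first_fit(120))   # fits in the first hole
print(first_fit(250))   # skips smaller holes, carves from the 300 KB hole
print(first_fit(150))   # fails: 180 KB free in total, but largest hole is 100 KB
                        # -> external fragmentation; compaction would be needed
```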
8. Paging and Segmentation
Paging: Divides memory into fixed-size blocks (pages in logical memory, frames in physical memory).
Segmentation: Divides memory into variable-size logical units such as code, data, and stack.
9. Implementation of Paging and Segmentation
Paging: Page tables map pages to frames.
Segmentation: Segment tables with base and limit.
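A small sketch of both translations; the page size, page-table entries, and segment base/limit values are assumed for illustration.

```python
PAGE_SIZE = 1024                  # assumed page size in bytes
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (illustrative)

def translate_paged(logical_addr):
    """Split a logical address into (page, offset) and map the page to a frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]                 # missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

# Segment table: segment number -> (base, limit), values illustrative.
segment_table = {0: (4000, 1200), 1: (9000, 500)}

def translate_segmented(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate_paged(2100))        # page 2, offset 52 -> frame 7
print(translate_segmented(0, 300))  # 4000 + 300 = 4300
```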
10. Paged Segmentation
Segments divided into pages. Combines logical organization and fixed-size allocation.
11. Combining Paging and Segmentation
Each segment has its own page table. Logical address = (segment number, page number, offset).
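A sketch of the combined translation under the assumption that each segment carries its own page table; all table contents and the page size are invented.

```python
PAGE_SIZE = 1024  # assumed page size in bytes

# Each segment has a limit and its own page table (page number -> frame number).
segments = {
    0: {"limit": 3 * PAGE_SIZE, "page_table": {0: 10, 1: 11, 2: 12}},  # e.g. code
    1: {"limit": 2 * PAGE_SIZE, "page_table": {0: 20, 1: 21}},         # e.g. data
}

def translate(segment_no, segment_offset):
    """Logical address = (segment, page, offset); the page comes from the segment offset."""
    seg = segments[segment_no]
    if segment_offset >= seg["limit"]:
        raise MemoryError("offset beyond segment limit")
    page, offset = divmod(segment_offset, PAGE_SIZE)
    frame = seg["page_table"][page]
    return frame * PAGE_SIZE + offset

print(translate(0, 2500))  # segment 0, page 2, offset 452 -> frame 12
print(translate(1, 100))   # segment 1, page 0, offset 100 -> frame 20
```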
12. Virtual Memory
Abstraction that lets processes use more memory than is physically available; disk space serves as backing store for pages swapped out of RAM.
13. Advantages of Virtual Memory
Runs large programs, process isolation, efficient RAM usage.
14. Demand Paging
Loads pages only when needed, reducing memory use.
15. How Demand Paging Works
A reference to a page not in memory triggers a page fault; the OS loads the page from disk into a free frame and updates the page table.
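A toy simulation of this fault path, assuming FIFO replacement once all frames are full; the frame count and reference string are arbitrary.

```python
from collections import OrderedDict

NUM_FRAMES = 3                       # assumed number of physical frames
page_table = OrderedDict()           # page -> frame; insertion order = load order (FIFO)
free_frames = list(range(NUM_FRAMES))
faults = 0

def access(page):
    """Return the frame for a page, handling a page fault if it is not resident."""
    global faults
    if page in page_table:
        return page_table[page]      # hit: page already in memory
    faults += 1                      # page fault: page must be brought in from disk
    if free_frames:
        frame = free_frames.pop()
    else:
        _, frame = page_table.popitem(last=False)  # evict the oldest resident page
    # (A real OS would read the page contents from the backing store here.)
    page_table[page] = frame         # update the page table
    return frame

for p in [1, 2, 3, 1, 4, 2]:
    access(p)
print("page faults:", faults)        # 4 faults for this reference string
```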
16. Benefits and Challenges of Demand Paging
Benefits: Saves memory, supports multiprogramming.
Challenges: Page faults require slow disk I/O, adding delay.
17. Performance Factors in Demand Paging
Page fault rate, page replacement strategy, and disk latency; together these determine the effective access time.
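A quick sketch of the effective access time (EAT) calculation; the timing values below are assumed, not measured.

```python
# Effective access time for demand paging (timing values are assumed).
memory_access_ns = 200             # normal memory access time
page_fault_service_ns = 8_000_000  # disk read plus OS overhead (~8 ms)

def effective_access_time(p):
    """p = probability that a memory reference causes a page fault."""
    return (1 - p) * memory_access_ns + p * page_fault_service_ns

for p in (0.0, 0.0001, 0.001):
    print(f"fault rate {p}: EAT = {effective_access_time(p):,.1f} ns")
# Even a 0.1% fault rate pushes EAT from 200 ns to roughly 8,200 ns.
```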
18. Thrashing and Mitigation
Thrashing: Excessive paging.
Mitigation: Working set model, increase RAM, better algorithms.
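A sketch of the working-set idea: the working set at time t is the set of distinct pages referenced in the last Δ references. The window size and reference string below are arbitrary illustration values.

```python
# Working-set computation over a page reference string (illustrative data).
DELTA = 4  # working-set window: look at the last DELTA references

def working_set(references, t, delta=DELTA):
    """Distinct pages touched in the window ending at position t."""
    window = references[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 5, 6, 5, 6]
for t in range(len(refs)):
    ws = working_set(refs, t)
    print(f"t={t}: working set = {sorted(ws)} (size {len(ws)})")

# If the total working-set size across processes exceeds the available frames,
# memory is over-committed and thrashing is likely; suspend a process.
```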
19. Cache Memory Organization and Access
Fast memory between the CPU and RAM. Organized in levels (L1, L2, L3) and accessed via mapping techniques (direct, set-associative, fully associative).
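A sketch of how a direct-mapped cache (one common mapping technique) decomposes an address into tag, line index, and block offset; the cache geometry is assumed.

```python
# Direct-mapped cache lookup fields (geometry assumed for illustration).
BLOCK_SIZE = 64    # bytes per cache line
NUM_LINES = 128    # lines in the cache

def split_address(addr):
    """Split a physical address into tag, line index, and block offset."""
    offset = addr % BLOCK_SIZE
    line = (addr // BLOCK_SIZE) % NUM_LINES
    tag = addr // (BLOCK_SIZE * NUM_LINES)
    return tag, line, offset

cache = {}  # line index -> tag currently stored (data omitted)

def access(addr):
    """Return True on a cache hit, False on a miss (filling the line)."""
    tag, line, _ = split_address(addr)
    if cache.get(line) == tag:
        return True
    cache[line] = tag        # miss: fetch the block from RAM and install it
    return False

print(access(0x1A2B40))  # miss (cold cache)
print(access(0x1A2B44))  # hit: same 64-byte block
```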
20. Benefits of Cache Memory
Faster access, improves CPU performance.
21. Locality of Reference
Programs tend to reuse recent or nearby data.
22. Temporal and Spatial Locality
Temporal: Recently used data is likely to be used again soon.
Spatial: Data near recently used data is likely to be used soon.
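A tiny illustration using a contiguous `array` so the addresses really are adjacent: the running total is reused every iteration (temporal locality), and the row-major traversal touches consecutive elements (spatial locality).

```python
import array

N = 4
data = array.array("i", range(N * N))    # flat row-major N x N array, stored contiguously

total = 0                                # temporal locality: reused on every iteration
for row in range(N):
    for col in range(N):
        total += data[row * N + col]     # spatial locality: consecutive indices, nearby addresses
print(total)
```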
23. Leveraging Locality in Memory Management
Caching, paging, and prefetching exploit locality.
24. Protection Schemes in Memory Management
Prevents unauthorized access between processes.
25. Implementation of Protection Schemes
Base-limit registers; access rights (read/write/execute bits) stored in page or segment table entries.
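A sketch combining both mechanisms: a base-limit bounds check plus per-page read/write permission bits, as they might appear in page-table entries. The base, limit, page size, and permission values are invented.

```python
# Base-limit check plus access-rights bits (illustrative values).
BASE, LIMIT = 30000, 12000         # process may use logical addresses [0, LIMIT)
PAGE_SIZE = 1024                   # assumed page size

# Per-page permissions, e.g. from page-table entries: (readable, writable)
permissions = {0: (True, False),   # code page: read-only
               1: (True, True)}    # data page: read/write

def check_access(logical_addr, write=False):
    """Raise if the address is out of bounds or the operation is not permitted."""
    if not 0 <= logical_addr < LIMIT:
        raise MemoryError("protection fault: address outside base-limit range")
    page = logical_addr // PAGE_SIZE
    readable, writable = permissions.get(page, (False, False))
    if write and not writable:
        raise PermissionError("protection fault: write to read-only page")
    if not write and not readable:
        raise PermissionError("protection fault: page not readable")
    return BASE + logical_addr     # physical address after relocation

print(check_access(500))               # read from code page: allowed
print(check_access(1500, write=True))  # write to data page: allowed
```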