
UNIT – 6 Memory Management

(Q.) What is Fragmentation? Explain the types of fragmentation with a diagram and example.

Ans. 1) As processes are loaded into and removed from memory, the free memory space is broken into little pieces.
2) It sometimes happens that processes cannot be allocated to memory blocks because the available blocks are too small, and those memory blocks remain unused. This is called fragmentation.
Internal Fragmentation:
Definition: Wasted space internal to a partition, which arises because the block of data loaded is smaller than the partition, is called internal fragmentation.

Key Characteristics:
1. Cause: Memory is allocated in fixed-sized blocks, and processes often do
not use the entire block.
2. Nature of Wastage: The wastage happens inside the allocated block.
3. Example:
o Suppose a memory block of 512 KB is allocated to a process that
only requires 400 KB.
o The remaining 112 KB within the allocated block is unused and
wasted.

Detailed Example:

Block Size Process Size Wasted Space

512 KB 400 KB 112 KB

1024 KB 900 KB 124 KB

If there are multiple processes, the wasted space accumulates, leading to significant memory inefficiency.
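
The wasted-space figures above can be reproduced with a short calculation. A minimal Python sketch (the block and process sizes are taken from the table; the variable names are only illustrative):

# Internal fragmentation: wasted space inside each fixed-size block.
# (block_size_kb, process_size_kb) pairs taken from the table above.
allocations = [(512, 400), (1024, 900)]

total_wasted = 0
for block_kb, process_kb in allocations:
    wasted = block_kb - process_kb        # space inside the block left unused
    total_wasted += wasted
    print(f"Block {block_kb} KB, process {process_kb} KB -> wasted {wasted} KB")

print(f"Total internal fragmentation: {total_wasted} KB")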

Diagram:
Impact:
 Memory utilization decreases, as part of the allocated memory remains
unused.
 The total memory available for new processes is reduced, even though
some parts of the allocated blocks remain empty.

Solutions:
1. Dynamic Partitioning: Allocate memory dynamically based on the exact
size required by a process.
2. Paging: Divide memory into fixed-size pages and allocate only the
required number of pages to a process.
3. Best-Fit Allocation: Use memory allocation algorithms that minimize
wastage.

Advantages and Disadvantages:

Advantages                                       Disadvantages

Simplifies memory allocation.                    Causes wastage of memory.

Reduces overhead of managing variable sizes.     Inefficient for systems with many processes.

External Fragmentation:
Definition: It occurs when enough total main memory space exists to satisfy a request, but the space is not contiguous; storage is fragmented into a large number of small holes.
Key Characteristics:
1. Cause: Processes dynamically allocate and release memory in varying
sizes, leaving small gaps in memory.
2. Nature of Wastage: The wastage happens outside allocated memory
blocks, in the form of small fragmented free spaces.
3. Example:
o Suppose there are free memory blocks of 100 KB, 200 KB, and 700 KB, but a process requires 800 KB.
o Even though the total free memory (1000 KB) is sufficient, the process cannot be allocated memory due to the lack of a contiguous 800 KB block.

Detailed Example:

Memory Block   Size (KB)

Allocated      200

Free           100

Allocated      500

Free           700

In this example, even though 800 KB of memory is free, at most 700 KB can be allocated contiguously, leaving gaps.
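
The failure can be checked programmatically: a request succeeds only if some single free hole is large enough, regardless of the total free space. A minimal Python sketch using the free holes from the table above (the variable names are illustrative):

# External fragmentation: total free space is enough, but no single
# contiguous hole is large enough for the request.
free_blocks_kb = [100, 700]          # free holes from the table above
request_kb = 800

total_free = sum(free_blocks_kb)
largest_hole = max(free_blocks_kb)

print(f"Total free: {total_free} KB, largest contiguous hole: {largest_hole} KB")
if largest_hole >= request_kb:
    print(f"Request for {request_kb} KB can be satisfied")
else:
    print(f"Request for {request_kb} KB fails despite {total_free} KB free")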

Diagram:
Impact:
 The scattered free space reduces the overall efficiency of memory
utilization.
 New memory allocation requests may fail even when there is enough
total free memory.

Solutions:
1. Compaction: Rearrange the memory to consolidate all free blocks into
one contiguous block. This method can be time-consuming.
2. Paging: Use a paging technique where memory is divided into fixed-size
pages, eliminating the need for contiguous memory allocation.
3. Segmentation with Paging: Combines segmentation and paging to
reduce fragmentation.

Advantages and Disadvantages:

Advantages                               Disadvantages

Allows dynamic allocation of memory.     Causes memory fragmentation over time.

Reduces internal fragmentation.          May require compaction to reclaim space.
1. Fixed Partitioning
Definition:
In fixed partitioning, the memory is divided into fixed-sized partitions at system
initialization. Each partition can hold exactly one process, regardless of the
process size.

Key Characteristics:
1. Static Partitioning: Partition sizes are determined at the time of system
setup and cannot be changed during execution.
2. Process Allocation:
o Each process is allocated to a partition large enough to
accommodate it.
o If the process size is smaller than the partition, the remaining
space results in internal fragmentation.
3. Limitations:
o If a process is larger than the largest partition, it cannot be
allocated memory.
o The fixed number of partitions limits the number of processes in memory.

Example:

Partition   Size (KB)   Process Size (KB)   Wasted Space (KB)

P1          1000        700                 300

P2          2000        1500                500

P3          1500        800                 700

In this example, internal fragmentation occurs because the allocated memory is larger than the process size.
Advantages:
 Simple to implement.
 Easy to manage since partitions are pre-defined.
 No external fragmentation.
Disadvantages:
 Internal Fragmentation: Wasted space within partitions.
 Inflexibility: Number and size of partitions cannot adapt to changing
workloads.
 Processes larger than the largest partition cannot be accommodated.

Diagram:
| P1: 1000 KB | P2: 2000 KB | P3: 1500 KB |
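
A minimal Python sketch of fixed-partition allocation using the sizes from the example table. Assigning each process to the first free partition that is large enough is an assumption for illustration, not a rule stated in these notes:

# Fixed partitioning: each process goes into the first free partition
# that is large enough; leftover space is internal fragmentation.
partitions = [{"name": "P1", "size": 1000, "used_by": None},
              {"name": "P2", "size": 2000, "used_by": None},
              {"name": "P3", "size": 1500, "used_by": None}]

processes = [("A", 700), ("B", 1500), ("C", 800)]

for pname, psize in processes:
    for part in partitions:
        if part["used_by"] is None and part["size"] >= psize:
            part["used_by"] = pname
            waste = part["size"] - psize   # internal fragmentation
            print(f"{pname} ({psize} KB) -> {part['name']} "
                  f"({part['size']} KB), wasted {waste} KB")
            break
    else:
        print(f"{pname} ({psize} KB) cannot be allocated")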
2. Dynamic Partitioning
Definition:
In dynamic partitioning, memory is allocated dynamically based on the size of
each process at runtime. Partitions are created as needed, matching the exact
size of the process.

Key Characteristics:
1. Flexible Partitioning: Memory partitions are not fixed and are created on
demand.
2. Process Allocation:
o Each process is allocated exactly the amount of memory it
requires.
o This eliminates internal fragmentation but can lead to external
fragmentation over time.
3. Compaction: To address external fragmentation, free memory can be
consolidated by moving processes closer together.

Example:

Process Requested Size (KB) Allocated Size (KB)

P1 700 700

P2 1500 1500

P3 800 800

No internal fragmentation occurs as memory is allocated exactly as required. Over time, however, free space may become fragmented.
Advantages:
 More efficient memory utilization.
 No internal fragmentation.
 Can handle processes of any size (within the total memory available).
Disadvantages:
 External Fragmentation: Over time, free memory may become
scattered.
 More complex memory management.
 Compaction may be required to consolidate free space.

Diagram:
| P1: 700 KB | P2: 1500 KB | P3: 800 KB | Free Space |
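
Compaction, mentioned above as the remedy for external fragmentation, slides the allocated partitions together so that all free space becomes one contiguous hole. A minimal Python sketch (the list-of-tuples memory layout is an illustrative assumption):

# Dynamic partitioning with compaction: allocated partitions are moved
# to the low end of memory and all free space is merged into one hole.
memory = [("P1", 700), ("free", 300), ("P2", 1500), ("free", 500), ("P3", 800)]

def compact(layout):
    allocated = [(name, size) for name, size in layout if name != "free"]
    free_total = sum(size for name, size in layout if name == "free")
    return allocated + [("free", free_total)]   # one contiguous hole at the end

print("Before:", memory)
print("After: ", compact(memory))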

Comparison Table:

                         Fixed Partitioning                          Dynamic Partitioning

Partition Size           Fixed, determined during initialization.    Variable, created dynamically as needed.

Internal Fragmentation   High (unused space within partitions).      None (memory allocated as needed).

External Fragmentation   None (fixed sizes prevent scattering).      Can occur as free space becomes scattered.

Flexibility              Limited (number and size are static).       High (adapts to process size).

Memory Utilization       Inefficient (due to wasted space).          Efficient (no wasted internal space).

Complexity               Simple to implement.                        More complex due to dynamic allocation.
Buddy System Fragmentation:
● The buddy system is a simple dynamic storage allocation method.

● The buddy system is a memory allocation and management algorithm that manages memory in power-of-two increments.

● The Linux operating system manages memory using the buddy algorithm.

Here’s a simplified explanation of the buddy system and its key concepts:

Key Points About the Buddy System


1. Memory Size is a Power of 2:
o Memory is organized in blocks of size 2^N (e.g., 128 pages, 1 MB).
o Each block is split into smaller blocks of size 2^k, where k < N.
2. Separate Lists for Free Blocks:
o The system maintains separate lists for free blocks of each size (2^N, 2^(N-1), …, 2^k).
o There can be at most N lists, one for each possible block size.
3. Buddies:
o When a block is split, it creates two halves called "buddies."
o Buddies can be merged back if they are both free.
4. Initial State:
o At the beginning, the memory has one large block (e.g., 128 pages in
the list for 2^N).
o All other lists are empty.
5. Allocation Process:
o Memory requests are rounded up to the nearest power of 2.
 Example: A request for 75 KB is rounded to 128 KB.
 Example: A request for 13 KB is rounded to 16 KB.
o If a block of the desired size is not available, the system finds a larger
block, splits it into smaller buddies, and continues splitting until the
desired size is reached.
6. Deallocation Process:
o When memory is freed, the system checks if the adjacent buddy block
is also free.
o If both buddies are free, they are merged into a larger block.
o This merging continues until no more buddies can be combined.
7. Internal Fragmentation:
o Internal fragmentation happens because memory requests are rounded
up to the next power of 2.
 Example: A 75 KB request rounded to 128 KB wastes 53 KB.
Advantages:
a) The buddy system is easy to implement.
b) It allocates a block of the correct size.
c) It is easy to merge adjacent holes.
d) Allocation and de-allocation of memory are fast.

Disadvantages:
a) It requires all allocation units to be powers of two.
b) It leads to internal fragmentation.
c) Merging of holes is recursive, so the worst-case behaviour is poor.
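
A minimal Python sketch of the rounding and splitting steps described above. It tracks only a count of free blocks per power-of-two size; buddy merging and address bookkeeping are omitted, and the single 1024 KB starting block is an illustrative assumption:

# Buddy system sketch: requests are rounded up to the next power of two;
# if no block of that size is free, a larger block is split repeatedly.
def round_up_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

# free_lists[size] = number of free blocks of that size (sizes in KB)
free_lists = {1024: 1}          # initially one 1024 KB block

def allocate(request_kb):
    size = round_up_pow2(request_kb)
    src = size
    while free_lists.get(src, 0) == 0:      # find a larger free block
        src *= 2
        if src > 1024:
            return None                     # out of memory
    while src > size:                       # split down to the wanted size
        free_lists[src] -= 1
        src //= 2
        free_lists[src] = free_lists.get(src, 0) + 2   # two buddies
    free_lists[size] -= 1
    print(f"request {request_kb} KB -> allocated {size} KB block "
          f"(internal fragmentation {size - request_kb} KB)")
    return size

allocate(75)    # rounded up to 128 KB, wasting 53 KB
allocate(13)    # rounded up to 16 KB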
Paging:
● The operating system divides each incoming program into pages of equal size. The sections of main memory are called page frames.
● Paging translates a logical address used by the CPU into a physical address in memory. It solves the external fragmentation problem.
● The relation between virtual addresses and physical memory addresses is given by the page table.

Address Translation in Paging:

● Fixed-sized blocks of main memory are called frames, and logical memory is broken into blocks of the same size called pages.

● The memory manager prepares the following before executing a program:

1. Find out the number of pages in the program.
2. Find free space in main memory.
3. Load all of the program's pages into memory.

● Pages are not loaded contiguously in main memory. Each page can be stored in any available page frame anywhere in main memory. The memory manager keeps track of the pages of the program. Paging avoids external fragmentation and the need for compaction.

● The memory manager uses three tables to keep track of processes and memory:

1. Job table: It stores the size of the active job and the memory location where its page table is stored.
2. Page map table: The Page Mapping Table (PMT) maps each page to its corresponding frame in physical memory, with one sequential entry per page.
3. Memory map table: The MMT stores each page frame's location and its status.

Page size depends upon the underlying hardware. The operating system maintains a page table for each process. The page table shows the frame location for each page of the process.
The processor translates logical addresses to physical addresses. A logical address consists of a page number and an offset, while a physical address consists of a frame number and an offset; the offset locates data within the frame.
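
A minimal Python sketch of this translation. The 1 KB page size and the small page table are illustrative assumptions, not values from these notes:

# Logical-to-physical address translation in paging.
PAGE_SIZE = 1024                 # assumed page size: 1 KB

# page_table[page_number] = frame_number (illustrative mapping)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset      = logical_address %  PAGE_SIZE
    frame_number = page_table[page_number]        # page-table lookup
    physical_address = frame_number * PAGE_SIZE + offset
    return page_number, offset, frame_number, physical_address

p, off, f, phys = translate(2100)    # logical address 2100
print(f"page {p}, offset {off} -> frame {f}, physical address {phys}")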

TLB
● A set of registers can be used to implement the page table. Registers are suitable only for a small page table. If the page table size increases, a special-purpose hardware cache is used for the page table instead.

● This special cache memory, called a Translation Lookaside Buffer (TLB), is used with the address translation hardware.

● The TLB is very effective in improving the performance of page frame access.

Paging using TLB:

When a page is first translated to a page frame, the mapping is read from primary memory into the TLB. The TLB entry contains the page number, the page frame's physical address, etc.
TLB Hit:
 If the page number is found in the TLB, it is a TLB hit, and the physical
address is quickly retrieved.
TLB Miss:
 If the page number is not in the TLB, it is a TLB miss.
 The page table in main memory is accessed to find the mapping, which
takes more time.
Associative Memory:
 TLB is an associative, high-speed memory that stores a small number of
page mappings for quick access.
Translation Buffer:
 A buffer in the TLB stores frequently used page table entries, improving
performance for repeated accesses.
Performance:
 TLB is fast but limited in size, so not all mappings can be stored
simultaneously.
(Optional)
Check TLB First:
 If TLB has the page info → Fast access (TLB hit).
TLB Miss:
 If TLB doesn’t have the info → Use page and segment tables to find it →
Slower access.
 The retrieved info is added to TLB for future references.
Performance Improvement:
 TLB improves performance if it is much faster than page table lookup.
 A high hit ratio (percentage of accesses found in TLB) boosts efficiency.
Advantages of Using TLB:
 Speed: Faster address translation by avoiding frequent page table access.
 Efficiency: Reduces the time required for memory access.

Disadvantages:
 Limited Size: TLB can only store a small number of entries.
 Overhead: Updating the TLB during a miss adds a slight delay.
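
A minimal Python sketch of the hit/miss flow described above, using one dictionary as the TLB and another as the in-memory page table. The two-entry TLB capacity and the simple eviction choice are illustrative assumptions:

# TLB lookup: check the TLB first; on a miss, walk the page table in
# memory and cache the result in the TLB for later references.
page_table = {0: 5, 1: 2, 2: 7, 3: 0}   # full mapping (in main memory)
tlb = {}                                 # small, fast cache of recent mappings
TLB_CAPACITY = 2

def lookup(page_number):
    if page_number in tlb:
        print(f"page {page_number}: TLB hit  -> frame {tlb[page_number]}")
        return tlb[page_number]
    frame = page_table[page_number]      # slow path: page-table access
    if len(tlb) >= TLB_CAPACITY:
        tlb.pop(next(iter(tlb)))         # evict the oldest entry (simplistic policy)
    tlb[page_number] = frame
    print(f"page {page_number}: TLB miss -> frame {frame} (cached)")
    return frame

for page in [1, 2, 1, 3, 2]:
    lookup(page)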

Page Table Structure:


Definition:
A page table is a data structure used in paging systems to map logical page
numbers to physical frame numbers in memory. Each process has its own page
table.

Structure:
1. Page Table Entries (PTEs):
o Each entry corresponds to a page in logical memory.
o Contains:
 Page Number: Logical page number.
 Frame Number: Physical frame number where the page is
stored.
 Status Bits:
 Valid Bit: Indicates if the page is loaded in memory.
 Protection Bit: Specifies access rights
(read/write/execute).
 Dirty Bit: Indicates if the page has been modified.
2. Page Table Size:
o Depends on the number of pages and the size of each entry.
o Larger logical address space → Larger page table.
3. Location:
o Stored in memory and managed by the operating system.

Advantages:
1. Simplifies memory allocation by mapping non-contiguous memory.
2. Supports process isolation with separate page tables.
Disadvantages:
1. Requires additional memory for storage.
2. Slower address translation without TLB.
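
The entry fields listed above can be pictured as a small record. A minimal Python sketch using a dataclass; the field types and example values are illustrative assumptions, not a real hardware layout:

# One page-table entry (PTE) with the status bits described above.
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    frame_number: int       # physical frame holding the page
    valid: bool = False     # is the page currently loaded in memory?
    protection: str = "r"   # access rights: "r", "rw", "rx", ...
    dirty: bool = False     # has the page been modified since it was loaded?

# A per-process page table: the list index is the logical page number.
page_table = [
    PageTableEntry(frame_number=5, valid=True, protection="rx"),
    PageTableEntry(frame_number=2, valid=True, protection="rw", dirty=True),
    PageTableEntry(frame_number=0, valid=False),   # page not in memory
]

for page_no, pte in enumerate(page_table):
    print(page_no, pte)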
Address Translation in segmentation:
VM using Paging:
Page Replacement Strategies
Page replacement is a technique used in virtual memory systems when a page
fault occurs and the system needs to bring a page into memory but there is no
free space. The operating system must decide which page to replace (evict)
from memory to make room for the new page. Here are two common page
replacement strategies:

1. FIFO (First-In, First-Out)


Concept:
 FIFO is the simplest page replacement algorithm.
 Pages are maintained in a queue in the order they were brought into
memory.
 When a page needs to be replaced, the page that has been in memory
the longest (i.e., the page at the front of the queue) is chosen for
replacement.
Steps:
1. When a page is brought into memory, it is added to the end of the queue.
2. If a page fault occurs and there is no free space, the page at the front of
the queue (oldest page) is replaced with the new page.
3. The page that was evicted is moved out of memory, and the new page is
inserted at the end of the queue.
Advantages:
 Simple and easy to implement.
Disadvantages:
 May not be optimal: FIFO does not always select the best page to evict.
Even if a page is rarely accessed, it might still get evicted just because it
was loaded earlier.
 Can result in poor performance due to unnecessary page replacements.
Example:
 Suppose memory can hold 3 pages (A, B, and C). The pages are loaded in
order of access. If the next page requested is D, FIFO will replace the
page that has been in memory the longest, which would be A.
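
A minimal Python simulation of FIFO replacement using a queue; the reference string and frame count are illustrative assumptions:

# FIFO page replacement: evict the page that has been resident longest.
from collections import deque

def fifo(reference_string, num_frames):
    frames = deque()                     # front = oldest resident page
    faults = 0
    for page in reference_string:
        if page not in frames:           # page fault
            faults += 1
            if len(frames) == num_frames:
                evicted = frames.popleft()     # oldest page leaves memory
                print(f"fault on {page}: evict {evicted}")
            else:
                print(f"fault on {page}: free frame used")
            frames.append(page)
    return faults

print("total faults:", fifo(["A", "B", "C", "D", "A", "B"], num_frames=3))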

2. LRU (Least Recently Used)


Concept:
 LRU replaces the page that has not been used for the longest time.
 It assumes that pages that were most recently used will likely be used
again in the near future.
Steps:
1. Maintain a record of the pages’ access history.
2. When a page is accessed, update its position to indicate it was recently
used.
3. When a page fault occurs and a page must be replaced, choose the page
that hasn’t been used for the longest period.
Advantages:
 LRU is more effective than FIFO because it takes into account which
pages are actually being used and which are not.
 Better performance in many real-world scenarios because it tends to
keep frequently accessed pages in memory.
Disadvantages:
 Complex implementation: To track the most recently used pages, the
system needs to maintain a history of accesses, which requires more
bookkeeping.
 Overhead: Updating the access history can introduce additional
overhead.
Example:
 Suppose memory can hold 3 pages (A, B, and C). The system keeps track
of the most recently accessed page. If the next page requested is D, LRU
will replace the page that was accessed least recently (e.g., C).
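
A minimal Python simulation of LRU replacement using an ordered dictionary to track recency; the reference string and frame count are illustrative assumptions:

# LRU page replacement: evict the page whose last use is furthest in the past.
from collections import OrderedDict

def lru(reference_string, num_frames):
    frames = OrderedDict()               # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)     # mark as most recently used
            continue
        faults += 1                      # page fault
        if len(frames) == num_frames:
            evicted, _ = frames.popitem(last=False)   # least recently used page
            print(f"fault on {page}: evict {evicted}")
        else:
            print(f"fault on {page}: free frame used")
        frames[page] = True
    return faults

print("total faults:", lru(["A", "B", "C", "A", "D", "B"], num_frames=3))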
Swapping:
Thrashing:
