
Lecture 8

Topics covered:
Memory subsystem
Memory hierarchy

❑ A big challenge in the design of a computer system is to
  provide a sufficiently large memory, with reasonable speed,
  at an affordable cost.
❑ Static RAM (SRAM):
  ◆ Very fast, but expensive, because a basic SRAM cell has a
    complex circuit, making it difficult to pack a large number of
    cells onto a single chip.
❑ Dynamic RAM (DRAM):
  ◆ Simpler basic cell circuit, hence much less expensive, but
    significantly slower than SRAM.
❑ Magnetic disks:
  ◆ The storage capacity of DRAMs is higher than that of SRAMs, but
    is still less than what is necessary.
  ◆ Secondary storage such as magnetic disks provides a large amount
    of storage, but is much slower than DRAM.

Memory hierarchy (contd..)

❑ All these types of memory units are employed effectively in
  a computer.
  ◆ SRAM -- smaller units where speed is critical, namely cache
    memories.
  ◆ DRAM -- large, yet affordable, memory, namely main memory.
  ◆ Magnetic disks -- huge amounts of cost-effective storage.
❑ Computer memory can be viewed as a hierarchy.

Memory hierarchy (contd..)

[Figure: the memory hierarchy -- processor registers, L1 cache, L2 cache,
main memory, magnetic disk. Moving down the hierarchy, size increases,
while speed and cost per bit decrease.]

•Fastest access is to the data held in processor registers. Registers are
 at the top of the memory hierarchy.
•A relatively small amount of memory can be implemented on the processor
 chip. This is the processor cache.
•There are two levels of cache. Level 1 (L1) cache is on the processor
 chip. Level 2 (L2) cache sits between the main memory and the processor.
•The next level is main memory, implemented as SIMMs. Much larger, but
 much slower, than cache memory.
•The next level is magnetic disk: a huge amount of inexpensive storage.
•Since the speed of memory access is critical, the idea is to bring the
 instructions and data that will be used in the near future as close to
 the processor as possible.

Cache memories

❑ The processor is much faster than the main memory.
  ◆ As a result, the processor has to spend much of its time waiting
    while instructions and data are fetched from the main memory.
  ◆ This is a major obstacle towards achieving good performance.
❑ The speed of the main memory cannot be increased beyond a
  certain point.
❑ Cache memory is an architectural arrangement which makes the main
  memory appear faster to the processor than it really is.
❑ Cache memory is based on the property of computer programs known
  as "locality of reference".

Locality of reference

❑ Analysis of programs indicates that many instructions in localized
  areas of a program are executed repeatedly during some period of
  time, while the others are accessed relatively less frequently.
  ◆ These instructions may be the ones in a loop, a nested loop, or a
    few procedures calling each other repeatedly.
  ◆ This is called "locality of reference".
❑ Temporal locality of reference:
  ◆ A recently executed instruction is likely to be executed again
    very soon.
❑ Spatial locality of reference:
  ◆ Instructions with addresses close to a recently executed
    instruction are likely to be executed soon.

Locality of reference (contd..)

❑ Cache memory is based on the concept of locality of reference.
  ◆ If the active segments of a program are placed in a fast cache
    memory, then the execution time can be reduced.
❑ Exploiting temporal locality:
  ◆ Whenever an instruction or data item is needed for the first
    time, it should be brought into the cache, where it will
    hopefully be used again repeatedly.
❑ Exploiting spatial locality:
  ◆ Instead of fetching just one item from the main memory to the
    cache at a time, several items that have addresses adjacent to
    the item being fetched may be useful.
  ◆ The term "block" refers to a set of contiguous address locations
    of some size. A cache block is also called a cache line.
❑ The small loop sketched below exhibits both forms of locality.
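As a concrete illustration (not part of the original slides), consider this
minimal C sketch; the array size is arbitrary:

```c
#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    long sum = 0;

    for (int i = 0; i < N; i++)
        a[i] = i;

    /* Temporal locality: the loop's instructions and the variable `sum`
     * are reused on every iteration. Spatial locality: a[i] and a[i+1]
     * occupy adjacent addresses, so one fetched cache line supplies
     * several consecutive elements. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}
```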

Cache memories

[Figure: Processor <-> Cache <-> Main memory]

•When the processor issues a Read request, a block of words is transferred
 from the main memory to the cache, one word at a time.
•Subsequent references to the data in this block of words are then found in
 the cache.
•At any given time, only some blocks of the main memory are held in the
 cache. Which blocks of the main memory are in the cache is determined by a
 "mapping function".
•When the cache is full and a block of words needs to be transferred from
 the main memory, some block of words in the cache must be replaced. This
 is determined by a "replacement algorithm".

Cache memories (contd..)

•The processor does not need to know explicitly about the existence of the
 cache. It issues Read and Write requests in the same manner, using normal
 memory addresses.

•If the data is in the cache, it is called a Read or Write hit.

•Read hit:
 - The data is obtained from the cache.

•Write hit:
 - The cache is a replica of the contents of the main memory.
 - The contents of the cache and the main memory may be updated
   simultaneously. This is the write-through protocol.
 - Alternatively, only the contents of the cache are updated, and the block
   is marked as updated by setting a bit known as the dirty bit or modified
   bit. The contents of the main memory are updated when this block is
   replaced. This is the write-back or copy-back protocol. A sketch
   contrasting the two policies follows.
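A minimal sketch (not from the slides) contrasting the two write-hit
policies; the structures and function names are illustrative only:

```c
#include <stdbool.h>
#include <string.h>

#define WORDS_PER_BLOCK 16

/* Illustrative structures: one cache block and a flat main memory. */
struct cache_block {
    int  data[WORDS_PER_BLOCK];
    bool dirty;                 /* used only by the write-back policy */
};

static int main_memory[1 << 16];

/* Write-through: update the cache and the main memory together. */
void write_hit_through(struct cache_block *b, int offset, int addr, int v) {
    b->data[offset]   = v;
    main_memory[addr] = v;      /* memory always stays current */
}

/* Write-back: update only the cache and set the dirty bit. */
void write_hit_back(struct cache_block *b, int offset, int v) {
    b->data[offset] = v;
    b->dirty        = true;
}

/* On eviction under write-back, a dirty block is copied to memory. */
void evict(struct cache_block *b, int block_start_addr) {
    if (b->dirty) {
        memcpy(&main_memory[block_start_addr], b->data, sizeof b->data);
        b->dirty = false;
    }
}

int main(void) {
    struct cache_block b = { .dirty = false };
    write_hit_through(&b, 3, 0x0103, 42); /* cache and memory updated   */
    write_hit_back(&b, 4, 99);            /* cache only; block is dirty */
    evict(&b, 0x0100);                    /* dirty block written back   */
    return 0;
}
```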

Cache memories (contd..)
•If the data is not present in the cache, then a Read miss or Write miss
 occurs.

•Read miss:
 - The block of words containing the requested word is transferred from the
   memory.
 - After the block is transferred, the desired word is forwarded to the
   processor.
 - The desired word may also be forwarded to the processor as soon as it is
   transferred, without waiting for the entire block. This is called
   load-through or early restart.

•Write miss:
 - If the write-through protocol is used, then the contents of the main
   memory are updated directly.
 - If the write-back protocol is used, the block containing the addressed
   word is first brought into the cache, and the desired word is then
   overwritten with the new information.

Cache memories (contd..)

•A bit called the "valid bit" is provided for each block.
•If the block contains valid data, this bit is set to 1; otherwise it is 0.
•Valid bits are set to 0 when the power is first turned on.
•When a block is loaded into the cache for the first time, its valid bit is
 set to 1.

•Data transfers between main memory and disk occur directly, bypassing the
 cache.

Mapping functions

❑ Mapping functions determine how memory blocks are placed in the cache.
❑ A simple processor example:
  ◆ Cache consisting of 128 blocks of 16 words each.
  ◆ Total size of the cache is 2048 (2K) words.
  ◆ Main memory is addressable by a 16-bit address.
  ◆ Main memory has 64K words.
  ◆ Main memory therefore has 4K (4096) blocks of 16 words each.
  ◆ Consecutive addresses refer to consecutive words.
❑ Three mapping functions:
  ◆ Direct mapping
  ◆ Associative mapping
  ◆ Set-associative mapping

Direct mapping
[Figure: a direct-mapped cache of 128 blocks. Main memory blocks 0, 128,
256, ... all map to cache block 0; blocks 1, 129, 257, ... to cache block
1; and so on, up to main memory block 4095.]

•Block j of the main memory maps to block (j modulo 128) of the cache.
 Thus block 0 maps to cache block 0, and block 129 maps to cache block 1.
•More than one memory block is mapped onto the same position in the cache.
•This may lead to contention for cache blocks even if the cache is not
 full.
•The contention is resolved by allowing the new block to replace the old
 block, leading to a trivial replacement algorithm.
•The memory address is divided into three fields:

     Tag   Block   Word
      5      7      4         (main memory address)

 - The low-order 4 bits determine one of the 16 words in a block.
 - When a new block is brought into the cache, the next 7 bits determine
   which cache block this new block is placed in.
 - The high-order 5 bits determine which of the possible 32 (4096/128)
   blocks is currently present in the cache. These are the tag bits.
•Simple to implement, but not very flexible.
•32 memory blocks map to each cache block. A decomposition sketch follows.
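A minimal sketch (not from the slides) of this address decomposition; the
address value is arbitrary:

```c
#include <stdio.h>
#include <stdint.h>

/* Field widths from the example: 16-bit address = 5 tag + 7 block + 4 word. */
#define WORD_BITS  4
#define BLOCK_BITS 7

int main(void) {
    uint16_t addr = 0xA6B4;                  /* an arbitrary 16-bit address */

    unsigned word  =  addr               & 0xF;  /* low-order 4 bits  */
    unsigned block = (addr >> WORD_BITS) & 0x7F; /* next 7 bits       */
    unsigned tag   =  addr >> (WORD_BITS + BLOCK_BITS); /* top 5 bits */

    printf("address 0x%04X -> tag %u, cache block %u, word %u\n",
           addr, tag, block, word);
    return 0;
}
```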
Associative mapping
[Figure: a fully associative cache. Any of main memory blocks 0 through
4095 can be placed into any of the 128 cache blocks.]

•A main memory block can be placed into any cache position.
•The memory address is divided into two fields:

     Tag   Word
     12     4         (main memory address)

 - The low-order 4 bits identify the word within a block.
 - The high-order 12 bits, the tag bits, identify a memory block when it
   is resident in the cache.
•Flexible, and uses cache space efficiently.
•Replacement algorithms can be used to replace an existing block in the
 cache when the cache is full.
•Cost is higher than for a direct-mapped cache because of the need to
 search all 128 tags to determine whether a given block is in the cache.
•A lookup sketch follows.
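A minimal sketch (not from the slides) of the associative lookup. Hardware
compares the tag against all 128 entries in parallel; this software
stand-in simply loops:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS 128

/* One entry of an illustrative fully associative cache directory. */
struct cache_entry {
    bool     valid;
    unsigned tag;    /* high-order 12 bits of the block's address */
};

int lookup(const struct cache_entry dir[NUM_BLOCKS], uint16_t addr) {
    unsigned tag = addr >> 4;            /* drop the 4 word bits */
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (dir[i].valid && dir[i].tag == tag)
            return i;                    /* hit: cache block i */
    return -1;                           /* miss */
}

int main(void) {
    struct cache_entry dir[NUM_BLOCKS] = {0};
    dir[5] = (struct cache_entry){ .valid = true, .tag = 0xA6B };

    printf("lookup -> %d\n", lookup(dir, 0xA6B4)); /* hit in block 5 */
    printf("lookup -> %d\n", lookup(dir, 0x1234)); /* miss (-1)      */
    return 0;
}
```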

Set-Associative mapping
[Figure: a 2-way set-associative cache. The 128 cache blocks are grouped
into 64 sets of two blocks each; main memory blocks 0 through 4095 map to
sets by their block number modulo 64.]

•Blocks of the cache are grouped into sets, and the mapping function allows
 a block of the main memory to reside in any block of a specific set.
•Divide the cache into 64 sets, with two blocks per set.
•Memory blocks 0, 64, 128, etc. map to set 0, and each of them can occupy
 either of the two positions within that set.
•The memory address is divided into three fields:

     Tag   Set   Word
      6     6     4         (main memory address)

 - The low-order 4 bits identify the word within a block.
 - The middle 6-bit field determines the set number.
 - The high-order 6 bits are compared to the tag fields of the two blocks
   in the set.
•Set-associative mapping is a combination of direct and associative
 mapping.
•The number of blocks per set is a design parameter:
 - One extreme is to have all the blocks in one set, requiring no set bits
   (fully associative mapping).
 - The other extreme, one block per set, is the same as direct mapping.
•64 memory blocks map to each set. A lookup sketch follows.
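A minimal sketch (not from the slides) of the 2-way set-associative lookup;
the directory contents are illustrative:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 64
#define WAYS      2

struct way { bool valid; unsigned tag; };

/* Illustrative 2-way set-associative directory: 64 sets x 2 ways. */
static struct way dir[NUM_SETS][WAYS];

int lookup(uint16_t addr) {
    unsigned set = (addr >> 4) & 0x3F;  /* middle 6 bits select the set  */
    unsigned tag =  addr >> 10;         /* high-order 6 bits are the tag */
    for (int w = 0; w < WAYS; w++)      /* compare both ways of the set  */
        if (dir[set][w].valid && dir[set][w].tag == tag)
            return w;                   /* hit in way w */
    return -1;                          /* miss */
}

int main(void) {
    uint16_t addr = 0xA6B4;
    dir[(addr >> 4) & 0x3F][1] = (struct way){ true, addr >> 10 };

    printf("hit in way %d\n", lookup(addr));  /* way 1 */
    return 0;
}
```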
Replacement algorithms

In a direct-mapped cache, the position that each memory block occupies in the
cache is fixed. As a result, the replacement strategy is trivial.
Associative and set-associative mapping provide some flexibility in deciding
which memory block occupies which cache block.
When a new block is to be transferred to the cache, and all the positions it
may occupy are full, which block in the cache should be replaced?

Locality of reference suggests that it is reasonable to replace the block
that has gone the longest time without being referenced.
This block is called the Least Recently Used (LRU) block, and the replacement
strategy is called the LRU replacement algorithm.
The LRU algorithm has been used extensively, although it provides poor
performance in some cases.
Performance of the LRU algorithm may be improved by introducing a small
amount of randomness into the choice of block to replace.

Other replacement algorithms include removing the "oldest" block (first-in,
first-out). However, such algorithms disregard the locality of reference
principle and do not perform as well.

LRU replacement algorithm
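The worked example that accompanied this slide was lost in this copy. As a
stand-in, here is a minimal sketch (my own) of one common counter-based LRU
scheme for a 4-way set: each way holds a counter 0..3, the referenced way is
reset to 0, every counter smaller than the referenced way's old value is
incremented, and the way whose counter reaches 3 is the least recently used:

```c
#include <stdio.h>

#define WAYS 4

static int lru[WAYS];   /* counters; at all times a permutation of 0..3 */

void touch(int r) {     /* record a reference to way r */
    for (int w = 0; w < WAYS; w++)
        if (lru[w] < lru[r])
            lru[w]++;
    lru[r] = 0;         /* r is now the most recently used */
}

int victim(void) {      /* way to replace on a miss */
    for (int w = 0; w < WAYS; w++)
        if (lru[w] == WAYS - 1)
            return w;
    return 0;           /* unreachable while counters stay consistent */
}

int main(void) {
    for (int w = 0; w < WAYS; w++)
        lru[w] = w;     /* initial ordering: way 0 newest, way 3 oldest */

    touch(2); touch(0); touch(2); touch(3);
    printf("LRU victim: way %d\n", victim()); /* way 1: unreferenced longest */
    return 0;
}
```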

Performance considerations

❑ A key design objective is to achieve the best possible performance at the
  lowest possible cost.
  ◆ The price/performance ratio is a common measure.
❑ Performance of a processor depends on:
  ◆ How fast machine instructions can be brought into the processor for
    execution.
  ◆ How fast the instructions can be executed.
❑ The memory hierarchy described earlier was created to increase the speed
  and size of the memory at an affordable cost.
❑ Data need to be transferred between the various units of this hierarchy
  as well.
  ◆ The speed and efficiency of data transfer between these memory units
    also affect performance.

Interleaving

❑ The main memory of a computer is structured as a collection of modules.
  ◆ Each module has its own address buffer register (ABR) and data buffer
    register (DBR).
❑ Memory access operations can proceed in more than one module at a time,
  increasing the rate of transfer of words to and from the main memory.
❑ How individual addresses are distributed over the modules is critical in
  determining how many modules can be kept busy simultaneously.

Interleaving (contd..)
      k bits        m bits
     Module  | Address in module        (main memory address)

[Figure: n memory modules, numbered 0 to n-1, each with its own ABR and
DBR.]

•Consecutive words are placed in the same module.
•The high-order k bits of a memory address determine the module.
•The low-order m bits of a memory address determine the word within a
 module.
•When a block of words is transferred from main memory to cache, only one
 module is busy at a time.
•Other modules may meanwhile be involved in data transfers between main
 memory and disk, using DMA transfers.

Interleaving (contd..)
         m bits         k bits
    Address in module | Module        (main memory address)

[Figure: 2^k memory modules, numbered 0 to 2^k - 1, each with its own ABR
and DBR.]

•Consecutive words are located in consecutive modules.
•The low-order k bits of a memory address select a module.
•The high-order m bits select a word location within that module.
•Thus, consecutive addresses are located in consecutive modules.
•While transferring a block of data, several memory modules can be kept
 busy at the same time.
•This gives faster access and higher memory utilization.
•This arrangement is called "memory interleaving". A sketch of the two
 address-splitting schemes follows.
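A minimal sketch (not from the slides), assuming 4 modules (k = 2) and a
16-bit address:

```c
#include <stdio.h>
#include <stdint.h>

#define K         2      /* 2^K = 4 modules        */
#define ADDR_BITS 16     /* m = ADDR_BITS - K = 14 */

int main(void) {
    for (uint16_t addr = 0; addr < 8; addr++) {
        /* Non-interleaved: high-order k bits choose the module. */
        unsigned mod_hi = addr >> (ADDR_BITS - K);
        /* Interleaved: low-order k bits choose the module, so
         * consecutive addresses fall in consecutive modules.   */
        unsigned mod_lo = addr & ((1u << K) - 1);

        printf("addr %2u -> module %u (high-order) / module %u (interleaved)\n",
               addr, mod_hi, mod_lo);
    }
    return 0;
}
```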
Memory interleaving Example

Using a single memory module:
[Figure: timing diagram lost in this copy.]

Using four interleaved memory modules:
[Figure: timing diagram lost in this copy.]
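Since the original figures are unavailable, the comparison below uses
assumed, illustrative timings: 1 cycle to send an address, 6 cycles for a
module to access a word, 1 cycle to transfer a word over the bus, and a
block of 4 words:

```c
#include <stdio.h>

#define ADDR   1   /* cycles to send the address           */
#define ACCESS 6   /* cycles for a module to access a word */
#define XFER   1   /* cycles to transfer one word          */
#define WORDS  4   /* words per block                      */

int main(void) {
    /* Single module: the words are accessed strictly one after another. */
    int single = ADDR + WORDS * (ACCESS + XFER);

    /* Four-way interleaving: all four modules access their word in
     * parallel, then the words stream over the bus one per cycle.   */
    int interleaved = ADDR + ACCESS + WORDS * XFER;

    printf("single module:     %d cycles\n", single);      /* 29 */
    printf("4-way interleaved: %d cycles\n", interleaved); /* 11 */
    return 0;
}
```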
Performance enhancements
Write buffer
Write-through:
•Each write operation involves writing to the main memory.
•If the processor has to wait for each write operation to complete, it is
 slowed down.
•However, the processor does not depend on the results of the write
 operation.
•A write buffer can be included for temporary storage of write requests.
•The processor places each write request into the buffer and continues
 execution.
•If a subsequent Read request references data which is still in the write
 buffer, then this data is obtained from the write buffer.

Write-back:
•A block is written back to the main memory when it is replaced.
•If the processor waits for this write to complete before reading the new
 block, it is slowed down.
•A fast write buffer can hold the block to be written while the new block
 is read first. A sketch of such a buffer follows.
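A minimal sketch (not from the slides) of a write buffer with the Read
check described above; the sizes and names are illustrative:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 8

/* Pending writes waiting to be drained to main memory. */
struct pending { uint16_t addr; int value; };

static struct pending buf[BUF_SIZE];
static int head, tail, count;

static int main_memory[1 << 16];

/* Processor side: queue the write and continue executing. */
bool buffer_write(uint16_t addr, int value) {
    if (count == BUF_SIZE)
        return false;                   /* buffer full: processor stalls */
    buf[tail] = (struct pending){ addr, value };
    tail = (tail + 1) % BUF_SIZE;
    count++;
    return true;
}

/* Memory side: drain one entry to main memory when the bus is free. */
void buffer_drain(void) {
    if (count == 0) return;
    main_memory[buf[head].addr] = buf[head].value;
    head = (head + 1) % BUF_SIZE;
    count--;
}

/* A Read must check the buffer: the newest matching entry wins. */
int read_word(uint16_t addr) {
    int result = main_memory[addr];
    for (int i = 0, j = head; i < count; i++, j = (j + 1) % BUF_SIZE)
        if (buf[j].addr == addr)
            result = buf[j].value;      /* data still in the buffer */
    return result;
}

int main(void) {
    buffer_write(0x0042, 7);
    printf("before drain: %d\n", read_word(0x0042)); /* 7, from buffer */
    buffer_drain();
    printf("after drain:  %d\n", read_word(0x0042)); /* 7, from memory */
    return 0;
}
```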

Performance enhancements
Prefetching

•Normally, new data are brought into the processor only when they are first
 needed.
•The processor then has to wait until the data transfer is complete.
•Instead, data can be prefetched into the cache before they are actually
 needed, that is, before a Read miss occurs.
•Prefetching should (hopefully) occur while the processor is busy executing
 instructions that do not result in a read miss.
•Prefetching can be accomplished through software, by including a special
 prefetch instruction in the machine language of the processor.
 - Inclusion of prefetch instructions increases the length of programs.
•Prefetching can also be accomplished in hardware:
 - Circuitry attempts to discover patterns in memory references and then
   prefetches according to these patterns.
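Compilers expose software prefetching through built-ins; for example, GCC
and Clang provide __builtin_prefetch, which compiles to the target's
prefetch instruction where one exists. A minimal sketch; the array size and
prefetch distance are arbitrary tuning choices:

```c
#include <stdio.h>

#define N     4096
#define AHEAD 16     /* how far ahead to prefetch: a tuning parameter */

int main(void) {
    static int a[N];
    long sum = 0;

    for (int i = 0; i < N; i++)
        a[i] = i;

    for (int i = 0; i < N; i++) {
        if (i + AHEAD < N)
            /* Hint: fetch a[i + AHEAD] for reading (0) with normal
             * temporal locality (3), overlapping the miss latency
             * with the work on the current elements. */
            __builtin_prefetch(&a[i + AHEAD], 0, 3);
        sum += a[i];
    }

    printf("sum = %ld\n", sum);
    return 0;
}
```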

Performance enhancements
Lockup-Free Cache

•A prefetching scheme does not work well if it stops other accesses to the
 cache until the prefetch is completed.
•A cache of this type is said to be "locked" while it services a miss.
•A cache structure which supports multiple outstanding misses is called a
 lockup-free cache.
•Since more than one miss can be serviced at a time, a lockup-free cache
 must include circuits that keep track of all the outstanding misses.
•Special registers may hold the necessary information about these misses.

Virtual memories

❑ Recall that an important challenge in the design of a computer system is
  to provide a large, fast memory system at an affordable cost.
❑ Architectural solutions exist to increase the effective speed and size of
  the memory system.
❑ Cache memories were developed to increase the effective speed of the
  memory system.
❑ Virtual memory is an architectural solution to increase the effective
  size of the memory system.

Virtual memories (contd..)

❑ Recall that the addressable memory space depends on the number of address
  bits in a computer.
  ◆ For example, if a computer issues 32-bit addresses, the addressable
    memory space is 4G bytes (divided into user space and system space).
❑ The physical main memory in a computer is generally not as large as the
  entire possible addressable space.
  ◆ Physical memory typically ranges from a few hundred megabytes to 1G
    bytes.
❑ Large programs that cannot fit completely into the main memory have parts
  of them stored on secondary storage devices such as magnetic disks.
  ◆ Pieces of a program must be transferred to the main memory from
    secondary storage before they can be executed.

Question: suppose you have 2 GB of RAM and a user program needs only 1.5 GB
to run. Why might the program still fail to run without virtual memory?
Virtual memories (contd..)

❑ When a new piece of a program is to be transferred to the main memory,
  and the main memory is full, then some other piece in the main memory
  must be replaced.
  ◆ Recall that this is very similar to what we studied in the case of
    cache memories.
❑ The operating system automatically transfers data between the main memory
  and secondary storage.
  ◆ The application programmer need not be concerned with this transfer.
  ◆ Also, the application programmer does not need to be aware of the
    limitations imposed by the available physical memory.

Virtual memories (contd..)

❑ Techniques that automatically move program and data between main memory
  and secondary storage when they are required for execution are called
  virtual-memory techniques.
❑ Programs and processors reference instructions and data independently of
  the size of the main memory.
❑ The processor issues binary addresses for instructions and data.
  ◆ These binary addresses are called logical or virtual addresses.
❑ Virtual addresses are translated into physical addresses by a combination
  of hardware and software subsystems.
  ◆ If a virtual address refers to a part of the program that is currently
    in the main memory, it is accessed immediately.
  ◆ If the address refers to a part of the program that is not currently in
    the main memory, it is first transferred to the main memory before it
    can be used.

Virtual memory

❑ Virtual memory is a feature of an operating system (OS), implemented
  using both software and hardware, that allows a computer to compensate
  for shortages of physical memory by temporarily transferring pages of
  data from random access memory (RAM) to disk storage.
❑ In effect, RAM acts like a cache for the disk.
❑ The primary benefits of virtual memory include freeing applications from
  having to manage a shared memory space, increased security (memory
  protection) due to memory isolation, and being able to conceptually use
  more memory than might be physically available, using the technique of
  paging.

Virtual (logical) address

❑ A virtual address is a binary number in virtual memory that enables a
  process to use a location in primary storage (RAM) independently of other
  processes, and to use more space than actually exists in primary storage,
  by temporarily relegating some contents to a hard disk or internal flash
  drive.
❑ In a computer with both physical and virtual memory, a memory management
  unit (MMU) coordinates and controls all of the memory resources,
  assigning portions called pages (large blocks) to various running
  programs to optimize system performance. By translating between virtual
  addresses and physical addresses, the MMU allows every running process to
  "think" that it has all the primary storage (RAM) to itself.

Virtual memory organization

[Figure: the processor issues a virtual address to the MMU; the MMU
produces a physical address, which is used to access the cache and the
main memory; the main memory and disk storage exchange data by DMA
transfer.]

•The memory management unit (MMU) translates virtual addresses into
 physical addresses.
•If the desired data or instructions are in the main memory, they are
 fetched as described previously.
•If the desired data or instructions are not in the main memory, they must
 be transferred from secondary storage to the main memory.
•The MMU causes the operating system to bring the data from secondary
 storage into the main memory.

Address translation

❑ Assume that programs and data are composed of fixed-length units called
  pages.
❑ A page consists of a large block of words that occupy contiguous
  (neighboring) locations in the main memory.
❑ A page is the basic unit of information that is transferred between
  secondary storage and main memory.
❑ The size of a page commonly ranges from 2K to 16K bytes.
  ◆ Pages should not be too small: data can be transferred at high rates
    (megabytes per second) between a secondary storage device and the main
    memory, so larger transfers amortize the fixed access overhead.
  ◆ Pages should not be too large, else a large portion of a page may not
    be used, and it will occupy valuable space in the main memory.

Address translation (contd..)

❑ The concepts of virtual memory are similar to the concepts of cache
  memory.
❑ Cache memory:
  ◆ Introduced to bridge the speed gap between the processor and the main
    memory.
  ◆ Implemented in hardware.
❑ Virtual memory:
  ◆ Introduced to bridge the speed gap between the main memory and
    secondary storage.
  ◆ Implemented in hardware and software.

Address translation (contd..)

❑ Each virtual or logical address generated by a processor is interpreted
  as a virtual page number (high-order bits) plus an offset (low-order
  bits) that specifies the location of a particular byte within that page.
❑ Information about the main memory location of each page is kept in the
  page table:
  ◆ The main memory address where the page is stored.
  ◆ The current status of the page.
❑ An area of the main memory that can hold a page is called a page frame.
❑ The starting address of the page table is kept in a page table base
  register.

Address translation (contd..)

❑ The virtual page number generated by the processor is added to the
  contents of the page table base register.
  ◆ This provides the address of the corresponding entry in the page table.
❑ The contents of this location in the page table give the starting address
  of the page, if the page is currently in the main memory.

Address translation (contd..)
[Figure: address translation. The page table base register (PTBR) holds
the starting address of the page table. The virtual address from the
processor is interpreted as a virtual page number plus an offset. PTBR +
virtual page number gives the address of the page's entry in the page
table; that entry holds control bits and the page frame number in memory.
The page frame number, combined with the offset (the word address within
the page), forms the physical address in main memory.]
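A minimal sketch (not from the slides) of the translation path the figure
describes, assuming 4K-byte pages (12 offset bits) and a small software
array standing in for the page table:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define OFFSET_BITS 12                     /* 4K-byte pages */
#define PAGE_MASK   ((1u << OFFSET_BITS) - 1)
#define NUM_PAGES   256                    /* illustrative, small table */

/* One page table entry: control bits plus the page frame number. */
struct pte {
    bool     valid;                        /* page present in main memory? */
    bool     dirty;                        /* modified since it was loaded */
    uint32_t frame;                        /* page frame number            */
};

static struct pte page_table[NUM_PAGES];   /* stand-in for PTBR + table */

/* Translate a virtual address; returns false on a page fault. */
bool translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;    /* high-order bits */
    uint32_t offset = vaddr & PAGE_MASK;       /* low-order bits  */

    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return false;                          /* page fault: OS must act */

    *paddr = (page_table[vpn].frame << OFFSET_BITS) | offset;
    return true;
}

int main(void) {
    page_table[2] = (struct pte){ .valid = true, .frame = 7 };

    uint32_t paddr;
    if (translate(0x2ABC, &paddr))             /* vpn 2, offset 0xABC */
        printf("0x2ABC -> 0x%X\n", paddr);     /* frame 7 -> 0x7ABC   */
    return 0;
}
```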

Address translation (contd..)

❑ The page table entry for a page also includes some control bits which
  describe the status of the page while it is in the main memory.
❑ One bit indicates the validity of the page:
  ◆ It indicates whether the page is actually loaded into the main memory.
  ◆ It allows the operating system to invalidate the page without actually
    removing it (compare a quick format to a full format).
❑ One bit indicates whether the page has been modified during its residency
  in the main memory:
  ◆ This bit determines whether the page should be written back to the disk
    when it is removed from the main memory.
  ◆ It is similar to the dirty or modified bit in the case of cache memory.

Address translation (contd..)

❑ Other control bits implement various other types of restrictions that may
  be imposed.
  ◆ For example, a program may have only read permission for a page, but
    not write or modify permission.

For a 4KB page you require 12 offset bits, since 4K = 4 * 1024 = 4096 =
2^12.

A 32-bit address space would therefore require a page table of 2^32 / 4K =
1,048,576 entries when using 4KB pages.
Address translation (contd..)

❑ Where should the page table be located?
❑ Recall that the page table is used by the MMU for every read and write
  access to the memory.
  ◆ The ideal location for the page table is therefore within the MMU.
❑ However, the page table is quite large, and the MMU is implemented as
  part of the processor chip, so it is impossible to include a complete
  page table on the chip.
❑ The page table is therefore kept in the main memory.
❑ A copy of a small portion of the page table can be accommodated within
  the MMU.
  ◆ This portion consists of the page table entries that correspond to the
    most recently accessed pages.
