Cache Memory
1) Mohd. Maviya Ansari - Introduction to Cache Memory
2) Rishab Yadav - Direct Mapping Techniques
3) Ankush Singh - Fully Associative Mapping Techniques
4) Prabjyot Singh - Set Associative Mapping Techniques
Cache memory is a small, fast type of volatile computer memory that provides high-speed data access to a processor and stores frequently used computer programs, applications and data. Being volatile, it retains data only while the computer is powered on.
Small amount of fast memory
Sits between normal main memory and CPU
May be located on CPU chip or module
[Figure: typical cache organization. The cache sits between the processor and the system bus, connected through an address buffer, control lines, and a data buffer.]
 The CPU initially looks in the Cache for the data it needs
 If the data is there, it will retrieve it and process it
 If the data is not there, then the CPU accesses the system memory and
then puts a copy of the new data in the cache before processing it
 Next time if the CPU needs to access the same data again, it will just
retrieve the data from the Cache instead of going through the whole
loading process again
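This hit/miss flow can be sketched in C. Everything in the layout below is assumed purely for illustration (one word per line, 256 lines, a plain array standing in for system memory, and the made-up name cpu_read), not taken from the slides:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 256                      /* assumed cache size for the sketch */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t data;                         /* one word per line, for simplicity */
} CacheLine;

static CacheLine cache[NUM_LINES];
static uint32_t  main_memory[1 << 16];     /* stand-in for (slow) system memory */

static uint32_t read_main_memory(uint32_t address)
{
    return main_memory[address];           /* in reality: a much slower access */
}

static uint32_t cpu_read(uint32_t address)
{
    uint32_t line = address % NUM_LINES;   /* the one place this address may live */
    uint32_t tag  = address / NUM_LINES;   /* identifies which block occupies it */

    /* 1. The CPU first looks in the cache. */
    if (cache[line].valid && cache[line].tag == tag)
        return cache[line].data;           /* hit: use the cached copy */

    /* 2. Miss: access system memory, then keep a copy in the cache. */
    uint32_t value = read_main_memory(address);
    cache[line] = (CacheLine){ .valid = true, .tag = tag, .data = value };
    return value;                          /* 3. the next access to this address hits */
}

int main(void)
{
    main_memory[100] = 42;
    printf("%u (miss: loaded from system memory)\n", cpu_read(100));
    printf("%u (hit: served from the cache)\n", cpu_read(100));
    return 0;
}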
Level 1(L1) Cache:
 L1 cache is the fastest cache and is usually built into the processor chip itself.
 The L1 cache typically ranges in size from 8KB to 64KB and uses the
high-speed SRAM (static RAM) instead of the slower and cheaper
DRAM (dynamic RAM) used for main memory.
 It is referred to as internal cache or primary cache.
Level 2(L2) Cache:
 The L2 cache is larger but slower in speed than L1 cache.
 It stores recently accessed information. Also known as secondary cache, it reduces the time needed to access data that has already been accessed previously.
 L2 cache comes between L1 and RAM(processor-L1-L2-RAM) and is
bigger than the primary cache (typically 64KB to 4MB).
Level 3(L3) Cache:
 L3 cache is a further level of cache memory, built into the motherboard of the computer.
 It is used to feed the L2 cache and is typically faster than the system's main memory, but still slower than the L2 cache, usually providing more than 3 MB of storage.
Commonly used methods:
Direct-Mapped Cache
Associative Mapped Cache
Set-Associative Mapped Cache
Each block of main memory maps to only one cache
line
i.e. if a block is in cache, it must be in one specific
place
Address is in two parts
Least Significant w bits identify unique word
Most Significant s bits specify one memory block
The MSBs are split into a cache line field r and a tag of
s-r (most significant)
24 bit address
2 bit word identifier (4 byte block)
22 bit block identifier
8 bit tag (=22-14)
14 bit slot or line
No two blocks in the same line have the same Tag field
Check contents of cache by finding line and checking Tag
Tag (s-r) = 8 bits | Line or Slot (r) = 14 bits | Word (w) = 2 bits
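In C, extracting the three fields from a 24-bit address with this 8/14/2 split looks roughly like the sketch below (the sample address is arbitrary, chosen only for illustration):

#include <stdint.h>
#include <stdio.h>

/* Field widths from the example: 24-bit address = 8-bit tag | 14-bit line | 2-bit word. */
#define WORD_BITS 2
#define LINE_BITS 14

int main(void)
{
    uint32_t address = 0x16339C;           /* any 24-bit address */

    uint32_t word = address & ((1u << WORD_BITS) - 1);                 /* bits 1..0  */
    uint32_t line = (address >> WORD_BITS) & ((1u << LINE_BITS) - 1);  /* bits 15..2 */
    uint32_t tag  = address >> (WORD_BITS + LINE_BITS);                /* bits 23..16 */

    /* The line field selects the single slot to check; the tag stored in that
       slot is then compared with the address tag to decide hit or miss. */
    printf("tag=%02X line=%04X word=%X\n",
           (unsigned)tag, (unsigned)line, (unsigned)word);
    return 0;
}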
Advantages
The tag memory is much smaller than in associative
mapped cache.
No need for an associative search, since the slot field directs the comparison to a single slot.
Disadvantages
Consider what happens when a program references
locations that are 2^19 words apart, which is the size
of the cache. Every memory reference will result in
a miss, which will cause an entire block to be read
into the cache even though only a single word is
used.
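The thrashing described above follows directly from the address fields. The sketch below reuses the 8/14/2 widths of the earlier example (so its cache spans 2^16 addressable units rather than the 2^19 words of the figure this slide refers to): two addresses exactly one cache-size apart land on the same line and keep evicting each other.

#include <stdint.h>
#include <stdio.h>

#define WORD_BITS   2
#define LINE_BITS   14
#define CACHE_UNITS (1u << (WORD_BITS + LINE_BITS))   /* units covered by the cache */

static uint32_t line_of(uint32_t address)
{
    return (address >> WORD_BITS) & ((1u << LINE_BITS) - 1);
}

int main(void)
{
    uint32_t a = 0x000100;                 /* arbitrary address */
    uint32_t b = a + CACHE_UNITS;          /* exactly one cache-size away */

    /* Both addresses select the same line, so alternating between them
       evicts the other's block every time: a miss on every reference. */
    printf("line(a)=%04X line(b)=%04X\n", (unsigned)line_of(a), (unsigned)line_of(b));
    return 0;
}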
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = m = 2^r
Size of tag = (s – r) bits
A main memory block can load into any line of
cache
Memory address is interpreted as tag and
word
Tag uniquely identifies block of memory
Every line’s tag is examined for a match
Cache searching gets expensive
Tag = 22 bits | Word = 2 bits
22-bit tag stored with each 32-bit block of data
Compare the tag field with each tag entry in the cache to check for a hit
Least significant 2 bits of the address identify which byte is required from the 32-bit (4-byte) data block
e.g.
Address   Tag       Data       Cache line
FFFFFC    FFFFFC    24682468   3FFF
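A software stand-in for this search, in C (the 2^14-line cache size is assumed so that line 3FFF from the example exists; real hardware compares all tags simultaneously rather than in a loop):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES (1 << 14)                /* assumed: 2^14 lines */

typedef struct {
    bool     valid;
    uint32_t tag;                          /* 22-bit tag */
    uint32_t data;                         /* 32-bit block */
} CacheLine;

static CacheLine cache[NUM_LINES];

/* Returns the matching line, or -1 on a miss.  Hardware compares every stored
   tag at once; this loop is only a software stand-in for that parallel search. */
static int associative_lookup(uint32_t address)
{
    uint32_t tag = address >> 2;           /* drop the 2-bit word field */

    for (int i = 0; i < NUM_LINES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return i;
    return -1;
}

int main(void)
{
    /* Pretend block FFFFFC is already cached in line 3FFF, as in the example. */
    cache[0x3FFF] = (CacheLine){ .valid = true, .tag = 0xFFFFFC >> 2, .data = 0x24682468 };
    printf("lookup(FFFFFC) -> line %d (0x3FFF)\n", associative_lookup(0xFFFFFC));
    return 0;
}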
Advantages
Any main memory block can be placed into any
cache slot.
Regardless of how irregular the data and program
references are, if a slot is available for the block, it
can be stored in the cache.
Disadvantages
Considerable hardware overhead needed for cache
bookkeeping.
There must be a mechanism for searching the tag
memory in parallel.
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of lines in cache = undetermined
Size of tag = s bits
Cache is divided into a number of sets
Each set contains a number of lines
A given block maps to any line in a given set
e.g. Block B can be in any line of set i
2-way set associative mapping
A given block can be in one of 2 lines in only one set
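A minimal C sketch of a 2-way lookup (the set count of 2^13 is an assumption, chosen so the total number of lines matches the 2^14-line examples used earlier; the function name is made up for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS      2                        /* k = 2: two lines per set */
#define NUM_SETS  (1 << 13)                /* v = 2^13 sets (assumed) */
#define WORD_BITS 2

typedef struct {
    bool     valid;
    uint32_t tag;
} CacheLine;

static CacheLine cache[NUM_SETS][WAYS];

/* The set field picks one set; only the WAYS tags in that set are compared. */
static int set_associative_lookup(uint32_t address)
{
    uint32_t block = address >> WORD_BITS;
    uint32_t set   = block % NUM_SETS;     /* low d bits of the block number */
    uint32_t tag   = block / NUM_SETS;     /* remaining (s - d) bits */

    for (int way = 0; way < WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return way;                    /* hit in this way of the set */
    return -1;                             /* miss: the block may go into either way */
}

int main(void)
{
    uint32_t address = 0x16339C;
    uint32_t block   = address >> WORD_BITS;

    /* Place the block in way 1 of its set, then look it up. */
    cache[block % NUM_SETS][1] = (CacheLine){ .valid = true, .tag = block / NUM_SETS };
    printf("hit in way %d\n", set_associative_lookup(address));
    return 0;
}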
Advantages
In our example the tag memory increases only
slightly from the direct mapping and only two tags
need to be searched for each memory reference.
The set-associative cache is widely used in today’s
microprocessors.
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Number of lines in cache = k × v = k × 2^d
Size of tag = (s – d) bits
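As a worked instance of these formulas (assuming, purely for illustration, the same 24-bit address and 2^14 total lines as the earlier direct-mapped example, organized as a 2-way set associative cache):

s + w = 24 and w = 2, so s = 22
k = 2 lines per set, so v = 2^14 / 2 = 2^13 sets and d = 13
Size of tag = s – d = 22 – 13 = 9 bits (only one bit more than the 8-bit direct-mapped tag)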
Cache coherency is the synchronization of data in multiple caches such that reading a memory location via any cache will return the most recent data written to that location via any (other) cache.
Some parallel processors do not provide cache
accesses to shared memory to avoid the issue of
cache coherency.
If caches are used with shared memory, some mechanism is required to detect when data in one processor's cache should be discarded or replaced because another processor has updated that memory location. Several such schemes have been devised; a sketch of one common, invalidation-based approach follows.
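The slides do not name a particular scheme; as one illustration only, here is a minimal C sketch of an invalidation-based approach, in which a write by one processor discards every other processor's cached copy of that block (all sizes and names below are assumptions):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CPUS  4
#define NUM_LINES 256                      /* assumed per-processor cache size */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t data;
} CacheLine;

static CacheLine cache[NUM_CPUS][NUM_LINES];

/* On a write by one processor, every other processor's copy of the same block
   is discarded, so a later read through any cache cannot return stale data.
   Real hardware does this by snooping the bus or consulting a directory;
   the loop below only illustrates the bookkeeping. */
static void write_with_invalidate(int cpu, uint32_t address, uint32_t value)
{
    uint32_t line = address % NUM_LINES;
    uint32_t tag  = address / NUM_LINES;

    for (int other = 0; other < NUM_CPUS; other++)
        if (other != cpu && cache[other][line].valid && cache[other][line].tag == tag)
            cache[other][line].valid = false;          /* invalidate the stale copy */

    cache[cpu][line] = (CacheLine){ .valid = true, .tag = tag, .data = value };
    /* (The write-through to main memory is omitted for brevity.) */
}

int main(void)
{
    uint32_t address = 0x100;

    /* CPU 1 holds a copy; CPU 0 then writes the same location. */
    cache[1][address % NUM_LINES] = (CacheLine){ .valid = true, .tag = address / NUM_LINES, .data = 1 };
    write_with_invalidate(0, address, 2);
    printf("CPU 1 copy still valid? %d\n", cache[1][address % NUM_LINES].valid);
    return 0;
}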
Summary
 Introduction to Cache Memory
 Definition
 Working
 Levels
 Organization
 Cache Coherency
Mapping Techniques
 Direct Mapping
 Fully Associative Mapping
 Set Associative Mapping