Computer Organisation Notes

The document discusses the basic concepts of memory systems, including memory access time, memory cycle time, and types of memory such as RAM, cache, and various forms of ROM. It explains the organization and operation of memory chips, including the differences between static and dynamic RAM, as well as the mechanisms of read and write operations. Additionally, it covers the importance of locality of reference in cache memory and the protocols for managing data consistency between cache and main memory.


BASIC CONCEPTS OF MEMORY SYSTEM
 The maximum size of the memory that can be used in any computer is determined by the addressing scheme.
 If MAR is k bits long and MDR is n bits long, then the memory may contain up to 2^k addressable locations, and n bits of data are transferred between the memory and the processor.
 This transfer takes place over the processor bus.
 The processor bus has,
 Address lines
 Data lines
 Control lines (R/W, MFC – Memory Function Completed)
 The control lines are used for coordinating data transfer.
 The processor reads data from the memory by loading the address of the required memory location into MAR and setting the R/W line to 1.
 The memory responds by placing the data from the addressed location onto the data lines and confirms this action by asserting the MFC signal.
 Upon receipt of the MFC signal, the processor loads the data on the data lines into the MDR register.
 The processor writes data into a memory location by loading the address of this location into MAR, loading the data into MDR, and setting the R/W line to 0.

Memory Access Time
 It is the time that elapses between the initiation of an operation and the completion of that operation.

Memory Cycle Time
 It is the minimum time delay required between the initiation of two successive memory operations.

RAM (Random Access Memory)
 In a RAM, any location can be accessed for a Read/Write operation in a fixed amount of time that is independent of the location's address.

Cache Memory
 It is a small, fast memory that is inserted between the larger, slower main memory and the processor.
 It holds the currently active segments of a program and their data.

Virtual Memory
 The address generated by the processor does not directly specify the physical location in the memory.
 The address generated by the processor is referred to as a virtual / logical address.
 The virtual address space is mapped onto the physical memory where data are actually stored.
 The mapping function is implemented by a special memory control circuit, often called the memory management unit.
 Only the active portion of the address space is mapped into locations in the physical memory.
 The remaining virtual addresses are mapped onto the bulk storage devices used, which are usually magnetic disks.
 As the active portion of the virtual address space changes during program execution, the memory management unit changes the mapping function and transfers data between the disk and the memory.
 Thus, during every memory cycle, an address-processing mechanism determines whether the addressed information is in the physical memory unit.
 If it is, then the proper word is accessed and execution proceeds.
 If it is not, a page of words containing the desired word is transferred from the disk to the memory.
 This page displaces some page in the memory that is currently inactive.
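The read and write sequences above can be summarised in a short Python sketch. This is purely illustrative and not part of the original notes; the register widths (k = 16, n = 8) and the address used are assumptions.

    # Illustrative model of the MAR/MDR/R-W/MFC sequence described above.
    K = 16                        # assumed MAR width  -> 2**16 addressable locations
    N = 8                         # assumed MDR width  -> n data bits per transfer
    memory = [0] * (2 ** K)

    def read(address):
        mar = address             # 1. load the address into MAR
        rw = 1                    # 2. set the R/W line to 1 (Read)
        data_lines = memory[mar]  # 3. memory puts the data on the data lines and asserts MFC
        mdr = data_lines          # 4. on MFC, the processor loads the data lines into MDR
        return mdr

    def write(address, value):
        mar = address             # 1. load the address into MAR
        mdr = value % (2 ** N)    # 2. load the data into MDR
        rw = 0                    # 3. set the R/W line to 0 (Write)
        memory[mar] = mdr         # 4. memory stores the MDR contents at the addressed location

    write(0x1234, 0xAB)
    print(hex(read(0x1234)))      # -> 0xab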
SEMICONDUCTOR RAM
 Semiconductor memories are available in a wide range of speeds.
 Their cycle times range from 100 ns down to 10 ns.

Internal Organization of Memory Chips
 Memory cells are usually organized in the form of an array, in which each cell is capable of storing one bit of information.
 Each row of cells constitutes a memory word, and all cells of a row are connected to a common line called the word line.
 The cells in each column are connected to a Sense/Write circuit by two bit lines.
 The Sense/Write circuits are connected to the data input or output lines of the chip.
 During a write operation, the Sense/Write circuits receive input information and store it in the cells of the selected word.
 The data input and data output of each Sense/Write circuit are connected to a single bidirectional data line that can be connected to a data bus of the computer.

Bit Organization          Required external connections for address, data and control lines
128 bits (16 x 8)         14
1K bits (128 x 8)         19

Static Memories
 Memories that consist of circuits capable of retaining their state as long as power is applied are known as static memories.
 Two inverters are cross-connected to form a latch.
 The latch is connected to two bit lines by transistors T1 and T2.
 These transistors act as switches that can be opened / closed under the control of the word line.
 When the word line is at ground level, the transistors are turned off and the latch retains its state.
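The connection counts quoted in the table earlier in this section can be checked with a small sketch. It is illustrative only: it assumes two control lines (R/W and CS), and it treats power and ground pins as extra, which is one possible reading of the 19 quoted for the 128 x 8 organization.

    # Illustrative count of external connections for a given bit organization,
    # assuming 2 control lines (R/W and CS); power and ground pins are extra.
    from math import ceil, log2

    def external_connections(words, bits_per_word):
        address_lines = ceil(log2(words))   # select one of `words` rows
        data_lines = bits_per_word          # one pin per data bit
        control_lines = 2                   # R/W and CS
        return address_lines + data_lines + control_lines

    print(external_connections(16, 8))      # 16 x 8  -> 4 + 8 + 2 = 14
    print(external_connections(128, 8))     # 128 x 8 -> 7 + 8 + 2 = 17 (+ 2 power/ground = 19)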
Read Operation
 In order to read the state of the SRAM cell, the word line is activated to close switches T1 and T2.
 If the cell is in state 1, the signal on bit line b is high and the signal on bit line b' is low. Thus b and b' are complements of each other.
 The Sense/Write circuit at the end of the bit lines monitors the state of b and b' and sets the output accordingly.

 R/W → specifies the required operation.
 CS → the Chip Select input selects a given chip in a multi-chip memory system.
Write Operation
 The state of the cell is set by placing the appropriate value on bit line b and its complement on b', and then activating the word line.
 This forces the cell into the corresponding state.
 The required signals on the bit lines are generated by the Sense/Write circuit.

Asynchronous DRAMs
 Less expensive RAMs can be implemented if simpler cells are used. Such cells cannot retain their state indefinitely; hence they are called Dynamic RAMs (DRAMs).
 The information is stored in a dynamic memory cell in the form of a charge on a capacitor, and this charge can be maintained only for tens of milliseconds.
 The contents must be periodically refreshed by restoring the capacitor charge to its full value.
 In order to store information in the cell, the transistor T is turned 'on' and the appropriate voltage is applied to the bit line, which charges the capacitor.
 After the transistor is turned off, the capacitor begins to discharge, owing to the capacitor's own leakage resistance.
 Hence the information stored in the cell can be retrieved correctly only if it is read before the charge on the capacitor drops below a threshold value.
 During a read operation, the transistor is turned 'on' and a sense amplifier connected to the bit line detects whether the charge on the capacitor is above the threshold value.
 If the charge on the capacitor > threshold value → the bit line will have logic value '1'.
 If the charge on the capacitor < threshold value → the bit line will be set to logic value '0'.

Description (a 2M x 8 chip: a 21-bit address selects one of 2^21 bytes)
 The 4096 cells in each row are divided into 512 groups of 8.
 A 21-bit address is needed to access a byte in the memory (12 bits to select a row, 9 bits to specify the group of 8 bits in the selected row).
 A20-9 → row address of a byte.
 A8-0 → column address of a byte.
 During a Read/Write operation, the row address is applied first. It is loaded into the row address latch in response to a signal pulse on the Row Address Strobe (RAS) input of the chip.
 When a Read operation is initiated, all cells on the selected row are read and refreshed. Shortly after the row address is loaded, the column address is applied to the address pins and loaded into the column address latch under control of the Column Address Strobe (CAS).
 The information in this latch is decoded and the appropriate group of 8 Sense/Write circuits is selected.
 R/W = 1 (read operation) → the output values of the selected circuits are transferred to the data lines D0 - D7.
 R/W = 0 (write operation) → the information on D0 - D7 is transferred to the selected circuits.
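The 21-bit address split described above can be expressed directly; the following sketch is illustrative only (the example address is arbitrary).

    # Splitting the 21-bit byte address of the 2M x 8 chip into row and column fields.
    ROW_BITS, COL_BITS = 12, 9

    def split_dram_address(address):
        assert 0 <= address < (1 << (ROW_BITS + COL_BITS))
        row = address >> COL_BITS                  # A20-9: one of 4096 rows
        column = address & ((1 << COL_BITS) - 1)   # A8-0: one of 512 groups of 8 bits
        return row, column

    print(split_dram_address(0x1ABCD))             # -> (213, 461)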
 RAS and CAS are active low: they cause the latching of an address when they change from high to low. For this reason the signal names are usually written with a bar over them.
 To ensure that the contents of a DRAM are maintained, each row of cells must be accessed periodically.
 A refresh operation usually performs this function automatically.
 A specialized memory controller circuit provides the necessary control signals, RAS and CAS, and governs the timing.
 The processor must take into account the delay in the response of the memory.
 Such memories are referred to as Asynchronous DRAMs.
Synchronous DRAM
 Here the operations are directly synchronized with a clock signal.
 The address and data connections are buffered by means of registers.
 The output of each sense amplifier is connected to a latch.
 A Read operation causes the contents of all cells in the selected row to be loaded into these latches.
 Data held in the latches that correspond to the selected columns are transferred into the data output register, thus becoming available on the data output pins.
 First, the row address is latched under control of the RAS signal.
 The memory typically takes 2 or 3 clock cycles to activate the selected row.
 Then the column address is latched under control of the CAS signal.
 After a delay of one clock cycle, the first set of data bits is placed on the data lines.
 The SDRAM automatically increments the column address to access the next 3 sets of bits in the selected row, which are placed on the data lines in the next 3 clock cycles.
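A toy sketch of the burst transfer described above. It is illustrative only; the row contents and the burst length of 4 are assumptions, not from the notes.

    # After a row is read into the latches, the SDRAM returns one column group per
    # clock cycle, incrementing the column address itself.
    row_latches = list(range(16))          # assumed contents of the selected row

    def burst_read(start_column, burst_length=4):
        data = []
        column = start_column
        for _ in range(burst_length):      # one transfer per clock cycle
            data.append(row_latches[column])
            column += 1                    # column address incremented automatically
        return data

    print(burst_read(5))                   # -> [5, 6, 7, 8]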
READ ONLY MEMORY
 Both SRAM and DRAM chips are volatile, which means that they lose the stored information if power is turned off.
 Many applications require non-volatile memory (which retains the stored information even if power is turned off).
 Example: Operating System software has to be loaded from disk to memory, which requires a program that boots the Operating System, i.e. it requires non-volatile memory.
 Non-volatile memory is used in embedded systems.
 Since the normal operation involves only reading of stored data, a memory of this type is called ROM.
 In a ROM cell, at logic value '0' the transistor (T) is connected to the ground point (P).
 The transistor switch is closed and the voltage on the bit line drops to nearly zero.
 At logic value '1', the transistor switch is open.
 The bit line remains at a high voltage.
 To read the state of the cell, the word line is activated.
 A Sense circuit at the end of the bit line generates the proper output value.

Types of ROM
 Different types of non-volatile memory are,
 PROM
 EPROM
 EEPROM
 Flash Memory

PROM - Programmable ROM
 PROM allows the data to be loaded by the user.
 Programmability is achieved by inserting a 'fuse' at point P in a ROM cell.
 Before it is programmed, the memory contains all 0s.
 The user can insert 1s at the required locations by burning out the fuses at those locations using high-current pulses.
 This process is irreversible.
Merits
 It provides flexibility.
 It is faster.
 It is less expensive because it can be programmed directly by the user.
EPROM - Erasable Reprogrammable ROM
 EPROM allows the stored data to be erased and new data to be loaded.
 In an EPROM cell, a connection to ground is always made at 'P' and a special transistor is used, which has the ability to function either as a normal transistor or as a disabled transistor that is always turned 'off'.
 This transistor can be programmed to behave as a permanently open switch, by injecting charge into it that becomes trapped inside.
 Erasure requires dissipating the charge trapped in the transistors of the memory cells.
 This can be done by exposing the chip to ultraviolet light, so EPROM chips are mounted in packages that have transparent windows.
Merits
 It provides flexibility during the development phase of a digital system.
 It is capable of retaining the stored information for a long time.
Demerits
 The chip must be physically removed from the circuit for reprogramming, and its entire contents are erased by the UV light.

EEPROM - Electrically Erasable ROM
Merits
 It can be both programmed and erased electrically.
 It allows the erasing of cell contents selectively.
Demerits
 It requires different voltages for erasing, writing and reading the stored data.

Flash Memory
 In an EEPROM, it is possible to read and write the contents of a single cell.
 In a Flash device, it is possible to read the contents of a single cell, but it is only possible to write the entire contents of a block.
 Prior to writing, the previous contents of the block are erased.
 Example: In an MP3 player, the flash memory stores the data that represents sound.
 Single flash chips cannot provide sufficient storage capacity for embedded system applications.
 There are 2 methods for implementing larger memory modules consisting of a number of chips. They are,
 Flash Cards
 Flash Drives
Merits
 Flash devices have greater density, which leads to higher capacity and a lower cost per bit.
 They require a single power supply voltage and consume less power in their operation.

Flash Cards
 One way of constructing a larger module is to mount flash chips on a small card.
 Such flash cards have a standard interface.
 The card is simply plugged into a conveniently accessible slot.
 Typical memory sizes are 8, 32 or 64 MB.
 Example: A minute of music can be stored in about 1 MB of memory. Hence a 64 MB flash card can store an hour of music.

Flash Drives
 Larger flash memory modules can be developed to replace a hard disk drive.
 The flash drives are designed to fully emulate the hard disk.
 The flash drives are solid state electronic devices that have no movable parts.
Merits
 They have shorter seek and access times, which results in a faster response.
 They have low power consumption, which makes them attractive for battery-driven applications.
 They are insensitive to vibration.
Demerits
 The capacity of a flash drive (less than 1 GB) is smaller than that of a hard disk (greater than 1 GB).
 It leads to a higher cost per bit.
 Flash memory will deteriorate after it has been written a number of times (typically at least 1 million times).

SPEED, SIZE AND COST
Magnetic Disk
 A huge amount of cost-effective storage can be provided by magnetic disks; the main memory can be built with DRAMs, which leaves SRAMs to be used in smaller units where speed is of the essence.
CACHE MEMORY
 The effectiveness of the cache mechanism is based on the property of 'Locality of Reference'.
Locality of Reference
 Many instructions in localized areas of the program are executed repeatedly during some time period, and the remainder of the program is accessed relatively infrequently.
 It manifests itself in 2 ways. They are,
 Temporal (the recently executed instructions are likely to be executed again very soon).
 Spatial (the instructions in close proximity to a recently executed instruction are also likely to be executed soon).
 If the active segments of the program are placed in the cache memory, then the total execution time can be reduced significantly.
 The term Block refers to a set of contiguous address locations of some size.
 The term cache line is also used to refer to a cache block.
 The cache memory stores a reasonable number of blocks at a given time, but this number is small compared to the total number of blocks available in main memory.
 The correspondence between main memory blocks and the blocks in the cache memory is specified by a mapping function.
 The cache control hardware decides which block should be removed to create space for the new block that contains the referenced word.
 The collection of rules for making this decision is called the replacement algorithm.
 The cache control circuit determines whether the requested word currently exists in the cache.
 If it exists, then the Read/Write operation takes place on the appropriate cache location. In this case a Read/Write hit is said to occur.
 In a Read operation (on a hit), the main memory is not involved.
 The write operation can proceed in 2 ways. They are,
 Write-through protocol
 Write-back protocol

Write-through protocol
 Here the cache location and the main memory location are updated simultaneously.

Write-back protocol
 This technique updates only the cache location and marks it with an associated flag bit, called the dirty/modified bit.
 The word in the main memory will be updated later, when the block containing this marked word is removed from the cache to make room for a new block.

 If the requested word does not exist in the cache during a read operation, a read miss occurs.
 To overcome the read miss, the load-through / early-restart approach is used.
Read Miss
 The block of words that contains the requested word is copied from the main memory into the cache.
Load-through
 After the entire block is loaded into the cache, the particular word requested is forwarded to the processor.
 If the requested word does not exist in the cache during a write operation, a Write Miss occurs.
 If the write-through protocol is used, the information is written directly into the main memory.
 If the write-back protocol is used, the block containing the addressed word is first brought into the cache, and then the desired word in the cache is overwritten with the new information.
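The write-through and write-back protocols described above can be contrasted with a small sketch. It is illustrative only; the toy cache holds single words, and the addresses and values are assumptions.

    # Write-through updates cache and main memory together; write-back updates only
    # the cache, sets the dirty bit, and copies the word back on eviction.
    main_memory = {0x10: 5}
    cache = {0x10: {"data": 5, "dirty": False}}

    def write(address, value, policy):
        if address in cache:                      # write hit
            cache[address]["data"] = value
            if policy == "write-through":
                main_memory[address] = value      # memory updated simultaneously
            else:                                 # write-back
                cache[address]["dirty"] = True    # memory updated later, on removal

    def evict(address):
        entry = cache.pop(address)
        if entry["dirty"]:                        # dirty block written back to memory
            main_memory[address] = entry["data"]

    write(0x10, 99, "write-back")
    print(main_memory[0x10])                      # -> 5  (not yet updated)
    evict(0x10)
    print(main_memory[0x10])                      # -> 99 (updated when the block is removed)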
Mapping Function
Direct Mapping
 It is the simplest technique, in which block j of the main memory maps onto block 'j modulo 128' of the cache.
 Thus, whenever one of the main memory blocks 0, 128, 256, ... is loaded into the cache, it is stored in cache block 0.
 Blocks 1, 129, 257, ... are stored in cache block 1, and so on.
 Contention may arise,
 When the cache is full
 When more than one memory block is mapped onto a given cache block position.
 The contention is resolved by allowing the new block to overwrite the currently resident block.
 Placement of a block in the cache is determined from the memory address.
 The memory address is divided into 3 fields. They are,
 Low-order 4-bit field (word) → selects one of the 16 words in a block.
 7-bit cache block field → when a new block enters the cache, these 7 bits determine the cache position in which this block must be stored.
 5-bit Tag field → the high-order 5 bits of the memory address of the block are stored in 5 tag bits associated with its location in the cache.
 As execution proceeds, the high-order 5 bits of the address are compared with the tag bits associated with that cache location.
 If they match, then the desired word is in that block of the cache.
 If there is no match, then the block containing the required word must first be read from the main memory and loaded into the cache.
Merit
 It is easy to implement.
Demerit
 It is not very flexible.
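The address split used in the direct-mapping example above (4-bit word, 7-bit block, 5-bit tag, i.e. a 16-bit address) can be sketched as follows; the example address is arbitrary and the sketch is not from the notes.

    # Direct mapping: memory block j goes to cache block j mod 128.
    WORD_BITS, BLOCK_BITS, TAG_BITS = 4, 7, 5

    def direct_map_fields(address):
        word = address & 0xF                          # low-order 4 bits: word within the block
        block = (address >> WORD_BITS) & 0x7F         # next 7 bits: cache block position
        tag = address >> (WORD_BITS + BLOCK_BITS)     # high-order 5 bits: tag
        return tag, block, word

    print(257 % 128)                                  # memory block 257 -> cache block 1
    print(direct_map_fields(0b10110_0000001_1010))    # -> (22, 1, 10)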
Associative Mapping
 In this method, a main memory block can be placed into any cache block position.
 12 tag bits identify a memory block when it is resident in the cache.
 The tag bits of an address received from the processor are compared to the tag bits of each block of the cache to see if the desired block is present. This is called associative mapping.
 It gives complete freedom in choosing the cache location.
 A new block that has to be brought into the cache has to replace (eject) an existing block if the cache is full.
 In this method, the memory has to determine whether a given block is in the cache.
 A search of this kind is called an associative search.
Merit
 It is more flexible than the direct mapping technique.
Demerit
 Its cost is high.

Set-Associative Mapping
 It is a combination of direct and associative mapping.
 The blocks of the cache are grouped into sets, and the mapping allows a block of the main memory to reside in any block of a specific set.
 In this case, the cache has two blocks per set, so the memory blocks 0, 64, 128, ..., 4032 map into cache set '0', and they can occupy either of the two block positions within the set.
 6-bit set field → determines which set of the cache contains the desired block.
 6-bit tag field → the tag field of the address is compared to the tags of the two blocks of the set to check if the desired block is present.
 A cache that contains 1 block per set is a direct-mapped cache.
 A cache that has 'k' blocks per set is called a 'k-way set-associative cache'.
 Each block contains a control bit called a valid bit.
 The valid bit indicates whether the block contains valid data.
 The dirty bit indicates whether the block has been modified during its cache residency.
 Valid bit = 0 → when power is initially applied to the system.
 Valid bit = 1 → when the block is loaded from the main memory for the first time.
 If a main memory block is updated by a source (for example, a DMA transfer from the disk) and the block already exists in the cache, then the valid bit of the cache block is cleared to '0'.
 Keeping the cache and main memory copies of the same data consistent when both the processor and a DMA device use them is called the cache coherence problem.
Merit
 The contention problem of direct mapping is solved by having a few choices for block placement.
 The hardware cost is decreased by reducing the size of the associative search.
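For the 2-way set-associative example above, the 16-bit address splits into a 6-bit tag, a 6-bit set field and a 4-bit word field; the sketch below is illustrative only.

    # Set-associative mapping: memory block j goes to set j mod 64 and may occupy
    # either block of that set.
    WORD_BITS, SET_BITS = 4, 6

    def set_assoc_fields(address):
        word = address & 0xF
        set_index = (address >> WORD_BITS) & 0x3F     # one of the 64 sets
        tag = address >> (WORD_BITS + SET_BITS)       # compared with both blocks of the set
        return tag, set_index, word

    for block in (0, 64, 128, 4032):
        print(block % 64)                             # -> 0: all of these map into set 0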

CACHE PERFORMANCE CONSIDERATION
 Two key factors of a computer are performance and cost.
 Performance depends on how fast machine instructions can be brought into the processor for execution and how fast they can be executed.
 An effective way to introduce parallelism is to use an interleaved organization.
Interleaving
 If the main memory of a computer is structured as a collection of physically separate modules, each with its own address buffer register (ABR) and data buffer register (DBR), the memory access operations may proceed in more than one module at the same time.
 Thus, the aggregate rate of transmission of words to and from the main memory system can be increased.
 Two methods of address layout are shown below.
 In fig (a), the memory address generated by the processor is decoded as follows: the high-order k bits name one of n modules, and the low-order m bits name a particular word in that module.
 When consecutive locations are accessed, as happens when a block of data is transferred to the cache, only one module is involved. At the same time, however, devices with DMA ability may be accessing information in other memory modules.
 Fig (b) shows a more effective way to address the modules, called memory interleaving.
 The low-order k bits of the memory address select a module, and the high-order m bits name a location within that module.
 Consecutive addresses are located in successive modules.
 Any component of the system that generates requests for access to consecutive memory locations can keep several modules busy at any one time.
 This results in both faster access to a block of data and higher average utilization of the memory system as a whole.
 To implement the interleaved structure, there must be 2^k modules; otherwise, there will be gaps of non-existent locations in the memory address space.
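The difference between the two layouts can be seen with a small sketch, assuming 4 modules (k = 2) and 8 words per module (m = 3); the parameters are assumptions, not from the notes.

    # fig (a): high-order bits select the module; fig (b): low-order bits select it.
    K_BITS, M_BITS = 2, 3

    def consecutive_layout(address):                  # fig (a)
        return address >> M_BITS, address & ((1 << M_BITS) - 1)

    def interleaved_layout(address):                  # fig (b): memory interleaving
        return address & ((1 << K_BITS) - 1), address >> K_BITS

    for addr in range(4):                             # four consecutive addresses
        print(consecutive_layout(addr), interleaved_layout(addr))
    # fig (a) keeps all four accesses in module 0; fig (b) spreads them over
    # modules 0-3, so a block transfer can keep several modules busy at once.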
Hit Rate and Miss Penalty
 The number of hits stated as a fraction of all attempted accesses is called the hit rate, and the miss rate is the number of misses stated as a fraction of all attempted accesses.
 Hit rates of well over 0.9 are essential for high-performance computers.
 The extra time needed to bring the desired information into the cache is called the miss penalty.
 In general, the miss penalty is the time needed to bring a block of data from a slower unit in the memory hierarchy to a faster unit.
 The miss penalty is reduced if efficient mechanisms for transferring data between the various units of the hierarchy are implemented.
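The hit rate and miss penalty together determine the average memory access time. The formula below is the standard one; the numbers used are assumptions, not taken from the notes.

    # Average access time: t_avg = h*C + (1 - h)*M, where h is the hit rate,
    # C the cache access time and M the miss penalty.
    h, C, M = 0.95, 1, 17   # assumed hit rate, cache access time (cycles), miss penalty (cycles)
    t_avg = h * C + (1 - h) * M
    print(t_avg)            # -> 1.8 cycles per access on average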
