
UNIT-V

Memory Hierarchy Design and its Characteristics


In computer system design, the memory hierarchy is an enhancement that organizes memory so as to minimize access time. The memory hierarchy was developed based on a program behavior known as locality of reference. The figure below shows the different levels of the memory hierarchy:

This Memory Hierarchy Design is divided into two main types:

1. External Memory or Secondary Memory – comprising magnetic disk, optical disk, and magnetic tape, i.e. peripheral storage devices that are accessible to the processor via an I/O module.
2. Internal Memory or Primary Memory – comprising main memory, cache memory, and CPU registers. This is directly accessible by the processor.
We can infer the following characteristics of Memory Hierarchy Design from the figure above:
1. Capacity:
The total volume of information the memory can store. As we move from top to bottom in the hierarchy, the capacity increases.
2. Access Time:
The time interval between a read/write request and the availability of the data. As we move from top to bottom in the hierarchy, the access time increases.
3. Performance:
When computer systems were designed without a memory hierarchy, the large difference in access time between CPU registers and main memory created a speed gap that lowered overall system performance. The memory hierarchy was introduced to close this gap, and system performance increased as a result. One of the most significant ways to increase system performance is to minimize how far down the memory hierarchy one has to go to reach data.
4. Cost per bit:
As we move from bottom to top in the hierarchy, the cost per bit increases, i.e. internal memory is costlier than external memory.
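The trade-off between access time at different levels can be made concrete with the effective (average) access time of a two-level hierarchy. The hit rate and latency figures below are illustrative assumptions, not measurements from any particular machine:

```python
# Effective access time of a two-level hierarchy (cache + main memory).
# Hits are served by the fast level; misses fall through to main memory.
def effective_access_time(hit_rate, cache_time_ns, memory_time_ns):
    return hit_rate * cache_time_ns + (1 - hit_rate) * memory_time_ns

# Assumed figures: 95% hit rate, 1 ns cache, 100 ns main memory.
print(effective_access_time(0.95, 1, 100))  # about 5.95 ns
```

Even a small miss rate dominates the average, which is why minimizing trips down the hierarchy matters so much.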
RAM and ROM architecture.

Read-only memory, or ROM, is a form of data storage in computers and other electronic devices that cannot be easily altered or reprogrammed. RAM is referred to as volatile memory: its contents are lost when the power is turned off. ROM is non-volatile, and its contents are retained even after the power is switched off.
Types of ROM: Semiconductor-Based
Classic mask-programmed ROM chips are integrated circuits that physically encode
the data to be stored, and thus it is impossible to change their contents after fabrication.
Other types of non-volatile solid-state memory permit some degree of modification:
• Programmable read-only memory (PROM), or one-time programmable ROM (OTP),
can be written to or programmed via a special device called a PROM programmer.
Typically, this device uses high voltages to permanently destroy or create internal links
(fuses or anti-fuses) within the chip. Consequently, a PROM can only be programmed
once.
• Erasable programmable read-only memory (EPROM) can be erased by exposure to
strong ultraviolet light (typically for 10 minutes or longer), then rewritten with a process
that again needs higher than usual voltage applied. Repeated exposure to UV light will
eventually wear out an EPROM, but the endurance of most EPROM chips exceeds 1000
cycles of erasing and reprogramming. EPROM chip packages can often be identified by
the prominent quartz "window" which allows UV light to enter. After programming, the
window is typically covered with a label to prevent accidental erasure. Some EPROM
chips are factory-erased before they are packaged, and include no window; these are
effectively PROM.
• Electrically erasable programmable read-only memory (EEPROM) is based on a
similar semiconductor structure to EPROM, but allows its entire contents (or selected
banks) to be electrically erased, then rewritten electrically, so that they need not be
removed from the computer (whether general-purpose or an embedded computer in a
camera, MP3 player, etc.). Writing or flashing an EEPROM is much slower (milliseconds
per bit) than reading from a ROM or writing to a RAM (nanoseconds in both cases).
Random-access memory, or RAM, is a form of data storage that can be accessed at any time, in any order, and from any physical location, in contrast to storage devices such as hard drives, where the physical location of the data determines the time taken to retrieve it. RAM capacity is measured in megabytes or gigabytes, its speed in nanoseconds, and RAM chips can read data faster than ROM.
Types of RAM:
The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM
(DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is generally faster and requires less
dynamic power than DRAM. In modern computers, SRAM is often used as cache
memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair,
which together comprise a DRAM cell. The capacitor holds a high or low charge (1 or
0, respectively), and the transistor acts as a switch that lets the control circuitry on the
chip read the capacitor's state of charge or change it. As this form of memory is less
expensive to produce than static RAM, it is the predominant form of computer memory
used in modern computers. The figure below shows DRAM & SRAM resp.

Both static and dynamic RAM are considered volatile, as their state is lost or reset
when power is removed from the system. By contrast, read-only memory (ROM) stores
data by permanently enabling or disabling selected transistors, such that the memory
cannot be altered. Writeable variants of ROM (such as EEPROM and flash memory)
share properties of both ROM and RAM, enabling data to persist without power and to be
updated without requiring special equipment. These persistent forms of semiconductor
ROM include USB flash drives, memory cards for cameras and portable devices, and
solid-state drives. ECC memory (which can be either SRAM or DRAM) includes
special circuitry to detect and/or correct random faults (memory errors) in the stored
data, using parity bits or error correction codes.
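The parity-bit check mentioned for ECC memory can be sketched as follows. This is a toy even-parity model that only detects single-bit errors; real ECC DRAM uses Hamming-style codes that can also correct them:

```python
def even_parity(bits):
    """Parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0, 0]          # a data byte to store
stored = data + [even_parity(data)]      # parity bit appended on write

# On read, recompute parity over data + parity; nonzero means error.
assert sum(stored) % 2 == 0              # clean read: check passes

stored[3] ^= 1                           # a single-bit memory error
print(sum(stored) % 2)                   # 1 -> error detected
```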

Difference between Static RAM and Dynamic RAM

SRAM: stores a bit in a six-transistor cell; faster; more expensive per bit; needs no refresh; typically used as CPU cache.
DRAM: stores a bit as charge on a transistor-capacitor pair; slower; cheaper per bit; needs periodic refresh; typically used as main memory.

Cache Memory in Computer Organization
Cache memory is a special, very high-speed memory used to speed up memory access and keep pace with the high-speed CPU. Cache memory is costlier than main memory or disk memory but more economical than CPU registers. It is an extremely fast memory type that acts as a buffer between RAM and the CPU, holding frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from main memory. The cache is a smaller, faster memory that stores copies of data from frequently used main memory locations. A CPU has various independent caches, which store instructions and data.

Levels of memory:
• Level 1 or Registers – Memory in which data is stored and accepted immediately by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
• Level 2 or Cache memory – The fastest memory after registers, with a short access time, where data is temporarily stored for faster access.
• Level 3 or Main Memory – The memory on which the computer currently works. It is small in size, and its data is lost once the power is off.
• Level 4 or Secondary Memory – External memory, which is not as fast as main memory but stores data permanently.

Cache Performance:
When the processor needs to read or write a location in main memory, it first checks
for a corresponding entry in the cache.
• If the processor finds that the memory location is in the cache, a cache hit has occurred and the data is read from the cache.
• If the processor does not find the memory location in the cache, a cache miss has occurred. On a miss, the cache allocates a new entry and copies in data from main memory; the request is then fulfilled from the contents of the cache.

The performance of cache memory is frequently measured in terms of a quantity called the hit ratio.
Hit ratio = hits / (hits + misses) = number of hits / total accesses
We can improve cache performance by using a larger cache block size, higher associativity, reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
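The hit-ratio formula can be computed directly; the counts used here are made-up illustrations:

```python
def hit_ratio(hits, misses):
    # Hit ratio = hits / (hits + misses) = hits / total accesses
    return hits / (hits + misses)

# e.g. 950 hits and 50 misses over 1000 accesses:
print(hit_ratio(950, 50))  # 0.95
```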
Cache Mapping:
There are three different types of mapping used for the purpose of cache memory which are
as follows: Direct mapping, Associative mapping, and Set-Associative mapping. These are
explained below.
1. Direct Mapping – The simplest technique, direct mapping maps each block of main memory into only one possible cache line. Each memory block is assigned to a specific line in the cache; if that line is already occupied when a new block needs to be loaded, the old block is evicted. The address is split into an index field and a tag field. The tag is stored in the cache alongside the data, while the index selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.
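A minimal sketch of direct mapping, assuming a hypothetical 8-line cache: the line is the block number modulo the number of lines, and the tag is the remaining bits. The access pattern below also shows the thrashing that motivates set-associative mapping:

```python
NUM_LINES = 8                    # hypothetical cache size (power of two)
cache = [None] * NUM_LINES       # each line remembers the tag of its block

def access(block_number):
    index = block_number % NUM_LINES   # the one line this block may use
    tag = block_number // NUM_LINES    # identifies which block is resident
    if cache[index] == tag:
        return "hit"
    cache[index] = tag                 # evict whatever was there before
    return "miss"

print(access(3))   # miss (cold cache)
print(access(3))   # hit
print(access(11))  # miss: 11 mod 8 = 3, so it evicts block 3
print(access(3))   # miss again: the two blocks thrash on one line
```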

2. Associative Mapping – In this type of mapping, an associative memory is used to store both the content and the address of each memory word. Any block can go into any line of the cache. The word ID bits identify which word in the block is needed, and the tag is all of the remaining bits. This enables the placement of any block at any place in the cache memory, and it is considered the fastest and most flexible mapping form.

3. Set-associative Mapping – An enhanced form of direct mapping that removes its drawbacks, set-associative mapping addresses the possible thrashing of the direct-mapped method. Instead of having exactly one line that a block can map to in the cache, a few lines are grouped together to create a set, and a block in memory can map to any one of the lines of a specific set. This allows two or more main-memory words with the same index address to be present in the cache at once. Set-associative cache mapping combines the best of the direct and associative cache mapping techniques.
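The grouping of lines into sets can be sketched as follows, assuming a hypothetical 2-way set-associative cache with 4 sets and LRU replacement within each set:

```python
from collections import OrderedDict

NUM_SETS, WAYS = 4, 2    # hypothetical: 4 sets, 2 lines per set
sets = [OrderedDict() for _ in range(NUM_SETS)]   # tags ordered by recency

def access(block_number):
    lines = sets[block_number % NUM_SETS]  # the set this block maps to
    tag = block_number // NUM_SETS
    if tag in lines:
        lines.move_to_end(tag)             # mark most recently used
        return "hit"
    if len(lines) == WAYS:
        lines.popitem(last=False)          # evict least recently used
    lines[tag] = None
    return "miss"

# Blocks 1 and 5 would thrash in a 4-line direct-mapped cache,
# but here they share a 2-way set and coexist:
print(access(1), access(5), access(1), access(5))  # miss miss hit hit
```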

Application of Cache Memory –


1. Usually, the cache memory can store a reasonable number of blocks at any given
time, but this number is small compared to the total number of blocks in the
main memory.
2. The correspondence between the main memory blocks and those in the cache
is specified by a mapping function.

Types of Cache –
• Primary Cache – A primary cache is always located on the processor chip. This
cache is small and its access time is comparable to that of processor registers.
• Secondary Cache – Secondary cache is placed between the primary cache and the
rest of the memory. It is referred to as the level 2 (L2) cache. Often, the Level 2
cache is also housed on the processor chip.

I/O ORGANIZATION

-The I/O organization of a computer depends upon the size of the computer and the peripherals connected to it.
-The I/O subsystem of the computer provides an efficient mode of communication between the system and the outside environment.

PERIPHERAL DEVICES
The most common input/output devices are the keyboard, mouse, printer, monitor, and magnetic tape.
-Input or output devices that are connected to the computer are called peripheral devices.
-These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered part of the computer system.
Three categories of external devices:
1. Human readable – suitable for communicating with the computer user.
2. Machine readable – suitable for communicating with equipment.
3. Communication – suitable for communicating with remote devices such as a terminal or machine-readable device.

ACCESSING I/O DEVICES

-A simple arrangement to connect I/O devices to a computer is to use a single bus arrangement.
-The bus enables all the devices connected to it to exchange information.
-Typically, it consists of three sets of lines used to carry address, data, and control signals. Each I/O
device is assigned a unique set of addresses.

-The processor requests either a read or a write operation, and the requested data are transferred over the data lines.
-When I/O devices and the memory share the same address space, the arrangement is called memory-mapped I/O.

The method used to transfer information between internal storage and external I/O devices is known as the I/O interface.
-It defines the typical link between the processor and several peripherals.
-The I/O bus consists of data lines, address lines, and control lines.
-Special hardware components, called interface units, sit between the CPU and the peripherals to supervise and synchronize all input and output transfers.
-The address decoder enables the device to recognize its address when this address appears on the address lines.
-The data register holds the data being transferred to or from the processor.
-The status register contains information relevant to the operation of the I/O device.
-Both the data and status registers are connected to the data bus and assigned unique addresses.
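The interplay of the status and data registers can be sketched as a programmed-I/O polling loop. The register addresses and the READY bit position here are invented for illustration; real devices define their own:

```python
STATUS_ADDR, DATA_ADDR = 0x40, 0x44   # hypothetical register addresses
READY = 0x01                          # hypothetical "data ready" status bit

io_space = {STATUS_ADDR: READY, DATA_ADDR: 0x5A}   # simulated device registers

def read_device():
    # Poll the status register until the device reports READY,
    # then read the data register.
    while not (io_space[STATUS_ADDR] & READY):
        pass
    return io_space[DATA_ADDR]

print(hex(read_device()))  # 0x5a
```

Busy-waiting like this wastes CPU time, which is exactly the problem interrupts (below) solve.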

INTERRUPTS
Arrange for an I/O device to alert the processor when it becomes ready. The I/O device does this by sending a hardware signal called an interrupt to the processor.
When the input/output device is ready, it signals the processor on a separate line called the interrupt-request line.
An interrupt is more than a simple mechanism for coordinating I/O transfers.

TYPES OF INTERRUPTS
There are two types of interrupts used in I/O organization:
1. Hardware interrupt
2. Software interrupt
HARDWARE INTERRUPT:-
If the signal to the processor comes from an external device or hardware, it is called a hardware interrupt.
Hardware interrupts can be classified into two types:
Maskable interrupt: a hardware interrupt that can be delayed when a higher-priority interrupt has occurred to the processor.
Non-maskable interrupt: a hardware interrupt that cannot be delayed and should be processed by the processor immediately.

SOFTWARE INTERRUPT:-
Normal interrupts: interrupts caused by software instructions are called software interrupts.
Exception: an unplanned interrupt that occurs while executing a program is called an exception. For example, encountering a division by zero while executing a program raises an exception.

INTERRUPT HARDWARE
Many computers can connect two or more input/output devices; a laptop, for example, may have 3 USB slots. All these input/output devices are connected via switches as shown –

So there is a common interrupt line for all N input/output devices, and interrupt handling works in the following manner:
1. When no interrupt is issued by the input/output devices, all the switches are open and the pull-up from Vdd holds the single INTR line high, so the processor reads a logic 1.
2. When an interrupt is issued by an input/output device, the switch associated with that device is closed, so current flows through the switch and the INTR line reaching the processor drops to logic 0. This indicates to the processor that an interrupt has occurred, and the processor must then identify which input/output device triggered it.
3. The value of INTR is a logical OR of the requests from the individual devices.
4. The resistor R is called a pull-up resistor because it pulls the line voltage to the high state when all switches are open (the no-interrupt state).
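The behaviour of the common INTR line can be modelled as a simple function: the pull-up holds the line at logic 1 while all switches are open, and any closed switch pulls it to logic 0, a wired OR of the requests:

```python
def intr_line(requests):
    """requests[i] is True when device i has closed its switch.
    Returns the level seen by the processor: 1 = idle, 0 = interrupt."""
    return 0 if any(requests) else 1

print(intr_line([False, False, False]))  # 1: all switches open, line high
print(intr_line([False, True, False]))   # 0: device 1 pulls the line low
```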

BUS ARBITRATION

-The arbitration procedure comes into the picture whenever more than one unit requests the services of the bus.
-Because only one unit at a time can transmit successfully over the bus, a selection mechanism is required to manage such transfers. This mechanism is called bus arbitration.
-Bus arbitration decides which component will use the bus among various competing requests.
-Bus arbitration refers to the process by which the current bus master accesses and then leaves control of the bus, passing it to another requesting unit. The controller that has access to the bus at a given instant is known as the bus master.
-Bus arbitration schemes usually try to balance two factors:
1. Bus priority: the highest-priority device should be served first.
2. Fairness: even the lowest-priority device should be allowed to access the bus.

STANDARD I/O INTERFACES

The processor bus is the bus defined by the signals on the processor chip itself.
Devices that require a very high-speed connection to the processor, such as the
main memory, may be connected directly to this bus. For electrical reasons, only a
few devices can be connected in this manner. The motherboard usually provides
another bus that can support more devices. The two buses are interconnected by a
circuit, which we will call a bridge, that translates the signals and protocols of one
bus into those of the other. Devices connected to the expansion bus appear to the
processor as if they were connected directly to the processor’s own bus. The only
difference is that the bridge circuit introduces a small delay in data transfers
between the processor and those devices.

It is not possible to define a uniform standard for the processor bus. The structure
of this bus is closely tied to the architecture of the processor. It is also dependent on
the electrical characteristics of the processor chip, such as its clock speed. The
expansion bus is not subject to these limitations, and therefore it can use a
standardized signaling scheme. A number of standards have been developed. Some
have evolved by default, when a particular design became commercially
successful. For example, IBM developed a bus they called ISA (Industry Standard
Architecture) for their personal computer known at the time as PC AT.

Some standards have been developed through industrial cooperative efforts, even
among competing companies driven by their common self-interest in having
compatible products. In some cases, organizations such as the IEEE (Institute of
Electrical and Electronics Engineers), ANSI (American National Standards
Institute), or international bodies such as ISO (International Standards
Organization) have blessed these standards and given them an official status.

A given computer may use more than one bus standard. A typical Pentium
computer has both a PCI bus and an ISA bus, thus providing the user with a wide
range of devices to choose from.
Direct Memory Access:
The data transfer between a fast storage medium, such as a magnetic disk, and the memory unit is limited by the speed of the CPU. We can instead allow the peripherals to communicate directly with memory over the memory buses, removing the intervention of the CPU. This data transfer technique is known as DMA, or direct memory access. During DMA the CPU is idle and has no control over the memory buses; the DMA controller takes over the buses to manage the transfer directly between the I/O devices and the memory unit.

Bus Request: used by the DMA controller to request that the CPU relinquish control of the buses.
Bus Grant: activated by the CPU to inform the external DMA controller that the buses are in the high-impedance state and the requesting DMA controller can take control of them. Once the DMA controller has taken control of the buses, it transfers the data. This transfer can take place in many ways.
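The Bus Request / Bus Grant handshake can be sketched as a toy model of the protocol above; the memory size, addresses, and data are invented for illustration:

```python
memory = [0] * 16          # simulated main memory
device_buffer = [7, 8, 9]  # data waiting in the I/O device

def dma_transfer(start_addr, data):
    bus_request = True             # DMA controller asserts Bus Request
    bus_grant = bus_request        # CPU tri-states its drivers and grants the bus
    if bus_grant:
        for i, word in enumerate(data):   # burst transfer; the CPU stays idle
            memory[start_addr + i] = word
    return len(data)               # words moved with no CPU copying

dma_transfer(4, device_buffer)
print(memory[4:7])  # [7, 8, 9]
```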
