
Memory Architecture and Hierarchy

Memory Map Hierarchy

Memory hierarchies improve performance. Cache memory, also called CPU memory, is
random access memory (RAM) that a computer microprocessor can access more rapidly than it
can access regular RAM. It is a small, fast store used to improve the average access time to
slower memory. This memory is normally integrated directly into the CPU chip or placed on a
separate chip that has its own bus interconnect with the CPU.

The essential purpose of cache memory is to store program instructions that are frequently
re-referenced by software during operation. Quick access to these instructions boosts the
overall speed of the software program. As the microprocessor processes data, it looks first in
the cache memory; if it finds the information there (from a past read), it does not need to
perform a more time-consuming read from larger memory or other data storage devices. Most
programs touch relatively few distinct instructions and data once they have been open and
running for a while, mainly because frequently re-referenced instructions tend to be cached.
This explains why system performance measurements on PCs with slower processors but larger
caches can be faster than those on PCs with faster processors but limited cache space.
Multi-tier, or multilevel, caching has become popular in server and desktop architectures,
with distinct levels giving greater efficiency through managed tiering.
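The cache-first lookup sequence described above can be sketched as follows. The `Cache` class, its capacity, and the eviction policy are hypothetical simplifications for illustration; real hardware caches use replacement policies such as LRU or pseudo-LRU implemented in silicon.

```python
# Minimal sketch of cache-first lookup, assuming a dictionary-backed cache
# with a small fixed capacity and FIFO-style eviction.
from collections import OrderedDict

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, address, main_memory):
        if address in self.store:           # fast path: data found in cache
            self.hits += 1
            return self.store[address]
        self.misses += 1                    # slow path: fetch from main memory
        value = main_memory[address]
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the oldest entry
        self.store[address] = value
        return value

memory = {addr: addr * 2 for addr in range(100)}
cache = Cache(capacity=4)
for addr in [1, 2, 1, 1, 3]:
    cache.read(addr, memory)
print(cache.hits, cache.misses)  # 2 3: repeated reads of address 1 hit the cache
```

The repeated accesses to address 1 hit the cache after the first miss, which is exactly the access-locality effect the paragraph describes.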

Basically, the less frequently certain data or instructions are accessed, the lower the cache
level at which they are stored. The first commercial cache appeared in the IBM System/360
Model 85 in the late 1960s. Cache memory is fast and expensive. It is classified into "levels"
that describe its closeness and accessibility to the microprocessor, and it can be divided into
three levels:

Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the
processor chip (CPU).

Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on
a separate chip or coprocessor with a high-speed alternative system bus interconnecting the
cache to the CPU, so as not to be slowed by traffic on the main system bus.

Level 3 (L3) cache is typically specialized memory that works to improve the
performance of L1 and L2. It can be significantly slower than L1 or L2, but is usually still
much faster than main RAM. In the case of multicore processors, each core may have its own
dedicated L1 and L2 caches while sharing a common L3 cache.
Cache memory configurations have evolved over time, but cache memory traditionally works
under three different configurations:

Direct mapping, in which each block is mapped to exactly one cache location.
Conceptually, this is like rows in a table with three columns: the data block or cache line that
contains the actual data fetched and stored, a tag that contains all or part of the address of the
fetched data, and a valid flag bit that indicates whether the entry holds valid data.
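Direct mapping can be sketched with simple index and tag arithmetic. The cache geometry here (8 lines, 16-byte blocks) is a hypothetical example chosen for illustration.

```python
# Sketch of direct-mapped cache indexing, assuming a hypothetical cache
# with 8 lines and 16-byte blocks. Each memory block maps to exactly one
# line: the index is the block number modulo the number of lines, and the
# remaining high-order bits form the tag stored alongside the data.
NUM_LINES = 8
BLOCK_SIZE = 16

def direct_map(address):
    block_number = address // BLOCK_SIZE
    index = block_number % NUM_LINES   # which cache line the block must use
    tag = block_number // NUM_LINES    # identifies which block occupies that line
    offset = address % BLOCK_SIZE      # byte position within the block
    return index, tag, offset

# Addresses 0x40 and 0x440 map to the same line with different tags,
# so in a direct-mapped cache they would repeatedly evict each other.
print(direct_map(0x40))   # (4, 0, 0)
print(direct_map(0x440))  # (4, 8, 0)
```

The conflict between those two addresses is the main weakness that associative mappings address.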

Fully associative mapping is similar to direct mapping in structure, but allows a block to
be mapped to any cache location rather than to a pre-specified cache location (as is the case
with direct mapping).

Set associative mapping can be viewed as a compromise between direct mapping and
fully associative mapping in which each block is mapped to a subset of cache locations. It is
sometimes called N-way set associative mapping, which provides for a location in main
memory to be cached to any of "N" locations in the L1 cache.
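The set-associative compromise can be sketched in the same style. The geometry (4 sets of 2 ways) is again a hypothetical example, not a description of any particular processor.

```python
# Sketch of N-way set-associative placement, assuming a hypothetical cache
# with 8 lines organized as 4 sets of 2 ways (N = 2). A block maps to one
# set but may occupy any of the N ways within that set.
NUM_SETS = 4
WAYS = 2            # the "N" in N-way set associative
BLOCK_SIZE = 16

def set_assoc_map(address):
    block_number = address // BLOCK_SIZE
    set_index = block_number % NUM_SETS   # which set the block maps to
    tag = block_number // NUM_SETS        # compared against all N ways in the set
    return set_index, tag

# Blocks that collide in a direct-mapped cache can coexist here, as long
# as no more than N of them map to the same set at once.
print(set_assoc_map(0x40))   # (0, 1)
print(set_assoc_map(0x240))  # (0, 9): same set, can occupy the second way
```

With N equal to the total number of lines this degenerates into fully associative mapping; with N = 1 it is direct mapping.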
Memory Management Unit

A memory management unit (MMU) is a computer hardware component that handles all
memory and caching operations associated with the processor. In other words, the MMU is
responsible for all aspects of memory management. It is usually integrated into the processor,
although in some systems it occupies a separate IC (integrated circuit) chip.

The work of the MMU can be divided into three major categories:

Hardware memory management, which oversees and regulates the processor's use of
RAM (random access memory) and cache memory.

OS (operating system) memory management, which ensures the availability of adequate
memory resources for the objects and data structures of each running program at all times.

Application memory management, which allocates each individual program's required
memory, and then recycles freed-up memory space when the operation concludes.
Although the memory management unit can be a separate chip component, it is usually
integrated into the central processing unit (CPU). Generally, the hardware associated with
memory management includes random access memory (RAM) and memory caches. RAM is the
computer's main physical working memory, where data is read and written; it is distinct from
storage on the hard disk. Memory caches hold copies of certain data from main memory. The
CPU accesses the information held in the memory cache, which helps speed up processing.

When the physical memory, or RAM, runs out of space, the computer automatically uses
virtual memory backed by the hard disk to run the requested program. The memory management
unit, under operating system control, allocates memory to the various applications. A process's
virtual address space consists of a range of addresses divided into pages, which are blocks of
equal, fixed size. The automated paging process allows the operating system to utilize storage
space scattered across the hard disk.
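Because pages are a fixed size, splitting a virtual address into a page number and an in-page offset is simple bit arithmetic. The 4 KiB page size below is a common choice but an assumption here; real systems support several sizes.

```python
# Sketch of how a virtual address splits into a page number and an offset,
# assuming a hypothetical 4 KiB page size.
PAGE_SIZE = 4096
OFFSET_BITS = 12   # log2(4096): the low 12 bits address a byte within a page

def split_address(virtual_address):
    page_number = virtual_address >> OFFSET_BITS   # which fixed-size page
    offset = virtual_address & (PAGE_SIZE - 1)     # byte position within the page
    return page_number, offset

print(split_address(0x12345))  # (0x12, 0x345)
```

Because every page is the same size, the OS can place any page in any free frame, which is what lets it use scattered storage without fragmentation problems.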

This feature is a major key to making the process work effectively and efficiently, because the
system is not required to create one contiguous chunk of virtual memory to handle a program's
requirements. Fragmentation is the problem caused by allocating variable-sized blocks of
memory to accommodate programs of different sizes. It can leave too little contiguous free
space for larger programs even when the total space available would actually be enough.

Application memory management entails allocating the memory required to run a program from
the available memory resources. In larger operating systems, many copies of the same
application can run at once. Because each process has its own virtual address space, it is
simpler to assign these copies the same virtual addresses, which map to different physical
memory. The memory management unit can also distribute memory resources to programs on an
as-needed basis, and when an operation is completed, the memory is recycled for use elsewhere.

A virtual address is a memory address that a process uses to access its own memory. The virtual
address is not the same as the actual physical RAM address in which it is stored. When a process
accesses a virtual address, the Memory Management Unit (MMU) hardware translates the virtual
address into a physical address. The Operating System (OS) determines the mapping from virtual
address to physical address.
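The MMU's translation step can be sketched with an OS-maintained mapping table. The page table contents and frame numbers below are hypothetical; real page tables are multi-level hardware-walked structures.

```python
# Sketch of virtual-to-physical translation, assuming a hypothetical
# single-level page table (a dict) set up by the OS; the MMU performs
# the equivalent lookup in hardware on every memory access.
PAGE_SIZE = 4096

# virtual page number -> physical frame number (hypothetical values)
page_table = {0x12: 0x7A, 0x13: 0x03}

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # Simplified: a real MMU raises a page fault and the OS loads the page.
        raise MemoryError("page fault")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(hex(translate(0x12345)))  # virtual 0x12345 -> physical 0x7a345
```

Two processes with identical virtual addresses would simply have different page tables, which is how the isolation and relocation properties described next are achieved.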

Virtual addresses allow isolation: virtual addresses in one process refer to different physical
memory than virtual addresses in another. Virtual addresses also allow relocation: a program
does not need to know which physical addresses it will use when it is run, and compilers
generate relocatable code, code that is independent of its physical location in memory.
System Register

A register is a very small amount of very fast memory that is built into the CPU (central
processing unit) in order to speed up its operations by providing quick access to commonly used
values. Registers are semiconductor devices whose contents can be accessed (i.e., read and
written) at extremely high speeds but are held only temporarily (i.e., while in use, or only as
long as the power supply remains on). Registers sit at the top of the memory hierarchy and are
the fastest way for the system to manipulate data.

Registers are normally measured by the number of bits they can hold; for example, an 8-bit
register can store 8 bits of data and a 32-bit register can store 32 bits. Registers are used to
store data temporarily during the execution of a program, and some of them are accessible to
the user through instructions. Data and instructions must be brought into the system, and
registers serve this purpose.
The basic computer registers with their names, size and functions are listed below:

Symbol   Register Name          Bits   Description
AC       Accumulator            16     Processor register
DR       Data Register          16     Holds memory data
TR       Temporary Register     16     Holds temporary data
IR       Instruction Register   16     Holds instruction code
AR       Address Register       12     Holds memory address
PC       Program Counter        12     Holds address of next instruction
INPR     Input Register         8      Holds input data
OUTR     Output Register        8      Holds output data
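The bit widths in the table determine how many values a register can hold. As a sketch, the 12-bit program counter (PC) above can address 2**12 = 4096 memory words and wraps around when incremented past its maximum value; the masking shown is an illustration of that behavior, not code from any particular machine.

```python
# Sketch of register width, using the 12-bit program counter from the
# table above: it holds 2**12 = 4096 distinct values and wraps on overflow.
PC_BITS = 12
PC_MASK = (1 << PC_BITS) - 1   # 0xFFF, the largest 12-bit value

def increment_pc(pc):
    # Hardware registers wrap modulo their width when they overflow.
    return (pc + 1) & PC_MASK

print(1 << PC_BITS)         # 4096 addressable locations
print(increment_pc(0xFFF))  # 0: the counter wraps back to the start
```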
