
CACHE MEMORY

Prepared By: Manan Mewada (TA, IET)

Outline

Introduction
Memory Hierarchy
Need of cache
Cache working
Jobs of cache
Bus structure
Cache mapping
Associative mapping
Direct mapping
Set-associative mapping

Introduction
Memory access time is important to performance!
Users want large memories with fast access times,
ideally unlimited fast memory.
Tradeoff between size and speed:
The larger the size, the slower the speed (longer access time), and
vice versa.

Tradeoff between size and cost:
The smaller the size, the higher the cost per bit, and vice versa.

Levels of the Memory Hierarchy

[Figure: the memory hierarchy, fastest/smallest at the top, slowest/largest at the bottom]
Registers: part of the on-chip CPU datapath, 16-128 bit registers.
Cache level(s) (Static RAM): Level 1 on-chip 16-64K; Level 2 on-chip 256K-2M; Level 3 on- or off-chip 1M-16M.
Main memory, Dynamic RAM (DRAM): 256M-16G.
EPROM: 1G-32G.
Magnetic disc: 80G-2T.
Optical disk or magnetic tape.

Farther away from the CPU:
Lower cost/bit
Higher capacity
Increased access time/latency
Lower throughput/bandwidth

Need of Cache
In the early 1980s, CPUs were slow and main memories were
small, so memory speed matched the execution speed of the CPU.
As technology evolved, CPU speed increased, but as memory
sizes grew, the effective speed of memory did not improve
significantly.
To solve this problem, circuit designers introduced a small,
fast temporary memory known as the cache.
Typical instruction execution time in the CPU: 1-2 ns.
Cache memory access time: 1-5 ns.
Main memory access time: 40-60 ns.
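To see why these timings matter, the average memory access time (AMAT) can be estimated as hit time plus miss rate times miss penalty. A minimal sketch using the figures above; the 95% hit rate is an illustrative assumption, not from the slides:

```python
# AMAT = hit time + miss rate * miss penalty.

def amat(hit_time_ns: float, miss_penalty_ns: float, hit_rate: float) -> float:
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

# Cache hit in ~2 ns; a miss costs an extra ~50 ns main memory access.
# The 95% hit rate is an assumed figure for illustration.
print(amat(2.0, 50.0, 0.95))  # ~4.5 ns
```

Even with a slow main memory, a high hit rate keeps the average access time close to the cache's own access time.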

Cache Working
[Diagram: CPU <-> Cache <-> Cache controller <-> Main Memory (DRAM)]

All the data and instructions needed by the CPU are stored in
main memory.
Caches are faster than DRAM, so they are used for temporary
data storage.
Hit: if the requested data/instruction is in the cache, the read or
write operation is performed directly on the copy in the cache,
without accessing main memory.
Miss: if the requested data/instruction is not in the cache, a block
containing it is brought into the cache, and then the processor's
request is completed.
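The hit/miss behavior above can be sketched as a tiny simulator. Block size, capacity, and LRU replacement here are illustrative assumptions, not from the slides:

```python
from collections import OrderedDict

BLOCK_SIZE = 16   # bytes per block (assumption)
NUM_BLOCKS = 4    # cache capacity in blocks (assumption)

cache = OrderedDict()   # block number -> block data, kept in LRU order
memory = {}             # backing "DRAM": block number -> data

def read(address: int) -> str:
    block = address // BLOCK_SIZE
    if block in cache:               # hit: serve directly from the cache
        cache.move_to_end(block)
        return "hit"
    if len(cache) == NUM_BLOCKS:     # miss: make room if the cache is full
        cache.popitem(last=False)    # evict the least recently used block
    cache[block] = memory.get(block, bytes(BLOCK_SIZE))
    return "miss"

print(read(0x00))   # miss: block 0 is loaded from main memory
print(read(0x04))   # hit: address 0x04 falls in the same block
```

Note how a miss loads a whole block, so nearby addresses hit afterwards.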

Jobs of Cache
To perform read-ahead tasks.
To hold commonly used instructions or commonly used
calculations for the CPU.

Bus Structure
[Diagram: the CPU chip contains the register file, ALU, L1 cache, and bus interface; a cache bus connects the L1 cache to the off-chip L2 cache; the system bus runs from the bus interface to the I/O bridge, and the memory bus connects the I/O bridge to main memory]

Cache Mapping
Commonly used methods:
Associative Mapped Cache
Direct-Mapped Cache
Set-Associative Mapped Cache

Associative Mapped Cache

Any main memory block (MB) can be mapped into any
cache block (CB).
To keep track of which one of the 2^12 possible MBs is in
each CB, a 12-bit tag field is added to each CB.
[Figure: Cache Memory (256 bytes) with CB 0 through CB 15; each CB carries a 1-bit valid flag, a 1-bit dirty flag, and a 12-bit tag, and the address has a 4-bit offset. Main Memory (64 KB) with MB 0 through MB 4095; any MB may be placed in any CB.]

Associative Mapped Cache


The valid bit indicates whether or not the CB
holds a line that belongs to the program being executed.
The dirty bit keeps track of whether or not a line has been
modified while it is in the cache.
The mapping from MBs to CBs is performed by
partitioning an address into fields.
For each CB, if the valid bit is 1, the tag field of the
referenced address is compared with the tag field of the
CB.

Associative Mapped Cache


Advantages
Any MB can be placed into any CB.
Regardless of how irregular the data and program
references are, if any CB is available, the MB can be
stored in the cache.

Disadvantages
Considerable hardware overhead is needed for cache
bookkeeping.
There must be a mechanism for searching the tag
memory in parallel.
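A minimal sketch of an associative lookup for the 64 KB example above. The hardware compares all tags in parallel; this software version simply scans them:

```python
# 16-bit address -> 12-bit tag, 4-bit offset (the split used in the example).

def split_address(addr: int) -> tuple:
    return addr >> 4, addr & 0xF     # (tag, offset)

# 16 cache blocks, each with valid/dirty flags and a 12-bit tag.
cache = [{"valid": False, "dirty": False, "tag": 0} for _ in range(16)]

def lookup(addr: int) -> str:
    tag, _offset = split_address(addr)
    # Hardware searches all tags in parallel; here we scan sequentially.
    for cb in cache:
        if cb["valid"] and cb["tag"] == tag:
            return "hit"
    return "miss"

cache[3] = {"valid": True, "dirty": False, "tag": 0xABC}
print(lookup(0xABC5))   # hit: tag 0xABC matches CB 3
print(lookup(0x1235))   # miss: no CB holds tag 0x123
```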

Direct-Mapped Cache
Each CB corresponds to an explicit set of MBs.
In our example we have 2^12 MBs and 2^4 CBs.
A total of 2^12 / 2^4 = 2^8 MBs can be mapped onto each CB.

[Figure: Cache Memory (256 bytes) with CB 0 through CB 15; each CB carries a valid bit, a dirty bit, and a tag, and the address has a 4-bit offset. Main Memory (64 KB) with MB 0 through MB 4095; MB 0, MB 16, ... map to CB 0, MB 1, MB 17, ... map to CB 1, and so on.]
Direct-Mapped Cache
The 16-bit main memory address is partitioned into an 8-bit tag field, followed by a 4-bit CB field, followed by a 4-bit offset field:

Tag (8) | CB field (4) | Offset (4)

When a reference is made to a main memory address,
the CB field identifies in which of the 2^4 CBs the block will
be found.
If the valid bit is 1, the tag field of the referenced
address is compared with the tag field of that CB.

Direct-Mapped Cache
Advantages
The tag memory is much smaller than in an associative
mapped cache.
No need for an associative search, since the CB field
directs the comparison to a single CB.

Disadvantages
Higher miss rate.
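The address partition above can be sketched as follows, using the 8/4/4 split from the example; the helper name is illustrative:

```python
# 16-bit address -> 8-bit tag | 4-bit CB field | 4-bit offset.

def split_direct(addr: int) -> tuple:
    tag = addr >> 8              # upper 8 bits
    cb = (addr >> 4) & 0xF       # 4-bit CB field selects one of 16 CBs
    offset = addr & 0xF          # 4-bit offset within the block
    return tag, cb, offset

# MB 0 (addresses 0x0000-0x000F) and MB 16 (0x0100-0x010F) both select CB 0;
# they can evict each other, which is the source of the higher miss rate.
print(split_direct(0x0000))   # (0, 0, 0)
print(split_direct(0x0100))   # (1, 0, 0)
```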

Set-Associative Mapped Cache

Combines the simplicity of direct mapping with the flexibility
of associative mapping.
For this example, two CBs make up a set. Since there are 2^4
CBs in the cache, there are 2^4 / 2 = 2^3 sets.
[Figure: Cache Memory (256 bytes) with CB 0 through CB 15 grouped two per set, giving set 0 through set 7; each CB carries a valid bit, a dirty bit, and a 9-bit tag, and the address has a 4-bit offset. Main Memory (64 KB) with MB 0 through MB 4095; each MB maps to exactly one set and may occupy either CB in that set.]

Set-Associative Mapped Cache

When an address is mapped to a set, the direct mapping scheme
is used; then associative mapping is used within the set.

Advantages
In our example the tag memory increases only slightly over
direct mapping, and only two tags need to be searched for each
memory reference.
The set-associative cache is widely used in today's microprocessors.
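A minimal sketch of a two-way set-associative lookup for this example, where the 16-bit address splits into a 9-bit tag, a 3-bit set field, and a 4-bit offset:

```python
# 16-bit address -> 9-bit tag | 3-bit set field | 4-bit offset.

def split_set(addr: int) -> tuple:
    return addr >> 7, (addr >> 4) & 0x7, addr & 0xF   # (tag, set, offset)

# 8 sets of 2 CBs each; a lookup compares only the 2 tags in one set.
sets = [[{"valid": False, "tag": 0} for _ in range(2)] for _ in range(8)]

def lookup(addr: int) -> str:
    tag, set_idx, _offset = split_set(addr)
    for way in sets[set_idx]:
        if way["valid"] and way["tag"] == tag:
            return "hit"
    return "miss"

sets[0][1] = {"valid": True, "tag": 2}
print(lookup(0x0100))   # hit: address 0x0100 -> tag 2, set 0
print(lookup(0x0200))   # miss: tag 4 is not in set 0
```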

Thank You

Cache read and write policies

Cache read:
Data is in the cache (hit): forward it to the CPU.
Data is not in the cache (miss), load through: forward the
requested data/byte as the cache line is filled, or fill the
cache line first and then forward.

Cache write:
Data is in the cache (hit):
Write through: write the data to both the cache and main memory, or
Write back: write the data to the cache only; defer the main
memory write until the cache block is flushed.
Data is not in the cache (miss):
Write allocate: bring the line into the cache, then update it, or
Write no allocate: update main memory only.