
Chapter 3
OS Performance Issues (Memory Management)
CSC204 Practical Approach to Operating System
CS110

Contents
Memory Management
Memory Hierarchy
Physical Memory
Virtual Memory
Page Fault
Thrashing
Cache - Principle of Locality

Memory Management
Memory management is the act of managing computer memory.
In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed.
The management of main memory is critical to the computer system.

Memory Management
Subdividing memory to accommodate multiple processes.
Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.

Memory Hierarchy
WHY do we need a memory hierarchy in a computer system?
To provide the best performance at the lowest cost, memory is organized in a hierarchical fashion:
Small-capacity, fast storage elements are kept in the CPU (i.e. on-chip).
Larger-capacity, slower main memory is accessed through the data bus.
Larger, (almost) permanent storage in the form of disk and tape drives is still further from the CPU.

Processor <-> Cache <-> Other Memory
Each level of memory keeps a subset of the data contained in the lower memory level (i.e. in the larger memory).
To access a particular piece of data, the CPU first sends a request to its nearest memory, i.e. the cache.
If the data is not in the cache, then main memory is queried. If the data is not in main memory, then the request goes to disk.
Once the data is located at a level, the data and a number of its nearby data elements are fetched into cache memory.
E.g. if data from address X is requested, then data from addresses X+1, X+2, etc. is also sent.
A block of data (data from multiple addresses) is transferred.

Why is a block of data transferred?
Data between levels is transferred over a bus.
The bus itself takes some time to transfer data, so it is more effective to use this opportunity to fetch, in one bus transaction, some other data you might require in the future.

Why is a block of data transferred?
So why get data that is nearby? Because of program structure:
Temporal locality (locality in time): referenced memory is likely to be referenced again soon (e.g. code within a loop). Keep the most recently accessed data items closer to the processor.
Spatial locality (locality in space): memory close to referenced memory is likely to be referenced soon (e.g. data in a sequentially accessed array). Move blocks consisting of contiguous words to the upper levels.
Sequential locality: instructions tend to be accessed sequentially.
The above three are known as the principles of locality: the phenomenon of the same value, or related storage locations, being frequently accessed.
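The effect of spatial locality can be seen with a toy direct-mapped cache simulator: sequential accesses reuse each fetched block, while accesses that jump a whole block at a time miss every reference. The block size and set count below are illustrative assumptions, not values from the slides.

```python
# Toy direct-mapped cache: counts hits/misses to illustrate spatial locality.
BLOCK_SIZE = 16          # bytes per cache block (assumed for illustration)
NUM_SETS = 64            # direct-mapped: one block per set

def simulate(addresses):
    """Return (hits, misses) for a stream of byte addresses."""
    cache = {}           # set index -> tag currently stored there
    hits = misses = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index = block % NUM_SETS
        tag = block // NUM_SETS
        if cache.get(index) == tag:
            hits += 1
        else:
            misses += 1
            cache[index] = tag       # a miss fetches the whole block
    return hits, misses

# Sequential access: one miss per block, then BLOCK_SIZE-1 hits per block.
seq = list(range(1024))
# Strided access touching one byte per block: no reuse, every access misses.
strided = list(range(0, 1024 * BLOCK_SIZE, BLOCK_SIZE))

print("sequential:", simulate(seq))       # (960, 64)
print("strided:   ", simulate(strided))   # (0, 1024)
```

The sequential stream hits on 960 of 1024 accesses purely because nearby data rode along in each transferred block.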

Cache
Cache:
is a small, very fast memory (SRAM, expensive)
contains copies of the most recently accessed memory locations (data and instructions): temporal locality
is fully managed by hardware (unlike virtual memory)
storage is organized in blocks of contiguous memory locations: spatial locality
the unit of transfer to/from main memory (or L2) is the cache block

General structure:
n blocks per cache, organized in s sets
b bytes per block
total cache size: n*b bytes
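Given s sets and b bytes per block, a byte address splits into a block offset, a set index, and a tag. The concrete sizes below are assumptions chosen so the arithmetic is easy to follow; they are not from the slides.

```python
# Splitting a byte address into (tag, set index, block offset) for a cache
# with s sets, b bytes per block, and n blocks total (illustrative sizes).
b = 64                  # bytes per block
s = 128                 # sets
n = 512                 # blocks -> n // s = 4-way set associative
total_size = n * b      # total cache size in bytes (n*b from the slide)

def split_address(addr):
    offset = addr % b               # position within the block
    index = (addr // b) % s         # which set to look in
    tag = addr // (b * s)           # identifies the block within the set
    return tag, index, offset

print("cache size:", total_size)            # 32768 bytes = 32 KB
print("0x12345 ->", split_address(0x12345)) # (9, 13, 5)
```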

Physical Memory
Also referred to as physical storage or real storage.
This is typically the RAM modules installed on the motherboard.
Physical memory is a term used to describe the total amount of memory installed in the computer.
For example, if the computer has two 64MB memory modules installed, it has a total of 128MB of physical memory.

Physical Memory: Memory Allocation Schemes
Fixed Partition
Dynamic Partition
  First Fit
  Best Fit
  Worst Fit
Compaction

Fixed Partition
An attempt at multiprogramming using fixed partitions:
one partition for each job
size of each partition designated by reconfiguring the system
partitions can't be too small or too large
It is critical to protect each job's memory space.
The entire program is stored contiguously in memory during its entire execution.


Dynamic Partition
Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded.
Improves memory use over fixed partitions.
Performance declines as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.

Dynamic Partitioning of Main Memory & Fragmentation

Dynamic Partition Allocation Schemes
First-fit: allocate the first partition that is big enough.
  Keep free/busy lists organized by memory location (low-order to high-order).
  Faster in making the allocation.
Best-fit: allocate the smallest partition that is big enough.
  Keep free/busy lists ordered by size (smallest to largest).
  Produces the smallest leftover partition.
  Makes best use of memory.
Worst-fit: allocate the largest hole.
  Must also search the entire list.
  Produces the largest leftover hole.
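The three placement policies can be sketched as functions over a free list of (start, size) blocks. The free list and the KB sizes are made up for illustration; a real allocator would also split the chosen block and maintain the busy list.

```python
# Sketch of first-fit, best-fit, and worst-fit placement over a free list.
# Each free block is a (start, size) tuple; size units are KB (illustrative).

def first_fit(free, size):
    for blk in free:                  # free list kept in address order
        if blk[1] >= size:
            return blk                # first block big enough wins
    return None

def best_fit(free, size):
    fits = [blk for blk in free if blk[1] >= size]
    return min(fits, key=lambda blk: blk[1]) if fits else None   # smallest leftover

def worst_fit(free, size):
    fits = [blk for blk in free if blk[1] >= size]
    return max(fits, key=lambda blk: blk[1]) if fits else None   # largest hole

free = [(0, 30), (100, 15), (200, 50), (300, 20)]

print(first_fit(free, 20))   # (0, 30): first block that is big enough
print(best_fit(free, 20))    # (300, 20): exact fit, smallest leftover
print(worst_fit(free, 20))   # (200, 50): largest hole
```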

Best-Fit vs. First-Fit
First-Fit
  Simpler algorithm; faster in making the allocation
  Tends to waste more memory (larger leftover fragments)
Best-Fit
  More complex algorithm
  Searches the entire table before allocating memory, so allocation takes more time
  Makes better use of memory; results in a smaller leftover free space (sliver)

First-Fit Allocation Example
Job List: J1 = 10K, J2 = 20K, J3 = 30K*, J4 = 10K
(* J3 must wait: when its turn comes, no free block of 30K or more remains.)

Memory location | Block size | Job | Job size | Status | Internal fragmentation
10240           | 30K        | J1  | 10K      | Busy   | 20K
40960           | 15K        | J4  | 10K      | Busy   | 5K
56320           | 50K        | J2  | 20K      | Busy   | 30K
107520          | 20K        |     |          | Free   |

Total Available: 115K    Total Used: 40K

Internal fragmentation: allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.
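The example above can be replayed in a few lines of first-fit logic; the block locations, sizes, and job list are taken directly from the table.

```python
# Re-running the slide's first-fit example: jobs J1=10K, J2=20K, J3=30K, J4=10K
# arrive in order against blocks of 30K, 15K, 50K, 20K at the given locations.
blocks = [[10240, 30, None], [40960, 15, None], [56320, 50, None], [107520, 20, None]]
jobs = [("J1", 10), ("J2", 20), ("J3", 30), ("J4", 10)]
waiting = []

for name, size in jobs:
    for blk in blocks:
        if blk[2] is None and blk[1] >= size:
            blk[2] = (name, size)        # first free block big enough wins
            break
    else:
        waiting.append(name)             # no block fits: the job must wait

for loc, bsize, owner in blocks:
    frag = bsize - owner[1] if owner else 0   # internal fragmentation
    print(loc, bsize, owner, f"frag={frag}K")
print("waiting:", waiting)               # J3 waits: no remaining block >= 30K
```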

Release of Memory Space: Deallocation
Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to free.
Deallocation for dynamic partitions tries to combine free areas of memory whenever possible:
  Is the block adjacent to another free block?
  Is the block between 2 free blocks?
  Is the block isolated from other free blocks?
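The three cases above collapse into one rule: insert the freed block into the sorted free list and merge any neighbours that touch it. A minimal sketch, assuming blocks are [start, size] pairs:

```python
# Dynamic-partition deallocation sketch: freeing a block coalesces it with
# any adjacent free block (covers all three cases on the slide).
def free_block(free_list, start, size):
    """free_list: address-sorted list of [start, size]; returns a new list."""
    out = []
    for blk in sorted(free_list + [[start, size]]):
        if out and out[-1][0] + out[-1][1] == blk[0]:
            out[-1][1] += blk[1]      # touches previous free block: coalesce
        else:
            out.append(blk[:])        # isolated (so far): keep separate
    return out

# Freed block sits between two free blocks -> all three merge into one.
print(free_block([[0, 10], [30, 10]], 10, 20))   # [[0, 40]]
# Freed block is isolated -> it simply joins the list.
print(free_block([[0, 10]], 50, 5))              # [[0, 10], [50, 5]]
```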

Compaction Steps
Relocate every program in memory so they're contiguous.
Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.
Must leave alone all other values within the program (e.g., data values).
Compaction is used to reduce external fragmentation.
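The steps above can be sketched as a pass that slides each job down to the next free location and records the relocation offset that must be added to every address inside that job (the job layout here is invented for illustration):

```python
# Compaction sketch: make allocations contiguous and compute, per job, the
# offset to add to every address reference inside it.
def compact(jobs):
    """jobs: list of (name, start, size); returns (new_jobs, offsets)."""
    next_free = 0
    new_jobs, offsets = [], {}
    for name, start, size in sorted(jobs, key=lambda j: j[1]):
        offsets[name] = next_free - start    # relocation delta for this job
        new_jobs.append((name, next_free, size))
        next_free += size                    # pack jobs back to back
    return new_jobs, offsets

jobs = [("J1", 0, 10), ("J2", 30, 20), ("J3", 70, 15)]
new_jobs, offsets = compact(jobs)
print(new_jobs)   # [('J1', 0, 10), ('J2', 10, 20), ('J3', 30, 15)]
print(offsets)    # {'J1': 0, 'J2': -20, 'J3': -40}
```

Only address references get the offset applied; plain data values are left alone, which is why compaction needs help distinguishing the two.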

Memory Before & After Compaction

Virtual Memory
Virtual memory is a technique that allows the execution of processes that may not be completely in memory.
One major advantage of this scheme is that programs can be larger than physical memory.
Virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.

Virtual Memory
This technique frees the programmer from the concerns of the memory storage limit.
VM allows processes to share files and address spaces, and it provides an efficient mechanism for process creation.
How virtual memory is implemented in an OS:
1. Paging
2. Segmentation

1. Paging
Main memory is divided into a number of equal-sized, relatively small frames.
Each process is divided into a number of equal-sized pages, the same length as a frame.
A process is loaded by loading all of its pages into available frames.
  The frames need not be contiguous.
  This is possible through the use of a page table for each process.
  Logical address (page number, offset) --> Physical address (frame number, offset).
Pros: no external fragmentation.
Cons: a small amount of internal fragmentation.

1. Paging
Address Translation Architecture

1. Paging
The address generated by the CPU is divided into:
  Page number (p): used as an index into a page table, which contains the base address of each page in physical memory.
  Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit.
The page table is kept in main memory.

1. Paging (Paging Example 1)

1. Paging (Paging Example 2)

2. Segmentation
Based on the common practice by programmers of structuring their programs in modules (logical groupings of code).
A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array.
Main memory is not divided into page frames, because the size of each segment is different.
Memory is allocated dynamically.

Segmentation Architecture
A logical address consists of a two-tuple: <segment-number, offset>.
A segment table maps the two-dimensional logical addresses to physical addresses; each table entry has:
  base: contains the starting physical address where the segment resides in memory
  limit: specifies the length of the segment
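Translation checks the offset against the segment's limit before adding the base; the segment table values below are illustrative, not from the slides.

```python
# Segmentation translation sketch: <segment, offset> -> physical address,
# with the hardware's limit check (table values are illustrative).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}  # seg -> (base, limit)

def translate(seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        raise ValueError("addressing error: offset beyond segment limit")
    return base + offset        # valid: physical address = base + offset

print(translate(2, 53))   # 4300 + 53 = 4353
```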

2. Segmentation (Address
Translation Architecture)

Virtual Memory
Virtual memory: the separation of user logical memory from physical memory.
Only part of the program needs to be in memory for execution.
The logical address space can therefore be much larger than the physical address space.
Allows address spaces to be shared by several processes.
Allows for more efficient process creation.

Advantages of VM
Works well in a multiprogramming environment, because most programs spend a lot of time waiting.
A job's size is no longer restricted to the size of main memory (or the free space within main memory).
Memory is used more efficiently.
Allows an unlimited amount of multiprogramming.
Eliminates external fragmentation when used with paging, and eliminates internal fragmentation when used with segmentation.
Allows a program to be loaded multiple times, occupying a different memory location each time.
Allows the sharing of code and data.
Facilitates dynamic linking of program segments.

Disadvantages of VM
Increased processor hardware costs.
Increased overhead for handling paging interrupts.
Increased software complexity to prevent thrashing.

Page Fault
Page fault: a failure to find a page in memory.
Thrashing: a process is busy swapping pages in and out.
Procedure to handle a page fault:
1. First, check an internal table for the process to determine whether the reference was a valid or an invalid access.
2. If the reference was invalid, terminate the process. If it was valid, but that page has not yet been brought in, page it in now.
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.
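The six steps above can be sketched with simple dict/list stand-ins for the internal table, page table, and free-frame list (all structures here are invented for illustration):

```python
# Page-fault handling sketch mirroring the six steps on the slide.
valid_pages = {0, 1, 2, 3}     # internal table: pages belonging to the process
page_table = {0: 9}            # pages currently in memory: page -> frame
free_frames = [4, 5]
disk_reads = []                # record of scheduled disk operations

def access(page):
    if page not in valid_pages:           # steps 1-2: invalid -> terminate
        raise MemoryError("invalid reference: terminate process")
    if page not in page_table:            # valid but not in memory: page fault
        frame = free_frames.pop()         # step 3: find a free frame
        disk_reads.append(page)           # step 4: schedule the disk read
        page_table[page] = frame          # step 5: update the page table
        # step 6: the faulting instruction would now be restarted
    return page_table[page]

print(access(0))   # already in memory -> frame 9
print(access(2))   # page fault: loaded into a free frame, then accessible
```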

Thrashing
Thrashing: an excessive amount of page swapping back and forth between main memory and secondary storage.
Operation becomes inefficient.
Caused when a page is removed from memory but is called back shortly thereafter.
Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages.
Can happen within a job (e.g., in loops that cross page boundaries).
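The within-a-job case is easy to reproduce: a loop over 4 pages given only 3 frames under FIFO replacement faults on every single reference, while one extra frame removes the problem entirely. FIFO is an assumed replacement policy here, chosen because it shows the worst case clearly.

```python
# Rough illustration of thrashing: a loop over 4 pages with only 3 frames
# under FIFO replacement faults on every reference.
from collections import deque

def count_faults(refs, num_frames):
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page...
            frames.append(page)       # ...which the loop needs again soon
    return faults

loop = [0, 1, 2, 3] * 10              # a loop that crosses page boundaries
print(count_faults(loop, 3))   # 40: every reference faults (thrashing)
print(count_faults(loop, 4))   # 4: once loaded, the loop never faults again
```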
