
Chapter 5

Memory Management

Memory Management in Early Systems: Single-User Contiguous Scheme

Each program is loaded in its entirety into memory and is allocated as much contiguous memory space as it needs. If the program is too large to fit, it cannot be executed. A minimal amount of work is done by the Memory Manager. Hardware needed: 1) a register to store the base address; 2) an accumulator to track the size of the program as it is loaded into memory.

[Figure: memory map (addresses 0 to 1024) with the OS in low memory and Jobs 1 through 4 loaded at successive addresses (300, 420, 880). A base register and a limit register (both shown as 300) bound one job's contiguous space.]

Fixed (Static) Partitions
Allows multiprogramming by dividing memory into fixed partitions, one partition for each job. The size of each partition remains static once the system is in operation; partitions can be reconfigured only when the computer system is shut down, reconfigured, and restarted. The partition sizes are critical: if they are too small, larger jobs will be rejected; if they are too big, memory is wasted when a job does not occupy the entire partition. The entire program is stored contiguously in memory during its entire execution. Internal fragmentation, the unused memory space within a partition itself, is a problem.

[Figure: fixed partitions of 300K each. Job 1 (250K) leaves 50K unused, Job 2 (30K) leaves 270K unused, and Job 3 (200K) leaves 100K unused: internal fragmentation. A second layout shows a user program of 720K leaving 180K unused in its partition.]

Simplified Fixed Partition Memory Table (Table 2.1)


Partition size   Memory address   Access   Partition status
100K             200K             Job 1    Busy
25K              300K             Job 4    Busy
25K              325K                      Free
50K              350K             Job 2    Busy

As each job terminates, the status of its memory partition is changed from busy to free so that an incoming job can be assigned to that partition. The fixed partition scheme works well if all of the jobs run on the system are of the same size, or if the sizes are known ahead of time and don't vary between reconfigurations.

Job 3 must wait even though 70K of free space is available in Partition 1, where Job 1 occupies only 30K of the 100K available. Jobs are allocated space on the basis of the first available partition of the required size.

[Figure: partitions of 100K, 25K, 25K, and 50K. J1 (30K) occupies the 100K partition, leaving 70K unused; J4 (25K) and J2 (50K) fill their partitions; J3 (30K) must wait.]

Dynamic Partitions
Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when they are loaded. This improves memory use over fixed partitions, but performance deteriorates as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).

[Figure: dynamic partitions after a series of loads and terminations. J1 (10K), J2 (15K), J3 (20K), J4 (50K), J5 (5K), J6 (30K), and J7 (10K) leave scattered free blocks between the allocated regions: external fragmentation. J8 (30K) cannot be placed.]

In this example, eight jobs are submitted for processing and allocated space on a first-come, first-served basis. Job 8 has to wait even though there's enough free memory between partitions to accommodate it, because the free memory is not contiguous and jobs must be loaded in a contiguous manner.

Dynamic Partition Allocation Schemes


First-fit: Allocate the first partition that is big enough. Keeps the free/busy lists organized by memory location (low-order to high-order). Faster in making the allocation.

Best-fit: Allocate the smallest partition that is big enough. Keeps the free/busy lists ordered by size (smallest to largest). Produces the smallest leftover partition; makes best use of memory.
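The two placement policies can be sketched as operations on a free list of (start, size) pairs. This is an illustrative sketch, not code from the text; the function names and list representation are assumptions:

```python
def first_fit(free_list, request):
    """Return the start address of the first free block large enough,
    or None. free_list holds (start, size) pairs ordered by address."""
    for i, (start, size) in enumerate(free_list):
        if size >= request:
            if size == request:
                free_list.pop(i)                # block consumed entirely
            else:
                # Remainder of the block starts just past the allocation.
                free_list[i] = (start + request, size - request)
            return start
    return None

def best_fit(free_list, request):
    """Return the start address of the smallest free block large enough,
    or None. Note the whole list is searched, the cost noted above."""
    candidates = [(size, i) for i, (_, size) in enumerate(free_list)
                  if size >= request]
    if not candidates:
        return None
    _, i = min(candidates)                      # smallest sufficient block
    start, size = free_list[i]
    if size == request:
        free_list.pop(i)
    else:
        free_list[i] = (start + request, size - request)
    return start

free = [(0, 100), (150, 30)]          # two free blocks
print(first_fit(list(free), 25))      # 0   (first block big enough)
print(best_fit(list(free), 25))       # 150 (smallest block big enough)
```

With the same request, the two policies pick different blocks: first-fit stops at the 100-space block, while best-fit passes it over in favor of the 30-space block, leaving a smaller remainder.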

First-Fit Allocation Example (Table 2.2)

Using a first-fit scheme, Job 1 claims the first available space. Job 2 then claims the first partition large enough to accommodate it, but in doing so it takes the last block large enough to accommodate Job 3. Therefore, Job 3 (indicated by the asterisk) must wait until a large block becomes available, even though there's 75K of unused memory space (internal fragmentation). Notice that the memory list is ordered according to memory location.

[Figure: first-fit allocation. With fixed partitions, J1 (10K), J2 (20K), and J4 (15K) are placed, leaving internal fragments, and J3 (30K) must wait. With dynamic partitions, J1, J2, J4, and J3 (30K) are all placed.]

Note: A request for a block of 200 spaces has just been given to the Memory Manager. Using the first-fit algorithm and starting from the top of the list, the Memory Manager locates the first block of memory large enough to accommodate the job, which is at location 6785. The job is then loaded, starting at location 6785 and occupying the next 200 spaces. The next step is to adjust the free list to indicate that the block of free memory now starts at location 6985 (not 6785 as before) and that it contains only 400 spaces (not 600 as before).
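The list adjustment described above can be checked in a few lines, using the values from the example (a sketch, not code from the text):

```python
# Free-list entry before the allocation: 600 free spaces at address 6785.
start, size = 6785, 600
request = 200            # the job needs a block of 200 spaces

# First-fit loads the job at the start of the first sufficient block,
# then shifts the free entry past the allocated region.
job_address = start
start, size = start + request, size - request

print(job_address, start, size)   # 6785 6985 400
```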

[Figure: the free list before and after the first-fit allocation of J1 (200 spaces). Smaller free blocks earlier in the list (105K, 5K, 20K) are skipped; the 600-space block at 6785 becomes a 400-space block at 6985.]

[Figure: best-fit allocation. With fixed partitions of 15K, 20K, 30K, and 50K, jobs J1 (10K), J2 (20K), J3 (30K), and J4 (10K) are placed in the closest-fitting partitions, leaving internal fragmentation, and J5 (5K) must wait. With dynamic partitions, all five jobs, including J5 (5K), are placed.]

A request for a block of 200 spaces has just been given to the Memory Manager. Using the best-fit algorithm and starting from the top of the list, the Memory Manager searches the entire list and locates a block of memory starting at location 7600, which is the smallest block that's large enough to accommodate the job. The choice of this block minimizes the wasted space (only 5 spaces are wasted, which is less than in the four alternative blocks). The job is then stored, starting at location 7600 and occupying the next 200 spaces. Now the free list must be adjusted to show that the block of free memory starts at location 7800 (not 7600 as before) and that it contains only 5 spaces (not 205 as before).

[Figure: the free list before and after the best-fit allocation of J1 (200 spaces). The 205-space block at 7600 becomes a 5-space block at 7800; larger blocks (600K, 230K, 1000K) are passed over.]

Best-Fit vs. First-Fit


First-fit: Faster to implement, but may not make efficient use of memory space. The algorithm is less complex. The memory list is organized by memory location, low-order to high-order.

Best-fit: Uses memory efficiently, but slower to implement because the entire free list must be searched before an allocation can be made. The algorithm is more complex because it must find the smallest block of memory into which the job can fit. The memory list is organized by memory size, smallest to largest.

Release of Memory Space: Deallocation
Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to free. Deallocation for dynamic partitions is more complex because the Memory Manager tries to combine free areas of memory whenever possible.
Example: If the block to be deallocated is adjacent to another free block, then:
- The deallocated block is combined with the free block.
- The memory list is changed to reflect the starting address of the new free block (if the starting address has changed).
- The free memory block size is changed to show its new size.
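The combining step can be sketched as inserting the freed block into an address-ordered free list and merging with whichever neighbors are contiguous. This is an illustrative sketch; the names and list representation are mine, not from the text:

```python
import bisect

def deallocate(free_list, start, size):
    """free_list: address-ordered list of (start, size) free blocks.
    Inserts the freed block and coalesces it with adjacent free blocks."""
    i = bisect.bisect(free_list, (start, size))
    free_list.insert(i, (start, size))
    # Combine with the following block if the two are contiguous.
    if i + 1 < len(free_list) and start + size == free_list[i + 1][0]:
        nxt = free_list.pop(i + 1)
        free_list[i] = (start, size + nxt[1])
    # Combine with the preceding block if the two are contiguous;
    # the new block's starting address then changes, as described above.
    if i > 0 and sum(free_list[i - 1]) == free_list[i][0]:
        prev = free_list.pop(i - 1)
        merged = free_list.pop(i - 1)
        free_list.insert(i - 1, (prev[0], prev[1] + merged[1]))
    return free_list

# Freeing (10, 20) between free blocks (0, 10) and (30, 10)
# merges all three into one block.
print(deallocate([(0, 10), (30, 10)], 10, 20))   # [(0, 40)]
```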

Relocatable Dynamic Partitions
The Memory Manager relocates programs to gather all of the empty blocks and compact them into one block of memory large enough to accommodate some or all of the jobs waiting to get in.
Compaction: used to consolidate all external fragments (free areas in memory) into one contiguous block. In some cases, compaction enhances throughput by allowing more programs to be active at the same time.
Compaction steps:
1. Relocate every program in memory so they're contiguous.
2. Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.
3. Leave alone all other values within the program (e.g., data values).

Relocation
The process by which programs are repositioned in main memory to allow compaction of free memory areas. When relocation takes place, the addresses specified by a program for either branching or data references are modified, during execution, so that the program continues to execute correctly. The Memory Manager relocates programs to gather all empty blocks and compact them into one memory block. Memory compaction (also called garbage collection or defragmentation) is performed by the OS to reclaim fragmented sections of memory space. By compacting and relocating, the Memory Manager optimizes the use of memory and improves throughput. Relocation can be time-consuming, however, and should be done sparingly. Options for how often to relocate and compact include:
- When a certain percentage of main memory is in use (e.g., 75% used up).
- When the number of programs waiting for execution reaches a prescribed upper limit.
- When a prescribed amount of time has elapsed.
- Combinations of the above options.
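The gather-and-relocate step can be modeled by sliding each job, in address order, down to the lowest available address; the per-job offset is what must then be added to every address reference inside that job. A simplified sketch (the job layout is taken from the compaction example that follows; function and variable names are mine):

```python
def compact(jobs, first_free):
    """jobs: dict name -> (start, size), with gaps between them.
    Returns (new_layout, offsets); offsets[name] is the amount to add
    to every address reference within that job after relocation."""
    new_layout, offsets = {}, {}
    next_free = first_free
    # Process jobs in address order so their relative order is preserved.
    for name, (start, size) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        new_layout[name] = (next_free, size)
        offsets[name] = next_free - start      # negative = moved down
        next_free += size
    return new_layout, offsets

# Jobs as in the example: the OS ends at 10K, total memory is 210K.
jobs = {"J1": (10, 8), "J4": (30, 32), "J2": (92, 16), "J5": (108, 48)}
layout, offsets = compact(jobs, first_free=10)
print(layout)            # J1 at 10K, J4 at 18K, J2 at 50K, J5 at 66K
print(offsets["J2"])     # -42: J2 moved down 42K
print(210 - (66 + 48))   # 96K contiguous free block: room for J6 (84K)
```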

Example
[Figure: compaction. Before: in a 210K memory with the OS occupying the first 10K, J1 (8K) at 10K, J4 (32K) at 30K, J2 (16K) at 92K, and J5 (48K) at 156K leave scattered free blocks of 12K, 30K, and 54K, so J6 (84K) cannot be loaded. After compaction: J1, J4, J2, and J5 occupy 10K to 114K contiguously, the single 96K free block at 114K accommodates J6 (84K) up to 198K, and 12K remains free at the top.]

Memory Management: Recent Systems

Early schemes were limited to storing the entire program in memory, causing fragmentation and overhead due to relocation. More sophisticated memory schemes now:
- Eliminate the need to store programs contiguously.
- Eliminate the need for the entire program to reside in memory during execution.

More recent memory management schemes include:
- Paged Memory Allocation
- Demand Paging Memory Allocation
- Segmented Memory Allocation

Paged Memory Allocation


Divides each incoming job into pages of equal size. Works well if page size = memory block size (page frame size) = size of a disk section (sector or block). Before executing a program, the Memory Manager:
1. Determines the number of pages in the program.
2. Locates enough empty page frames in main memory.
3. Loads all of the program's pages into them.
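The three load-time steps can be sketched over a memory map of free/busy frames. A sketch under assumptions of mine: the "first free frames" choice is arbitrary, since any free frames would do:

```python
import math

def load_job(job_size, page_size, frame_free):
    """frame_free: list of booleans, True if that page frame is free.
    Returns the job's page map table {page: frame}, or None if there
    are not enough empty page frames."""
    num_pages = math.ceil(job_size / page_size)            # step 1
    free = [f for f, ok in enumerate(frame_free) if ok]    # step 2
    if len(free) < num_pages:
        return None
    pmt = {}
    for page in range(num_pages):                          # step 3
        frame = free[page]
        frame_free[frame] = False                          # frame now busy
        pmt[page] = frame
    return pmt

# A 350-line job with 100-line pages needs 4 frames; frames 0-1 hold the OS.
memory_map = [False, False] + [True] * 11
print(load_job(350, 100, memory_map))   # {0: 2, 1: 3, 2: 4, 3: 5}
```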

At compilation time, every job is divided into pages. For a program of 350 lines, referred to by the system as line 0 through line 349:
- Page 0 contains the first hundred lines.
- Page 1 contains the second hundred lines.
- Page 2 contains the third hundred lines.
- Page 3 contains the last fifty lines.

[Figure: Job 1 (350 lines) split into Page 0 (lines 0-99), Page 1 (lines 100-199), Page 2 (lines 200-299), and Page 3 (lines 300-349, 50 lines). The pages are loaded into non-adjacent page frames of a 13-frame memory (frames 0-12), with the OS occupying the first frames.]

Paging Requires 3 Tables to Track a Job's Pages
1. Job Table (JT): two entries for each active job, the size of the job and the memory location of its Page Map Table. Dynamic: grows and shrinks as jobs are loaded and completed.
2. Page Map Table (PMT): one entry per page, the page number and the corresponding page frame memory address. Page numbers are sequential (Page 0, Page 1, ...).
3. Memory Map Table (MMT): one entry for each page frame, its location and free/busy status.

Job Table (JT)

Job Size       Memory Address
Job 1 (360K)   0
Job 2 (200K)   360

Page Map Table (PMT)

Page No   Frame No
Page 0    Frame 8
Page 1    Frame 10
Page 2    Frame 5
Page 3    Frame 11

Memory Map Table (MMT)

Frame No   Status
Frame 8    busy
Frame 9    free
Frame 10   busy
Frame 11   busy
Frame 12   free

Displacement (Figure 3.2)
The displacement (offset) of a line is how far that line is from the beginning of its page; it is used to locate the line within its page frame. It is a relative factor. For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively, so each has a displacement of zero.

Example: If we use 100 lines as the page size, the page number and the displacement (the location within that page) of line 214 can be calculated by dividing the line number by the page size:

    214 / 100 = 2 remainder 14
    quotient 2   = page number
    remainder 14 = displacement

Line 214 is therefore located on Page 2, 15 lines from the top of the page (at Line 14 of that page).
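The quotient/remainder step is a single `divmod`, and combined with a page map table it completes the translation to a physical address. A sketch, using the PMT values shown earlier and assuming frame size equals page size:

```python
page_size = 100
pmt = {0: 8, 1: 10, 2: 5, 3: 11}      # page -> frame, from the PMT above

page, displacement = divmod(214, page_size)
print(page, displacement)             # 2 14: page 2, 15th line of the page

# Physical address = start of the page frame + displacement within it.
physical = pmt[page] * page_size + displacement
print(physical)                       # frame 5 -> address 514
```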

[Figure: a 215-line job divided into Page 0 (lines 0-99), Page 1 (lines 100-199), and Page 2 (the remaining 15 lines). Line 214 is the last line, at displacement 14 on Page 2.]

Demand Paging
Brings a page into memory only when it is needed, so less I/O and less memory are used, and response is faster. Takes advantage of the fact that programs are written sequentially, so not all pages are needed at once. For example:
- User-written error-handling modules.
- Mutually exclusive modules: certain program options are either mutually exclusive or not always accessible.
- Many tables are assigned a fixed amount of address space even though only a fraction of each table is actually used.
Demand paging has made virtual memory widely available.

[Figure: demand paging. Pages of Program A and Program B are swapped in from secondary storage as they are referenced, and resident pages are swapped out to make room.]

Demand paging requires a high-speed direct-access storage device that can work directly with the CPU. How and when the pages are passed (or swapped) depends on predefined policies that determine when to make room for needed pages and how to do so.

Thrashing is a problem with demand paging. Thrashing is an excessive amount of page swapping back and forth between main memory and secondary storage, making operation inefficient. It is caused when a page is removed from memory but is called back shortly thereafter. It can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages, and it can happen within a single job (e.g., in loops that cross page boundaries). A page fault is a failure to find a page in memory.

Tables in Demand Paging
- Job Table.
- Page Map Table, with three new fields per page, used to determine which pages should remain in main memory and which should be swapped out:
  - Is the requested page already in memory?
  - Have the page contents been modified?
  - Has the page been referenced recently?
- Memory Map Table.

Page Fault
[Figure: servicing a page fault. A process issues Load M, but the Page Map Table marks page M invalid (not in memory). The OS locates M on disk, loads it into a free frame of physical memory, updates the PMT entry with the frame number, and the instruction is restarted.]

Page Replacement Policies


The policy that selects the page to be removed is crucial to system efficiency. Policies used include:
- First-in first-out (FIFO) policy: the best page to remove is the one that has been in memory the longest.
- Least-recently-used (LRU) policy: chooses the pages least recently accessed to be swapped out.

[Figure: reference string A B A C A B D B A C D with two page frames. (a) FIFO produces 9 page faults; (b) LRU produces 8 page faults.]

Example: reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1, with 3 frames (3 pages of the process can be in memory at once).

a) First-In-First-Out (FIFO)
[Figure: FIFO frame contents over the reference string: 15 page faults.]

b) Least Recently Used (LRU): replace the page that has not been used for the longest period of time.
[Figure: LRU frame contents over the same reference string: 12 page faults.]
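Both policies can be simulated in a few lines to reproduce the fault counts above. A sketch: FIFO evicts from the front of an arrival-ordered list, while LRU moves a page to the back on every hit so the front is always the least recently used:

```python
def count_faults(refs, frames, policy):
    """policy: 'FIFO' evicts the oldest-loaded page,
    'LRU' evicts the least recently used page."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "LRU":        # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                    # page fault: page not in memory
        if len(memory) == frames:
            memory.pop(0)              # front = oldest / least recently used
        memory.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(count_faults(refs, 3, "FIFO"))  # 15 page faults
print(count_faults(refs, 3, "LRU"))   # 12 page faults
```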

LRU: The efficiency of LRU is only slightly better than that of FIFO. LRU is a stack algorithm removal policy: increasing main memory (adding frames) causes either a decrease in, or the same number of, page interrupts. LRU doesn't exhibit the anomaly that FIFO does.

Belady's anomaly (in FIFO)
[Figure: number of page faults vs. number of frames under FIFO. For some reference strings, increasing the number of frames increases the number of page faults.]

Pros and Cons of Demand Paging
- A job is no longer constrained by the size of physical memory (virtual memory). (Pro)
- Uses memory more efficiently than previous schemes, because sections of a job that are used seldom or not at all aren't loaded into memory unless specifically requested. (Pro)
- Increased overhead caused by the tables and page interrupts. (Con)

Segmented Memory Allocation
Programmers commonly structure their programs in modules (logical groupings of code). A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array. In a segmented memory allocation scheme, jobs are divided into a number of distinct logical units called segments, one for each module, each containing pieces that perform related functions. Main memory is not divided into page frames, because the size of each segment is different; memory is allocated dynamically.

Segment Map Table (SMT)
When a program is compiled, segments are set up according to the program's structural modules. Each segment is numbered, and a Segment Map Table (SMT) is generated for each job. The SMT contains each segment's number, length, access rights, status, and (once it is loaded into memory) its location in memory.

Tables Used in Segmentation
The Memory Manager needs to track segments in memory using:
- Job Table (JT): lists every job in process (one for the whole system).
- Segment Map Table (SMT): lists details about each segment (one for each job).
- Memory Map Table (MMT): monitors the allocation of main memory (one for the whole system).
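Address translation under segmentation pairs a segment number with a displacement; because segment sizes vary, every access must be checked against the segment's length recorded in the SMT. An illustrative sketch (the SMT values follow the figure below; the dict representation is my assumption):

```python
def translate(seg_no, displacement, smt):
    """smt: {seg_no: (size, base_address)}. Returns the absolute
    address, checking the displacement against the segment length."""
    size, base = smt[seg_no]
    if displacement >= size:
        # Unlike paging's fixed page size, segment bounds must be
        # checked explicitly on every reference.
        raise IndexError("displacement beyond end of segment")
    return base + displacement

smt = {0: (200, 4000), 1: (400, 7000)}   # Seg 0 and Seg 1 from the SMT
print(translate(0, 150, smt))            # 4150
print(translate(1, 399, smt))            # 7399 (last word of Seg 1)
```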

[Figure: a job's modules (main program = Seg 0, subroutines = Seg 1 and Seg 2) loaded at scattered memory addresses, with its Segment Map Table:

Seg No   Size   Status   Access   Memory Address
0        200    busy     E        4000
1        400    busy     E        7000
2        240    free     E        6700]

Pros and Cons of Segmentation
- Compaction is needed.
- External fragmentation can occur.
- Secondary storage handling is required.
- Memory is allocated dynamically.

[Figure: external fragmentation under segmentation. Scattered free blocks (200K, 100K, 80K, 40K) lie between segments Seg 0 through Seg 3, but no single block can hold Program B (340K), so Program B must wait: external fragmentation.]

How Paging Overcomes Segmentation's External Fragmentation

[Figure: Program B (340K) divided into Page 0 through Page 2 (100K each) and Page 3 (40K), loaded into non-contiguous 100K page frames. External fragmentation disappears; only the last frame suffers internal fragmentation (60K unused).]

Virtual Memory (VM) Even though only a portion of each program is stored in memory, virtual memory gives the appearance that programs are being completely loaded in main memory during their entire processing time. Shared programs and subroutines are loaded on demand, reducing storage requirements of main memory. VM is implemented through demand paging and segmentation schemes.

Comparison of VM with Paging and with Segmentation


Paging                                     Segmentation
Allows internal fragmentation              Doesn't allow internal fragmentation
  within page frames
Doesn't allow external fragmentation       Allows external fragmentation
Programs are divided into                  Programs are divided into
  equal-sized pages                          unequal-sized segments
Absolute address calculated using          Absolute address calculated using
  page number and displacement               segment number and displacement
Requires PMT                               Requires SMT

Advantages of VM
1. Works well in a multiprogramming environment because most programs spend a lot of time waiting.
2. A job's size is no longer restricted to the size of main memory (or the free space within main memory).
3. Memory is used more efficiently.
4. Allows an unlimited amount of multiprogramming.
5. Eliminates external fragmentation when used with paging and eliminates internal fragmentation when used with segmentation.
6. Allows a program to be loaded multiple times, occupying a different memory location each time.
7. Allows the sharing of code and data.
8. Facilitates dynamic linking of program segments.

Disadvantages of VM
- Increased processor hardware costs.
- Increased overhead for handling paging interrupts.
- Increased software complexity to prevent thrashing.
