Memory Management
Each program is loaded in its entirety into memory and is allocated as much contiguous memory space as it needs. If a program is too large to fit in memory, it cannot be executed. The Memory Manager does a minimal amount of work. Hardware needed: 1) a register to store the base address; 2) an accumulator to track the size of the program as it is loaded into memory.
[Figure: main memory (0–1024K) holding the OS and Jobs 1–4. A base register (300) and a limit register (300) bound the running job's address space, so a logical address of 120 maps to physical address 420.]
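The base/limit check from the figure can be sketched in Python (the register values 300 and 300 are taken from the figure; in a real system this check happens in hardware on every memory access):

```python
def translate(logical_address, base=300, limit=300):
    """Map a job-relative (logical) address to a physical address.

    The limit register holds the job's size, so any logical address at
    or beyond it is out of bounds.  The base register holds where the
    job was loaded; the physical address is base + logical.
    """
    if not 0 <= logical_address < limit:
        raise MemoryError("logical address outside job bounds")
    return base + logical_address

print(translate(120))   # logical 120 -> physical 420
```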
Fixed (Static) Partitions Allows multiprogramming by using fixed partitions, one partition for each job. The size of each partition remains static once the system is in operation; each partition can only be reconfigured when the computer system is shut down, reconfigured, and restarted. The partition sizes are critical: if the partitions are too small, larger jobs will be rejected; if they are too big, memory is wasted when a job does not occupy the entire partition. The entire program is stored contiguously in memory during its entire execution. Internal fragmentation, which occurs when there is unused memory space within the partition itself, is a problem.
Internal fragmentation
As each job terminates, the status of its memory partition is changed from busy to free so that an incoming job can be assigned to that partition. The fixed partition scheme works well if all of the jobs run on the same system are of the same size, or if the sizes are known ahead of time and don't vary between reconfigurations.
Job 3 must wait even though 70K of free space is available in Partition 1, where Job 1 occupies only 30K of the 100K available. Jobs are allocated space on the basis of the first available partition of the required size.
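A minimal sketch of first-available fixed-partition assignment (the partition sizes follow the example; the returned value is the internal fragmentation left inside the chosen partition):

```python
def assign_fixed(partitions, job_size):
    """Give the job the first free partition large enough for it.

    Returns the unused space inside the chosen partition (internal
    fragmentation), or None if the job must wait.
    """
    for p in partitions:
        if p["free"] and p["size"] >= job_size:
            p["free"] = False
            return p["size"] - job_size
    return None

parts = [{"size": 100, "free": True}, {"size": 25, "free": True},
         {"size": 25, "free": True}, {"size": 50, "free": True}]
print(assign_fixed(parts, 30))   # J1 -> 100K partition, 70K wasted
print(assign_fixed(parts, 50))   # J2 -> 50K partition, no waste
print(assign_fixed(parts, 30))   # J3 -> None: no free partition fits
print(assign_fixed(parts, 25))   # J4 -> 25K partition
```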
[Figure: fixed partitions. J1 (30K) occupies the 100K partition, leaving 70K of internal fragmentation; J4 (25K) and J2 (50K) occupy their partitions; J3 waits.]
Dynamic Partitions
Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded. This improves memory use over fixed partitions, but performance deteriorates as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).
[Figure: dynamic partitions. Allocated jobs (e.g., J6 = 30K, J4 = 50K) are separated by small free fragments (10K, 5K, 20K, ...) that incoming jobs cannot use.]
In this example, eight jobs are submitted for processing and allocated space on a first-come, first-served basis. Job 8 has to wait even though there's enough free memory between partitions to accommodate it, because the free memory available is not contiguous. Since jobs are loaded contiguously, Job 8 must wait.
Using a first-fit scheme, Job 1 claims the first available space. Job 2 then claims the first partition large enough to accommodate it, but by doing so it takes the last block large enough to accommodate Job 3. Therefore, Job 3 (indicated by the asterisk) must wait until a large enough block becomes available, even though there's 75K of unused memory space (internal fragmentation). Notice that the memory list is ordered according to memory location.
First-Fit Allocation
[Figure: first-fit allocation. J1 = 10K, J2 = 20K, and J4 = 15K each claim the first free block large enough; J3 = 30K is waiting because no single remaining block can hold it.]
[Figure: the memory list after first-fit allocation of J1 = 10K, J2 = 20K, J3 = 30K, and J4 = 15K, ordered by memory location.]
Note: A request for a block of 200 spaces has just been given to the Memory Manager. Using the first-fit algorithm and starting from the top of the list, the Memory Manager locates the first block of memory large enough to accommodate the job, at location 6785. The job is then loaded, starting at location 6785 and occupying the next 200 spaces. The next step is to adjust the free list to indicate that the block of free memory now starts at location 6985 (not 6785 as before) and that it contains only 400 spaces (not 600 as before).
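The free-list adjustment described above can be sketched as follows (the 600-space block at 6785 matches the example; the other free-list entries are invented for illustration):

```python
def first_fit(free_list, size):
    """Scan the free list from the top; claim the first block that is
    large enough and shrink it in place."""
    for block in free_list:          # each block is [start, length]
        if block[1] >= size:
            start = block[0]
            block[0] += size         # free space now begins past the job
            block[1] -= size
            return start
    return None                      # no block large enough: job waits

free = [[1200, 150], [4600, 50], [6785, 600]]   # sorted by location
print(first_fit(free, 200))   # job loaded at 6785
print(free[2])                # remaining free block: [6985, 400]
```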
Best-Fit Allocation
[Figure: best-fit allocation of J1 = 10K, J2 = 20K, J3 = 30K, J4 = 10K, and J5 = 5K. Each job is placed in the smallest free block large enough to hold it (J2 in a 20K block, J3 in a 30K block, J5 in a 15K block), leaving only small fragments of free space.]
A request for a block of 200 spaces has just been given to the Memory Manager. Using the best-fit algorithm and starting from the top of the list, the Memory Manager searches the entire list and locates a block of memory starting at location 7600, which is the smallest block that's large enough to accommodate the job. The choice of this block minimizes the wasted space (only 5 spaces are wasted, which is less than in the four alternative blocks). The job is then stored, starting at location 7600 and occupying the next 200 spaces. Now the free list must be adjusted to show that the block of free memory starts at location 7800 (not 7600 as before) and that it contains only 5 spaces (not 205 as before).
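A corresponding sketch of best-fit: the whole list must be searched before a block is chosen (the 205-space block at 7600 matches the example; the other entries are invented):

```python
def best_fit(free_list, size):
    """Search the entire free list and claim the smallest block that is
    still large enough, minimizing wasted space."""
    best = None
    for block in free_list:          # each block is [start, length]
        if block[1] >= size and (best is None or block[1] < best[1]):
            best = block
    if best is None:
        return None                  # job must wait
    start = best[0]
    best[0] += size
    best[1] -= size
    return start

free = [[1200, 600], [4600, 300], [7600, 205], [9000, 750]]
print(best_fit(free, 200))   # job loaded at 7600 (only 5 spaces wasted)
print(free[2])               # remaining free block: [7800, 5]
```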
Release of Memory Space: Deallocation Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to free. Deallocation for dynamic partitions is more complex, because the Memory Manager tries to combine free areas of memory whenever possible. Example: if the block to be deallocated is adjacent to another free block, then the deallocated block is combined with the free block, the memory list is changed to reflect the starting address of the new free block (if the starting address has changed), and the free memory block size is changed to show its new size.
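The merge step for dynamic partitions can be sketched like this (a simplified free list of [start, size] pairs; a real Memory Manager would also maintain the busy list):

```python
def deallocate(free_list, start, size):
    """Return a block to the free list, combining it with any adjacent
    free blocks and updating starting addresses and sizes."""
    blocks = sorted(free_list + [[start, size]])
    merged = [blocks[0]]
    for s, length in blocks[1:]:
        last = merged[-1]
        if last[0] + last[1] == s:   # adjacent: grow the earlier block
            last[1] += length
        else:
            merged.append([s, length])
    return merged

# Freeing 100 spaces at 150 joins the free block that ends at 150:
print(deallocate([[100, 50], [300, 20]], 150, 100))
# -> [[100, 150], [300, 20]]
```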
Relocatable Dynamic Partitions The Memory Manager relocates programs to gather together all of the empty blocks and compact them into one block of memory large enough to accommodate some or all of the jobs waiting to get in. Compaction Used to consolidate all external fragments (free areas in memory) into one contiguous block. In some cases, compaction enhances throughput by allowing more programs to be active at the same time. Compaction Steps Relocate every program in memory so they're contiguous. Adjust every address, and every reference to an address, within each program to account for the program's new location in memory. Leave alone all other values within the program (e.g., data values).
Relocation The process by which programs are repositioned in main memory to allow compaction of free memory areas. When relocation takes place, the addresses specified by a program for either branching or data reference are modified, during execution, so that the program executes correctly. The Memory Manager relocates programs to gather all empty blocks and compact them into one memory block. Memory compaction (also called garbage collection or defragmentation) is performed by the OS to reclaim fragmented sections of memory space. By compacting and relocating, the Memory Manager optimizes the use of memory and improves throughput. Relocation can be time-consuming, however, and should be done sparingly. The options for how often to perform relocation and compaction include: when a certain percentage of main memory is used up (e.g., 75%); when the number of programs waiting for execution reaches a prescribed upper limit; when a prescribed amount of time has elapsed; or a combination of these.
Example
[Figure: compaction. Before: the OS occupies 0K–10K; J1 (8K) is at 10K, followed by a 12K hole; J4 (32K) is at 30K, followed by a 30K hole; J2 (16K) is at 92K and J5 (48K) at 108K, followed by a 54K hole at 156K–210K. The 96K of free memory is not contiguous, so J6 (84K) cannot load. After compaction: J1 at 10K, J4 at 18K, J2 at 50K, and J5 at 66K, leaving one 96K free block starting at 114K; J6 (84K) is loaded there, leaving 12K free at 198K–210K.]
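The relocation bookkeeping in the example can be sketched as below (job positions are taken from the figure; each job's offset is what must be added to every address reference inside it):

```python
def compact(jobs, os_end):
    """Slide every job down toward the OS so allocated memory is
    contiguous.  jobs is a list of (start, name, size); returns the new
    start and the relocation offset for each job."""
    next_free, moves = os_end, []
    for start, name, size in sorted(jobs):
        moves.append((name, next_free, next_free - start))
        next_free += size
    return moves

# Before compaction (addresses in K, from the example):
jobs = [(10, "J1", 8), (30, "J4", 32), (92, "J2", 16), (108, "J5", 48)]
for name, new_start, offset in compact(jobs, os_end=10):
    print(name, new_start, offset)
# J1 stays at 10; J4 moves to 18 (offset -12); J2 to 50; J5 to 66,
# leaving one 96K free block at 114K for J6 (84K).
```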
More Recent Memory Management Schemes include: Paged Memory Allocation, Demand Paging Memory Allocation, and Segmented Memory Allocation.
The program has 350 lines, referred to by the system as line 0 through line 349.
[Figure: the 350-line job is divided into four pages of 100 lines each (Page 3 holds only 50 lines) and loaded into noncontiguous page frames numbered 0–12; the OS occupies the first frames.]
Paging Requires 3 Tables to Track a Job's Pages 1. Job Table (JT): 2 entries for each active job, the size of the job and the memory location of its Page Map Table. Dynamic: grows and shrinks as jobs are loaded and completed. 2. Page Map Table (PMT): 1 entry per page, the page number and the corresponding page frame memory address. Page numbers are sequential (Page 0, Page 1, ...). 3. Memory Map Table (MMT): 1 entry for each page frame, its location and free/busy status.
Displacement (Figure 3.2) The displacement (offset) of a line is how far away that line is from the beginning of its page, and is used to locate the line within its page frame. It is a relative factor. For example, lines 0, 100, 200, and 300 are the first lines of pages 0, 1, 2, and 3 respectively, so each has a displacement of zero.
Example: If we use 100 lines as the page size, the page number and the displacement (the location within that page) of Line 214 can be calculated by dividing the line number by the page size: 214 ÷ 100 gives a quotient of 2 (the page number) and a remainder of 14 (the displacement). Line 214 is therefore located on Page 2, at displacement 14 (the 15th line) from the top of the page.
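The same calculation in code, extended with a hypothetical Page Map Table lookup to produce a physical location (the frame numbers are invented; only the page-2/displacement-14 result comes from the example):

```python
PAGE_SIZE = 100  # lines per page, as in the example

def resolve(line_number, pmt):
    """Split a job-relative line number into (page, displacement),
    then map the page through the Page Map Table to a page frame."""
    page, displacement = divmod(line_number, PAGE_SIZE)
    frame = pmt[page]                    # page number -> page frame
    return frame * PAGE_SIZE + displacement

pmt = {0: 8, 1: 10, 2: 5, 3: 11}         # assumed frame assignments
print(divmod(214, PAGE_SIZE))            # (2, 14): Page 2, displacement 14
print(resolve(214, pmt))                 # frame 5 * 100 + 14 = 514
```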
[Figure: Page 0 holds lines 0–99, Page 1 holds lines 100–199, and Page 2 holds lines 200–299; Line 214 sits at displacement 14 on Page 2.]
Demand Paging
Bring a page into memory only when it is needed, so less I/O and less memory are needed and response is faster. Demand paging takes advantage of the fact that programs are written sequentially, so not all pages are needed at once. For example: user-written error-handling modules; mutually exclusive modules (certain program options are either mutually exclusive or not always accessible); and tables assigned a fixed amount of address space even though only a fraction of each table is actually used. Demand paging made virtual memory widely available.
[Figure: demand paging. Pages of Programs A and B are swapped in and out between secondary storage and main memory as they are needed, so memory holds only a subset of each program's pages at any time.]
Demand paging requires a high-speed direct-access storage device that can work directly with the CPU. How and when pages are passed (swapped) depends on predefined policies that determine when to make room for needed pages and how to do so. Thrashing Is a Problem With Demand Paging Thrashing: an excessive amount of page swapping back and forth between main memory and secondary storage, which makes operation inefficient. It is caused when a page is removed from memory but is called back shortly thereafter. Thrashing can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages, and it can happen within a job (e.g., in loops that cross page boundaries). Page fault: a failure to find a page in memory.
Tables in Demand Paging Job Table. Page Map Table, with 3 new fields that determine: whether the requested page is already in memory; whether the page contents have been modified; and whether the page has been referenced recently. These fields are used to determine which pages should remain in main memory and which should be swapped out.
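A sketch of a PMT entry carrying the three new fields (the field names are assumptions, not from the text):

```python
# One PMT entry per page: frame number plus the three demand-paging bits.
pmt = {
    0: {"frame": 5, "in_memory": True,  "modified": False, "referenced": False},
    1: {"frame": None, "in_memory": False, "modified": False, "referenced": False},
}

def access(page, write=False):
    """Check the status field; a page that is not resident triggers a
    page fault, otherwise the referenced/modified bits are updated."""
    entry = pmt[page]
    if not entry["in_memory"]:
        raise LookupError(f"page fault: page {page} must be swapped in")
    entry["referenced"] = True
    if write:
        entry["modified"] = True
    return entry["frame"]

print(access(0))   # page 0 is resident in frame 5
```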
Page Fault
[Figure: a process executes Load M, but page M's entry in the Page Map Table is marked invalid (not in memory). The O/S must locate M on disk, bring it into a free page frame in physical memory, and update the Page Map Table before the instruction can complete.]
[Figure: a page-replacement trace resulting in 8 page faults.]
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Example: 3 frames (3 pages can be in memory at a time for the process).
a) First-In-First-Out (FIFO)

Frame contents at each page fault:
Frame 1: 7 7 7 2 2 2 4 4 4 0 0 0 7 7 7
Frame 2:   0 0 0 3 3 3 2 2 2 1 1 1 0 0
Frame 3:     1 1 1 0 0 0 3 3 3 2 2 2 1

15 page faults
Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1. Example: 3 frames.

b) Least Recently Used (LRU)

Frame contents at each page fault:
Frame 1: 7 7 7 2 2 4 4 4 0 1 1 1
Frame 2:   0 0 0 0 0 0 3 3 3 0 0
Frame 3:     1 1 3 3 2 2 2 2 2 7

12 page faults
LRU The efficiency of LRU is only slightly better than that of FIFO. LRU is a stack algorithm removal policy: increasing main memory causes either a decrease in, or the same number of, page interrupts.
LRU doesn't have the same anomaly (Belady's anomaly) that FIFO does, where adding frames can increase the number of page faults.
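Both traces above can be reproduced with a short simulation (standard FIFO and LRU logic; the 15- and 12-fault counts match the figures):

```python
from collections import OrderedDict, deque

REFS = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

def fifo_faults(refs, frames):
    """Evict the page that has been resident longest."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(order.popleft())
            resident.add(page)
            order.append(page)
    return faults

def lru_faults(refs, frames):
    """Evict the page that was referenced least recently."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # mark as most recent
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # drop least recent
            resident[page] = True
    return faults

print(fifo_faults(REFS, 3))   # 15 page faults
print(lru_faults(REFS, 3))    # 12 page faults
```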
Pros and Cons of Demand Paging A job is no longer constrained by the size of physical memory (virtual memory). (Pro) Memory is used more efficiently than in previous schemes, because sections of a job that are used seldom or not at all aren't loaded into memory unless specifically requested. (Pro) Increased overhead caused by the tables and by page interrupts. (Con)
Segmented Memory Allocation Programmers commonly structure their programs in modules (logical groupings of code). A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array. Main memory is not divided into page frames, because the size of each segment is different. In a segmented memory allocation scheme, jobs are divided into a number of distinct logical units called segments, one for each module containing pieces that perform related functions. Memory is allocated dynamically.
Segment Map Table (SMT) When a program is compiled, segments are set up according to the program's structural modules. Each segment is numbered, and a Segment Map Table (SMT) is generated for each job. It contains the segment numbers, their lengths, access rights, status, and (when each is loaded into memory) its location in memory.
Tables Used in Segmentation The Memory Manager needs to track segments in memory: the Job Table (JT) lists every job in process (one for the whole system); the Segment Map Table lists details about each segment (one for each job); and the Memory Map Table monitors the allocation of main memory (one for the whole system).
[Figure: a job composed of a main program (Seg 0) and two subroutines (Seg 1 and Seg 2), with its Segment Map Table:

Seg No | Size | Status | Access | Memory Address
0 | 200 | busy | E | 4000
1 | 400 | busy | E | 7000
2 | 240 | free | E | -

Seg 0 is loaded at address 4000 and Seg 1 at address 7000 in main memory; Seg 2 is not yet loaded.]
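Address translation through the SMT can be sketched as follows (the sizes and load addresses are taken from the figure; access-right checking is omitted):

```python
# Segment Map Table for the job in the figure: size and memory address.
smt = {0: {"size": 200, "addr": 4000},
       1: {"size": 400, "addr": 7000}}

def resolve_segment(seg_no, displacement):
    """Absolute address = segment's load address + displacement, after
    checking the displacement against the segment's size."""
    entry = smt[seg_no]
    if not 0 <= displacement < entry["size"]:
        raise MemoryError("displacement beyond segment bounds")
    return entry["addr"] + displacement

print(resolve_segment(1, 20))   # 7020: 20 lines into the subroutine
```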
Pros and Cons of Segmentation Memory is allocated dynamically, so segments contain no internal fragmentation (pro). However, segmentation requires compaction, suffers from external fragmentation, and complicates secondary storage handling (cons).
[Figure: loading Program B (340K) under segmentation vs. paging. With segmentation, the free areas P0 (80K), P1 (200K), P2 (40K), and P3 (100K) total more than 340K, but no single free area can fit Program B, so it cannot be loaded: external fragmentation. With paging, Program B's pages are scattered across the free page frames and the program loads; only the partially filled last page wastes space inside its frame: internal fragmentation.]
Virtual Memory (VM) Even though only a portion of each program is stored in memory, virtual memory gives the appearance that programs are being completely loaded in main memory during their entire processing time. Shared programs and subroutines are loaded on demand, reducing storage requirements of main memory. VM is implemented through demand paging and segmentation schemes.
With segmentation: external fragmentation is possible; programs are divided into unequal-sized segments; the absolute address is calculated using the segment number and displacement; an SMT is required.
Advantages of VM
1. Works well in a multiprogramming environment because most programs spend a lot of time waiting.
2. A job's size is no longer restricted to the size of main memory (or the free space within main memory).
3. Memory is used more efficiently.
4. Allows an unlimited amount of multiprogramming.
5. Eliminates external fragmentation when used with paging and eliminates internal fragmentation when used with segmentation.
6. Allows a program to be loaded multiple times, occupying a different memory location each time.
7. Allows the sharing of code and data.
8. Facilitates dynamic linking of program segments.
Disadvantages of VM Increased processor hardware costs. Increased overhead for handling paging interrupts. Increased software complexity to prevent thrashing.