
AFRICA NAZARENE UNIVERSITY

Department of Computer Science

CSC 302 Operating Systems
PART THREE: MEMORY MANAGEMENT
Lecture 13: Swapping and Virtual Memory

Scope:
Swapping.
Virtual memory.
Goals:
Discuss the application of swapping and virtual
memory strategies.
Examine the difference between memory
management using fixed size partitions and
variable size partitions.

Swapping and Virtual Memory 2


Swapping



Swapping Introduction:

Swapping involves bringing each process in
its entirety, running it for a while, then putting
it back on disk.
Swapping uses variable size partitions.
Consider the diagram on the next slide.


Swapping illustration:



Initially, only process A is in memory.
Then processes B and C are created or
swapped in from disk.
In fig (d) A terminates or is swapped out to
disk.
Then D comes in and B goes out.
Finally E comes in.



Fixed size vs variable size
partitions:
The main difference between the fixed
partitions in the previous section and the
variable partitions is that
the number, location and size of partitions vary
dynamically in the latter as processes come and
go.
The flexibility of not being tied to a fixed number of
partitions that may be too large or too small
improves memory utilization, but
complicates memory allocation and de-allocation
as well as keeping track of it.
Memory compaction:

Swapping may create multiple holes in
memory.
When this occurs, it is possible to combine all
the holes into one big one by moving all
processes downward as far as possible.
This is known as memory compaction.
Memory compaction is usually not a common
practice because it takes a lot of CPU time.



How much memory should we allocate to a process
when it is created or swapped in?
If processes are created with a fixed size that never
changes, then the allocation is simple: allocate what is
needed, no more and no less.
However, if a process's data segments can grow,
then there is a problem.
If a hole is adjacent to the process, it can be allocated.
If the process is adjacent to another process, the growing
process will either have to be moved to a hole big
enough for it, or one or more processes will have to be
swapped out to create a large enough hole.
If a process cannot grow in memory and the swap area on
the disk is full, the process will have to wait or be killed.



If it is expected that most processes will grow
as they run, it is better to allocate a little extra
memory whenever a process is swapped in
or moved, to reduce the reallocation
overhead.
However, when swapping processes to disk, only
memory actually in use should be swapped.



Dynamic memory
management:
When memory is allocated dynamically, the
operating system must manage it.
There are two ways to keep track of memory
usage:
bit maps, and
linked lists.
See the figure on the next slide.



Memory Management with
Bit Maps:
Memory is divided into allocation units.
Corresponding to each allocation unit is a bit in the
bit map, which is 0 if the unit is free and 1 if it is
occupied.
The size of the allocation unit is an important design
issue.
The smaller the allocation unit, the larger the bit map.
If the allocation unit is large, the bit map will be smaller, but
appreciable memory will be wasted in the last unit if the
process size is not an exact multiple of the allocation unit.



A bit map provides a simple way to keep
track of memory words in a fixed amount of
memory because the size of the bit map
depends on the size of memory and size of
allocation unit.
The main problem with it is that when it has
been decided to bring a k unit process into
memory, the memory manager must search
the bit map to find a run of k consecutive 0
bits in the map. This is a slow operation.
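The slow scan described above can be sketched in Python. This is a minimal illustration under our own assumptions (the helper names find_run and allocate are ours, and a Python list of bits stands in for the real packed bit map):

```python
def find_run(bitmap, k):
    """Scan the bit map for a run of k consecutive free (0) units.

    Returns the index of the first unit in the run, or -1 if no run of
    length k exists. This linear scan is what makes bit-map allocation
    slow: every allocation may walk most of the map.
    """
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, k):
    """Mark a run of k free units as occupied; return its start or -1."""
    start = find_run(bitmap, k)
    if start != -1:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start
```

For example, in the map [1, 1, 0, 0, 0, 1, 0, 0] a 3-unit request lands at index 2, after which no 3-unit run remains.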



Memory Management with
Linked Lists:
Another way of keeping track of memory is to
maintain a linked list of allocated and free memory
segments.
A segment is either a process or a hole between two
processes.
The memory of fig 22(a) is represented in fig 22(c)
as linked list of segments.
Each entry in the list specifies a hole (H) or a process (P),
the address at which it starts, the length and a pointer to
the next entry.
In this example, the segment list is kept sorted by address.



Memory allocation
algorithms
When the processes and holes are kept on a
list sorted by address, several algorithms can
be used to allocate memory for a newly
created or swapped in process.
First fit.
Next fit.
Best fit.
Worst fit.
We assume that the memory manager knows
how much memory to allocate.
First-fit algorithm:

The simplest algorithm is first fit.
The memory manager scans along the list of
segments until it finds a hole that is big
enough.
The hole is then broken up into two pieces,
one for the process and one for unused
memory, except for the unlikely case of a
perfect fit.



Next-fit algorithm:

Another variation of first fit is next fit.
It works the same way as first fit, except that
it keeps track of where it is whenever it finds
a suitable hole.
The next time it is called to find a hole, it
starts searching the list from the place where
it left off last time, instead of the beginning as
with the previous algorithm.



Best-fit algorithm:

The best-fit algorithm searches the entire list
and takes the smallest hole that is adequate.
Rather than breaking up a big hole that may
be needed later, best fit tries to find a hole
that is close to the actual size needed.
Best fit is slower than first fit and also results
in more wasted memory, because it tends to
fill memory with tiny, useless holes.



Worst-fit algorithm:

To get around the problem of breaking up
nearly exact matches into a process and a
tiny hole, one could think about worst fit,
i.e. take the largest available hole, so that the hole
broken off will be big enough to be useful.
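The four fit algorithms above can be sketched with a single hypothetical helper, pick_hole (a name of ours), operating on a list of (start_address, length) holes kept sorted by address, as in the linked-list scheme:

```python
def pick_hole(holes, size, strategy, resume=0):
    """Choose a hole (an index into `holes`) for a request of `size` units.

    `holes` is a list of (start_address, length) pairs sorted by address.
    `resume` is where next fit left off on its previous call.
    Returns the chosen index, or -1 if no hole is large enough.
    """
    if strategy == "first":
        # First fit: first hole big enough, scanning from the start.
        return next((i for i, (_, ln) in enumerate(holes) if ln >= size), -1)
    if strategy == "next":
        # Next fit: same scan, but starting where the last search stopped.
        n = len(holes)
        for off in range(n):
            i = (resume + off) % n
            if holes[i][1] >= size:
                return i
        return -1
    # Best fit takes the smallest adequate hole; worst fit the largest.
    fits = [(ln, i) for i, (_, ln) in enumerate(holes) if ln >= size]
    if not fits:
        return -1
    return min(fits)[1] if strategy == "best" else max(fits)[1]
```

With holes [(0, 5), (10, 2), (20, 8), (40, 3)] and a 3-unit request, first fit picks the 5-unit hole, best fit the 3-unit hole, and worst fit the 8-unit hole.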



Paging and Virtual Memory



Virtual Memory
Introduction:
In certain situations you can have programs that
don't fit in the available memory.
The solution is to split the program into units called
overlays.
The first overlay would start running and when it
was done it would call another overlay.
Some overlay systems are highly complex,
allowing multiple overlays in memory at once.
The overlays are kept on disk and swapped in and
out of memory by the operating system
dynamically on demand.



The basic idea behind virtual memory is that
the combined size of the program, data and
stack may exceed the amount of physical
memory available for it.
The operating system keeps those parts of
the program currently in use in main memory
and the rest on disk.
For example,
a 16MB program can run on a 4MB machine by
carefully choosing which 4MB to keep in memory
at each instant, with pieces of the program being
swapped between disk and memory as needed.
Virtual memory can also work in a
multiprogramming system, with bits and
pieces of many programs in memory at once.
While a program is waiting for part of itself to
be brought in (waiting for I/O operation), the
CPU can be given to another process, the
same as in other multiprogramming systems.



Paging:

Most virtual memory systems use a
technique called paging.
On any computer, there exists a set of
addresses that programs can produce.
When a program uses an instruction like:
MOVE REG, 1000
it is copying the contents of memory address
1000 to REG (or vice versa).
Addresses can be generated using indexing,
base registers and other ways.



Address mapping - The
MMU
These program-generated addresses are called
virtual addresses and form the virtual address
space.
On computers without virtual memory, the virtual
address is put directly onto the memory bus and causes
the physical memory word with the same address to
be read or written.
When virtual memory is used, the virtual addresses
do not go directly to the memory bus.
Instead they go to a Memory Management Unit (MMU).
The MMU maps the virtual addresses onto physical memory
addresses.



The MMU



Address mapping: an
example
An example of how this mapping works is shown in
the figure on the next slide.
In this example, we have a computer that can
generate 16-bit addresses, from 0 up to 64K.
These are virtual addresses.
The computer however has only 32K of physical
memory.
A 64K program cannot be loaded entirely in
memory.
A complete copy of the program's core image, up to
64K, must however be present on disk, so that
pieces can be brought in as needed.



The virtual address space is divided into units
called pages.
The corresponding units in the physical
memory are called page frames.
The pages and page frames are usually
exactly the same size.
In this example they are 4K.
With 64K of virtual address space and 32K of
physical memory, we have 16 virtual pages
and 8 page frames.
Transfers between memory and disk are
always in units of a page.



When a program tries to access address 0, for
example, using the instruction MOVE REG, 0, the
virtual address 0 is sent to the MMU.
The MMU sees that this virtual address falls in page 0 (0 to
4095), which according to its mapping is page frame 2
(8192 to 12287).
The memory knows nothing about the MMU and just sees a
request for reading or writing address 8192, which it honors.
Similarly, the instruction MOVE REG, 8192 is effectively
transformed into MOVE REG, 24576.
The virtual address 8192 is on virtual page 2 and this page
is mapped onto physical page frame 6 (physical addresses
24576 to 28671).
In another example,
virtual address 20500 is 20 bytes from the start of virtual
page 5 (virtual addresses 20480 to 24575) and maps onto
physical address 12288 + 20 = 12308.



The fact that we can map the 16 virtual pages
onto any of the 8 page frames does not solve the
problem that the virtual address space is
larger than the physical memory.
Only 8 of the virtual pages are mapped onto
physical memory.
In the hardware, a present/absent bit keeps
track of whether a page is mapped or not.



Page fault

If a program tries to use an unmapped page,
the MMU notices that the page is unmapped
and causes the CPU to trap to the operating
system.
The trap is called a page fault.
The operating system then picks one page
frame and writes its content back to the disk.
It then fetches the page just referenced into
the page frame just freed, changes the map
and restarts the trapped instruction.
Page fault example
For example, consider the instruction MOVE REG,
32780, which references an address 12 bytes into virtual page 8.
The operating system may decide to evict page
frame 1, then load virtual page 8 at physical address
4K and make two changes to the MMU map.
First, it would mark virtual page 1's entry as unmapped, to
trap any future access to virtual addresses between 4K and
8K.
Then it would replace the cross in virtual page 8's entry
with a 1, so that when the trapped instruction is
re-executed, it will map virtual address 32780 onto physical
address 4108.



How the MMU works:

In the figure on the next slide, we see an example of a
virtual address, 8196 (0010000000000100 in
binary), being mapped using the MMU map
of fig 24.
The incoming 16-bit virtual address is split
into a 4-bit page number and 12-bit offset.
With 4 bits for page number we can represent 16
pages, and with 12 bits for the offset we can
address all 4096 bytes within a page.



The page number is used as an index into the
page table, yielding the number of the page
frame corresponding to that virtual page.
If the present/absent bit is 0, a trap to the
operating system is caused.
If the bit is 1, the page frame number found
in the page table is copied to the high-order 3
bits of the output register, along with the 12-bit
offset, which is unmodified.
The 15-bit output register is then put onto the
memory bus as the physical memory
address.
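The split into a 4-bit page number and a 12-bit offset can be sketched in Python. The partial page table below uses only the mappings stated in the worked examples (page 0 to frame 2, page 2 to frame 6, page 5 to frame 3, page 8 absent); the function name translate is ours:

```python
PAGE_SIZE = 4096  # 2**12: the offset occupies the low 12 bits

# Partial page table from the worked examples in the text.
# virtual page -> page frame; None means the present/absent bit is 0.
page_table = {0: 2, 2: 6, 5: 3, 8: None}

def translate(vaddr):
    """Map a 16-bit virtual address to a physical address.

    The high 4 bits select the virtual page; the low 12 bits are the
    offset, copied through unmodified. An unmapped page raises an
    exception, standing in for the page-fault trap to the OS.
    """
    page = vaddr >> 12          # 4-bit virtual page number
    offset = vaddr & 0xFFF      # 12-bit offset within the page
    frame = page_table.get(page)
    if frame is None:
        raise LookupError(f"page fault on virtual page {page}")
    return (frame << 12) | offset
```

This reproduces the slides' arithmetic: translate(0) gives 8192, translate(8192) gives 24576, and translate(20500) gives 12308, while translate(32780) faults on virtual page 8.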



Page Tables:
When mapping virtual addresses into physical
addresses, the virtual address is split into a virtual
page number and an offset.
The virtual page number is used as an index into the
page table to find the entry for that virtual page.
From the page table entry, the page frame number
(if any) is found.
The page frame number is attached to the high-
order end of the offset, replacing the virtual page
number, to form a physical address that can be sent
to the memory.



The purpose of the page table is to map
virtual pages onto page frames.
There are two major issues to be considered:
The page table can be extremely large
The mapping must be fast
Modern computers use virtual addresses of
at least 32 bits.
With a 4K-page size, a 32-bit address space has
1 million pages.
With 1 million pages in virtual address space, the
page table must have 1 million entries.



The second point is a consequence of the
fact that the virtual-to-physical mapping must
be done on every memory reference.
A typical instruction has an instruction word
and often a memory operand as well.
Consequently, it is necessary to make two or
more page table references per instruction.
The need for large, fast page mapping is a
significant constraint on the way computers
are built.



The simplest page table design is to have a single
page table consisting of an array of fast hardware
registers, with one entry for each virtual page,
indexed by virtual page number.
When a process is started, the operating system
loads the registers with the process's page table,
taken from main memory.
During execution no more memory references are
needed for the page table.
The advantage of this approach is that it is straightforward
and requires no memory references during mapping.
A disadvantage is that it is potentially expensive if the page
table is large. Having to load the page table at every
context switch can also hurt performance.



At the other extreme, the page table can be
entirely in main memory and a single register
that points to the start of the page table is
used.
This design allows the memory map to be
changed at context switching by reloading
one register.
The disadvantage is that it requires one or more
memory references to read page table entries
during execution of each instruction.



Page Replacement
Algorithms:
When a page fault occurs, the operating system
has to choose which page to remove from the
memory to make room for the page that has to be
brought in.
If the page to be removed has been modified while
in memory, it must be rewritten to the disk to bring
the disk copy up to date.
If the page has not been modified there is no need
to rewrite it.
The page to be read in just overwrites the page being
evicted.



It is possible to choose the page to be
removed randomly, though system
performance is much better if a page that is
not heavily used is chosen.
If a heavily used page is removed it will
probably be brought back in quickly, resulting
in extra overhead.



Optimal Page Replacement
Algorithm:
At the moment that a page fault occurs, one
of the pages in memory will be referenced
on the next instruction.
Other pages may not be referenced until
later on.
Each page can be labeled with the number
of instructions that will be executed before
the page is first referenced.
This algorithm says that the page with the
highest label should be removed.
The problem with this algorithm is that it is
not realizable.
At the point of the page fault the operating system
has no idea of when each of the pages will be
referenced next.
By running a program on a simulator and
keeping track of all page references, it is
possible to implement the optimal page
replacement algorithm on a second run by
using the page reference information
collected during the first run.
This method is not used in practical systems.
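The second-run simulation can be sketched as follows: given the full reference string in advance, evict the resident page whose next use lies furthest in the future. The function name optimal_faults is ours, and the code is a teaching sketch, not an efficient simulator:

```python
def optimal_faults(refs, frames):
    """Count page faults under the optimal (look-ahead) algorithm.

    Only realizable offline: we peek ahead in `refs` to evict the
    resident page that is referenced furthest ahead, or never again.
    """
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue

        def next_use(p):
            # Distance to the next reference of p; infinity if none.
            try:
                return refs.index(p, i + 1)
            except ValueError:
                return float("inf")

        memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults
```

On the classic reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames, this yields 9 faults, a lower bound no realizable algorithm can beat.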
Not Recently Used (NRU)
Page Replacement
Algorithm:
Most computers with virtual memory have
two status bits associated with each page
for collecting useful statistics.
R is set when the page is referenced (read or written) and
M when the page is written to (modified).
The bits are contained in each page table
entry.
It is essential that once a bit has been set to
1, it stays 1 until the operating system
resets it to 0.
When a process is started up, both page bits
for all its pages are set to 0 by the operating
system.
Periodically, the R bit is cleared, to distinguish
pages that have been referenced recently
from those that have not been.
When a page fault occurs, the operating
system inspects all the pages and divides
them into four categories based on the
current values of the R and M bits.



Class 0: not referenced, not modified.
Class 1: not referenced, modified.
Class 2: referenced, not modified.
Class 3: referenced, modified.
The NRU algorithm removes a page at
random from the lowest numbered non-empty
class. NRU is easy to understand, efficient to
implement, and gives performance that, while
not optimal, is often adequate.
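The victim selection above can be sketched in a few lines. This is a minimal illustration under our own naming (nru_choose); the class number is simply 2R + M:

```python
import random

def nru_choose(pages):
    """Pick an NRU victim from `pages`, a dict: page -> (R, M) bits.

    A page's class is 2*R + M (0..3); we evict a random page from the
    lowest numbered non-empty class.
    """
    lowest = min(2 * r + m for r, m in pages.values())
    candidates = [p for p, (r, m) in pages.items() if 2 * r + m == lowest]
    return random.choice(candidates)
```

For instance, with page 3 in class 1 (not referenced, modified) and pages 1 and 2 in classes 3 and 2, page 3 is evicted.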



The First-In, First-Out (FIFO)
Page Replacement
Algorithm:
The operating system maintains a list of all
pages currently in memory, with the page at
the head of the list the oldest one and the
page at the tail the most recent arrival.
On a page fault, the page at the head is
removed and the new page added to the tail
of the list.
This is not a good algorithm because there
are chances of removing a heavily used
page.
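The FIFO list can be simulated with a queue. This sketch (the name fifo_faults is ours) counts faults over a reference string:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement.

    The head of the queue is the oldest page; on a fault with memory
    full, it is evicted and the new page joins the tail.
    """
    queue, faults = deque(), 0
    for page in refs:
        if page in queue:
            continue              # hit: FIFO order is unchanged
        faults += 1
        if len(queue) == frames:
            queue.popleft()       # evict the oldest arrival
        queue.append(page)        # newest arrival at the tail
    return queue and faults or faults
```

On the reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with 3 frames, FIFO takes 15 faults, against 9 for the optimal algorithm, illustrating why evicting purely by age can remove heavily used pages.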
The Second Chance Page
Replacement Algorithm:
A simple modification to FIFO that avoids
the problem of throwing out a heavily used
page is to inspect the R bit of the oldest
page.
If it is 0, the page is both old and unused, so it is
replaced.
If the R bit is 1, the bit is cleared and the page is
put at the end of the list of pages, and its load
time is updated as though it has just arrived in
memory.
The search continues.
Example: second chance
Pages A through H are kept on a linked list sorted by
the time they arrived in memory.
Suppose that a page fault occurs at time 20.
The oldest page is A, which arrived at time 0, when
the process started.
If A has the R bit cleared, it is evicted from memory.
On the other hand if the R bit is set, A is put on to the end
of the list and its load time reset to current time (20).
The R bit is also cleared and the search for a
suitable page continues with B.



Second chance looks for an old page that has
not been referenced in the previous clock interval.
If all pages have been referenced, second chance
degenerates into pure FIFO.
Suppose that all the pages have their R bits set.
One by one the operating system moves the pages to the
end of the list and clears their R bit.
Eventually, it comes back to page A, which now has its R
bit cleared.
At this point A is evicted.
Thus the algorithm always terminates.
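The scan just described can be sketched directly. This is a minimal illustration (the name second_chance_evict is ours) that assumes at least one page is resident:

```python
from collections import deque

def second_chance_evict(pages):
    """Pick the page to evict under second chance.

    `pages` is a deque of (name, r_bit) pairs, oldest first. A page
    with R == 1 gets its bit cleared and moves to the tail, as if it
    had just arrived; the first page found with R == 0 is evicted.
    If every R bit is set, the scan comes back round to the now
    cleared oldest page, so the loop always terminates.
    """
    while True:
        name, r = pages.popleft()
        if r == 0:
            return name
        pages.append((name, 0))   # clear R, give the page a second chance
```

With the list A (R=1), B (R=0), C (R=1), page A is moved to the tail with R cleared and B is evicted; if every page has R set, the original oldest page A is eventually evicted, the pure-FIFO case.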



The Least Recently Used
(LRU) Page Replacement
Algorithm:
A good approximation to the optimal algorithm is
based on the observation that pages that have
been heavily used in the last few instructions will
probably be heavily used again in the next few
instructions.
Conversely, pages that have not been used for
ages will probably remain unused for a long
time.
This idea suggests a realizable algorithm:
when a page fault occurs, throw out the page that has
been unused for the longest time.
This strategy is called LRU (Least Recently Used)
paging.



To implement LRU, it is necessary to
maintain a linked list of all pages in memory,
with the most recently used page at the front
and the least recently used page at the rear.
The problem is that:
The list must be updated on every memory
reference.
Finding a page in the list, deleting it, and then
moving it to the front is a very time-consuming
operation.
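The list manipulation can be sketched in software, where it is cheap enough; the class name LRU is ours, and an ordered dictionary stands in for the doubly linked list that hardware cannot afford to update on every reference:

```python
from collections import OrderedDict

class LRU:
    """Toy LRU page set: least recently used page at the front,
    most recently used at the rear."""

    def __init__(self, frames):
        self.frames, self.pages = frames, OrderedDict()

    def reference(self, page):
        """Touch a page; return the evicted page on a full fault, else None."""
        if page in self.pages:
            # Hit: the find-delete-move step done on every reference.
            self.pages.move_to_end(page)
            return None
        victim = None
        if len(self.pages) == self.frames:
            # Fault with memory full: evict the least recently used page.
            victim, _ = self.pages.popitem(last=False)
        self.pages[page] = True
        return victim
```

Referencing pages 1, 2, 3, then 1 again, then 4 evicts page 2, the least recently used, which is exactly the ordering the linked list maintains.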



Next lecture:

File management.
I/O management.
Take time to read on these sections before
our next meeting.

