
History, Evolution and Philosophy of Operating Systems

.I What is an operating system?


An operating system is a software program that acts as an intermediary between a user of a computer
and the computer hardware.
The operating system has two main purposes:
1) It makes the computer convenient to use
The operating system hides the complexity of the computer hardware and provides a user friendly
interface to the user of the computer (interactive user or programmer).
2) It makes efficient use of the resources of the computer
The computer is a potentially powerful system with a lot of resources (CPU, main memory, secondary
memory, peripherals, programs, data). The operating system tries to enable users to make efficient
use of those resources.

.II Why study operating systems?


Very few people will ever design operating systems. However, the operating system course is one of the major
courses found in any Computer Science program.
Any person who uses computers will use operating systems. Very often, knowledge of the operating system
interface is sufficient for that person to use the computer system. However, it is important to know how
operating systems are built in order to use the computer more confidently and efficiently.
Most computer scientists will also need to evaluate computer systems during acquisitions. Very often they will
have to choose one operating system among several available in the market. It is important for those persons to
understand the internal features of those operating systems in order to make a correct decision.

.III History of operating systems


The history of operating systems is tightly related to the history of computer architectures. The evolution of
computer architecture has brought more challenges but also more possibilities to operating systems.
Operating systems have also contributed to the evolution of computer architecture; on more than one
occasion, evolutions of the computer architecture were motivated by the needs of operating system
designers.

1 Early Systems
There was no operating system on the first computer systems. The operator of the computer had direct access
to the computer hardware and directly manipulated the resources of the computer without any intermediary.
From the second half of the 1940s to the 1950s, computers had more or less the following features.
Programs were entered using plug boards or punched cards. The results of the computation were given on
printers. There was a memory unit used to store both data and program. The operator of those machines was
also the programmer, and maintained the computer whenever there was a problem. At the execution of the
program, the programmer/operator could monitor the execution; for example, he could stop the execution of the
program whenever he wanted to see the state of the machine on the display lights. There was no operating
system.
Execution Program 1                Execution Program 2

Executing

Setting Read Output Read           Setting Read
Idle

Time

Figure: CPU execution time for early computer systems

There were two important problems with those machines. First, since the programmer had to use the
machine language directly and had to control all devices of the computer directly, program writing was a very
difficult task. Second, the CPU, the most expensive component of the computer, was underutilized since it was
idle most of the time. Indeed, the CPU was much faster than the Input/Output devices, and setting up the
system for the execution of a new program takes time. For example, a slow CPU works in the microseconds
range (1 million instructions a second). The fastest card reader can read 17 cards (instructions) a second; thus
the computer needs about 58,824 microseconds to read an instruction that it will take only a microsecond to
execute.
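The mismatch can be checked with a quick back-of-the-envelope calculation, using the figures quoted in the text (a 1 MIPS CPU and a 17-cards-per-second reader):

```python
# CPU vs. card-reader speed mismatch on early systems
# (figures from the text: 1 MIPS CPU, 17 cards/second reader).
cards_per_second = 17
read_time_us = 1_000_000 / cards_per_second  # microseconds to read one card
execute_time_us = 1                          # microseconds to execute one instruction

# Fraction of the time the CPU sits idle while the reader works.
idle_fraction = read_time_us / (read_time_us + execute_time_us)
print(round(read_time_us))   # time to read a single instruction, in microseconds
print(idle_fraction)
```

The CPU is idle more than 99.99% of the time, which is the under-utilization problem the text describes.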

2 Batch Systems
The Batch System computers that prevailed from 1955 to 1965 attempted to solve the problems of the previous
systems using simple techniques. First, to limit the setup time, an operator was assigned to take program cards
from programmers and put them in the card reader. As the operator was specialized in this task, he wasted
less time in the setup. Since many programs were loaded at the same time, the setup time decreased considerably.
However, this didn't reduce the time between two card readings. The latest Batch System computers
started to use magnetic tapes; reading an instruction from a tape was much faster, since it required only around
20 microseconds. To reduce programming difficulty, high level programming languages started to make their
appearance (COBOL, FORTRAN, etc.). In order to separate one batch program from the next one, control
cards were inserted between the two programs' cards. Those cards can be considered the first ancestors of
today's operating systems.
Control Card

Execution Program (Batch) 1 Execution Program (Batch) 2

Executing

Setting Read Read Output Read Read Read Read Read Read
Idle

Time

Figure: CPU execution time for Batch System computers


Batch systems improved CPU utilization but the rate of utilization remained very low. The interactivity that
existed between the first generation of computers and the user disappeared with batch systems.

3 Spooling
Around 1965, computer systems that use spooling started to appear. These computers try to limit the I/O
waiting time of the previous systems by doing I/O operations concurrently with processing. I/O channels deal
with I/O independently from the processor. If an output device can get its information from memory and
perform output without the processor directing the action, output can occur concurrently with processing.
Similarly, if an input device can place information into memory without processor intervention, input and
processing can occur concurrently. Whenever the attention of the CPU is required, the I/O channel sends
an interrupt to inform the CPU.
The advent of disk systems allowed high flexibility in spooling. The jobs coming as input arrived sequentially
but could be stored temporarily on disk. A scheduler could then select a job from the pool in any desired order.
Similarly, jobs spooled for printing could be printed at convenience. The lock-step of sequential response
imposed by the nature of the card reader was broken. The operating system of spooling systems was of
course more complicated than the embryo of an operating system found in the previous systems.
Output
Control Card

Execution Program 1 Execution Program 2

Executing

Setting Read Read Read Read Read Read Read Read Read
Idle

Time
Figure: CPU execution time for computers using spooling

4 Multiprogramming
With spooling only I/O channels could send interrupts. This has changed with multiprogramming where
application programs could also send interrupts. This enables switching from one program to another. With
multiprogramming, when a program A under execution is suspended pending an I/O operation, the processor
can switch its focus to another program B, which is also in memory, until an interrupt occurs. The processor will
eventually return to program A. Prior to multiprogramming I/O activities of a program could occur concurrently
with the processing of the same program. With the advent of multiprogramming I/O of one program was
allowed to overlap with processing of another program.
Input Program 1

Input Program 2
Control Card

Execution Program 1 Execution Program 2

Executing

Idle

Time
Figure: CPU execution time for computers using multiprogramming

5 Time Sharing
One big drawback introduced by batch systems was that users could not interact with their programs. Time
sharing, introduced along with multiprogramming, enabled each program to acquire the CPU regularly for a
small slice of time. The operating system switches processing from one user program to another. The processor
is so fast compared to the I/O devices and users that enough work can be done during that short slice of time to
give each user the impression that the entire system and its resources belong to him or her alone.
Execution of Program 1

Execution of Program 2

Execution of Program 3

Executing

Idle

Time

Figure: CPU execution time for computers using time sharing

6 Microcomputers
The evolution of operating systems seen above concerns mainly the large mainframes and other multi-user
computers. At the end of the 1970s, microcomputers made their appearance. They were much smaller and less
expensive than the multi-user computers. Instead of maximizing CPU utilization, user convenience and
responsiveness were privileged.
In the first years, all microcomputers were single user, single tasking. In the mid-1980s, many single user,
multi-tasking operating systems were developed for microcomputers. In the 1990s, operating systems as
sophisticated as the mainframe operating systems started to be developed for personal computers (UNIX,
Windows NT, etc.).
The communication problem between personal microcomputers led to the development of networking
hardware and software. Initially, separate software packages were introduced to add networking features to
existing operating systems. But the latest operating systems include networking features as an integral
component.

.IV Some Famous Operating Systems


1 Early operating systems for Mainframe and mini-computers
In the 1960s, IBM announced the first commercially available family of computers: the S/360 family. The
computers of that family have a compatible architecture and run the same operating system. While the notion of
a single operating system sounded great in the mid-1960s, getting all existing IBM customers to forgo their
investments and convert to the proposed OS/360 was another matter. Since there was great resistance, IBM
responded by creating several systems for the S/360, the most popular being DOS (each supported to some
extent today). Those operating systems support many of the modern operating system concepts.
Another famous operating system is VMS. VMS (Virtual Memory System) has been used since the late 1970s
on DEC's very successful VAX family. Even though it has a strong UNIX flavor, it is unique to
the VAX family of DEC.

2 Unix Operating systems


The operating system known as UNIX has its roots in the Multics project conducted at MIT. Multics, developed
in the 1960s, was a technically advanced operating system, but it failed to gain popularity. In the late 1960s,
Ken Thompson, who had worked on Multics, later joined by Dennis Ritchie and others,
began to design UNIX, an operating system based on many of the concepts already developed on Multics. The
goals of the project were to build an interactive programming environment with many tools and tool-building
utilities to assist in program development. The system was
designed to encourage a particular programming philosophy: "solve large tasks by combining a number of
simpler tasks". UNIX also had three other important characteristics: portability, device independence, and a
hierarchical file system.
The original version of UNIX was written in assembly language (for the PDP-7), but it has since been rewritten
in C, except for a small part that remains in assembly language.
The command language of UNIX was built into a command language interpreter or shell and is treated as an
application program rather than part of the operating system.
Until the early 1980s, UNIX was not a commercial operating system. For this reason, a lot of universities and
colleges started using it. Today some UNIX operating systems are commercial while others are
free. Among the commercial ones we can find Microsoft's XENIX, SCO, and IBM's AIX. Among the free UNIX
operating systems we can mention Linux and BSD.

3 MS-DOS, Windows 3.1, OS2 and Windows 95


MS-DOS, developed by Microsoft Corporation for IBM personal computers in the early 1980s, is probably the
most popular of all operating systems. MS-DOS has been continuously upgraded for the 8086/8088, 80286 and
80386 systems. MS-DOS faced the challenge of adapting to the newer 32-bit machines.
In 1985, Microsoft released Windows, a graphical environment running on MS-DOS, partly designed to
answer the challenge of the Macintosh. Windows manages most of the resources of the machine except the file
system. Windows 3.1, released in 1992, could run on 80286 and above based systems.
In the late 1980s, IBM and Microsoft designed Operating System/2 (OS/2), a single user, multitasking operating
system. OS/2 was upward and downward compatible with MS-DOS and had a graphical interface similar to
the Macintosh OS. OS/2 didn't get as much success as was predicted, due to the new operating system of
Microsoft which gained enormous popularity: Windows 95.
Windows 95 was released by Microsoft in 1995. It is a multitasking operating system with a graphical interface,
and it is upward compatible with MS-DOS and Windows 3.1. Its very user-friendly interface
attracted a lot of users of other systems, especially the Macintosh. It had a lot of built-in features such as
networking software and Internet tools. This initiated one of the most famous trials: competitors of
Microsoft and some US government bodies sued Microsoft for bundling application software into the operating
system in order to drive application software developers out of the market.

4 Windows NT
Windows NT was designed in the early 1990s to offer a high-end operating system for personal computers
(Intel and Alpha processor based) with security, protection and reliability features. Windows NT is a server
oriented operating system requiring a large amount of memory (at least 16 MB) and a fast processor. Windows
NT inherited its user interface from Windows 3.1; later on, Windows NT 4.0 was developed with the
Windows 95 interface. Windows NT is also upward compatible with MS-DOS and Windows 3.1.
It is planned that Windows 95 and Windows NT will merge in the future to form a single operating system.
The requirements of Windows NT are already becoming acceptable for most PC owners. The main factor that
makes users prefer Windows 95/98 to Windows NT is probably the latter's higher price.

5 Macintosh Operating System


Apple Computer's Macintosh introduced the computing public to a different world of user interfacing.
The Macintosh interface was graphical, iconic, metaphorical, and object oriented. Probably due to these features,
Macintosh's operating system was one of the most popular operating systems. In fact, it enabled Apple's
Macintosh personal computer to be the only rival to the Intel-based IBM-compatible personal computers.
However, the Macintosh has had difficulty coping with the overwhelming popularity of Windows 95 and the
monopolistic tactics of Microsoft.

.V Operating System Structures


Any system as complex as an operating system cannot be produced unless a structured approach is used. A
number of structures have been used to build the various operating systems developed over the
last five decades. We are going to see some of them in this section.

1 Monolithic Systems
With operating systems that have adopted the monolithic organization, the operating system is written as a
collection of procedures, each of which can call any of the others whenever it needs to. Each procedure has a
well defined interface in order to allow the programmer to use it in the development of other procedures. Even
with monolithic systems, there is some structure. Generally, there is a main procedure that can be invoked
from application programs. Whenever the main procedure is called, it invokes service procedures, also called
system calls. The system calls call utility procedures that can directly access the computer hardware.

Main Procedure

System Calls

Utility Procedures

Figure: A simple structure of a monolithic system

2 Layered System
Layered systems organize the operating system as a number of layers, each constructed on top of the one below.
For example, the THE system designed by Dijkstra (1968) structures the operating system into 6 layers (Figure
below). Each layer handles a specific task of the operating system; for example, layer 0 handles processor
allocation and multiprogramming, layer 1 memory and drum management, and so on.

Layer # Task
5 The Operator
4 User programs
3 Input/Output
2 Operator-process communication
1 Memory and drum management
0 Processor allocation and multiprogramming
Figure: Structure of the THE system
A further generalization of the layering concept was present in MULTICS system. Instead of layers, MULTICS
was organized as a series of concentric rings, with the inner ones more privileged than the outer ones.
3 Virtual Machines
A timesharing operating system can give the illusion of multiple processes, each executing on its own processor
with its own machine. The virtual-machine approach provides several virtual machines identical to the
underlying bare hardware, without any additional feature such as a file system. The resources of the physical
machine are shared to create the virtual machines.

(a) Non virtual machine          (b) Virtual Machine

    Processes                        Processes   Processes   Processes

    Kernel                           Kernel      Kernel      Kernel

    Hardware                         Hardware

Figure: System models (a) Non virtual machine (b) Virtual Machine
The virtual machine concept has several advantages. First, since each machine is completely isolated from all
other machines, there is complete protection. Second, it is a perfect vehicle for operating system design,
since it is possible to run the operating system under development on a virtual machine, so a full physical
machine is not required for the research.
It also has some drawbacks. For instance, sharing of resources is difficult since the virtual machines are
completely isolated. However, two approaches have been developed for resource sharing in operating
systems that support the virtual machine concept. First, it is possible to share the disk space. Second, it is
possible to define a network of virtual machines.

4 Client-server model
A trend in modern operating systems is to remove as much as possible from the operating system and to leave a
minimal kernel. Most of the operating system functions will be implemented in user processes. To request a
service, such as reading a block of a file, a user process (known as client) sends a request to a server process,
which then does the work and sends back the answer. One advantage of the client server model is its
adaptability to use in distributed systems.
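As a toy illustration of the request/reply pattern (all names here are hypothetical, not from any real microkernel API), the server can be modeled as a handler table and the client as a function that sends a request message and waits for the answer; in a real system, client and server would be separate processes exchanging messages through the kernel:

```python
# Hypothetical sketch of the client-server model. A "file server"
# process is modeled as a function dispatching on request messages;
# the "client" builds a request, sends it, and receives the reply.
def file_server(request):
    op, arg = request
    handlers = {
        "read_block": lambda block: f"<contents of block {block}>",
    }
    return handlers[op](arg)  # do the work, send back the answer

def client_read_block(block):
    # In a real microkernel this send/receive would go through
    # kernel message passing, not a direct function call.
    reply = file_server(("read_block", block))
    return reply

print(client_read_block(7))
```

Because the client only knows the message format, the same request could transparently be sent to a server on another machine, which is why this model adapts well to distributed systems.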

.VI Summary
Operating systems were invented to make better use of the computer hardware and to give a convenient interface
to the users. They have evolved considerably during the last four decades. Today's operating systems are, for the
most part, multi-user and multi-tasking, with graphical interfaces, and possess various tools such as networking
software.
It is impossible to design and implement an operating system, which is a large piece of software, without a
certain structuring method. There are various models for the structure of operating systems. The four main
models are: monolithic structure, layered structure, virtual machines, and client-server structure.
Process Management
.I Introduction
All modern operating systems can execute several tasks at a time (Windows 95, Windows NT, UNIX, etc.).
This allows them to use the resources of the computer, especially the processor, efficiently. It is also
convenient for the user, who doesn't need to wait for the end of the execution of one program before
starting the execution of another.
Early computer systems allowed only one program to be executed at a time. This program had complete control
of the resources of the computer system. Current-day operating systems allow several programs to be in
memory and execute concurrently. The same program can be run by two different users almost simultaneously.
This has necessitated the notion of process which is a program in execution.
A process is a unit of work. Every work (user or system) is accomplished on a computer using one or several
processes. On modern systems, these processes run concurrently using time sharing concepts.
The concept of executing several processes concurrently seems new, but humans are used to executing tasks
concurrently in a way very similar to what operating systems do. For example, a mother can bake a cake and
take care of her child at the same time. While she is baking the cake using a recipe her child can cry because he
has fallen down and he is bleeding. She will obviously stop baking the cake and take care of her child by
following instructions on a medical care book and using a first aid kit. When she finishes the first aid care she
will go back and finish baking right from where she stopped. In this example, there are two jobs done
concurrently and the following important observations can be made:
1- The mother uses a program (recipe or medical care book) to perform her jobs
2- The mother can perform only one job at a time but she can also perform several jobs by switching
from one job to another

.II The process concept


1 The process
Informally, a process is a program in execution. The execution of a process progresses in a sequential manner. A
process is more than the program code; there is other important information such as the state of the execution
and data in registers as well as in memory. Unlike a program, which is a passive entity, a process is an active
entity. A single program may be used for several processes.

Ready   --(scheduler dispatch)-->      Running
Running --(I/O or event wait)-->       Waiting
Waiting --(I/O or event completion)--> Ready

Figure: Diagram of process state

2 The process state


A process changes states while executing. During its execution, each process can be in one of the following
three states (see Figure above):
1. Running: the CPU is allocated to this process and instructions are being executed
2. Waiting: the process cannot execute because it is waiting for some I/O or event
3. Ready: the process is ready to execute but cannot, since the CPU is being used by some other
process
When a process is created it takes the ready state. A process terminates and
disappears after it has executed its final instruction.
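A minimal sketch of the three-state model and its legal transitions (the names are illustrative, not from any real kernel):

```python
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"

# Legal moves from the state diagram: ready -> running (scheduler
# dispatch), running -> waiting (I/O or event wait), waiting -> ready
# (I/O or event completion). running -> ready covers preemption,
# which the text introduces later (time slice expired, interrupt).
LEGAL = {
    (State.READY, State.RUNNING),
    (State.RUNNING, State.WAITING),
    (State.WAITING, State.READY),
    (State.RUNNING, State.READY),
}

def transition(current, new):
    """Return the new state, rejecting illegal moves such as
    waiting -> running (a waiting process must become ready first)."""
    if (current, new) not in LEGAL:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new
```

Note that there is no waiting -> running arrow: when its I/O completes, a process goes back to the ready queue and must wait to be dispatched again.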

3 Process Control Block


The Process Control Block (PCB) represents the process in the operating system. It is stored in the RAM area
reserved for the operating system. If one has the information in the PCB of a process, one knows exactly what
the process is going to do. The information stored in the PCB comprises:
1. The process state: indicates the current state of the process (running, waiting, ready)
2. The program counter: indicates the address of the next instruction to be executed
3. The CPU registers: the PCB of a process that is not currently running contains the values of the
registers after the process executed its last instruction before it stopped running. The number of
register values varies from processor to processor.
4. CPU scheduling information (priority, etc.)
5. Memory management information: the location in memory reserved for that process
6. Accounting information: the amount of CPU and real time used
7. I/O status information: the list of I/O devices allocated to this process, such as open files, tape
drives, etc.
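The seven kinds of information above can be sketched as a simple record; the field names are illustrative, not those of any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block with the seven kinds of
    information listed above."""
    pid: int
    state: str = "ready"                            # 1. process state
    program_counter: int = 0                        # 2. next instruction address
    registers: dict = field(default_factory=dict)   # 3. saved CPU registers
    priority: int = 0                               # 4. CPU scheduling information
    memory_base: int = 0                            # 5. memory management information
    cpu_time_used: float = 0.0                      # 6. accounting information
    open_files: list = field(default_factory=list)  # 7. I/O status information

p = PCB(pid=42)
print(p.state)
```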

4 Process hierarchies
Operating systems that support the process concept must provide some way to create and destroy the processes.
A process is generally created by an existing process upon executing a system call (usually called fork). The
creator of the process is called the parent while the process newly created is called the child.
In UNIX, the child is almost identical to the parent. It has the same code, an identical copy of the data and stack
segment. The child executes the code exactly from where the parent called the fork.
A parent and its children can each create more child processes. It is therefore possible to have a whole tree of
processes.
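On a UNIX system, the parent/child mechanism described above can be exercised directly; a minimal sketch (the exit status 42 is arbitrary, and this only runs on POSIX systems where fork() is available):

```python
import os

def fork_and_wait():
    """Create a child with fork(); the child runs the same code from
    the point of the fork and terminates, while the parent waits for it."""
    pid = os.fork()
    if pid == 0:             # fork() returns 0 in the child
        os._exit(42)         # child terminates with an arbitrary status
    # fork() returns the child's pid in the parent
    _, status = os.waitpid(pid, 0)
    return (status >> 8) & 0xFF   # extract the child's exit status
```

The child here exits immediately; in practice it would typically call exec() to run a different program, and the parent could keep forking to grow the process tree.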

5 Context switch
Switching the CPU from one process P1 to another process P2 requires saving the values of all the registers in
the PCB of P1, because those values must be restored when P1 is once again given the CPU. Then the register
values saved at the end of the previous execution of P2 are loaded from its PCB into the registers before P2 can
start running. This task of saving the CPU state of the leaving process and loading the CPU state of the newly
selected process is called a context switch.
Context switch time is pure overhead (1 to 1,000 microseconds). It depends on the speed of the processor, the
number of registers to save, etc. This overhead should be taken into consideration in the design of the
scheduler.
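The save/restore sequence can be sketched with dictionaries standing in for the register file and the PCBs (purely illustrative):

```python
def context_switch(cpu, pcb_out, pcb_in):
    """Save the outgoing process's registers into its PCB, then load
    the incoming process's saved registers into the CPU.

    cpu: dict modeling the CPU register file (mutated in place).
    pcb_out / pcb_in: dicts with 'registers' and 'state' entries.
    """
    pcb_out["registers"] = dict(cpu)   # save P1's register values
    pcb_out["state"] = "ready"
    cpu.clear()
    cpu.update(pcb_in["registers"])    # restore P2's register values
    pcb_in["state"] = "running"
```

During the whole switch the CPU does no useful work for either process, which is exactly the overhead discussed above.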

.III Process scheduling


There are only a limited number of processors in a computer (usually only one), but there are always many
processes alive at any single time. One of the major tasks of the operating system is to choose the process that
should run at a given time. This important task of the operating system is called process scheduling.
The operating system does not choose the process randomly. It tries to achieve two important goals. First, it
tries to maximize CPU utilization. Second, for time sharing systems, it switches processes as frequently as is
reasonable in order to give all processes regular opportunities to execute.

1 Scheduling queues
On a uniprocessor system, at any given time, only one process can get the CPU. The others will be put in
various queues (waiting lists). There are a number of queues.
a) The ready queue
All processes in the ready state will be put in the ready queue. This queue is generally stored as a linked
list. The elements of the linked lists are the PCBs of the processes.
b) The I/O device queue
A process may need the service of a certain device (a disk, for example). Since there are many processes
in the system, the device may be busy executing the request of some other process. In that case the
process cannot run and has to wait for that particular device. The list of processes waiting for a
particular device is called a device queue. Each device has its own device queue.
A common representation for discussion of process scheduling is a queuing diagram (Figure). Each rectangular
box represents a queue. Two types of queues are present: the ready queue and the device queues. The circles
represent the resources that serve the queue and the arrows indicate the flow of the processes in the system.
Ready Queue --> CPU executes --> process terminates (or returns to the ready queue)

A running process returns to the ready queue by one of four paths:
- I/O request --> I/O queue --> I/O executes --> ready queue
- Time slice expired --> ready queue
- Fork a child --> child executes --> child terminates --> ready queue
- Wait for an interrupt --> interrupt occurs --> ready queue

Figure: Queueing-diagram representation of process scheduling


A new process is initially put in the ready queue. It waits there until it is selected for execution and is given the
CPU.
One of the following could happen to a process under execution:
- The process could request an I/O and be placed in a device queue; it will have the waiting state.
- The process could create a new child process and wait for its termination; the parent process will
have the waiting state.
- The process may receive an interrupt that forces it back into the ready queue; the process will
have the ready state.
A waiting process is put back in the ready queue as soon as its I/O request is satisfied. A process continues this
cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
2 Goals for scheduling algorithms
A scheduler uses an algorithm to share the CPU between the processes. This algorithm is called the scheduling
algorithm. A scheduling algorithm decides on the policy for choosing a process for running among several
ready processes. What a good scheduling algorithm should ensure are:
a) Fairness: all processes should get a fair share of the CPU
b) Efficiency: the CPU should be used efficiently; it should be kept as busy as possible
c) Response time: interactive users should get good response time
d) Turnaround: batch jobs should wait the minimum possible time to get their output
e) Throughput: the maximum number of processes should be processed per hour
These goals are contradictory. For example, to give good response time, batch processes may be prevented from
running whenever there are interactive processes. This improves response time for interactive processes but
gives very bad turnaround for batch processes. Designers of scheduling algorithms generally have to find a
compromise that does not penalize one goal too much in order to achieve another.

3 Preemptive and non-preemptive scheduling


With non-preemptive scheduling, the scheduler gives the processor to a process and lets the process execute
until its completion or until it requests an I/O. Therefore, the programmer of that process knows exactly
where there is a possibility for the process to lose the CPU and for another process to take over. However, this
scheduling method does not allow time sharing.
With preemptive scheduling, in contrast, a process can be suspended at any
arbitrary instant, without any warning. Thus, the programmer doesn't know at which specific point of the
program the process may lose the processor. This may have serious consequences for the communication
between processes.

4 Quantum
With a time sharing operating system there is a slice of time, called the quantum, during which a process can run
without being interrupted by the operating system. If the process has still not finished its execution and is not
blocked at the end of that slice of time, the operating system interrupts the execution of the process and gives the
CPU to some other process.
One of the most important decisions that must be made during the design of an operating system is the length
of the quantum. Suppose the context switch takes 5 ms. If a quantum of 20 ms is used, 20% of the CPU
time will be spent on overhead. If the quantum is set to 500 ms instead, less than 1% of the CPU time
will be spent on overhead. However, a large quantum also has drawbacks; for example, it limits
the switching between processes, and users will no longer have the impression that the machine is used by
them alone.
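The overhead arithmetic above generalizes to a one-line formula, assuming exactly one context switch per quantum of useful work:

```python
def switch_overhead(quantum_ms, switch_ms=5):
    """Fraction of CPU time lost to context switching when every
    quantum of useful work is followed by one context switch."""
    return switch_ms / (quantum_ms + switch_ms)

print(switch_overhead(20))    # 20 ms quantum: 5 / 25 of the time is overhead
print(switch_overhead(500))   # 500 ms quantum: overhead drops below 1%
```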

5 I/O bound and CPU bound processes


An I/O bound process is a process that spends more of its time doing I/O than doing computation. A
CPU bound process, on the other hand, is one that makes I/O requests only rarely. I/O bound processes are
generally interactive processes (e.g., an editor).
The two types of processes have different requirements. I/O bound processes generally need the CPU for very
short periods. There is no need to give the CPU to such processes for a large amount of time, since they will
request an I/O and lose the CPU before having used it all. It is preferable to give them the CPU for shorter
periods but very frequently, in order to limit the response time.
CPU bound processes need a lot of CPU time. Therefore, they would benefit from being allocated large amount
of CPU time.
6 Scheduling algorithms
There are a number of scheduling algorithms. Each has some advantages and drawbacks. In this section, some
of the most famous scheduling algorithms will be seen.
a) Round Robin Scheduling (RR)
The Round Robin algorithm is one of the oldest, fairest and most widely used algorithm. With this algorithm,
the scheduler maintains a single ready queue. When the CPU is liberated, it takes the first process in the list and
runs it for a given quantum of time. If the elected process runs for the whole quantum it is suspended and put at
the end of the ready queue. If the process request for an I/O before the end of the quantum, the process state
will be change to waiting and the CPU will be given to the first process in the ready queue.
This algorithm is very easy to implement. However, the average waiting time for a process is often quite long.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU
time and has to wait no longer than (n-1) * q time units for its next turn. The performance of RR depends heavily
on the size of the quantum: if the quantum is very large there is almost no time sharing; if it is very small
there is good time sharing but the overhead is very high.
This algorithm does not differentiate between I/O bound and CPU bound processes. For this reason, the response
time of I/O bound processes may be very long when a large number of CPU bound processes are running.
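The Round Robin mechanism described above can be sketched in a few lines. This is a simplified simulation under an assumption the text does not make explicit: the processes here are purely CPU bound (no I/O requests), so a process only leaves the CPU when its quantum expires or it finishes.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin on CPU-only processes.

    burst_times: dict mapping a process name to its total CPU time needed.
    Returns the completion time of each process.
    """
    ready = deque(burst_times)            # the single ready queue
    remaining = dict(burst_times)
    clock = 0
    finish = {}
    while ready:
        p = ready.popleft()               # take the first process in the queue
        run = min(quantum, remaining[p])  # run it for at most one quantum
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)               # quantum used up: back of the queue
        else:
            finish[p] = clock             # process has finished
    return finish

print(round_robin({"A": 5, "B": 3, "C": 2}, quantum=2))
# {'C': 6, 'B': 9, 'A': 10}
```

Note how C, the shortest process, still waits 4 time units before first running: with n processes in the queue, a process can wait up to (n-1) * q for its turn, as stated above.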
b) Priority Scheduling (PS)
RR scheduling makes the assumption that all processes are equally important. In the real world, this is rarely
true. There are some processes that are more important than others; for example, a system process is more
important than a user process.
With priority scheduling, each process is assigned a priority and, whenever the processor is free, the ready
process with the highest priority is allowed to run. For example, suppose there are four priorities 1, 2, 3, 4
and the highest priority is 4. Whenever the scheduler has to choose a process to run, it looks first in the list
of processes of priority 4, and only if there is no ready process of priority 4 does it look for a priority 3
process.
With PS, there is a risk that higher priority processes indefinitely prevent a lower priority process from
running. This is called starvation, and it is obviously not acceptable.
Priorities can be assigned statically or dynamically. Dynamic priority allocation, for example, is useful for
obtaining good response times for interactive (I/O bound) processes. By giving I/O bound processes a high
priority, their response time can be considerably improved. Doing so does not penalize the CPU bound processes,
because I/O bound processes do not use much CPU time anyway.
However, a process is not I/O bound or CPU bound from birth to death. Very often, a process is I/O bound in the
beginning and becomes CPU bound later on. Suppose an I/O bound process is statically given a high priority and
becomes CPU bound some time later: this can starve the other processes. In such cases, dynamically assigned
priorities are useful. A simple algorithm for giving good service to I/O bound processes is to set the priority
to 1/f, where f is the fraction of the last quantum that the process used. In this way, if an I/O bound process
becomes CPU bound, f approaches 1 and its priority is lowered, hence avoiding starvation of the other processes.
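The 1/f rule above is easy to express in code. The sketch below assumes times are measured in the same units (e.g., milliseconds) and, as a hypothetical refinement not in the text, treats a process that used no measurable time as having used one unit, to avoid dividing by zero:

```python
def dynamic_priority(used, quantum):
    """Priority = 1/f, where f = used / quantum is the fraction of the
    last quantum the process actually used. Higher value = higher priority."""
    used = max(used, 1)   # assumed guard: avoid division by zero
    return quantum / used # equals 1 / (used / quantum)

# An I/O bound process that used 2 ms of a 50 ms quantum gets a high priority:
print(dynamic_priority(2, 50))    # 25.0
# A CPU bound process that used its whole quantum drops to the lowest priority:
print(dynamic_priority(50, 50))   # 1.0
```

If the I/O bound process later turns CPU bound, `used` grows toward `quantum`, f approaches 1, and its priority falls automatically, which is exactly the starvation-avoiding behaviour described above.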
c) Multiple queues scheduling
One of the earliest priority schedulers was used on a computer that could not hold more than one process in
memory. On that computer, a process switch meant swapping the current process to disk and reading a new one in
from disk. The designers of the operating system quickly realized that, to reduce this overhead, it was more
efficient to give CPU bound processes a large quantum once in a while than to give them a small quantum
frequently.
The solution was to set up priority classes. Processes in the highest class were run for one quantum, processes
in the next highest class for two quanta, processes in the class below that for four quanta, and so on.
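The doubling allocation described above follows a simple pattern: a process in class k (with class 0 the highest) receives 2^k quanta when it runs. A minimal sketch, with the class numbering being an assumption of this example:

```python
def quanta_for_class(level):
    """Quanta allocated to a process in priority class `level`,
    where 0 is the highest class: 1, 2, 4, 8, ... quanta."""
    return 2 ** level

print([quanta_for_class(k) for k in range(4)])  # [1, 2, 4, 8]
```

The lower (CPU bound) classes thus run rarely but for long stretches, which is what keeps the swapping overhead down.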
d) Two level scheduling
If insufficient memory is available, some ready processes may be in memory while others are on disk. Switching
to a process on disk is orders of magnitude more expensive than switching to a process in memory. Two level
scheduling was invented to deal with this situation.
With two level scheduling, a subset of the ready processes is loaded into memory, and the scheduler restricts
itself to those processes for a while. Periodically, a higher level scheduler removes the processes that have
been in memory long enough and brings in the processes that have been on disk too long. Thus, the lower level
scheduler is concerned with the ready processes in memory, while the higher level scheduler is concerned with
shuttling processes back and forth between memory and disk.
e) Multiple processor scheduling
The processors within a multiprocessor are identical: they can all perform the same tasks. One way of using them
is load sharing, with a separate ready queue for each processor. However, in that case one processor can be idle
while another has a long queue. To prevent this, a single shared queue may be used, with one of two approaches:
either each processor examines the queue itself and selects the process to execute, or one of the processors acts
as a master scheduler that assigns processes to the others. The latter solution is easier to implement.
f) Real time scheduling
Some processes have real time requirements: they have to finish within a given amount of time or problems may
result. For example, a process that controls the temperature of a nuclear plant must get the CPU within a very
strict amount of time; a failure to do so can result in a catastrophe.
The operating system that runs such processes should be able to guarantee certain CPU time requirements. If the
operating system cannot give such a guarantee, it should not allow the process to be created in the first place.
Real time scheduling is possible using resource reservation: the operating system reserves certain time slots
for a given process at its creation, which guarantees the process that it will get at least those time slots for
execution.
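The text does not say how the operating system decides whether it can honour a reservation. One common approach, shown here as an illustrative assumption rather than the method the text describes, is a utilization-based admission test: each reservation is a (cost, period) pair meaning "cost time units every period time units", and a new process is admitted only if total CPU utilization stays at or below 100%.

```python
def admit(reservations, new_cost, new_period):
    """Admission control sketch: accept the new (cost, period) request
    only if total reserved CPU utilization would stay <= 1."""
    util = sum(cost / period for cost, period in reservations)
    return util + new_cost / new_period <= 1.0

existing = [(10, 50), (20, 100)]   # 0.2 + 0.2 = 40% of the CPU already reserved
print(admit(existing, 30, 100))    # True: total utilization 0.7
print(admit(existing, 80, 100))    # False: total utilization would be 1.2
```

Rejecting the second request at creation time is exactly the behaviour demanded above: a guarantee the system cannot keep is never given.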

.IV Processes and threads


Until now, we have considered the process as the unit of work, incorporating both the unit of dispatching and
the unit of resource ownership. This was true for all operating systems in the past, but it is less and less true
for many modern operating systems, which are adopting a finer unit of dispatching: the thread, or lightweight
process.

Many threads can now exist within a single process. Threads within a single process share the same resources but
have their own threads of control. The key benefit of threads derives from their performance implications: it
takes less time to create and terminate a thread, and inter-thread communication and switching from one thread
to another are also faster. However, these benefits do not come without drawbacks. Since the resources are
shared without the intervention of the operating system, the programmer has to explicitly manage the resources
shared between the threads.
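Both points above, the shared resources and the programmer's burden of managing them, show up in even the smallest multi-threaded program. In this sketch, two threads of control share the variable `counter` of their common process; the lock is the explicit management the operating system does not provide:

```python
import threading

counter = 0                # resource shared by all threads of the process
lock = threading.Lock()    # the programmer must protect it explicitly

def worker(n):
    global counter
    for _ in range(n):
        with lock:         # without the lock, concurrent updates could be lost
            counter += 1

# Two threads sharing the same address space:
threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)             # 20000
```

Creating these threads is far cheaper than creating two separate processes, but forgetting the lock would silently corrupt `counter`: the operating system does not intervene in accesses to shared data.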

.V Summary
The process is a central concept in operating systems: it is a program in execution. The PCB stores all the
information on a process; this information is used by the operating system for scheduling and context switching.
There are a number of scheduling algorithms suitable for various situations. However, there are also a number of
goals, such as fairness, response time and efficiency, that the designer of a scheduler should keep in mind.
