
1. INTRODUCTION

1.1 What is an Operating System


A computer without software is useless; software brings a computer to life. With software, a computer can store, process and retrieve information. Computer software can roughly be divided into two forms: system programs and application programs.

System programs manage the operations of the computer itself and application programs perform the
work that the user wants. The most fundamental system program is the Operating System, which
controls all computer resources and provides the base upon which application programs run.
Long ago, there was no such thing as an operating system. The computers ran one program at a time. The
computer programmers would load the program they had written and run them. If there was a bug in the
program, the programmer had to start over. Even if the program did run correctly, the programmer
probably never got to work on the machine directly. The program (punched card) was fed into the
computer by an operator who then passed the printed output to the programmer later on.

As technology advanced, many such programs, or jobs, were all loaded onto a single tape. This tape was
then loaded and manipulated by another program, which was the ancestor of today's operating systems.
This program (also known as a monitor) would monitor the behavior of the running program and if it
misbehaved (crashed), the monitor could then immediately load and run another.

The process of loading and monitoring programs in the computer was somewhat cumbersome and with
time, it became apparent that some way had to be found to shield programmers from the complexity of
the hardware and allow for smooth sharing of the relatively vast computer resources. The way that has
evolved gradually is to put a layer of software on top of the bare hardware, to manage all the components
of the system and present the user with a virtual machine that is easier to understand and program. This
layer of software is the operating system.

We can view an operating system as a resource allocator. A computer system has many resources
(hardware and software) that may be required to solve a problem: CPU time, memory space, file storage
space, I/O device and so on. The operating system acts as a manager of these resources and allocates them
to specific programs and users as necessary for tasks. Within an operating system are the management
functions that determine who gets to read data from the hard disk, what file is going to be printed next,
what characters appear on the screen, and how much memory a certain program gets.

The concept of an operating system can be illustrated using the following diagram:

+-----------------------------------------------------------+
|  Banking     |  Airline        |  Web browser             |  Application
|  system      |  reservation    |                          |  programs
+-----------------------------------------------------------+
|  Compilers   |  Editors        |  Command interpreter     |  System
+-----------------------------------------------------------+  programs
|                     Operating system                      |
+-----------------------------------------------------------+
|                     Machine language                      |
+-----------------------------------------------------------+
|                     Microprogramming                      |  Hardware
+-----------------------------------------------------------+
|                     Physical devices                      |
+-----------------------------------------------------------+

Figure 1
At the bottom layer is the hardware, which in many cases is composed of two or more layers. The lowest layer contains physical devices such as IC chips, wires, network cards, cathode ray tubes, etc. The next layer, which may be absent in some machines, is a layer of primitive software that directly controls the physical devices and provides a clean interface to the next layer. This software, called the microprogram, is normally located in ROM. It is an interpreter, fetching machine language instructions such as ADD, MOVE and JUMP and carrying them out as a series of little steps. The set of instructions that the microprogram can interpret defines the machine language.

The machine language typically has between 50 and 300 instructions, mostly for moving data around the
machine, doing arithmetic and comparing values. In this layer, I/O devices are controlled by loading
values into special device registers.

A major function of the operating system is to hide all this complexity and give programmers and users a more convenient set of instructions to work with. For example, COPY FILE1 FILE2 is conceptually simpler than having to worry about the location of the original file on disk, the location of the new file and the movement of the disk heads to effect the copying.
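
As a minimal sketch of this abstraction, the following C program performs such a copy using only the operating system's file interface (a POSIX-style system with open, read and write is assumed); the kernel takes care of disk layout, head movement and buffering:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Copy FILE1 to FILE2 using only the OS's file abstraction.
       The kernel handles disk layout, head movement and buffering. */
    int main(void) {
        int in  = open("FILE1", O_RDONLY);
        int out = open("FILE2", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        char buf[4096];
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);      /* error handling elided for brevity */

        close(in);
        close(out);
        return 0;
    }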

On top of the operating system is the rest of the system software, for example compilers, command interpreters and editors. These are not part of the operating system. The operating system runs in kernel or supervisor mode, meaning it is protected from user tampering. Compilers and editors run in user mode, meaning that users are free to write their own compiler or editor if they so wish.

Finally, above the system programs come the application programs. These are programs purchased or written by the users to solve particular problems, such as word processing, spreadsheets, databases, etc.

1.2 Functions of an Operating System

Operating Systems perform two basically unrelated functions:

i) Provision of a virtual machine


ii) Resource management

1.2.1 Provision of a virtual machine

A virtual machine is software that creates an environment between the computer platform and the end-
user. A programmer does not want to get too intimately involved with programming hardware devices
like floppy disks, hard disks and memory. Instead, the programmer wants a simple, high-level abstraction
to deal with. In the case of disks, a typical abstraction would be that the disk contains a collection of named files. Each file can be opened for reading or writing, then read or written, and finally closed.

The program that hides the truth about hardware from the programmer and presents a nice, simple view of
named files that can be read and written is, of course, the operating system. The operating system also
conceals a lot of unpleasant business concerning interrupts, timers, memory management and other low
level features. In this view, the function of the operating system is to present the user with the equivalent of an extended machine or virtual machine that is easier to program than the underlying hardware.

1.2.2 Resource management

Modern computers consist of processors, memories, timers, disks, network interface cards, printers etc.
The job of the operating system is to provide for an orderly and controlled allocation of processors,
memories and I/O devices among the various programs competing for them.

Operating Systems 2
For example, suppose several programs running on the same computer sent print jobs to the same printer at the same time. If the printing were not controlled, the printout would be interleaved, with say the first line belonging to the first program, the second line to the second program, and so on. The operating system brings order to such a situation by buffering all output destined for the printer on disk. When one program has finished, the operating system can then copy its output from the disk to the printer.

In this view, the operating system keeps track of who is using which resource, grants resource requests, accounts for usage and mediates conflicting requests from different programs and users.

1.3 Operating system concepts

The interface between the operating system and user programs is defined by the set of ‘extended instructions’ that the operating system provides. These instructions are referred to as system calls. The calls available in the interface vary from one operating system to another, although the underlying concept is similar.

1.3.1 Processes

A process is basically a program in execution. Associated with each process is its address space: the memory locations that the process can read and write. The address space contains the executing program, its data and its stack. Also associated with each process is a set of registers, including the program counter, stack pointer and other hardware registers, together with all the information needed to run the program.

In a time-sharing system, the operating system decides to stop running one process and start running
another. When a process is suspended temporarily, it must later be restarted in exactly the same state it
had when it was stopped. This means that the context of the process must be explicitly saved during
suspension. In many operating systems, the information about each process, apart from the contents of its
address space, is stored in a table called the process table.
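
Exactly what a process table entry contains differs from system to system; the C struct below is only an illustrative sketch of the kind of per-process state described above, with invented field names:

    /* Hypothetical process table entry (process control block).
       Real operating systems record many more fields. */
    enum proc_state { READY, RUNNING, BLOCKED };

    struct process {
        int             pid;           /* process identifier          */
        enum proc_state state;         /* READY, RUNNING or BLOCKED   */
        unsigned long   pc;            /* saved program counter       */
        unsigned long   sp;            /* saved stack pointer         */
        unsigned long   regs[16];      /* saved general registers     */
        void           *address_space; /* base of the core image      */
        struct process *parent;        /* the process that created us */
    };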

Therefore, a suspended process consists of its address space, usually referred to as the core image, and its process table entry. The key process management system calls are those dealing with the creation and termination of processes. For example, a command interpreter (shell) reads commands from a terminal, for instance a request to compile a program. The shell must create a new process that will run the compiler, and when that process has finished the compilation, it executes a system call to terminate itself.

A process can create other processes known as child processes and these processes can in turn create other
child processes. Related processes that are cooperating to get some job done often need to communicate
with one another and synchronize their activities. This communication is known as Inter-Process
Communication (IPC).

Other system calls are available to request more memory or release unused memory, to wait for a child process to terminate and to overlay a process's program with a different one.
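
On UNIX-like systems these calls are fork, exec, wait and exit. The C sketch below shows, under that assumption, what a shell does for one command; the compiler command cc prog.c is only an example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch of a shell handling one command, e.g. "cc prog.c". */
    int main(void) {
        pid_t child = fork();             /* create a child process      */
        if (child == 0) {
            /* child: overlay this process with the compiler */
            execlp("cc", "cc", "prog.c", (char *)NULL);
            perror("execlp");             /* reached only if exec failed */
            exit(1);
        }
        int status;
        waitpid(child, &status, 0);       /* parent waits for the child  */
        printf("compiler finished with status %d\n", WEXITSTATUS(status));
        return 0;
    }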

1.3.2 Files

A file is a collection of related information defined by its creator. Commonly, files represent programs
and data. Data files may be numeric, alphabetic or alphanumeric. System calls are needed to create,
delete, move, copy, read and write files. Before a file can be read, it must be opened, and after reading it
should be closed. System calls are provided to do all these things.

Files are normally organized into logical clusters or directories, which make them easier to locate and access. For example, you can have directories for keeping all your program files, word processing documents, database files, spreadsheets, electronic mail, etc. System calls are available to create and remove directories. Calls are also provided to put an existing file in a directory and to remove a file from a directory. This model gives rise to a hierarchical file system, as shown below:

                          Root
                         /    \
                   Staff        Student
                  /     \       /     \
              James    Peter  John    Andrew
              /   \                   /    \
        ICS 636   ICS 613         Games    Programs

Figure 2

Every file within a directory hierarchy can be specified by giving its path name from the root directory.
Such absolute path names consist of the list of directories that must be traversed from the root directory to
get to the file, with slashes separating the components. For example, in figure 2 above, the path for the file ICS 613 is /Staff/James/ICS613. The leading slash indicates that the path is absolute.

At every instant, a process has a working directory, in which path names not beginning with a slash are looked up. If we were working from the /Staff directory, the path for the file above would be James/ICS613. This is referred to as a relative path name.
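
In a program, the difference between the two forms is only in the string handed to the operating system. Assuming the hierarchy of figure 2 existed on disk, both calls in this C sketch would name the same file:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Absolute path: traversed from the root directory. */
        FILE *f1 = fopen("/Staff/James/ICS613", "r");

        /* Relative path: resolved against the working directory. */
        chdir("/Staff");
        FILE *f2 = fopen("James/ICS613", "r");

        if (f1) fclose(f1);
        if (f2) fclose(f2);
        return 0;
    }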

1.3.3 Batch Systems

The early operating systems were batch systems. The common input devices were card readers and tape drives; the common output devices were line printers, tape drives and card punches. The users did not interact with the system directly; rather, a user would prepare a job and submit it to the computer operator, who would feed the job into the computer, and the output appeared later.

The major task of the operating system was to transfer control automatically from one job to the next. To
speed processing, jobs with similar needs were batched together and run through the computer as a group.
Programmers would leave their jobs with the operator who would then sort them out into batches and as
the computer became available, would run each batch. The output would then be sent to the appropriate
programmer. The delay between job submission and completion, also referred to as the turnaround time, was high in these systems. In this execution environment the CPU was often idle because of the disparity in speed between the I/O devices and the CPU.

To reduce the turnaround time and CPU idle time in these systems, the spool (simultaneous peripheral operation on-line) concept was introduced. Spooling, in essence, uses the disk as a huge buffer, for reading as far ahead as possible on the input devices and for storing output files until the output devices are able to accept them.
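
The output half of this idea can be sketched in C: each job appends its output to a spool file on disk at full speed, and the operating system later drains that file to the slow printer. The file and device names below are invented for illustration:

    #include <stdio.h>

    /* Jobs write their output here at disk speed ... */
    void spool_output(const char *text) {
        FILE *spool = fopen("/spool/job1.out", "a");
        if (spool) { fputs(text, spool); fclose(spool); }
    }

    /* ... and the OS later copies the finished file to the printer. */
    void drain_to_printer(void) {
        FILE *spool = fopen("/spool/job1.out", "r");
        FILE *prn   = fopen("/dev/printer", "w");  /* hypothetical device */
        int c;
        if (!spool || !prn) return;
        while ((c = getc(spool)) != EOF)
            putc(c, prn);
        fclose(spool);
        fclose(prn);
    }

    int main(void) {
        spool_output("first line of job 1\n");
        drain_to_printer();
        return 0;
    }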

1.3.4 Multiprogramming

Spooling results in several jobs that have already been read waiting on disk, ready to run. This allows the operating system to select which job to put in memory next, ready for execution; this is referred to as job scheduling. The most important aspect of job scheduling is the ability to multiprogram. The operating system keeps several jobs in memory at the same time, this set being a subset of the jobs kept in the job spool. The operating system picks one of the jobs in memory and starts executing it. Eventually, the job may have to wait for some task, such as an I/O operation, to complete. In multiprogramming, when this happens the operating system simply switches to and executes another job.

If several jobs are ready to be brought from the job spool into memory and there is no room for all of them, then the system must choose among them; making this decision is job scheduling. Having several jobs in memory at the same time, ready for execution, also requires memory management. When several jobs in memory are ready to run, the system must choose one of them; making this decision is known as CPU scheduling.

1.3.5 Time Sharing Systems

Time-sharing, or multitasking, is a logical extension of multiprogramming. In time-sharing, multiple jobs are executed by the CPU switching between them, but the switches occur so frequently that the users may interact with each program while it is running. An interactive computer system provides on-line communication between the user and the system.

Time-sharing systems were developed to provide interactive use of the computer system at a reasonable
cost. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user or
program with a small portion of a time-shared computer. It allows many users to share the computer
simultaneously. As the system switches rapidly from one user to the next, each user is given the impression of having a dedicated computer, whereas in reality one computer is being shared among many users.
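
The switching itself is performed by the kernel on timer interrupts, but the policy is easy to illustrate. The C program below is a toy simulation of round-robin time slicing over three jobs with made-up CPU demands and an assumed quantum of one time unit:

    #include <stdio.h>

    /* Toy simulation of round-robin time-sharing: each pass gives
       every unfinished job one time slice (quantum) of CPU. */
    int main(void) {
        int remaining[3] = {3, 5, 2};  /* made-up CPU needs of 3 jobs */
        int unfinished = 3, quantum = 1, t = 0;

        while (unfinished > 0) {
            for (int j = 0; j < 3; j++) {
                if (remaining[j] == 0) continue;
                printf("t=%d: job %d runs\n", t, j);
                remaining[j] -= quantum;
                t += quantum;
                if (remaining[j] == 0) unfinished--;
            }
        }
        return 0;
    }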

1.3.6 Parallel Systems

Most systems are single-processor systems, that is, they have only one main CPU. However, there is a
trend towards multiprocessing systems. Such systems have more than one processor in close
communication, sharing the computer bus, clock and sometimes memory and peripheral devices. These
systems are referred to as tightly coupled systems. The motivation for having such systems is to improve
the throughput and reliability of the system.

1.3.7 Real-Time Systems

A real-time system is used when there are rigid time requirements on the operation of a processor or the flow of data, and it is thus often used as a control device in a dedicated application. Sensors bring data to the
computer. The computer must analyze the data and possibly adjust control to modify the sensor inputs.
Systems that control scientific experiments, medical imaging systems, industrial control systems and
some display systems are examples of real-time systems.

1.4 Operating system structure

There are four major operating system designs that have been tried, though this list is not exhaustive:

1. Monolithic Systems
2. Layered Systems
3. Virtual Machines
4. Client-Server systems

1.4.1 Monolithic Systems

This is the most common approach, also referred to as the ‘big mess.’ These operating systems have no
structure. The operating system is written as a collection of procedures, each of which can call any other
when need be. Each procedure has a well-defined interface in terms of its parameters and results.

To construct the object program of the operating system, you compile all the individual procedures, then
bind them together into a single object file using the system linker. In this model there is no information
hiding as every procedure is visible to all the other procedures.

Even a monolithic system can have a little structure. The system calls provided by the operating system are requested by putting the parameters in well-defined places, such as registers or the stack, and executing a special trap instruction known as a kernel call or supervisor call. This instruction switches the machine from user mode to kernel mode and transfers control to the operating system.

Most CPUs have two modes: kernel mode for the operating system, in which all instructions are allowed
and user mode, for user programs, in which I/O and other instructions are not allowed.

In this model, a user program makes a system call to the operating system; when the system call has finished, control is given back to the user program.
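
On Linux, for instance, the C library exposes this trap mechanism through syscall(); the sketch below invokes the write system call that way, with the library placing the call number and parameters in the conventional registers and executing the trap instruction:

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from user mode\n";
        /* syscall() loads the call number and arguments into the
           conventional places and executes the trap; the kernel runs
           the call in kernel mode and then returns control here. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }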

1.4.2 Layered Systems

In these systems, the operating system is broken down into a hierarchy of layers, each constructed upon the one below it. The first layered system was the THE system, built at the Technische Hogeschool Eindhoven in the Netherlands. This system had six layers:

Layer Function
5 The operator
4 User Programs
3 I/O management
2 Operator-process communication
1 Memory and Drum management
0 Process allocation and multiprogramming

Figure 3

Layer 0 dealt with allocation of the processor, switching between processes (context switching) when interrupts occurred or timers expired. Layer 1 did memory management: it allocated space for processes in main memory and on the drum for the pages of processes that could not fit in main memory. Above layer 1, processes did not have to worry about whether they were in main memory or on the drum.

Layer 2 handled communication between each process and the operator console. Layer 3 took care of managing the I/O devices and buffering the information streams to and from them. Layer 4 was where the user programs were found, and the system operator was located in layer 5.

A further generalization of the layering concept was present in the MULTICS system. Instead of layers, MULTICS was arranged as a series of concentric rings, with the inner rings being more privileged than the outer ones. When a procedure in an outer ring wanted to call a procedure in an inner ring, it had to make the equivalent of a system call: a TRAP instruction whose parameters were carefully checked for validity before the call was allowed to proceed.

1.4.3 Virtual Machines

The computer system VM/370 was based on the observation that a time-sharing system provides both multiprogramming and an extended machine with a more convenient interface than the bare hardware. The essence of this system is to completely separate these two functions. The core of the system, known as the virtual machine monitor, runs on the bare hardware and does the multiprogramming, providing not one but several virtual machines to the next layer up. Unlike in other operating systems, these virtual machines are not extended machines; instead, they are exact copies of the bare hardware, including I/O, interrupts, kernel/user mode and everything else the real machine has.

Each virtual machine can run any operating system that will run directly on the bare hardware. Different virtual machines can, and frequently do, run different operating systems. Some run a single-user, interactive system called CMS (Conversational Monitor System) for time-sharing users.

When a CMS program executes a system call, the call is trapped to the operating system in its own virtual
machine, not to VM/370, just as it would if it were running on a real machine instead of a virtual one.
CMS then issues the normal hardware I/O instructions. These I/O instructions are trapped by VM/370, which performs them as part of its simulation of the real hardware.

By making a complete separation of the functions of multiprogramming and providing an extended machine, each of the pieces can be much simpler, more flexible and easier to maintain.

                System call (trapped by CMS)
                         |
        +-------+   +-------+   +-------+
        |  CMS  |   |  CMS  |   |  CMS  |
        +-------+   +-------+   +-------+
        |             VM/370            |  <- I/O instructions trap here
        +-------------------------------+
        |         Bare hardware         |
        +-------------------------------+

Figure 4

1.4.4 Client-Server Model

VM/370 gains much simplicity by moving a large part of the traditional operating system code (the extended machine) into a higher layer, CMS. Nevertheless, VM/370 is still a complex program, because simulating several virtual machines is difficult.

The trend in modern operating systems is to take this idea of moving code up into higher layers even further and to remove as much as possible from the operating system, leaving a minimal kernel. The usual approach is to implement most of the operating system's functions in user processes. To request a block of a file, a user process (the client process) sends the request to the file server process, which then does the work and sends back the answer.

+---------+---------+---------+----------+-----+--------+--------+
| Client  | Client  | Process | Terminal | ... | File   | Memory |  User mode
| process | process | server  | server   |     | server | server |
+---------+---------+---------+----------+-----+--------+--------+
|                            Kernel                              |  Kernel mode
+-----------------------------------------------------------------+

Figure 5

In this model, as shown in figure 5, all that the kernel does is handle the communication between clients
and servers. By splitting the operating system up into parts, each of which only handles one facet of the
system, such as file service, process service, terminal service or memory service, each part becomes small
and manageable.

Furthermore, because all the servers run as user-mode processes and not in kernel mode, they do not have
direct access to the hardware. Consequently, if a bug in the file server is triggered, the file service may
crash, but not the entire system. Another advantage is its adaptability to use in distributed systems. If a
client communicates with a server by sending it messages, the client need not know whether the message
is handled locally or whether it was sent across the network to a server on a remote machine.
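
The request/reply exchange can be sketched with two ordinary processes and a pair of pipes standing in for the kernel's message passing; the message contents below are invented for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Client sends a request down one pipe; a toy "file server"
       replies on the other. The kernel only moves the messages. */
    int main(void) {
        int req[2], rep[2];
        pipe(req);
        pipe(rep);

        if (fork() == 0) {                 /* server process */
            char msg[64];
            ssize_t n = read(req[0], msg, sizeof msg - 1);
            if (n > 0) {
                msg[n] = '\0';
                char answer[128];
                snprintf(answer, sizeof answer, "served: %s", msg);
                write(rep[1], answer, strlen(answer));
            }
            _exit(0);
        }

        write(req[1], "read block 7", 12); /* client sends a request */
        char answer[128];
        ssize_t n = read(rep[0], answer, sizeof answer - 1);
        if (n > 0) { answer[n] = '\0'; printf("%s\n", answer); }
        return 0;
    }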

1.5 Operating Systems Development (A Historical perspective)

Operating systems have developed through a number of phases (generations) that are closely tied to the generations of computer hardware and software, since operating systems sit very close to the computer architecture.

1.5.1 First generation (1945 –1955) – Vacuum tubes and plug boards

The first generation computers were enormous machines built from vacuum tubes and relays; there was essentially only computer hardware, e.g. the ENIAC. These computers were programmed manually by setting switches and by plugging and unplugging cables; you had to interact with the hardware directly. The operation of these machines was very slow, cumbersome and tedious.

A method called the ‘stored program’ concept was developed to enhance their operation. The concept is attributed to John von Neumann in the late 1940s, and the computers that resulted from it were known as von Neumann machines. This led to the realization of digital computers as we know them and to the beginning of the programming concept. Some computers that made use of this concept were the EDVAC, EDSAC and IAS machines. These computers had all the features of modern computers, i.e. a CPU (ALU and registers), memory and I/O facilities.

Programs were written in machine language and programming was done by one individual at a time. These computers had a sign-up sheet, which allocated every user a block of time with exclusive access to the machine. Preparing a machine language program for execution involved the following:

• A programmer would write a program and operate it directly from the console
• Programs and data would be manually loaded into memory from the front panel switches, one
instruction at a time
• The appropriate buttons would then be pushed to set the starting address and start the
execution of the program
• If errors occurred, the programmer would halt the program and examine the contents of
memory and registers and debug the program directly from the console

The situation changed in the 1950s:

• Assembly (symbolic) languages were developed
• Punched cards were used to input programs and data into the computer through a punched card reader
• Assemblers were developed to translate assembly language into machine language

Problems:

• There was a significant setup time (the time taken to load, assemble and execute programs)
• The CPU sat idle for long stretches, underutilizing the resources and abilities of the
computer, for example during I/O transfers, setup times and transfers of control between
programs

1.5.2 Second Generation Computers (1955 – 1965) – Transistors and Batch Systems

Characteristics of Computers and software at this time included:

• They used transistors instead of vacuum tubes. The computers became more reliable and
grew smaller in size. Consequently, computers became more commercialized and were sold
to businesses for data processing.
• Machine independent languages like COBOL, FORTRAN and ALGOL were being
developed.
• Additional system software like compilers, utilities for tape/card conversion and batch
monitors were made available
• The computers were being used for both scientific and business data processing

The problems with the computers at this time included:

• The CPU sat idle for some of the time, owing to reserved time going unused and to slow I/O
data transfers. This resulted in low utilization of the computer facilities, and because
computers were being used commercially, it prompted scientists to develop mechanisms for
higher utilization of the computer, for example sharing of the CPU by many users
• There was an I/O and CPU speed disparity: the computers had become faster than the
I/O devices.

Solutions developed to address these problems included:

1. The use of professional operators to operate the computers. Programmers normally wrote
programs but were not allowed to run them; rather, they presented them to the professional
operators.
2. The programs (jobs) were usually batched, and the operator would run the programs one by one.
3. Jobs with similar needs would be batched together to share the setups common to them. When
the computer became available, the batch was run as a group with fewer intermediate setups,
for example all COBOL programs together, then all FORTRAN programs, and so on. This was the
basis of job scheduling, though the operator performed the job-to-job transition manually.
4. A rudimentary kind of operating system called a batch monitor was developed to reduce the
human setup time. It automated job sequencing. The monitor would be invoked and transfer
control to the first program, the program would then execute and return control to the monitor.
The monitor would then invoke the second program and transfer control to it.

5. Off-line spooling was introduced to supplement batch processing. Off-line spooling uses high-
speed I/O devices: tapes, rather than punched cards, were used for the input of programs. A
number of jobs were collected and then copied to a tape on an off-line computer. The tape was
rewound and then mounted on the main computer by an operator, and the jobs were executed one
by one, delivering their output to another tape.

6. Multiple buffering was introduced, that is, overlapping CPU and I/O operations. This method
builds blocks of data in memory before output and likewise assembles blocks of data before
input to memory. When a program requested data or records, the data was read from tape into
memory and then from memory to the CPU; while the CPU was processing, another block of data
could simultaneously be read into a second, empty buffer in memory, as sketched below.
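
The buffering scheme in item 6 can be sketched in C. In a real system the next read would be an asynchronous device transfer proceeding concurrently with the computation; sequential C can only indicate that point with a comment, and an ordinary input file stands in for the tape:

    #include <stdio.h>

    #define BUFSZ 4096

    /* Two buffers: while the CPU processes one, the device can be
       filling the other, overlapping I/O with computation. */
    static char buffer[2][BUFSZ];

    static void process(char *buf, size_t n) {
        (void)buf; (void)n;               /* consume n bytes of data */
    }

    int main(void) {
        FILE *tape = fopen("input.dat", "rb");  /* stands in for a tape */
        if (!tape) return 1;

        int cur = 0;
        size_t n = fread(buffer[cur], 1, BUFSZ, tape);
        while (n > 0) {
            /* In a real system the next transfer would be started on
               the device here, concurrently with process() below.   */
            size_t next_n = fread(buffer[1 - cur], 1, BUFSZ, tape);
            process(buffer[cur], n);
            cur = 1 - cur;
            n = next_n;
        }
        fclose(tape);
        return 0;
    }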

1.5.3 Third Generation Computers (1965 – 1980) – ICs and multiprogramming

IC technology replaced transistor technology. This led to great reductions in the cost and size of computers and to gains in efficiency, speed and reliability. Semiconductor (IC) memory became common. Microprogramming came into widespread use to simplify CPU design. Concurrent programming techniques like multiprogramming and time-sharing were introduced for sharing the computer's CPU. This period also witnessed a great increase in the use of magnetic disks.

The problems during this time included:

• There were I/O and CPU speed disparities. The time the CPU sat idle increased, because
processor technology was advancing faster than the I/O technology of magnetic disks.
• There was poor utilization of the CPU, which had become even faster, resulting in high
turnaround times and low throughput.

Solutions to these problems:

• Introduction of multiprogramming. Memory was divided into several partitions and each job
was given a partition in memory. A job would then be given the CPU for execution; at some
point the job would perform an I/O operation. When this happened, the CPU would switch to
another job until that job also needed the services of an I/O device. The CPU would keep
searching for jobs that were ready to execute. This resulted in improved turnaround
times and increased throughput.
• Online spooling became common. Instead of the previous off-line spooling, scientists
developed on-line spooling using the magnetic disk system. The disk is a direct access
device (DAD): it is easy to read data from disk on request, unlike tape, which had to be
rewound or wound forward to retrieve the data.
• The desire for applications requiring quick response paved the way for time-sharing. In time-
sharing, the computer operates in an interactive mode with a number of logged-in users.
Time-sharing systems use CPU scheduling and multiprogramming to provide each user with
a small portion of a time-shared computer, called a time slice.
• Real-time systems also became popular.

1.5.4 Fourth Generation Computers (1980 …) – VLSI and Personal Computers

VLSI (Very Large Scale Integration) made it possible to manufacture a whole component on a single IC chip. This made the computers of this period fast, reliable, cheap and efficient, and led to:
• Personal Computers
• Parallel Processing systems
• Computer Networks

1. Personal Computers - A large software industry developed to produce the required software,
such as single-user operating systems, for personal computers. Initially these were only
single-user operating systems, but nowadays we have multitasking operating systems like
Windows 95, where a user runs several tasks concurrently. These operating systems have also
aimed at user convenience and responsiveness through better user interfaces (Graphical User
Interfaces).

2. Parallel Processing Systems – Parallel systems make use of more than one processor to perform
tasks. Operating systems for parallel systems were developed to efficiently exploit the underlying
available hardware.

3. Computer Networks – Independent computers were interconnected by a network to allow users
to share available resources like printers, servers, plotters and hard disks, and to exchange
data through some communication medium. Computer networks can be categorized by the
geographical region they cover: LANs (Local Area Networks) cover a smaller geographical
region like a building or a small town, while WANs (Wide Area Networks) cover a larger
geographical region like a city, a continent or the entire globe. Network operating systems
were developed to manage the computers in a network.

