
Chapter 3.1 – The Functions of Operating Systems

1. Introduction to Operating Systems

1.1 What is an operating system?

An operating system is a program, or a set of programs, that controls the hardware and the execution
of all other application programs, and acts as an intermediary between the user(s) and the computer
hardware by providing a user interface.

1.2 What are the functions of an operating system?

1. Manages the memory of the computer
2. Provides an interface between the user and the machine.
3. Controls how the computer responds to user’s requests
4. Controls how the hardware devices communicate with each other
5. Provides an interface between application software and the machine
6. Provides an environment in which application software can be executed
7. Organizes and handles the files
8. Provides security to the files

2. Features of a Desktop Operating System

1. User interface: the two main types of user interfaces are command line and graphical user
interface.

2. Device drivers (Routines which control hardware). Example: Keyboard driver

3. Multitasking capability which enables the computer to run more than one program at one time.
Example: Running a word processing program, spreadsheet program and a database program at
the same time in the computer.

4. Spooling: directs jobs and data files to a queue on a backing store before sending them to their
intended peripheral device.

5. Security: ensures that users can keep their files confidential

6. Scheduler: allocates job priorities and finds and resolves deadlocks

7. Memory manager: allocates memory to jobs and data

8. Interrupt handler: routine in the operating system which puts interrupts in a queue until they are
processed by the processor.

9. Translators: convert source code of application programs into object code or machine code

3. Job Management

3.1 What is a job?

A job is an instance of a running program with a separate data area maintained in memory for
its execution.

3.2 What components and mechanisms of the operating system are involved in job
management?

1. Scheduler

2. Job priorities

3. Scheduling strategy

3.3 What are the states of a job?

A job may be in any one of the following three states at any given moment:

• Running state: if the process is actually using the CPU – only one job can be in the running
state at a given time.

• Ready state: if the job can make use of the CPU if it is available. More than one job can be in
the ready state at a given time.

• Blocked state: if the job is waiting for input or output and cannot use the CPU even if the
CPU is free to be used – more than one job can be in the blocked state at a given time.

The following diagram shows the relationship between the three states for a particular job.

If the currently running job requests I/O, it relinquishes the processor and goes into the blocked
state.

If the job uses up its time slice before completing its task, then the job is placed in the ready
state, while some other job gains the use of the processor and goes into the running state.

[Diagram: the three-state model. A job entering memory joins the Ready state. Ready → Running
when the job is dispatched; Running → Ready when the time slice is exceeded; Running → Blocked
when input/output is requested or an interrupt is generated; Blocked → Ready when the event is
completed.]
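The three-state model above can be expressed as a small table of allowed transitions. The following
Python sketch is purely illustrative; the state and event names are assumptions, not part of any real
operating system's interface.

```python
# A minimal sketch of the three-state model shown above (illustrative only).
from enum import Enum

class State(Enum):
    READY = "ready"
    RUNNING = "running"
    BLOCKED = "blocked"

# Allowed transitions: (current state, event) -> next state
TRANSITIONS = {
    (State.READY, "dispatched"): State.RUNNING,          # LLS gives the job the CPU
    (State.RUNNING, "time_slice_exceeded"): State.READY,
    (State.RUNNING, "io_requested"): State.BLOCKED,      # or an interrupt is generated
    (State.BLOCKED, "event_completed"): State.READY,
}

def next_state(current, event):
    """Return the new state, or raise if the transition is not allowed."""
    try:
        return TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {current.name} on '{event}'")

# Example: a job that runs, requests I/O, and becomes ready again.
s = State.READY
for event in ("dispatched", "io_requested", "event_completed"):
    s = next_state(s, event)
    print(event, "->", s.name)
```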

3.4 Why are jobs allocated priorities?

The processor can run only one job at a time. Certain jobs need to be given higher priorities as
they are more important than others. Jobs assigned higher priorities will gain processor time
ahead of those with lower priorities.

3.5 How are jobs allocated priorities?

In a multitasking environment, either the users or the operating system can allocate job priorities.

3.6 What is a scheduler?

The scheduler is the component in the operating system which allocates job priorities. The
scheduler does this in accordance with a scheduling policy.

The scheduling policy should not be too complex; otherwise the computer will spend more time
deciding whose turn it is than getting on with the jobs.

3.7 What are the types of schedulers?

Jobs entering the system or memory are put in the Job Queue (Ready Queue) by the high level
scheduler (HLS).

Sometimes it is necessary to swap the jobs between the main memory and the backing store. This
is done by the medium level scheduler (MLS).

Moving jobs from ready state to running state is done by the low level scheduler (LLS).

3.8 What are the objectives of a scheduling policy?

A scheduling policy should try to

1. Maximize throughput – try to process as many jobs as possible in as little time as possible.

2. Maximize the number of interactive users receiving acceptable response times (at
most a few seconds)

3. Balance resource use – if for example a printer is idle, a high priority could be given to
a job that uses the printer

4. Avoid pushing the low priority jobs to the back of the queue indefinitely. This can be
achieved by giving jobs a high priority based on how long they have been in the queue.

5. Enforce priorities – in environments where users can assign priorities to jobs, the
scheduler must favor the high priority jobs

6. Achieve a balance between response time and utilization of resources.

7. Prevent a system failure due to overloading

3.9 What are the criteria that the scheduler would use?

1. How much I/O the job needs

2. How much processor time the job has already used

3. How much time the job has already spent in the ready queue

4. How much more processor time the job needs to complete

5. Job priority – high priority processes should be favored

6. Whether the job is batch or interactive

7. How urgently a fast response is needed

3.10 What are the scheduling strategies for job management?

1. Shortest job first
The scheduler reviews the jobs waiting to be processed. The job which can be completed in the
shortest time will be processed first.

2. Shortest remaining time first
When an interrupt occurs, the scheduler reviews the jobs and selects the one which can be
completed in the shortest remaining time.

3. Job priority
The CPU executes the jobs in the order of their importance.

4. Round robin
Each job is given a time slice. When the time slice is over, the job goes to the back of the ready queue.
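As an illustration of the round robin strategy just described, the short Python sketch below simulates
jobs cycling through a ready queue. The job names, processing times and the 2-unit time slice are
made-up values used only for illustration.

```python
# A minimal round-robin sketch (illustrative values only).
from collections import deque

def round_robin(jobs, time_slice=2):
    """jobs: dict of job name -> remaining processing time."""
    ready = deque(jobs.items())             # FIFO ready queue
    order = []                              # completion order
    clock = 0
    while ready:
        name, remaining = ready.popleft()   # job moves to the running state
        run = min(time_slice, remaining)
        clock += run
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining)) # time slice over: back of the queue
        else:
            order.append((name, clock))     # job finished at this time
    return order

print(round_robin({"A": 5, "B": 3, "C": 1}))
# [('C', 5), ('B', 8), ('A', 9)]
```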

3.11 How are priorities assigned according to job type in a multitasking operating system?

1. Jobs can be classified as I/O bound jobs and processor bound jobs

2. I/O bound jobs should be given higher priorities than processor bound jobs

3. This allows peripherals to operate while a processor bound job is processed

3.12 How does the operating system manage the throughput of jobs?

1. Enables jobs to exist in any one of the three states: running, ready and blocked

2. Allocates priorities to jobs according to the scheduling policies

3. Maintains a queue of ready state jobs

4. Uses an LLS to move jobs in and out of the running state

5. Uses an MLS to swap jobs between main memory and virtual memory

6. Uses an HLS to load jobs into the job queue in the ready state

4. Memory Management

4.1 How is memory managed to allow more than one large job to appear to be stored
simultaneously in memory?

1. By partitioning the memory and allocating a partition to each job, rather than allowing a single
job to spread through the whole memory.

2. Split a job into pages and put less frequently used pages in virtual memory.

3. Swap the pages between the main memory and the virtual memory as needed

4. Jobs of different sizes are allocated to partitions of appropriate sizes



4.2 What Components and Mechanisms of the Operating System are involved in Memory
Management?

1. Loaders

2. Linkers

3. Segmentation (also known as variable partitioning)

4. Paging

5. Virtual memory

6. Swapping

4.3 What is a loader?

1. A loader is a program in the operating system which loads jobs into the main memory of the
computer.

2. When doing this the loader adjusts the memory addresses of the instructions by calculating the
address of each instruction relative to the address of the first instruction.

4.4 What is a linker?

A linker is a program which links modules correctly by calculating the addresses of the linked
modules relative to the address of the first instruction of the program. Linkers allow library
routines to be linked to several programs and help in producing an executable file.

4.5 What is segmentation (also known as variable partitioning)?

1. This is the act of dividing memory into logical parts called segments or variable
partitions, which are of different sizes.

2. Segments match the natural divisions of a job, such as dividing the program into subroutines and
groups of related subroutines.

3. Individual segments can be present in memory at one time without the need for the whole
program to be there

4. An index is required to store the base address and the length of each segment
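A minimal Python sketch of such an index is shown below; the base addresses and lengths are assumed
values chosen only for illustration.

```python
# A minimal sketch of a segment index: each entry stores the base address and
# length of one segment (the numbers are illustrative).
segment_table = {
    0: {"base": 1000, "length": 400},   # e.g. main program
    1: {"base": 5000, "length": 120},   # e.g. a group of related subroutines
}

def translate(segment, offset):
    """Map a (segment, offset) logical address to a physical address."""
    entry = segment_table[segment]
    if offset >= entry["length"]:
        raise MemoryError("offset outside the segment")  # protection check
    return entry["base"] + offset

print(translate(1, 50))   # 5050
```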

4.6 What is paging?

1. This is the act of dividing memory into logical parts of equal sizes.

2. Jobs or files are allocated a number of pages according to the size of the job.

3. The bigger the software, the higher the number of pages that will be allocated to it.

4. Individual pages can be present in memory at one time without the need for the whole program
to be there.

5. An index is required to store the addresses of the pages. Addresses can be calculated by adding
the raw address (the offset within the page) to the page's base address.
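The page address calculation can be illustrated with the short Python sketch below; the page size and
the frame base addresses in the index are assumptions made for illustration.

```python
# A minimal sketch of page-based address translation (illustrative values).
PAGE_SIZE = 1024

# Page index: logical page number -> base address of the frame holding it
page_table = {0: 8192, 1: 4096, 2: 20480}

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)  # split into page number and offset
    return page_table[page] + offset                   # frame base + offset

print(translate(1500))   # page 1, offset 476 -> 4096 + 476 = 4572
```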

4.7 What is virtual memory?

1. Virtual memory is an area on the disk that is used to temporarily store the pages of currently
running jobs and their data, which makes it appear that the computer has more memory than it
actually has.

2. The pages kept in the virtual memory are usually the ones which are less frequently used by the
processor.

4.8 What is swapping?

Swapping is the act of transferring less frequently used pages or segments to the virtual memory
to make room available in the main memory for the frequently used pages or segments.
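The Python sketch below illustrates one possible swapping decision: when main memory is full, the
least recently used page is moved out to virtual memory. The three-frame memory and the
least-recently-used rule are assumptions for illustration; real operating systems use a variety of
replacement policies.

```python
# A minimal sketch of swapping the least recently used page out to virtual
# memory when main memory is full (frame count and page names are made up).
from collections import OrderedDict

MAX_FRAMES = 3
memory = OrderedDict()       # pages currently in main memory, oldest first
virtual_memory = set()       # pages swapped out to disk

def access(page):
    if page in memory:
        memory.move_to_end(page)                # mark as recently used
        return
    if len(memory) == MAX_FRAMES:
        victim, _ = memory.popitem(last=False)  # least recently used page
        virtual_memory.add(victim)              # swap it out to disk
    virtual_memory.discard(page)                # swap in if it was on disk
    memory[page] = True

for p in ["A", "B", "C", "A", "D"]:
    access(p)
print(list(memory), virtual_memory)   # ['C', 'A', 'D'] {'B'}
```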

4.9 What is fragmentation of memory?

1. As jobs and files are loaded into the memory they occupy space which when vacated leaves
gaps in memory. This splitting of available memory into noncontiguous pieces is called
fragmentation

2. If a larger file is sent to that area, it has to be broken up to fit into those fragmented areas of the
memory.

5. Management of Interrupts and Spooling

5.1 What Components and Mechanisms of the Operating System are involved in
input/output Management?

1. Interrupt handler

2. Interrupts

3. Spooling

5.2 What is an interrupt?

An interrupt is a signal from a hardware device, or an instruction from a software program, indicating
to the processor the need for a change in execution.

5.3 What is an interrupt handler?

1. An interrupt handler is a routine in the operating system which puts interrupts in a queue until they
are processed by the processor.

2. There is a specific interrupt handler for each different type of interrupt, for example, keyboard
interrupt handler, mouse interrupt handler etc.

5.4 What are the events that generate interrupts?

1. Interrupts generated by the running job

The job might need to perform I/O, obtain more storage or communicate with the operator.

2. I/O interrupts from hardware devices

These are initiated by the I/O hardware and signal to the CPU that the status of a channel or
device has changed. This change of the status of a channel happens when:

(a) An input device needs to send data to the CPU

(b) An output device needs to send a message to the CPU

(c) When an error occurs in the hardware device

(d) When a device is made ready.

3. Timer interrupts

These are generated by the timer within the processor and allow the operating system to
perform certain functions at regular intervals called "time slices". For example, in a multi-user
system each user is allocated a certain amount of processor time before the processor's timer
generates an interrupt, which makes the processor stop attending to that process and start
attending to another one.

4. Program check interrupts

These interrupts are caused by various types of errors such as division by zero.

5. Machine check interrupts

These are caused by malfunctioning hardware.



5.5 What happens when a processor which is currently working on a job receives an
interrupt?

1. When an interrupt occurs the processor completes the current fetch execute cycle anyway

2. After completing the current fetch execute cycle, the processor compares the priority of the
current job with the priority of the job that should handle the interrupt

3. If the priority of the job handling the interrupt is higher, the contents of the special registers are
saved, the current job is put in the ready state and the job handling the interrupt is put in the
running state

4. The interrupt is serviced by running the interrupt-handling program. On completion, the saved
values of the special registers from the original program are restored and the original job is resumed

5. If the priority of the job handling the interrupt is lower (after step 2), the job handling the
interrupt is put in the correct place in the job queue according to its job priority and the current
job continues into the next fetch execute cycle
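Steps 2 to 5 above can be sketched as follows; the priority values, register names and job records in
this Python snippet are assumptions made purely for illustration and do not represent a real
operating system's data structures.

```python
# A minimal sketch of the priority decision made after the current
# fetch-execute cycle completes (all values are illustrative).
def on_interrupt(current_job, interrupt_job, registers, job_queue):
    """Return the job that should be running after the interrupt is assessed."""
    if interrupt_job["priority"] > current_job["priority"]:
        current_job["saved_registers"] = dict(registers)  # save special registers
        current_job["state"] = "ready"
        interrupt_job["state"] = "running"                # service the interrupt now
        return interrupt_job
    # Lower priority: queue the interrupt job and carry on with the current job
    job_queue.append(interrupt_job)
    job_queue.sort(key=lambda j: j["priority"], reverse=True)
    return current_job

queue = []
running = on_interrupt(
    {"name": "batch", "priority": 1, "state": "running"},
    {"name": "keyboard", "priority": 5, "state": "ready"},
    {"PC": 120, "ACC": 7},
    queue,
)
print(running["name"])   # keyboard
```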

5.6 Further notes on how the interrupt mechanism works

1. There is a special register in the CPU called the interrupt register. At the beginning of each
fetch-execute cycle, the interrupt register is checked. Each bit of the register represents a
different type of interrupt and if a bit is set, the state of the current job is saved and the
operating system routes control to the appropriate interrupt handler.

2. Since more than one device may request an interrupt simultaneously, each device is assigned a
priority. Slow-speed devices such as terminals and printers are given a high priority since they
are more liable to fall behind with what they are doing; they should be allowed to start as soon as
possible so that they do not eventually hold up processing.

3. In some cases if an interrupt occurs during data transfer, some data can get lost and so the
operating system will disable other interrupts until it completes its task.

4. In a large multi-user system there is a constant stream of interrupts directed at the processor
and it must respond as quickly as possible to these in order to provide an acceptable response
time. Once an interrupt is received, the operating system disables interrupts while it deals with
the current interrupt. Since this could mean that interrupts are disabled for a large proportion of
the time, large operating systems simply determines the cause of the interrupt and then passes
the problem over to the specific interrupt handler, leaving itself free to deal with the next
interrupt.

5. In smaller systems, the operating system handles all interrupts itself, which means that the
interrupts are disabled for a large proportion of time.

5.7 What is polling?

Polling is the sequential checking of a range of possibilities or peripheral devices to identify
which should be dealt with next.

5.8 What is the difference between an interrupt system and a polling system?

1. An interrupt system is one where a peripheral is served when it generates a control signal to
break the execution of the current process.

2. A polling system is one where a range of peripheral devices is checked sequentially to identify
which should be dealt with next.
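A minimal Python sketch of one polling pass is shown below; the device names and the ready flags are
illustrative stand-ins for real hardware status registers.

```python
# A minimal sketch of polling: the processor checks each device's status flag
# in sequence, whether or not the device needs attention.
class Device:
    def __init__(self, name, ready=False):
        self.name = name
        self.ready = ready          # in hardware this would be a status register bit

    def service(self):
        print("servicing", self.name)
        self.ready = False

devices = [Device("keyboard", ready=True), Device("disk"), Device("printer", ready=True)]

def poll_once(devices):
    """Check every device in turn."""
    for device in devices:
        if device.ready:
            device.service()

poll_once(devices)      # services keyboard, then printer
```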

5.9 What does SPOOL stand for?

SPOOL stands for Simultaneous Peripheral Output On Line.

5.10 What is the purpose of spooling?

1. Spooling works by directing jobs and data files to a queue on a backing store before sending
them to their intended device.

2. The jobs and data files in the queue will be then taken up by the intended device on a first in
first out basis.

3. Spooling is a method of resource allocation that maximizes the use of peripheral equipment.
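The queue on the backing store can be sketched as a simple first-in first-out structure; in the Python
snippet below the document names are illustrative.

```python
# A minimal sketch of a spool queue: jobs join the back of the queue and are
# sent to the device first-in first-out (document names are made up).
from collections import deque

spool = deque()                      # queue held on the backing store

def spool_job(document):
    spool.append(document)           # job joins the back of the queue

def send_next_to_printer():
    if spool:
        document = spool.popleft()   # first in, first out
        print("printing", document)

for doc in ["report.docx", "invoice.pdf", "notes.txt"]:
    spool_job(doc)
send_next_to_printer()               # prints report.docx
```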

5.11 What is print spooling?

1. Print spooling is the act of putting multiple print jobs, intended to be sent to a print server, in a
queue known as the print spooler. The print spooler runs as a separate job.

2. A print server is a multi-access computer system. The network is designed in such a way that the
printing jobs from all the network users are sent to the print server.

3. If the jobs are not prioritized, they are executed by the processor taking them from the print
queue one at a time on a first-in-first-out basis. If the jobs are prioritized the jobs would be sent
to the printer in the order of their priorities.

5.12 What is a print spooler?

A print spooler is a queue of printing jobs created at a multi-access computer system (the print
server) when multiple printing jobs are sent to the printer simultaneously.

5.13 Why does the execution of several printing jobs sent simultaneously to a print server
become slow and inefficient when they are executed by the processor without spooling?

1. When multiple printing jobs are sent to a print server simultaneously and a print spooler is not
used by the print server, a separate job will be created to handle each printing job. As a result
only one document can be sent to the printer in one time slice.

2. If spooling is used, more than one document can be sent to the printer during a single time slice,
provided that the printer buffer is large enough to hold that much data. As a result fewer time
slices are needed to finish the jobs, which makes printing much faster when a print spooler is used.

6. Multi-Access Systems

6.1 What are the operating system components required by a mainframe system to
implement multi-access capability?

Time-slicing
Segmentation
Swapping
Spooling

6.2 How is the multi-access capability implemented?

1. By giving each user in turn a fixed amount of processor time.

2. The processor time allocated to each user or the time slice for each user to access the central
computer depends on the system clock of the processor.

3. Each time a user tries to access the central computer an interrupt signal is generated by the
user’s computer which has to be handled by the central computer’s processor.

4. The scheduler of the central computer decides the next job to be handled based on the central
computer’s scheduling strategy.

6.3 As the number of users of a multi-access system increases, there is a gradual degradation of
performance that rapidly worsens as the number of users is increased further. Explain this
observation.

1. In a multi-access system each user in turn is given a time slice of processor attention on a
round robin basis. Therefore as the number of users increases they experience a gradual
degradation of performance.

2. When the number of users is increased further, the main memory would not have enough space
to maintain all the jobs in it at the same time. Beyond this point swapping would occur to
transfer pages to the virtual memory, to make room in the main memory for the frequently used
pages or segments. The swapping and excessive paging activity would lead to rapid worsening
of performance.

6.4 In a multi-access system when the number of users is increased how can the problem of
initial gradual degradation and subsequent rapid worsening of performance be remedied?

1. By excluding jobs which require very large storage



2. By increasing the main memory of the computer

7. Features of a Network Operating System

7.1 File sharing

This feature of a network operating system allows many users of the network to use the same file
at the same time.

7.2 Providing security

1. In order to avoid corruption and inconsistency, the network operating system must implement
access rights on the file. For example, the network operating system should only allow one user
“write access” to the file and other users must be allowed “read only access”.

2. The network operating system should not allow the users to change the system files. This is
implemented by making these files read only as well as making them “hidden”.

3. The network operating system should not allow ordinary users, other than the system
administrator, to change the attributes that make the system files “hidden”.

4. The network operating system must allow access to the computer system only to users who can
establish their identity by providing a valid password.

7.3 Creating personal profiles

The network operating system should create profiles for individual users. For example, when a
user logs on with a valid password he or she is always presented with the same screen.

7.4 Monitoring the network by logging the details of the network usage

For example the network operating system should log the following details:
Files that a particular user accessed
Documents printed using the network printer
Amount of time that the network was used

7.5 Serving copies of files to users

The network operating system should serve a copy of a file when a user requests one.

7.6 Directory services

1. This makes it possible for the network administrator to partition the disk storage space of the
network server into logical areas

2. Each logical area of the network server has a fixed amount of disk space and the network
operating system should prevent the user from exceeding the amount of storage allocated

3. A logical area which is assigned to one user can be prevented from access by other users.

4. Some logical areas of the network server may be shared, i.e. those files in the shared folders
can be accessed by many users at the same time.

7.7 Transparency

This is the capability of the network operating system to make the user feel that he is using only
the resources of his own computer, even when he is provided with the services from other
computers in the network. Some examples of transparency are:

1. When a user logs onto the network, the network operating system displays the same profile
(same screen with the same icons and menus) irrespective of the computer from which the user
logs onto the network.

2. The operating system controls the hardware in such a way that the individual does not know that
he is using a network. For example, when the user sends a file to the printer, the file is
automatically sent to the print server to which the printer is connected.

3. When the user requests access to the Internet, the network operating system automatically has
the user connected to the proxy server, which is already connected to the Internet.
