
2nd SEM B.Sc.



An operating system is a computer program that acts as an intermediary

between the user of the computer and the computer hardware.

Abstract view [components] of an O.S.:

[Figure: Users 1, 2, 3 … n → Application programs → Operating system → Hardware]

Users develop and use application programs. These application programs

run on the hardware with the help of the operating system.

Resource Manager & Extended machine:

The O.S. acts as a resource manager because it accepts requests for resources

(CPU time, memory, I/O devices) and allocates and manages these resources.

Because the O.S. presents a convenient interface to different types of I/O

device files & software, the operating system is also called an Extended machine.

Types of O.S.

1. Simple batch system:

Supplying jobs to the CPU, and the setup time for each job, are real problems.

To solve them, the O.S. introduced the concept of the simple batch system,

which gives high CPU utilization.

The above problems are solved using the following techniques:

Using a tape drive, jobs are supplied continuously to the CPU, which

increases CPU utilization.

A job sequence is developed using a queuing method, so jobs are supplied one

by one.



2. Spooling:

Spooling is a method of supplying jobs to the output device continuously,

without any gaps. This technique needs the following setup:

[Figure: Input device (card reader) → Disk (high-capacity buffer) → CPU → Printer]

In the above figure, the disk acts as a high-capacity buffer. It receives

data or jobs from the card reader & supplies them to the printer directly from the buffer.

Spooling decreases the idle time of both the CPU & the output device.

Multiprogrammed batch system: -

This increases CPU utilization. The O.S. keeps multiple jobs in memory at a

time and sends jobs to the CPU continuously, batch by batch. A multiprogrammed batch system allows multiple jobs to be entered at a time.

Drawback: - The user cannot interact with a job while it is executing.

Time sharing system: -

A time sharing system allows several users to use the system simultaneously. Multiple jobs are executed at the same time; each action or command tends to be short, so only a little CPU time is needed for each user. Each job shares the CPU in turn.

Real time system: -

A real time system is also a type of time sharing system, but it works under fixed time constraints: processing must be done within the defined constraints, otherwise the system will fail.

Distributed system: -

In a distributed system the jobs are distributed to all the processors [multiprocessing systems only]. This technique is used in network operating systems.


Personal computer system: - This O.S. is suitable for personal computers. It is designed purely for a single user & his needs. Users attach devices such as printers, fax machines, music systems and video units, and this O.S. supports them. It also allows the use of the Internet, LANs, etc. Virus protection is another important requirement of personal computers.

Parallel system: -

[Figure: User → Memory → CPU (multiple processors P1, P2, P3) → I/O]

A combination of many processors acts as the CPU. The main advantage of such a system is that it increases the overall efficiency of the system, and jobs are completed in minimum time.

System components [Operating system structure]:

The operating system is built from different components, each with its own structure. They are:

1. Process Management,

2. Main memory management,

3. Secondary storage management,

4. I/O management (Device),

5. File management, and

6. Protection system, and

7. Command interpreter system.

1. Process management: - A process is a job that is ready to execute. The O.S. manages the following:

Creation of processes,

Suspension & resumption of processes,

Allocation of the CPU & execution of processes,

Handling CPU wastage time.


2. Main memory management: -

Main memory is a storage device. The O.S. manages the following tasks:

Keeping track of the usage of memory,

Allocation & deallocation,

Allocating the process with some address.

3. Secondary storage management: -

Main memory is limited & volatile, but a secondary storage device is

permanent, and a huge number of processes & data are stored on it.

The O.S. manages the following tasks:

Scheduling of the disk,

Allocation & deallocation,

Managing free space, etc.

4. I/O device management: -

The O.S. manages the following tasks:

Interfacing input & output devices,

Receiving & sending data,

Initiating input & output operations, etc.,

Managing all input & output device activity.

5. File management system: -

It is one of the most visible components of the O.S.

The O.S. manages files through the following activities:

Locating & saving the file,

Creating the file,

Deleting the file,

Modify the file,

Backup or recovery of the file, etc.

6. Protection system: -

Protection is very important to protect files, data & software, etc.

It maintains the following items:

Input-output protection,

Memory protection,

CPU protection,

Process protection.


Command interpreter system: - The command interpreter acts as an interface between the user & the system. When the user gives a command, the interpreter checks the validity of the command. If it is proper, it is executed immediately.

Operating system services: - The O.S. provides certain services to the programmer & user to make programming easier & the computer easier to operate. The different services are:

Program execution,

Input & output operations,

File system manipulation,

Communications,

Error detection,

Resource allocation,

Accounting,

Protection, etc.

System calls: - System calls provide system services through system programs. They provide the interface between a process & the operating system.

Information or parameters may be passed to system calls using different methods. The various system calls are:

1. System calls for process management: - End or abort a process, load & execute a process, create & terminate a process.

2. System calls for file management: - Create or delete a file; open, read, write & close a file.

3. System calls for device management: - Request a device, release a device, logically attach or detach devices, get or set device attributes.


4. System calls for communication management: -

Create or delete a communication connection, attach or detach remote devices, send or receive messages.

5. System calls for information maintenance: - Get or set the time or date, get or set system data, get or set process, file or device attributes.

System programs: - The O.S. also contains a set of programs called system programs. The main aim of these programs is to simplify the user's interaction with the system. System programs are divided into different categories:

File manipulation, Status information, File modification, Programming language support, Communication, Application programs, Command interpreter.




PROCESS: A Process is a unit of work in a modern computer system. It is basically a program in the state of execution.

States of a Process:

A process (program) passes through various steps as it executes. These steps of process activity are called the states of a process. The process state diagram explains the process states.

[State diagram: New → (accepted) → Ready → Running → Exit; Running → (interrupt) → Ready; Running → (I/O wait) → Waiting; Waiting → (I/O complete) → Ready]

New state: In this state a passive program is converted into a dynamic activity called a process; the process is created here.

Ready state: In this state the process is waiting to be assigned to the CPU.

Running: In this state the process executes a sequence of instructions & may go to the waiting state. [Only one process is running at a time.]

Waiting: The process is waiting for the occurrence of some event, such as the completion of an I/O operation.
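The state transitions in the diagram above can be sketched as a small state machine. This is only an illustration: the `Process` class, state names and transition table here are hypothetical, not actual O.S. code.

```python
# A minimal sketch of the process state transitions described above.
# The allowed-transition table follows the state diagram.

ALLOWED = {
    "new": {"ready"},                         # process created, admitted to ready queue
    "ready": {"running"},                     # scheduler dispatches the process
    "running": {"ready", "waiting", "exit"},  # interrupt, I/O wait, or completion
    "waiting": {"ready"},                     # I/O complete
    "exit": set(),                            # terminated
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
for s in ["ready", "running", "waiting", "ready", "running", "exit"]:
    p.move_to(s)
print(p.state)  # exit
```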


Process control block [P.C.B.]:

This block gives information about a process. The information identifies the current state of the process. The O.S. uses the P.C.B. to keep track of the status of the process.

The information is as follows:

1. Current state of the process (Ready/Run/Wait)

2. Process identification

3. Process priority.

4. Pointers to allocated resources.

5. I/O status information.

6. Program counter (PC): indicates the address of the next instruction to be executed.

[P.C.B. layout: Process number | Process state | Program counter | Registers | Memory limits | List of open files]
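The P.C.B. fields listed above can be sketched as a record type. The field names follow the layout in the notes; the class itself and the example values are hypothetical illustrations.

```python
# A sketch of the P.C.B. as a Python dataclass holding the fields above.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PCB:
    process_number: int
    process_state: str                   # Ready / Run / Wait
    priority: int
    program_counter: int                 # address of the next instruction
    registers: Dict[str, int] = field(default_factory=dict)
    memory_limits: Tuple[int, int] = (0, 0)
    open_files: List[str] = field(default_factory=list)

pcb = PCB(process_number=7, process_state="Ready", priority=1, program_counter=0x400)
pcb.open_files.append("file.dat")
print(pcb.process_state, pcb.open_files)
```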

Scheduling of processes:

In a multiprogramming environment the processor switches from process 1 to process 2 and back to process 1 again whenever there are multiple activities.

[Figure: Process 1 executes → save registers → (idle) → reload registers → Process 2 executes → save registers → reload registers → Process 1 resumes]


Scheduling queues:

In a multiprogramming environment, whenever a process is created it is

immediately put into the ready queue.

[Figure: ready queue → CPU; CPU → I/O queue on an I/O request; CPU → ready queue on time-slice expiry; a process may also create a sub-process and wait for it]

Each process goes through a cycle that includes CPU execution & I/O wait.

Once the process is allocated the CPU & is executing, one of several events could occur:

The process could issue an I/O request & be placed in an I/O queue.

The process could create a new sub-process & wait for its termination.

The process could be removed forcibly, by using interrupt signals.

Schedulers: -

The choice or selection of a process from its corresponding queue is done

by the part of the O.S. called the scheduler.

Schedulers are of different types:

1. Long term scheduler.

2. Short term scheduler.


The long term scheduler selects jobs from the job pool and loads them into

memory for execution. The short term scheduler selects a process that is

already in the ready queue for execution.

[Figure: job pool → (long term scheduler) → ready queue → (short term scheduler) → CPU; the suspended or swapped process queue and the I/O waiting queue feed back into the ready queue]

Context switch: -

In a multiprogramming environment it is necessary to move from one

process to another. Switching the CPU to another process requires saving the

state of the old process & loading the saved state of the new process. This is

called a context switch.

Scheduling Criteria:

Each scheduling algorithm has its own set of properties. An algorithm is

selected in a particular situation depending upon certain criteria. Some of the

criteria are:

a) CPU utilization:

Utilization of the CPU means keeping the CPU busy executing the users'

programs.

b) Throughput: -

It is the number of jobs completed per unit time. For long processes this

rate may be low, whereas for short processes it is higher.

c) Turnaround time: -

It is the interval from the time of submission of a job to its completion.

This is the sum of waiting time, execution time and idle time.

d) Response time: -

The amount of time from the submission of a request until the first

response is produced.


e) Waiting time: - The amount of time spent waiting for the allocation of various resources. It is the difference between the turnaround time & the actual execution time of the job.

First come first served scheduling:

This is the simplest algorithm: it serves the jobs in order of their arrival. When a process enters the ready queue, its PCB is linked to the tail (end) of the queue.

Drawback: - The average waiting time is often long. Ex:

Process   Execution time (in ms)
P1        32
P2         2
P3         5

The waiting time of P1 = 0 ms
-------- do --------- P2 = 32 ms
-------- do --------- P3 = 34 ms
Total                 = 66 ms

The avg. waiting time = 66 / 3 = 22 ms.
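The FCFS arithmetic above can be checked with a short sketch. The burst times 32, 2 and 5 ms are the example's values; the helper function itself is only an illustration.

```python
# FCFS waiting times: each process waits for the bursts of all earlier arrivals.

def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this process waits for everything before it
        elapsed += b
    return waits

waits = fcfs_waiting_times([32, 2, 5])
print(waits, sum(waits) / len(waits))   # [0, 32, 34] 22.0
```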

Shortest job first [SJF] scheduling: - In this algorithm the CPU executes processes depending upon their execution time, selecting the process with the minimum execution time first. If 2 or more processes have the same execution time, FCFS is used among them. In this algorithm the average waiting time is lower. For the above table, total waiting time = 0 + 2 + 7 = 9 ms, so the average waiting time = 9 / 3 = 3 ms.
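SJF on the same three bursts can be sketched as "sort by execution time, then apply FCFS". The function is illustrative only.

```python
# SJF: run the shortest remaining job first (no preemption here).

def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:             # shortest burst first
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([32, 2, 5])   # P1=32, P2=2, P3=5
print(waits, sum(waits) / len(waits))   # [7, 0, 2] 3.0
```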


Priority scheduling: - In this algorithm processes are executed depending on priority: the highest priority process runs first. Priority is defined in 2 ways, internally & externally. Internally defined priorities use some measurable quantities to compute the priority of the process. Externally defined priorities are based on the importance of the process. The major problem of this scheduling is indefinite starvation or blocking.

Round robin [RR] scheduling:

In this scheduling the processes are executed using time slices. The RR technique is mainly designed for time sharing systems. A small unit of time, called a time quantum or time slice, is defined, and each process executes for at most one quantum at a time, in turn.
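The quantum-by-quantum behaviour can be sketched with a queue. The burst times and quantum here are hypothetical, chosen only to show processes cycling to the back of the queue.

```python
# A minimal round-robin sketch: each process runs for at most one time
# quantum, then rejoins the back of the queue.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))    # (pid, remaining time)
    finish, clock = {}, 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))   # not finished: requeue
        else:
            finish[pid] = clock                    # completion time
    return finish

print(round_robin([5, 3, 8], quantum=2))  # {1: 9, 0: 12, 2: 16}
```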

Multilevel queue scheduling:

Depending upon the characteristics of the jobs, a multilevel queue groups them into batches and produces them for execution.

[Figure: five different queues, highest to lowest priority: system processes, interactive processes, interactive editing processes, batch processes, user processes]

The ready queue is partitioned into these queues, commonly scheduled with a fixed-priority preemptive algorithm.

Multiple processor scheduling: - Multiple processors used in a single system increase the throughput, reliability & computing power of the system. One of the main advantages of a multiprocessor environment is the higher reliability of the system: if one of the processors fails, the remaining processors can share the load & continue the work.




Logical address: - To access data the CPU calculates & generates a memory

address; such an address is the logical address.

Physical address: - The actual location in memory where the data is stored

is referred to as the physical address.

Contiguous allocation memory management

A process occupies a set of continuous (contiguous) memory locations. The schemes are:

1. Single contiguous allocation,

2. Partitioned allocation,

3. Re – locatable partitioned allocation.

Basic functions of memory management

1. Knowing the allocation status of each memory location: - whether it is free or allocated.

2. Determining a proper policy for allocation: - It determines which process should get how much memory & where.

3. Memory allocation: - The process is allocated memory according to the policy determined above.

4. Memory de-allocation: - Once the process is over, its memory is de-allocated.

Single contiguous allocation:

This is a very simple technique for allocating a process, and it requires very

simple hardware. The memory is divided into 2 portions: one for the O.S. and

the other completely available to the user process.

[Figure: memory map with boundaries at 0 K, 100 K, 500 K and 1024 K; the O.S. occupies the lowest portion, the rest is available to the user process]

Hardware support: -

"Bound registers" are used for allocating the memory.

Advantages: -

1. It requires very little hardware.

2. It is very simple to design & implement.

3. It requires very little memory.

Partitioned allocation: -

In a multiprogrammed environment multiple jobs are stored in memory.

The easiest method is to divide the physical memory into several

partitions & allocate each partition to a separate job's address space.

If the size of the partitions is fixed, this is called static partitioning.

If the size of the partitions depends on the process size, this is called

dynamic partitioning.

Static partitioning: - This is one of the simplest schemes for memory allocation.

Here the physical memory is divided into a number of partitions of fixed size.

The O.S. maintains a table indicating which parts of memory are allocated

& which are free; this table is called the static partition status table.

Dynamic partitioning: - Here also the memory is partitioned into different

allocations, depending on the processes.

Two tables may be used to implement this tech.

1. Allocated partition table [APT ]

2. Free area table [FAT]

Storage allocation strategies: -

Jobs are stored in partitioned memory using different strategies. They are:

1. First fit: - The process is allocated the first available free area that is

large enough.

2. Next fit: - This is a modification of first fit. Each search starts where the

previous one ended, avoiding the small blocks that tend to accumulate at the

beginning of the free area table.

3. Best fit: - In this scheme the free area table is sorted by size, and the

smallest free area that fits the process is chosen.

4. Worst fit: - This is the complement of best fit. It allocates the largest

available free area.
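The three placement strategies can be compared with a small sketch. The free-area sizes are hypothetical; each helper returns the index of the chosen free block, or None if the request cannot be satisfied.

```python
# First-fit, best-fit and worst-fit against a simple free-area table.

def first_fit(free, size):
    # first block large enough
    return next((i for i, b in enumerate(free) if b >= size), None)

def best_fit(free, size):
    fits = [(b, i) for i, b in enumerate(free) if b >= size]
    return min(fits)[1] if fits else None   # smallest block that fits

def worst_fit(free, size):
    fits = [(b, i) for i, b in enumerate(free) if b >= size]
    return max(fits)[1] if fits else None   # largest block

free = [100, 500, 200, 300]     # free areas in KB (hypothetical)
print(first_fit(free, 250), best_fit(free, 250), worst_fit(free, 250))
# first fit -> 1 (500 KB), best fit -> 3 (300 KB), worst fit -> 1 (500 KB)
```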



Relocatable partitioned allocation: -

In the partitioned approach, as jobs are allocated and released, the free space in memory becomes fragmented. Combining all the free areas into one contiguous area

is called compaction.

Non-contiguous allocation: -

Paged memory management: - The job's address space is divided into pieces

of equal size called pages, & physical memory is divided into blocks of the

same size. Using a mapping facility the pages are placed in the blocks

such that the pages remain logically contiguous.

The pages are managed using a table called the page map table. The

page map table [PMT] contains a list of memory blocks.

Memory Management function: -

a) Keeping track of memory status: -

Using the page map table &

the memory block table.

b) Allocation policy: - Handled by the job scheduler.

Dynamic address translation: - The logical address is separated into 2 parts: a page

number p and a displacement (byte number) d within that page. Replacing the page

number with the memory block number b, using the PMT, generates the physical

address. The PMT that identifies the memory block can be transferred to

high-speed associative memory, which makes translation much faster.

[Figure: logical address (p, d) → PMT lookup gives block b → physical address = location of block b plus displacement d]
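The (p, d) split can be sketched in a few lines. The page size and the PMT contents here are hypothetical example values.

```python
# Dynamic address translation sketch: split the logical address into a
# page number p and displacement d, look p up in the page map table,
# and form the physical address.

PAGE_SIZE = 1024
PMT = {0: 5, 1: 2, 2: 7}       # page number -> memory block number (hypothetical)

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    b = PMT[p]                 # block number from the page map table
    return b * PAGE_SIZE + d   # physical address

print(translate(1050))  # page 1, offset 26 -> block 2 -> 2*1024 + 26 = 2074
```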

Advantages of paged memory allocation: -

a) Eliminates fragmentation, which results in better memory & processor utilization.

b) Compaction overhead eliminated.

Virtual memory management: -

In this method only a portion of the job's address space is actually loaded

into physical memory.

A virtual memory management scheme can thus use "more than 100%" of memory:

pages are brought in continuously, depending upon the addresses referenced.

Different virtual memory managements are:

a) Demand paging

b) Segmentation

c) Segmentation & paging.

Demand paging: - The first page is loaded into physical memory; the other pages of

the job are loaded as and when demanded. Hence it is referred to as demand

paged memory management. The O.S. loads each such page from secondary storage

& adjusts the page map table entries accordingly.

Segmentation: - Segmentation gives dynamic relocation. Memory wasted due to

fragmentation can be reduced by dividing the address space into logical modules

that may be placed in non-contiguous areas of memory. This is called

segmentation.

Page replacement

A resident page is replaced or removed when a new page must be loaded and no free block is available. There are 3 different techniques:

1. First-in first-out [FIFO]: This algorithm replaces the first-in page,

i.e. the resident page that has spent the longest time in memory

[the oldest page].

2. Least recently used [LRU]: This performs better than FIFO, because it

replaces the page that has been used least recently.

3. Tuple coupling: - Bigger pages are broken into smaller pages, like P1

& P2, and then the LRU technique is used to replace the pages.
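FIFO and LRU can be compared by counting page faults on a reference string. The reference string and the 3-frame memory here are hypothetical example values.

```python
# FIFO vs LRU page replacement, counting page faults.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 9 10
```

Note that on this particular string FIFO happens to fault less than LRU; in general their relative behaviour depends on the reference pattern.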

SWAPPING: - Removing a suspended process from memory to disk [swap-out] &

its subsequent return [swap-in] is called swapping.

It improves CPU utilization; in partitioned memory, if no suitable partition is

found, the new process can be kept swapped out on disk until one becomes free.




A file system consists of a collection of files & a directory structure which helps to organize the files. A file is a sequence of bits, bytes, lines or records; files are accessed through the file management system.

File attributes: - Files are differentiated into different types depending upon the extension of the file: Vinay.pas, file.dat, vin.c, xx.exe, file.obj, … etc. Each file has its own attributes. They are:

NAME: - A file is referred to by a symbolic name.

TYPE: - This information is needed for those systems that support different file type.

LOCATION: - This indicates the location of the file.

SIZE: - This indicates the current size of the file. ( In bytes, words or blocks ).

PROTECTION: - Access-control information controls who can perform the operations of reading, writing & execution.

TIME, DATE & USER IDENTIFICATION: - These attributes give the creation & last modification time & date for the file.

File operations: -[ 5 mark question ]: -

The following are the operations that can be performed on a file.

Creating a file: - Two steps are necessary to create a file:

1. Space in the system must be found for the file,

2. An entry for the new file must be made in the directory.

Writing a file: - First the particular file is searched for; the file pointer keeps track of

the location where the next write is to take place, and is updated after each write.

Reading a file: - To read a file, the directory entry is searched for the location of the file. Once the read takes place, the file pointer is updated for the next operation.

Repositioning a file: - The directory is searched for the named file & the current file pointer is set to the given repositioning value.

Deleting a file: - To delete a file, first the directory is searched. Then the particular location is freed, & the directory entry is erased.


Access methods: - [3 to 7 marks]:

A file stores information; when the file is used, this information must be accessed & read into computer memory. The following are the different access methods:

1. Sequential access,

2. Direct access, and

3. Other access methods.

Sequential access: - The simplest access method is sequential access. Information in the file is processed in order, one record after the other. This method works well with a file pointer, because the pointer advances one record at a time during read & write operations.

Direct access: - Magnetic disks & floppy disks support direct access. Here a particular file block or record can be accessed directly. There is no concept of sequential ordering, and insertions & deletions of records are easy.

Other access methods: - These methods are built on top of a direct access method. They generally involve the construction of an index for the file; an index table is maintained to access the file.

DIRECTORY STRUCTURE: - The directory can be defined as a symbol table or collection of entries. Each entry contains the file name and its location in the file system.

Operations on the directory: -

1. Search for a file: -A directory structure can be searched to find an entry for a particular file.

2. Create a file:- New files can be created & added to the directory.

3. Delete a file: - If the file is no longer needed such file can be removed from the directory.

4. List a directory: - All the files in the directory can be listed along with its entry contents.

5. Rename a file: - Because the name of a file represents its contents to its users, the name must be changeable when its contents are changed.


Directory Structures: - [ 5 marks ]:

1. Single – level directory,

2. Two – level directory, and

3. Tree structured directory.

Single-level directory: -

The simplest directory structure is a single-level directory. All the files

are contained in a single directory. It is very easy to implement, but becomes

very difficult when the number of users creating & naming files grows.

Two-level directory structure: -

To solve the above problem, it is better to go in for a 2-level directory

structure. The master directory contains each user directory, its address & access

control information.

Tree structured directory: -

It is a more powerful & flexible approach.

[Figure: Master directory → User directories → User sub-directories]

The use of a tree structured directory minimizes the difficulty in assigning unique

names, and this system is very useful for storing files individually in separate

directories.

File protection: - In a multi-user system, file protection is very important.

Many different protection mechanisms have been proposed.

Naming: - The user gives the file a name that other users cannot easily

guess. But this type of protection is easily broken.

Passwords: - Another approach is to use a password with each file, or to

control access to the whole system by password.

Access control: - Another approach is to make access dependent on the

identity of the user. Various users may need different types of

access to a file.



FILE SYSTEM STRUCTURE: - [3 TO 5 MARKS]:

[Figure: file system layers, top to bottom: User program; access methods (Sequential, Indexed Sequential, Random); Logical I/O; Basic I/O supervisor; Basic file system; Disk device drivers & Tape device drivers]

Explain this in your own steps, minimum 4 to 5 lines.


Files are allocated memory using different methods.


Contiguous allocation: - For each file a single contiguous set of blocks is allocated at the time of file creation. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. This is the simplest allocation method.



Chained allocation: - Here the allocation is on an individual block basis. Each block

contains a pointer to the next block in the chain. The file allocation table again needs just a single entry for each file, showing the starting block & the length of the file; the starting block contains the address of the next block, and so on, each block linked to the next.

Indexed allocation: - Here the file allocation table contains a separate one-level index for each file. The index block contains the addresses of all the file's blocks.
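Chained allocation can be sketched by following the per-block pointers from the starting block. The block numbers and the `NEXT` pointer table are hypothetical; in a real system the pointer lives inside each disk block.

```python
# A sketch of chained (linked) allocation: the file allocation table stores
# the starting block; each block stores a pointer to the next block.

NEXT = {4: 9, 9: 2, 2: None}    # block -> next block in the chain (hypothetical)

def file_blocks(start):
    blocks, b = [], start
    while b is not None:
        blocks.append(b)
        b = NEXT[b]              # follow the pointer stored in the block
    return blocks

print(file_blocks(4))  # [4, 9, 2]
```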



Disk scheduling algorithms

1. PRIORITY: This is based on priority. The scheduling is outside the control of the disk management software. Usually shorter jobs are given higher priority than longer jobs, so the longer jobs may have to wait excessively long times.

2. LAST IN FIRST OUT: This algorithm schedules disk jobs in the order of the most recently placed requests; it works on the basis of a stack. However, earlier requests can wait indefinitely if the disk is kept busy by a large workload.

3. SHORTEST SEEK TIME FIRST [SSTF]: The idea of this algorithm is to select the disk I/O request that requires the least movement of the disk arm from its current position, i.e. the job nearest to the present one. This choice should provide better performance than FIFO.

4. SCAN:

In scan scheduling the arm is required to move in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction; then it reverses. There are many different variants of this basic scan technique.






Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously (all four must hold):



Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Resource Allocation Graph



A set of vertices V and a set of edges E.

V is partitioned into two types:

P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.

request edge: directed edge Pi --> Rj

assignment edge: directed edge Rj --> Pi

Examples: a resource allocation graph with no cycles; a resource allocation graph with cycles.

Basic facts:

If the graph contains no cycles => no deadlock.

If the graph contains a cycle: if there is only one instance per resource type, then deadlock; if there are several instances per resource type, then we have the possibility of deadlock.

Methods for Handling Deadlocks

Deadlock Prevention: involves violating at least one of the necessary conditions for deadlock; adversely affects utilization; used for critical systems.

Deadlock Avoidance: RAG algorithm (single instances) or Banker's algorithm (multiple instances); utilization improved at a cost; used for applications that have deterministic needs.

Deadlock Detection and Recovery: maintain a wait-for graph and periodically invoke a "cycle-search"; effectiveness depends on the frequency of detection; used for database applications.
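The "cycle-search" on a wait-for graph used by deadlock detection can be sketched as a depth-first search for a back edge. The example graphs are hypothetical.

```python
# Cycle search on a wait-for graph: an edge P -> Q means process P is
# waiting for a resource held by process Q. A cycle means deadlock.

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on current path / done
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GREY
        for w in graph.get(v, []):
            if color[w] == GREY:         # back edge -> cycle found
                return True
            if color[w] == WHITE and visit(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
```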

Deadlock Prevention

Restrain the ways that processes can make resource requests:

Mutual exclusion: not required for sharable resources; must hold for non-sharable resources.

Hold and wait: must guarantee that whenever a process requests a resource, it does not hold any other resources. Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when it has none. Low resource utilization; starvation possible.

No preemption: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released. Preempted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones it is requesting.

Circular wait: impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.

Deadlock Avoidance Requires that the system has some additional a priori information available.


Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.


The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.


Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.

Safe state:

When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.

The system is in a safe state if there exists a safe sequence of all processes.

A sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i:

If Pi's resource needs are not immediately available, then Pi can wait until all such Pj have finished.

When all Pj have finished, Pi can obtain its needed resources, execute, return its allocated resources, and terminate.

When Pi terminates, Pi+1 can obtain its needed resources, and so on.

If no such sequence exists, then the system is said to be unsafe.

Basic Facts:


If a system is in safe state => no deadlocks.


If a system is in unsafe state => possibility of deadlock.


Avoidance =>ensure that a system will never enter an unsafe state.

Banker's Algorithm

Handles multiple instances of each resource type.

Each process must claim its maximum use a priori.

When a process requests a resource, it may have to wait.

When a process gets all its resources, it must return them in a finite amount of time.


Data structures for the Banker's algorithm:

Let n = number of processes, and m = number of resource types.

Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.

Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.

Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.

Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.

Need[i,j] = Max[i,j] - Allocation[i,j]



Given vectors X and Y of length n, we say that X <= Y (less than or equal) if and only if X[i] <= Y[i] for all i = 1, 2, …, n. For example, if X = (1, 7, 3, 2) and Y = (0, 3, 2, 1), then Y <= X.

Y < X if Y <= X and Y <> X.

We treat each row in the matrices Allocation and Need as vectors and refer to them as Allocation_i and Need_i, respectively. Therefore, the vector Allocation_i specifies the resources currently allocated to process Pi; the vector Need_i specifies the additional resources that process Pi may still request to complete its task.


Safety algorithm:

Step 1: Let Work and Finish be vectors of length m and n, respectively. Initialize:

Work := Available; Finish[i] := false for i = 1, 2, …, n.

Step 2: Find an i such that both of the following hold (an unfinished process Pi whose future need for resources can be accommodated):

Finish[i] = false

Need_i <= Work

If no such i exists, go to Step 4; else note Pi to be next in the safe sequence.

Step 3: Work := Work + Allocation_i; Finish[i] := true; go to Step 2.

Step 4: If Finish[i] = true for all i, then the system is in a safe state; print the safe sequence. Else declare the state unsafe.
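The safety algorithm above can be sketched directly in code. The Available/Allocation/Need values below are textbook-style example data chosen here for illustration, not data from these notes.

```python
# Safety algorithm sketch: repeatedly find an unfinished process whose Need
# fits in Work, let it finish, and reclaim its Allocation into Work.

def is_safe(available, allocation, need):
    work = list(available)
    n = len(allocation)
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pi can run to completion; it then returns its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None          # no candidate found -> state is unsafe
    return sequence              # a safe sequence of process indices

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # [1, 3, 0, 2, 4]
```

With these values the first safe sequence found is P1, P3, P0, P2, P4; other safe sequences also exist for the same state.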


Resource-request algorithm for process Pi:

Let Request_i be the request vector for process Pi. If Request_i[j] = k, then process Pi wants k instances of resource type Rj.

Step 1: If Request_i <= Need_i, go to Step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.

Step 2: If Request_i <= Available, go to Step 3. Otherwise, Pi must wait, since the resources are not available.

Step 3: Pretend to allocate the requested resources to Pi by modifying the state as follows:

Available := Available - Request_i;
Allocation_i := Allocation_i + Request_i;
Need_i := Need_i - Request_i;

If safe --> the resources are allocated to Pi.
If unsafe --> Pi must wait, and the old resource-allocation state is restored.