
OPERATING SYSTEMS ASSIGNMENT

Sai Sushmitha.P
(1602-12-737-100)
I.T. - ( section B)

SECTION-A

1. When are two processes said to be in a deadlock state?


A set of processes is deadlocked when every process in the set is waiting for a resource that
is currently allocated to another process in the set (a resource that can be released only when
that other waiting process makes progress).
2. Priority scheduling leads to starvation (True/False)?
True, since low-priority jobs can wait indefinitely while higher-priority jobs keep arriving.
(Aging, which gradually raises the priority of long-waiting processes, is the usual remedy.)
3. What is spinlock and why is it called so?
Semaphores require busy waiting. While a process is in its critical section, any other process
that tries to enter its critical section must loop continuously in the entry code. This
continual looping is clearly a problem in real multi programming system where a single CPU
is shared among many processes. Busy waiting wastes CPU cycles that other processes
might use productively. This type of semaphore is called Spinlock because the process
spins while waiting for the lock.
4. Name one system call related to process management.
fork() :- create a new process
exec() :- replace the calling process's image with a new program
wait() :- wait until a child process finishes execution
exit() :- terminate the calling process
getpid() :- get the unique process id of the calling process
getppid() :- get the process id of the parent

5. Discuss context switching.


Interrupts cause the operating system to switch the CPU from its current task in order to run
a kernel routine. When an interrupt occurs, the system needs to save the current context of
the process running on the CPU so that it can restore that context when the interrupt
processing is done, essentially suspending the process and then resuming it. Switching the
CPU to another process requires performing a state save of the current process and a state
restore of a different process. This task is known as a context switch.
SECTION-B
6. A. Discuss deadlock avoidance in detail.
The simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
The deadlock-avoidance algorithm then dynamically examines the resource-allocation state
to ensure that a circular-wait condition can never arise.
The resource-allocation state is defined by the number of available and allocated resources
and the maximum demands of the processes.
Safe State
When a process requests an available resource, the system must decide whether immediate
allocation leaves the system in a safe state.
The system is in a safe state if there exists a safe sequence of all processes.
A sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request
can be satisfied by the currently available resources plus the resources held by all Pj with
j < i.
1. If Pi's resource needs are not immediately available, then Pi can wait until all such Pj
have finished.
2. When Pj is finished, Pi can obtain the needed resources, execute, return the allocated
resources, and terminate.
3. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Resource-allocation graph algorithm
- Works only if each resource type has one instance.
Algorithm:
- Add a claim edge Pi -> Rj if Pi may request Rj in the future.
- A claim edge is represented by a dashed line in the graph.
- A request Pi -> Rj can be granted only if:
- converting the claim edge to an assignment edge Rj -> Pi does not introduce a cycle in
the graph.

Banker's algorithm
- Handles multiple instances of each resource type.
- Each process must claim its maximum use a priori.
- When a process requests a resource, it may have to wait.
- When a process gets all its resources, it must return them in a finite amount of time.
Request algorithm, for a request by process i:
- If request[i] > need[i], raise an error (the process asked for more than it declared).
- If request[i] > available, the process must wait (the request cannot be supplied now).
- Otherwise resources are available to satisfy the request. Tentatively pretend to grant it:
available = available - request[i]
allocation[i] = allocation[i] + request[i]
need[i] = need[i] - request[i]
- Now check whether this tentative state is safe:
if yes, grant the request;
if no, restore the old state and make the process wait.

B. Differentiate between long term, short term and medium term scheduling.
Long Term Scheduler
Also called the job scheduler, the long term scheduler determines which programs are
admitted to the system for processing: it selects processes from the job queue and loads
them into memory for execution. Its primary objective is to provide a balanced mix of
jobs, such as I/O-bound and processor-bound processes. It also controls the degree of
multiprogramming: for the degree of multiprogramming to stay stable, the average rate of
process creation must equal the average rate at which processes leave the system. On some
systems the long term scheduler is absent or minimal; time-sharing operating systems often
have no long term scheduler at all. The long term scheduler acts when a process changes
state from new to ready.
Short Term Scheduler
Also called the CPU scheduler, its main objective is to increase system performance in
accordance with a chosen set of criteria. It effects the change from the ready state to the
running state: it selects a process from among those that are ready to execute and allocates
the CPU to it (the dispatcher is the component that actually hands over the CPU). The short
term scheduler executes most frequently and makes the fine-grained decision of which process
to run next, so it must be much faster than the long term scheduler.
Medium Term Scheduler
Medium term scheduling is part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium term scheduler is in charge of handling
swapped-out processes.
S.N. | Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process-swapping scheduler
2 | Speed is lesser than the short term scheduler | Speed is the fastest among the three | Speed is in between the short and long term schedulers
3 | It controls the degree of multiprogramming | It provides lesser control over the degree of multiprogramming | It reduces the degree of multiprogramming
4 | It is almost absent or minimal in time-sharing systems | It is also minimal in time-sharing systems | It is a part of time-sharing systems
5 | It selects processes from the pool and loads them into memory for execution | It selects those processes which are ready to execute | It can re-introduce a process into memory so that its execution can be continued

7. Consider the following set of processes, with the length of the CPU-burst time given in
milliseconds.
Process | Burst time (ms) | Priority
P1 | 10 | 3
P2 |  1 | 1
P3 |  2 | 3
P4 |  1 | 4
P5 |  5 | 2
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.
a) Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF,
non-preemptive priority (a smaller priority number implies a higher priority) and
RR (quantum = 1) scheduling.
b) What is the waiting time of each process under each of the scheduling algorithms in
part a? Which is the best scheduling algorithm, in your view?

a)
FCFS:
| P1 | P2 | P3 | P4 | P5 |
0    10   11   13   14   19

Waiting times: P1 = 0, P2 = 10, P3 = 11, P4 = 13, P5 = 14
Average waiting time = (0 + 10 + 11 + 13 + 14) / 5 = 9.6 ms


SJF:
| P2 | P4 | P3 | P5 | P1 |
0    1    2    4    9    19

Waiting times: P2 = 0, P4 = 1, P3 = 2, P5 = 4, P1 = 9
Average waiting time = (0 + 1 + 2 + 4 + 9) / 5 = 3.2 ms
Priority (non-preemptive; P1 runs before P3 since both have priority 3 and P1 arrived first):
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

Waiting times: P2 = 0, P5 = 1, P1 = 6, P3 = 16, P4 = 18
Average waiting time = (0 + 1 + 6 + 16 + 18) / 5 = 8.2 ms

Round robin (quantum = 1):
| P1 | P2 | P3 | P4 | P5 | P1 | P3 | P5 | P1 | P5 | P1 | P5 | P1 | P5 | P1 | P1 | P1 | P1 | P1 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   17   18   19

Completion times: P2 = 2, P4 = 4, P3 = 7, P5 = 14, P1 = 19
Waiting times (completion time - burst time): P1 = 9, P2 = 1, P3 = 5, P4 = 3, P5 = 9
Average waiting time = (9 + 1 + 5 + 3 + 9) / 5 = 5.4 ms

b) SJF.
It is provably the best approach for minimizing average waiting time (3.2 ms for this
workload). Pure SJF is impossible to implement exactly, however, because the processor
would need to know in advance how long each process's next CPU burst will take; practical
systems approximate it by predicting the next burst from previous ones.
8. a) Draw process state diagram and explain the states in detail
New - The process is in the stage of being created.
Ready - The process has all the resources it needs to run, but the CPU is not currently
working on this process's instructions.
Running - The CPU is working on this process's instructions.
Waiting - The process cannot run at the moment because it is waiting for some resource to
become available or for some event to occur. For example, the process may be waiting for
keyboard input, for a disk access request to complete, for an inter-process message, for a
timer to go off, or for a child process to finish.
Terminated - The process has completed.
The transitions in the diagram are: new -> ready (admission), ready -> running (scheduler
dispatch), running -> waiting (I/O or event wait), waiting -> ready (I/O or event
completion), running -> ready (interrupt or time-slice expiry), and running -> terminated
(exit).

b) Discuss about the process control block in detail.

Each process is represented in the operating system by a process control block (PCB),
also called a task control block. The PCB is the data structure in which the operating
system groups all the information it needs about a particular process: CPU-scheduling
information, I/O and resource-management information, file-management information, and
so on. The PCB thus serves as the repository for any information that may vary from
process to process. The loader/linker sets flags and registers when a process is created;
if the process gets suspended, the contents of the registers are saved on a stack and a
pointer to that stack frame is stored in the PCB.
Pointer
The pointer field points to another process control block; it is used to maintain
scheduling lists.
Process State
Process state may be new, ready, running, waiting and so on.
Program Counter

Program Counter indicates the address of the next instruction to be executed for this
process.
CPU registers
The CPU registers include general-purpose registers, stack pointers, index registers,
accumulators, etc. The number and type of registers depend entirely on the computer
architecture.
Memory management information
This information may include the value of base and limit registers, the page tables, or
the segment tables depending on the memory system used by the operating system.
This information is useful for deallocating the memory when the process terminates.
