
CSE-316 (OPERATING SYSTEM)

HOMEWORK 2

Part A

a) For the processes listed in the table below, draw a chart illustrating
their execution using priority scheduling, where a larger priority number
means higher priority:

   a) Preemptive
   b) Non-preemptive

   Process   Arrival Time   Burst   Priority
   A         0.000          4       3
   B         1.0001         3       4
   C         2.0001         3       6
   D         3.0001         5       5

Ans. a) Priority (preemptive):

     A   B   C   D   B   A
     0   1   2   5   10  12  15

     b) Priority (non-preemptive):

     A   C   D   B
     0   4   7   12  15
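
As a cross-check, here is a small C sketch written for this answer (not part of
the original question): it simulates the preemptive case one time unit at a
time, treating the fractional arrival offsets as "just after" the integer tick,
and prints which process holds the CPU during each unit.

    #include <stdio.h>

    /* Preemptive priority scheduling simulated in whole time units.
       Arrival times are taken as the ticks 0, 1, 2, 3; a larger number
       means a higher priority. */
    int main(void) {
        char name[]      = {'A', 'B', 'C', 'D'};
        int  arrival[]   = {0, 1, 2, 3};
        int  remaining[] = {4, 3, 3, 5};
        int  priority[]  = {3, 4, 6, 5};
        int  n = 4, done = 0;

        for (int t = 0; done < n; t++) {
            int pick = -1;
            for (int i = 0; i < n; i++)      /* highest-priority ready process */
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || priority[i] > priority[pick]))
                    pick = i;
            if (pick < 0) continue;          /* CPU idle (never happens here) */
            printf("%c ", name[pick]);       /* runs during [t, t+1) */
            if (--remaining[pick] == 0)
                done++;
        }
        printf("\n");
        return 0;
    }

Its output, A B C C C D D D D D B B A A A, matches the preemptive chart above.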

b) For the processes listed in the table below, draw a chart illustrating
their execution using:

1. First-Come-First-Served

2. Shortest Job First

3. Round Robin (Quantum=2)

4. Round Robin (Quantum=1)

   Process   Arrival Time   Processing Time
   A         0.000          3
   B         1.001          6
   C         4.001          4
   D         6.001          2
Ans. In First-Come-First-Served:

     A   B   C   D
     0   3   9   13  15

In Round Robin (Quantum = 2):

     A   B   A   B   C   D   B   C
     0   2   4   5   7   9   11  13  15

In Round Robin (Quantum = 1):

     A   B   A   B   C   B   C   D   B   C   D   B   C   B
     0   2   3   4   5   6   7   8   9   10  11  12  13  14  15

In SJF (Preemptive, i.e. shortest remaining time first):

     A   B   C   D   B
     0   3   4   8   10  15
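
As a cross-check on the preemptive SJF chart, the following C sketch (written
for this answer) simulates shortest-remaining-time-first in 0.001-unit ticks so
that the .001 arrival offsets are honoured; on a tie in remaining time it lets
the running process keep the CPU, which is one common convention.

    #include <stdio.h>

    /* Shortest-remaining-time-first (preemptive SJF), simulated in
       0.001-unit ticks.  On a tie the running process keeps the CPU. */
    int main(void) {
        char name[]      = {'A', 'B', 'C', 'D'};
        int  arrival[]   = {0, 1001, 4001, 6001};   /* times scaled by 1000 */
        int  remaining[] = {3000, 6000, 4000, 2000};
        int  n = 4, running = -1, seg_start = 0, finished = 0, t;

        for (t = 0; finished < n; t++) {
            /* pick the ready process with the least remaining time */
            int best = (running >= 0 && remaining[running] > 0) ? running : -1;
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (best < 0 || remaining[i] < remaining[best]))
                    best = i;
            if (best != running) {              /* context switch: close segment */
                if (running >= 0)
                    printf("%c: %.3f - %.3f\n", name[running],
                           seg_start / 1000.0, t / 1000.0);
                running = best;
                seg_start = t;
            }
            if (running >= 0 && --remaining[running] == 0)
                finished++;
        }
        printf("%c: %.3f - %.3f\n", name[running],   /* final segment */
               seg_start / 1000.0, t / 1000.0);
        return 0;
    }

It prints the segments A 0.000-3.000, B 3.000-4.001, C 4.001-8.001,
D 8.001-10.001 and B 10.001-15.000, i.e. the chart above with the exact
switch points.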

c) Illustrate the concept of semaphores. How can they be used to solve the
critical-section problem?

Ans. A semaphore is a service very often offered by real-time operating systems to
allow programmers to perform one of two major functions: synchronizing two tasks, or
controlling the sharing of resources between two or more tasks. There are different
kinds of semaphores, but one of the most common is the binary semaphore. With a binary
semaphore, only one task at a time may "have" the semaphore. The simplest way to think
of a binary semaphore is to consider it a baton, of which there is only one.
Say you want to allow only one person to speak at a time (consider people to be tasks);
then you make a rule that only the person holding the baton may speak. A person wanting
to speak goes and picks up the baton. If nobody else has it, he may speak. If someone
else has it, he must wait until the baton is put down by whoever holds it. This is the
"sharing resources" way a semaphore is used.

The two tasks will be synchronized at the point the semaphore is passed from the first to
the second just as a baton is passed from one runner to the next in a relay race.

Solution of the critical-section problem using semaphores:

Critical sections can also be managed with the help of the underlying hardware. Given a
Test-and-Set function that atomically sets a variable to a value and returns its previous
value, the entry section of the critical-section scaffold can simply be:

while Test-and-Set(lock) do nothing;
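
For illustration, a minimal spinlock built on this idea can be written with C11
atomics; atomic_flag_test_and_set plays the role of the Test-and-Set instruction
(the function names lock and unlock below are only illustrative):

    #include <stdatomic.h>

    /* A minimal spinlock built on an atomic test-and-set.
       atomic_flag_test_and_set atomically sets the flag and returns its
       previous value, i.e. exactly the Test-and-Set primitive above. */
    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void lock(void) {
        while (atomic_flag_test_and_set(&lock_flag))
            ;                           /* spin until the holder releases it */
    }

    void unlock(void) {
        atomic_flag_clear(&lock_flag);  /* exit section: release the lock */
    }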

However, this algorithm does not follow the bounded waiting requirement of critical
sections. In addition, threads are still running and constantly checking the lock condition,
eating processing cycles. To solve these problems, we can associate a lock with a list of
threads that are waiting for it. This data structure is called a semaphore.

type semaphore = record {
    value: integer;
    L: list of thread;
}

When using a semaphore, you only use two functions: P and V. P is short for proberen,
Dutch for "to test"; V is short for verhogen, Dutch for "to increment". The internal
variables are only accessed inside these functions. The P function blocks execution
until it is safe to enter the critical section, and the V function signals that
execution of the critical section has completed. Use of semaphores can therefore be
fitted into the critical-section scaffold like this:

repeat
P(S);
critical section
V(S);
remainder section
until false;
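
A concrete (hedged) version of this scaffold can be written with POSIX semaphores,
where sem_wait plays the role of P and sem_post the role of V; the shared counter
and thread function below are made up for the example (compile with -pthread):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t mutex;             /* binary semaphore guarding the critical section */
    int shared_counter = 0;  /* the shared resource */

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    /* P(S): block until it is safe to enter */
            shared_counter++;    /* critical section */
            sem_post(&mutex);    /* V(S): signal that the section is free */
            /* remainder section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);  /* initial value 1 -> binary semaphore */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);
        sem_destroy(&mutex);
        return 0;
    }

Initialising the semaphore to 1 makes it binary, so at most one thread is inside
the critical section at any time and the final count is always 200000.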

Part B

d) a) Can round robin be termed preemptive FCFS? Justify your answer with an
example.

Ans. One of the oldest, simplest, fairest and most widely used scheduling algorithms is
round robin (RR).

In round robin scheduling, processes are dispatched in FIFO order but are given only a
limited amount of CPU time, called a time slice or quantum.

If a process does not complete before its quantum expires, the CPU is preempted and
given to the next process waiting in the queue; the preempted process is then placed at
the back of the ready list. The dispatch order is therefore exactly that of FCFS, and
the only change is that a running process can be forced to give up the CPU when its
quantum expires, which is why round robin can reasonably be termed a preemptive FCFS.
For example, with three processes P0, P1 and P2 arriving in that order and a quantum of
2, round robin dispatches P0 first, then P1, then P2, just as FCFS would, but preempts
each one after 2 time units (see the sketch further below).

Because it is preemptive (at the end of each time slice), round robin scheduling is
effective in time-sharing environments in which the system needs to guarantee
reasonable response times for interactive users.

The only interesting issue with the round robin scheme is the length of the quantum.
Setting the quantum too short causes too many context switches and lowers CPU
efficiency. On the other hand, setting the quantum too long may cause poor response
time and approximates FCFS.

In any event, the average waiting time under round robin scheduling is often quite long.
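
The point can be made concrete with a small, illustrative C simulation (hypothetical
processes P0, P1 and P2, all arriving at time 0): the ready queue is a plain FIFO
queue, exactly as in FCFS, and the only addition is preemption after each quantum.

    #include <stdio.h>

    /* Round robin = FCFS dispatch + preemption after QUANTUM time units. */
    #define QUANTUM 2

    int main(void) {
        int remaining[] = {5, 3, 4};     /* CPU time still needed by P0..P2 */
        int n = 3;
        int queue[16], head = 0, tail = 0;

        for (int i = 0; i < n; i++)
            queue[tail++] = i;           /* all arrive at time 0, FCFS order */

        int t = 0;
        while (head != tail) {
            int p = queue[head++];       /* dispatch the process at the front */
            int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
            printf("t=%2d..%2d  P%d\n", t, t + run, p);
            t += run;
            remaining[p] -= run;
            if (remaining[p] > 0)
                queue[tail++] = p;       /* quantum expired: back of the queue */
        }
        return 0;
    }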

b) In what kind of environment should round robin be used instead of FCFS?

Ans. The basic guideline is this: use round robin when it is desirable to let
long-running processes execute without holding up shorter ones, with the side effect
that the order of completion is not guaranteed. Round robin can suffer when there are
many processes in the system, because each trip around the queue becomes longer and
every process therefore takes longer to complete.

If you do need a guaranteed order of completion, FCFS is a better choice but long running
processes can stall the system. However, each process is given the full attention of the
system and can complete in the fastest possible time, so that can be a benefit.

In the end it comes down not to design but to need: do I need semi-synchronous
execution, or do I need in-order execution? Is it to my benefit for processes to take
longer but make progress together, or am I better off if everything executes as fast as
possible? The needs of the system dictate the model to use.

5. How is deadlock prevention different from deadlock avoidance?

Ans. Deadlock Prevention:

1) Mutual Exclusion – not required for sharable resources; must hold for
   non-sharable resources.

2) Hold and Wait – must guarantee that whenever a process requests a
   resource, it does not hold any other resources.
   - Require the process to request and be allocated all of its resources
     before it begins execution, or
   - allow the process to request resources only when it holds none.
   - Con: lower resource utilization.

3) No Preemption –
   - If a process that is holding some resources requests another resource
     that cannot be immediately allocated to it, then all resources it is
     currently holding are released.
   - Preempted resources are added to the list of resources for which the
     process is waiting.
   - The process is restarted only when it can regain its old resources, as
     well as the new ones that it is requesting.

4) Circular Wait – impose a total ordering of all resource types, and require
   that each process request resources in an increasing order of enumeration
   (a lock-ordering sketch in C follows this list).
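
As a small illustration of the circular-wait rule, the sketch below (hypothetical
resources R1 and R2, modelled as POSIX mutexes) shows how the ordering is applied in
code: every thread acquires the locks in the same global order, so a cycle in the
wait-for graph cannot form (compile with -pthread).

    #include <pthread.h>
    #include <stdio.h>

    /* Two resources with a fixed global order: R1 before R2.
       Because every thread respects this order, no circular wait can arise. */
    pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

    void *task(void *arg) {
        /* Both tasks lock R1 first, then R2 - never the reverse order. */
        pthread_mutex_lock(&R1);
        pthread_mutex_lock(&R2);
        printf("task %ld holds R1 and R2\n", (long)arg);
        pthread_mutex_unlock(&R2);
        pthread_mutex_unlock(&R1);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, task, (void *)1L);
        pthread_create(&t2, NULL, task, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }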

Deadlock Avoidance:

1) The simplest and most useful model requires that each process declare the
   maximum number of resources of each type that it may need.

2) The deadlock-avoidance algorithm dynamically examines the resource-allocation
   state to ensure that there can never be a circular-wait condition.

3) The resource-allocation state is defined by the number of available and
   allocated resources, and the maximum demands of the processes (a banker's-style
   safety check over such a state is sketched below).
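
For illustration, here is a hedged C sketch of the safety check at the heart of such
an avoidance algorithm (a banker's-algorithm style test; the example matrices are made
up and not taken from the question):

    #include <stdio.h>
    #include <stdbool.h>

    #define P 3   /* processes (illustrative) */
    #define R 2   /* resource types (illustrative) */

    /* Returns true if the state is safe, i.e. some ordering lets every
       process obtain its maximum demand and finish. */
    bool is_safe(int available[R], int allocation[P][R], int need[P][R]) {
        int work[R];
        bool finished[P] = {false};
        for (int j = 0; j < R; j++) work[j] = available[j];

        for (int done = 0; done < P; ) {
            bool progress = false;
            for (int i = 0; i < P; i++) {
                if (finished[i]) continue;
                bool can_run = true;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {               /* pretend Pi runs to completion */
                    for (int j = 0; j < R; j++) work[j] += allocation[i][j];
                    finished[i] = true;
                    progress = true;
                    done++;
                }
            }
            if (!progress) return false;     /* nobody can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        int available[R]     = {1, 1};
        int allocation[P][R] = {{1, 0}, {0, 1}, {1, 1}};
        int need[P][R]       = {{1, 1}, {2, 0}, {0, 1}};  /* max minus allocation */
        printf("state is %s\n",
               is_safe(available, allocation, need) ? "safe" : "unsafe");
        return 0;
    }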

6. (a) How is a process different from a thread? Discuss the structures used for
thread management in detail.

Ans. The major differences between threads and processes are (a short fork-versus-
pthread sketch follows this list):

1. Threads share the address space of the process that created them; processes have
   their own address space.

2. Threads have direct access to the data segment of their process; a process gets
   its own copy of the data segment of its parent process.

3. Threads can directly communicate with other threads of their process; processes
   must use interprocess communication to communicate with sibling processes.

4. Threads have almost no creation overhead; processes have considerable overhead.

5. New threads are easily created; a new process requires duplication of the parent
   process.

6. Threads can exercise considerable control over other threads of the same process;
   a process can only exercise control over its child processes.

7. Changes to the main thread (cancellation, priority change, etc.) may affect the
   behavior of the other threads of the process; changes to the parent process do
   not affect child processes.
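
Differences 1 and 2 can be seen directly in code: the POSIX sketch below (illustrative
only) increments the same global variable once from a thread and once from a forked
child process (compile with -pthread).

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int counter = 0;   /* lives in the data segment */

    void *thread_body(void *arg) {
        counter++;     /* same variable as in the creating process */
        return NULL;
    }

    int main(void) {
        /* A thread shares the address space: its increment is visible here. */
        pthread_t t;
        pthread_create(&t, NULL, thread_body, NULL);
        pthread_join(t, NULL);
        printf("after thread: counter = %d\n", counter);   /* prints 1 */

        /* A child process gets a copy of the data segment: its increment
           is not visible in the parent. */
        pid_t pid = fork();
        if (pid == 0) {      /* child */
            counter++;
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after fork:   counter = %d\n", counter);   /* still 1 */
        return 0;
    }

The thread's increment is visible in main because the thread shares the address
space; the child's increment is not, because fork() gives the child its own copy of
the data segment.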

(b) How is interprocess communication managed in Windows?

Ans. Windows offers a number of IPC mechanisms; which of them an application should
use depends on considerations such as the following:

- Should the application be able to communicate with other applications running on
  other computers on a network, or is it sufficient for it to communicate only with
  applications on the local computer?
- Should the application be able to communicate with applications running on other
  computers that may be running under different operating systems (such as 16-bit
  Windows or UNIX)?
- Should the user of the application have to choose the other applications with which
  it communicates, or can the application implicitly find its cooperating partners?
- Should the application communicate with many different applications in a general
  way, such as allowing cut-and-paste operations with any other application, or
  should its communication requirements be limited to a restricted set of
  interactions with specific other applications?
- Is performance a critical aspect of the application? All IPC mechanisms include
  some amount of overhead.
- Should the application be a GUI application or a console application? Some IPC
  mechanisms require a GUI application.
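
As one concrete example, a widely used Windows IPC mechanism is the named pipe. The
sketch below is a minimal, illustrative server that creates a pipe, waits for a single
client and writes one message; the pipe name and message text are made up, and a
matching client would simply open \\.\pipe\demo_pipe with CreateFile and call ReadFile.

    #include <windows.h>
    #include <stdio.h>

    /* Minimal named-pipe server: creates \\.\pipe\demo_pipe (an example name),
       waits for one client, writes a single message, and exits. */
    int main(void) {
        HANDLE pipe = CreateNamedPipeA(
            "\\\\.\\pipe\\demo_pipe",     /* pipe name (example only) */
            PIPE_ACCESS_OUTBOUND,         /* server writes, client reads */
            PIPE_TYPE_MESSAGE | PIPE_WAIT,
            1,                            /* at most one instance */
            512, 512,                     /* out/in buffer sizes */
            0, NULL);
        if (pipe == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateNamedPipe failed: %lu\n", GetLastError());
            return 1;
        }

        if (ConnectNamedPipe(pipe, NULL)) {   /* block until a client connects */
            const char msg[] = "hello from the server";
            DWORD written = 0;
            WriteFile(pipe, msg, (DWORD)sizeof msg, &written, NULL);
        }
        CloseHandle(pipe);
        return 0;
    }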
