HOMEWORK 2
Part A
a) Preemptive.
b) Non-preemptive.

Process   Arrival Time   Burst   Priority
A         0.000          4       3
B         1.0001         3       4
C         2.0001         3       6
D         3.0001         5       5

1. First-Come-First-Served

Process   Arrival Time   Burst
A         0.000          3
B         1.001          6
C         4.001          4
D         6.001          2

Ans. In FCFS, the processes run to completion in order of arrival (bursts 3, 6, 4, 2):

| A | B | C | D |
0   3   9   13  15

In Round Robin (quantum = 1), where the final two units of B run together because B is the only process left after time 13:

| A | B | C | D | A | B | C | D | A | B | C | B | C | B |
0   1   2   3   4   5   6   7   8   9  10  11  12  13  15

In SJF (Preemptive):

| A | D | D | A | C | B |
0   1   2   3   5   9   15
The two tasks will be synchronized at the point the semaphore is passed from the first to
the second just as a baton is passed from one runner to the next in a relay race.
Critical sections can also be managed with the help of the underlying hardware. Given a
Test-and-Set function that atomically sets a variable to a value and returns its previous
value, the entry section of the critical-section scaffold can simply be:
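The code that followed appears to have been lost in this copy; a minimal sketch in C, using GCC's `__atomic_test_and_set` builtin to stand in for the atomic Test-and-Set (an assumption; the original likely used pseudocode):

```c
#include <stdbool.h>

static bool lock = false;   /* false means "unlocked"; shared by all threads */

/* Entry section: spin until Test-and-Set returns the previous value false,
   i.e. until this thread is the one that flipped the lock to true. */
static void enter_critical(void) {
    while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
        ;                   /* busy-wait: someone else holds the lock */
}

/* Exit section: release the lock so another spinning thread may enter. */
static void leave_critical(void) {
    __atomic_clear(&lock, __ATOMIC_RELEASE);
}
```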
However, this algorithm does not satisfy the bounded-waiting requirement of critical
sections. In addition, waiting threads keep running, constantly checking the lock condition
and consuming CPU cycles (busy waiting). To solve these problems, we can associate a
lock with a list of threads that are waiting for it. This data structure is called a semaphore.
When using a semaphore, you only use two functions: P and V. P stands for
proberen, which is Dutch for to test. V stands for verhogen, Dutch for
to increment. The internal variables are only accessed inside these functions. The P
function blocks execution until it is safe to enter a critical section. The V function denotes
that the critical section execution has completed. Therefore, use of semaphores can be fit
into the Critical-Section scaffold like this:
repeat
    P(S);
    critical section
    V(S);
    remainder section
until false;
Part B
Ans. One of the oldest, simplest, fairest and most widely used algorithms is round robin
(RR).
In the round robin scheduling, processes are dispatched in a FIFO manner but are given a
limited amount of CPU time called a time-slice or a quantum.
If a process does not complete before its CPU-time expires, the CPU is preempted and
given to the next process waiting in a queue. The preempted process is then placed at the
back of the ready list.
Round Robin Scheduling is preemptive (at the end of the time-slice) and is therefore
effective in time-sharing environments in which the system needs to guarantee reasonable
response times for interactive users.
The only interesting issue with the round robin scheme is the length of the quantum.
Setting the quantum too short causes too many context switches and lowers CPU
efficiency. On the other hand, setting the quantum too long may cause poor response
time and approximates FCFS.
In any event, the average waiting time under round robin scheduling is often quite long.
Ans. The basic guideline would be this: use round robin if it is desirable to allow
long-running processes to execute while not interfering with shorter ones, with the side
effect that the order of completion is not guaranteed. Round robin can suffer if there are
many processes in the system, because each round trip through the queue is longer, so
each process takes longer to complete.
If you do need a guaranteed order of completion, FCFS is a better choice but long running
processes can stall the system. However, each process is given the full attention of the
system and can complete in the fastest possible time, so that can be a benefit.
In the end it really does come down to not necessarily design but need: Do I need semi-
synchronous execution or do I need in-order execution? Is it to my benefit for processes
to take longer but compute in sync or will I be better off if everything executes as fast as
possible? The needs of the system dictate the model to use.
Deadlock Prevention:-
1) Mutual Exclusion – not required for sharable resources; must hold for
non-sharable resources.
2) Hold and Wait – guarantee that whenever a process requests a resource,
it does not hold any other resources.
3) No Preemption – if a process holding some resources requests another
resource that cannot be immediately allocated to it, then all resources it is
currently holding are released. The process will be restarted only when it can
regain its old resources, as well as the new ones that it is requesting.
4) Circular Wait – impose a total ordering of all resource types, and require
that each process requests resources in an increasing order of
enumeration.
Deadlock Avoidance:-
1) Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
6. (a) How is a process different from a thread? Discuss the structures used for
thread management in detail.
1. Threads share the address space of the process that created them; processes have
their own address space.
2. Threads have direct access to the data segment of their process; processes have
their own copy of the data segment of the parent process.
3. Threads can communicate directly with other threads of their process; processes
must use interprocess communication to communicate with sibling processes.
4. New threads are easily created; new processes require duplication of the parent
process.
5. Threads can exercise considerable control over other threads of the same process;
processes can only exercise control over child processes.
6. Changes to the main thread (cancellation, priority change, etc.) may affect the
behavior of the other threads of the process; changes to the parent process do
not affect child processes.