
Operating System

LABORATORY WORKBOOK

Name ____________________
Roll No ___________________
Marks Obtained ____________
Signature___________________

CONTENTS

S.No.  Object of Experiments                                                        Remarks
1.   Introduction to algorithms, reading and writing algorithms.
2.   Introduction of Scheduling and Algorithm to calculate Average Waiting Time (AWT) and its implementation.
3.   Algorithm to calculate Average Turnaround Time (ATT) and its implementation.
4.   Populating the Processor Time Line.
5.   Implementing non pre-emptive algorithms, First Come First Serve (FCFS) and calculating AWT and ATT.
6.   Implementing non pre-emptive algorithms, Shortest Job First (SJF).
7.   Implementing Priority based Scheduling.
8.   Implementing Round Robin Algorithm.
9.   Producer Consumer Problem using Semaphores.
10.  Producer Consumer Problem using Monitors.
11.  Producer Consumer Problem using Blocking Queue.
12.  Memory Management Techniques: LRU, OPTIMAL & FIFO.

Date ______________________
Signature___________________

Operating System
LAB-1

Name ____________________
Roll No ___________________
Date ______________________
Signature___________________

Objective
Introduction to algorithms, reading and writing algorithms.
Theory
An algorithm is a finite, well-defined, step-by-step list of instructions for solving a particular problem. Algorithms can contain
mathematical equations, logic, linguistic terms, graphs and flowcharts.
The format for the formal presentation of an algorithm consists of two parts.
The first part is a paragraph telling the purpose of the algorithm, identifying the variables in the algorithm and listing the input data.
The second part consists of list of steps to be executed.
An algorithm contains the following compulsory elements.
Identification Number
All algorithms have an identification number or a label on them
e.g. Algorithm 4.3
where 4 represents chapter and 3 represents that it is the third algorithm of chapter 4.
Steps, Control, Exit
Starting with Step 1, steps are executed one after the other in sequence; hence control is transferred sequentially.
However, where explicitly stated, control can go from any one step directly to any other step, forward or backward,
e.g. instead of going from Step 1 to Step 2, control can jump directly to Step 5.
Steps contain one or more statements, separated by commas and ending with a full stop,
e.g. Set K:=1, LOC:=1 and MAX := DATA [1].
Steps and statements are executed left to right.
The algorithm completes when an Exit statement is encountered.
Variable Names
Variable names are always in capital letters; counter variables may have single-letter names
e.g. MAX, DATA, MIN, etc.
for counters K, N, etc.
Assignment Statement
Assignment statements use the dot-equals (:=) notation
e.g. MAX := DATA [1]
which means assign the value of DATA [1] to MAX.
Input / Output
Taking input and initializing variables is done using a Read statement
e.g. Read: K.
Conditions and checks
Almost all algorithms contain conditions and/or checks, which are introduced by the keyword If followed by a condition.
If K > N then:
Important to note:
1. Algorithms cannot have deadlocks.
2. Algorithms cannot have infinite loops or logic that never terminates.
3. Algorithms do not have unused variables.

Example
The example below shows an algorithm which finds the largest value in an array. Please note that this algorithm is valid
for all positive values of the largest element and a finite array length. The code written below is also known as
PSEUDO CODE.
Algorithm
Algorithm 1.1: (Largest Element in Array) A nonempty array DATA with N numerical values is given.
This algorithm finds the location LOC and MAX of the largest element of DATA. The
variable K is used as a counter.
Step 1. [Initialize.] Set K:=1, LOC:=1 and MAX:=DATA[1].
Step 2. [Increment counter.] Set K:=K+1.
Step 3. [Test counter.] If K>N, then: Write: LOC, MAX, and Exit.
Step 4. [Compare and update.] If MAX<DATA[K], then: Set LOC:=K and MAX:=DATA[K].
Step 5. [Repeat loop.] Go to Step 2.
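Algorithm 1.1 translates almost line for line into Java. The class and method names below (LargestElement, findMax) are our own choices for illustration; they are not part of the algorithm's formal statement.

```java
// Direct translation of Algorithm 1.1: scan the array once, keeping the
// largest value (MAX) and its 1-based location (LOC).
public class LargestElement {
    // Returns {LOC, MAX}; LOC is 1-based to match the algorithm.
    public static int[] findMax(int[] data) {
        int loc = 1, max = data[0];              // Step 1: K:=1, LOC:=1, MAX:=DATA[1]
        for (int k = 2; k <= data.length; k++) { // Steps 2, 3, 5: counter loop
            if (max < data[k - 1]) {             // Step 4: compare and update
                loc = k;
                max = data[k - 1];
            }
        }
        return new int[]{loc, max};
    }

    public static void main(String[] args) {
        int[] result = findMax(new int[]{3, 9, 2, 9, 5});
        System.out.println("LOC=" + result[0] + " MAX=" + result[1]); // LOC=2 MAX=9
    }
}
```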

Algorithms can also contain numerical equations. The algorithm below implements the quadratic formula:

x = ( -b ± √( b² - 4ac ) ) / 2a

With the discriminant D = b² - 4ac:
If D is negative, then there is no real solution.
If D is zero, then there is one real solution, x = -b / 2a.
If D is positive, then there are two real solutions.

Algorithm
Algorithm 1.2: (Quadratic Equation) This algorithm inputs the coefficients A, B, C of a quadratic
equation and outputs the real solutions, if any.
Step 1. Read: A, B, C.
Step 2. Set D := B² - 4AC.
Step 3. If D > 0, then:
            (a) Set X1 := ( -B + √D ) / 2A and X2 := ( -B - √D ) / 2A.
            (b) Write: X1, X2.
        Else if D = 0, then:
            (a) Set X := -B / 2A.
            (b) Write: UNIQUE SOLUTION, X.
        Else:
            Write: NO REAL SOLUTION.
        [End of If structure.]
Step 4. Exit.
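A sketch of Algorithm 1.2 in Java follows. Returning the roots as an array (instead of writing them) is our choice, made so the method is easy to test; the class name Quadratic is illustrative.

```java
// Sketch of Algorithm 1.2: the three branches on the discriminant D.
public class Quadratic {
    // Returns the real solutions of Ax^2 + Bx + C = 0:
    // two values, one value, or an empty array.
    public static double[] solve(double a, double b, double c) {
        double d = b * b - 4 * a * c;              // Step 2: D := B^2 - 4AC
        if (d > 0) {                               // two real solutions
            double x1 = (-b + Math.sqrt(d)) / (2 * a);
            double x2 = (-b - Math.sqrt(d)) / (2 * a);
            return new double[]{x1, x2};
        } else if (d == 0) {                       // unique solution
            return new double[]{-b / (2 * a)};
        }
        return new double[0];                      // no real solution
    }

    public static void main(String[] args) {
        double[] roots = solve(1, -3, 2);          // x^2 - 3x + 2 = 0
        System.out.println("X1=" + roots[0] + " X2=" + roots[1]); // roots 2 and 1
    }
}
```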

Assignment
Write an algorithm which finds the lowest value in an array. Also implement that algorithm in C or Java.
Write an algorithm which generates the prime numbers between 1 and 50. Also implement that algorithm in C or
Java.

Operating System
LAB-2

Name ____________________
Roll No ___________________
Date ______________________
Signature___________________

Objective
Introduction to Scheduling and Algorithm to calculate Average Waiting Time (AWT) and its implementation.
Scheduling is the method by which threads, processes or data flows are given access to system resources (e.g.
processor time, communications bandwidth). This is usually done to load balance a system effectively or achieve a
target quality of service.
The CPU scheduler can be invoked at five different points:
1. When a process switches from the new state to the ready state.
2. When a process switches from the running state to the waiting state.
3. When a process switches from the running state to the ready state.
4. When a process switches from the waiting state to the ready state.
5. When a process terminates.

Non-preemptive vs. Preemptive Scheduling


Under non-preemptive scheduling, each running process keeps the CPU until it completes or switches to the
waiting (blocked) state.
Under preemptive scheduling, a running process may also be forced to release the CPU even though it is neither
completed nor blocked.
Scheduling Criteria
Several criteria can be used to compare the performance of scheduling algorithms:
CPU utilization: keep the CPU as busy as possible.
Throughput: the number of processes that complete their execution per time unit.
Turnaround time: the amount of time to execute a particular process.
Waiting time: the amount of time a process has been waiting in the ready queue.
Response time: the amount of time from when a request was submitted until the first response is
produced, not the complete output.
Scheduling Algorithms
First-Come-First-Served
Shortest-Job-First, Shortest-remaining-Time-First
Priority Scheduling
Round Robin
Multi-level Queue

Theory
Average Waiting Time

T_AWT = ( T_CPU1 + T_CPU2 + T_CPU3 + ... ) / N

Where,
T_AWT is the average waiting time
T_CPUi is the time when system resources were allocated to process i
N is the total number of processes

For the processes P1 to P5 with burst times 8, 4, 9, 5 and 1:

PROCESS      BURST    WAITING      TOTAL
Process-1    8        0            0
Process-2    4        0+8          8
Process-3    9        0+8+4        12
Process-4    5        8+4+9        21
Process-5    1        8+4+9+5      26

PSEUDO CODE (algorithm) and C language code for calculating Average Waiting Time for the given set of processes.
Program
Algorithm
Algorithm 2.1: (Average Waiting Time) A nonempty array PROCESS[B] with N number of processes and
burst time B for each process is given. WT is the waiting time and AWT is the
average waiting time. The variables K1 and K2 are used as counters. This algorithm
calculates the average waiting time AWT.
Step 1.  Read PROCESS[B].
Step 2.  Set K1 := 1, WT := 0, AWT := 0.
Step 3.  Repeat Steps 4 to 9 while K1 <= N.
Step 4.      Set K2 := 1.
Step 5.      Repeat Steps 6 and 7 while K2 < K1.
Step 6.          WT := WT + PROCESS[K2].
Step 7.          [Increment counter.] Set K2 := K2+1.
Step 8.      [End of Step 5, K2 loop.]
Step 9.      [Increment counter.] Set K1 := K1+1.
Step 10. [End of Step 3, K1 loop.]
Step 11. [Calculate average waiting time.] AWT := WT / N.
Step 12. Exit.
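The steps above can be sketched in Java as follows (the class name AverageWaitingTime is illustrative). For the burst times 8, 4, 9, 5, 1 it reproduces the AWT of the table: (0+8+12+21+26)/5 = 13.4.

```java
// Implementation of Algorithm 2.1: each process waits for the total burst
// time of every process that ran before it (FCFS order is assumed).
public class AverageWaitingTime {
    public static double awt(int[] burst) {
        int wt = 0;                                  // Step 2: WT := 0
        for (int k1 = 0; k1 < burst.length; k1++) {  // Step 3: outer loop over processes
            for (int k2 = 0; k2 < k1; k2++) {        // Steps 5-7: earlier processes
                wt += burst[k2];                     // Step 6: accumulate waiting time
            }
        }
        return (double) wt / burst.length;           // Step 11: AWT := WT / N
    }

    public static void main(String[] args) {
        System.out.println(awt(new int[]{8, 4, 9, 5, 1})); // 13.4
    }
}
```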

Assignment
Calculate and verify AWT for the following sets of burst times.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4

Operating System
LAB-3
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________

Objective
Algorithm to calculate Average Turnaround Time (ATT), the time between submission and completion,
and its implementation.
Theory
Average Turnaround Time

T_TWT = ( (T_CPU + T_Bt)1 + (T_CPU + T_Bt)2 + (T_CPU + T_Bt)3 + ... ) / N

Where,
T_Bt is the burst time of a process
T_CPU is the time when system resources were allocated to a process
N is the total number of processes

For the processes P1 to P5 with burst times 8, 4, 9, 5 and 1:

PROCESS      BURST    TURNAROUND     TOTAL
Process-1    8        0+8            8
Process-2    4        0+8+4          12
Process-3    9        8+4+9          21
Process-4    5        8+4+9+5        26
Process-5    1        8+4+9+5+1      27

C language code for calculating Average Turnaround Time for the given set of processes.
Program
Algorithm
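Since the C program itself is not reproduced here, a Java sketch of the same calculation follows. When all processes arrive at time 0, the turnaround time of each process is simply its completion time, i.e. the running prefix sum of burst times.

```java
// Average turnaround time under FCFS with all arrivals at time 0:
// turnaround = waiting time + own burst = completion time.
public class AverageTurnaroundTime {
    public static double att(int[] burst) {
        int tt = 0, completion = 0;
        for (int b : burst) {
            completion += b;      // time at which this process finishes
            tt += completion;     // accumulate turnaround times
        }
        return (double) tt / burst.length;
    }

    public static void main(String[] args) {
        System.out.println(att(new int[]{8, 4, 9, 5, 1})); // (8+12+21+26+27)/5 = 18.8
    }
}
```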

Assignment
Calculate and verify ATT for the following sets of burst times.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4


Operating System
LAB-4
Name ____________________
Roll No ___________________
Date ______________________
Signature___________________


Objective
Populating the Processor Time Line.
Theory
Processor Time Line
The Processor Time Line is a time graph which shows the allocation of CPU resources to the processes
with respect to time. If the processes in the given table are observing the First Come First Serve
algorithm, then they will generate the following time line.

Process    Burst Time
P1         3
P2         4
P3         2
P4         5
P5         1

TIME       1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
PROCESS    P1 P1 P1 P2 P2 P2 P2 P3 P3 P4 P4 P4 P4 P4 P5
Program

Explanation
Conditions that apply to the program for the Processor Time Line:
All processes arrive on a First Come First Serve basis.
All processes arrive in the sequence of their order and have the same priority.
All processes have a burst time of less than 10, hence a time line (tl) array of length 50 (i.e. 5 x 10 = 50).
When the CPU is allocated to a process at any time, the name of that process is written into the
Time Line for that time.
In the end the program prints the entire time line.
Output of the Time Line program is given below.
111222233444445
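A compact Java sketch of the time-line program described above; using a growing String rather than a fixed-length tl array of 50 is our simplification.

```java
// Populating the Processor Time Line: for every time unit during which a
// process holds the CPU, its number is appended to the time line.
public class TimeLine {
    public static String populate(int[] burst) {
        StringBuilder tl = new StringBuilder();   // the time line
        for (int p = 0; p < burst.length; p++) {  // FCFS order
            for (int t = 0; t < burst[p]; t++) {
                tl.append(p + 1);                 // process number, 1-based
            }
        }
        return tl.toString();
    }

    public static void main(String[] args) {
        System.out.println(populate(new int[]{3, 4, 2, 5, 1})); // 111222233444445
    }
}
```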

Assignment
Populate and verify the time line for the following sets of burst times; also calculate their AWT and ATT.
o 8,4,5,9,1
o 9,4,7,2,5
o 8,3,6,1,4

Operating System
LAB-5

Name ____________________
Roll No ___________________
Date ______________________
Signature___________________


Objective
Implementing non pre-emptive algorithms, First Come First Serve (FCFS) and calculating AWT and ATT.
Theory
Each process has an arrival time and a burst time.
Average Waiting Time

T_AWT = ( T_CPU1 + T_CPU2 + T_CPU3 + ... ) / N

Where,
T_AWT is the average waiting time
T_CPUi is the time when system resources were allocated to process i
N is the total number of processes

Average Turnaround Time

T_TWT = ( (T_CPU + T_Bt)1 + (T_CPU + T_Bt)2 + (T_CPU + T_Bt)3 + ... ) / N

Where,
T_Bt is the burst time of a process
T_CPU is the time when system resources were allocated to a process
N is the total number of processes

PSEUDO CODE for the First Come First Serve algorithm, provided all the processes arrive sequentially.
Algorithm
Algorithm 2.1: (First Come First Serve) A nonempty array PROCESS[B] with N number of processes and
burst time B for each process is given. WT is the total waiting time, TT is the total
turnaround time and CT is the running completion time. The variable K is used as a
counter. This algorithm finds the average waiting time AWT and the average turnaround
time ATT.
Step 1. Read PROCESS[B].
Step 2. Set K := 1, WT := 0, TT := 0, CT := 0.
Step 3. Repeat Steps 4 and 5 while K <= N.
Step 4.     Set WT := WT + CT, CT := CT + PROCESS[K], TT := TT + CT.
Step 5.     [Increment counter.] Set K := K+1.
Step 6. [Calculate average waiting time.] AWT := WT / N.
Step 7. [Calculate average turnaround time.] ATT := TT / N.
Step 8. Exit.


First Come First Serve (FCFS)
Jobs are executed on a first come, first serve basis.
Easy to understand and implement.
Poor in performance, as the average wait time is high.
Wait time of each process is as follows:

Process    Wait Time : Service Time - Arrival Time
P0         0 - 0 = 0
P1         5 - 1 = 4
P2         8 - 2 = 6
P3         16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75


Assignment
Implement the Priority Based Scheduling and Calculate the Average Waiting Time (AWT) and Average Turn
Around Time (ATT).
17

Operating System
LAB-6

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Objective
Implementing non pre-emptive algorithms, Shortest Job First (SJF).
Theory
A set of processes arrives in some given order with varying burst times. According to the SJF algorithm the shortest job
should be executed first. Hence the process queue is sorted such that the shortest job is at the first location.

Shortest Job First (SJF)
Best approach to minimize waiting time.
The processor should know in advance how much time the process will take.

Process    Arrival Time    Execute Time    Service Time
P0         0               5               3
P1         1               3               0
P2         2               8               14
P3         3               6               8

Wait time of each process is as follows:

Process    Wait Time : Service Time - Arrival Time
P0         3 - 0 = 3
P1         0 - 0 = 0
P2         14 - 2 = 12
P3         8 - 3 = 5

Average Wait Time: (3 + 0 + 12 + 5) / 4 = 5.00
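A minimal Java sketch of non-preemptive SJF under the simplifying assumption that all jobs are already in the queue at time 0 (the situation the sorting description above assumes); class and method names are ours.

```java
import java.util.Arrays;

// Non-preemptive SJF with all processes in the queue at time 0:
// sort by burst time, then the schedule is simply FCFS on the sorted order.
public class Sjf {
    public static double averageWait(int[] burst) {
        int[] sorted = burst.clone();
        Arrays.sort(sorted);                 // shortest job first
        int wt = 0, clock = 0;
        for (int b : sorted) {
            wt += clock;                     // wait = start time of the job
            clock += b;
        }
        return (double) wt / sorted.length;
    }

    public static void main(String[] args) {
        // Bursts from the FCFS example above, all arriving together.
        System.out.println(averageWait(new int[]{5, 3, 8, 6})); // (0+3+8+14)/4 = 6.25
    }
}
```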

Assignment
Calculate the Average Waiting Time (AWT) and Average Turn Around Time (ATT) for the above Algorithm of
Shortest Job First (SJF).

Operating System

LAB-7

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________

Objective
Implementing Priority based Scheduling.
Theory
A set of 5 processes arrives randomly with given priorities and varying burst times. According to the Priority
Scheduling algorithm the process with the highest priority should be executed first. Hence the processes in the queue
are sorted according to their priority.
Algorithm
Algorithm 4.1: (Priority Based Scheduling) A nonempty two dimensional array PROCESS[P][B] with N
number of processes is given. Each process has given values for priority P and burst
time B. This algorithm sorts the processes according to their priority. Variable K is
the outer counter and Ki is the inner counter.
Step 1. Read PROCESS[P][B].
Step 2. [Initialize.] Set K := 1.
Step 3. Repeat Step 4 while K < N.
Step 4.     Set Ki := 1, and repeat while Ki < N:
            (a) If PROCESS[Ki][P] > PROCESS[Ki+1][P], then:
                Interchange PROCESS[Ki] and PROCESS[Ki+1].
            (b) [Increment counter.] Set Ki := Ki+1.
            [End of inner loop.]
            (c) [Increment counter.] Set K := K+1.
        [End of Step 3 outer loop.]
Step 5. Exit.

Each process is assigned a priority. The process with the highest priority is executed first, and so on.
Processes with the same priority are executed on a first come first serve basis.
Priority can be decided based on memory requirements, time requirements or any other resource requirement.
Wait time of each process is as follows:

Process    Wait Time : Service Time - Arrival Time
P0         0 - 0 = 0
P1         3 - 1 = 2
P2         8 - 2 = 6
P3         16 - 3 = 13

Average Wait Time: (0 + 2 + 6 + 13) / 4 = 5.25
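Algorithm 4.1 can be sketched in Java as below. We assume a smaller priority number means higher priority, since the text does not fix a convention; the class name PriorityScheduling is ours.

```java
// Priority scheduling sketch following Algorithm 4.1: bubble-sort the
// process table by priority, then run the sorted queue FCFS.
public class PriorityScheduling {
    // proc[i] = {priority, burst}; smaller priority number runs first
    // (an assumption of this sketch). Returns the average waiting time.
    public static double averageWait(int[][] proc) {
        int n = proc.length;
        for (int k = 0; k < n - 1; k++) {            // outer loop (Step 3)
            for (int ki = 0; ki < n - 1 - k; ki++) { // inner loop (Step 4)
                if (proc[ki][0] > proc[ki + 1][0]) { // compare priorities
                    int[] tmp = proc[ki];            // interchange the rows
                    proc[ki] = proc[ki + 1];
                    proc[ki + 1] = tmp;
                }
            }
        }
        int wt = 0, clock = 0;                       // FCFS on sorted order
        for (int[] p : proc) {
            wt += clock;
            clock += p[1];
        }
        return (double) wt / n;
    }

    public static void main(String[] args) {
        int[][] proc = {{2, 4}, {1, 3}, {3, 2}};     // {priority, burst}
        System.out.println(averageWait(proc));       // runs bursts 3, 4, 2 in order
    }
}
```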

Assignment
Implement the Priority Based Scheduling and Calculate the Average Waiting Time (AWT) and Average Turn
Around Time (ATT).


Operating System
LAB-8

Name ____________________
Roll No ___________________
Date ______________________

Marks Obtained ____________


Signature___________________

Objective
Implementing Round Robin Algorithm.
Theory
A set of 5 processes arrives in some given order with varying burst times. According to the Round Robin algorithm each
process in turn executes for at most one fixed time quantum; a process that does not finish within its quantum is
preempted and placed at the back of the queue.
Algorithm
Algorithm 5.1: (Round Robin) A nonempty array DATA with the burst times of N processes is given.
This algorithm first finds the largest burst time MAX, then repeatedly gives each
process one unit of CPU time until all burst times reach zero. The variables K and
Ke are used as counters.
Step 1. [Initialize.] Set K := 1 and MAX := DATA[1].
Step 2. [Increment counter.] Set K := K+1.
Step 3. [Test counter.] If K > N, then: Go to Step 6.
Step 4. [Compare and update.] If MAX < DATA[K], then: Set MAX := DATA[K].
Step 5. [Repeat loop.] Go to Step 2.
Step 6. Set Ke := 1.
Step 7. Repeat Steps 8 and 9 while MAX > 0.
Step 8.     If DATA[Ke] > 0, then: Set DATA[Ke] := DATA[Ke] - 1.
Step 9.     Set Ke := Ke+1; if Ke > N, then: Set Ke := 1 and MAX := MAX - 1.
Step 10. Exit.

Each process is provided a fixed time to execute, called a quantum.

Once a process has executed for the given time period, it is preempted and another process executes for the
given time period.
Context switching is used to save the states of preempted processes.

Wait time of each process is as follows:

Process    Wait Time : Service Time - Arrival Time
P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (15 - 9) = 10
P3         (9 - 3) + (18 - 12) = 12

Average Wait Time: (9 + 2 + 10 + 12) / 4 = 8.25
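A Java sketch of a Round Robin simulation with an explicit ready queue. The quantum value and the assumption that all processes arrive at time 0 are choices of this sketch, not part of the table above.

```java
import java.util.ArrayDeque;

// Round Robin: cycle a ready queue, giving each process at most one
// quantum per turn; waiting time = completion time - burst time
// (valid because all arrivals are assumed to be at time 0).
public class RoundRobin {
    public static double averageWait(int[] burst, int quantum) {
        int n = burst.length, clock = 0, totalWait = 0;
        int[] remaining = burst.clone();
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++) queue.add(i);
        while (!queue.isEmpty()) {
            int p = queue.poll();
            int slice = Math.min(quantum, remaining[p]);
            clock += slice;                    // process p runs for one slice
            remaining[p] -= slice;
            if (remaining[p] > 0) {
                queue.add(p);                  // preempted: back of the queue
            } else {
                totalWait += clock - burst[p]; // completion - burst = wait
            }
        }
        return (double) totalWait / n;
    }

    public static void main(String[] args) {
        System.out.println(averageWait(new int[]{5, 3, 8, 6}, 3)); // 10.0
    }
}
```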


Assignment
Implement the Round Robin Algorithm and Calculate the Average Waiting Time (AWT) and Average Turn Around
Time (ATT).


Operating System
LAB-9

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Objective
Study the producer-consumer problem with semaphores.

Producer Consumer Problem


The producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multiprocess synchronization problem. The problem describes two processes, the producer and the consumer, who share a
common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer and
start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time.
The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't
try to remove data from an empty buffer.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer
removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer
can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the
sleeping consumer. The solution can be reached by means of inter-process communication, typically using semaphores.
An inadequate solution could result in a deadlock where both processes are waiting to be awakened.
To solve the problem, a less than perfect programmer might come up with a solution shown below. In the solution two
library routines are used, sleep and wakeup. When sleep is called, the caller is blocked until another process wakes it up
by using the wakeup routine. The global variable itemCount holds the number of items in the buffer.
int itemCount = 0;

procedure producer() {
    while (true) {
        item = produceItem();
        if (itemCount == BUFFER_SIZE) {
            sleep();
        }
        putItemIntoBuffer(item);
        itemCount = itemCount + 1;
        if (itemCount == 1) {
            wakeup(consumer);
        }
    }
}

procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep();
        }
        item = removeItemFromBuffer();
        itemCount = itemCount - 1;
        if (itemCount == BUFFER_SIZE - 1) {
            wakeup(producer);
        }
        consumeItem(item);
    }
}
The problem with this solution is that it contains a race condition that can lead to a deadlock. Consider the following scenario:
1. The consumer has just read the variable itemCount, noticed it's zero and is just about to move inside the if-block.
2. Just before calling sleep, the consumer is interrupted and the producer is resumed.
3. The producer creates an item, puts it into the buffer, and increases itemCount.
4. Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer.
5. Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to
sleep and will never be awakened again. This is because the consumer is only awakened by the producer
when itemCount is equal to 1.
6. The producer will loop until the buffer is full, after which it will also go to sleep.
Since both processes will sleep forever, we have run into a deadlock. This solution therefore is unsatisfactory.
An alternative analysis is that if the programming language does not define the semantics of concurrent accesses to shared
variables (in this case itemCount) without use of synchronization, then the solution is unsatisfactory for that reason, without
needing to explicitly demonstrate a race condition.

Using semaphores
Semaphores solve the problem of lost wakeup calls. In the solution below we use two semaphores, fillCount and emptyCount,
to solve the problem. fillCount is the number of items to be read in the buffer, and emptyCount is the number of available
spaces in the buffer where items could be written. fillCount is incremented and emptyCount decremented when a new item
has been put into the buffer. If the producer tries to decrement emptyCount while its value is zero, the producer is put to sleep.
The next time an item is consumed, emptyCount is incremented and the producer wakes up. The consumer works analogously.

semaphore fillCount = 0;             // items produced
semaphore emptyCount = BUFFER_SIZE;  // remaining space

procedure producer() {
    while (true) {
        item = produceItem();
        down(emptyCount);
        putItemIntoBuffer(item);
        up(fillCount);
    }
}

procedure consumer() {
    while (true) {
        down(fillCount);
        item = removeItemFromBuffer();
        up(emptyCount);
        consumeItem(item);
    }
}
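The semaphore solution maps directly onto java.util.concurrent.Semaphore, where down() corresponds to acquire() and up() to release(). The class below is an illustrative sketch (names are ours): one producer thread, with the consumer loop run in the main thread; a mutex guards the shared queue so the sketch stays safe even with more threads.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Producer-consumer with counting semaphores, following the pseudo code
// above: emptyCount gates the producer, fillCount gates the consumer.
public class SemaphoreBuffer {
    static final int BUFFER_SIZE = 5;

    // Produces the integers 0..items-1 and returns the sum consumed.
    public static int runDemo(int items) {
        Queue<Integer> buffer = new ArrayDeque<>();
        Semaphore fillCount = new Semaphore(0);            // items produced
        Semaphore emptyCount = new Semaphore(BUFFER_SIZE); // remaining space
        Semaphore mutex = new Semaphore(1);                // guards the queue
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    emptyCount.acquire();   // down(emptyCount)
                    mutex.acquire();
                    buffer.add(i);          // putItemIntoBuffer(item)
                    mutex.release();
                    fillCount.release();    // up(fillCount)
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < items; i++) {
                fillCount.acquire();        // down(fillCount)
                mutex.acquire();
                sum += buffer.remove();     // removeItemFromBuffer()
                mutex.release();
                emptyCount.release();       // up(emptyCount)
            }
            producer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("consumed sum = " + runDemo(10)); // 0+1+...+9 = 45
    }
}
```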
The solution above works fine when there is only one producer and consumer. With multiple producers sharing
the same memory space for the item buffer, or multiple consumers sharing the same memory space, this
solution contains a serious race condition that could result in two or more processes reading or writing into the
same slot at the same time. To understand how this is possible, imagine how the
procedure putItemIntoBuffer() can be implemented. It could contain two actions, one determining the next
available slot and the other writing into it. If the procedure can be executed concurrently by multiple producers,
then the following scenario is possible:
1. Two producers decrement emptyCount
2. One of the producers determines the next empty slot in the buffer
3. The second producer determines the next empty slot and gets the same result as the first producer
4. Both producers write into the same slot
To overcome this problem, we need a way to make sure that only one producer is
executing putItemIntoBuffer() at a time. In other words we need a way to execute a critical
section with mutual exclusion. To accomplish this we use a binary semaphore called mutex. Since the value
of a binary semaphore can be only either one or zero, only one process can be executing between
down(mutex) and up(mutex). The solution for multiple producers and consumers is shown below.
Mutual exclusion refers to the problem of ensuring that no two processes or threads are in their critical
section at the same time. Here, a critical section refers to a period of time when the process accesses a shared
resource, such as shared memory.
semaphore mutex = 1;                 // controls access to critical section
semaphore fillCount = 0;             // counts number of full buffer slots
semaphore emptyCount = BUFFER_SIZE;  // counts number of empty buffer slots

procedure producer() {
    while (true) {                     // loop forever
        item = produceItem();          // create a new item to put in the buffer
        down(emptyCount);              // decrement the emptyCount semaphore
        down(mutex);                   // enter critical section
        putItemIntoBuffer(item);       // put item in buffer
        up(mutex);                     // leave critical section
        up(fillCount);                 // increment the fillCount semaphore
    }
}

procedure consumer() {
    while (true) {                     // loop forever
        down(fillCount);               // decrement the fillCount semaphore
        down(mutex);                   // enter critical section
        item = removeItemFromBuffer(); // take an item from the buffer
        up(mutex);                     // leave critical section
        up(emptyCount);                // increment the emptyCount semaphore
        consumeItem(item);             // consume the item
    }
}
Notice that the order in which different semaphores are incremented or decremented is essential: changing the
order might result in a deadlock.
Assignment:


Operating System
LAB-10

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Objective
Study the producer-consumer problem with monitors.

Using monitors
The following pseudo code shows a solution to the producer-consumer problem using monitors. Since mutual
exclusion is implicit with monitors, no extra effort is necessary to protect the critical section. In other words, the
solution shown below works with any number of producers and consumers without any modifications. It is also
noteworthy that using monitors makes race conditions much less likely than when using semaphores.
monitor ProducerConsumer {
    int itemCount;
    condition full;
    condition empty;

    procedure add(item) {
        while (itemCount == BUFFER_SIZE) {
            wait(full);
        }

        putItemIntoBuffer(item);
        itemCount = itemCount + 1;

        if (itemCount == 1) {
            notify(empty);
        }
    }

    procedure remove() {
        while (itemCount == 0) {
            wait(empty);
        }

        item = removeItemFromBuffer();
        itemCount = itemCount - 1;

        if (itemCount == BUFFER_SIZE - 1) {
            notify(full);
        }

        return item;
    }
}

procedure producer() {
    while (true) {
        item = produceItem();
        ProducerConsumer.add(item);
    }
}

procedure consumer() {
    while (true) {
        item = ProducerConsumer.remove();
        consumeItem(item);
    }
}
Note the use of while statements in the above code, both when testing if the buffer is full and when testing if
it is empty. With multiple consumers there is a race condition: one consumer is notified that an item has been
put into the buffer, but another consumer, already waiting on the monitor, removes it from the buffer instead.
If the while were instead an if, too many items might be put into the buffer, or a remove might be attempted
on an empty buffer.
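In Java, synchronized methods together with wait() and notifyAll() play the role of the monitor above. The sketch below (class name BoundedBuffer is ours) stores items as a stack, which is enough for the bounded-buffer discipline; note the same while-loop guards as in the pseudo code.

```java
// A monitor-style bounded buffer: synchronized methods give mutual
// exclusion, wait()/notifyAll() give the condition-variable behaviour.
public class BoundedBuffer {
    private final int[] buf;
    private int itemCount = 0;

    public BoundedBuffer(int size) { buf = new int[size]; }

    public synchronized void add(int item) throws InterruptedException {
        while (itemCount == buf.length) wait();   // buffer full: wait
        buf[itemCount++] = item;
        notifyAll();                              // wake waiting consumers
    }

    public synchronized int remove() throws InterruptedException {
        while (itemCount == 0) wait();            // buffer empty: wait
        int item = buf[--itemCount];              // stack order, for brevity
        notifyAll();                              // wake waiting producers
        return item;
    }

    // Produces 0..items-1 on a second thread; returns the sum consumed.
    public static int runDemo(int items) {
        BoundedBuffer b = new BoundedBuffer(10);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) b.add(i);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        try {
            for (int i = 0; i < items; i++) sum += b.remove();
            producer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("sum = " + runDemo(100)); // 0+1+...+99 = 4950
    }
}
```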

The synchronized keyword provides the following functionality, essential for concurrent programming:

1) The synchronized keyword provides locking, which ensures mutually exclusive access to the shared resource and
prevents data race conditions.
2) The synchronized keyword also prevents reordering of code statements by the compiler, which can cause subtle
concurrency issues if we don't use the synchronized or volatile keyword.
3) The synchronized keyword involves locking and unlocking. Before entering a synchronized method or block, a thread
needs to acquire the lock; at this point it reads data from main memory rather than its cache, and when it releases
the lock it flushes its write operations into main memory, which eliminates memory inconsistency errors.

Mutual exclusion refers to the problem of ensuring that no two processes or threads (henceforth referred to only as
processes) are in their critical section at the same time
Critical section is a piece of code that accesses a shared resource (data structure or device) that must not be
concurrently accessed by more than one thread of execution
Race condition or race hazard is a type of flaw in a system where the output is dependent on the sequence or timing of
other uncontrollable events.

Assignment: Producer Consumer Problem with Wait and Notify (Using Monitor)
The Producer Consumer Problem is a classical concurrency problem; in fact it is one of the concurrency design patterns. In this
program we use the wait and notify methods from the java.lang.Object class.

Step: 1
Create a new Java application with the name ProducerConsumerSolution.
Step: 2
Import the following classes:
import java.util.Vector;
import java.util.logging.Level;
import java.util.logging.Logger;
Step: 3
Create a Producer class that implements Runnable.
Declare 2 variables: Vector sharedQueue, int SIZE.
Make a constructor of Producer with these two variables.
Step: 4
Override the run method for Producer:
for (int i = 0; i < 7; i++) {
    System.out.println("Produced: " + i);
    try {
        produce(i);
    } catch (InterruptedException ex) {
    }
}
Step: 5
Write a method named produce that throws InterruptedException:
//wait if queue is full
while (sharedQueue.size() == SIZE) {
    synchronized (sharedQueue) {
        System.out.println("Queue is full " + Thread.currentThread().getName()
                + " is waiting , size: " + sharedQueue.size());
        sharedQueue.wait();
    }
}
//producing element and notify consumers
synchronized (sharedQueue) {
    sharedQueue.add(i);
    sharedQueue.notifyAll();
}
Step: 6
Repeat Step 3 for a Consumer class.
Step: 7
Override the run method for Consumer:
while (true) {
    try {
        System.out.println("Consumed: " + consume());
        Thread.sleep(50);
    } catch (InterruptedException ex) {
    }
}
Step: 8
Write a method named consume that throws InterruptedException:
//wait if queue is empty
while (sharedQueue.isEmpty()) {
    synchronized (sharedQueue) {
        System.out.println("Queue is empty " + Thread.currentThread().getName()
                + " is waiting , size: " + sharedQueue.size());
        sharedQueue.wait();
    }
}
//otherwise consume element and notify waiting producer
synchronized (sharedQueue) {
    sharedQueue.notifyAll();
    return (Integer) sharedQueue.remove(0);
}
Step: 9
In the main method create an object of the Vector class with the name sharedQueue, and initialize an integer variable
size with 4.
Create 2 objects of the Thread class by calling the public Thread(Runnable target, String name) constructor for the
Producer and Consumer Runnables:
public Producer(Vector sharedQueue, int size)
public Consumer(Vector sharedQueue, int size)
Start both threads.


Operating System
LAB-11

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Real World Example of Producer Consumer Design Pattern


The producer-consumer pattern is everywhere in real life and depicts coordination and collaboration. For example,
one person is preparing food (the Producer) while another is serving food (the Consumer); both use a shared table
for putting down and picking up food plates. The Producer (the person preparing food) waits if the table is
full, and the Consumer (the person serving food) waits if the table is empty. The table is the shared object here.

Benefit of Producer Consumer Pattern


It is indeed a useful design pattern, used most commonly when writing multi-threaded or concurrent code.
Here are a few of its benefits:
1) The Producer Consumer Pattern simplifies development. You can code the Producer and Consumer independently
and concurrently; they just need to know the shared object.
2) The Producer doesn't need to know who the Consumer is or how many Consumers there are. The same is true
for the Consumer.
3) The Producer and Consumer can work at different speeds. There is no risk of the Consumer consuming a half-baked
item. In fact, by monitoring consumer speed, one can introduce more Consumers for better utilization.
4) Separating the Producer and Consumer functionality results in cleaner, more readable and manageable code.
Producer Consumer Problem in Multi-threading
The Producer-Consumer Problem is also a popular design pattern: the Producer should wait if the queue or bucket
is full, and the Consumer should wait if the queue or bucket is empty. This problem can be implemented or solved in
different ways. The classical way is using the wait and notify methods to communicate between the Producer and Consumer
threads, blocking each of them on an individual condition such as a full queue or an empty queue. With the introduction
of the BlockingQueue data structure in Java, it is now much simpler, because BlockingQueue provides this control
implicitly through the blocking methods put() and take(). We no longer require wait and notify to
communicate between Producer and Consumer. BlockingQueue's put() method will block if the queue is full (in the case
of a bounded queue), and take() will block if the queue is empty. In the next section we will see a code example of the
Producer Consumer design pattern.
Assignment:

Using Blocking Queue to implement Producer Consumer Pattern


BlockingQueue greatly simplifies the implementation of the Producer-Consumer design pattern by providing
out-of-the-box support for blocking on put() and take(). The developer does not need to write the confusing and
critical wait-notify code to implement the communication. BlockingQueue is an interface, and Java provides different
implementations such as ArrayBlockingQueue and LinkedBlockingQueue; both hold elements in FIFO order.
ArrayBlockingQueue is bounded in nature, while LinkedBlockingQueue is optionally bounded.
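As a concrete illustration, the sketch below implements the pattern with a bounded ArrayBlockingQueue. The item count, the queue capacity, and the POISON sentinel used to signal end-of-stream are illustrative choices, not part of the BlockingQueue API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerConsumerDemo {
    private static final int POISON = -1; // our own end-of-stream marker, not part of the API

    // Produces the items 1..items, consumes them, and returns the sum the consumer saw.
    static int runDemo(int items) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(3); // bounded: put() blocks when full
        AtomicInteger sum = new AtomicInteger();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= items; i++) {
                    queue.put(i);          // blocks while the queue is full
                }
                queue.put(POISON);         // tell the consumer to stop
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take();   // blocks while the queue is empty
                    if (item == POISON) break;
                    sum.addAndGet(item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return sum.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Sum of consumed items: " + runDemo(10)); // 1+2+...+10 = 55
    }
}
```

Note that neither thread ever touches wait() or notify(); all blocking is handled inside put() and take().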


Operating System
LAB-12

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Objective
Evaluate a page replacement algorithm by running it on a particular string of memory references (a reference string)
and computing the number of page faults on that string.
In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide
which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging
happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are
none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from
disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less
time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about
accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total
number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
A page replacement algorithm decides which pages should be written to disk when a new page needs to be allocated;
good page replacement improves system performance. Although it is possible simply to pick a random page to remove
when a page fault occurs, system performance is better if a rarely used page is chosen rather than a heavily used one.
Page replacement algorithms are evaluated both theoretically and through implementation.
Page replacement takes the following procedure. If no frame is free, we find one that is not currently being used and
free it. We free the frame by changing the page table to indicate that the page is no longer in memory. We can then
modify the page-fault service routine to include page replacement.
LRU
LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU
chooses the page that has not been used for the longest period of time.
OPTIMAL
Replace the page that will not be used for the longest period of time. This policy is designed to guarantee the lowest
page-fault rate for a fixed number of frames.
FIFO
When a page must be replaced, the oldest page is chosen.
Assignment #1
Write a code to find out the number of hits for given page frame for LRU and FIFO.
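One possible way to approach the assignment is sketched below. The method names (fifoHits, lruHits) and the sample reference string are illustrative assumptions, not a prescribed solution:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PageHits {
    // FIFO: evict the page that entered the frames earliest.
    static int fifoHits(int[] refs, int frames) {
        Deque<Integer> queue = new ArrayDeque<>(); // oldest arrival at the front
        int hits = 0;
        for (int p : refs) {
            if (queue.contains(p)) { hits++; continue; }  // hit: FIFO order is unchanged
            if (queue.size() == frames) queue.removeFirst(); // evict the oldest arrival
            queue.addLast(p);
        }
        return hits;
    }

    // LRU: evict the page whose last use is furthest in the past.
    static int lruHits(int[] refs, int frames) {
        List<Integer> recency = new ArrayList<>(); // least recently used at index 0
        int hits = 0;
        for (int p : refs) {
            if (recency.remove(Integer.valueOf(p))) hits++;       // hit: refresh recency below
            else if (recency.size() == frames) recency.remove(0); // miss: evict the LRU page
            recency.add(p);                                       // most recently used at the end
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};
        System.out.println("FIFO hits: " + fifoHits(refs, 3));
        System.out.println("LRU  hits: " + lruHits(refs, 3));
    }
}
```

For the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 with 3 frames, FIFO scores 1 hit and LRU scores 2; the number of page faults is the string length minus the hits.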


Operating System
LAB-13

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Object:
Discuss Deadlock Prevention and Implement the Banker's Algorithm

The Banker's algorithm is a resource allocation & deadlock avoidance algorithm developed by Edsger Dijkstra
that tests for safety by simulating the allocation of pre-determined maximum possible amounts of all resources,
and then makes a "safe-state" check to test for possible deadlock conditions for all other pending activities,
before deciding whether allocation should be allowed to continue.
Algorithm
The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm
prevents deadlock by denying or postponing the request if it determines that accepting the request could put
the system in an unsafe state (one where deadlock could occur). When a new process enters the system, it must
declare the maximum number of instances of each resource type that it may ever request.
Banker's Algorithm Example
Assuming that the system distinguishes between four types of resources, (A, B, C and D), the following is an
example of how those resources could be distributed. Note that this example shows the system at an instant
before a new request for resources arrives. Also, the types and number of resources are abstracted. Real
systems, for example, would deal with much larger quantities of each resource.
Available system resources:
ABCD
3112
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 0 3 3
P3 1 1 1 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Safe and Unsafe States
A state (as in the above example) is considered safe if it is possible for all processes to finish executing
(terminate). Since the system cannot know when a process will terminate, or how many resources it will have
requested by then, the system assumes that all processes will eventually attempt to acquire their stated
maximum resources and terminate soon afterward. This is a reasonable assumption in most cases since the
system is not particularly concerned with how long each process runs (at least not from a deadlock avoidance
perspective).
Also, if a process terminates without acquiring its maximum resources, it only makes it easier on the system.
Given that assumption, the algorithm determines if a state is safe by trying to find a hypothetical set of requests
by the processes that would allow each to acquire its maximum resources and then terminate (returning its
resources to the system). Any state where no such set exists is an unsafe state.
Pseudo-Code[3]
P - set of processes
Mp - maximal requirement of resources for process p
Cp - current resources allocation process p
A - currently available resources

while (P != ∅) {
    found = False
    foreach (p ∈ P) {
        if (Mp − Cp ≤ A) {
            /* p can obtain all it needs. */
            /* assume it does so, terminates, and */
            /* releases what it already has. */
            A = A + Cp
            P = P − {p}
            found = True
        }
    }
    if (!found) return UNSAFE
}
return SAFE
Example
We can show that the state given in the previous example is a safe state by showing that it is possible for each
process to acquire its maximum resources and then terminate.
1. P1 acquires 2 A, 1 B and 1 D more resources, achieving its maximum
The system now still has 1 A, no B, 1 C and 1 D resource available
2. P1 terminates, returning 3 A, 3 B, 2 C and 2 D resources to the system
The system now has 4 A, 3 B, 3 C and 3 D resources available
3. P2 acquires 2 B and 1 D extra resources, then terminates, returning all its resources
The system now has 5 A, 3 B, 6 C and 6 D resources available
4. P3 acquires 4 C extra resources, then terminates, returning all its resources
The system now has all resources: 6 A, 4 B, 7 C and 6 D
5. Because all processes were able to terminate, this state is safe
Note that these requests and acquisitions are hypothetical. The algorithm generates them to check the safety of
the state, but no resources are actually given and no processes actually terminate. Also note that the order in
which these requests are generated (if several can be fulfilled) does not matter, because every hypothetical
request lets a process terminate, thereby increasing the system's free resources.
For an example of an unsafe state, consider what would happen if process 2 were holding 1 more unit of
resource B at the beginning.
Requests
When the system receives a request for resources, it runs the Banker's algorithm to determine if it is safe to
grant the request. The algorithm is fairly straightforward once the distinction between safe and unsafe states is
understood.
1. Can the request be granted?
If not, the request is impossible and must either be denied or put on a waiting list
2. Assume that the request is granted
3. Is the new state safe?
If so grant the request
If not, either deny the request or put it on a waiting list
Whether the system denies or postpones an impossible or unsafe request is a decision specific to the operating
system.
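The safety check from the pseudo-code and the three request-handling steps can be combined into a runnable sketch. The method names and matrix layout are illustrative; the data in main comes from the example state above:

```java
import java.util.Arrays;

public class BankersRequest {
    // Safety check: can every process still reach its maximum and terminate?
    static boolean isSafe(int[] avail, int[][] alloc, int[][] max) {
        int n = alloc.length, m = avail.length;
        int[] work = Arrays.copyOf(avail, m);
        boolean[] done = new boolean[n];
        for (boolean progress = true; progress; ) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (done[p]) continue;
                boolean canFinish = true;
                for (int r = 0; r < m; r++)
                    if (max[p][r] - alloc[p][r] > work[r]) { canFinish = false; break; }
                if (canFinish) {
                    for (int r = 0; r < m; r++) work[r] += alloc[p][r]; // p releases its resources
                    done[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean d : done) if (!d) return false; // some process can never finish: unsafe
        return true;
    }

    // Steps 1-3: grant the request only if it fits and leaves the system in a safe state.
    static boolean requestGranted(int p, int[] req, int[] avail, int[][] alloc, int[][] max) {
        for (int r = 0; r < req.length; r++)
            if (req[r] > avail[r]) return false;        // step 1: not enough free resources
        int[] newAvail = avail.clone();                  // step 2: tentatively grant the request
        int[][] newAlloc = new int[alloc.length][];
        for (int i = 0; i < alloc.length; i++) newAlloc[i] = alloc[i].clone();
        for (int r = 0; r < req.length; r++) {
            newAvail[r] -= req[r];
            newAlloc[p][r] += req[r];
        }
        return isSafe(newAvail, newAlloc, max);          // step 3: keep the grant only if safe
    }

    public static void main(String[] args) {
        int[] avail = {3, 1, 1, 2};                            // A B C D from the example
        int[][] alloc = {{1,2,2,1}, {1,0,3,3}, {1,1,1,0}};     // P1..P3 current allocations
        int[][] max   = {{3,3,2,2}, {1,2,3,4}, {1,1,5,0}};     // P1..P3 maximum claims
        // P3 asks for 1 unit of C: fits and keeps the state safe.
        System.out.println(requestGranted(2, new int[]{0, 0, 1, 0}, avail, alloc, max));
        // P2 asks for 1 unit of B: fits, but would lead to an unsafe state.
        System.out.println(requestGranted(1, new int[]{0, 1, 0, 0}, avail, alloc, max));
    }
}
```

Run against the example state, the first call prints true (P3's request for 1 C is granted) and the second prints false (P2's request for 1 B is denied), matching the worked examples that follow.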
Example
Continuing the previous examples, assume process 3 requests 2 units of resource C.
1. There is not enough of resource C available to grant the request
2. The request is denied
On the other hand, assume process 3 requests 1 unit of resource C.
1. There are enough resources to grant the request
2. Assume the request is granted
The new state of the system would be:
Available system resources

ABCD
Free 3 1 0 2
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 0 3 3
P3 1 1 2 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Determine if this new state is safe
P1 can acquire 2 A, 1 B and 1 D resources and terminate
Then, P2 can acquire 2 B and 1 D resources and terminate
Finally, P3 can acquire 3 C resources and terminate
Therefore, this new state is safe
Since the new state is safe, grant the request
Finally, assume that process 2 requests 1 unit of resource B.
1. There are enough resources
2. Assuming the request is granted, the new state would be:
Available system resources:
ABCD
Free 3 0 1 2
Processes (currently allocated resources):
ABCD
P1 1 2 2 1
P2 1 1 3 3
P3 1 1 1 0
Processes (maximum resources):
ABCD
P1 3 3 2 2
P2 1 2 3 4
P3 1 1 5 0
Is this state safe? Assume that P1, P2, and P3 each request more of resources B and C.
P1 is unable to acquire enough B resources
P2 is unable to acquire enough B resources
P3 is unable to acquire enough C resources
No process can acquire enough resources to terminate, so this state is
not safe
Since the state is unsafe, deny the request
Note that in this example, no process was able to terminate. It is possible that some processes will be able to
terminate, but not all of them. That would still be an unsafe state.


Operating System
LAB-14

Name ____________________
Roll No ___________________
Date ______________________
Marks Obtained ____________
Signature___________________


Object:
Discuss Election Algorithms and the Ring Algorithm

Election Algorithms
Many distributed algorithms employ a coordinator process that performs functions needed by the other
processes in the system. These functions include enforcing mutual exclusion, maintaining a global wait-for
graph for deadlock detection, replacing a lost token, and controlling an input or output device in the
system. If the coordinator process fails due to the failure of the site at which it resides, the system can
continue execution only by restarting a new copy of the coordinator on some other site.
The algorithms that determine where a new copy of the coordinator should be restarted are called election
algorithms.
Election algorithms assume that a unique priority number is associated with each active process in the
system. For ease of notation, we assume that the priority number of process Pi is i. To simplify our
discussion, we assume a one-to-one correspondence between processes and sites and thus refer to both
as processes. The coordinator is always the process with the largest priority number. Hence, when a
coordinator fails, the algorithm must elect that active process with the largest priority number. This
number must be sent to each active process in the system. In addition, the algorithm must provide a
mechanism for a recovered process to identify the current coordinator.

The Ring Algorithm


The ring algorithm assumes that the links between processes are unidirectional and that each process
sends its messages to the neighbor on the right. The main data structure used by the algorithm is the
active list, a list that contains the priority numbers of all active processes in the system when the
algorithm ends; each process maintains its own active list. The algorithm works as follows:
1. If process Pi detects a coordinator failure, it creates a new active list that is initially empty. It then sends
a message elect(i) to its neighbor on the right and adds the number i to its active list.
2. If Pi receives a message elect(j) from the process on the left, it must respond in one of three ways:
a. If this is the first elect message it has seen or sent, Pi creates a new active list with the numbers i and j.
It then sends the message elect(i), followed by the message elect(j).
b. If i ≠ j, that is, if the message received does not contain Pi's number, then Pi adds j to its active list and
forwards the message to its neighbor on the right.
c. If i = j, that is, if Pi receives the message elect(i), then the active list for Pi now contains the numbers
of all the active processes in the system. Process Pi can now determine the largest number in the active
list to identify the new coordinator process.
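The message passing can be simulated in a single thread to see the outcome of an election. The sketch below collapses steps 1 and 2 into one pass of the elect message around the ring; the array layout and method name are illustrative assumptions, not part of any standard API:

```java
import java.util.ArrayList;
import java.util.List;

public class RingElection {
    // Simulates one election round: the process at starterIndex detects the
    // coordinator failure and sends elect(its id) to its right neighbor.
    // Returns the id of the new coordinator: the largest id among active processes.
    static int elect(int[] activeIds, int starterIndex) {
        int n = activeIds.length;
        List<Integer> activeList = new ArrayList<>();
        activeList.add(activeIds[starterIndex]);     // starter adds its own number first
        // The elect message travels once around the unidirectional ring;
        // each process appends its own id and forwards the message.
        for (int step = 1; step < n; step++) {
            int idx = (starterIndex + step) % n;     // neighbor on the right
            activeList.add(activeIds[idx]);
        }
        // When the starter sees its own number again, the list is complete:
        // the largest number in the active list identifies the new coordinator.
        int coordinator = activeList.get(0);
        for (int id : activeList) coordinator = Math.max(coordinator, id);
        return coordinator;
    }

    public static void main(String[] args) {
        int[] active = {3, 7, 1, 5};   // priority numbers of active processes, in ring order
        System.out.println("New coordinator: " + elect(active, 2)); // process with id 1 starts
    }
}
```

Regardless of which process starts the election, the message completes one full circuit, so every active process's number ends up in the list and the same coordinator (here, the process with priority number 7) is chosen.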

