
AMITY UNIVERSITY DUBAI

Operating System Concepts


CSIT123
BSc IT - SEM-2

ASSIGNMENT
10 MARKS (2 marks each)

1) Consider the following processes with the given arrival times and burst times. Calculate the average turnaround time,
average waiting time and average response time using round robin scheduling with a time quantum of 3.

Process id   Arrival time   Burst time
P1           5              5
P2           4              6
P3           3              7
P4           1              9
P5           2              2
P6           6              3

Ans:-
Order in which processes get the CPU:
P4  P5  P3  P2  P4  P1  P6  P3  P2  P4  P1  P3

Gantt chart (time quantum = 3):
| P4 | P5 | P3 | P2 | P4 | P1 | P6 | P3 | P2 | P4 | P1 | P3 |
1    4    6    9    12   15   18   21   24   27   30   32   33

Turnaround time = Exit time – Arrival time

Waiting time = Turnaround time – Burst time (i.e. the total time spent waiting in the ready queue)

Response time = Time when the process first gets the CPU – Arrival time

Process id   Arrival time   Burst time   Waiting time                 Turnaround time   Response time
P1           5              5            (15-5)+(30-18)=22            32-5=27           15-5=10
P2           4              6            (9-4)+(24-12)=17             27-4=23           9-4=5
P3           3              7            (6-3)+(21-9)+(32-24)=23      33-3=30           6-3=3
P4           1              9            (1-1)+(12-4)+(27-15)=20      30-1=29           1-1=0
P5           2              2            4-2=2                        6-2=4             4-2=2
P6           6              3            18-6=12                      21-6=15           18-6=12

Average turnaround time = (27+23+30+29+4+15)/6 = 128/6 = 21.33

Average waiting time = (22+17+23+20+2+12)/6 = 96/6 = 16
Average response time = (10+5+3+0+2+12)/6 = 32/6 = 5.33
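The schedule above can be reproduced with a short simulation. The following Python sketch is illustrative only (the helper round_robin and its structure are not part of the assignment); it assumes that a process arriving at the instant a quantum expires is queued ahead of the preempted process, which is the convention used in the Gantt chart above.

from collections import deque

def round_robin(processes, quantum=3):
    # processes: list of (pid, arrival_time, burst_time)
    procs = sorted(processes, key=lambda p: p[1])        # sort by arrival time
    remaining = {pid: bt for pid, _, bt in procs}
    first_run, completion = {}, {}
    ready, gantt = deque(), []
    time, i = procs[0][1], 0

    def admit(t):
        # move every process that has arrived by time t into the ready queue
        nonlocal i
        while i < len(procs) and procs[i][1] <= t:
            ready.append(procs[i][0])
            i += 1

    admit(time)
    while ready or i < len(procs):
        if not ready:                                    # CPU idle until the next arrival
            time = procs[i][1]
            admit(time)
        pid = ready.popleft()
        first_run.setdefault(pid, time)
        run = min(quantum, remaining[pid])
        gantt.append((pid, time, time + run))
        time += run
        remaining[pid] -= run
        admit(time)      # arrivals at this instant queue ahead of the preempted process
        if remaining[pid] > 0:
            ready.append(pid)
        else:
            completion[pid] = time

    stats = {pid: {"turnaround": completion[pid] - at,
                   "waiting": completion[pid] - at - bt,
                   "response": first_run[pid] - at}
             for pid, at, bt in procs}
    return gantt, stats

gantt, stats = round_robin([("P1", 5, 5), ("P2", 4, 6), ("P3", 3, 7),
                            ("P4", 1, 9), ("P5", 2, 2), ("P6", 6, 3)])
print(gantt)   # P4 1-4, P5 4-6, P3 6-9, ... matching the Gantt chart above
print(stats)   # per-process turnaround, waiting and response times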

2) An operating system uses the Banker's algorithm for deadlock avoidance when managing the allocation of three resource types
X, Y and Z to three processes P0, P1 and P2. The table given below presents the current system state. Here, the Allocation matrix
shows the number of resources of each type currently allocated to each process, and the Max matrix shows the maximum number of
resources of each type required by each process during its execution. There are 3 units of type X, 2 units of type Y and 2 units of
type Z still available. The system is currently in a safe state. Consider the following independent requests for additional resources in
the current state:
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z. Can this request be granted as per the Banker's algorithm, i.e. can a
safe state be reached after it is granted?

       Allocation     Max
       X  Y  Z        X  Y  Z
P0     0  0  1        8  4  3
P1     3  2  0        6  2  0
P2     2  1  1        3  3  3

Ans:- This is the current (safe) state:

Available: X = 3, Y = 2, Z = 2

       Max        Allocation
       X Y Z      X Y Z
P0     8 4 3      0 0 1
P1     6 2 0      3 2 0
P2     3 3 3      2 1 1

Now, if the request REQ1 is permitted, the state would become:

Available: X = 3, Y = 2, Z = 0

       Max        Allocation    Need
       X Y Z      X Y Z         X Y Z
P0     8 4 3      0 0 3         8 4 0
P1     6 2 0      3 2 0         3 0 0
P2     3 3 3      2 1 1         1 2 2

Now, with the current availability, only the need of P1 can be serviced. After P1 finishes and releases its allocation back to the available pool, the state would become:

Available: X = 6, Y = 4, Z = 0

       Max        Allocation    Need
       X Y Z      X Y Z         X Y Z
P0     8 4 3      0 0 3         8 4 0
P1     6 2 0      3 2 0         0 0 0   (finished)
P2     3 3 3      2 1 1         1 2 2
With the resulting availability, it would not be possible to service the need of either P0 (which needs 8 units of X, but only 6 are
available) or P2 (which needs 2 units of Z, but none are available), so no safe sequence exists.
Therefore, the system would be in an unsafe state, which may lead to a deadlock.
⇒ We cannot permit REQ1 to be executed, as a safe state cannot be reached after its execution.
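This conclusion can be checked mechanically with the safety algorithm. The Python sketch below is illustrative only (the helper is_safe is an assumed name, not a required implementation); it applies the standard Banker's safety check to the state that would result from granting REQ1.

def is_safe(available, max_need, allocation):
    # Standard Banker's safety check: returns (is_safe, partial safe sequence found)
    n, m = len(allocation), len(available)
    work = list(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # process i can run to completion and then releases its allocation
                work = [work[j] + allocation[i][j] for j in range(m)]
                finished[i] = True
                sequence.append("P%d" % i)
                progressed = True
    return all(finished), sequence

# State after granting REQ1: Available = (3, 2, 0) and P0 now holds (0, 0, 3)
allocation = [[0, 0, 3], [3, 2, 0], [2, 1, 1]]
max_need   = [[8, 4, 3], [6, 2, 0], [3, 3, 3]]
print(is_safe([3, 2, 0], max_need, allocation))
# -> (False, ['P1']): only P1 can finish, so no safe sequence exists and REQ1 must be denied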

3) How can buffering improve the performance of a computer system?

Ans:- Buffering matches the speed of the sender and the receiver of a data stream. When the CPU and the I/O devices operate at nearly the same
speed, buffering lets both work at full speed so that neither sits idle. Normally, however, the CPU is much faster than an input device: without
buffered data the CPU repeatedly faces an empty input buffer and sits idle waiting for the device to read the next record into it. For output, the CPU
can continue to work at full speed until the output buffer is full, and only then has to wait. Buffering is therefore most useful for jobs that have a
balance between computational work and I/O operations; for heavily CPU-bound or heavily I/O-bound jobs the scheme helps much less. A buffer is
an area in primary memory (RAM) that holds the original copy of the data while it is in transit between the sender and the receiver.
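As a rough illustration of how an output buffer decouples a fast producer (the CPU) from a slow consumer (an I/O device), the Python sketch below uses a bounded queue as the buffer; the buffer size, record count and delay are made-up values chosen for demonstration.

import queue
import threading
import time

buffer = queue.Queue(maxsize=4)          # bounded output buffer held in RAM

def cpu_producer():
    for record in range(8):
        buffer.put(record)               # blocks only when the buffer is full
        # the CPU keeps producing at full speed while the device drains the buffer

def io_consumer():
    for _ in range(8):
        record = buffer.get()
        time.sleep(0.01)                 # simulate a slow output device
        print("wrote record", record)
        buffer.task_done()

device = threading.Thread(target=io_consumer, daemon=True)
device.start()
cpu_producer()
buffer.join()                            # wait until every buffered record has been written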

4) What are the differences between paging and segmentation?


Ans:- In an operating system, paging is the scheme by which data is written to and read from secondary storage and used in primary
storage (the main memory). The operating system reads and accesses data from secondary storage in fixed-size blocks called pages;
physical memory is divided into blocks of the same size, each of which is called a frame. With paging, a process does not need to
occupy a single physically contiguous region of memory. This gives paging an advantage over conventional contiguous memory-management
methods, because it allows more efficient and faster use of the available storage.
A segment has an associated length and a set of permissions. A reference into a segment is permitted only if the offset lies within
the range specified by the segment's length; otherwise a hardware exception is raised. The user program, as well as its associated
data, is divided into several segments. These segments need not be of the same size, although there is a maximum segment length.
A logical address under segmentation consists of two parts: the segment number and the offset (displacement) within that segment.
The key differences between paging and segmentation:
- Paging divides both the virtual and the physical address space into blocks of equal length (pages and frames).
- Segmentation divides the virtual address space into blocks (segments) that correspond directly to objects at the programming level.
- Segments have no fixed length and can grow or shrink during program execution, while all pages are the same size.
- Application developers are unaware of paging: to them memory appears linear, and the operating system and the memory-management
hardware partition the address space and translate the addresses.
- In segmented systems, the programmer sees the two parts of an address (segment and offset) explicitly; pages are of equal length,
while segments differ in size.
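The difference in how a logical address is interpreted can be shown with a toy translation sketch in Python. The page size, page table and segment table below are invented example values, not figures from the question.

PAGE_SIZE = 1024                          # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}           # page number -> frame number (invented values)

def translate_page(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page_number not in page_table:
        raise MemoryError("page fault: page %d not in memory" % page_number)
    return page_table[page_number] * PAGE_SIZE + offset   # physical address

# Segmentation instead uses (segment number, offset) with a per-segment base and limit:
segment_table = {0: (4000, 700), 1: (9000, 300)}          # segment -> (base, limit)

def translate_segment(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("offset %d outside segment limit %d" % (offset, limit))
    return base + offset

print(translate_page(2100))        # page 2, offset 52 -> frame 7 => 7 * 1024 + 52 = 7220
print(translate_segment(1, 120))   # base 9000 + offset 120 = 9120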

5) Explain semaphores and write a short note on them.


Ans:- The semaphore, proposed by Dijkstra in 1965, is a very significant technique for managing concurrent processes using a simple integer value
known as a semaphore. A semaphore is simply a non-negative variable shared between threads. It is used to solve the critical-section problem and to
achieve process synchronization in a multiprocessing environment.

Semaphores are of two types:

1. Binary semaphore – Also known as a mutex lock, it can have only two values, 0 and 1, and is initialized to 1. It is used to implement a
solution to the critical-section problem with multiple processes.
2. Counting semaphore – Its value can range over an unrestricted domain. It is used to control access to a resource that has multiple instances.

Two operations are used to access and change the value of the semaphore variable:
1. The P operation (also called wait, sleep or down) decrements the semaphore, and the V operation (also called signal, wake-up or up) increments it.
2. Both operations are atomic: the read, modify and update of the semaphore happen as one indivisible step, with no preemption in between that
could change the variable. For mutual exclusion, the semaphore is initialized to 1.
3. A critical section is surrounded by the two operations (P on entry, V on exit) to implement process synchronization.
Now, let us see how this achieves mutual exclusion. Let there be two processes P1 and P2, and a semaphore s initialized to 1. If P1 enters its
critical section, the value of s becomes 0. If P2 then wants to enter its critical section, it must wait until s > 0, which can only happen
once P1 finishes its critical section and calls the V operation on s. In this way mutual exclusion is achieved.
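The scenario above can be sketched in Python using threading.Semaphore to stand in for the textbook P (wait) and V (signal) operations; the shared counter and iteration count are example values chosen only for illustration.

import threading

s = threading.Semaphore(1)      # binary semaphore, initialised to 1
counter = 0                     # shared variable updated inside the critical section

def worker():
    global counter
    for _ in range(100000):
        s.acquire()             # P / wait: decrement s, block if its value is 0
        counter += 1            # critical section: only one thread here at a time
        s.release()             # V / signal: increment s and wake one waiting thread

p1 = threading.Thread(target=worker)
p2 = threading.Thread(target=worker)
p1.start(); p2.start()
p1.join(); p2.join()
print(counter)                  # always 200000, because the updates never interleave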
