
2010

OPERATING SYSTEM DESIGN AND PRINCIPLES

ASSIGNMENT 2

SUBMITTED TO: RAMANDEEP SIR
SUBMITTED BY:
ANJANI KUNWAR
RA1803A10
10807973
B.TECH(CSE)-H
Ques1. Semaphores can be used for solving the critical section problem for a single instance and for multiple instances. How?

A:

Semaphores are devices used to help with synchronization. If multiple processes share a common resource, they need a way to use that resource without disrupting each other: each process should be able to read from and write to the resource uninterrupted.

A semaphore will either allow or disallow access to the resource, depending on how it is set up. One example setup would be a semaphore which allows any number of processes to read from the resource, but only one process to be writing to it at any time.

Insight about Semaphore:

First, let us look at what a semaphore actually is. A semaphore is the term used in UNIX for a variable which acts as a counter. The next question that comes to mind is why we need such a variable. The reason is simple: there may be times when two processes try to access the same file simultaneously. In that event we must control access to the file while the other process is using it. This is done by assigning a value to the semaphore.

The value of the semaphore is initialized by the first process when it gains access to the file. When the second process tries to access the file, it checks the value of the semaphore, and if it finds the value still set it does not access the file. After the first process has finished, it resets the semaphore value and the second process can then use the file. The example above involves two processes, but a semaphore can be used even when many processes try to access the same file. Thus semaphores are used to coordinate access to a resource by different processes.
Where a Semaphore gets stored

We have seen that a semaphore can be used when a number of processes try to access the same file. In this case we must make the semaphore accessible to all processes, so that they can read and check its value and also initialize and reset it appropriately. For this reason the semaphore is stored in the kernel, where it can be accessed by all processes.

Details of Semaphore

The command

$ ipcs -s
will give the list of existing semaphores.

Function to create a simple semaphore

The function that can be used to create semaphores is semget( ). This function takes three arguments as input, namely:

• The key of the semaphore, i.e. the name with which we are going to identify the semaphore set.

• The number of sub-semaphores to be created. The minimum required number is 1.

• The semaphore mode (permission flags). The two basic permissions for a semaphore are read and alter (write).

The integer value returned by this function is the semaphore id with which the semaphore is associated; a return value of -1 indicates that an error occurred in the creation of the semaphore.
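
As a hedged illustration of the description above, a minimal C sketch of creating a semaphore with semget( ) might look like the following (the key value 1234 and the permission bits 0600 are arbitrary choices made for this example):

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main(void)
{
    key_t key = 1234;      /* arbitrary key chosen for this example */
    int nsems = 1;         /* number of sub-semaphores in the set */
    int semid;

    /* Create (or get) a semaphore set with read/alter permission for the owner. */
    semid = semget(key, nsems, IPC_CREAT | 0600);
    if (semid == -1) {
        perror("semget");  /* -1 indicates an error in creation */
        return 1;
    }
    printf("semaphore id = %d\n", semid);
    return 0;
}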

How the kernel maintains semaphores internally:

For every semaphore created, the kernel internally creates a structure named semid_ds; this structure is declared in the header file sys/sem.h.
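
As an illustration only, a simplified view of this structure (the member names below follow POSIX; the exact layout and any extra fields vary between systems) looks roughly like this:

/* Simplified sketch of the per-semaphore-set structure kept by the kernel. */
struct semid_ds {
    struct ipc_perm sem_perm;   /* ownership and permissions for the set */
    unsigned short  sem_nsems;  /* number of semaphores in the set */
    time_t          sem_otime;  /* time of the last semop() call */
    time_t          sem_ctime;  /* time of the last change by semctl() */
};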

How to set user-defined values for a semaphore:

In the create-semaphore function we saw that semget( ) returns a system-generated semaphore id. In real usage scenarios, however, it is helpful for the user to be able to define the value of the semaphore, because the user can then design around the numbers they have chosen: for instance, the value one may be used to mark that the semaphore is free, and the value two to mark that the semaphore is in use by another process, and so on. This can be done by using the following function and commands, namely:

semctl( )
GETPID
SETVAL

along with semget( ), which we used before for creating semaphores.

How to do this:

• The first step is to create the semaphore, as discussed before, using the semget( ) function, which returns the semaphore id.
• Before seeing how to set the value, let us first see what arguments the semctl( ) function takes. It takes four parameters, namely:
• The semaphore id created by the semget( ) function.
• The sub-semaphore number. For the first sub-semaphore this is zero, since it is created at that time.
• A command such as GETPID, which returns the process id of the process that last operated on the semaphore.
• The value associated with that command; this value is returned by the function semctl( ).
• After this, semctl( ) is called with all the arguments the same except that SETVAL is used in place of GETPID, together with the value to be placed in the semaphore.

This is how we assign user-defined values to semaphores, which makes semaphores easier to use and maintain.
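
A hedged C sketch of the steps listed above might look as follows (treating the value 1 as meaning "free" is an arbitrary convention chosen for this example, and on many systems union semun must be declared by the program itself):

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Many systems require the caller to define union semun. */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

int main(void)
{
    int semid = semget(1234, 1, IPC_CREAT | 0600);    /* step 1: create the semaphore */
    if (semid == -1) { perror("semget"); return 1; }

    union semun arg;
    arg.val = 1;                                      /* user-defined value: 1 = "free" */
    if (semctl(semid, 0, SETVAL, arg) == -1) {        /* set sub-semaphore 0 to that value */
        perror("semctl SETVAL");
        return 1;
    }

    int last_pid = semctl(semid, 0, GETPID);          /* pid of the last process that operated on it */
    printf("value = %d, last pid = %d\n", semctl(semid, 0, GETVAL), last_pid);
    return 0;
}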

Aspects of Semaphore:

• Semaphores are identified by a semaphore id, which is unique for each semaphore.
• A semaphore can be deleted by using the function semdelete(semaphore).
• The semaphore value can be decremented or incremented by using the functions wait (sem) and signal (sem) respectively. wait (sem) decrements the semaphore value, and if in the process of decrementing the value of the semaphore reaches a negative value, the process is suspended and placed in a queue to wait. signal (sem) increments the value of the semaphore and is opposite in action to wait (sem): it causes the first process in the queue to resume execution.

The value of the semaphore is thus related to the number of threads (processes). If the value is positive, that many threads can decrement it and proceed to execute without being suspended. If the value of the semaphore is negative, its magnitude represents the number of threads or processes that are blocked and kept in the suspended state. If the value of the semaphore is zero, there are no threads or processes in the waiting state, but the next wait() will block.

So, from the above, the important facts about semaphores are these: when we create a semaphore it can be assigned any value, and after creation we can modify it, that is, either increment or decrement its value. But in no situation do we read the current value of the semaphore directly.
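
The wait(sem)/signal(sem) behaviour described above can be summarised in a small C-style sketch (this is a conceptual model rather than an actual kernel implementation, and the helpers block(), wakeup(), add_to_queue() and remove_first() are hypothetical):

typedef struct {
    int value;                 /* if negative, its magnitude = number of blocked processes */
    struct process *queue;     /* processes suspended on this semaphore */
} semaphore;

void wait(semaphore *s)        /* must be executed atomically */
{
    s->value--;
    if (s->value < 0) {
        add_to_queue(s, current_process());  /* hypothetical helper */
        block();                             /* suspend the caller */
    }
}

void signal(semaphore *s)      /* must be executed atomically */
{
    s->value++;
    if (s->value <= 0)
        wakeup(remove_first(s));             /* resume the first waiting process */
}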

Ques2. Show that if the wait() and signal() semaphore operations are not executed atomically then mutual exclusion may be violated?

ANS: If two processes execute wait(S) at the same time, only one will enter and the other will wait until signal is raised. If wait is not executed atomically, there is a chance that both processes read the same value of S into a register, both see the semaphore as open, and both enter the critical section simultaneously. The same reasoning applies to signal.
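
A minimal sketch of this violation, assuming wait() is compiled into separate read, test and write steps rather than one atomic action (the interleaving in the comments is one possible schedule of processes P1 and P2):

int S = 1;                     /* binary semaphore: 1 = critical section free */

void broken_wait(void)
{
    int reg = S;               /* step 1: P1 reads S == 1, then P2 also reads S == 1 */
    while (reg <= 0)           /* step 2: both see the semaphore as open */
        reg = S;
    S = reg - 1;               /* step 3: both store S = 0 and both proceed */
}

/* Because the read and the write are separate steps, P1 and P2 can interleave
 * as shown and both enter the critical section, violating mutual exclusion.
 * An atomic wait() makes read-test-write one indivisible step, so only one
 * process can observe S == 1. */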
Ques3. Write the similarities and differences between the semaphore and the monitor? Take an example to show that their ways of working are different but both can provide synchronization?
ANS:

Higher-Level Synchronization
• We looked at using locks to provide mutual exclusion
• Locks work, but they have some drawbacks when critical sections are long
◆ Spinlocks – inefficient
◆ Disabling interrupts – can miss or delay important events
• Instead, we want synchronization mechanisms that
◆ Block waiters
◆ Leave interrupts enabled inside the critical section
• Look at two common high-level mechanisms
◆ Semaphores: binary (mutex) and counting
◆ Monitors: mutexes and condition variables
• Use them to solve common synchronization problems

Semaphores
• Semaphores are another data structure that provides mutual exclusion to critical sections
◆ Block waiters, interrupts enabled within CS
◆ Described by Dijkstra in THE system in 1968
• Semaphores can also be used as atomic counters
◆ More later
• Semaphores support two operations:
◆ wait(semaphore): decrement, block until semaphore is open
» Also P(), after the Dutch word for test, or down()
◆ signal(semaphore): increment, allow another thread to enter
» Also V(), after the Dutch word for increment, or up()

Semaphore Types
• Semaphores come in two types
• Mutex semaphore
◆ Represents single access to a resource
◆ Guarantees mutual exclusion to a critical section
• Counting semaphore
◆ Represents a resource with many units available, or a resource that allows certain kinds of unsynchronized concurrent access (e.g., reading)
◆ Multiple threads can pass the semaphore
◆ Number of threads determined by the semaphore "count"
» mutex has count = 1, counting has count = N

Monitors
• A monitor is a programming language construct that controls access to shared data
◆ Synchronization code added by compiler, enforced at runtime
◆ Why is this an advantage?
• A monitor is a module that encapsulates
◆ Shared data structures
◆ Procedures that operate on the shared data structures
◆ Synchronization between concurrent procedure invocations
• A monitor protects its data from unstructured access
• It guarantees that threads accessing its data through its procedures interact only in legitimate ways

Monitor Semantics
• A monitor guarantees mutual exclusion
◆ Only one thread can execute any monitor procedure at any time (the thread is "in the monitor")
◆ If a second thread invokes a monitor procedure when a first thread is already executing one, it blocks
» So the monitor has to have a wait queue…
◆ If a thread within a monitor blocks, another one can enter

Summary
• Semaphores
◆ wait()/signal() implement blocking mutual exclusion
◆ Also used as atomic counters (counting semaphores)
◆ Can be inconvenient to use
• Monitors
◆ Synchronizes execution within procedures that manipulate encapsulated data shared among procedures
» Only one thread can execute within a monitor at a time
◆ Relies upon high-level language support
• Condition variables
◆ Used by threads as a synchronization point to wait for events
◆ Inside monitors, or outside with locks
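
To make the difference concrete with an example, here is a hedged sketch (using POSIX semaphores and a pthread mutex/condition variable pair as stand-ins for the abstract semaphore and monitor) of the same one-slot "data ready" synchronization done both ways; names such as producer_sem and consumer_mon are invented for this example:

#include <pthread.h>
#include <semaphore.h>

int shared_data;

/* Semaphore style: the counter itself carries the signal. */
sem_t ready;                            /* initialised elsewhere with sem_init(&ready, 0, 0) */

void producer_sem(int v) { shared_data = v; sem_post(&ready); }   /* signal() */
void *consumer_sem(void *arg) { sem_wait(&ready); /* wait() */ return NULL; }

/* Monitor style: shared state is encapsulated behind a lock and a condition
 * variable; a real monitor is a language construct, so this only approximates it. */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
int data_ready = 0;

void producer_mon(int v)
{
    pthread_mutex_lock(&m);             /* enter the "monitor" */
    shared_data = v;
    data_ready = 1;
    pthread_cond_signal(&c);
    pthread_mutex_unlock(&m);
}

void *consumer_mon(void *arg)
{
    pthread_mutex_lock(&m);
    while (!data_ready)                 /* wait on the condition, re-checking after waking */
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);
    return NULL;
}

Both versions block the consumer until the producer has run, so both provide synchronization, but they work differently: the semaphore encodes the state in its counter, whereas the monitor-style version keeps the state (data_ready) as explicit shared data protected by mutual exclusion and waited on through a condition variable.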

Ques4. What are deadlocks? Are they useful or harmful? Justify by giving examples.

ANS:

A deadlock is the permanent blocking of a set of processes that either compete for system resources or communicate with each other. A deadlock involves conflicting needs for resources by two or more processes. Whether or not deadlock occurs depends upon both the dynamics of the application and the details of the application. Deadlocks are harmful: the blocked processes can never proceed, and the resources they hold remain unusable by anyone else.

Here is an Example.

In a multiprogramming system, suppose there are two processes and each wants to print a very large tape file. Process A requests permission to use the printer and it is granted. Process B then requests the tape drive and it is also granted. Now A asks for the tape drive, and A is denied until B releases it. Instead of releasing the tape drive, B asks for the printer. At this stage both processes are blocked and will remain so forever. This situation is called deadlock.
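
A hedged sketch of the same situation in C, using two pthread mutexes as stand-ins for the printer and the tape drive (the thread names and the sleep() calls are only there to make the bad interleaving likely):

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t printer    = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t tape_drive = PTHREAD_MUTEX_INITIALIZER;

void *process_a(void *arg)
{
    pthread_mutex_lock(&printer);       /* A holds the printer */
    sleep(1);                           /* give B time to grab the tape drive */
    pthread_mutex_lock(&tape_drive);    /* A waits for the tape drive held by B */
    pthread_mutex_unlock(&tape_drive);
    pthread_mutex_unlock(&printer);
    return NULL;
}

void *process_b(void *arg)
{
    pthread_mutex_lock(&tape_drive);    /* B holds the tape drive */
    sleep(1);
    pthread_mutex_lock(&printer);       /* B waits for the printer held by A: deadlock */
    pthread_mutex_unlock(&printer);
    pthread_mutex_unlock(&tape_drive);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);              /* with the interleaving above, this never returns */
    pthread_join(b, NULL);
    return 0;
}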

Deadlock prevention is concerned with imposing certain restrictions on the environment of processes, so that deadlock can never occur. The OS aims at avoiding a deadlock rather than preventing it.

Deadlock avoidance is concerned with starting from an environment where a deadlock is theoretically possible, but where some algorithm followed by the OS ensures, before allocating any resource, that after allocating it a deadlock can still be avoided. If that cannot be guaranteed, the OS does not grant the process's request for the resource in the first place.

Ques5. How does a resource allocation graph help in deadlock avoidance and deadlock prevention? Take an example in which there are 3 resources with 3 instances of each resource and 4 processes running. Take the instance requirements of each process arbitrarily and show how deadlock occurrence, deadlock avoidance and deadlock prevention work?

ANS:

Avoidance

Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system sees if granting the request will mean that the system will enter an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that will lead to safe states. In order for the system to be able to figure out whether the next state will be safe or unsafe, it must know in advance at any time the number and type of all resources in existence, available, and requested. One known algorithm that is used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.

Two other algorithms are Wait/Die and Wound/Wait, each of which uses a
symmetry-breaking technique. In both these algorithms there exists an older
process (O) and a younger process (Y). Process age can be determined by a
timestamp at process creation time. Smaller time stamps are older processes, while
larger timestamps represent younger processes.

                                  Wait/Die    Wound/Wait
O needs a resource held by Y      O waits     Y dies
Y needs a resource held by O      Y dies      Y waits

It is important to note that the system may be in an unsafe state without this resulting in a deadlock. The notion of safe/unsafe states only refers to the ability of the system to enter a deadlock state or not. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock.

Prevention

• Removing the mutual exclusion condition means that no process may have
exclusive access to a resource. This proves impossible for resources that
cannot be spooled, and even with spooled resources deadlock could still
occur. Algorithms that avoid mutual exclusion are called non-blocking
synchronization algorithms.
• The "hold and wait" conditions may be removed by requiring processes to
request all the resources they will need before starting up (or before
embarking upon a particular set of operations); this advance knowledge is
frequently difficult to satisfy and, in any case, is an inefficient use of
resources. Another way is to require processes to release all their resources
before requesting all the resources they will need. This too is often
impractical. (Such algorithms, such as serializing tokens, are known as the
all-or-none algorithms.)
• A "no preemption" (lockout) condition may also be difficult or impossible to
avoid as a process has to be able to have a resource for a certain amount of
time, or the processing outcome may be inconsistent or thrashing may occur.
However, inability to enforce preemption may interfere with a priority
algorithm. (Note: Preemption of a "locked out" resource generally implies a
rollback, and is to be avoided, since it is very costly in overhead.)
Algorithms that allow preemption include lock-free and wait-free algorithms
and optimistic concurrency control.
• The circular wait condition: Algorithms that avoid circular waits include
"disable interrupts during critical sections", and "use a hierarchy to
determine a partial ordering of resources" (where no obvious hierarchy
exists, even the memory address of resources has been used to determine
ordering) and Dijkstra's solution.
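
As a hedged illustration of the "partial ordering of resources" idea in the last point, the two threads from the printer/tape-drive sketch earlier stop deadlocking if every process acquires the locks in the same fixed order (the chosen order itself is arbitrary):

/* Reusing the printer and tape_drive mutexes from the earlier sketch:
 * impose a global order, e.g. printer before tape_drive, for everyone. */
void *process_b_ordered(void *arg)
{
    pthread_mutex_lock(&printer);       /* same order as process A */
    pthread_mutex_lock(&tape_drive);
    /* ... use both devices ... */
    pthread_mutex_unlock(&tape_drive);
    pthread_mutex_unlock(&printer);
    return NULL;
}
/* With a fixed ordering, no cycle of "holds X, waits for Y" can form, so the
 * circular-wait condition is eliminated. */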

Resource-Allocation Graph

• In some cases deadlocks can be understood more clearly through the use of
Resource-Allocation Graphs, having the following properties:
o A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as
square nodes on the graph. Dots inside the resource nodes indicate
specific instances of the resource. ( E.g. two dots might represent two
laser printers. )
o A set of processes, { P1, P2, P3, . . ., PN }
o Request Edges - A set of directed arcs from Pi to Rj, indicating that
process Pi has requested Rj, and is currently waiting for that resource
to become available.
o Assignment Edges - A set of directed arcs from Rj to Pi indicating
that resource Rj has been allocated to process Pi, and that Pi is
currently holding resource Rj.
o Note that a request edge can be converted into an assignment edge
by reversing the direction of the arc when the request is granted.
( However note also that request edges point to the category box,
whereas assignment edges emanate from a particular instance dot
within the box. )
• If a resource-allocation graph contains no cycles, then the system is not deadlocked. (When looking for cycles, remember that these are directed graphs.)
• If a resource-allocation graph does contain cycles AND each resource category contains only a single instance, then a deadlock exists.
• If a resource category contains more than one instance, then the presence of a cycle in the resource-allocation graph indicates the possibility of a deadlock, but does not guarantee one. An example follows below.
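
The question asks for a concrete example with 3 resource types, 3 instances of each, and 4 processes. Here is a hedged sketch of a Banker's-algorithm style safety check over arbitrarily chosen allocation and maximum matrices (any other numbers within the totals would do). It finds a safe sequence, which is what deadlock avoidance relies on; if no such sequence existed, the state would be unsafe and an avoidance algorithm would have refused the request that led there, while a prevention approach would instead have forbidden one of the four conditions up front (for example by making every process claim all its resources before starting):

#include <stdio.h>

#define P 4   /* processes P0..P3 */
#define R 3   /* resource types, 3 instances of each */

int main(void)
{
    /* Arbitrarily chosen current allocation and maximum demand. */
    int alloc[P][R] = {{0,1,0},{2,0,0},{0,0,1},{1,0,0}};
    int max_[P][R]  = {{1,2,2},{2,1,0},{1,1,2},{1,1,1}};
    int total[R]    = {3,3,3};

    int work[R], need[P][R], finished[P] = {0};

    /* work = total - allocated; need = max - alloc */
    for (int j = 0; j < R; j++) {
        work[j] = total[j];
        for (int i = 0; i < P; i++) work[j] -= alloc[i][j];
    }
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++) need[i][j] = max_[i][j] - alloc[i][j];

    /* Repeatedly pick any unfinished process whose remaining need fits in work. */
    int done = 0;
    while (done < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int fits = 1;
            for (int j = 0; j < R; j++) if (need[i][j] > work[j]) fits = 0;
            if (fits) {
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];  /* it finishes and releases */
                finished[i] = 1; done++; progressed = 1;
                printf("P%d can finish\n", i);
            }
        }
        if (!progressed) { printf("unsafe state: deadlock possible\n"); return 1; }
    }
    printf("state is safe\n");
    return 0;
}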
Ques6. Consider the deadlock situation in the dining philosophers problem when the philosophers obtain the chopsticks one at a time. Discuss how the four necessary conditions hold for deadlock in this setting. Discuss how deadlock can be avoided by eliminating any one of the four necessary conditions?

Mutual Exclusion: When a philosopher picks up one chopstick, it cannot be shared with others. If chopsticks could be shared, this situation would not exist and any deadlocks would be prevented from occurring.

Hold and Wait: When a philosopher tries to pick up a chopstick, he only picks up one at a time. If he could pick up both chopsticks at one time, then a deadlock condition could not exist.

No preemption: Once a philosopher picks up a chopstick, it cannot be taken away from her. If it could, then a deadlock condition could not exist.

Circular Wait: Because all of the philosophers are sitting at a round table and each philosopher has access to the chopsticks next to them, if a philosopher picks up one chopstick he will affect the philosopher sitting next to him, and that philosopher can in turn affect the philosopher sitting next to her in the same manner. This holds true all the way around the table. If one of the philosophers at the table could pick up a chopstick that another philosopher never needed, a deadlock condition would not exist.
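
As a hedged sketch of eliminating the circular-wait condition in this setting, one common approach is to number the chopsticks and have every philosopher pick up the lower-numbered of her two chopsticks first (the pthread mutexes here are stand-ins for the chopsticks, and N is the number of philosophers):

#include <pthread.h>

#define N 5
pthread_mutex_t chopstick[N];          /* each initialised elsewhere with pthread_mutex_init() */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;    /* lower-numbered chopstick first */
    int second = left < right ? right : left;

    for (;;) {
        /* think */
        pthread_mutex_lock(&chopstick[first]);   /* everyone respects the same ordering */
        pthread_mutex_lock(&chopstick[second]);
        /* eat */
        pthread_mutex_unlock(&chopstick[second]);
        pthread_mutex_unlock(&chopstick[first]);
    }
    return NULL;
}
/* Because every philosopher acquires chopsticks in increasing index order, the
 * cycle of "holds one chopstick, waits for the next" can never close around
 * the table, so the circular-wait condition (and hence deadlock) is removed. */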
