ASSIGNMENT 2
SUBMITTED TO: RAMANDEEP SIR
SUBMITTED BY: ANJANI KUNWAR
RA1803A10
10807973
B.TECH(CSE)-H
Ques1. Semaphores can be used for solving the critical section problem
for a single instance and for multiple instances. How?
A:
Let us begin with what a semaphore actually is. A semaphore is a term used in
UNIX for a variable that acts as a counter. The next question that comes to
mind is why we need such a variable. For instance, there may be times when two
processes try to access the same file simultaneously. In that event we must
control access to the file while the other process is using it, and this is
done by assigning a value to the semaphore.
The first process sets the semaphore value when it takes the file. When the
second process tries to access the file, it checks the value of the semaphore
and, finding it set, does not access the file. When the first process is
finished, it resets the semaphore value, and the second process can then use
the file. This example uses two processes, but a semaphore can be used even
when many processes try to access the same file. Thus semaphores are used to
coordinate access to a resource among different processes.
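The coordination described above can be sketched in Python with the standard library's `threading.Semaphore` as a stand-in for the UNIX semaphore (the names and the in-memory "file" are illustrative, not part of any real API):

```python
import threading

# A binary semaphore (initial value 1) playing the role of the per-file
# semaphore described above. The shared list stands in for the file.
file_semaphore = threading.Semaphore(1)
shared_file_contents = []

def write_lines(process_name, n):
    # wait: decrement the semaphore, blocking while another process holds it
    with file_semaphore:
        for i in range(n):
            shared_file_contents.append((process_name, i))
    # signal: the semaphore is released when the with-block exits

t1 = threading.Thread(target=write_lines, args=("A", 100))
t2 = threading.Thread(target=write_lines, args=("B", 100))
t1.start(); t2.start()
t1.join(); t2.join()
```

Because each writer holds the semaphore for its whole burst, the two processes' lines never interleave: one writer's 100 lines appear contiguously, then the other's.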
Where a Semaphore Gets Stored
We have seen that a semaphore may be used when many processes try to access
the same file. In that case the semaphore must be accessible to all of those
processes, so that each can read and check its value and also initialize and
reset it appropriately. For this reason the semaphore is stored in the kernel,
where every process can reach it.
Details of Semaphore
The command
$ ipcs –s
will give the list of existing semaphores.
The function used to create semaphores is semget( ). This function takes
three arguments as input, namely:
the key of the semaphore, i.e. the name under which the semaphore set is
created;
the number of semaphores in the set;
a set of flags, which specify the access mode (read and alter permissions)
and creation options such as IPC_CREAT.
The integer returned by this function is the semaphore id with which the
semaphore set is associated; a return value of -1 indicates that an error
occurred in the creation of the semaphore.
For every semaphore set created, the kernel internally creates a structure
named semid_ds; this structure is declared in the header file sys/sem.h.
The value of a semaphore is set and queried with semctl( ), using commands
such as SETVAL (assign a value to a semaphore) and GETPID (return the pid of
the last process that operated on it), along with semget( ), which we used
before for creating semaphores. This is how we assign user-defined values to
semaphores, which makes semaphores easier for users to maintain and use.
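Python's standard library has no semctl( ) wrapper, but the effect of SETVAL, giving the counter a user-chosen initial value, can be sketched with `threading.Semaphore` (a conceptual analogue only, not the System V API):

```python
import threading

# Initialize the counter to a user-chosen value, as SETVAL does for a
# System V semaphore (conceptual analogue; not the semctl() call itself).
sem = threading.Semaphore(3)

# Three non-blocking waits succeed, consuming the whole count...
acquired = [sem.acquire(blocking=False) for _ in range(3)]
# ...and a fourth fails because the value has reached zero.
fourth = sem.acquire(blocking=False)
```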
Aspects of Semaphore:
The value of a semaphore can thus be read as a count of threads, which are
nothing but processes. If the value is positive, that many threads may
decrement it and proceed with execution without being suspended. If the value
is negative, its magnitude is the number of threads or processes that are
blocked and kept in a suspended state. If the value is zero, there are no
threads or processes in the waiting state.
An important point is that when we create a semaphore we may assign it any
initial value, and after creation we can modify that value, that is,
increment or decrement it. But in the classical definition a program can
never read the current value of a semaphore directly; it can only wait on it
or signal it.
ANS: If two processes execute wait(S) at the same time, only one will enter;
the other waits until signal is raised. If wait is not executed atomically,
there is a chance that both processes read the same value of S into a
register, and then both see the same value and enter the critical section
simultaneously. The same reasoning applies to signal.
Ques3. Write the similarities and differences between semaphores and
monitors. Take an example to show that their ways of working are different
but both can provide synchronization.
ANS:
Higher-Level Synchronization
We looked at using locks to provide mutual exclusion
Locks work, but they have some drawbacks when
critical sections are long
◆ Spinlocks – inefficient
◆ Disabling interrupts – can miss or delay important events
Instead, we want synchronization mechanisms that
◆ Block waiters
◆ Leave interrupts enabled inside the critical section
Look at two common high-level mechanisms
◆ Semaphores: binary (mutex) and counting
◆ Monitors: mutexes and condition variables
Semaphores
Semaphores are another data structure that provides
mutual exclusion to critical sections
◆ Block waiters, interrupts enabled within CS
◆ Described by Dijkstra in THE system in 1968
Semaphores can also be used as atomic counters
◆ More later
Semaphores support two operations:
◆ wait(semaphore): decrement, block until semaphore is open
» Also P(), after the Dutch word for test, or down()
◆ signal(semaphore): increment, allow another thread to enter
Semaphore Types
Semaphores come in two types
Mutex semaphore
◆ Represents single access to a resource
◆ Guarantees mutual exclusion to a critical section
Counting semaphore
◆ Represents a resource with many units available, or a
resource that allows certain kinds of unsynchronized
concurrent access (e.g., reading)
◆ Multiple threads can pass the semaphore
◆ Number of threads determined by the semaphore “count”
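A counting semaphore's "count" can be seen directly in a sketch where ten threads contend for a resource with three identical units (the unit count and thread count here are arbitrary choices for illustration):

```python
import threading

MAX_UNITS = 3                       # a resource with three identical units
sem = threading.BoundedSemaphore(MAX_UNITS)
active = 0                          # threads currently "inside"
peak = 0                            # highest concurrency observed
counter_lock = threading.Lock()     # protects the two counters above

def use_resource():
    global active, peak
    with sem:                       # at most MAX_UNITS threads pass here
        with counter_lock:
            active += 1
            peak = max(peak, active)
        # ... use one unit of the resource ...
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the scheduler interleaves the threads, the observed peak concurrency can never exceed the semaphore's count.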
Monitors
A monitor is a higher-level construct that packages shared data together with
the procedures that operate on it; at most one thread can be active inside
the monitor at a time.
Monitor Semantics
◆ Mutual exclusion is implicit: the monitor lock is acquired on entry to any
monitor procedure and released on exit
◆ Condition variables let a thread wait inside the monitor until some
condition becomes true, and be signalled by another thread
Similarities: both semaphores and monitors block waiters and can provide
mutual exclusion for a critical section.
Differences: with semaphores the programmer must place wait( ) and signal( )
correctly around every critical section, while a monitor enforces mutual
exclusion automatically; a signal on a semaphore is remembered in its count,
whereas a signal on a condition variable with no waiter is simply lost.
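Monitor semantics can be sketched with Python's `threading.Condition`: one lock guards the shared state (the implicit monitor lock), and condition variables let threads sleep "inside" the monitor. The bounded buffer below is a classic example where monitors and semaphores achieve the same synchronization in different ways (class and method names are illustrative):

```python
import threading
from collections import deque

class BoundedBuffer:
    """A monitor-style bounded buffer: one lock plus two conditions."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()            # the monitor lock
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                     # enter the monitor
            while len(self.items) >= self.capacity:
                self.not_full.wait()            # sleep inside the monitor
            self.items.append(item)
            self.not_empty.notify()             # wake a waiting consumer

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()              # wake a waiting producer
            return item

buf = BoundedBuffer(2)
consumed = []
consumer = threading.Thread(
    target=lambda: consumed.extend(buf.get() for _ in range(5)))
consumer.start()
for i in range(5):
    buf.put(i)                                  # blocks when buffer is full
consumer.join()
```

With semaphores the same buffer would use an `empty` count, a `full` count, and a mutex; the monitor version instead re-checks the condition in a `while` loop after each wait, which is the hallmark of monitor-style code.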
ANS:
Deadlock is the permanent blocking of a set of processes that either compete
for system resources or communicate with each other. A deadlock involves
conflicting needs for resources by two or more processes. Whether or not
deadlock occurs depends on both the dynamics of the application and its
details.
Here is an example.
In a multiprogramming system, suppose there are two processes and each wants
to print a very large tape file. Process A requests permission to use the
printer and is granted it. Process B then requests the tape drive and is also
granted it. Now A asks for the tape drive, and A is denied until B releases
it. Instead of releasing the tape drive, B asks for the printer. At this stage
both processes are blocked and will remain so forever. This situation is
called deadlock.
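The printer/tape-drive scenario can be simulated with two locks. In the real deadlock both requests block forever; the sketch below gives B's request a timeout so the program terminates and the blocked request can be observed (the timeout and names are artifacts of the simulation, not of any real deadlock):

```python
import threading

printer = threading.Lock()
tape_drive = threading.Lock()
outcome = {}

def process_b():
    with tape_drive:                     # B is granted the tape drive
        # B now asks for the printer, which A is holding. Instead of
        # blocking forever (the real deadlock) we give up after a
        # timeout so this sketch can finish.
        outcome["b_got_printer"] = printer.acquire(timeout=0.2)
        if outcome["b_got_printer"]:
            printer.release()

printer.acquire()                        # A is granted the printer
b = threading.Thread(target=process_b)
b.start()
b.join()
# A would now ask for the tape drive; with both requests blocking
# indefinitely, neither process could ever proceed.
printer.release()
```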
ANS:
Avoidance
Two other algorithms are Wait/Die and Wound/Wait, each of which uses a
symmetry-breaking technique. In both these algorithms there exists an older
process (O) and a younger process (Y). Process age can be determined by a
timestamp at process creation time. Smaller time stamps are older processes, while
larger timestamps represent younger processes.
                                Wait/Die    Wound/Wait
O needs a resource held by Y    O waits     Y dies
Y needs a resource held by O    Y dies      Y waits
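The table above reduces to a single timestamp comparison, which can be written out as a small decision function (a sketch; the function and scheme names are illustrative):

```python
def resolve(requester_ts, holder_ts, scheme):
    """Decide the requester's fate from creation timestamps
    (smaller timestamp = older process) under the given scheme."""
    requester_is_older = requester_ts < holder_ts
    if scheme == "wait-die":
        # Older requesters may wait; younger requesters die (restart).
        return "wait" if requester_is_older else "die"
    if scheme == "wound-wait":
        # Older requesters wound (kill) the younger holder;
        # younger requesters wait for the older holder.
        return "wound holder" if requester_is_older else "wait"
    raise ValueError("unknown scheme: " + scheme)
```

Each row of the table corresponds to one call: for example, an old process (timestamp 1) needing a resource held by a young one (timestamp 2) waits under wait-die but wounds the holder under wound-wait.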
It is important to note that a process may be in an unsafe state but would not result
in a deadlock. The notion of safe/unsafe states only refers to the ability of the
system to enter a deadlock state or not. For example, if a process requests A which
would result in an unsafe state, but releases B which would prevent circular wait,
then the state is unsafe but the system is not in deadlock.
Prevention
• Removing the mutual exclusion condition means that no process may have
exclusive access to a resource. This proves impossible for resources that
cannot be spooled, and even with spooled resources deadlock could still
occur. Algorithms that avoid mutual exclusion are called non-blocking
synchronization algorithms.
• The "hold and wait" conditions may be removed by requiring processes to
request all the resources they will need before starting up (or before
embarking upon a particular set of operations); this advance knowledge is
frequently difficult to satisfy and, in any case, is an inefficient use of
resources. Another way is to require processes to release all their resources
before requesting all the resources they will need. This too is often
impractical. (Such algorithms, such as serializing tokens, are known as the
all-or-none algorithms.)
• A "no preemption" (lockout) condition may also be difficult or impossible to
avoid as a process has to be able to have a resource for a certain amount of
time, or the processing outcome may be inconsistent or thrashing may occur.
However, inability to enforce preemption may interfere with a priority
algorithm. (Note: Preemption of a "locked out" resource generally implies a
rollback, and is to be avoided, since it is very costly in overhead.)
Algorithms that allow preemption include lock-free and wait-free algorithms
and optimistic concurrency control.
• The circular wait condition: Algorithms that avoid circular waits include
"disable interrupts during critical sections", and "use a hierarchy to
determine a partial ordering of resources" (where no obvious hierarchy
exists, even the memory address of resources has been used to determine
ordering) and Dijkstra's solution.
Resource-Allocation Graph
• In some cases deadlocks can be understood more clearly through the use of
Resource-Allocation Graphs, having the following properties:
o A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as
square nodes on the graph. Dots inside the resource nodes indicate
specific instances of the resource. ( E.g. two dots might represent two
laser printers. )
o A set of processes, { P1, P2, P3, . . ., PN }
o Request Edges - A set of directed arcs from Pi to Rj, indicating that
process Pi has requested Rj, and is currently waiting for that resource
to become available.
o Assignment Edges - A set of directed arcs from Rj to Pi indicating
that resource Rj has been allocated to process Pi, and that Pi is
currently holding resource Rj.
o Note that a request edge can be converted into an assignment edge
by reversing the direction of the arc when the request is granted.
( However note also that request edges point to the category box,
whereas assignment edges emanate from a particular instance dot
within the box. )
• If a resource-allocation graph contains no cycles, then the system is not
deadlocked. ( When looking for cycles, remember that these are directed
graphs. ) See the example in Figure 7.2 above.
• If a resource-allocation graph does contain cycles AND each resource
category contains only a single instance, then a deadlock exists.
• If a resource category contains more than one instance, then the presence of
a cycle in the resource-allocation graph indicates the possibility of a
deadlock, but does not guarantee one. Consider, for example, Figures 7.3
and 7.4 below:
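The cycle test for single-instance resources can be implemented as a depth-first search over the graph's directed edges (a sketch; the process/resource names are illustrative, and with multiple instances per category a found cycle would only indicate a possible deadlock):

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as adjacency lists.
    Request edges run from a process to a resource, assignment edges
    from a resource to a process."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True               # back edge: cycle found
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: a cycle,
# hence (single-instance resources) a deadlock.
cyclic = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# The same graph with P2's request removed has no cycle: no deadlock.
acyclic = {"P1": ["R2"], "R2": ["P2"], "P2": [], "R1": ["P1"]}
```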
Ques6. Consider the deadlock situation in the dining philosophers problem
when a philosopher obtains the chopsticks one at a time. Discuss how the
four necessary conditions for deadlock hold in this setting. Discuss how
deadlock can be avoided by eliminating any one of the four necessary
conditions.
Mutual Exclusion: A chopstick can be held by only one philosopher at a time,
so chopsticks are non-sharable resources. If the chopsticks could be shared,
this condition would not hold and deadlock could not arise.
Hold and Wait: When a philosopher tries to pick up the chopsticks, he picks
them up only one at a time, holding the first while waiting for the second.
If he could pick up both chopsticks in one atomic action, a deadlock
condition could not exist.
No Preemption: A chopstick cannot be taken away from the philosopher holding
it; it is released only voluntarily when he finishes eating. If chopsticks
could be preempted, the deadlock could be broken.
Circular Wait: Because all of the philosophers sit at a round table and each
philosopher shares a chopstick with each neighbour, a philosopher who picks
up one chopstick affects the philosopher sitting next to him, and that
philosopher in turn affects the one sitting next to her, in the same manner
all the way around the table. If one of the philosophers at the table could
pick up a chopstick that no other philosopher ever needed, a deadlock
condition would not exist.
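Eliminating circular wait gives the simplest fix in code: number the chopsticks and have every philosopher pick up the lower-numbered one first. A minimal sketch with Python locks (each philosopher eats once; five philosophers is the conventional choice):

```python
import threading

N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = []

def philosopher(i):
    left, right = i, (i + 1) % N
    # Break circular wait: always pick up the lower-numbered chopstick
    # first, so no cycle of waiting philosophers can form.
    first, second = min(left, right), max(left, right)
    with chopsticks[first]:
        with chopsticks[second]:
            meals.append(i)          # eat

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the naive left-then-right order, all five could each grab their left chopstick and wait forever; the ordering guarantees that at least one philosopher can always take both, so every philosopher eventually eats.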