
INTRODUCTION

• To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
• To present both software and hardware solutions to the critical-section problem.
• To examine several classical process-
synchronization problems.
• To explore several tools that are used to
solve process synchronization problems.
Process Synchronization
 This is a mechanism which ensures that two or more concurrent processes do not simultaneously execute a particular program segment known as a critical section.
 When one process starts executing the critical section, the other processes should wait until the first one finishes.
Process Synchronization cont...

 Synchronization was introduced to handle problems that arise when multiple programs are executed concurrently.
Example of Process Synchronization
 Suppose that there are three processes, namely 1, 2 and 3. All three are executing concurrently and need to share a common resource (a critical section). Synchronization should be used here to avoid any conflicts in accessing this shared resource.
 Hence, when processes 1 and 2 both try to access that resource, it should be assigned to only one process at a time. If it is assigned to process 1, the other process (process 2) needs to wait until process 1 frees that resource.
RACE CONDITION
 A race condition is a situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
 To guard against a race condition, we need to ensure that only one process at a time can manipulate the shared data (for example, a shared counter variable). To make such a guarantee, we require that the processes be synchronized in some way.
THE CRITICAL-SECTION

 A critical section is the code segment that accesses shared variables and has to be executed as an atomic action.
THE CRITICAL-SECTION cont…
 The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. Thus the execution of critical sections by the processes is mutually exclusive in time.

 Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.
A solution to the critical-section
problem must satisfy the
following three requirements:

Mutual exclusion.
Progress.
Bounded waiting.
1.Mutual exclusion
 If process Pi is executing in its critical
section, then no other processes can be
executing in their critical sections.
2.Progress
 If no process is executing in its critical
section and some processes wish to enter
their critical sections, then only those
processes that are not executing in their
remainder sections can participate in
deciding which will enter its critical
section next, and this selection cannot be
postponed indefinitely.
3.Bounded waiting
 There exists a bound, or limit, on the
number of times that other processes are
allowed to enter their critical sections
after a process has made a request to
enter its critical section and before that
request is granted.
Two general approaches are used to
handle critical sections in operating
Systems:
A preemptive kernel allows a process to be preempted while it is running in kernel mode.

 A non-preemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Synchronization Hardware

 In hardware synchronization, many systems provide hardware support for critical-section code. A single-processor (uniprocessor) system could simply disable interrupts so that the currently running code executes without preemption; however, this approach is inefficient on multiprocessor systems.
SPECIAL HARDWARE
INSTRUCTIONS
 The most widely implemented instructions are:
 Test and Set instruction
 Exchange instruction
Test and Set instruction
 This instruction is executed atomically. Thus, if two test-and-set instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order.

 Its access to a memory location excludes any other access to that same shared location.
An algorithm using the test_and_set() instruction can be written to satisfy all the critical-section requirements.
Mutex Locks

 Rather than relying on the hardware-based solutions to the critical-section problem, operating-system designers build software tools to solve it. The simplest of these tools is the mutex lock.

 We use the mutex lock to protect critical regions and thus prevent race conditions.
Mutex Locks cont…

 A process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section. The acquire() function acquires the lock, and the release() function releases the lock.
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}
release() {
    available = true;
}
Semaphores

 A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
 The definition of wait() is as follows:
wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}
Semaphores cont…

 The definition of signal() is as follows:
signal(S) {
    S++;
}
Semaphore Usage

 Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. Thus, binary semaphores behave similarly to mutex locks. In fact, on systems that do not provide mutex locks, binary semaphores can be used instead for providing mutual exclusion.
Semaphore Implementation

The definitions of the wait() and signal() semaphore operations just described share the same problem: busy waiting. To overcome the need for busy waiting, we can modify the definitions of the wait() and signal() operations as follows:
 When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself.
Semaphore Implementation
cont…
The block operation places a process into a
waiting queue associated with the
semaphore, and the state of the process is
switched to the waiting state. Then
control is transferred to the CPU
scheduler, which selects another process
to execute.
The wait() semaphore operation
can be defined as:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
The signal() semaphore operation
can be defined as:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
Thanks
