
Process synchronization
Notes compiled by @kmp091, code and content based on Silberschatz Operating System Concepts 7th edition

cooperating process
- can affect or be affected by other processes in the system
- can share a logical address space (both code and data) or just share data with another process
- sharing a logical address space: lightweight processes or threads
- sharing data: data may become inconsistent

producer-consumer problem
- bounded buffer: a counter can be used as the basis for waiting until a producer can produce and a consumer can consume

e.g., producer code:

    while (true) {
        // produce an item in nextProduced
        while (counter == BUFFER_SIZE)
            ; // do nothing
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

consumer code:

    while (true) {
        while (counter == 0)
            ; // do nothing
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        // consume the item in nextConsumed
    }

- however, executing these two loops in parallel (concurrently) will not work properly
- what if counter++ and counter-- execute simultaneously? Conversion to machine language may yield variable execution sequences
- e.g., counter = 5, and counter++ and counter-- execute simultaneously on different processors; each processor has a register that acts as an accumulator
    - the register for CPU1, which executes the increment, copies counter: register1 = 5
    - CPU1 then increments it: register1 = 6
    - the register for CPU2, which executes the decrement, copies counter: register2 = 5
    - CPU2 then decrements it: register2 = 4
    - the incremented value was not stored back to counter in time for the decrement to execute correctly
    - whichever store finishes last wins: if the decrement's store happens later, 4 is returned to counter instead of the correct answer (5)
- the statements are interleaved (alternated) when executed as machine code, so the end result is unpredictable

race condition
- several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place
- to guard against this in the producer-consumer problem, it must be ensured that only one process at a time manipulates the variable counter (some form of synchronization/coordination is needed)

Critical-section problem
A system consisting of n processes can have a segment of code called a critical section.

critical section
- could involve changing common variables, updating a table, writing a file, etc.
- when one process is executing its critical section, no other process may execute its own critical section
(No two processes execute their critical sections at the same time.)

The critical-section problem involves a protocol that processes can use to cooperate:
- each process requests permission to enter its critical section (the code implementing this request is the entry section)
- an exit section may follow the critical section (the code right after the critical section)
- the remainder section contains the remaining code

e.g.,

    do {
        [entry section]
            critical section
        [exit section]
            remainder section
    } while (TRUE);

Solutions to the critical-section problem must satisfy three requirements:
1) Mutual exclusion - no other process can enter its critical section while a process is already in its own
2) Progress - if no process is in its critical section and some processes wish to enter theirs, only the processes that are not executing their remainder sections can participate in deciding which process enters next (and this selection cannot be postponed indefinitely)
3) Bounded waiting - there is a bound/limit on the number of times other processes can enter their critical sections after a process requests permission to enter its critical section and before that request is granted (a process x is guaranteed to enter its critical section, so other processes can get ahead of x only a limited number of times before x's request is granted)

examples of kernel race conditions:
1) the kernel maintains a list of open files; two files opened simultaneously produce competing updates to the list
2) maintenance of memory-allocation structures
3) maintenance of process lists
4) interrupt handling

Designing the kernel to be free of race conditions:
1) non-preemptive kernel
- inherently free of race conditions on kernel data structures (only one process is active in the kernel at a time)
- kernel-mode processes are not preempted (a process runs until it exits kernel mode, blocks, or voluntarily releases the CPU)
- an arbitrarily long process can run, making the system less responsive
- examples: Windows 2000, Windows XP, traditional UNIX
2) preemptive kernel
- must be carefully designed to avoid race conditions on shared kernel data
- difficult to design on SMP architectures (two kernel-mode processes may run simultaneously on different processors)
- more suitable for real-time programming (a real-time process may preempt a process currently running in the kernel), making the system more responsive
- examples: Linux 2.6, commercial UNIX (Solaris and IRIX)

Peterson's solution
- software-based solution to the critical-section problem
- not guaranteed to work on modern architectures, but a good algorithmic description of solving the critical-section problem (it illustrates the design complexities involved)
- restricted to two processes that alternate execution between their critical sections and remainder sections

    int turn;        // whose turn is it to enter the critical section?
    boolean flag[2]; // is a process ready to enter its critical section?

The two processes are Pi and Pj (with j == 1 - i); flag[i] == TRUE means Pi is ready to enter its critical section. Before entering, Pi also sets turn = j, offering the other process the first chance; if both processes try at once, the surviving value of turn decides which one actually proceeds.

The structure of process Pi defined by Peterson is as follows:

    do {
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ; // do nothing while Pj is ready and it is Pj's turn
        // critical section
        flag[i] = FALSE;
        // remainder section
    } while (TRUE);

If both processes wish to enter their critical sections at the same time, both flags are TRUE and turn is assigned both i and j in quick succession, but one of the assignments does not last (it is overwritten immediately). The eventual value of turn decides which of the two processes enters its critical section first.

Peterson's solution preserves mutual exclusion and satisfies the progress and bounded-waiting requirements:
1) mutual exclusion is preserved - Pi enters its critical section only when flag[j] == FALSE or turn == i (one process can execute its critical section only when the other process is not ready, or the turn variable is set to i)
2) progress and bounded waiting are satisfied - when one process finishes its critical section, it resets its flag to FALSE, allowing the other process to enter (progress), and a process trapped in the while statement waits for at most one entry by the other process, so waiting is not indefinite (bounded waiting)

Synchronization Hardware
- a hardware-based solution to the critical-section problem using locks
- critical sections are protected by locks: race conditions are prevented by requiring a process to acquire the lock protecting a critical region before entering it
- the lock is released when the process exits the critical section
- all further solutions to the critical-section problem are based on the premise of locking, although lock designs vary from implementation to implementation
- in a uniprocessor environment (single processor), the critical-section problem can be solved by preventing interrupts from occurring while a shared variable is being modified
    - this ensures the sequence of instructions executes in order without preemption, because no other process is allowed to run
    - this is the approach taken by non-preemptive kernels

- disabling interrupts cannot be done efficiently in multiprocessor environments (the message to disable interrupts must be passed to all processors, which is time-consuming and delays entry into critical sections)
- atomic data manipulation: special hardware instructions can test and modify the content of a word, or swap the contents of two words, as one uninterruptible unit
    - TestAndSet()
    - Swap()

Locks built on these instructions solve the critical-section problem:

    do {
        [acquire lock]
            critical section
        [release lock]
            remainder section
    } while (TRUE);

Semaphores
- a synchronization tool that avoids the complicated nature of the hardware-based solutions (TestAndSet() or Swap())
- an integer variable S accessed only through two standard atomic operations, wait() and signal()

    wait(S) {
        while (S <= 0)
            ; // do not reduce the semaphore below zero
        S--;
    }

    signal(S) {
        S++;
    }

All modifications to the semaphore must be executed indivisibly (atomically: no two processes can simultaneously modify the same semaphore value).
- the test of S in wait() (which prevents S from going negative) must also execute without interruption

Using semaphores
Operating systems use two kinds of semaphores:
- counting semaphore: unrestricted domain (any nonnegative integer)
- binary semaphore: 0 or 1; also known as mutex locks (they provide mutual exclusion)

> Semaphores can be used to solve the critical-section problem for multiple processes.
- n processes share the semaphore mutex, initialized to 1

    do {
        wait(mutex);
            // critical section
        signal(mutex);
            // remainder section
    } while (TRUE);

> Counting semaphores can control access to a resource with a finite number of instances (resource management); the semaphore is initialized to the number of available instances
- each process does a wait(), decrementing the count of available instances
- when a process releases the resource it does a signal(), and the count reflects the release
- when the semaphore is 0, all instances are in use (further requests block)

> Semaphores can also solve ordering/synchronization problems.
One implementation:
- a semaphore synch (the name is arbitrary) initialized to 0
- statement S1 (belonging to Process 1) must execute before statement S2 (belonging to Process 2)
- Process 1 and Process 2 share synch
- Process 1 executes S1 and then signal(synch)
- Process 2 executes wait(synch), and thus waits for synch to become positive, before executing S2

Terms:
busy waiting - while one process is in its critical section, any other process trying to enter must loop continuously in the entry code (a problem in a real multiprogramming system, where a single CPU is shared among many processes)
spinlock - a semaphore with the disadvantage of busy waiting: the process spins while waiting for the lock; the advantage is that no context switch is needed while waiting

overcoming busy waiting: put the waiting process in a waiting queue and let the CPU scheduler run another process; when signal() is executed, restart the waiting process with wakeup() and place it in the ready queue

implementation:

    typedef struct {
        int value;
        struct process *list; // the list of waiting processes; a pointer
                              // to a list using any queueing strategy
    } semaphore;


    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block(); // suspends the process that invokes it
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P); // resumes execution of the blocked process P
        }
    }

This implementation may produce negative semaphore values; the magnitude of a negative value is the number of processes waiting on that semaphore. This results from switching the order of the decrement and the test in the implementation of wait().
- wait() and signal() must be atomic, guaranteeing that no two processes execute them on the same semaphore at the same time
    - in a uniprocessor environment, inhibit interrupts while wait() or signal() is executing
    - in a multiprocessor environment, interrupts would have to be disabled on every processor, which is costly, so spinlocks are used instead to make wait() and signal() atomic
- these wait() and signal() implementations do not eliminate busy waiting, but they limit it to the short critical sections of the wait() and signal() code themselves

Deadlocks and starvation
- a semaphore implementation with a waiting queue can reach a state where two or more processes wait indefinitely for an event (a signal()) that can be caused only by one of the other waiting processes
- e.g., P0 executes wait(S) then wait(Q), while P1 executes wait(Q) then wait(S): both end up waiting for each other to execute a signal()
    - P0 waits for P1 to execute signal(Q)
    - P1 waits for P0 to execute signal(S)
- every process in the set is waiting for an event that can be caused only by another process in the set (this happens most often around resource acquisition and release)
starvation - indefinite blocking (a process may wait indefinitely in the semaphore's waiting queue, e.g., if processes are removed from the queue in LIFO order)

Classic synchronization problems
These problems are used to test new synchronization schemes.

1) Bounded-Buffer Problem
- a pool of n buffers, each capable of holding one item
- a mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to 1
- the empty and full semaphores count the number of empty and full buffers (initialized to n and 0 respectively)

Producer:

    do {
        ...
        // produce an item in nextp
        ...
        wait(empty);
        wait(mutex);
        ...
        // add nextp to buffer
        ...
        signal(mutex);
        signal(full);
    } while (TRUE);

Consumer:

    do {
        wait(full);
        wait(mutex);
        ...
        // remove an item from buffer to nextc
        ...
        signal(mutex);
        signal(empty);
        ...
        // consume the item in nextc
        ...
    } while (TRUE);

2) Readers-Writers Problem
- a database is shared among several concurrent processes
- some processes only read (readers); some read and write/update (writers)
- readers accessing the data simultaneously causes no harm
- a writer and any other thread (reader or writer) accessing the data simultaneously = chaos
- access to the shared database must be exclusive while it is being written
- first readers-writers problem: no reader is kept waiting unless a writer has already obtained permission to use the shared object (readers do not wait for other readers)
- second readers-writers problem: once a writer is ready, it performs its write as soon as possible
- solutions to both problems involve starvation (first case: writers may starve; second case: readers may starve)

solution to the first readers-writers problem:

    semaphore mutex = 1; // mutual exclusion when readcount is updated
    semaphore wrt = 1;   // mutual exclusion for writers
    int readcount = 0;   // how many processes are currently reading the object

Writer:

    do {
        wait(wrt);
        ...
        // writing is performed
        ...
        signal(wrt);
    } while (TRUE);

Reader:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        ...
        // reading is performed
        ...
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);

Some systems generalize this into reader-writer locks. A process requests a lock in read mode or write mode, whichever is appropriate. Multiple processes may hold a reader-writer lock concurrently in read mode, but only one process may hold it in write mode. Reader-writer locks are useful when there are more readers than writers, or when it is easy to identify which threads only read shared data and which threads only write it.

3) Dining Philosophers Problem
- five philosophers think and eat, sharing a circular table surrounded by five chairs, each belonging to one philosopher
- they share a bowl of rice and five chopsticks
- when a philosopher thinks, she does not interact with her colleagues
- when a philosopher is hungry, she tries to pick up the two chopsticks closest to her (shared with her left and right neighbors), and can pick up only one at a time
- a philosopher cannot pick up a chopstick someone else is using
- while eating, a philosopher does not release her chopsticks
- she puts down both chopsticks after eating, and thinks again
- represents many concurrency and starvation problems in a simple form

A bad solution:
- each chopstick is a semaphore (semaphore chopstick[5];) initialized to 1
- a philosopher picks up a chopstick with a wait() operation and releases it with signal()
- but one does not simply wait() for both chopsticks, eat, signal() and think again: this deadlocks when all philosophers become hungry simultaneously (each holds her left chopstick and waits forever for her right one)

A pretty good solution (any of the following):
- allow at most four philosophers to sit at the table simultaneously
- allow a philosopher to pick up her chopsticks only if both are available (picking them up inside a critical section)
- use an asymmetric solution (an odd philosopher picks up her left chopstick first; an even philosopher picks up her right chopstick first)

Solutions must also ensure that no philosopher starves to death (a deadlock-free solution is not automatically starvation-free).

Monitors
- using semaphores incorrectly causes timing errors that are difficult to reproduce (they occur only under particular execution sequences), e.g.:

    // exchanging the order of wait() and signal()
    // violates mutual exclusion:
    signal(mutex);
    ... critical section ...
    wait(mutex);

    // replacing signal() with wait() results in a deadlock:
    wait(mutex);
    ... critical section ...
    wait(mutex);

- the monitor, a high-level synchronization construct, helps prevent such programmer errors

Why use monitors
- a monitor provides a set of programmer-defined operations that are guaranteed mutual exclusion within the monitor
- it ensures that only one process at a time can be active within the monitor
- the programmer does not code the synchronization constraint explicitly (it is abstracted away)
- the plain monitor construct is not sufficiently powerful in some cases, so the programmer can define additional conditions that support wait() and signal() operations
- x.signal() has no effect when no process is suspended on condition x
- when a process x executes signal() and thereby releases a suspended process y, both cannot be active within the monitor at the same time, so either:
    - signal and wait: x waits until y leaves the monitor, or waits for another condition
    - signal and continue: y waits until x leaves the monitor, or waits for another condition

Dining Philosophers and Monitors
- a philosopher may pick up her chopsticks only if both of them are available
- three philosopher states are defined:

    enum { thinking, hungry, eating } state[5];

- a philosopher can set state[i] to eating only if her two neighbors are not eating:

    if (state[(i + 4) % 5] != eating && state[(i + 1) % 5] != eating)
        state[i] = eating;

- a condition must also be set so the philosopher can delay herself when she is hungry but unable to get the chopsticks she needs:

    condition self[5];

- a monitor dp then contains the operations pickup(), putdown(), test() and the initialization code, along with the state of each philosopher and the condition variables

- a philosopher must invoke pickup(i) before eating, and putdown(i) afterwards
- no two neighbors eat simultaneously (so no deadlock occurs), but it is still possible for a philosopher to starve to death

Implementing a Monitor
Mutual exclusion
- a process executes wait(mutex) before entering the monitor, and signal(mutex) after leaving it
- a semaphore next is introduced because a signaling process must wait until the resumed process either leaves the monitor or waits again; signaling processes suspend themselves on next
- an integer next_count counts the number of processes suspended on next

Conditions
- for each condition x, introduce a semaphore x_sem and an integer variable x_count, both initialized to 0
- x.wait() can then be implemented as:

    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;

- x.signal() can be implemented as:

    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }

Process resumption in a monitor
- if several processes are suspended on condition x and some process executes x.signal(), how do we determine which of the suspended processes should be resumed next?
- a conditional-wait construct can be used to control the order of resumption
- a priority number is defined (lower number = higher priority; the highest-priority waiting process runs next when x.signal() is executed)

Synchronization Examples
Solaris
- adaptive mutexes
    - protect access to critical data items
    - start as a standard semaphore implemented as a spinlock (used only when a lock will be held for less than a few hundred instructions)
    - a thread spins if the lock is held by a thread currently running on another CPU (the holder is likely to finish soon)
    - a thread blocks (sleeps) if the lock holder is not currently running (making it spin would be inefficient)
    - condition variables and semaphores are used for longer code segments
- reader-writer locks
    - protect frequently accessed data that is usually accessed read-only

- turnstiles
    - order the list of threads waiting to acquire either an adaptive mutex or a reader-writer lock
    - a queue structure containing the threads blocked on a lock
    - arranged according to a priority-inheritance protocol (a lower-priority thread that holds a lock temporarily inherits the priority of a higher-priority waiting thread until it releases the lock)

Windows XP
- uses spinlocks to protect access to global resources (short code segments only)
- a thread is never preempted while holding a spinlock
- dispatcher objects provide synchronization outside the kernel
    - signaled state: the object is available and a thread will not block when acquiring it
    - nonsignaled state: the object is not available and a thread will block when attempting to acquire it
    - events act as condition variables (they notify a waiting thread when a desired condition occurs)
    - timers notify threads that a specified amount of time has expired
    - threads blocked on a nonsignaled dispatcher object are placed in a waiting queue for that object

Linux
- provides spinlocks and semaphores, plus reader-writer versions of both
- on SMP systems, spinlocks are the fundamental locking mechanism; on a single processor, the spinlock operations map to kernel-preemption control:

                        lock                         unlock
    single processor    disable kernel preemption    enable kernel preemption
    multiprocessor      acquire spinlock             release spinlock

Disabling and enabling preemption is controlled by a preempt counter (if the counter is greater than 0, it is not safe to preempt).

Pthreads
- mutex locks
- condition variables
- read-write locks

Atomic Transactions
- mutual exclusion ensures that critical sections execute atomically
- two critical sections executed concurrently have the same effect as executing them sequentially in some unknown order
- database systems are deeply concerned with data consistency, so database techniques benefit operating systems (which can resemble databases, as they also manage data)

ensuring atomicity:
1) system model
transaction - a collection of instructions that performs a single logical function

* a transaction's atomicity should be preserved despite the possibility of failures in the system
* a transaction that finishes executing is committed; otherwise it is aborted
* when a transaction is aborted, its changes must be rolled back (the previous values must have been stored)

Storage media
- volatile storage: does not survive system crashes
- nonvolatile storage: survives system crashes
- stable storage: information is never lost

2) log-based recovery
- write-ahead logging: the system maintains a log that describes each operation of a transaction

a write record in the LOG contains:
- transaction name
- data item name
- old value
- new value
separate records mark:
- the start of a transaction
- the commit/abort of a transaction

Before a transaction executes, commits, etc., its activity is written to the log. Every write operation is preceded by writing the corresponding log record, and log records must reach stable storage before the transaction's write is actually performed.

undo and redo - idempotent operations that reset or reapply a transaction's writes; multiple executions must have the same result as one execution
> undo a transaction if the log records that it started but does not record a commit
> redo a transaction if the log contains both its start and its commit

3) checkpoints
- searching the whole log is time-consuming; checkpoints can be added to reduce the overhead
- in addition to the write-ahead log, the system periodically performs a checkpoint that:
    - outputs all log records currently residing in volatile storage to stable storage
    - outputs all modified data residing in volatile storage to stable storage
    - outputs a log record <checkpoint> to stable storage
- checkpoints streamline recovery (only activity after the last checkpoint needs attention)

4) concurrent atomic transactions
- multiple transactions execute at a time
- each transaction is atomic, so their concurrent execution must be equivalent to executing them one after another in some order
- this property is called serializability; it can be maintained by executing each transaction within a critical section
    - all transactions share a common semaphore mutex, initialized to 1
    - the first action of a transaction is wait(mutex)
    - signal(mutex) is executed when the transaction commits or aborts

Serializability
- consider two data items that are both read and written by two transactions
- the transactions are executed in some order, called a schedule
- serial schedule: the transactions execute atomically, one after the other
- nonserial schedule: the transactions' operations overlap in time
- conflicting operations are possible (two transactions access the same data item, and at least one of them writes it)
- a nonserial schedule is not necessarily incorrect
- if a nonserial schedule can be transformed into a serial schedule by swapping nonconflicting operations, it produces the same output as that serial schedule; such a schedule is called conflict serializable

locking protocol: each data item has a lock, and a protocol governs how locks are acquired and released
- shared mode: a transaction can read but not write the data item
- exclusive mode: a transaction can both read and write the data item
- a transaction requests a lock in the appropriate mode on a data item, depending on the operations it will perform
- a transaction requesting a shared lock on a data item held in exclusive mode must wait
- a transaction can unlock a data item it locked earlier, but must hold the lock as long as it is operating on the item
- it is not always desirable to unlock a data item immediately after the transaction's last access, because serializability may then not be ensured

two-phase locking protocol: each transaction issues its lock and unlock requests in two phases
- growing phase: the transaction may obtain locks but may not release any
- shrinking phase: the transaction may release locks but may not obtain new ones
- a transaction starts in the growing phase, acquiring locks as needed; once it releases a lock, it enters the shrinking phase
- ensures conflict serializability

timestamp-based protocols
- selecting the serializability order in advance helps determine serializability
- deadlock-free (no transaction ever waits)
- a timestamp is assigned to each transaction before it executes
- transactions can be stamped using a logical counter, incremented after each new timestamp is assigned
- they could also be stamped by the system clock, but clocks on different systems vary when transactions come from different places
- the timestamps determine the serializability order: the produced schedule must be equivalent to the serial schedule in timestamp order
- two timestamp values are kept per data item Q:
    - W-timestamp(Q): the largest timestamp of any transaction that successfully executed write(Q)
    - R-timestamp(Q): the largest timestamp of any transaction that successfully executed read(Q)

When a transaction with timestamp TS issues read(Q):
- if TS < W-timestamp(Q), the transaction would read a value that was already overwritten, so the read is rejected and the transaction is rolled back
- if TS >= W-timestamp(Q), the read executes and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS

When a transaction with timestamp TS issues write(Q):
- if TS < R-timestamp(Q), the value the transaction is producing was needed previously and the system assumed it would never be produced, so the write is rejected and the transaction is rolled back
- if TS < W-timestamp(Q), the transaction is attempting to write an obsolete value, so the write is rejected and the transaction is rolled back
- otherwise, the write executes and W-timestamp(Q) is set to TS

