
PROCESS SCHEDULING

Introduction
Scheduling is the method by which threads, processes, or data flows are given access to system resources (e.g. processor time, communications bandwidth). This is usually done to load-balance a system effectively or to achieve a target quality of service. The need for a scheduling algorithm arises from the requirement of most modern systems to perform multitasking (execute more than one process at a time) and multiplexing (transmit multiple flows simultaneously). In a single-processor system, only one process can run at a time; any others must wait until the CPU is free and can be rescheduled.

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The idea is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU then just sits idle. All this waiting time is wasted; no useful work is accomplished. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives it to another process. This pattern continues: every time one process has to wait, another process can take over use of the CPU.

Scheduling of this kind is a fundamental operating system function. Almost all computer resources are scheduled before use. The CPU is one of the primary computer resources, so its scheduling is central to operating system design.

Objectives of Process Scheduling


Fairness: Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not necessarily fair; think of safety control versus payroll at a nuclear plant.

Policy Enforcement: The scheduler has to make sure that the system's policy is enforced. For example, if the local policy is safety, then the safety-control processes must be able to run whenever they want to, even if it means a delay in the payroll processes.

Efficiency: The scheduler should keep the system (in particular, the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.

Response Time: A scheduler should minimize the response time for interactive users.

Turnaround: A scheduler should minimize the time batch users must wait for output.

Throughput: A scheduler should maximize the number of jobs processed per unit time.

Preemptive Vs. Non-preemptive Scheduling


Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

Non-preemptive Scheduling: A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. Some characteristics of non-preemptive scheduling:
In a non-preemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
In a non-preemptive system, response times are more predictable because incoming high-priority jobs cannot displace waiting jobs.
In non-preemptive scheduling, the scheduler dispatches a new job in the following two situations:
o When a process switches from the running state to the waiting state.
o When a process terminates.

Preemptive Scheduling: A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, and it stands in contrast to the "run to completion" method.

Scheduling Techniques
First-come First-serve Scheduling
Round Robin Scheduling
Priority Scheduling
Shortest Job First Scheduling

First-come First-serve Scheduling


The first-come, first-serve (FCFS) scheduling algorithm is by far the simplest process scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand. However, the average waiting time under the FCFS policy can be quite long.

Example: three processes with compute times 12, 3, and 3.
Job arrival order P1, P2, P3; job execution order P1, P2, P3.
Average response time = (12 + 15 + 18)/3 = 15
Job arrival order P2, P3, P1; job execution order P2, P3, P1.
Average response time = (3 + 6 + 18)/3 = 9

Advantages: Simple.

Disadvantages: Average waiting time is highly variable; shorter jobs may wait behind longer ones. This may lead to poor overlap between I/O and CPU processing: CPU-bound processes make I/O-bound processes wait, and hence the I/O devices remain idle.
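The FCFS arithmetic in the example above can be checked with a short sketch (a minimal illustration; the function name and job lists are invented for this example, not part of any OS API):

```python
def fcfs_completion_times(bursts):
    """Return each job's completion time when jobs run to completion in arrival order."""
    clock = 0
    completions = []
    for burst in bursts:
        clock += burst                  # no preemption: the job holds the CPU until done
        completions.append(clock)
    return completions

# Arrival order P1, P2, P3 (compute times 12, 3, 3)
avg1 = sum(fcfs_completion_times([12, 3, 3])) / 3    # (12 + 15 + 18) / 3 = 15

# Arrival order P2, P3, P1
avg2 = sum(fcfs_completion_times([3, 3, 12])) / 3    # (3 + 6 + 18) / 3 = 9
```

Reordering the same jobs so the short ones run first cuts the average response time from 15 to 9, which is exactly the variability the disadvantage above describes.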

Shortest Job First Scheduling


Here, processes are executed in increasing order of their burst times: the process with the minimum burst time runs first. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. The scheduling depends on the length of the next CPU burst of a process, rather than its total length. This algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes: moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process, so the average waiting time decreases. It can be either preemptive or non-preemptive.

Example:
Process Burst time (ms)
A 25
B 54
C 12
D 19
Therefore, the order of execution of the processes will be: C, D, A, B.
Average response time of the system = (12 + 31 + 56 + 110)/4 = 52.25 ms

Advantages: It always produces the minimum average response time for the given processes.

Disadvantages: A continuous stream of short jobs can starve the longer jobs. Finding the shortest runnable process is problematic, because the length of a process's next CPU burst cannot be known in advance and must be estimated.
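The SJF ordering and the resulting average can be sketched as follows (a minimal illustration using the burst times from the example above; Python's stable sort preserves input order for equal bursts, giving the FCFS tie-break the text describes):

```python
def sjf_order(processes):
    """Sort (name, burst) pairs by burst time; ties keep input (FCFS) order."""
    return sorted(processes, key=lambda p: p[1])

procs = [("A", 25), ("B", 54), ("C", 12), ("D", 19)]
order = sjf_order(procs)                       # C, D, A, B

clock = 0
completions = []
for name, burst in order:
    clock += burst
    completions.append(clock)                  # 12, 31, 56, 110
avg = sum(completions) / len(completions)      # 52.25 ms
```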

Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst: the longer the CPU burst, the lower the priority, and vice versa. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of the process; for example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been used in computing priorities. External priorities are set by criteria outside the operating system, such as the importance of the process. Priority scheduling can be either preemptive or non-preemptive. Example: the processes A, B, C, D, E are assigned priority numbers 2, 3, 5, 1, 4. Taking a larger number to mean a higher priority, the execution order of the processes will be: C, E, B, A, D.
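The selection rule can be sketched in a few lines. Note that the "larger number = higher priority" convention is an assumption for this illustration; some systems (e.g. Unix nice values) use the opposite convention:

```python
def priority_order(processes):
    """Sort (name, priority) pairs, highest priority number first.
    Assumes larger number = higher priority; ties keep input (FCFS) order."""
    return sorted(processes, key=lambda p: p[1], reverse=True)

procs = [("A", 2), ("B", 3), ("C", 5), ("D", 1), ("E", 4)]
order = [name for name, _ in priority_order(procs)]   # C, E, B, A, D
```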

Round Robin Scheduling


The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum, is defined; it is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to one time quantum. To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process. One of two things will then happen. The process may have a CPU burst of less than one time quantum; in this case, the process itself releases the CPU voluntarily, and the scheduler proceeds to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than one time quantum, the timer will go off and cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue. The average waiting time under the RR policy is often long.

Example: the time quantum could be 100 ms. If job1 takes a total of 250 ms to complete, the round-robin scheduler will suspend it after 100 ms and give the other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time, and the cycle will repeat. This continues until the job finishes and needs no more time on the CPU.
Job1: total time to complete 250 ms (quantum 100 ms).
First allocation = 100 ms. Second allocation = 100 ms. Third allocation = up to 100 ms, but job1 finishes after 50 ms.
Total CPU time of job1 = 250 ms.
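The queue mechanics described above can be simulated with a small sketch (an illustration only, ignoring context-switch overhead; the function and job names are invented for this example):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate RR over a dict of name -> burst time; return finish times."""
    queue = deque(jobs.items())        # FIFO ready queue
    clock = 0
    finish = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for up to one time quantum
        clock += run
        remaining -= run
        if remaining:
            queue.append((name, remaining))  # preempted: back to the tail
        else:
            finish[name] = clock             # burst ended within the quantum
    return finish

# job1 alone, 250 ms burst, 100 ms quantum: allocations of 100, 100, 50 ms
round_robin({"job1": 250}, 100)              # job1 finishes at t = 250 ms
```

With a second job of 100 ms added, job1's slices interleave with it exactly as the example describes: job1 runs, is preempted, waits for the other job's share, then resumes.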
