
SUBJECT: OPERATING SYSTEM
UNIT III: PROCESS SCHEDULING
PATEL GROUP OF INSTITUTION

Q. What is scheduling? Explain various short term scheduling criteria.
Q. When and how are the short-term, medium-term and long-term scheduling policies applied? Draw the queuing diagram for scheduling.

Ans: Process Scheduling
Scheduling is a fundamental function of an operating system. All computer resources are scheduled before use. Since the CPU is one of the primary computer resources, its scheduling is central to operating system design. Scheduling refers to the set of policies and mechanisms supported by the operating system that controls the order in which the work to be done is completed. A scheduler is an operating system program that selects the next job to be admitted for execution. The main objective of scheduling is to increase CPU utilization and throughput, where throughput is the amount of work accomplished in a given time interval.

CPU scheduling is the basis of an operating system that supports multiprogramming. By having a number of programs in computer memory at the same time, the CPU may be shared among them. The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned, and to which processes, is called processor scheduling or CPU scheduling. When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Types of Scheduler
1. Short term Scheduler / CPU Scheduler
The short term scheduler selects a process for the processor from among the processes that are already in the ready queue (in memory). This scheduler executes quite frequently (typically at least once every 10 milliseconds), so it has to be very fast in order to achieve good processor utilization. (It dispatches processes from the ready queue.)

2. Long term Scheduler / Job Scheduler (loads from disk)
The long term scheduler selects processes from the process pool (on disk) and loads the selected processes into memory for execution. The long term scheduler executes much less frequently than the short term scheduler. It controls the degree of multiprogramming (the number of processes in memory at a time).

3. Medium term Scheduler
Sometimes it is beneficial to reduce the degree of multiprogramming by removing a process from memory and storing it on disk. Such processes can later be reintroduced into memory by the medium-term scheduler. This operation is also known as swapping.

============================================================

Q. Explain the goals of scheduling.

Ans: Goals of scheduling (objectives)
In this section we try to answer the following question: what does the scheduler try to achieve? Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, etc. Some of these goals depend on the system in use, for example a batch system, an interactive system or a real-time system, but there are also some goals that are desirable in all systems.

Fairness
Fairness is important under all circumstances. A scheduler makes sure that each process gets its fair share of the CPU and that no process suffers indefinite postponement. Note that giving equivalent or equal time is not necessarily fair; think of safety control and payroll processes at a nuclear plant.

Policy Enforcement
The scheduler has to make sure that the system's policy is enforced. For example, if the local policy is safety, then the safety control processes must be able to run whenever they want to, even if it means a delay in payroll processes.

Efficiency
The scheduler should keep the system (in particular the CPU) busy one hundred percent of the time when possible. If the CPU and all the input/output devices can be kept running all the time, more work gets done per second than if some components are idle.

Response Time
A scheduler should minimize the response time for interactive users.

Turnaround
A scheduler should minimize the time batch users must wait for output.

Throughput
A scheduler should maximize the number of jobs processed per unit time.

Be Predictable
A given job should take about the same amount of time and cost about the same to run regardless of the load on the system.

Minimize Overhead
Scheduling should minimize the resources wasted as overhead.

============================================================

Q. Explain various CPU Scheduling Criteria.

Ans: The goal of a scheduling algorithm is to identify the process whose selection will result in the best possible system performance. Different scheduling algorithms have different properties and may favor one class of processes over another; to determine which algorithm is best, a number of criteria are used for comparison. The scheduling algorithms differ in the importance they give to each of these criteria.

1. CPU Utilization
The key idea is that if the CPU is busy all the time, the utilization of all the other components of the system will also be high. CPU utilization is the ratio of the busy time of the processor to the total time taken for the processes to finish.
Formula: Processor utilization = (processor busy time) / (processor busy time + processor idle time)

2. Throughput
Throughput refers to the amount of work completed in a unit of time. One way to measure throughput is the number of processes completed in a unit of time. The higher the number of processes, the more work is apparently being done by the system.
Formula: Throughput = (number of processes completed) / (time unit)

3. Turnaround Time
Turnaround time is the interval from the time of submission of a process to the time of its completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and performing I/O operations.
Formula: Turnaround time = t(process completed) - t(process submitted)

4. Waiting Time
This is the time spent in the ready queue. In a multiprogramming operating system several jobs reside in memory at a time, but the CPU executes only one job at a time; the rest of the jobs wait for the CPU. The waiting time is the turnaround time less the actual processing time.
Formula: Waiting time = Turnaround time - Processing time

5. Response Time
This is the time between submission and the first response.
Formula: Response time = t(first response) - t(submission of request)
Response time is used in time sharing and real time operating systems, but its definition differs in the two. In a time sharing system it may be defined as the interval from the time the last character of a command line of a program or transaction is entered to the time the last result appears on the terminal. In a real time system it may be defined as the interval from the time an internal or external event is signaled to the time the first instruction of the respective service routine is executed. Throughput and CPU utilization may be increased by executing a larger number of processes, but then response time may suffer.
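The formulas above translate directly into small helper functions. The sketch below is only illustrative; the function and variable names are our own, and all times are assumed to be in the same unit (for example milliseconds).

# Illustrative helpers for the scheduling criteria defined above.
def processor_utilization(busy_time, idle_time):
    # Processor utilization = busy time / (busy time + idle time)
    return busy_time / (busy_time + idle_time)

def throughput(processes_completed, total_time):
    # Throughput = number of processes completed / time unit
    return processes_completed / total_time

def turnaround_time(completion_time, submission_time):
    # Turnaround time = t(process completed) - t(process submitted)
    return completion_time - submission_time

def waiting_time(turnaround, processing_time):
    # Waiting time = turnaround time - processing time
    return turnaround - processing_time

def response_time(first_response, submission_time):
    # Response time = t(first response) - t(submission of request)
    return first_response - submission_time

# Example: a process submitted at t=0 that completes at t=21 after 8 ms of CPU time.
tat = turnaround_time(21, 0)
print(tat, waiting_time(tat, 8))   # prints: 21 13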


============================================================

Q. What is processor scheduling? Write a short note on the Round Robin algorithm in detail.
Q. What is preemption? Explain various preemptive scheduling policies.

Ans: Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. Scheduling algorithms can be divided into two categories with respect to how they deal with clock interrupts.

Non-preemptive Scheduling
A scheduling discipline is non-preemptive if, once a process has been given the CPU, the CPU cannot be taken away from that process. Following are some characteristics of non-preemptive scheduling:
o In a non-preemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
o In a non-preemptive system, response times are more predictable because incoming high priority jobs cannot displace waiting jobs.

In non-preemptive scheduling, the scheduler selects a new job in the following two situations:
o When a process switches from the running state to the waiting state.
o When a process terminates.

First Come First Served (FCFS) and Shortest Job First (SJF) are considered non-preemptive scheduling algorithms.

Preemptive Scheduling
A scheduling discipline is preemptive if the CPU can be taken away from a process that has been given it. Preemption means the operating system moves a process from running to ready without the process requesting it. An OS implementing such an algorithm switches to the processing of a new request before completing the processing of the current request; the preempted request is put back into the list of pending requests. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling, in contrast to the "run to completion" method. Round Robin scheduling, priority based scheduling and SRTN scheduling are considered preemptive scheduling algorithms.

1. First Come First Serve (FCFS)
The simplest scheduling algorithm is First Come First Serve (FCFS). Jobs are scheduled in the order they are received. FCFS is non-preemptive.


Implementation is easily accomplished by maintaining a queue of the processes to be scheduled, or by storing the time each process was received and selecting the process with the earliest time.

Example 1: Draw the Gantt chart for the FCFS policy, considering the following set of processes that arrive at time 0, with the length of CPU time given in milliseconds. Calculate the average waiting time, average turnaround time, throughput and CPU utilization.

Process   Processing Time
P1        13
P2        08
P3        83

Solution: If the processes arrive in the order P1, P2, P3, the Gantt chart will be:

| P1 | P2 | P3 |
0    13   21   104

Process   Completed Time   Turnaround Time    Waiting Time
P1        13               13 - 0 = 13        13 - 13 = 0
P2        21               21 - 0 = 21        21 - 8 = 13
P3        104              104 - 0 = 104      104 - 83 = 21

(Turnaround Time = t(process completed) - t(process submitted); Waiting Time = Turnaround time - Processing time)

Average Turnaround Time = (13 + 21 + 104) / 3 = 46 ms
Average Waiting Time = (0 + 13 + 21) / 3 = 11.33 ms
Throughput = number of processes completed / time unit = 3 / 104 = 0.028
Processor Utilization = processor busy time / (processor busy time + processor idle time) = (104 / (104 + 0)) x 100 = 100%


Example 2: Calculate the turnaround time, waiting time, average turnaround time, average waiting time, throughput and processor utilization for the given set of processes, which arrive at the arrival times shown in the table, with the length of processing time given in milliseconds.

Process   Arrival Time   Processing Time
P1        0              3
P2        2              3
P3        3              1
P4        5              4
P5        8              2

Solution: If the processes are served in order of arrival, the Gantt chart will be:

| P1 | P2 | P3 | P4 | P5 |
0    3    6    7    11   13

Process   Completed Time   Turnaround Time   Waiting Time
P1        3                3 - 0 = 3         3 - 3 = 0
P2        6                6 - 2 = 4         4 - 3 = 1
P3        7                7 - 3 = 4         4 - 1 = 3
P4        11               11 - 5 = 6        6 - 4 = 2
P5        13               13 - 8 = 5        5 - 2 = 3

(Turnaround Time = t(process completed) - t(process submitted); Waiting Time = Turnaround time - Processing time)

Average Turnaround Time = (3 + 4 + 4 + 6 + 5) / 5 = 4.4 ms
Average Waiting Time = (0 + 1 + 3 + 2 + 3) / 5 = 1.8 ms
Throughput = number of processes completed / time unit = 5 / 13 = 0.38
Processor Utilization = (13 / (13 + 0)) x 100 = 100%
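The FCFS results in Examples 1 and 2 can be checked with a short program. This is only a sketch with our own naming; each process is described as (name, arrival time, processing time), and the processor is assumed idle until the first arrival.

# FCFS sketch: processes are served strictly in order of arrival.
def fcfs(processes):
    # processes: list of (name, arrival_time, processing_time)
    time = 0
    results = []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)          # CPU may be idle until this process arrives
        completion = time + burst
        turnaround = completion - arrival  # turnaround = completion - submission
        waiting = turnaround - burst       # waiting = turnaround - processing time
        results.append((name, completion, turnaround, waiting))
        time = completion
    return results

procs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 1), ("P4", 5, 4), ("P5", 8, 2)]
res = fcfs(procs)
print(sum(r[2] for r in res) / len(res))   # average turnaround time: 4.4
print(sum(r[3] for r in res) / len(res))   # average waiting time: 1.8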


2. Shortest Job First (SJF)
This algorithm assigns the CPU to the process that has the smallest next CPU processing time (burst time) when the CPU becomes available. In case of a tie, FCFS scheduling can be used to break it. SJF was originally implemented in batch processing environments.

Example 3: Consider the following set of processes, all arriving at the same time, with the given processing times. Calculate the average turnaround time, average waiting time and throughput.

Process   Processing Time
P1        06
P2        08
P3        07
P4        03

Solution: Using SJF scheduling, the process with the shortest burst executes first, so the Gantt chart will be:

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Because the shortest processing time is that of process P4, it runs first, then P1, then P3 and finally P2. The waiting time is 3 ms for P1, 16 ms for P2, 9 ms for P3 and 0 ms for P4:

Process   Completed Time   Turnaround Time   Waiting Time
P4        3                3 - 0 = 3         3 - 3 = 0
P1        9                9 - 0 = 9         9 - 6 = 3
P3        16               16 - 0 = 16       16 - 7 = 9
P2        24               24 - 0 = 24       24 - 8 = 16

Average Turnaround Time = (3 + 9 + 16 + 24) / 4 = 13 ms
Average Waiting Time = (0 + 3 + 9 + 16) / 4 = 7 ms
Throughput = number of processes completed / time unit = 4 / 24 = 0.16
Processor Utilization = (24 / (24 + 0)) x 100 = 100%
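When every job arrives at time 0, non-preemptive SJF amounts to running the jobs in increasing order of burst time. The sketch below reproduces Example 3; the names and structure are our own.

# Non-preemptive SJF sketch for jobs that all arrive at time 0.
def sjf_same_arrival(processes):
    # processes: list of (name, burst_time)
    time = 0
    completion = {}
    for name, burst in sorted(processes, key=lambda p: p[1]):
        time += burst
        completion[name] = time   # completion time equals turnaround time here (arrival is 0)
    return completion

procs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
done = sjf_same_arrival(procs)
bursts = dict(procs)
avg_tat = sum(done.values()) / len(done)
avg_wait = sum(done[p] - bursts[p] for p in done) / len(done)
print(avg_tat, avg_wait)          # prints: 13.0 7.0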

(Explain the round-robin scheduling policy with an example.)

Round Robin (RR)
Round Robin (RR) scheduling is a preemptive algorithm that gives the CPU, in turn, to the process that has been waiting the longest. It is one of the oldest, simplest and most widely used algorithms. Round robin scheduling is primarily used in time-sharing and multi-user environments where the primary requirement is to provide reasonably good response times and, in general, to share the system fairly among all users. Basically the CPU time is divided into time slices: each process is allocated a small time slice called a quantum, and no process can run for more than one quantum while others are waiting in the ready queue. If a process needs more CPU time after exhausting one quantum, it goes to the end of the ready queue to await the next allocation. To implement RR scheduling, a FIFO queue of ready processes is maintained. A new process is added at the tail of the queue, the CPU scheduler picks the first process from the head of the queue and allocates the processor to it for one time quantum, and after that the scheduler selects the next process in the ready queue.

Example 4: Consider the following set of processes with the processing times given in milliseconds.

Process   Processing Time
P1        24
P2        03
P3        03

Solution: If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum and the CPU is given to the next process in the queue, P2. Since P2 does not need the full 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, P3. Once each process has received one time quantum, the CPU is returned to P1 for additional time quanta. The Gantt chart will be:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30


Process   Completed Time   Turnaround Time   Waiting Time
P1        30               30 - 0 = 30       30 - 24 = 6
P2        7                7 - 0 = 7         7 - 3 = 4
P3        10               10 - 0 = 10       10 - 3 = 7

Average turnaround time = (30 + 7 + 10) / 3 = 47 / 3 = 15.66 ms
Average waiting time = (6 + 4 + 7) / 3 = 17 / 3 = 5.66 ms
Throughput = 3 / 30 = 0.1
Processor utilization = (30 / 30) x 100 = 100%
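The behaviour in Example 4 can be reproduced with a simple FIFO queue. The sketch below is illustrative only, assumes all processes arrive at time 0, and uses names of our own choosing.

from collections import deque

# Round robin sketch: all processes arrive at time 0, fixed time quantum q.
def round_robin(processes, q):
    # processes: list of (name, burst_time)
    remaining = dict(processes)
    ready = deque(name for name, _ in processes)
    time, completion = 0, {}
    while ready:
        name = ready.popleft()
        run = min(q, remaining[name])   # run for one quantum or until the job finishes
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = time
        else:
            ready.append(name)          # an unfinished job goes back to the tail of the queue
    return completion

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
print(done)                             # prints: {'P2': 7, 'P3': 10, 'P1': 30}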

3. Shortest Remaining Time Next (SRTN)
This is the preemptive version of shortest job first. It permits a process that enters the ready list to preempt the running process if the time for the new process (or for its next burst) is less than the remaining time for the running process (or for its current burst). Let us understand this with the help of an example.

Example 5: Consider the set of four processes that arrive at the times described in the table.

Process   Arrival Time   Processing Time
P1        0              5
P2        1              2
P3        2              5
P4        3              3

Solution: At time 0, only process P1 has entered the system, so it is the process that executes. At time 1, process P2 arrives; at that time, process P1 has 4 time units left to execute. Since P2's processing time (2 units) is less than P1's remaining time (4 units), P2 starts executing at time 1.

At time 2, process P3 enters the system with a processing time of 5 units. Process P2 continues executing as it has the minimum remaining time when compared with P1 and P3. At time 3, process P2 terminates and process P4 enters the system. Of the processes P1, P3 and P4, P4 has the smallest remaining execution time, so it starts executing. P1 resumes at time 6, and when it terminates at time 10, process P3 executes. The Gantt chart is shown below:

| P1 | P2 | P4 | P1 | P3 |
0    1    3    6    10   15

The turnaround time for each process is computed by subtracting its arrival time from the time it terminated:
Turnaround Time = t(process completed) - t(process submitted)
P1: 10 - 0 = 10
P2: 3 - 1 = 2
P3: 15 - 2 = 13
P4: 6 - 3 = 3
The average turnaround time is (10 + 2 + 13 + 3) / 4 = 7 ms.

The waiting time is computed by subtracting the processing time from the turnaround time, yielding the following four results:
P1: 10 - 5 = 5
P2: 2 - 2 = 0
P3: 13 - 5 = 8
P4: 3 - 3 = 0
The average waiting time = (5 + 0 + 8 + 0) / 4 = 3.25 milliseconds. Four jobs executed in 15 time units, so the throughput is 4 / 15 = 0.26 jobs per time unit.
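The SRTN schedule of Example 5 can be reproduced by re-evaluating, at every time unit, which of the arrived processes has the least remaining time. A minimal sketch (our own naming, one time unit per step) follows.

# SRTN (preemptive SJF) sketch: at each time unit, run the arrived
# process with the smallest remaining time.
def srtn(processes):
    # processes: list of (name, arrival_time, burst_time)
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                    # CPU idle until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

done = srtn([("P1", 0, 5), ("P2", 1, 2), ("P3", 2, 5), ("P4", 3, 3)])
print(done)                              # prints: {'P2': 3, 'P4': 6, 'P1': 10, 'P3': 15}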


PD PG "olution P( ) 4 PG P4 G

H 4

G 2

PD 0

P( (D

P2 4) 40

)ompleted &ime

!rocess completed

(0 G 0 (2 41

P( PG P4 PD P2

&urnaround &ime9 t.process completed < t.process su%mitted0 4) 6 ) E4) G 6 4 E2 0 6 2 E3 (D 6 H E H 406 G E4D

Waiting &ime9 &urnaround time < !rocessing time 4) 6 1 E (4 2 6 2 E) 3 6 D E4 H 6G E4 4D 60 E(G

The a erage waiting time E %(4/)/4/4 / (G& > G E 3.4 milliseconds =. !riorit# 6ased "cheduling or Event-Driven .ED0 "cheduling A priority is associated with each process and the scheduler always picks up the highest priority process for e'ecution from the ready $ueue. L$ual priority processes are scheduled ;C;S.The le el of priority may be determined on the basis of resource re$uirements, processes characteristics and its run time beha ior. A major problem with a priority based scheduling is indefinite blocking or star ation of a lost priority process by a high priority process. *n general, completion of a process within finite time cannot be guaranteed with this scheduling algorithm. A solution to the problem of indefinite blockage of low priority process is pro ided by aging priority. Aging priority is a techni$ue of gradually increasing the priority of processes %of low priority& that wait in the system for a long time.L entually,tha older processes attain high priority and are ensured of completion in a finite time. Example A As an e'ample, consider the following set of fi e processes, assumed to ha e arri ed at the same time with the length of processor timing in milliseconds, : !rocess P( P4 P2 !riorit# 2 ( D !rocessing &ime () ( 4


PD PG "olution !+ ) )ompleted &ime ( !>

G 4

( G

!' 3 (3

!1 (1

!= (0 Waiting &ime9 &urnaround time < !rocessing time : ( 6 ( E) 3 6 4 ED (3 6 () E3 (1 6 4 E(3 (0 6 ( E(1

!rocess completed

) ( 3 (3 (1 (0

: P4 PG P( P2 PD

&urnaround &ime9 t.process completed < t.process su%mitted0 : ( 6 ) E( 3 6 ) E3 (3 6 ) E (3 (1 6 ) E(1 (0 6 ) E(0

Using priority scheduling us would schedule these processes according to the following Cantt chart, A erage turn around time E %(/3/(3/(1/(0& > G E 3)>G E (4 A erage waiting time E %)/D/3/(3/(1& > G E 1.1 Throughput E G>(0 E ).43 Processor utili!ation E %(0>(0& 7 ()) E ())F Priorities can be defined either internally or e'ternally. *nternally defined priorities use one measurable $uantity or $uantities to complete the priority of a process. Example B ;or the gi en fi e processes arri ing at time ), in order with the length of CPU time in milliseconds. !rocess P( P4 P2 PD PG !rocessing &ime () 40 )2 )H (4
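For the case of Example 7 (all processes arriving together, smaller number meaning higher priority), priority scheduling again reduces to a sort, this time on the priority field. A minimal sketch with our own naming:

# Priority scheduling sketch: all processes arrive at time 0 and a
# smaller priority number means higher priority (as in Example 7).
def priority_schedule(processes):
    # processes: list of (name, priority, burst_time)
    time, completion = 0, {}
    for name, prio, burst in sorted(processes, key=lambda p: p[1]):
        time += burst
        completion[name] = time
    return completion

procs = [("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 5, 1), ("P5", 2, 5)]
done = priority_schedule(procs)
bursts = {name: b for name, _, b in procs}
print(done)                                           # {'P2': 1, 'P5': 6, 'P1': 16, 'P3': 18, 'P4': 19}
print(sum(done.values()) / 5)                         # average turnaround time: 12.0
print(sum(done[p] - bursts[p] for p in done) / 5)     # average waiting time: 8.2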


Example 8: For the given five processes, arriving at time 0 in the order listed, with the length of CPU time in milliseconds:

Process   Processing Time
P1        10
P2        29
P3        03
P4        07
P5        12

Consider the FCFS, SJF and RR (time slice = 10 milliseconds) scheduling algorithms for this set of processes. Which algorithm gives the minimum average waiting time?

Solution
1. For the FCFS algorithm the Gantt chart is as follows:

| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61

Process   Processing Time   Waiting Time
P1        10                0
P2        29                10
P3        3                 39
P4        7                 42
P5        12                49

Average Waiting Time = (0 + 10 + 39 + 42 + 49) / 5 = 28 milliseconds

2. For the SJF scheduling algorithm, we have:

| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61

Process   Processing Time   Waiting Time
P3        3                 0
P4        7                 3
P1        10                10
P5        12                20
P2        29                32

Average Waiting Time = (0 + 3 + 10 + 20 + 32) / 5 = 13 milliseconds

3. For the Round Robin scheduling algorithm (time quantum = 10 milliseconds):

| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61


Process   Processing Time   Waiting Time
P1        10                0
P2        29                32
P3        03                20
P4        07                23
P5        12                40

Average Waiting Time = (0 + 32 + 20 + 23 + 40) / 5 = 23 milliseconds

From the above calculations we find that the SJF policy results in a lower average waiting time than FCFS and RR.

============================================================

Q. Explain the Highest Response Ratio Next (HRRN) and Multilevel Feedback CPU scheduling algorithms.

Ans: Highest Response Ratio Next (HRRN) scheduling is a non-preemptive discipline, similar to Shortest Job Next (SJN), in which the priority of each job depends on its estimated run time and also on the amount of time it has spent waiting. Jobs gain higher priority the longer they wait, which prevents indefinite postponement (process starvation). In effect, jobs that have spent a long time waiting compete against those estimated to have short run times.

HRRN was developed by Brinch Hansen to correct certain weaknesses in SJN, including the difficulty of estimating run time.
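The selection rule behind HRRN can be sketched as follows: each waiting job's response ratio is computed as (waiting time + estimated run time) / estimated run time, and the job with the highest ratio is dispatched next. This is only an illustration with our own names; the ratio shown is the usual textbook form of the HRRN priority.

# HRRN sketch: pick the waiting job with the highest response ratio,
# where response ratio = (waiting time + estimated run time) / estimated run time.
def hrrn_pick(ready_jobs, current_time):
    # ready_jobs: list of (name, arrival_time, estimated_run_time)
    def response_ratio(job):
        name, arrival, run_estimate = job
        waiting = current_time - arrival
        return (waiting + run_estimate) / run_estimate
    return max(ready_jobs, key=response_ratio)

# A short job that just arrived versus a long job that has been waiting much longer:
jobs = [("short", 9, 2), ("long", 0, 10)]
print(hrrn_pick(jobs, current_time=10))   # ('long', 0, 10): ratio 2.0 beats 1.5, so the old job is not starved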


Multiple Feedback Queues

Fair-share scheduling
Fair-share scheduling is a scheduling strategy in which the CPU usage is equally distributed among system users or groups, as opposed to being equally distributed among processes. For example, if four users (A, B, C, D) are concurrently executing one process each, the scheduler will logically divide the available CPU cycles such that each user gets 25% of the whole (100% / 4 = 25%). If user B starts a second process, each user will still receive 25% of the total cycles, but each of user B's processes will now use 12.5%. On the other hand, if a new user starts a process on the system, the scheduler will reapportion the available CPU cycles such that each user gets 20% of the whole (100% / 5 = 20%).

Another layer of abstraction allows us to partition users into groups and apply the fair share algorithm to the groups as well. In this case, the available CPU cycles are divided first among the groups, then among the users within the groups, and then among the processes of each user. For example, if there are three groups (1, 2, 3) containing three, two, and four users respectively, the available CPU cycles will be distributed as follows:

100% / 3 groups = 33.3% per group


Group 1: 33.3% / 3 users = 11.1% per user
Group 2: 33.3% / 2 users = 16.7% per user
Group 3: 33.3% / 4 users = 8.3% per user

One common way of logically implementing the fair-share scheduling strategy is to recursively apply the round-robin scheduling strategy at each level of abstraction (processes, users, groups, etc.). The time quantum required by round-robin is arbitrary, as any equal division of time will produce the same results.
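The percentages worked out above can be generated by a small routine that divides the CPU first among the groups and then among the users of each group, mirroring the layered division described in the text. A sketch with our own naming:

# Fair-share sketch: divide 100% of the CPU equally among groups,
# then equally among the users inside each group.
def fair_shares(groups):
    # groups: dict mapping group name -> list of user names
    per_group = 100.0 / len(groups)
    shares = {}
    for group, users in groups.items():
        per_user = per_group / len(users)
        for user in users:
            shares[user] = per_user
    return shares

groups = {
    "group1": ["u1", "u2", "u3"],
    "group2": ["u4", "u5"],
    "group3": ["u6", "u7", "u8", "u9"],
}
for user, share in fair_shares(groups).items():
    print(f"{user}: {share:.1f}%")   # 11.1% in group1, 16.7% in group2, 8.3% in group3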

============================================================

Q. Explain the term Multiprocessor Scheduling in terms of loosely coupled and tightly coupled systems.

Ans: Multiprocessor Scheduling
When a computer system contains more than a single processor, several new issues are introduced into the design of the scheduling function. We will examine these issues and the details of scheduling algorithms for tightly coupled multiprocessor systems.

Classification of multiprocessor systems
Loosely coupled or distributed multiprocessor, or cluster: each processor has its own main memory and I/O channels.

Functionally specialized processors: an example is an I/O processor controlled by a master processor.

Tightly coupled multiprocessor: the processors share a common main memory and are controlled by the operating system.

"#nchroni7ation granularit# A good way of characteri!ing multiprocessor and placing them in conte't with other architectures is to consider the synchroni!ation granularity. Scheduling concurrent processes has to take into account the synchroni!ation of processes. Synchroni!ation granularity means the fre$uency of synchroni!ation between processes in a system.

Applications exhibit parallelism at various levels. There are at least five categories of parallelism that differ in the degree of granularity.

Types of synchronization granularity
Fine: parallelism inherent in a single instruction stream.
Medium: parallel processing or multitasking within a single application.
Coarse: multiprocessing of concurrent processes in a multiprogramming environment.
Very coarse: distributed processing across network nodes to form a single computing environment.
Independent: multiple unrelated processes.

Independent parallelism
With independent parallelism, there is no explicit synchronization among processes.
Key features:
o Separate application or job
o No synchronization
o Same service as a multiprogrammed uni-processor
o Time-sharing systems exhibit this type of parallelism

Coarse and very coarse-grained parallelism
With coarse and very coarse-grained parallelism, there is synchronization among processes, but at a very gross level (e.g. at the beginning and at the end). This kind of situation is easily handled as a set of concurrent processes running on a multiprogrammed uniprocessor and can be supported on a multiprocessor with little or no change to user software. In general, any collection of concurrent processes that need to communicate or synchronize can benefit from the use of a multiprocessor architecture.

Medium-grained parallelism
Medium-grained parallelism is present in parallel processing or multitasking within a single application. A single application can be effectively implemented as a collection of threads within a single process.


"ecause the arious threads of an application interactt so fre$uently, scheduling decisions concerning one thread may affect the performance of the entire application. 4ine-grained parallelism ;ine:grained parallelism represents a much more comple' use of parallelism than is found in the use of threads. Usually does not in ol e the ?S but done at compilation stage. @igh data dependency EEM high fre$uency of synch. Ge# features @ighly parallel applications Speciali!ed and fragmented area

3ranularit# Example Falve 3ame "oftware

-al e is an entertainment and technology company that has de eloped a number of popular games, as well as source engine.

Source engine is the 2+ engine or animation engine used by al e for its game.

*n recent year, -al e has reprogrammed the source engine software to use multithreading to e'ploit the power of multicore processor chips from *ntel and A9+.

9ulticore refers the placement of multiprocessor on single chip typically 4 or D processor. An S9P system can consist of a single chip or multiple chips.

*ndi idual modules called system are assigned to indi idual processor. *n the source engine case putting rendering on one processor, A* on another processor and physics on another. *t is known as )oarse threading.


In fine-grained threading, many similar or identical tasks are spread across multiple processors; for example, a loop that iterates over an array of data can be split into a number of smaller parallel loops running in individual threads.

The selective use of fine-grained threading for some systems and single threading for others is known as hybrid threading.

Design issues
Scheduling on a multiprocessor involves three interrelated issues:
o The assignment of processes to processors
o The use of multiprogramming on individual processors
o The actual dispatching of a process

The scheduling approach depends on the degree of granularity and on the number of processors available.

Assignment of processes to processors
The simplest scheduling approach is to treat the processors as a pooled resource and assign processes to processors on demand.

Static or dynamic assignment of a process
Static assignment: a process is permanently assigned to one processor from activation until its completion. A dedicated short-term queue is maintained for each processor.
Advantages: less overhead in scheduling.
Disadvantages: one processor can be idle, with an empty queue, while another processor has a backlog.

Dynamic assignment: all processes go into one global queue and are scheduled to any available processor. Thus, over the life of a process, the process may be executed on different processors at different times.
Advantages: better processor utilization.
Disadvantages: inefficient use of cache memory; it is more difficult for the processors to communicate.
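Dynamic assignment with a single global queue can be illustrated with Python threads standing in for processors. This is only a sketch with our own naming; a real kernel dispatches at a much lower level.

import queue
import threading

# Dynamic assignment sketch: one global ready queue shared by all
# "processors" (worker threads); each idle processor pulls the next process.
ready_queue = queue.Queue()
for pid in range(10):
    ready_queue.put(f"process-{pid}")

def processor(cpu_id):
    while True:
        try:
            proc = ready_queue.get_nowait()   # an idle processor takes work from the global queue
        except queue.Empty:
            return                            # no more ready processes
        print(f"CPU {cpu_id} runs {proc}")
        ready_queue.task_done()

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(4)]
for t in cpus:
    t.start()
for t in cpus:
    t.join()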

============================================================

Q. Explain the term Thread Scheduling in concurrent processing.

Ans: Key features of threads
An application can be implemented as a set of threads that cooperate and execute concurrently in the same address space. Threads running on separate processors can yield a dramatic gain in performance.

General approaches to thread scheduling
Load sharing: processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from the queue.

Versions of load sharing:
o First come first served (FCFS): when a job arrives, each of its threads is placed consecutively at the end of the shared queue.
o Smallest number of threads first: the shared ready queue is organized as a priority queue, with highest priority given to threads from jobs with the smallest number of unscheduled threads.
o Preemptive smallest number of threads first: highest priority is given to jobs with the smallest number of unscheduled threads; an arriving job with a smaller number of threads than an executing job will preempt threads belonging to the scheduled job.

Gang scheduling: a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis; that is, simultaneous scheduling of the threads that make up a single process.

Dedicated processor assignment: each program is allocated a number of processors equal to the number of threads in the program, for the duration of the program execution (this is the opposite of the load-sharing approach).
Comparison with gang scheduling:
o Similarity: threads are assigned to processors at the same time.
o Difference: in dedicated processor assignment, threads do not change processors.

Dynamic scheduling: the application is responsible for assigning its threads to processors, and it may alter the number of threads during the course of execution. On a request for a processor, the OS does the following:
o If there are idle processors, use them to satisfy the request.

o Otherwise, if the job making the request is a new arrival, allocate it a single processor by taking one away from any job currently allocated more than one processor.
o If any portion of the request cannot be satisfied, it remains outstanding until either a processor becomes available for it or the job rescinds the request.
Upon release of one or more processors (including job departure), the OS does the following:
o Scan the current queue of unsatisfied requests for processors.
o Assign a single processor to each job in the list that currently has no processors (i.e., to all waiting new arrivals).
o Then scan the list again, allocating the rest of the processors on an FCFS basis.
The overhead of this approach may negate its apparent performance advantage.

============================================================

Q. How do you classify the different approaches for Real-time scheduling? State the various real-time scheduling techniques available and discuss any one in detail.

Ans: Real-Time Scheduling
In a real-time system, the correctness of the system depends not only on the logical result of the computation, but also on the time at which the results are produced. Tasks or processes attempt to control or react to events that take place in the outside world. These events occur in "real time" and processes must be able to keep up with them.

Examples:
o Control of laboratory experiments
o Process control plants
o Robotics
o Air traffic control
o Telecommunications
o Military command and control systems

Types of Tasks
A. With respect to urgency
A hard real-time task is one that must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system.


A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.

B. With respect to execution
A non-periodic (aperiodic) task has a deadline by which it must finish or start, or it may have a constraint on both start and finish times.
A periodic task is one that executes once per period T, or exactly T time units apart.

Characteristics of Real-time Operating Systems
Real-time operating systems can be characterized as having unique requirements in five general areas:
o Determinism
o Responsiveness
o User control
o Reliability
o Fail-soft operation

Determinism
Operations are performed at fixed, predetermined times or within predetermined time intervals. Determinism is concerned with how long the operating system delays before acknowledging an interrupt.

Responsiveness
Responsiveness is how long, after acknowledgment, it takes the operating system to service the interrupt.
o It includes the amount of time needed to begin executing the interrupt service routine.
o It includes the amount of time needed to perform the interrupt service itself.

Determinism and responsiveness together make up the response time to external events.

User control
It is essential to allow the user fine-grained control over task priority. The user should be able to distinguish between hard and soft tasks and to specify relative priorities within each class. The user may also specify:
o task priority
o the paging policy
o which processes must always reside in main memory

o which disk scheduling algorithms to use
o the rights of processes

Reliability
Loss or degradation of performance may have catastrophic (dangerous) consequences.

Fail-soft operation is a characteristic that refers to the ability of a system to fail in such a way as to preserve as much capability and data as possible.
o The system attempts either to correct the problem or to minimize its effects while continuing to run.

"ta%ilit# A real:time system is stable if, in cases where it is impossible to meet all task deadlines, the system will meet the deadlines of its most critical, highest:priority tasks, e en if some less critical task deadlines are not always met 5eal-&ime "cheduling Ieal:time scheduling is one of the most acti e areas of research in computer science. The algorithms can be classified along three dimensions,
o o

When to dispatch Dow to schedule

Real-time scheduling algorithms can be classified along three dimensions: (1) whether a system performs schedulability analysis; (2) if it does, whether the analysis is done statically or dynamically; and (3) whether the result of the analysis itself produces a schedule or plan according to which tasks are dispatched at run time.

When to dispatch
The question here is how often the operating system intervenes to make a scheduling decision. Examples of different policies are listed below:
o Round-robin preemptive scheduler
o Priority-driven non-preemptive scheduler
o Priority-driven preemptive scheduler at preemption points
o Immediate preemptive scheduler

How to schedule

Classes of algorithms
Static table-driven approaches: these perform a static analysis of feasible schedules of dispatching. The result of the analysis is a schedule that determines, at run time, when a task must begin execution.
o Applicable to periodic tasks.
o An inflexible approach: any change requires the schedule to be redone.

Static priority-driven preemptive approaches: again, a static analysis is performed, but no schedule is drawn up. Rather, the analysis is used to assign priorities to tasks, so that a traditional priority-driven preemptive scheduler can be used.

Dynamic planning-based approaches: feasibility is determined at run time (dynamically). An arriving task is accepted for execution only if it is feasible to meet its time constraints.

Dynamic best effort approaches: no feasibility analysis is performed. The system tries to meet all deadlines and aborts any started process whose deadline is missed. Used for non-periodic tasks.


Design by: Asst. Prof. Vikas Katariya
