World Applied Programming, Vol (2), No (6), June 2012, pp. 394-398
ISSN: 2222-2510
© 2011 WAP journal. www.waprogramming.com

Masoud Nosrati
Kermanshah University of Medical Sciences, Kermanshah, Iran
minibigs_m@yahoo.co.uk

Ronak Karimi
Kermanshah University of Medical Sciences, Kermanshah, Iran
rk_respina_67@yahoo.com

Mehdi Hariri *
Kermanshah University of Medical Sciences, Kermanshah, Iran
mehdi.hariri@yahoo.com
Abstract: This paper addresses task scheduling in operating systems. The main methods and techniques of
scheduling are presented and briefly described. A short introduction covering long-term, medium-term,
short-term and dispatcher scheduling illustrates the main concepts. The investigated methods are FIFO,
shortest job next, fixed-priority pre-emptive scheduling, round-robin scheduling and the multilevel
feedback queue. These methods are compared in the conclusion.
Keywords: Task scheduling, operating systems (OS), process timing
I. INTRODUCTION
In computer science, scheduling is the method by which threads, processes or data flows are given access to system
resources (e.g. processor time, communications bandwidth). This is usually done to load balance a system effectively
or achieve a target quality of service. The need for a scheduling algorithm arises from the requirement for most
modern systems to perform multitasking (execute more than one process at a time) and multiplexing (transmit multiple
flows simultaneously).
Operating systems may feature up to four types of scheduler:
Long-term scheduling: The long-term, or admission scheduler, decides which jobs or processes are to be
admitted to the ready queue (in the Main Memory); that is, when an attempt is made to execute a program, its
admission to the set of currently executing processes is either authorized or delayed by the long-term
scheduler. Thus, this scheduler dictates what processes are to run on a system, and the degree of concurrency
to be supported at any one time; i.e. whether a high or low number of processes are to be executed
concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In
modern operating systems, this is used to make sure that real-time processes get enough CPU time to finish
their tasks. Without proper real-time scheduling, modern GUI interfaces would seem sluggish. The long-term
queue exists on the hard disk, in the "virtual memory" [1]. Long-term scheduling is also important in large-scale
systems such as batch processing systems, computer clusters, supercomputers and render farms. In these
cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any
underlying admission scheduling support in the operating system.
Medium-term scheduling: The medium-term scheduler temporarily removes processes from main memory
and places them on secondary memory (such as a disk drive) or vice versa. This is commonly referred to as
"swapping out" or "swapping in" (also incorrectly called "paging out" or "paging in"). The medium-term
scheduler may decide to swap out a process that has not been active for some time, has a low priority, is
page-faulting frequently, or is taking up a large amount of memory, in order to free up main memory for
other processes; it swaps the process back in later when more memory is available, or when the process has
been unblocked and is no longer waiting for a resource [1][2].
In many systems today (those that support mapping virtual address space to secondary storage other than the
swap file), the medium-term scheduler may actually perform the role of the long-term scheduler, by treating
binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is
required it can be swapped in on demand, or "lazy loaded" [2].
Short-term scheduling: The short-term scheduler (also known as the CPU scheduler) decides which of the
ready, in-memory processes are to be executed (allocated a CPU) next, following a clock interrupt, an I/O
interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling
decisions much more frequently than the long-term or mid-term schedulers - a scheduling decision will at a
minimum have to be made after every time slice and these are very short. This scheduler can be preemptive,
implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to
another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the
scheduler is unable to "force" processes off the CPU [1]. In most cases the short-term scheduler is written in
assembly language because it is a time-critical part of the operating system.
Dispatcher: Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is
the module that gives control of the CPU to the process selected by the short-term scheduler. This function
involves the following:
o Switching context
o Switching to user mode
o Jumping to the proper location in the user program to restart that program
The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the
dispatcher to stop one process and start another running is known as the dispatch latency [3].
II. SCHEDULING ALGORITHMS
Scheduling disciplines are algorithms used for distributing resources among parties which simultaneously and
asynchronously request them. Scheduling disciplines are used in routers (to handle packet traffic) as well as in
operating systems (to share CPU time among both threads and processes), disk drives (I/O scheduling), printers (print
spooler), most embedded systems, etc.
The main purposes of scheduling algorithms are to minimize resource starvation and to ensure fairness amongst the
parties utilizing the resources. Scheduling deals with the problem of deciding which of the outstanding requests is to
be allocated resources. There are many different scheduling algorithms. In this section, we introduce several of them.
In packet-switched computer networks and other statistical multiplexing, the notion of a scheduling algorithm is used
as an alternative to first-come first-served queuing of data packets.
Scheduling algorithms are listed and described below:
Shortest job next (SJN), also known as Shortest Job First (SJF) or Shortest Process Next (SPN), is a scheduling policy
that selects the waiting process with the smallest execution time to execute next. SJN is a non-preemptive algorithm.
Shortest remaining time is a preemptive variant of SJN.
Shortest job next is advantageous because of its simplicity and because it maximizes process throughput (in terms of
the number of processes run to completion in a given amount of time). It also minimizes the average amount of time
each process has to wait until its execution is complete. However, it has the potential for process starvation for
processes which will require a long time to complete if short processes are continually added. Highest response ratio
next is similar but provides a solution to this problem.
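The anti-starvation idea behind highest response ratio next can be sketched as follows: each waiting job is scored by (waiting time + burst time) / burst time, so a long job's score keeps growing while it waits. This is an illustrative sketch only; the job names, arrival times and burst times are hypothetical.

```python
# Highest response ratio next (HRRN), a minimal sketch. A job's priority is
# (waiting_time + burst_time) / burst_time; the ratio of a long job grows
# while it waits, so it cannot starve. All names and times are made up.
def pick_next(ready, clock):
    """ready: list of (name, arrival_ms, burst_ms); returns the job to run."""
    def response_ratio(job):
        _, arrival, burst = job
        waiting = clock - arrival
        return (waiting + burst) / burst
    return max(ready, key=response_ratio)

ready = [("long", 0, 100), ("short", 95, 10)]
chosen = pick_next(ready, clock=100)
# long: (100 + 100) / 100 = 2.0; short: (5 + 10) / 10 = 1.5, so "long" runs
```

Note that under plain SJN the 10 ms job would always win here; the ratio lets the waiting 100 ms job overtake it.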
Another disadvantage of using shortest job next is that the total execution time of a process must be known before
execution. It is not possible to accurately predict execution time for all processes.
Shortest job next can be effectively used with interactive processes which generally follow a pattern of alternating
between waiting for a command and executing it. If the execution of a command is regarded as a separate "process",
past behavior can indicate which process to run next, based on an estimate of its running time.
Shortest job next is used in specialized environments where accurate estimates of running time are available.
Estimating the running time of queued processes is sometimes done using a technique called aging [6][7].
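The behavior described above can be sketched with a toy non-preemptive SJN simulation. The job names and burst times below are hypothetical, and all jobs are assumed to arrive at time zero.

```python
# Non-preemptive shortest job next: run the waiting job with the smallest
# burst time first and record how long each job waited before starting.
def sjn_schedule(jobs):
    """jobs: list of (name, burst_ms); returns (execution order, waiting times)."""
    order = sorted(jobs, key=lambda j: j[1])  # shortest burst first
    waiting = {}
    clock = 0
    for name, burst in order:
        waiting[name] = clock   # time spent waiting before execution starts
        clock += burst
    return order, waiting

jobs = [("A", 60), ("B", 10), ("C", 30)]
order, waiting = sjn_schedule(jobs)
# Order: B (10 ms), C (30 ms), A (60 ms); waits: B=0, C=10, A=40
# Average wait = (0 + 10 + 40) / 3 ≈ 16.7 ms -- the minimum for these jobs
```

Running the longest job A first instead would give an average wait of (0 + 60 + 70) / 3 = 43.3 ms, which is the throughput argument made above.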
Example: If the time slot is 100 milliseconds, and job1 takes a total time of 250 ms to complete, the round-robin
scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had
their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process
continues until the job finishes and needs no more time on the CPU.
Job1 = Total time to complete 250 ms (quantum 100 ms).
1. First allocation = 100 ms.
2. Second allocation = 100 ms.
3. Third allocation = 100 ms but job1 self-terminates after 50 ms.
4. Total CPU time of job1 = 250 ms.
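The job1 walk-through above can be reproduced with a small round-robin simulation. job2 and its 200 ms burst are a hypothetical stand-in for "the other jobs".

```python
from collections import deque

# Round-robin sketch: each job runs for at most one quantum (100 ms), then
# goes to the back of the ready queue until its remaining time reaches zero.
def round_robin(jobs, quantum=100):
    """jobs: dict of name -> total_ms; returns the CPU slices each job got."""
    queue = deque(jobs.items())          # (name, remaining_ms), FIFO order
    slices = {name: [] for name in jobs}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # job1's last slice is only 50 ms
        slices[name].append(run)
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # back of the queue for next round
    return slices

slices = round_robin({"job1": 250, "job2": 200})
# job1 runs in three slices: 100 ms, 100 ms, then 50 ms (250 ms total)
```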
Another approach is to divide all processes into an equal number of timing quanta such that the quantum size is
proportional to the size of the process. Hence, all processes end at the same time [10].
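The proportional-quantum variant can be sketched in a line: every process is split into the same number n of quanta, each quantum proportional to the process size, so after n round-robin cycles all processes finish together. The process names and sizes are hypothetical.

```python
# Proportional quanta: with n quanta per process and quantum = size / n,
# every process completes exactly at the end of cycle n.
def proportional_quanta(sizes, n):
    """sizes: dict of name -> total_ms; returns each process's quantum size."""
    return {name: size / n for name, size in sizes.items()}

q = proportional_quanta({"p1": 300, "p2": 150}, n=3)
# p1 gets three 100 ms quanta, p2 gets three 50 ms quanta;
# both processes end at the close of the third round-robin cycle.
```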
At the base level queue the processes circulate in round robin fashion until they complete and leave the system.
Optionally, if a process blocks for I/O, it is 'promoted' one level, and placed at the end of the next-higher queue. This
allows I/O bound processes to be favored by the scheduler and allows processes to 'escape' the base level queue. In the
multilevel feedback queue, a process is given just one chance to complete at a given queue level before it is forced
down to a lower level queue [11][12].
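A minimal sketch of the demotion rule ("one chance to complete at a given level"): a job that exhausts its quantum is forced one level down, and the bottom level acts as the terminal round-robin queue. The level quanta and job parameters below are hypothetical, and the I/O-promotion path is omitted for brevity.

```python
from collections import deque

# Multilevel feedback queue sketch: new jobs enter the top queue; a job that
# uses its whole quantum without finishing is demoted one level. The lowest
# level behaves as a round-robin queue that jobs cycle in until completion.
QUANTA = [10, 20, 40]   # hypothetical quantum (ms) per level; last is the base

def mlfq(jobs):
    """jobs: list of (name, burst_ms); returns names in completion order."""
    queues = [deque() for _ in QUANTA]
    for name, burst in jobs:
        queues[0].append((name, burst))          # everyone starts at the top
    finish_order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        run = min(QUANTA[level], remaining)
        remaining -= run
        if remaining == 0:
            finish_order.append(name)
        else:
            # used the full quantum: push the job one level down (or keep it
            # at the base level, where it circulates round-robin style)
            queues[min(level + 1, len(QUANTA) - 1)].append((name, remaining))
    return finish_order

order = mlfq([("long", 100), ("short", 8)])
# "short" completes in its first 10 ms quantum at the top level;
# "long" sinks to the base queue and finishes there.
```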
III. CONCLUSION

Each of the stated scheduling algorithms has its own features and characteristics. Table 1 indicates the
differences and compares them.
Table 1. Comparison of the scheduling algorithms

Scheduling algorithm           CPU overhead   Throughput   Turnaround time   Response time
FIFO                           Low            Low          High              Low
Shortest job next              Medium         High         Medium            Medium
Fixed-priority pre-emptive     Medium         Low          High              High
Round-robin                    High           Medium       Medium            High
Multilevel feedback queue      High           High         Medium            Medium
REFERENCES
[1] Stallings, William (2004). Operating Systems: Internals and Design Principles (fifth international edition). Prentice Hall. ISBN 0-13-147954-7.
[2] Stallings, William (2004). Operating Systems: Internals and Design Principles (fourth edition). Prentice Hall. ISBN 0-13-031999-6.
[3] Błażewicz, Jacek; Ecker, K.H.; Pesch, E.; Schmidt, G.; Weglarz, J. (2001). Scheduling Computer and Manufacturing Processes (2nd ed.). Berlin: Springer. ISBN 3-540-41931-4.
[4] Kruse, Robert L. (1987). Data Structures & Program Design (second edition). Englewood Cliffs, New Jersey: Prentice-Hall. p. 150. ISBN 0-13-195884-4.
[5] Wikipedia. Article FIFO, available at: http://en.wikipedia.org/wiki/First_In_First_Out
[6] Tanenbaum, A. S. (2008). Modern Operating Systems (3rd ed.). Pearson Education. p. 156. ISBN 0-13-600663-9.
[7] Wikipedia. Article Shortest job next, available at: http://en.wikipedia.org/wiki/Shortest_Job_First
[8] Wikipedia. Article Fixed-priority pre-emptive scheduling, available at: http://en.wikipedia.org/wiki/Fixed_priority_pre-emptive_scheduling
[9] Silberschatz, Abraham; Galvin, Peter B.; Gagne, Greg (2010). "Process Scheduling". Operating System Concepts (8th ed.). John Wiley & Sons (Asia). p. 194, section 5.3.4 "Round Robin Scheduling". ISBN 978-0-470-23399-3.
[10] Wikipedia. Article Round-robin scheduling, available at: http://en.wikipedia.org/wiki/Round-robin_scheduling
[11] Kleinrock, L.; Muntz, R. R. (July 1972). "Processor Sharing Queueing Models of Mixed Scheduling Disciplines for Time Shared System". Journal of the ACM 19 (3): 464-482. doi:10.1145/321707.321717
[12] Wikipedia. Article Multilevel feedback queue, available at: http://en.wikipedia.org/wiki/Multilevel_feedback_queue