
Operating System

What is an operating
system?
• A program that runs on the “raw” hardware and
supports
– Resource Abstraction
– Resource Sharing
• Abstracts and standardizes the interface to the
user across different types of hardware
– A virtual machine hides the messy details which must
be handled
• Manages the hardware resources
– Each program gets time with the resource
– Each program gets space on the resource
• May have potentially conflicting goals:
– Use hardware efficiently
– Give maximum performance to each user
What is the OS made of?

Hardware

Shell

Kernel and
system software

Users

Other Applications
OS cntd …..
• User – The system representation of the human
operator who requests services.
• Application Software – Special software to
help the user do his task (e.g., MS Word)
• Shell – The program that interprets the
commands or requests given by the user and
gets the job done by the kernel.
• Kernel – The core of the operating system. It
uses the hardware to do the jobs required by
the user or the system, coordinating the
hardware and interfacing it with the layers
above.
• System Software – Software that can access
the hardware directly and generally provides
various system services (e.g., the kernel
itself, device drivers, etc.).
• Hardware – The set of electronic devices that
make up the physical computer.
OPERATING SYSTEM OVERVIEW
The Layers of a System

[Layer diagram, top to bottom: Humans – Program Interface – User Programs – O.S. Interface – O.S. – Hardware Interface / Privileged Instructions – Disk / Tape / Memory]
The Goals of an OS
• Let users run programs:
– Correctness
• Memory boundaries, priorities, steady
state

– Convenience
• User should not handle the tiny details
(encapsulate/abstract), provide
synchronization primitives, system
calls, file system, tools
The Goals of an OS
• Let users run programs:
– Efficiency
• Resource Utilization, resource Sharing,
Multitasking

– Fairness (in resource allocation)
• Among: users, tasks, resources
• The tradeoff between efficiency and
fairness
Operating System Organization
• Modified microkernel architecture
– Not a pure microkernel
– Many system functions outside of the
microkernel run in kernel mode
• Any module can be removed,
upgraded, or replaced without
rewriting the entire system

Kernel-Mode Components
• Executive
– Contains base operating system
services
• Memory management
• Process and thread management
• Security
• I/O
• Interprocess communication
• Kernel
– Consists of the most used
components
Kernel-Mode Components
• Hardware abstraction layer (HAL)
– Isolates the operating system from
platform-specific hardware
differences
• Device drivers
– Translate user I/O function calls into
specific hardware device I/O
requests
• Windowing and graphics systems
– Implements the graphical user
interface (GUI)
Process Management
• A process is a program in execution. A
process needs certain resources,
including CPU time, memory, files,
and I/O devices, to accomplish its
task.
• The operating system is responsible for
the following activities in connection
with process management.
– Process creation and deletion.
– Process suspension and resumption.
– Provision of mechanisms for:
• process synchronization
• process communication
Main-Memory
Management
• Memory is a large array of words or bytes,
each with its own address. It is a repository
of quickly accessible data shared by the
CPU and I/O devices.
• Main memory is a volatile storage device. It
loses its contents in the case of system
failure.
• The operating system is responsible for the
following activities in connections with
memory management:
– Keep track of which parts of memory are
currently being used and by whom.
– Decide which processes to load when
memory space becomes available.
– Allocate and deallocate memory space as
needed.
File Management

• A file is a collection of related information


defined by its creator. Commonly, files
represent programs (both source and
object forms) and data.
• The operating system is responsible for the
following activities in connections with file
management:
– File creation and deletion.
– Directory creation and deletion.
– Support of primitives for manipulating files
and directories.
– Mapping files onto secondary storage.
– File backup on stable (nonvolatile) storage
I/O System
Management
• The I/O system consists of:
– A buffer-caching system
– A general device-driver interface
– Drivers for specific hardware devices
Secondary-Storage
• Management
Since main memory (primary storage) is
volatile and too small to accommodate all
data and programs permanently, the
computer system must provide secondary
storage to back up main memory.
• Most modern computer systems use disks as
the principal on-line storage medium, for
both programs and data.
• The operating system is responsible for the
following activities in connection with disk
management:
– Free space management
– Storage allocation
– Disk scheduling
Networking (Distributed
Systems)
• A distributed system is a collection of
processors that do not share memory or a
clock. Each processor has its own local
memory.
• The processors in the system are connected
through a communication network.
• Communication takes place using a
protocol.
• A distributed system provides user access
to various system resources.
• Access to a shared resource allows:
– Computation speed-up
– Increased data availability
– Enhanced reliability
Protection System
• Protection refers to a mechanism for
controlling access by programs,
processes, or users to both system
and user resources.
• The protection mechanism must:
– distinguish between authorized and
unauthorized usage.
– specify the controls to be imposed.
– provide a means of enforcement.
Command-Interpreter
System
• Many commands are given to the
operating system by control
statements which deal with:
– process creation and management
– I/O handling
– secondary-storage management
– main-memory management
– file-system access
– protection
– networking
Command-Interpreter System
(Cont.)
• The program that reads and
interprets control statements is
called variously:

– command-line interpreter
– shell (in UNIX)

 Its function is to get and execute


the next command statement.

Operating System Services
• Program execution – system capability to load a
program into memory and to run it.
• I/O operations – since user programs cannot
execute I/O operations directly, the operating
system must provide some means to perform
I/O.
• File-system manipulation – program capability to
read, write, create, and delete files.
• Communications – exchange of information
between processes executing either on the
same computer or on different systems tied
together by a network. Implemented via shared
memory or message passing.
• Error detection – ensure correct computing by
detecting errors in the CPU and memory
hardware, in I/O devices, or in user programs.
Additional Operating System
Functions
Additional functions exist not for helping the

user, but rather for ensuring efficient system


operations.
• Resource allocation – allocating resources to
multiple users or multiple jobs running at
the same time.
• Accounting – keep track of and record which
users use how much and what kinds of
computer resources for account billing or
for accumulating usage statistics.
• Protection – ensuring that all access to
system resources is controlled.

Monolithic OS structure
Main
procedure

Service
routines

Utility
routines
Microkernels (client-server)

[Diagram: client processes and OS servers (process server, terminal server, file server, memory server, …) run in user mode; only the microkernel runs in kernel mode]

• Processes (clients and OS servers) don't share
memory
– Communication via message-passing
– Separation reduces risk of “byzantine” failures
• Examples include Mach
Types of OS
• General purpose OS
• Real time OS

GPOS
• Kernel of this OS is more generalised.
• Kernel contains all kinds of services
required for executing generic
applications.
• GPOS are non-deterministic in
behaviour.
• Generally deployed in desktops and
PCs.
• E.g., Windows XP, MS-DOS

RTOS
• ‘Real time’ implies deterministic
timing behaviour (OS services
consume only known and expected
amounts of time, regardless of the
number of services).
• The RTOS decides which applications
should run in which order and how
much time needs to be allocated
to each application.
• E.g., Windows CE, VxWorks
What Is an Operating System?
For an Embedded System
• Provides software tools for a
convenient and prioritized control
of tasks.

• Provides tools for task (process)
synchronization.

• Provides a simple memory
management system
Abstract View of A System
(Embedded System)

Application

OS

Hardware
Difference from Desktop OS
• An RTOS is closely linked to the
application with only the needed
functions present
• Desktop operating systems start at
power-up time and they start
applications. An RTOS is started by
an application
• In RTOSs, safety is sacrificed for
performance
Real time kernel
• It contains only the minimal set of
services required for running the user
applications/tasks
• Basic functions of real time kernel are
– Task/ process management
– Task/ process scheduling
– Task/ process synchronization
– Error/ exception handling
– Memory management
– Interrupt handling
– Time management
Process/Task Concept
• Process is a program in execution;
process execution must progress in
sequential fashion
• A process includes:
– program counter
– stack
– data section
Process/Task Concept
• Task States:
– Running: Instructions are being
executed
– Ready: The process is waiting to be
assigned to a processor
– Blocked: The process is waiting for
some event to occur
– Terminated: The process has finished
execution
– New: The process is being created
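The states and their legal transitions can be captured in a small C sketch; the state names and the transition table below are inferred from the list above, not from any particular kernel:

```c
#include <stdbool.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, TERMINATED } pstate_t;

/* Legal transitions in the five-state model:
 * new -> ready (admitted), ready -> running (dispatched),
 * running -> ready (preempted), running -> blocked (waits for event),
 * blocked -> ready (event occurs), running -> terminated (exit). */
static bool legal_transition(pstate_t from, pstate_t to) {
    switch (from) {
    case NEW:     return to == READY;
    case READY:   return to == RUNNING;
    case RUNNING: return to == READY || to == BLOCKED || to == TERMINATED;
    case BLOCKED: return to == READY;
    default:      return false;   /* nothing leaves TERMINATED */
    }
}
```

A scheduler or exam question can use such a table to reject impossible moves, e.g., a blocked task can never be dispatched directly without first becoming ready.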
Task states
[State diagram: blocked → ready when whatever the task needs happens; ready → running when it is the highest-priority ready task; running → ready when another ready task has a higher priority; running → blocked when the task needs something to happen before it can continue]
Task State Transitions
Task / process management
• It deals with:
– Setting up memory space for tasks
– Loading the task's code into the memory space
– Allocating system resources
– Setting up a TCB (task control block) for the task
– Task/process deletion
Task / process management
cntd….
• The TCB contains the following set of information:
– Task ID – task identification number
– Task state – current state of the task (ready, waiting, …)
– Task type – hard real-time, soft real-time, or
background task
– Task priority – priority of the task (e.g., 1 for the
highest-priority task)
– Task context pointer – pointer for context saving
– Task memory pointers – pointers to code memory, data
memory, and stack memory
– Task system resource pointers – pointers to system
resources (mutexes, semaphores)
– Task pointers – pointers to other TCBs
– Other parameters – other relevant task parameters
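A hypothetical TCB layout along the lines of the list above might look like this in C; every field name and type here is illustrative, since TCBs are kernel dependent:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative task states and types, mirroring the fields listed
 * above; real RTOS kernels define their own layouts. */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_WAITING } task_state_t;
typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

typedef struct tcb {
    uint32_t     task_id;     /* task identification number   */
    task_state_t state;       /* current state of the task    */
    task_type_t  type;        /* hard RT, soft RT, background */
    uint8_t      priority;    /* 1 = highest priority         */
    void        *context;     /* pointer for context saving   */
    void        *code_mem;    /* pointer to code memory       */
    void        *data_mem;    /* pointer to data memory       */
    void        *stack_mem;   /* pointer to stack memory      */
    void        *resources;   /* mutexes, semaphores, ...     */
    struct tcb  *next;        /* link to other TCBs           */
} tcb_t;

/* Create-time initialization, as a kernel might do on task creation. */
static inline void tcb_init(tcb_t *t, uint32_t id, uint8_t prio) {
    t->task_id  = id;
    t->state    = TASK_READY;
    t->type     = TASK_BACKGROUND;
    t->priority = prio;
    t->context  = t->code_mem = t->data_mem = t->stack_mem = t->resources = NULL;
    t->next     = NULL;
}
```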

Task / process management
cntd….
• TCBs are kernel dependent.
• Task management services utilize the TCB in the
following ways:
– Create a TCB for a task when the task is created
– Delete/remove the TCB of a task when the task is
terminated or deleted
– Read the TCB to get the state of the task
– Update the TCB with new parameters as needed (e.g., on
context switching)
– Modify the TCB to change the priority of the task
dynamically
Task / process scheduling
• Deals with sharing the CPU among
various tasks / processes.
• A kernel service called the
“scheduler” handles task
scheduling.
• The scheduler is an algorithm
implementation which performs
efficient and optimal scheduling of
the tasks to provide deterministic
behaviour.
Task / process
synchronisation
• Deals with synchronising
concurrent access to a resource
shared across multiple tasks, and
with the communication between
various tasks.
Error / exception handling
• Deals with registering and handling the
errors that occur and the exceptions that
are raised during the execution of tasks.
• Errors/exceptions can be insufficient
memory, timeouts, deadlocks, missed
deadlines, bus errors, divide-by-zero, etc.
• These can happen at the kernel level
(e.g., deadlocks) or at the task level
(e.g., timeouts).
• The OS kernel reports these errors in the
form of system calls (APIs).
Memory management
• Deals with allocating memory.
• Predictable timing and deterministic behavior are achieved at the cost of
memory allocation in RTOS.
• An RTOS uses block-based memory allocation, whereas a GPOS uses
dynamic memory allocation.
• Blocks are stored in a “free buffer queue”.
• RTOS kernels allow tasks to access memory blocks without memory
protection.
• A few RTOSs use the virtual memory concept for memory allocation.
• The block memory concept avoids garbage-collection overhead.
• Block-based memory allocation achieves deterministic behavior, with the
trade-off of a limited choice of memory chunk sizes and suboptimal memory
usage.
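A minimal sketch of block-based allocation with a "free buffer queue" kept as a list threaded through the free blocks themselves; the block size and count are invented for illustration, and real RTOS allocators differ in details:

```c
#include <stddef.h>

/* Fixed-size block pool: allocation and release are O(1), which is
 * what gives block-based allocation its deterministic timing. */
#define BLOCK_SIZE  64
#define NUM_BLOCKS  16

static _Alignas(void *) unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void *free_list = NULL;   /* head of the free buffer queue */

static void pool_init(void) {
    free_list = NULL;
    for (int i = NUM_BLOCKS - 1; i >= 0; i--) {
        *(void **)pool[i] = free_list;   /* link block onto free list */
        free_list = pool[i];
    }
}

static void *block_alloc(void) {
    if (!free_list) return NULL;         /* pool exhausted */
    void *b = free_list;
    free_list = *(void **)b;             /* pop head of free list */
    return b;
}

static void block_free(void *b) {
    *(void **)b = free_list;             /* push back onto free list */
    free_list = b;
}
```

Because the free list is a stack of equal-sized blocks, there is no searching, splitting, or coalescing, and hence no unpredictable delay.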

Interrupt handling
• Deals with handling of various types of
interrupts.
• Interrupts provide real time behavior to the
systems.
• An interrupt informs the processor that an
external device, or the task associated
with it, requires the attention of the CPU.
• Interrupts can be either synchronous or
asynchronous.
• Synchronous – interrupts which occur in sync
with the currently executing task, e.g.,
divide-by-zero, memory segmentation error.
• Asynchronous – interrupts which can occur at
any point in the execution of any task,
e.g., timer or external-device interrupts.
Time management
• Accurate time management is essential for
providing precise time reference for all
applications.
• The time reference for the kernel is provided
by a high-resolution Real-Time Clock (RTC)
hardware chip (hardware timer).
• The hardware timer is programmed to
interrupt the processor/controller at a
fixed rate. Each interrupt is called a
“timer tick”.
• Some RTOS provide options for selecting the
required kernel functions at the time of
building a kernel.
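The timer-tick mechanism can be sketched as follows; the 1 kHz tick rate is an assumed configuration value, not something the slides specify:

```c
/* Timer-tick sketch: the hardware timer interrupt handler increments
 * a tick counter at a fixed rate; kernel time services convert ticks
 * back to real time. */
#define TICK_RATE_HZ 1000UL        /* assumed: one tick per millisecond */

static volatile unsigned long tick_count = 0;

static void timer_isr(void) {      /* called on every timer interrupt */
    tick_count++;
}

static unsigned long ticks_to_ms(unsigned long ticks) {
    return ticks * 1000UL / TICK_RATE_HZ;
}
```

Kernel services such as delays and timeouts are then expressed in ticks, so their durations are exact multiples of the tick period.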

Real-Time Systems (Cont.)
• Hard real-time:
– Secondary storage limited or absent, data stored in
short term memory, or read-only memory (ROM)
– Conflicts with time-sharing systems, not supported by
general-purpose operating systems.

• Soft real-time
– Limited utility in industrial control and robotics
– Useful in applications (multimedia, virtual reality)
requiring advanced operating-system features.
Computing Elements

[Stack diagram: Applications run on programming paradigms, expressed as threads over an interface to a microkernel operating system; underneath is a multi-processor computing system (P = processor) on which processors execute threads belonging to processes]
Basic Process Model

[Diagram: two processes, each with its own STACK, DATA, and TEXT segments; between them sit shared memory segments, pipes, open files, or mmap'd files; the shared memory is maintained by the kernel]
Processes
The Process Model

• Multiprogramming of four programs


• Conceptual model of 4 independent, sequential processes
• Only one program active at any instant
Process States (1)

• Possible process states


– running
– blocked
– ready
• Transitions between states shown
Process States (2)

• Lowest layer of process-structured


OS
– handles interrupts, scheduling
• Above that layer are sequential processes
Implementation of
Processes (1)

 Fields of a process table entry


Implementation of
Processes (2)

 Skeleton of what lowest level of OS does


when an interrupt occurs
Process management
• Process creation
• Process termination
• Start up sequence
Process Creation
• A parent process creates child processes,
which, in turn, create other processes, forming
a tree of processes.
• Resource sharing (one of)
– Parent and children share all resources.
– Children share subset of parent’s resources.
– Parent and child share no resources.
• Address space (one of)
– Child duplicate of parent.
– Child has a program loaded into it.
• Execution (one of)
– Parent and child execute concurrently.
– Parent waits until the child terminates.
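The parent/child pattern above can be sketched with the POSIX fork()/waitpid() calls; the helper name and return convention below are ours:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* The parent creates a child with fork(), both run concurrently, and
 * the parent waits until the child terminates.  Returns the child's
 * exit status, or -1 on error. */
static int run_child_and_wait(int child_exit_code) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                 /* fork failed */
    if (pid == 0) {
        /* Child: starts as a duplicate of the parent's address space.
         * A real program would typically call exec*() here to load a
         * new program into the child. */
        _exit(child_exit_code);
    }
    int status = 0;
    if (waitpid(pid, &status, 0) < 0)   /* parent waits for the child */
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

This is the "parent waits until child terminates" execution option; removing the waitpid() call gives the concurrent option.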
Process Creation
Principal events that cause process

creation
1. System initialization
2. Execution of a process-creation system
call by a running process
3. User request to create a new process
4. Initiation of a batch job
Process Termination
• Process executes last statement and
tells the OS
– Output status from child to parent
– Process’ resources are deallocated by
operating system.
• Parent may terminate execution of
children processes
– Child has exceeded allocated resources.
– Task assigned to child is no longer
required.
– Parent is exiting (options)
• Operating system does not allow child to
continue if its parent terminates.
• Cascading termination.
Process Termination
Conditions which terminate processes

1.Normal exit (voluntary)


2.Error exit (voluntary)
3.Fatal error (involuntary)
4.Killed by another process
(involuntary)
The Startup Sequence
• Initialize registers
• Put ROM start address (or indirect
address) in PC
• Start the fetch-decode-execute cycle
(checking for interrupts each cycle)
• Load OS from disk to RAM
• Start OS processes, including
terminal logins and daemons
• Wait for interrupt
Process Hierarchies

• A parent creates a child process; child
processes can create their own processes
• Forms a hierarchy
– UNIX calls this a "process group"
• Windows has no concept of process
hierarchy
– all processes are created equal
Threads
• Concept of multithreading
• Thread standards – POSIX threads, Win32
threads, Java threads
What are Threads?
➘ A thread is a piece of code that can execute in
concurrence with other threads.
➘ It is a schedulable entity on a processor.

[Running thread object: a hardware context (registers, status word, program counter) plus local state, global/shared state, the PC, and the hard/software context]
Threads
• A thread (or lightweight process) is a
basic unit of CPU utilization; it
consists of:
– Program counter
– Register set
– Stack space
• A thread shares with its peer threads
its:
– Code segment
– Data segment
– Operating-system resources
 Collectively known as a task.
• A traditional or heavyweight process
is equal to a task with one thread.
Threaded Process
Model

[Diagram: several threads within one process – each thread has its own THREAD STACK, while all threads share the process's MEMORY, DATA, and TEXT segments]
● Independently executable units
● All threads are parts of a process, hence communication
is easier and simpler.
Levels of Parallelism

[Code-granularity diagram, from coarsest to finest:
– Large grain (task level): whole programs – Task i-1, Task i, Task i+1
– Medium grain (control level): functions within a program – func1( ), func2( ), func3( )
– Fine grain (data level): loop/data-parallel operations – a(0)=.., a(1)=.., a(2)=.. over b(0), b(1), b(2)
– Very fine grain (multiple issue, with hardware): individual instructions – load, +, x]
Simple Thread
Example (POSIX threads version)
void *func(void *arg)
{
    /* define local data */
    - - - - - - - - - - -
    - - - - - - - - - - -   /* function code */
    - - - - - - - - - - -
    pthread_exit(exit_value);
}

int main(void)
{
    pthread_t tid;
    void *exit_value;
    - - - - - - - - - - -
    pthread_create(&tid, NULL, func, NULL);   /* pass func itself, don't call it */
    - - - - - - - - - - -
    pthread_join(tid, &exit_value);
    - - - - - - - - - - -
}
Single and Multithreaded
Processes
Threads
The Thread Model (1)

(a) Three processes each with one thread


(b) One process with three threads

The Thread Model (2)

• Items shared by all threads in a


process
• Items private to each thread
The Thread Model (3)

 Each thread has its own stack


Thread Usage (1)

 A word processor with three threads


Thread Usage (2)

 A multithreaded Web server


Thread Usage (3)

• Rough outline of code for previous


slide
 (a) Dispatcher thread
 (b) Worker thread
Thread Usage (4)

 Three ways to construct a server

Various Implementations
PThreads
• A POSIX standard (IEEE 1003.1c) API for thread creation and
synchronization
• API specifies behavior of the thread library; implementation is
up to the developers of the library
• Common in UNIX operating systems (Solaris, Linux, Mac OS X)

Windows Threads

• Implements the one-to-one mapping


• Each thread contains
– A thread id
– Register set
– Separate user and kernel stacks
– Private data storage area
• The register set, stacks, and private storage area are known as
the context of the threads
Various Implementations
Linux Threads

• Linux refers to them as tasks rather than threads


• Thread creation is done through clone() system call
• clone() allows a child task to share the address space of the parent task
(process)

Java Threads

• Java threads may be created by:


– Extending Thread class
– Implementing the Runnable interface
• Java threads are managed by the JVM.
Thread pre-emption
• Types of thread
– User level thread
– Kernel / system level thread
– Many to one model
– One to one model
– Many to many model
Computational Model

[Diagram: User-Level Threads are mapped by a user-level scheduler (user space) onto Virtual Processors, which a kernel-level scheduler (kernel space) maps onto Physical Processors]

• Parallel execution is due to:
★ Concurrency of threads on virtual
processors
★ Concurrency of threads on physical
processors (true parallelism)
General Architecture of
Thread Model
• Hides the details of the machine
architecture
• Maps user threads to kernel
threads
• Process VM is shared; a state
change in the VM made by one thread
is visible to the other threads.
User Threads
• Thread management is done by a
user-level threads library

• Three primary thread libraries:


– POSIX Pthreads
– Win32 threads
– Java threads
Implementing Threads in User
Space

 A user-level threads package


Kernel Threads
• Supported by the Kernel

• Examples
– Windows XP/2000
– Solaris
– Linux
– Tru64 UNIX
– Mac OS X
Implementing Threads in the
Kernel

 A threads package managed by the


kernel
Many-to-One
• Many user-level threads mapped to
single kernel thread.
• Used on systems that do not support
kernel threads.
One-to-One
• Each user-level thread maps to
kernel thread.
• Examples
– Windows 95/98/NT/2000
– OS/2
Many-to-Many Model
• Allows many user level threads to be
mapped to many kernel threads.
• Allows the operating system to create a
sufficient number of kernel threads.
– Solaris 2
– Windows NT/2000 with the ThreadFiber
package
Hybrid Implementations

 Multiplexing user-level threads onto kernel-level
threads
Threading
Issues
Semantics of fork() and exec() system calls

• Does fork() duplicate only the calling thread


or all threads?

Thread cancellation

• Terminating a thread before it has finished


• Two general approaches:
– Asynchronous cancellation terminates
the target thread immediately
– Deferred cancellation allows the target
thread to periodically check if it should
be cancelled
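Deferred cancellation can be sketched with an explicit flag that the target thread polls at safe points; the names here are illustrative (POSIX also provides pthread_cancel with deferred cancelability as the standard mechanism):

```c
#include <pthread.h>
#include <stdatomic.h>

/* The target thread checks the flag at well-defined points and exits
 * cleanly, instead of being destroyed at an arbitrary instruction as
 * with asynchronous cancellation. */
static atomic_int cancel_requested = 0;

static void *cancellable_worker(void *unused) {
    (void)unused;
    while (!atomic_load(&cancel_requested)) {
        /* one unit of work per iteration; the loop condition is the
         * cancellation point */
    }
    return NULL;   /* clean exit: locks/buffers could be released here */
}
```

The benefit over asynchronous cancellation is that the thread is never killed while holding a lock or halfway through updating shared data.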

Threading
Issues
Signal handling
• Signals are used in UNIX systems to notify a process that a particular
event has occurred
• A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled
• Options:
– Deliver the signal to the thread to which the signal applies
– Deliver the signal to every thread in the process
– Deliver the signal to certain threads in the process
– Assign a specific thread to receive all signals for the process
Thread pools

• Create a number of threads in a pool where they await work


• Advantages:
– Usually slightly faster to service a request with an existing
thread than create a new thread
– Allows the number of threads in the application(s) to be
bound to the size of the pool
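A minimal thread-pool sketch using POSIX threads; the pool size, queue capacity, and all function names are assumptions for illustration, and a production pool would add overflow handling and error checking:

```c
#include <pthread.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64

typedef void (*job_fn)(void *);
static struct { job_fn fn; void *arg; } queue[QUEUE_CAP];
static int q_head = 0, q_tail = 0, shutting_down = 0;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
static pthread_t workers[POOL_SIZE];

static void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_head == q_tail && !shutting_down)
            pthread_cond_wait(&q_cond, &q_lock);   /* await work */
        if (q_head == q_tail && shutting_down) {
            pthread_mutex_unlock(&q_lock);
            return NULL;
        }
        job_fn fn = queue[q_head].fn;
        void *arg = queue[q_head].arg;
        q_head = (q_head + 1) % QUEUE_CAP;
        pthread_mutex_unlock(&q_lock);
        fn(arg);                                   /* run the job */
    }
}

static void pool_start(void) {
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
}

static void pool_submit(job_fn fn, void *arg) {
    pthread_mutex_lock(&q_lock);
    queue[q_tail].fn = fn;
    queue[q_tail].arg = arg;
    q_tail = (q_tail + 1) % QUEUE_CAP;   /* sketch: no overflow check */
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

static void pool_shutdown(void) {        /* drain queue, join workers */
    pthread_mutex_lock(&q_lock);
    shutting_down = 1;
    pthread_cond_broadcast(&q_cond);
    pthread_mutex_unlock(&q_lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(workers[i], NULL);
}

/* Demo job used below: atomically increments a counter. */
static int done_count = 0;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static void count_job(void *unused) {
    (void)unused;
    pthread_mutex_lock(&count_lock);
    done_count++;
    pthread_mutex_unlock(&count_lock);
}
```

The workers are created once; each submitted request is handed to an already-existing thread, which is what makes servicing a request cheaper than creating a fresh thread per request.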
Threading
Issues
Thread specific data
• Allows each thread to have its own copy of data
• Useful when you do not have control over the thread creation
process (i.e., when using a thread pool)

Scheduler activations

• Many:Many models require communication to maintain the


appropriate number of kernel threads allocated to the
application
• Scheduler activations provide upcalls - a communication
mechanism from the kernel to the thread library
• This communication allows an application to maintain the
correct number of kernel threads

Advantages and Drawbacks of
Threads
• Advantages:
• the overhead for creating a thread is
significantly less than that for
creating a process
• multitasking, i.e., one process serves
multiple clients
• switching between threads requires
the OS to do much less work than
switching between processes
• Drawbacks:
• not as widely available as longer
established features
• writing multithreaded programs requires
more careful thought
• more difficult to debug than single
threaded programs
• for single processor machines, creating
several threads in a program may not
necessarily produce an increase in
performance (only so many CPU cycles
to be had)
Thread vs process
Multi-Processing, Multi-Threaded
Threaded libraries, multi-threaded I/O

[Diagram: one application per CPU versus one multi-threaded application spanning several CPUs]
• Better response times in multiple-application
environments
• Higher throughput for parallelizable
applications
Multi-threading, continued...
• A multi-threaded OS enables parallel, scalable I/O.

[Diagram: several applications issue requests to an OS kernel running across multiple CPUs]
• Multiple, independent I/O requests can be
satisfied simultaneously, because all the major
disk, tape, and network drivers have been
multi-threaded, allowing any given driver to
run on multiple CPUs simultaneously.
Multithreading - Uniprocessors
• Concurrency vs parallelism

[Diagram: processes P1, P2, P3 interleaved on a single CPU over time]
• Concurrency: the number of simultaneous
execution units is greater than the number of
CPUs.
Multithreading - Multiprocessors
• Concurrency vs parallelism

[Diagram: processes P1, P2, P3 each running on their own CPU over time]
• Parallelism: the number of executing processes
equals the number of CPUs.
Multi-tasking
• Types of multi-tasking
– Co-operative multi-tasking
– Pre-emptive multi-tasking
– Non – preemptive multi-tasking
Multitasking
Task Scheduling
• Tasks enter the blocked state voluntarily
• Some part of the RTOS is responsible for
determining when a task needs to be
moved from the blocked state to the
ready state (in response to some event)
• The scheduler is in charge of the running
state, it selects from the ready tasks, the
one with the highest priority and runs it
Scheduler organization

[Diagram: processes entering the ready state from other states pass through the enqueuer into the ready list of process descriptors; the context switcher removes the running process; the dispatcher selects the next one]
• When a process changes to the ready state, the enqueuer places a
pointer to its process descriptor into the ready list.
• Whenever the scheduler switches the CPU from executing one process
to executing another, the context switcher saves the contents of all
processor registers of the process being removed into that process's
descriptor.
– Voluntary context switch
– Involuntary context switch
• The dispatcher is invoked after the current process has been removed
from the CPU; it chooses one of the processes enqueued in the ready
list and then allocates the CPU to that process by loading its saved
context.
Scheduler types
• Cooperative scheduler (voluntary CPU
sharing)
– Each process will periodically invoke the process
scheduler, voluntarily sharing the CPU
– Each process should call a function that will
implement the process scheduling.
• yield(Pcurrent, Pnext) (sometimes implemented
as an instruction in hardware), where Pcurrent
is an identifier of the current process and
Pnext is an identifier of the next process
• Preemptive scheduler (involuntary CPU
sharing)
– The interrupt system enforces periodic
involuntary interruption of any process’s
execution; it can force a process to
involuntarily execute a yield type function (or
instruction)
– This is done by incorporating an interval timer
Cooperative scheduler

[Diagram: process P1 calls yield(*, scheduler); the scheduler { s = select(…); yield(*, s); } consults the process descriptors for P1, P2, and itself across the operating system interface, and yields to the selected process, e.g., P2, which later calls yield(*, scheduler) in turn]
• Possible problems:
– The processes may not voluntarily cooperate with one another
– One process could keep the processor forever
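Voluntary CPU sharing can be simulated in plain C, with each task function returning to stand in for yield(Pcurrent, scheduler); the task count and workloads below are invented for illustration:

```c
#include <stdbool.h>

#define NTASKS 3

static int  progress[NTASKS];   /* slices completed per task       */
static bool finished[NTASKS];

/* One slice of work, after which the task "yields" by returning.
 * Task i finishes after 2*(i+1) slices in this toy workload. */
static void task_step(int id) {
    if (++progress[id] >= (id + 1) * 2)
        finished[id] = true;
}

/* Round-robin cooperative scheduler: repeatedly dispatch each
 * unfinished task for one slice.  Returns total dispatches. */
static int cooperative_scheduler(void) {
    int switches = 0;
    bool any = true;
    while (any) {
        any = false;
        for (int i = 0; i < NTASKS; i++) {
            if (!finished[i]) {
                task_step(i);    /* dispatch; task yields by returning */
                switches++;
                any = true;
            }
        }
    }
    return switches;
}
```

The scheduler only ever regains control because task_step returns; if one task looped forever inside its slice, no other task would run again, which is exactly the failure mode noted above.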
Preemptive scheduler
IntervalTimer() {
    InterruptCount = InterruptCount - 1;
    if (InterruptCount <= 0) {
        InterruptRequest = TRUE;
        InterruptCount = K;
    }
}

SetInterval(programmableValue) {
    K = programmableValue;
    InterruptCount = K;
}

• A programmable interval timer will raise an
interrupt every K clock ticks, causing the
hardware to execute the logical equivalent
of a yield instruction, which invokes the
interrupt handler
• The interrupt handler for the timer interrupt will call
the scheduler to reschedule the processor without
any action on the part of the running process
• The scheduler is guaranteed to be invoked once
every K clock ticks
Scheduling Objectives

• Fairness (equitable shares of CPU)


• Priority (most important first)
• Efficiency (make best use of equipment)
• Encouraging good behavior (can’t take
advantage of the system)
• Support for heavy loads (degrade
gracefully)
• Adapting to different environments
(interactive, real-time, multi-media)
Performance Criteria
• Fairness
• Efficiency: keep resources as busy as possible
• Throughput: # of processes that completes in unit
time
• Turnaround Time (also called elapsed time)
– amount of time to execute a particular
process, from the time it enters the system
• Waiting Time
– amount of time process has been waiting in
ready queue
• Response Time
– amount of time from when a request was
first submitted until first response is
produced.
– predictability and variance
• Proportionality:
– meet users' expectation
Scheduling Criteria
• CPU utilization
– keep the CPU as busy as possible

• Throughput
– no. of processes completed per time unit

• Turnaround time
– how long it takes to complete a process

• Waiting time
– the total time a process is in the ready queue
– the measure used in chapter 5

• Response time
– time a process takes to start responding

Scheduling – Metrics
• Simplicity – easy to implement
• Job latency – time from start to completion
• Interactive latency – time from action start
to expected system response
• Throughput – number of jobs completed
• Utilization – keep processor and/or subset
of I/O devices busy
• Determinism – ensure that jobs get done
before some time or event
• Fairness – every job makes progress
Preemptive vs. Non-preemptive
scheduling
• Non-preemptive scheduling:
– The running process keeps the CPU
until it voluntarily gives up the CPU
• the process exits (running → terminated), or
• it switches to the blocked state
• transitions 1 and 4 only (no 3)

[State diagram: running, ready, blocked, and terminated states with numbered transitions]
• Preemptive scheduling:
– The running process can be
interrupted and must release the
CPU (it can be forced to give up the CPU)
Preemptive vs. non-preemptive
scheduling
CPU scheduling decisions may take

place when a process:


1. Switches from running to waiting


state
2. Terminates

3. Switches from waiting to ready

4. Switches from running to ready

state
Preemptive vs. non-preemptive
scheduling
• When scheduling takes place only under
circumstances 1 and 2, we say that the
scheduling scheme is non-preemptive;
otherwise, it is called preemptive

• Under non-preemptive scheduling, once
the CPU has been allocated to a process,
the process keeps the CPU until it releases
the CPU either by terminating or by
switching to the waiting state (used by
Windows 3.x; Windows 95 introduced
preemptive scheduling)
Preemptive vs. non-preemptive
scheduling
• Preemptive scheduling incurs a cost
associated with access to shared data.

• Consider two processes that share data.
With preemption, one process may be
interrupted while updating the data, and
the second process may then try to read
the data, which is in an inconsistent
state. (Ch6)
Task scheduling
classification
• Non-preemptive scheduling
– First-Come-First-Served (FCFS)/FIFO
Scheduling
– Last-Come-First-Served (LCFS)/LIFO
Scheduling
– Shortest Job First (SJF) Scheduling
– Priority based Scheduling
• Pre-emptive Scheduling
– Pre-emptive SJF / Shortest Remaining
Time (SRT)
– Round Robin (RR) Scheduling
First Come First Served
• Non-preemptive algorithm
• This scheduling strategy assigns priority to
processes in the order in which they
request the processor
– The priority of a process is computed by the
enqueuer by time-stamping all incoming
processes and then having the dispatcher
select the process that has the oldest time
stamp
– Alternative implementation consists of having
the ready list organized as a FIFO data
structure (where each entry points to a
process descriptor); the enqueuer adds
processes to the tail of the queue and the
dispatcher removes processes from the head
of the queue
First Come First Served (FCFS)
FCFS example

 i   τ(pi)   TTRnd(pi)
 0    350        350
 1    125        475
 2    475        950
 3    250       1200
 4     75       1275

Schedule: P0 [0–350], P1 [350–475], P2 [475–950], P3 [950–1200], P4 [1200–1275]

• Average turn around time:
– TTRnd = (350 + 475 + 950 + 1200 +
1275)/5 = 850
• Average wait time:
– W = (0 + 350 + 475 + 950 + 1200)/5 =
595
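The FCFS numbers above can be reproduced with a short simulation. This is a minimal Python sketch (not from the slides), assuming all five processes arrive at time 0 and are served in index order:

```python
# FCFS scheduling sketch: reproduces the turnaround/wait figures from the example.
def fcfs(burst_times):
    """Return (turnaround_times, wait_times) for FCFS with simultaneous arrival."""
    turnaround, wait = [], []
    clock = 0
    for burst in burst_times:
        wait.append(clock)        # time spent waiting in the ready queue
        clock += burst            # run to completion (non-preemptive)
        turnaround.append(clock)  # completion time = turnaround (arrival at 0)
    return turnaround, wait

bursts = [350, 125, 475, 250, 75]  # tau(p0)..tau(p4) from the slide
tt, w = fcfs(bursts)
print(sum(tt) / len(tt))  # 850.0  average turnaround time
print(sum(w) / len(w))    # 595.0  average wait time
```

Each process simply inherits the accumulated clock as its wait time, which is why the averages depend so strongly on arrival order.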
FCFS Features
• May not give the best average
waiting time.
• Average times can vary a lot
depending on the order of the
processes.
• Convoy effect
– small processes can get stuck
behind a big process
FCFS drawbacks
• Favors CPU-bound processes
– A CPU-bound process monopolizes the
processor
– I/O-bound processes have to wait until
completion of the CPU-bound process
• I/O-bound processes may have to wait
even after their I/Os are completed
(poor device utilization)
– Better I/O device utilization could be
achieved if I/O-bound processes had
higher priority
LIFO
Shortest Job First Scheduling (SJF)
• When the CPU is available, assign
it to the process with the
smallest next CPU burst duration
– a better name is “shortest next CPU
burst”
• Can be preemptive or non-
preemptive
Shortest Job First
• There are non-preemptive and preemptive variants
• It is an optimal algorithm from the point of view of
average turn around time; it minimizes the average
turn around time
• Preferential service of short jobs
• It requires the knowledge of the service time for each
process
• In the extreme case, where the system has little idle
time, processes with large service times may
never be served
• When it is not possible to know the service
time for each process, it is estimated using
predictors
– Pn = a·On-1 + (1 − a)·Pn-1, where
• On-1 = previous (observed) service time
• Pn-1 = previous predictor
• a is within the [0,1] range
– If a = 1 then Pn-1 is ignored
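The burst-length predictor above is a one-line exponential average. A minimal Python sketch (not from the slides); the value a = 0.5, the initial guess of 10, and the observed burst sequence are assumptions chosen for illustration:

```python
# Exponential-average burst predictor: P_n = a * O_{n-1} + (1 - a) * P_{n-1}
def predict(prev_burst, prev_prediction, a=0.5):
    """Blend the last observed burst with the previous prediction."""
    return a * prev_burst + (1 - a) * prev_prediction

# Example run: initial guess 10, then observed bursts 6, 4, 6.
p = 10.0
for observed in [6, 4, 6]:
    p = predict(observed, p)
print(p)  # 6.0 — predictions converge toward recent burst lengths
```

With a = 1 the call reduces to `predict(b, p, a=1.0) == b`, i.e. the history term Pn-1 is ignored, exactly as the slide states.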
SJF example

 i   τ(pi)   TTRnd(pi)
 0    350        800
 1    125        200
 2    475       1275
 3    250        450
 4     75         75

Schedule: P4 [0–75], P1 [75–200], P3 [200–450], P0 [450–800], P2 [800–1275]

• Average turn around time:
– TTRnd = (800 + 200 + 1275 + 450 +
75)/5 = 560
• Average wait time:
– W = (450 + 75 + 800 + 200 + 0)/5 = 305
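Non-preemptive SJF with simultaneous arrival is just FCFS over the burst-sorted order, so the example's figures can be checked the same way. A minimal Python sketch (not from the slides):

```python
# Non-preemptive SJF sketch: sort by burst length, then serve in that order.
def sjf(burst_times):
    """Return dicts {pid: turnaround} and {pid: wait}, assuming arrival at t=0."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    turnaround, wait, clock = {}, {}, 0
    for pid in order:
        wait[pid] = clock          # everything served earlier is its wait
        clock += burst_times[pid]  # run to completion
        turnaround[pid] = clock
    return turnaround, wait

tt, w = sjf([350, 125, 475, 250, 75])
print(sum(tt.values()) / 5)  # 560.0  average turnaround time
print(sum(w.values()) / 5)   # 305.0  average wait time
```

Sorting shortest-first is what makes the averages beat FCFS (850/595) on the same workload.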

SJF Features
• Provably optimal
– gives the minimum average waiting
time
• Problem: it is usually impossible to
know the next CPU burst duration
for a process
– solution: guess (predict)
SJF / SPN Critique
• Possibility of starvation for longer processes
• Lack of preemption is not suitable in a time-
sharing environment
• SJF/SPN implicitly incorporates priorities
– Shortest jobs are given preference
– CPU-bound processes have lower priority,
but a process doing no I/O could still
monopolize the CPU if it is the first to
enter the system
Shortest Job First (Shortest
Process Next)
• Selection function: the process with the
shortest expected CPU burst time
– I/O-bound processes will be selected first
• Decision mode: non-preemptive
• The required processing time, i.e., the CPU
burst time, must be estimated for each
process
Is SJF/SPN optimal?
• If the metric is turnaround time (response time), is SJF or
FCFS better?
• For FCFS, resp_time=(3+9+13+18+20)/5 = ?
– Note that Rfcfs = 3+(3+6)+(3+6+4)+…. = ?
• For SJF, resp_time=(3+9+11+15+20)/5 = ?
– Note that Rsjf = 3+(3+6)+(3+6+2)+…. = ?
• Which one is smaller? Is this always the case?

Is SJF/SPN optimal?
• Take the two scheduling disciplines; up to some
point they both choose the same subset of jobs (the
first k jobs).
• At some point, each discipline chooses a different
job (FCFS chooses k1, SJF chooses k2)
• Rfcfs = nR1 + (n-1)R2 + … + (n-k1)Rk1 + … + (n-k2)Rk2 +
… + Rn
• Rsjf = nR1 + (n-1)R2 + … + (n-k2)Rk2 + … + (n-k1)Rk1 +
… + Rn
• Which one is smaller? Rfcfs or Rsjf ?
Priorities
• Implemented by having multiple
ready queues to represent each
level of priority
• The scheduler selects a process of
higher priority over one of lower priority
• Lower-priority processes may suffer starvation
• To alleviate starvation, allow dynamic
priorities
– The priority of a process changes
based on its age or execution
history
Time slice (Round Robin)
• Preemptive algorithm
• Each process gets a time slice of CPU time, distributing the
processing time equitably among all processes requesting the
processor
• Whenever the time slice expires, the control of the CPU is given
to the next process in the ready list; the process being
switched is placed back into the ready process list
• It implies the existence of a specialized timer that measures
the processor time for each process; every time a process
becomes active, the timer is initialized
• It is not well suited for long jobs, since the scheduler will be
called multiple times until the job is done
• It is very sensitive to the size of the time slice
– Too big – large delays in response time for interactive
processes
– Too small – too much time spent running the scheduler
– Very big – turns into FCFS
• The time slice size is determined by analyzing the number of
instructions that the processor can execute in the given time
slice.
Time slice (Round Robin)
example

 i   τ(pi)   TTRnd(pi)
 0    350       1100
 1    125        550
 2    475       1275
 3    250        950
 4     75        475

Schedule (quantum = 50): P0 P1 P2 P3 P4 | P0 P1 P2 P3 P4 (P4 done at 475) |
P0 P1 (P1 done at 550) P2 P3 | P0 P2 P3 | P0 P2 P3 (P3 done at 950) |
P0 P2 | P0 (P0 done at 1100) P2 | P2 runs alone until 1275

Time slice size is 50, negligible amount of time for
context switching

• Average turn around time:
– TTRnd = (1100 + 550 + 1275 + 950 + 475)/5 =
870
• Average wait time:
– W = (0 + 50 + 100 + 150 + 200)/5 = 100
• The wait time shows the benefit of the RR
algorithm in terms of how quickly a
process first gets the CPU
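The RR example above can be reproduced by simulating a ready queue with a quantum of 50. A minimal Python sketch (not from the slides), assuming zero context-switch cost, simultaneous arrival, and the slide's definition of wait time as time until a process first gets the CPU:

```python
from collections import deque

# Round-robin sketch: quantum 50, zero switch cost, all processes arrive at t=0.
def round_robin(burst_times, quantum=50):
    """Return ({pid: turnaround}, {pid: wait-until-first-dispatch})."""
    remaining = dict(enumerate(burst_times))
    queue = deque(remaining)
    turnaround, first_wait, clock = {}, {}, 0
    while queue:
        pid = queue.popleft()
        first_wait.setdefault(pid, clock)   # recorded only on first dispatch
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            turnaround[pid] = clock         # process finished
        else:
            queue.append(pid)               # back to the tail of the ready list
    return turnaround, first_wait

tt, w = round_robin([350, 125, 475, 250, 75])
print(sum(tt.values()) / 5)  # 870.0  average turnaround time
print(sum(w.values()) / 5)   # 100.0  average wait time
```

Every process is dispatched within the first 250 time units, which is the quick-first-access property the slide highlights.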
Priority based schedule
example

Highest priority corresponds to highest value

 i   τ(pi)   Priority   TTRnd(pi)
 0    350        5           350
 1    125        2          1025
 2    475        3           900
 3    250        1          1275
 4     75        4           425

Schedule: P0 [0–350], P4 [350–425], P2 [425–900], P1 [900–1025], P3 [1025–1275]

• Average turn around time:
– TTRnd = (350 + 425 + 900 + 1025 +
1275)/5 = 795
• Average wait time:
– W = (0 + 350 + 425 + 900 + 1025)/5 =
540
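The priority example is again a serve-in-order computation, this time ordered by descending priority value. A minimal Python sketch (not from the slides), assuming non-preemptive scheduling and simultaneous arrival:

```python
# Non-preemptive priority scheduling sketch: higher value = higher priority.
def priority_schedule(burst_times, priorities):
    """Return dicts {pid: turnaround} and {pid: wait}, assuming arrival at t=0."""
    order = sorted(range(len(burst_times)),
                   key=lambda i: priorities[i], reverse=True)
    turnaround, wait, clock = {}, {}, 0
    for pid in order:
        wait[pid] = clock          # waits behind all higher-priority processes
        clock += burst_times[pid]  # run to completion
        turnaround[pid] = clock
    return turnaround, wait

# Bursts and priorities from the slide (P0..P4).
tt, w = priority_schedule([350, 125, 475, 250, 75], [5, 2, 3, 1, 4])
print(sum(tt.values()) / 5)  # 795.0  average turnaround time
print(sum(w.values()) / 5)   # 540.0  average wait time
```

Note that P3, the lowest-priority process, finishes last regardless of its burst length — the starvation risk the earlier slide mentions.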
Round-Robin
■ Selection function: same as FCFS
■ Decision mode: preemptive
◆ a process is allowed to run until the time slice period
(quantum, typically from 10 to 100 ms) has
expired
◆ then a clock interrupt occurs and the running process is
put on the ready queue
RR Features
• Average waiting time can be quite long.
• Context switching is an important overhead when
the time quantum is small.
• Average turnaround time of a set of processes does
not necessarily improve as the time quantum size
increases:

 Process   Time
   P1        6
   P2        3
   P3        1
   P4        7

[Plot: average turnaround time (9.5–12.5) vs. time quantum (1–7) for the four processes above]
RR Time Quantum
• Quantum must be substantially
larger than the time required to
handle the clock interrupt and
dispatching
• Quantum should be larger than the
typical interaction
– but not much larger, to avoid
penalizing I/O-bound processes
RR Time Quantum
Round Robin: critique
• Still favors CPU-bound processes
– An I/O-bound process uses the CPU for a time
less than the time quantum before it is
blocked waiting for an I/O
– A CPU-bound process runs for all its time
slice and is put back into the ready queue
• May unfairly get in front of blocked
processes
RR scheduling with overhead
example

 i   τ(pi)   TTRnd(pi)
 0    350       1320
 1    125        660
 2    475       1535
 3    250       1140
 4     75        565

Time slice size is 50, 10 units of time for context
switching

Schedule: P0 [0–50], P1 [60–110], P2 [120–170], P3 [180–230], P4 [240–290], …
with a 10-unit switch between dispatches; completions: P4 at 565, P1 at 660,
P3 at 1140, P0 at 1320, P2 at 1535

• Average turn around time:
– TTRnd = (1320 + 660 + 1535 + 1140 +
565)/5 = 1044
• Average wait time:
– W = (0 + 60 + 120 + 180 + 240)/5 =
120
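The overhead example extends the earlier round-robin simulation by charging the switch cost between consecutive dispatches. A minimal Python sketch (not from the slides), assuming the 10-unit cost is paid after every dispatch while other processes remain queued, but not after the final completion:

```python
from collections import deque

# RR with context-switch overhead: quantum 50, switch cost 10, arrivals at t=0.
def rr_with_overhead(burst_times, quantum=50, switch=10):
    """Return {pid: turnaround} under round robin with per-dispatch overhead."""
    remaining = dict(enumerate(burst_times))
    queue = deque(remaining)
    turnaround, clock = {}, 0
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            turnaround[pid] = clock   # process finished
        else:
            queue.append(pid)
        if queue:
            clock += switch           # pay the switch cost between dispatches
    return turnaround

tt = rr_with_overhead([350, 125, 475, 250, 75])
print(sum(tt.values()) / 5)  # 1044.0  average turnaround time
```

Compared with the zero-overhead run (870), the same workload loses 174 time units of average turnaround to context switching — the sensitivity to quantum size the earlier slide warns about.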
Priority based scheduling (Event
Driven)
• Both preemptive and non-preemptive variants
• Each process has an externally assigned priority
• Every time an event occurs that triggers a process switch,
the process with the highest priority is chosen from the
ready process list
• There is the possibility that processes with low priority
will never gain CPU time
• There are variants with static and dynamic priorities;
dynamic priority computation solves the problem of
processes that may never gain CPU time (the longer the
process waits, the higher its priority becomes)
• It is used for real-time systems
