p_estcpu ← (2·load / (2·load + 1)) · p_estcpu + p_nice
- Load is sampled average of length of run queue plus short-term sleep queue over last minute
- Run queue determined by p_usrpri / 4
p_usrpri = 50 + (p_estcpu / 4) + 2·p_nice     (value clipped if over 127)
Sleeping process increases priority
- p_estcpu not updated while asleep
- Instead p_slptime keeps count of sleep time
When process becomes runnable:
p_estcpu ← (2·load / (2·load + 1))^p_slptime · p_estcpu
- Approximates decay, ignoring nice and past loads
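The update rules above can be sketched in C. This is a toy model: plain doubles stand in for the kernel's fixed-point arithmetic, and the function names are invented here, not kernel code.

```c
#include <assert.h>

/* Decay factor 2*load / (2*load + 1) from the slides */
static double decay(double load) {
    return (2.0 * load) / (2.0 * load + 1.0);
}

/* Once per second while runnable: p_estcpu <- decay * p_estcpu + p_nice */
double update_estcpu(double p_estcpu, double load, int p_nice) {
    return decay(load) * p_estcpu + p_nice;
}

/* p_usrpri = 50 + p_estcpu/4 + 2*p_nice, clipped if over 127 */
int usrpri(double p_estcpu, int p_nice) {
    double pri = 50.0 + p_estcpu / 4.0 + 2.0 * p_nice;
    return pri > 127.0 ? 127 : (int)pri;
}

/* On wakeup: apply the decay once for each second spent asleep */
double wakeup_estcpu(double p_estcpu, double load, int p_slptime) {
    while (p_slptime-- > 0)
        p_estcpu *= decay(load);
    return p_estcpu;
}
```

With load = 1 the decay factor is 2/3, so a process that sleeps three seconds sees its p_estcpu multiplied by (2/3)^3 = 8/27 on wakeup.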
Previous description based on [McKusick] (The Design and Implementation of the 4.4BSD Operating System; see library.stanford.edu for off-campus access)
Pintos notes
Same basic idea for second half of project 1
- But 64 priorities, not 128
- Higher numbers mean higher priority
- Okay to have only one run queue if you prefer
(less efficient, but we won't deduct points for it)
Have to negate priority equation:
priority = 63 - (recent_cpu / 4) - 2·nice
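The negated equation, with clipping to the 0–63 range, can be sketched as follows (helper name invented here; real Pintos keeps recent_cpu in 17.14 fixed-point rather than a double):

```c
#include <assert.h>

#define PRI_MIN 0
#define PRI_MAX 63

/* priority = 63 - recent_cpu/4 - 2*nice, clipped into [0, 63] */
int mlfqs_priority(double recent_cpu, int nice) {
    double pri = PRI_MAX - recent_cpu / 4.0 - nice * 2.0;
    if (pri < PRI_MIN) pri = PRI_MIN;
    if (pri > PRI_MAX) pri = PRI_MAX;
    return (int)pri;
}
```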
Limitations of BSD scheduler
Hard to have isolation / prevent interference
- Priorities are absolute
Can't donate priority (e.g., to server on RPC)
No flexible control
- E.g., in Monte Carlo simulations, error is 1/√N after N trials
- Want to get quick estimate from new computation
- Leave a bunch running for a while to get more accurate results
Multimedia applications
- Often fall back to degraded quality levels depending on
resources
- Want to control quality of different streams
Real-time scheduling
Two categories:
- Soft real time: miss deadline and CD will sound funny
- Hard real time: miss deadline and plane will crash
System must handle periodic and aperiodic events
- E.g., procs A, B, C must be scheduled every 100, 200, 500 msec,
require 50, 30, 100 msec respectively
- Schedulable if Σ (CPU / period) ≤ 1 (not counting switch time)
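The utilization check can be written directly; the test below plugs in the A, B, C numbers from the example above (function name is ours):

```c
#include <assert.h>

/* Schedulable if the sum of CPU/period over all tasks is at most 1
 * (context-switch time ignored, as in the slide). */
int schedulable(const double cpu[], const double period[], int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += cpu[i] / period[i];
    return u <= 1.0;
}
```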
Variety of scheduling strategies
- E.g., first deadline first (works if schedulable)
Multiprocessor scheduling issues
Must decide on more than which processes to run
- Must decide on which CPU to run which process
Moving between CPUs has costs
- More cache misses, depending on arch more TLB misses too
Affinity scheduling: try to keep threads on same CPU
- But also prevent load imbalances
- Do cost-benefit analysis when deciding to migrate
Multiprocessor scheduling (cont)
Want related processes scheduled together
- Good if threads access same resources (e.g., cached files)
- Even more important if threads communicate often,
otherwise must context switch to communicate
Gang scheduling: schedule all CPUs synchronously
- With synchronized quanta, easier to schedule related
processes/threads together
Thread scheduling
With thread library, have two scheduling decisions:
- Local scheduling: thread library decides which user thread to put onto an available kernel thread
- Global scheduling: kernel decides which kernel thread to run next
Can expose to the user
- E.g., pthread_attr_setscope allows two choices
- PTHREAD_SCOPE_SYSTEM: thread scheduled like a process (effectively one kernel thread bound to user thread; will return ENOTSUP in user-level pthreads implementation)
- PTHREAD_SCOPE_PROCESS: thread scheduled within the current process (may have multiple user threads multiplexed onto kernel threads)
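A minimal example of requesting system contention scope with pthread_attr_setscope (the helper name is invented for illustration; as noted above, the call can fail with ENOTSUP on implementations that support only one scope, in which case the default scope is used):

```c
#include <pthread.h>

static void *work(void *arg) {
    *(int *)arg = 1;            /* record that the thread ran */
    return NULL;
}

/* Create one thread with system contention scope (scheduled by the
 * kernel against all threads on the machine) and join it.
 * Returns 1 if the thread ran. */
int run_system_scope_thread(void) {
    pthread_attr_t attr;
    pthread_t t;
    int ran = 0;

    pthread_attr_init(&attr);
    /* May fail with ENOTSUP; we just fall back to the default scope. */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (pthread_create(&t, &attr, work, &ran) != 0)
        return 0;
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return ran;
}
```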
Thread dependencies
Say H at high priority, L at low priority
- L acquires lock l.
- Scene 1: H tries to acquire l, fails, spins. L never gets to run.
- Scene 2: H tries to acquire l, fails, blocks. M enters system at
medium priority. L never gets to run.
- Both scenes are examples of priority inversion
Scheduling = deciding who should make progress
- Obvious: a thread's importance should increase with the importance of those that depend on it.
- Naïve priority schemes violate this
Priority donation
Say higher number = higher priority
Example 1: L (prio 2), M (prio 4), H (prio 8)
- L holds lock l
- M waits on l, L's priority raised to L' = max(M, L) = 4
- Then H waits on l, L's priority raised to max(H, L') = 8
Example 2: Same threads
- L holds lock l, M holds lock l2
- M waits on l, L's priority now L' = 4 (as before)
- Then H waits on l2. M's priority goes to M' = max(H, M) = 8, and L's priority raised to max(M', L') = 8
Example 3: L (prio 2), M1, ..., M1000 (all prio 4)
- L has l, and M1, ..., M1000 all block on l. L's priority is max(L, M1, ..., M1000) = 4.
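The rule these three examples apply can be sketched as a max over waiters (a hypothetical helper, not Pintos code); for nested donation, as in Example 2, apply it once per level of the chain, starting from the deepest waiter:

```c
#include <assert.h>

/* A lock holder's effective priority is the max of its base priority
 * and the effective priorities of all threads waiting on locks it holds. */
int donated_priority(int base, const int waiters[], int n) {
    int pri = base;
    for (int i = 0; i < n; i++)
        if (waiters[i] > pri)
            pri = waiters[i];
    return pri;
}
```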
Fair Queuing (FQ)
Digression: packet scheduling problem
- Which network packet should router send next over a link?
- Problem inspired some algorithms we will see next week
- Plus good to reinforce concepts in a different domain. . .
For ideal fairness, would send one bit from each flow
- In weighted fair queuing (WFQ), more bits from some flows
[Figure: four flows (Flow 1–Flow 4) feeding one output link with round-robin service]
Packet scheduling
Differences from CPU scheduling
- No preemption or yielding: must send whole packets
Thus, can't send one bit at a time
- But know how many bits are in each packet
Can see the future and know how long packet needs link
What scheduling algorithm does this suggest? SJF
Recall limitations of SJF:
- Can't see the future: solved by packet length
- Optimizes response time, not turnaround time: but these are the same when sending whole packets
- Not fair
Kind of want fair SJF for networking
FQ Algorithm
Suppose clock ticks each time a bit is transmitted
Let P_i denote the length of packet i
Let S_i denote the time when we start to transmit packet i
Let F_i denote the time when we finish transmitting packet i
F_i = S_i + P_i
When does router start transmitting packet i?
- If it arrived before router finished packet i-1 from this flow, then immediately after last bit of i-1 (F_{i-1})
- If no current packets for this flow, then start transmitting when it arrives (call this A_i)
Thus: F_i = max(F_{i-1}, A_i) + P_i
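The per-flow bookkeeping implied by F_i = max(F_{i-1}, A_i) + P_i can be sketched as follows, with time in bit-ticks (names are invented for this sketch):

```c
#include <assert.h>

/* Each flow remembers the finish time of its previous packet (F_{i-1},
 * initially 0 for an empty flow). */
typedef struct { double last_finish; } flow;

/* Record a packet's finish time: F_i = max(F_{i-1}, A_i) + P_i */
double fq_finish(flow *f, double arrival, double length) {
    double start = arrival > f->last_finish ? arrival : f->last_finish;
    f->last_finish = start + length;   /* F_i = S_i + P_i */
    return f->last_finish;
}
```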
FQ Algorithm (cont)
For multiple flows
- Calculate F_i for each packet that arrives on each flow
- Treat all F_i's as timestamps
- Next packet to transmit is one with lowest timestamp
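The lowest-timestamp rule, sketched (hypothetical helper name):

```c
#include <assert.h>

/* Among flows with a packet queued, pick the one whose head packet
 * has the lowest finish timestamp F_i.
 * Returns that flow's index, or -1 if nothing is queued. */
int fq_next(const double f_stamp[], const int queued[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (queued[i] && (best < 0 || f_stamp[i] < f_stamp[best]))
            best = i;
    return best;
}
```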
Example:
[Figure: two output-link scenarios, (a) and (b), showing Flow 1 (arriving) and Flow 2 (transmitting) with packets labeled by finish timestamps F = 8, F = 10 and F = 5, F = 10, F = 2]