
Interrupt: interruption caused by I/O access.
Trap: interruption generated by an error or at the user's request.
Direct Memory Access (DMA): Used for high-speed I/O devices able to transmit information at
close to memory speeds. The device controller transfers blocks of data from buffer storage directly to
main memory without CPU intervention. Implies the capability of arbitrating the system bus.
Protection: User Mode: common instructions only. Kernel Mode: anything; entered on traps and
interrupts. Memory Protection: each program can only access its own memory. Segmentation:
based on having a base pointer and a limit for each running process. Any access made outside of
its space generates a trap and normally leads to the process being killed. Virtual Memory
(mostly used today): based on having a table that translates virtual addresses into real ones.
Normally, there is one such table per process. Each process sees the whole address space. An
access made to a non-existing page generates a trap and normally leads to the process being
killed. :-) programs can be larger than physical memory. :-( disk-access overhead.
Multiprogramming: maintaining several jobs in memory, active at the same time, so that when the
current program can no longer run (for example because it is blocked waiting for I/O), another is
available to run. Optimizes resource utilization: CPU and I/O. Time-sharing uses
multiprogramming to handle multiple interactive jobs.
Multitasking: extension of multiprogramming. Execution of the active programs is time-sliced;
each runs for a short period of time. When a running program is forcibly replaced by
another, it has been preempted. Implementation: a non-maskable interrupt must be connected to a
hardware timer. Every few milliseconds, the interrupt causes a task (or process) switch.
OS Architecture and Structures: Microkernel: moves everything possible from the kernel into user space.
Communication takes place between user modules using message passing. :-) Easier to extend a
microkernel. Easier to port the OS to new architectures. More reliable (less code is running in kernel
mode). More secure. :-( SLOW: performance overhead of user-space to kernel-space
communication. Monolithic: consists of everything below the system-call interface and above the
physical hardware. Provides the file system, CPU scheduling, memory management, and other OS
functions; a large number of functions for one level. :-( Less reliable (more code is running in kernel
mode). Less secure. Not easy to port to new architectures (depends). :-) FAST!
___________________________________________________________________________
PROCESSES
stack - automatic variables (automatically created and destroyed in functions)
heap - dynamically allocated memory (e.g. malloc)
data - all variables, global and static:
.bss section: non-initialized vars (or initialized to 0)
.data section: vars initialized to non-zero
.text - code of the program
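A minimal C sketch of where each kind of variable lands (section names follow a typical ELF layout; illustrative only):

#include <stdlib.h>

int zeroed;              /* .bss  - uninitialized (or zero-initialized) global */
int answer = 42;         /* .data - global initialized to non-zero             */

void f(void) {           /* the function body itself lives in .text            */
    int local = 1;       /* stack - automatic, destroyed on return             */
    char *p = malloc(8); /* heap  - dynamically allocated                      */
    free(p);
    (void)local;
}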
Process Control Block (PCB): one of the most important data structures of the OS; it represents a
running process. PCBs flow between queues when an event occurs, according to their state.
Queues: Job q - set of all processes in the system. Ready q - processes residing in main memory,
ready and waiting to execute. Device/blocked q - processes waiting for an I/O device; one queue
per device.
Types of Processes: I/O Bound - spend more time doing I/O than computation (live mostly in the
blocked queue). CPU Bound - spend more time doing computation than I/O (live mostly in the
ready queue). For good utilization of computer resources it's important to have a good mix: the
long-term scheduler is responsible for this.
Accounting information. This information includes the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so on.
Copy-on-Write (COW): fork() worked by creating a copy of the parent's address space for the child; with
COW it is not necessary to copy the whole address space, allowing the parent and child processes initially to share
the same pages. These shared pages are marked as COW pages, meaning that if either process writes to a
shared page, a copy of the shared page is created. :-) if a caller never makes any modifications, no private
copy need ever be created.
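A minimal C sketch of fork() with COW semantics (the page copying itself is done transparently by the kernel; illustrative only):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();      /* child starts with a COW view of the parent's pages */
    if (pid == 0) {
        printf("child\n");   /* a write to shared data would trigger a page copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);   /* reap the child so it does not stay a zombie */
    return 0;
}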
Shared Address Space with the Kernel: For making system calls and traps fast, it is possible to jump directly
to the kernel handler routine without remapping any memory. The address space of each process is divided
into two parts: - One that is specific to that process - One that corresponds to the kernel and is shared by all
processes. How does it work? The user process does not have direct access to the kernel memory; it cannot
read nor write that part of the address space. Whenever a trap occurs, it enters in kernel mode and thus
has access to the already mapped memory.
Process Termination in UNIX: a process is only eliminated by the OS when its parent calls wait()/waitpid() on
it. This allows the parent to check things like the exit code of its children. Zombie Process: has died but its
parent has not yet acknowledged its death (by calling wait()); it is still using resources. Orphan Process:
its original parent has died; in that case, its parent becomes init (process 1).
________________________________________________________________________________________
THREADS
Advantages of multiple processes: - when something crashes, only that
process is affected. - Less memory bloat than with a LOT of
threads instantiated and killed, because of
fragmentation, which leads to the OS allocating
more memory -> slow.
Process switching is an extremely heavy
operation. It is necessary to: - remap address
spaces - establish new security contexts -
manage all info associated with processes.
Advantages of threads: lighter than
processes: context switching; fast to
create and terminate; fast to synchronize; easier
to share resources, since global variables are
shared among all threads.
Multithreading Models: User threads
(N-to-1 model): Threads are
implemented in a library at user
space. The kernel is completely
unaware of the existence of threads.
Kernel threads (1-to-1 model): threads are implemented in kernel space. The kernel does all the
scheduling of threads. The most used model. The Many-to-Many Model: a number of kernel
threads can map to a different number of user threads.
Thread Pool - since most OSes use a 1-to-1 model, it's important not to waste much time creating and destroying threads.
In Linux 2.4 (with LinuxThreads), threads were really processes with shared address spaces, created with
clone(). Windows Threads - implements the one-to-one mapping. Each thread contains: thread id, register
set, separate user and kernel stacks, and a private data storage area. The last four are known as the context of the
thread.
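A minimal pthreads sketch of the 1-to-1 model (each pthread_create produces one kernel-scheduled thread on Linux/NPTL); illustrative only:

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)1L);
    pthread_join(t, NULL);   /* wait for the thread to terminate */
    return 0;
}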
Spinlock: 1. A thread waits for a lock by spinning in a loop. 2. Since the thread stays
active, using such a lock is a form of busy waiting. 3. Spinlocks are generally held until
released; in some implementations they are automatically released if the thread
blocks or goes to sleep. :-) 1. Efficient if threads are only expected to be
blocked for short periods. 2. Avoid the overhead of OS re-scheduling or context
switching. 3. Spinlocks are used inside OS kernels. :-( 1. Wasteful for long durations. 2. The longer
a lock is held by a thread, the greater the risk of it being interrupted by the OS scheduler while holding it. 3.
Other threads may be left "spinning" (repeatedly trying to acquire the lock). 4.
A semi-deadlock can occur.
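A minimal test-and-set spinlock sketch using C11 atomics (illustrative; real kernels add backoff, disable preemption, etc.):

#include <stdatomic.h>

typedef struct { atomic_flag f; } spinlock_t;
#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

void spin_lock(spinlock_t *l) {
    while (atomic_flag_test_and_set(&l->f))
        ;                        /* busy wait: burn CPU until the flag clears */
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear(&l->f);    /* release: next spinner's test-and-set succeeds */
}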
________________________________________________________________________________________
SYNCHRONIZATION: Race Condition --> Critical Section: must run atomically, in mutual exclusion, to be
thread safe and reentrant. Reentrant refers to the quality of a subroutine of being safely executable
concurrently: all threads can safely call the same function.
Producer/Consumer (n slots):
Producer: wait(EMPTY); wait(PE); produce; write++; post(PE); post(FULL)
Consumer: wait(FULL); wait(PL); consume; read++; post(PL); post(EMPTY)
Semaphores: EMPTY = N, FULL = 0 (PE and PL act as mutexes on the write/read pointers, presumably initialized to 1)
Pointers: read = 0, write = 0
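A runnable C sketch of the same scheme with POSIX semaphores; for brevity a single mutex semaphore guards both pointers instead of the separate PE/PL above (names and loop bounds are illustrative):

#include <pthread.h>
#include <semaphore.h>

#define N 8
static int buf[N], rd = 0, wr = 0;
static sem_t empty_s, full_s, mutex_s;   /* EMPTY = N, FULL = 0, MUTEX = 1 */

static void *producer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&empty_s);              /* wait for a free slot */
        sem_wait(&mutex_s);
        buf[wr] = i; wr = (wr + 1) % N;  /* produce */
        sem_post(&mutex_s);
        sem_post(&full_s);               /* signal a filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 100; i++) {
        sem_wait(&full_s);               /* wait for a filled slot */
        sem_wait(&mutex_s);
        int v = buf[rd]; rd = (rd + 1) % N; (void)v;  /* consume */
        sem_post(&mutex_s);
        sem_post(&empty_s);              /* signal a free slot */
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_s, 0, N); sem_init(&full_s, 0, 0); sem_init(&mutex_s, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL); pthread_join(c, NULL);
    return 0;
}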
Sleeping Barber:
- N waiting chairs and a barber chair
- If there are no customers, the barber goes to sleep
- When a customer enters:
  If all chairs are occupied, he leaves
  If the barber is busy but there are chairs, the customer sits and waits
  If the barber is sleeping, the customer wakes him up
Customer: wait(MUTEX); if (nClients == N) { post(MUTEX); exit() }; ++nClients; post(BARBER); post(MUTEX); wait(CHAIRS)
Barber: wait(BARBER); wait(MUTEX); post(CHAIRS); --nClients; post(MUTEX); cut hair
Semaphores: BARBER = 0, CHAIRS = 0, MUTEX = 1. Variables: nClients = 0
Deadlock - when 2 or more processes are unable to make progress, each blocked waiting for the other.
Avoiding deadlock means breaking a necessary condition: mutual exclusion; hold-and-wait (request all resources at once); no-preemption
(if a process holding resources has a further request denied, it must release the resources it holds).
Livelock - when two or more processes are alive and working but are unable to make progress.
Starvation - when a process is unable to access the resources it needs to make progress. Solution:
gradually increase the priority of waiting processes (aging).
Monitors: A monitor is an abstraction where only one thread or process can be executing at a time.
- Normally, it has associated data - When inside a monitor, a thread executes in mutual exclusion
Dining Philosophers : 1. For every pair of philosophers contending for a resource, create a fork and give it to
the philosopher with the lower ID. Each fork can either be dirty or clean. Initially, all forks are dirty. 2. When
a philosopher wants to use a set of resources (i.e. eat), he must obtain the forks from his contending
neighbors. For all such forks he does not have, he sends a request message. 3. When a philosopher with a
fork receives a request message, he keeps the fork if it is clean, but gives it up when it is dirty. If he sends
the fork over, he cleans the fork before doing so. 4. After a philosopher is done eating, all his forks become
dirty. If another philosopher had previously requested one of the forks, he cleans the fork and sends it.
Condition Variables: the condition must always be tested with a while loop, never an if! Being woken up
on a condition variable only means that the condition must be re-checked, not that it has become true.
The condition must always be checked and signaled while holding the associated mutex.
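A minimal pthreads sketch of the while-loop rule (variable names are illustrative):

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int ready = 0;

void *waiter(void *arg) {
    pthread_mutex_lock(&m);
    while (!ready)                   /* while, never if: the wakeup may be spurious */
        pthread_cond_wait(&cv, &m);  /* atomically releases m and sleeps */
    pthread_mutex_unlock(&m);
    return NULL;
}

void signaler(void) {
    pthread_mutex_lock(&m);
    ready = 1;                       /* change the condition... */
    pthread_cond_signal(&cv);        /* ...and signal, both under the mutex */
    pthread_mutex_unlock(&m);
}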
Critical-Section (segment of code of a process) Problem - when one process is executing in its critical
section, no other process is allowed to execute in its critical section. The section of code implementing
this request is the entry section. The critical section may be followed by an exit section. The remaining code
is the remainder section. Solutions for the critical-section problem must satisfy: mutual exclusion; progress (if no
process is in its CS, the choice of which process enters next cannot be postponed indefinitely); bounded
waiting (a limit on the number of times other processes are allowed to enter their CS after a process has requested entry).
________________________________________________________________________________________
Preemptive kernel allows a process to be preempted while it is running in kernel mode. A preemptive
kernel is more suitable for real-time programming, as it will allow a real-time process to preempt a process
currently running in the kernel.
Nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode
process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
MEMORY MANAGEMENT
Thrashing occurs when a process spends most of its time in swapping, doing no useful work;
it generates a large number of page faults. A process is thrashing if it is spending more time
paging than executing. Solution: decrease the degree of multiprogramming. We can limit the
effects of thrashing by using a local replacement algorithm. Prevention: to prevent thrashing,
we must provide a process with as many frames as it needs. But how do we know how many frames
it needs? There are several techniques. The working-set strategy starts by looking at how
many frames a process is actually using.
Working set of a process: the working set W(t, τ) of a process at time t is the collection
of information referenced by the process during the process-time interval (t - τ, t). Typically the units of
information in question are considered to be memory pages. This is suggested to be an approximation of
the set of pages that the process will access in the future (say, during the next τ time units), and more
specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most
progress to be made in the execution of that process.
A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve
virtual address translation speed.
Inverted page tables: another approach is to keep a table where each entry represents a page
frame rather than a virtual address. Each entry must then track which virtual page is associated
with that page frame. Although inverted page tables save significant amounts of space (at least
when the virtual address space is much larger than physical memory), they have the disadvantage
that the virtual-to-physical mapping (translation) becomes more complex and potentially slower.
One way to speed up the translation is a software-searched TLB; the search can be done by
chaining together the virtual pages that share the same hash value.

How fragmented memory gets depends on the paging granularity of the OS.
Internal fragmentation has the bigger disadvantage, since the memory left free inside a partition cannot be used.
External fragmentation (free chunks of memory that can still be used): each process is allocated
exactly the memory it needs, and holes normally appear in memory.
Allocation algorithms: FIRST-FIT, BEST-FIT and WORST-FIT (best: first-fit or best-fit).
Memory-Mapped files: Advantages: better performance, since read()/write() system calls are avoided.
Disadvantages: the file contents live in memory, so if the server goes down we lose the
file. A possible solution is to flush the file data to disk from time to time to avoid losses.
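A minimal C sketch of file access through mmap() (the file name is hypothetical; error checking omitted for brevity):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    struct stat st;
    fstat(fd, &st);
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    p[0] = 'x';                      /* plain memory access replaces write() */
    msync(p, st.st_size, MS_SYNC);   /* periodic flush limits loss on a crash */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}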
Caching consists of copying information to a faster storage system; the technique brings gains
because it exploits the higher speed of that memory.
delayed-write (write-back caching): updates to the master copy are delayed. Modifications are written
to the cache and then are written through to the server at a later time. :-) 1. because writes are made to the
cache, write accesses complete much more quickly. 2. data may be overwritten before they are written
back, in which case only the last update needs to be written at all. :-( unwritten data are lost whenever a
user machine crashes.
Virtual time: the execution time of programs bears no relation to chronological time outside the
machine. Goal: make the best possible use of the CPU. Real time: deals explicitly with the notion of
time and allows temporal deadlines to be associated with programs. Goal: guarantee that the deadlines
are respected. Hard real-time: the deadlines must be met; missing one implies a system failure.
Soft real-time: although meeting the deadlines is important, occasionally missing one does not imply a
catastrophic failure of the system.
RAID:
Level 0 - Striped Disk Array without Fault Tolerance: Provides data striping (spreading out blocks of each
file across multiple disk drives) but no redundancy. This improves performance but does not deliver fault
tolerance. If one drive fails then all data in the array is lost.
:-) Pros: Better performance as data is spread across drives; no storage overhead, as drives are utilized 100%
:-( Cons: Possibility of losing the entire data on failure of a single disk
Level 1 - Mirroring and Duplexing: Provides disk mirroring. Level 1 provides twice the read transaction
rate of single disks and the same write transaction rate as single disks.
:-) Pros Guard against disk failure as data is replicated across disk drives
:-( Cons Replication creates storage overhead as the same data is copied across drives
Level 2 - Error-Correcting Coding: Not a typical implementation and rarely used, Level 2 stripes data at
the bit level rather than the block level.
Level 3 - Bit-Interleaved Parity: Provides byte-level striping with a dedicated parity disk. Level 3, which
cannot service simultaneous multiple requests, also is rarely used.
Level 4 - Dedicated Parity Drive: A commonly used implementation of RAID, Level 4 provides block-level
striping (like Level 0) with a parity disk. If a data disk fails, the parity data is used to create a replacement
disk. A disadvantage to Level 4 is that the parity disk can create write bottlenecks.
:-) Pros Reduced storage overhead (actually we need only a single disk here to store parity). E.g. if you
have 3 disks, parity can be stored on 3rd. So your overhead is only 33% in terms of storage.
:-( Cons Still suffers from a performance perspective
Level 5 - Block-Interleaved Distributed Parity: Provides block-level data striping and also stripes the error-
correction (parity) information across the disks. This results in excellent performance and good fault tolerance. Level 5 is one of the
most popular implementations of RAID.
:-) Pros Good performance, good failure protection :-( Cons Not as good when your requirement is only
performance or only failure protection (parity doesn't come for free).
Level 6 - Independent Data Disks with Double Parity: Provides block-level striping with parity data
distributed across all disks.
Level 0+1 - A Mirror of Stripes: Not one of the original RAID levels, two RAID 0 stripes are created, and a
RAID 1 mirror is created over them. Used for both replicating and sharing data among disks.
Level 10 - A Stripe of Mirrors: Not one of the original RAID levels, multiple RAID 1 mirrors are created,
and a RAID 0 stripe is created over these.
:-) Pros: Best in terms of performance & guards against potential failures.
:-( Cons: Costly in terms of storage overhead
Level 7: A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.
RAID S: (also called Parity RAID) EMC Corporation's proprietary striped parity RAID system used in its
Symmetrix storage systems.

Memory Management: Fixed partitions: - equal-size chunks :-( 1. increases internal fragmentation. - Different-size chunks
(less internal frag), with a waiting queue per partition :-( 1. a lot of space used by the queues; - or one queue
for all partitions :-( 1. if a process blocks, all block. Dynamic partitions: - external frag;
- no internal frag; - memory compaction (technique used to reduce external frag); -
Algorithms: First-fit: fast, external frag. Best-fit: traverses the whole list; produces less external frag. Worst-
fit: traverses the whole list; produces more external frag. Next-fit: like first-fit, but starts from the previous position.
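A minimal sketch of first-fit over a singly linked free-block list (types and names are illustrative):

#include <stddef.h>

struct free_block { size_t size; struct free_block *next; };

struct free_block *first_fit(struct free_block *head, size_t want) {
    for (struct free_block *b = head; b != NULL; b = b->next)
        if (b->size >= want)
            return b;            /* first hole big enough wins */
    return NULL;                 /* no hole fits: allocation fails */
}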
PROCESS SCHEDULING:
First-Come First-Served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short
processes to wait for very long processes. Nonpreemptive. - Poor turnaround time - Short processes wait a
long time - Favors CPU-bound processes - I/O-bound processes wait behind CPU-bound ones - Very low
overhead.
Shortest-Job-First (SJF or SPN or SJN) scheduling is provably optimal, providing the shortest average waiting
time. Implementing SJF scheduling is difficult. The SJF algorithm is a special case of the general priority
scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF
scheduling may suffer from starvation. Aging is a technique to prevent starvation. nonpreemptive.
Shortest-Remaining-Time-First (SRTF) if a new process arrives with CPU burst length less than remaining
time of current executing process, preempt. - Long process may starve; - Good turnaround time for short
jobs.
Round-Robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling
allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After
q time units, if the process has not relinquished the CPU, it is preempted, and the process is put at the tail of
the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR
scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of
context-switch time becomes excessive. RR is preemptive. - Time slicing is used with high overhead
Multilevel Feedback queue algorithms allow different algorithms to be used for different classes of
processes. The most common model includes a foreground interactive queue that uses RR scheduling and a
background batch queue that uses FCFS scheduling. Multilevel feedback queues allow processes to move
from one queue to another. - Queues have decreasing priority; - Short I/O jobs keep higher priority; - good
for interactive processes.
Highest Response Ratio Next (HRRN) chooses as the next process the one with the highest ratio R = (time spent
waiting + expected service time) / expected service time. This algorithm accounts
for the age of the process. Shorter jobs are favored (smaller denominator), but aging without service
(waiting time) increases R; e.g. w=8, s=2 gives R=5, which beats w=2, s=4 with R=1.5. - Queue ordered by ratio; - Prevents starvation; - Service time still needs to be
estimated.
Throughput - the number of processes that are completed per time unit. Finish Time (out) - instant at
which execution ends. Turnaround Time (out - in) - amount of time to complete one process. Waiting Time
(turnaround - exec) - amount of time a process has been waiting in the ready queue. Response Time -
amount of time from when a request was submitted until the first response is produced. In = arrival
time; exec = execution time.

Traditional UNIX Scheduling: Target: time-sharing interactive environment. Good response for interactive
users while avoiding starvation of low-priority background processes. Multilevel feedback using Round-Robin
within each priority queue. If a process does not block or complete within a second, it is preempted
(1-second preemption). Priorities are recomputed once per second. Priority is based on process type and
execution history.
Base priority divides all processes into fixed
bands of priority levels. The CPU and nice
components are restricted to prevent
processes from leaving these bands. This
makes it possible to optimize access to block devices
(disk) and lets the OS respond quickly to system
calls. These bands are: swapper, block I/O
device control, file manipulation, character I/O device control, and user processes.
Linux 2.6 Scheduling - Preemptive Priority-Based Scheduler: two separate priority ranges: - real-time (0-99)
and nice (other) (100-140); - two algorithms: time-sharing and real-time; - Scheduling (time-sharing algorithm -
> fair preemptive): - an Active Array and an Expired Array; scheduling is done from the active array. - The
scheduler chooses a task from the active array for execution. - It runs until its time slice is gone, or it is
blocked. - Whenever a task has used up its time slice, it is moved to the expired array; recalculation of its
priority is done at that time. - Execution goes on until the active array is empty. - When the active array is
empty, it is switched with the expired array.

Windows XP Scheduling - Priority-Based Preemptive Scheduling: - the highest-priority queue is always
served first; - 32 priority levels, two classes of threads: - real-time, having a priority from 16 to 31 -
variable, having a priority from 1 to 15. The priorities of all threads vary except if they are in the
REAL_TIME_PRIORITY_CLASS. - Within each class, there are relative priorities - base priority = normal (by
default) - when the quantum expires, the priority is lowered, and after a wait operation the priority is
increased.
FILE SYSTEM IMPLEMENTATION
Layered File System: (figure)
File control block (FCB): a storage structure consisting of information about a file. (figure)
Virtual File Systems (VFS): - VFS provides an object-oriented way of implementing file systems; - VFS allows
the same system-call interface (the API) to be used for different types of file systems. - The API is for the
whole VFS interface, not for a specific type of file system.
Directories contain information about files: they provide a mapping between file names and the files
themselves. Often they are hierarchical and tree-structured (but not necessarily).
File Allocation Methods: Contiguous Allocation - each file occupies a set of contiguous blocks on the disk. -
Simple to implement: only the starting location (block #) and length (number of blocks) are required.
- Random access - Wasteful of space - Dynamic storage-allocation problem - Files cannot grow -
External fragmentation will occur.
Linked/Chained Allocation - each file is a linked list of disk blocks; blocks may be scattered anywhere on
the disk. The pointers to the next blocks are scattered all over the disk! - Simple: need only the starting
address - Free-space management system: no waste of space - No external fragmentation - No
accommodation of the principle of locality - No random access - An important variation: FAT / FAT32 -
There is a file allocation table at the beginning of the disk. - Each FAT entry points to the next block that a
file is occupying. Each entry of the disk root directory contains the name and attributes of a certain file,
as well as the number of its first block. - Directories are implemented as special files.
Indexed Allocation
To be able to have random access and better support for locality, one special block (the index block; the inode in
Unix) holds all the pointers.
Example: disk block = index block = 4K = 2^12 bytes; index entry = 32-bit address = 4 bytes = 2^2.
Number of pointers in an index block = index block size / address size = 2^12 / 2^2 = 2^10.
Max file size = number of pointers x disk block size = 2^10 x 2^12 = 2^22 = 4 MB.
In Unix:
- 10 direct pointers (first 40KB of the file)
- 1 single indirect pointer (next 4MB of the file)
- 1 double indirect pointer (next 4GB of the file)
- 1 triple indirect pointer (next 4TB of the file)
i-Nodes:
Unix favors small files, because they need fewer disk accesses: small files fit entirely in the direct
pointers, while larger files go through pointers to pointers.

Exercise: Unix inodes
a) 12 direct block pointers, b) 1 single-indirect pointer, c) 1 double-indirect pointer, d) 1 triple-indirect pointer.
Block: 8K = 2^13. Pointer: 32 bits = 4 bytes = 2^2.
(1) What is the maximum file size supported by this system?
Number of pointers per index block: 2^13 / 2^2 = 2^11.
a) 12 pointers x 2^13 (block size) = 12 x 2^13 bytes
b) 1 x 2^11 x 2^13 = 2^24 bytes
c) 1 x 2^11 x 2^11 x 2^13 = 2^35 bytes
d) 1 x 2^11 x 2^11 x 2^11 x 2^13 = 2^46 bytes
Max file size = the sum of all four (dominated by the 2^46 triple-indirect term).
(2) Assuming only the file inode is in memory, how many disk accesses are needed to reach offset 19123345?
The last offset reachable through b) is (12 x 2^13) + (2^11 x 2^13) = 16875520 < 19123345, so the offset
must be in c) (the double-indirect pointer).
In c), to reach the data we go from the (already in-memory) inode to the first-level index block, then to the
second-level index block, and then to the data block itself: 3 disk accesses.
(3) Assuming the remaining inode fields occupy 32 bytes, what is the total space used by the inode with the
file addresses if we hypothetically have a file of maximum size?
Sum of:
- 32 bytes for the fields
- (12 + 1 + 1 + 1) pointers x 4 bytes per address
- space occupied by the index lists of the single-indirect pointer: 1 block of them -> 8K
- space occupied by the index lists of the double-indirect pointer: 8K + 8K x 2^11
- space occupied by the index lists of the triple-indirect pointer: 8K + 8K x 2^11 + 8K x (2^11 x 2^11)

DISK SCHEDULING: FIFO: - requests are served in arrival order - the fairest to all
processes - approaches random scheduling in performance if there are many processes.
SSTF (Shortest Seek Time First): - selects the request that needs the least movement of the disk arm
from the current position - always chooses the minimum seek time - widely used; small queues.
SCAN: the arm moves in one direction only, satisfying all requests until it reaches the last track
in that direction - the direction is reversed when it reaches an end - known as the elevator algorithm
- better distribution of service.
C-SCAN: - gives a more uniform waiting time than SCAN - restricts seeking to one direction only -
when the last track has been visited in one direction, the arm is returned to the opposite side of the disk and the
scan starts again - treats the cylinders as a circular list - low service variance.
C-LOOK: - a version of C-SCAN; - the arm only goes as far as the last request in each direction, then reverses
direction immediately, without first going all the way to the end of the disk.
With SSTF, SCAN and C-SCAN it is possible for processes with faster access rates to monopolize the disk.
N-step-SCAN: - segments the disk request queue into subqueues of size N - subqueues are
processed one at a time, using SCAN - new requests are added to another queue while a queue is being
processed - service guarantee.
FSCAN: - two subqueues are used - when the SCAN begins, all requests are in one of the queues,
while the other, empty, receives new requests during the SCAN - service of new requests is
deferred until all old requests have been processed - sensitive to load.
PRI (Priority): - priority per process - control is outside disk-queue management. RSS - Random Scheduling.

FIFO page replacement algorithm
FIFO (First-In, First-Out) is a low-cost, easy-to-implement page replacement algorithm that
replaces the page loaded into memory the longest time ago (the first page in is the
first out). This choice does not consider whether the page is being heavily used or not, which is
not very adequate, as it can hurt system performance. For this reason, FIFO presents a
deficiency called Belady's anomaly: the number of page faults can increase when the amount
of memory also increases. For these reasons, pure FIFO is very rarely used. Still, its main
advantage is ease of implementation: a list of pages ordered by age. On a page fault, the first
page of the list is replaced and the new one is appended to the end of the list.
LRU page replacement algorithm
LRU (Least Recently Used) is a page replacement algorithm with good performance that
replaces the least recently used page. This policy is based on the following
observation: if a page is being intensely referenced by the current instructions, it is very likely
to be referenced again by the following ones; conversely, pages that have not been accessed by
the last instructions are also likely not to be accessed by the next ones. Despite its good
performance, LRU also has some deficiencies [CAS03] when the access pattern is sequential
(over data structures such as arrays, lists, trees), inside loops, etc. Given these deficiencies,
some variations of LRU were proposed, among them LRU-K. This algorithm does not replace the
page referenced longest ago, but looks at when its k-th most recent access occurred. For example,
LRU-2 replaces the page whose second-to-last access happened longest ago, LRU-3 looks at the
third-to-last, and so on.
LRU can also be implemented with a list, keeping the most recently referenced pages at the
head and the least recently referenced ones at the tail; on replacement, the page at the tail of the
list is removed. The big problem with this organization is that the list must be updated on every
reference made to the pages, which makes the maintenance cost high.
Optimal page replacement algorithm
The optimal algorithm, proposed by Belady in 1966, is the one with the best computational
performance and the one that minimizes the number of page faults. However, its implementation is
practically impossible. The idea of the algorithm is to evict from memory the page that will take
the longest to be referenced again. For that, the algorithm would need to know, in advance, all the
memory accesses made by the application, which is impossible in a real case. For these reasons,
the optimal algorithm is only used in simulations, to establish the optimum value and analyze the
efficiency of other proposals.
MRU page replacement algorithm
The MRU (Most Recently Used) algorithm replaces the most recently accessed page. This algorithm
also has some variations, similar to LRU. For example, MRU-n chooses the n-th most recently accessed
page to be replaced. This way it is possible to exploit the principle of temporal locality shown by
the accesses more efficiently.
CLOCK page replacement algorithm
This algorithm keeps all pages in a circular list (shaped like a clock). The order maintained
follows the sequence in which they were loaded into memory. In addition, a use bit is added that
indicates whether the page was referenced again after being loaded. When a page must be replaced,
the algorithm checks whether the oldest page has the bit cleared (which means the page was not
referenced again) so it can be replaced. If it is not cleared, the bit is zeroed and the next oldest page
in the queue is checked. This process continues until an old page with a cleared bit is found
to be replaced.
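A minimal C sketch of the Clock victim search (frame count and array layout are illustrative):

#define NFRAMES 64
static int ref[NFRAMES];   /* use bit per frame */
static int hand = 0;       /* the clock hand */

int clock_victim(void) {
    for (;;) {
        if (ref[hand] == 0) {            /* oldest page without the use bit */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;               /* replace this frame */
        }
        ref[hand] = 0;                   /* second chance: clear bit, advance */
        hand = (hand + 1) % NFRAMES;
    }
}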

NRU page replacement algorithm
The NRU (Not Recently Used) algorithm looks for pages that were not referenced in the last accesses
to be replaced. This information is kept in a bit. The algorithm also checks,
through a modified bit, whether the page had its contents changed during its stay in
memory; this information also helps direct the choice of page. Replacements follow
this priority order: pages neither referenced nor modified, pages not referenced,
pages not modified, and pages both referenced and modified.
LFU page replacement algorithm
LFU (Least Frequently Used) chooses the page that was least accessed among all those loaded
in memory. For that, an access (hit) counter is kept associated with each page so this check
can be made. This information is reset every time the page leaves memory. The
problem with this algorithm is that it penalizes freshly loaded pages: since their access
counters are at zero, their probability of being replaced is higher. What is a possible solution
for this problem? (Establish a grace period.) Only pages beyond that period can be
replaced. This strategy gave rise to the FBR (Frequency-Based Replacement) algorithm.
MFU page replacement algorithm
MFU (Most Frequently Used) replaces the page that has been referenced the most; it is therefore the
opposite of LFU. Control is also done through access counters. The biggest problem with this algorithm
is that it ignores the principle of temporal locality.
WS page replacement algorithm
The WS (Working Set) algorithm follows the same policy as LRU. However, this algorithm does not
only replace pages: it also establishes a maximum time each page can remain active in memory.
Any page whose residence time has expired is removed from memory, so the number of active
pages is variable. WS ensures that the pages belonging to the process's working set remain
active in memory.
The algorithms presented are some of those available in the literature; other implementations or
variations of the ones highlighted here can also be found [CAS03].
In an operating system that runs only interactive processes, do you think the Multilevel Feedback scheduling
algorithm can be useful?
Yes; however, if the quantum is too large, a process may take a long time to be
executed, and that makes the Multilevel Feedback scheduling algorithm disadvantageous.
What is the main advantage and the main disadvantage of using Memory-Mapped Files?
It unifies access to a program's data:
Traditionally, access to data in RAM or on disk is done in totally different ways;
the use of mmap() unifies the mode of access;
the programmer is freed from the task of explicitly moving data between disk and RAM.
It requires fewer system calls;
transfers are faster:
the kernel performs I/O directly between user memory and the disk,
without going through the system buffers.
Which of the virtual-memory page replacement algorithms best exploits Temporal Locality?
The MRU (Most Recently Used) algorithm replaces the most recently accessed page. It also
has some variations, similar to LRU; for example, MRU-n chooses the n-th most recently accessed page
to be replaced. This way it is possible to exploit the principle of temporal locality
shown by the accesses more efficiently.
Which OS Virtual Memory technique is used to exploit Spatial Locality?
Copy-on-write.
What is the main advantage and the main disadvantage of using an Inverted Page Table?
One approach to handling pages is to keep a table where each entry represents a
page frame rather than a virtual address. Each entry must then track which virtual page
is associated with that page frame. Although inverted page tables save
significant amounts of space (at least when the virtual address space is much
larger than physical memory), they have the disadvantage that the virtual-to-physical
mapping (translation) becomes more complex and potentially slower. One way to ease the
translation is to use a software-searched TLB; the search can be done by
chaining together the virtual pages that share the same hash value.
Which fragmentation has more disadvantages: internal or external?
Internal.
Problem of fixed allocation: inefficient use of main memory;
a process, no matter how small, occupies an entire partition.
External:
the execution of processes can create free chunks of memory;
memory may be available, but not contiguous.
PRACTICAL EXERCISES
Consider a computer with 256 MB of real memory that uses a 2 GB virtual memory
scheme based on paging, where the page number occupies the 20 most
significant bits of the virtual address.
1. What is the maximum number of pages?
2. What is the size of each page?
3. If a PTE (Page Table Entry) occupies 8 bytes, how many pages does the Page Table occupy?
What is the maximum size of that Table?

2 GB = 2^31 -> 31 bits of logical addressing; address split: 20 bits page number | 11 bits offset.
1. Maximum number of pages: 2^20 = 1 M pages.
2. Page size = 2^(31-20) = 2^11 = 2 KB.
3. 8 bytes = 2^3; Page Table size = 2^20 x 2^3 = 2^23 = 8 MB;
number of pages = 2^23 / 2^11 = 2^12.
--------------------------------------------------------------
Consider a computer with 32-bit addressing and 4 KB pages.
If a PTE (Page Table Entry) occupies 4 bytes, how many pages does the Page Table occupy?
What is the maximum size of that Table?

4 KB pages = 2^12 -> address split: 20 bits page number | 12 bits offset.
Page Table size = 2^20 x 2^2 = 2^22 = 4 MB.
Number of pages = 2^22 / 2^12 = 2^10 = 1024 pages.
Addressing in Virtual Memory
Consider a computer with a virtual address space of 16 GB that uses paged
memory, where each page is 16 KB. The RAM is 512 MB.
1. How are the bits of the virtual address divided for addressing purposes?
2. Knowing that each page table entry (PTE) occupies 4 bytes, say how many pages the
Page Table will occupy.
3. Now suppose 2-level paging is used, so that the top-level page table must fit
in one memory page. How would the bits of the virtual address be divided? What is the
size of each 2nd-level Table (PTE size = 4 bytes)?
4. If the system used an Inverted Page Table scheme (with a 4-byte PTE), what would
the size of that Table be? How many pages does it occupy?
5. What are the disadvantages of using Virtual Memory? Say in which kinds of applications
an OS would benefit from not using virtual memory.

Virtual addressing = 16 GB = 2^34. Page size = 16 KB = 2^14. RAM = 512 MB = 2^29.
1. Address split: 20 bits page number | 14 bits offset.
2. PTE = 4 bytes = 2^2. Page Table size = 2^20 x 2^2 = 2^22 = 4 MB.
Number of pages = 2^22 / 2^14 = 2^8 = 256 pages.
3. One page holds 2^14 / 2^2 = 2^12 PTEs, so the split becomes:
8 bits 1st level | 12 bits 2nd level | 14 bits offset.
Each 2nd-level table has 2^12 entries x 4 bytes = 2^14 bytes = exactly one page.
4. 512 MB = 2^29 -> number of frames = 2^29 / 2^14 = 2^15 entries.
Total size of the inverted table = 2^15 x 2^2 = 2^17 = 128 KB.
Number of pages = 2^17 / 2^14 = 2^3 = 8 pages.
5. Using a lot of virtual memory consumes resources: a lot of memory, slow accesses,
less performance. An OS benefits from not using virtual memory when it is a real-time OS.
Memory Management Unit
Logical address - generated by the CPU; also referred to as virtual address.
Physical address - address as seen by the memory unit.

Dynamic Loading
A routine is not loaded until it is called.
Better memory-space utilization; an unused routine is never loaded.
Useful when large amounts of code are needed to handle infrequently occurring cases.
No special support from the operating system is required: implemented through program design.
Dynamic Linking
Linking is postponed until execution time.
A small piece of code, the stub, is used to locate the appropriate memory-resident library routine.
The stub replaces itself with the address of the routine, and executes the routine.
The operating system is needed to check whether the routine is in the process's memory.
Dynamic linking is particularly useful for libraries.
Also known as shared libraries.

2. Dynamic Partitioning
Varying size partitions according to process needs
Partitions are of variable length and number.
Process is allocated exactly as much memory as required.
Eventually get holes in the memory. This is called external fragmentation.
Must use compaction to shift processes so they are contiguous and all free memory is in one block.
(Units: 2^10 = 1 KB, 2^20 = 1 MB, 2^30 = 1 GB, 2^40 = 1 TB, 2^50 = 1 PB.)

3. Buddy System
A 2^N allocator, normally used to allocate memory in the kernel.
The entire space available is treated as a single block of size 2^U.
If a request of size s is such that 2^(U-1) < s <= 2^U, the entire block is allocated.
Otherwise the block is split into two equal buddies; the process continues until the smallest block
greater than or equal to s is generated (e.g. for U = 8, a 50-byte request splits 256 -> 128 -> 64
and gets a 64-byte block).
When a block becomes free, it may be coalesced with its corresponding buddy.
It's a very fast system!
A compromise between internal and external fragmentation.
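A minimal sketch of the block-size rounding the buddy system performs (illustrative helper, not kernel code):

#include <stddef.h>

size_t buddy_size(size_t s) {
    size_t b = 1;
    while (b < s)
        b <<= 1;     /* smallest power of two >= s */
    return b;        /* e.g. buddy_size(50) == 64 */
}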
Category            Level  Description                                I/O Request Rate (R/W)    Data Transfer Rate (R/W)  Typical Application
Striping            0      Nonredundant                               Large strips: Excellent   Small strips: Excellent   High performance
Mirroring           1      Mirrored                                   Good/Fair                 Fair/Fair                 System drives; critical files
Parallel Access     2      Redundant via Hamming code                 Poor                      Excellent                 -
Parallel Access     3      Bit-interleaved parity                     Poor                      Excellent                 Large I/O requests; CAD
Independent Access  4      Block-interleaved parity                   Excellent/Fair            Fair/Poor                 -
Independent Access  5      Block-interleaved distributed parity       Excellent/Fair            Fair/Poor                 High request rate; data lookup
Independent Access  6      Block-interleaved dual distributed parity  Excellent/Fair            Fair/Poor                 Extremely high availability

Consider the following scenario where 5 processes (P1...P5) use 4 system resources
(A,B,C,D). The allocated-resources matrix (Allocation), the maximum-claims matrix
(Max) and the available-resources vector (Available) are shown in the following figure (not reproduced here).

4.1- Apply the Banker's algorithm and say whether or not the system is in a safe state.
Need matrix (Max - Allocation):
P1: 1 0 0 1
P2: 2 0 0 0
P3: 0 1 1 0
P4: 0 1 0 1
P5: 1 1 0 0
Initial Available: [1 1 0 0]
Available after P5 executes: [1 2 0 1]
Available after P1 executes: [2 2 1 2]
From this point on any process can execute, so the system is safe.

4.2- If four requests arrive:
(a)- one from process P2 (1,0,0,0): P2's need row becomes [1 0 0 0] and Available becomes [0 1 0 0].
(b)- another from process P3 (0,1,0,0): P3's need row becomes [0 0 1 0] and Available becomes [1 0 0 0].
(c)- another from process P4 (0,1,0,0): P4's need row becomes [0 0 0 1] and Available becomes [1 0 0 0].
(d)- another from process P1 (0,0,0,1): the system has no D resources available.

Do you think any of these requests should be granted immediately?
No. After each hypothetical grant, no process's remaining need fits within Available,
so none of the processes could execute.
Method notes: check whether Available covers the need of some P; if so, add P's Allocation
back to Available; when simulating a request, subtract it from Available and update that
P's row in the need matrix.
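A minimal C sketch of the Banker's safety check used above (data layout assumed; illustrative only):

#include <stdbool.h>
#include <string.h>

#define NP 5   /* processes */
#define NR 4   /* resource types */

bool is_safe(int avail[NR], int need[NP][NR], int alloc[NP][NR]) {
    int work[NR];
    bool done[NP] = { false };
    memcpy(work, avail, sizeof work);
    for (int pass = 0; pass < NP; pass++) {
        bool progress = false;
        for (int p = 0; p < NP; p++) {
            if (done[p]) continue;
            bool fits = true;
            for (int r = 0; r < NR; r++)
                if (need[p][r] > work[r]) { fits = false; break; }
            if (fits) {               /* p can finish: release its allocation */
                for (int r = 0; r < NR; r++) work[r] += alloc[p][r];
                done[p] = true;
                progress = true;
            }
        }
        if (!progress) break;         /* nobody else can finish */
    }
    for (int p = 0; p < NP; p++)
        if (!done[p]) return false;   /* some process can never finish: unsafe */
    return true;
}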













Algorithm  Selection Function  Decision Mode         Throughput                  Response Time                                    Overhead     Effect on Processes                    Starvation
FCFS       max(w)              Nonpreemptive         Not emphasized              May be high (large variance in execution times)  Minimum      Penalizes short & I/O-bound processes  No
RR         constant            Preemptive (quantum)  May be low (small quantum)  Good response time for short processes           -            Fair                                   -
SPN        min(s)              Nonpreemptive         High                        -                                                Can be high  Penalizes long processes               Possible
SRT        min(s-e)            Preemptive (arrival)  High                        Good response time                               Can be high  Penalizes long processes               Possible
HRRN       max((w+s)/s)        Nonpreemptive         High                        -                                                Can be high  Good balance                           No
Feedback   --                  Preemptive (quantum)  Not emphasized              Not emphasized                                   -            Favors I/O-bound processes             Possible
(w = time spent waiting, s = expected service time, e = time spent in execution so far.)