Operating System Services
Operating systems provide an environment for the execution of programs, and services to programs and users.
One set of operating-system services provides functions that are helpful to the user:
1. User interface - Almost all operating systems have a user interface (UI). It varies between a command-line interface (CLI), a graphical user interface (GUI), and batch interfaces.
2. Program execution - The system must be able to load a program into memory and run that program, and to end its execution either normally or abnormally (indicating an error).
3. I/O operations - A running program may require I/O, which may involve a file or an I/O device.
4. File-system manipulation - The file system is of particular interest. Programs need to read and write files and directories, create and delete them, search them, list file information, and manage permissions.
5. Communications - Processes may exchange information, on the same computer or between computers over a network. Communication may be via shared memory or through message passing (packets moved by the OS).
6. Error detection - The OS needs to be constantly aware of possible errors, which may occur in the CPU and memory hardware, in I/O devices, or in a user program. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing. Debugging facilities can greatly enhance users' and programmers' ability to use the system efficiently.
Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
7. Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them. There are many types of resources: some (such as CPU cycles, main memory, and file storage) may have special allocation code, while others (such as I/O devices) may have general request and release code.
8. Accounting - To keep track of which users use how much and what kinds of computer resources.
The operating system is responsible for using hardware efficiently; for disk drives, this means having fast access time and high disk bandwidth.
Minimize seek time: seek time is roughly proportional to seek distance.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
There are many sources of disk I/O requests: the OS, system processes, and user processes.
An I/O request includes the input or output mode, the disk address, the memory address, and the number of sectors to transfer.
The OS maintains a queue of requests per disk or device.
Several algorithms exist to schedule the servicing of disk I/O requests; the analysis holds for one or many platters.
We illustrate scheduling algorithms with a request queue (cylinders 0-199):
98, 183, 37, 122, 14, 124, 65, 67
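As a concrete illustration, the sketch below computes the total head movement for this queue under first-come-first-served (FCFS) scheduling. The starting head position of cylinder 53 is an assumption for illustration, not a value stated in the text.

```c
#include <stdlib.h>

/* Total head movement (in cylinders) under FCFS scheduling:
 * requests are serviced strictly in arrival order. */
int fcfs_head_movement(int start, const int *queue, int n) {
    int total = 0, head = start;
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);  /* seek distance to next request */
        head = queue[i];                /* head is now at that cylinder */
    }
    return total;
}
```

For the queue above with the head starting at cylinder 53, FCFS yields a total head movement of 640 cylinders; smarter algorithms such as SSTF or SCAN reduce this considerably.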
6. Multiprogramming:
Multiprogramming is a technique to execute a number of programs seemingly simultaneously on a single processor.
In multiprogramming, a number of processes reside in main memory at a time.
The OS picks and begins to execute one of the jobs in main memory.
If an I/O wait occurs in a process, the CPU switches from that job to another job; hence the CPU is not idle at any time.
[Figure: main-memory layout holding the OS and Jobs 1-5.]
Advantages:
Throughput increases.
7. Race Condition
1. A process may be changing common variables, updating a table, writing a file, etc.
2. When one process is in its critical section, no other process may be in its critical section.
Each process must ask permission to enter its critical section in an entry section; the critical section may be followed by an exit section, then the remainder section.
Solution to the Critical-Section Problem:
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speeds of the n processes.
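A classic two-process software solution that satisfies all three requirements is Peterson's algorithm. The sketch below is my own minimal two-thread version; it uses C11 sequentially-consistent atomics, an assumption needed on modern hardware where plain loads and stores may be reordered.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static atomic_bool flag[2];   /* flag[i]: thread i wants to enter */
static atomic_int turn;       /* whose turn it is to defer */
static int counter;           /* shared data guarded by the lock */

static void lock(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);   /* entry section: announce intent */
    atomic_store(&turn, other);     /* politely give the other the turn */
    /* wait while the other thread wants in and it is its turn */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;
}

static void unlock(int i) {
    atomic_store(&flag[i], false);  /* exit section */
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(id);
        counter++;                  /* critical section */
        unlock(id);
    }
    return 0;
}

/* Runs two contending threads; returns the final counter value,
 * which is 200000 only if mutual exclusion held throughout. */
int peterson_demo(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    counter = 0;
    pthread_create(&t0, 0, worker, &id0);
    pthread_create(&t1, 0, worker, &id1);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    return counter;
}
```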
9. Readers-Writers Problem (a classic synchronisation problem):
o Readers only read the data set; they do not perform any updates.
o Writers can both read and write.
The structure of a writer process:
do {
    wait (wrt) ;
    // writing is performed
    signal (wrt) ;
} while (TRUE);
The structure of a reader process:
do {
    wait (mutex) ;
    readcount ++ ;
    if (readcount == 1)
        wait (wrt) ;
    signal (mutex) ;
    // reading is performed
    wait (mutex) ;
    readcount -- ;
    if (readcount == 0)
        signal (wrt) ;
    signal (mutex) ;
} while (TRUE);
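The pseudocode above maps directly onto POSIX primitives. The sketch below is my own illustration (names such as read_lock/read_unlock and the demo function are assumptions, not from the text): a mutex plays the role of mutex, a semaphore plays the role of wrt, and the writer keeps two shared counters equal so that any reader that slipped in mid-write would notice.

```c
#include <pthread.h>
#include <semaphore.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; /* guards readcount */
static sem_t wrt;                       /* held while a writer is writing */
static int readcount = 0;
static int shared_a = 0, shared_b = 0;  /* writer keeps these equal */

static void read_lock(void) {
    pthread_mutex_lock(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);                 /* first reader locks out writers */
    pthread_mutex_unlock(&mutex);
}

static void read_unlock(void) {
    pthread_mutex_lock(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);                 /* last reader admits writers */
    pthread_mutex_unlock(&mutex);
}

static void *reader(void *arg) {
    long bad = 0;
    for (int i = 0; i < 10000; i++) {
        read_lock();
        if (shared_a != shared_b)       /* a torn read would show up here */
            bad++;
        read_unlock();
    }
    return (void *)bad;
}

static void *writer(void *arg) {
    for (int i = 0; i < 10000; i++) {
        sem_wait(&wrt);
        shared_a++;                     /* writing is performed */
        shared_b++;
        sem_post(&wrt);
    }
    return 0;
}

/* Runs two readers against one writer; returns the number of
 * inconsistent reads observed (0 if the protocol is correct). */
long readers_writers_demo(void) {
    pthread_t r1, r2, w;
    void *bad1, *bad2;
    sem_init(&wrt, 0, 1);
    pthread_create(&r1, 0, reader, 0);
    pthread_create(&r2, 0, reader, 0);
    pthread_create(&w, 0, writer, 0);
    pthread_join(r1, &bad1);
    pthread_join(r2, &bad2);
    pthread_join(w, 0);
    return (long)bad1 + (long)bad2;
}
```

A semaphore (rather than a second mutex) is used for wrt because the first reader acquires it and a different reader may release it; mutexes require the locking thread to unlock, semaphores do not.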
Paging vs. Segmentation:
Sl. No.  Paging                     Segmentation
1.       No separate protection     Separate protection
2.       No separate compiling      Separate compiling
3.       No shared code             Shared code
4.       Length fixed               Length variable
5.       One-dimensional address    Two-dimensional address
6.       Linking static             Linking dynamic
7.       Fragmentation internal     Fragmentation external
Process vs. Thread:
1. A process is heavyweight, or resource-intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple-processing environments each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes each process operates independently of the others; one thread can read, write, or change another thread's data.
13. Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because of their small size, and memory blocks remain unused. This problem is known as fragmentation. Fragmentation is of two types:
1. External fragmentation - Total memory space exists to satisfy a request, but it is not contiguous, so it cannot be used.
2. Internal fragmentation - The memory block assigned to a process is bigger than requested; some portion of memory is left unused, as it cannot be used by another process.
External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.
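To make the idea concrete, here is a small sketch (my own illustration, not from the text) of a memory map where total free space is sufficient but scattered, so an allocation fails until compaction merges the holes into one block.

```c
#include <stdbool.h>

/* free_sizes holds the sizes of the free holes between used regions. */
static bool can_allocate(const int *free_sizes, int n, int request) {
    for (int i = 0; i < n; i++)
        if (free_sizes[i] >= request)   /* needs one contiguous hole */
            return true;
    return false;
}

/* Compaction merges all holes into one; returns the big hole's size. */
static int compact(const int *free_sizes, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += free_sizes[i];
    return total;
}

/* Returns 1 if the request succeeds only after compaction:
 * that is exactly external fragmentation. */
int fragmentation_demo(void) {
    int holes[] = {30, 20, 25};         /* 75 units free, but scattered */
    int request = 60;
    bool before = can_allocate(holes, 3, request);
    bool after  = compact(holes, 3) >= request;
    return !before && after;
}
```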
13. Multiprogramming
In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute. Only one program at a time is able to get the CPU for executing its instructions (i.e., there is at most one process running on the system) while all the others are waiting their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Indeed, suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU to be accomplished). Then, the OS may interrupt that process and give control to one of the other in-memory programs that are ready to execute. In this way, no CPU time is wasted by the system waiting for the I/O task to be completed, and a running process keeps executing until either it voluntarily releases the CPU or it blocks for an I/O operation. Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute.
Note that in order for such a system to function properly, the OS must be able to load multiple programs into separate areas of main memory and provide the required protection to avoid the chance of one process being modified by another. Another problem that needs to be addressed when having multiple programs in memory is fragmentation as programs enter or leave main memory. A further issue is that large programs may not fit in memory all at once, which can be solved by using paging and virtual memory.
Finally, note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks and few or no I/O operations), in the very worst case one program might wait for all the other N-1 to complete before executing.
Multiprocessing
Multiprocessing sometimes refers to executing multiple processes (programs) at the same
time. This might be misleading because we have already introduced the term
multiprogramming to describe that before.
In fact, multiprocessing refers to the hardware (i.e., the CPU units) rather than the software
(i.e., running processes). If the underlying hardware provides more than one processor then
that is multiprocessing. Several variations on the basic scheme exist, e.g., multiple cores on
one die or multiple dies in one package or multiple packages in one system.
In any case, a system can be both multiprogrammed, by having multiple programs running at the same time, and a multiprocessing system, by having more than one physical processor.
14. Linux components:
Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed considering UNIX compatibility; its functionality list is quite similar to that of UNIX.
Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.
System Library - System libraries are special functions or programs by which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.
Basic Features
Following are some of the important features of the Linux operating system.
Open Source - Linux source code is freely available, and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.
Multi-User - Linux is a multi-user system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.
Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.
Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.
Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.
Architecture
Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).
Deadlock Prevention - Restrain the ways requests can be made:
Mutual Exclusion - not required for sharable resources; must hold for nonsharable resources.
Hold and Wait - must guarantee that whenever a process requests a resource, it does not hold any other resources:
1) Require a process to request and be allocated all its resources before it begins execution, or allow a process to request resources only when the process has none.
2) Drawbacks: low resource utilization; starvation possible.
No Preemption -
1) If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released.
2) Preempted resources are added to the list of resources for which the process is waiting.
3) The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Circular Wait - impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
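In practice, the circular-wait condition is often broken by always acquiring locks in a fixed global order. The sketch below is my own illustration (the helper names are assumptions): a pair of mutexes is always taken in address order, so two threads can never hold them in opposite orders and form a cycle.

```c
#include <pthread.h>

/* Acquire two mutexes in a fixed global order (here: by address).
 * Every thread that needs both locks uses this helper, so the cycle
 * "A holds m1, waits for m2; B holds m2, waits for m1" cannot form. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *t = a; a = b; b = t; } /* enforce order */
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);        /* release order does not matter */
    pthread_mutex_unlock(b);
}

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
static int balance1 = 100, balance2 = 0;

static void *transfer(void *arg) {
    for (int i = 0; i < 1000; i++) {
        /* one thread transfers 1 -> 2, the other 2 -> 1; naive code
         * locking (m1, m2) vs. (m2, m1) directly could deadlock */
        lock_pair(&m1, &m2);
        if (arg) { balance1--; balance2++; }
        else     { balance2--; balance1++; }
        unlock_pair(&m1, &m2);
    }
    return 0;
}

/* Runs two opposing transfer threads; returns the conserved total. */
int lock_order_demo(void) {
    pthread_t t0, t1;
    pthread_create(&t0, 0, transfer, (void *)0);
    pthread_create(&t1, 0, transfer, (void *)1);
    pthread_join(t0, 0);
    pthread_join(t1, 0);
    return balance1 + balance2;     /* total is conserved; no deadlock */
}
```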
Deadlock Avoidance - Requires that the system have some additional a priori information available:
1) The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
2) The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
3) The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
Resource-Allocation Graph and Wait-for Graph -
[Figure: a resource-allocation graph and the corresponding wait-for graph.]
Detection Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available; for i = 1, 2, ..., n, if Allocation_i != 0 then Finish[i] = false, otherwise Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Request_i <= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlocked state; moreover, process Pi is deadlocked.
Example snapshot (five processes P0-P4, three resource types A, B, C):
         Allocation   Request   Available
         A B C        A B C     A B C
P0       0 1 0        0 0 0     0 0 0
P1       2 0 0        2 0 2
P2       3 0 3        0 0 0
P3       2 1 1        1 0 0
P4       0 0 2        0 0 2
The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i; the system is not deadlocked.
Now suppose P2 requests an additional instance of type C; the Request matrix becomes:
         Request
         A B C
P0       0 0 0
P1       2 0 2
P2       0 0 1
P3       1 0 0
P4       0 0 2
State of the system?
1) We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
2) A deadlock exists, consisting of processes P1, P2, P3, and P4.
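The detection algorithm can be run directly on this example. The sketch below is a straightforward implementation for a fixed-size snapshot; applied to the matrices above, it finds no deadlock before P2's extra request and a deadlock of {P1, P2, P3, P4} after it.

```c
#include <stdbool.h>

#define NP 5  /* processes */
#define NR 3  /* resource types A, B, C */

/* Deadlock-detection algorithm: returns the number of deadlocked
 * processes and marks them in deadlocked[]. */
int detect(int alloc[NP][NR], int req[NP][NR],
           int avail[NR], bool deadlocked[NP]) {
    int work[NR];
    bool finish[NP];
    for (int j = 0; j < NR; j++) work[j] = avail[j];
    for (int i = 0; i < NP; i++) {       /* step 1 */
        bool holds = false;
        for (int j = 0; j < NR; j++) if (alloc[i][j]) holds = true;
        finish[i] = !holds;
    }
    bool progress = true;
    while (progress) {                   /* steps 2-3 */
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < NR; j++)
                if (req[i][j] > work[j]) fits = false;
            if (fits) {                  /* Pi can finish: reclaim its resources */
                for (int j = 0; j < NR; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    int count = 0;                       /* step 4 */
    for (int i = 0; i < NP; i++) {
        deadlocked[i] = !finish[i];
        if (!finish[i]) count++;
    }
    return count;
}
```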
18. Thrashing
1) If a process does not have enough pages, the page-fault rate is very high: a page fault occurs to get a page, an existing frame is replaced, but the replaced frame is quickly needed back.
This leads to:
Low CPU utilization
The operating system thinking that it needs to increase the degree of multiprogramming
Another process added to the system
2) Thrashing: a process is busy swapping pages in and out.
3) [Graph: CPU utilization vs. degree of multiprogramming, rising and then collapsing once thrashing sets in.]
19. Parser -
Top-down parsing
Bottom-up parsing
Semaphore operations (busy-wait form):
wait (S) {
    while (S <= 0)
        ; // no-op
    S--;
}
signal (S) {
    S++;
}
Binary semaphore - simpler to implement
o Also known as mutex locks
Can implement a counting semaphore S as a binary semaphore
Provides mutual exclusion:
Semaphore mutex; // initialized to 1
do {
    wait (mutex);
    // Critical Section
    signal (mutex);
    // remainder section
} while (TRUE);
Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
Thus, the implementation becomes a critical-section problem where the wait and signal code are placed in the critical section.
Could now have busy waiting in the critical-section implementation.
Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
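The block()/wakeup() pseudocode can be realized in user space with a mutex and a condition variable. The sketch below is my own illustration (the names sem_wait_impl/sem_signal_impl are assumptions); it keeps the same convention that a negative value counts the blocked processes, and adds a wakeups counter so that a signal releases exactly one waiter.

```c
#include <pthread.h>

/* A counting semaphore in the style of the pseudocode above:
 * value < 0 means -value threads are blocked. */
typedef struct {
    int value;
    int wakeups;            /* pending wakeup(P) calls not yet consumed */
    pthread_mutex_t lock;
    pthread_cond_t  cond;   /* stands in for S->list + block()/wakeup() */
} semaphore_t;

void sem_init_impl(semaphore_t *s, int value) {
    s->value = value;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, 0);
    pthread_cond_init(&s->cond, 0);
}

void sem_wait_impl(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value--;
    if (s->value < 0) {
        /* block(): sleep until a wakeup is available; the loop guards
         * against spurious wakeups, which pthread_cond_wait permits */
        while (s->wakeups == 0)
            pthread_cond_wait(&s->cond, &s->lock);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

void sem_signal_impl(semaphore_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    if (s->value <= 0) {
        s->wakeups++;                    /* wakeup(P): release one waiter */
        pthread_cond_signal(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
}
```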
Linked Allocation
Each file is a linked list of disk blocks; blocks may be scattered anywhere on the disk.
No external fragmentation.
[Figure: a file's blocks chained together, each block holding data and a pointer to the next block.]
File-Allocation Table (FAT): a variation of linked allocation in which all the links are kept in a table at the beginning of the volume, with one entry per disk block.
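A File-Allocation Table is essentially a next-block array indexed by block number. A toy traversal (my own illustration, with an assumed end-of-file marker) looks like this:

```c
#define FAT_EOF -1   /* assumed end-of-file marker in this toy FAT */

/* Follow a file's chain through the FAT, copying its block numbers
 * into out[]; returns the number of blocks in the file. */
int fat_chain(const int *fat, int start_block, int *out, int max) {
    int n = 0, b = start_block;
    while (b != FAT_EOF && n < max) {
        out[n++] = b;        /* this block belongs to the file */
        b = fat[b];          /* the FAT entry gives the next block */
    }
    return n;
}
```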
Indexed Allocation
Each file has its own index block (index table) of pointers to its data blocks.
Random access.
Dynamic access without external fragmentation, but with the overhead of the index block.
Mapping from logical to physical in a file of maximum size 256K words with a block size of 512 words: we need only 1 block for the index table.
Mapping from logical to physical in a file of unbounded length (block size of 512 words) requires a linked or multilevel index scheme.
Two-level index: 4K blocks can store 1,024 four-byte pointers in the outer index -> 1,048,576 data blocks and a file size of up to 4 GB.
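The single-index-block mapping is simple arithmetic: dividing the logical address by the block size gives the index entry, and the remainder is the displacement within the data block. A quick sketch (the type and function names are my own):

```c
/* Indexed allocation, single index block, block size 512:
 * a 256K-word file needs 512 index entries, which fit in one
 * 512-entry index block. */
typedef struct {
    int index_entry;   /* which pointer in the index block (Q) */
    int displacement;  /* offset within the data block (R)     */
} mapping_t;

mapping_t map_logical(int logical_addr) {
    mapping_t m;
    m.index_entry  = logical_addr / 512;  /* Q selects the data block */
    m.displacement = logical_addr % 512;  /* R is the word within it  */
    return m;
}
```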
22. Process Management
Typically a system has many processes, some user and some operating-system processes, running concurrently on one or more CPUs.
o Concurrency is achieved by multiplexing the CPUs among the processes/threads.
The operating system is responsible for the following activities in connection with process management: creating and deleting both user and system processes, suspending and resuming processes, and providing mechanisms for process synchronization, process communication, and deadlock handling.
24. Semaphore
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation.
Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.
The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:
P(S): IF S > 0
          THEN S := S - 1
          ELSE (wait on S)
The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal(S), operates as follows:
V(S): IF (one or more processes are waiting on S)
          THEN (let one of these processes proceed)
          ELSE S := S + 1
Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
Producer-Consumer Problem Using Semaphores
The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that are full, 'empty' for counting the number of slots that are empty, and 'mutex' to make sure that the producer and consumer do not access the modifiable shared section of the buffer simultaneously.
Initialization: mutex = 1; empty = n (the number of buffer slots); full = 0.
Producer ( )
WHILE (true)
    produce-Item ( );
    P (empty);
    P (mutex);
    add-Item ( );
    V (mutex);
    V (full);
Consumer ( )
WHILE (true)
    P (full);
    P (mutex);
    remove-Item ( );
    V (mutex);
    V (empty);
    consume-Item (Item);
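This scheme maps directly onto POSIX semaphores. The sketch below is my own illustration (the demo function and buffer sizes are assumptions): a producer pushes the numbers 1..1000 through a 4-slot bounded buffer while a consumer drains and sums them.

```c
#include <pthread.h>
#include <semaphore.h>

#define SLOTS 4
#define ITEMS 1000

static int buffer[SLOTS];
static int in = 0, out = 0;        /* circular-buffer cursors */
static sem_t empty_slots, full_slots, mutex;
static long consumed_sum;

static void *producer(void *arg) {
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty_slots);    /* P(empty): wait for a free slot */
        sem_wait(&mutex);          /* P(mutex) */
        buffer[in] = i;            /* add-Item */
        in = (in + 1) % SLOTS;
        sem_post(&mutex);          /* V(mutex) */
        sem_post(&full_slots);     /* V(full)  */
    }
    return 0;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full_slots);     /* P(full): wait for an item */
        sem_wait(&mutex);
        consumed_sum += buffer[out]; /* remove-Item + consume-Item */
        out = (out + 1) % SLOTS;
        sem_post(&mutex);
        sem_post(&empty_slots);    /* V(empty) */
    }
    return 0;
}

/* Returns the sum of all consumed items: 1 + 2 + ... + ITEMS. */
long bounded_buffer_demo(void) {
    sem_init(&empty_slots, 0, SLOTS);  /* empty = n  */
    sem_init(&full_slots, 0, 0);       /* full  = 0  */
    sem_init(&mutex, 0, 1);            /* mutex = 1  */
    pthread_t p, c;
    pthread_create(&p, 0, producer, 0);
    pthread_create(&c, 0, consumer, 0);
    pthread_join(p, 0);
    pthread_join(c, 0);
    return consumed_sum;
}
```

Note the order P(empty) before P(mutex): taking the mutex first and then blocking on empty would leave the consumer unable to enter and free a slot, deadlocking the pair.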
Since more core memory is available to the user, there is no memory limit.
Disadvantages of an Absolute Loader:
The programmer must specifically tell the assembler the address where the program is to be loaded.
When subroutines are referenced, the programmer must specify their addresses whenever they are called.
Systems generally first distinguish among users, to determine who can do what.
o User identities (user IDs, security IDs) include a name and an associated number, one per user.
o The user ID is then associated with all files and processes of that user to determine access control.
o A group identifier (group ID) allows a set of users to be defined and controls to be managed; it is then also associated with each process and file.
o Privilege escalation allows a user to change to an effective ID with more rights.