
1. Memory Management with Linked Lists

With linked-list management, memory is kept as a list of allocated segments and holes. When a process X terminates, each of its two neighbours in the list may be either a process or a hole, giving four neighbour combinations for the terminating process X; in every case any adjacent holes are merged into a single larger hole.
Algorithms for allocating memory when linked-list management is used:
1. FIRST FIT - allocates the first hole found that is large enough - fast (as little searching as possible).
2. NEXT FIT - almost the same as First Fit, except that it keeps track of where it last allocated space and starts searching from there instead of from the beginning - slightly better performance.
3. BEST FIT - searches the entire list looking for the hole that is closest to the size needed by the process - slow - also does not improve resource utilization, because it tends to leave many very small (and therefore useless) holes.
4. WORST FIT - the opposite of Best Fit - chooses the largest available hole and breaks off a hole that is large enough to be useful (i.e., large enough to hold another process) - in practice has not been shown to work better than the others.
A first-fit sketch over such a hole list follows.
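A minimal first-fit sketch over a singly linked list of free holes (the node layout, field names, and the shrink-from-the-front policy are illustrative assumptions, not taken from the notes above):

#include <stdlib.h>
#include <stddef.h>

/* Hypothetical hole descriptor; field names are assumptions for illustration. */
struct hole {
    size_t start;        /* starting address of the free region */
    size_t size;         /* length of the free region in bytes  */
    struct hole *next;   /* next hole in address order          */
};

/* First fit: walk the list and take the first hole that is big enough.
 * Returns the allocated start address, or (size_t)-1 if no hole fits.  */
size_t first_fit(struct hole **list, size_t request)
{
    struct hole **pp = list;
    for (struct hole *h = *list; h != NULL; pp = &h->next, h = h->next) {
        if (h->size >= request) {
            size_t addr = h->start;
            h->start += request;     /* shrink the hole from the front   */
            h->size  -= request;
            if (h->size == 0) {      /* hole used up: unlink and free it */
                *pp = h->next;
                free(h);
            }
            return addr;
        }
    }
    return (size_t)-1;               /* no hole large enough             */
}

Next Fit differs only in remembering where the previous search stopped and resuming the scan from there on the next call.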
2. What is an Operating System?
An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.
An operating system is the software that makes a computer actually work.
It is the software that enables all the programs we use.
The OS organizes and controls the hardware.
The OS acts as an interface between the application programs and the machine hardware.
Examples: Windows, Linux, UNIX, Mac OS, etc.
The OS is a resource allocator: it manages all resources and decides between conflicting requests for efficient and fair resource use. The OS is also a control program: it controls the execution of programs to prevent errors and improper use of the computer.

Operating System Services
Operating systems provide an environment for the execution of programs, and services to programs and users.
One set of operating-system services provides functions that are helpful to the user:
1. User interface - Almost all operating systems have a user interface (UI). It varies between Command-Line Interface (CLI), Graphical User Interface (GUI), and Batch.
2. Program execution - The system must be able to load a program into memory and run that program, and end execution either normally or abnormally (indicating an error).
3. I/O operations - A running program may require I/O, which may involve a file or an I/O device.
4. File-system manipulation - The file system is of particular interest. Programs need to read and write files and directories, create and delete them, search them, list file information, and manage permissions.
5. Communications - Processes may exchange information, on the same computer or between computers over a network. Communication may be via shared memory or through message passing (packets moved by the OS).
6. Error detection - The OS needs to be constantly aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices, or in user programs. For each type of error, the OS should take the appropriate action to ensure correct and consistent computing. Debugging facilities can greatly enhance the users' and programmers' abilities to use the system efficiently.
Another set of OS functions exists for ensuring the efficient operation of the system itself via resource sharing:
7. Resource allocation - When multiple users or multiple jobs are running concurrently, resources must be allocated to each of them. There are many types of resources: some (such as CPU cycles, main memory, and file storage) may have special allocation code, while others (such as I/O devices) may have general request and release code.
8. Accounting - To keep track of which users use how much and what kinds of computer resources.
9. Protection and security - The owners of information stored in a multiuser or networked computer system may want to control use of that information, and concurrent processes should not interfere with each other. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders requires user authentication, and extends to defending external I/O devices from invalid access attempts.
Details
1) User Operating System Interface - CLI
The Command Line Interface (CLI), or command interpreter, allows direct command entry.
It is sometimes implemented in the kernel and sometimes by a systems program.
Sometimes multiple flavors are implemented, called shells.
It primarily fetches a command from the user and executes it.
- Sometimes commands are built in, sometimes they are just the names of programs.
- If the latter, adding new features doesn't require shell modification.
2) User Operating System Interface - GUI
A user-friendly desktop-metaphor interface:
1. Usually mouse, keyboard, and monitor
2. Icons represent files, programs, actions, etc.
3. Various mouse buttons over objects in the interface cause various actions (provide information, options, execute a function, open a directory, known as a folder)
4. Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces:
1. Microsoft Windows is a GUI with a CLI command shell
2. Apple Mac OS X has the Aqua GUI interface with a UNIX kernel underneath and shells available
3. Solaris is CLI with optional GUI interfaces (Java Desktop, KDE).

3. Time Sharing Systems:
Time sharing, or multitasking, is a logical extension of multiprogramming.
Multiple jobs are executed by switching the CPU between them.
Here the CPU time is shared by different processes, so such systems are called time-sharing systems.
A time slice is defined by the OS for sharing CPU time between processes.
Examples: Multics, UNIX, etc.
4. Batch Processing:
In batch processing, jobs of the same type are batched together (a BATCH is a set of jobs with similar needs) and executed at one time.
The OS was simple; its major task was to transfer control from one job to the next.
Jobs were submitted to the computer operator in the form of punch cards, and the output appeared at some later time.
The OS was always resident in memory.
Common input devices were card readers and tape drives.
Common output devices were line printers, tape drives, and card punches.
5. Disk Scheduling:
The operating system is responsible for using hardware efficiently; for the disk drives, this means having a fast access time and high disk bandwidth.
Minimize seek time: seek time is roughly proportional to seek distance.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
There are many sources of disk I/O requests: the OS, system processes, and user processes.
An I/O request includes the input or output mode, the disk address, the memory address, and the number of sectors to transfer.
The OS maintains a queue of requests, per disk or device.
Several algorithms exist to schedule the servicing of disk I/O requests.
The analysis is true for one or many platters.
We illustrate scheduling algorithms with a request queue (cylinders 0-199):
98, 183, 37, 122, 14, 124, 65, 67
A small sketch that computes the total head movement for this queue follows.
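A minimal FCFS (first-come, first-served) head-movement calculation for the request queue above; the initial head position of 53 is an assumption for illustration (it is not stated in the notes, though it is the value usually paired with this queue):

#include <stdio.h>
#include <stdlib.h>

/* FCFS disk scheduling: service requests in arrival order and add up
 * the absolute head movement between consecutive cylinders.          */
int fcfs_head_movement(int start, const int *queue, int n)
{
    int total = 0, pos = start;
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - pos);
        pos = queue[i];
    }
    return total;
}

int main(void)
{
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof queue / sizeof queue[0];
    int start = 53;   /* assumed initial head position; not given in the notes */
    printf("FCFS total head movement: %d cylinders\n",
           fcfs_head_movement(start, queue, n));
    return 0;
}

With the assumed start of 53 this yields 640 cylinders of total head movement; algorithms such as SSTF, SCAN, C-SCAN, and LOOK reorder the queue to reduce it.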

6. Multiprogramming:
Multiprogramming is a technique to execute a number of programs simultaneously on a single processor.
In multiprogramming, a number of processes reside in main memory at a time.
The OS picks and begins to execute one of the jobs in main memory.
If any I/O wait happens in a process, the CPU switches from that job to another job, so the CPU is not idle at any time.
(Figure: layout of a multiprogramming system - main memory holds the OS and Jobs 1-5; the memory holds 5 jobs at a time and the CPU executes them one by one.)
Advantages:
- Efficient memory utilization
- Throughput increases
- The CPU is never idle, so performance increases.

7. Race Condition
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with counter = 5 initially:
S0: producer executes register1 = counter {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = counter {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes counter = register1 {counter = 6}
S5: consumer executes counter = register2 {counter = 4}
The final value may be 6 or 4 depending on which store happens last, although the correct result is 5. A small threaded demonstration follows.
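A minimal sketch (assumed, not from the notes) that reproduces this race with two POSIX threads hammering an unprotected shared counter; the increments and decrements balance, yet the final value is often not 0:

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

/* volatile only keeps each ++/-- as a visible load-modify-store;
 * it does NOT make the operations atomic.                        */
static volatile long counter = 0;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                     /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                     /* races with the producer */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    /* Would be 0 if the updates were atomic; typically it is not. */
    printf("final counter = %ld\n", counter);
    return 0;
}

Protecting counter++ and counter-- with a mutex or a semaphore (see the critical-section and semaphore items below) removes the race.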

8. Critical Section Problem:
Consider a system of n processes {P0, P1, ..., Pn-1}.
Each process has a critical section segment of code:
1. The process may be changing common variables, updating a table, writing a file, etc.
2. When one process is in its critical section, no other process may be executing in its critical section.
The critical-section problem is to design a protocol to solve this.
Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then executes its remainder section.
This is especially challenging with preemptive kernels.
The general structure of process Pi is sketched below.
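A sketch of the general structure, here made concrete with a POSIX mutex standing in for the entry and exit sections (the mutex is an assumption for illustration; the notes only require some entry/exit protocol):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *process_i(void *arg)
{
    (void)arg;
    while (true) {
        pthread_mutex_lock(&lock);     /* entry section                      */
        /* critical section: update shared variables, tables, files          */
        pthread_mutex_unlock(&lock);   /* exit section                       */
        /* remainder section: work that does not touch shared data           */
    }
    return NULL;
}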

Solution to the Critical-Section Problem (three requirements):
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed.
No assumption is made concerning the relative speed of the n processes.
9. Readers-Writers Problem (one classical synchronisation problem):
A data set is shared among a number of concurrent processes:
o Readers only read the data set; they do not perform any updates.
o Writers can both read and write.
The problem: allow multiple readers to read at the same time.
o Only a single writer can access the shared data at any one time.
o Several variations exist in how readers and writers are treated; all involve priorities.
Shared data: the data set, a semaphore mutex initialized to 1, a semaphore wrt initialized to 1, and an integer readcount initialized to 0.
The structure of a writer process:
do {
    wait (wrt);            // wait for exclusive access to the data set
    // writing is performed
    signal (wrt);          // release the data set
} while (TRUE);
The structure of a reader process:
do {
    wait (mutex);          // protect readcount
    readcount++;
    if (readcount == 1)    // first reader locks out writers
        wait (wrt);
    signal (mutex);
    // reading is performed
    wait (mutex);          // protect readcount
    readcount--;
    if (readcount == 0)    // last reader lets writers in
        signal (wrt);
    signal (mutex);
} while (TRUE);

10. Paging vs Segmentation:
1. Paging is transparent to the programmer (the system allocates memory); segmentation involves the programmer (memory is allocated to specific functions inside the code).
2. Paging has no separate protection; segmentation provides separate protection per segment.
3. Paging has no separate compiling; segmentation allows separate compiling.
4. Paging has no shared code; segmentation allows shared code.
5. Pages are of fixed length; segments are of variable length.
6. The paged address space is one-dimensional; the segmented address space is two-dimensional (segment number, offset).
7. Paging uses static linking; segmentation allows dynamic linking.
8. Paging suffers internal fragmentation; segmentation suffers external fragmentation.
A short sketch of how a paged logical address splits into a page number and offset follows.
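A minimal sketch of the one-dimensional paged address being split into a page number and an offset (the page size and the example address are illustrative assumptions):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                      /* assumed 4 KB pages        */

int main(void)
{
    uint32_t logical = 0x0001A2B4;           /* example logical address   */
    uint32_t page    = logical / PAGE_SIZE;  /* index into the page table */
    uint32_t offset  = logical % PAGE_SIZE;  /* displacement within page  */
    printf("page %u, offset %u\n", page, offset);
    return 0;
}

With segmentation, the programmer-visible address is instead a (segment number, offset) pair, which is what makes the segmented address space two-dimensional.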

11. Difference between Process and Thread
1. A process is heavyweight or resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, then no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

12. Difference between User Level & Kernel Level Threads
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; the operating system supports the creation of kernel threads.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. With user-level threads, a multithreaded application cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.

13. Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because of the blocks' small size, and the memory blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types:
1. External fragmentation - The total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation - The memory block assigned to a process is bigger than requested. Some portion of memory is left unused, as it cannot be used by another process.
External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation should be dynamic.
13. Multiprogramming
In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute. Only one program at a time is able to get the CPU to execute its instructions (i.e., there is at most one process running on the system) while all the others are waiting their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Indeed, suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU to be accomplished). Then the OS may interrupt that process and give control to one of the other in-main-memory programs that are ready to execute. In this way, no CPU time is wasted by the system waiting for the I/O task to be completed, and a running process keeps executing until either it voluntarily releases the CPU or it blocks for an I/O operation. Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute.
Note that in order for such a system to function properly, the OS must be able to load multiple programs into separate areas of main memory and provide the required protection to avoid one process being modified by another. Another problem that needs to be addressed when having multiple programs in memory is fragmentation as programs enter or leave main memory. A further issue is that large programs may not fit in memory all at once, which can be solved by using paging and virtual memory.
Finally, note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks and few or no I/O operations), in the very worst case one program might wait for all the other N-1 to complete before executing.
Multiprocessing
Multiprocessing sometimes refers to executing multiple processes (programs) at the same
time. This might be misleading because we have already introduced the term
multiprogramming to describe that before.
In fact, multiprocessing refers to the hardware (i.e., the CPU units) rather than the software
(i.e., running processes). If the underlying hardware provides more than one processor then
that is multiprocessing. Several variations on the basic scheme exist, e.g., multiple cores on
one die or multiple dies in one package or multiple packages in one system.
In any case, a system can be both multiprogrammed, by having multiple programs running at the same time, and a multiprocessing system, by having more than one physical processor.

14. Linux components:
Linux is one of the popular versions of the UNIX operating system. It is open source, as its source code is freely available, and it is free to use. Linux was designed with UNIX compatibility in mind; its functionality list is quite similar to that of UNIX.

Components of Linux System

The Linux operating system has primarily three components:

Kernel - The kernel is the core part of Linux. It is responsible for all major activities of this operating system. It consists of various modules and interacts directly with the underlying hardware. The kernel provides the required abstraction to hide low-level hardware details from system or application programs.

System Library - System libraries are special functions or programs through which application programs or system utilities access the kernel's features. These libraries implement most of the functionality of the operating system and do not require the kernel module's code access rights.

System Utility - System utility programs are responsible for doing specialized, individual-level tasks.

Basic Features
Following are some of the important features of the Linux operating system.

Portable - Portability means software can work on different types of hardware in the same way. The Linux kernel and application programs support installation on any kind of hardware platform.

Open Source - Linux source code is freely available and it is a community-based development project. Multiple teams work in collaboration to enhance the capability of the Linux operating system, and it is continuously evolving.

Multi-User - Linux is a multiuser system, meaning multiple users can access system resources like memory, RAM, and application programs at the same time.

Multiprogramming - Linux is a multiprogramming system, meaning multiple applications can run at the same time.

Hierarchical File System - Linux provides a standard file structure in which system files and user files are arranged.

Shell - Linux provides a special interpreter program which can be used to execute commands of the operating system. It can be used to do various types of operations, call application programs, etc.

Security - Linux provides user security using authentication features like password protection, controlled access to specific files, and encryption of data.

Architecture

The Linux system architecture consists of the following layers:

Hardware layer - Hardware consists of all peripheral devices (RAM, HDD, CPU, etc.).

Kernel - The core component of the operating system; it interacts directly with hardware and provides low-level services to upper-layer components.

Shell - An interface to the kernel, hiding the complexity of the kernel's functions from users. It takes commands from the user and executes the kernel's functions.

Utilities - Utility programs that give the user most of the functionality of an operating system.

15. System and Application Software:

Comparison
1) System software helps in operating the computer hardware and provides a platform for running application software.
Application software helps the user in performing single or multiple related computing tasks.
2) System software executes in a self-created environment.
Application software executes in the environment created by the system software.
3) System software executes continuously as long as the computer system is running.
Application software executes as and when the user requires it.
4) The programming of system software is complex, requiring knowledge of the workings of the underlying hardware.
The programming of application software is relatively easier, and requires only knowledge of the underlying system software.
5) There are far fewer system software packages than application software packages.
There are many more application software packages than system software packages.
6) System software runs in the background and users typically do not interact with it.
Application software runs in the foreground, and users interact with it frequently for all their computing needs.
7) System software can function independently of application software.
Application software depends on the system software and cannot run without it.
8) Examples of system software: Windows OS, BIOS, device firmware, Mac OS X, Linux, etc.
Examples of application software: Windows Media Player, Adobe Photoshop, World of Warcraft (game), iTunes, MySQL, etc.
17. Deadlocks: A deadlock is a situation in which two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
System model:
1) Resource types R1, R2, ..., Rm (CPU cycles, memory space, I/O devices).
2) Each resource type Ri has Wi instances.
3) Each process utilizes a resource as follows: request, use, release.
Deadlock Characterization - deadlock can arise only if four conditions hold simultaneously:

Mutual exclusion: only one process at a time can use a resource.

Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.

No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task.

Circular wait: there exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Deadlock Prevention - restrain the ways a request can be made:
Mutual Exclusion - not required for sharable resources; must hold for nonsharable resources.
Hold and Wait - must guarantee that whenever a process requests a resource, it does not hold any other resources:
1) Require the process to request and be allocated all its resources before it begins execution, or allow the process to request resources only when it holds none.
2) Drawbacks: low resource utilization; starvation is possible.
No Preemption -
1) If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released.
2) Preempted resources are added to the list of resources for which the process is waiting.
3) The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting.
Circular Wait - impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration.
Deadlock Avoidance - requires that the system has some additional a priori information available:
1) The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
2) The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition.
3) The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes.
Resource-Allocation Graph and Wait-for Graph:
(Figure: (a) resource-allocation graph; (b) the corresponding wait-for graph.)

Detection Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   (a) Work = Available
   (b) For i = 1, 2, ..., n: if Allocation_i != 0, then Finish[i] = false; otherwise Finish[i] = true.
2. Find an index i such that both:
   (a) Finish[i] == false
   (b) Request_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == false for some i, 1 <= i <= n, then the system is in a deadlock state. Moreover, if Finish[i] == false, then Pi is deadlocked.
A C sketch of this algorithm is given below.
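A compact sketch of the detection algorithm in C (the fixed sizes N and M and the array layout are assumptions for illustration; the logic follows the four steps just listed):

#include <stdbool.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

/* Returns true if the system described by the matrices is deadlocked,
 * and marks the deadlocked processes in `deadlocked`.                 */
bool detect_deadlock(int available[M], int allocation[N][M],
                     int request[N][M], bool deadlocked[N])
{
    int work[M];
    bool finish[N];

    for (int j = 0; j < M; j++)            /* step 1(a): Work = Available */
        work[j] = available[j];

    for (int i = 0; i < N; i++) {          /* step 1(b)                   */
        bool has_alloc = false;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) has_alloc = true;
        finish[i] = !has_alloc;
    }

    bool progress = true;
    while (progress) {                     /* steps 2 and 3               */
        progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) can_run = false;
            if (can_run) {                 /* pretend Pi runs to completion */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }

    bool deadlock = false;                 /* step 4                      */
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];
        if (!finish[i]) deadlock = true;
    }
    return deadlock;
}

Applied to the second snapshot in the example below (after P2's extra request for one instance of C), this marks P1, P2, P3, and P4 as deadlocked, matching the result stated in the notes.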


Example of the Detection Algorithm:
1) Five processes P0 through P4; three resource types: A (7 instances), B (2 instances), and C (6 instances).
2) Snapshot at time T0:

         Allocation   Request   Available
         A B C        A B C     A B C
   P0    0 1 0        0 0 0     0 0 0
   P1    2 0 0        2 0 2
   P2    3 0 3        0 0 0
   P3    2 1 1        1 0 0
   P4    0 0 2        0 0 2

3) The sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.
4) P2 requests an additional instance of type C:

         Request
         A B C
   P0    0 0 0
   P1    2 0 2
   P2    0 0 1
   P3    1 0 0
   P4    0 0 2

State of the system?
1) We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes' requests.
2) A deadlock exists, consisting of processes P1, P2, P3, and P4.

18. Thrashing
1) If a process does not have enough pages, the page-fault rate is very high: it page-faults to get a page, replaces an existing frame, but quickly needs the replaced frame back.
This leads to:
- Low CPU utilization
- The operating system thinking that it needs to increase the degree of multiprogramming
- Another process being added to the system
2) Thrashing: a process is busy swapping pages in and out.
3) (Figure: CPU utilization versus degree of multiprogramming; utilization rises and then collapses once thrashing sets in.)

19. Parser

Top-Down Parsing
- The parse tree is created from the root to the leaves.
- The traversal of the parse tree is a preorder traversal.
- It traces a leftmost derivation.
- Two types: backtracking parsers and predictive parsers.

Bottom-Up Parsing
- The parse tree is created from the leaves to the root.
- The traversal of the parse tree is the reverse of a postorder traversal.
- It traces a rightmost derivation (in reverse).
- More powerful than top-down parsing.
A small predictive (recursive-descent) parsing sketch follows.
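A minimal predictive (recursive-descent) top-down parsing sketch for simple arithmetic expressions; the grammar below and the evaluate-while-parsing style are illustrative assumptions, not part of the notes:

#include <stdio.h>
#include <ctype.h>
#include <stdlib.h>

/* Grammar (assumed for illustration):
 *   expr   -> term   { ('+' | '-') term   }
 *   term   -> factor { ('*' | '/') factor }
 *   factor -> DIGIT | '(' expr ')'
 * One procedure per nonterminal; a predictive parser needs no backtracking. */

static const char *p;                 /* cursor into the input string */

static int expr(void);

static int factor(void)
{
    if (*p == '(') {                  /* factor -> '(' expr ')' */
        p++;
        int v = expr();
        if (*p == ')') p++;
        return v;
    }
    if (isdigit((unsigned char)*p))   /* factor -> DIGIT */
        return *p++ - '0';
    fprintf(stderr, "syntax error at '%c'\n", *p);
    exit(1);
}

static int term(void)
{
    int v = factor();
    while (*p == '*' || *p == '/') {
        char op = *p++;
        int r = factor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

static int expr(void)
{
    int v = term();
    while (*p == '+' || *p == '-') {
        char op = *p++;
        int r = term();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

int main(void)
{
    const char *input = "2+3*(4-1)";
    p = input;
    printf("%s = %d\n", input, expr());   /* prints 2+3*(4-1) = 11 */
    return 0;
}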


21. Semaphore

A synchronization tool that does not require busy waiting.
A semaphore S is an integer variable.
Two standard operations modify S: wait() and signal()
o Originally called P() and V()
Less complicated.
S can only be accessed via two indivisible (atomic) operations:
o wait (S) {
      while (S <= 0)
          ;   // no-op (busy wait)
      S--;
  }
o signal (S) {
      S++;
  }

Semaphore as a General Synchronization Tool

- Counting semaphore: the integer value can range over an unrestricted domain.
- Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement.
  o Also known as mutex locks.
- A counting semaphore S can be implemented as a binary semaphore.
- Provides mutual exclusion:
  Semaphore mutex;    // initialized to 1
  do {
      wait (mutex);
      // critical section
      signal (mutex);
      // remainder section
  } while (TRUE);

Semaphore Implementation
- Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
- Thus the implementation becomes a critical-section problem, where the wait and signal code are placed in the critical section.
- This could now introduce busy waiting in the critical-section implementation.
- But the implementation code is short, and there is little busy waiting if the critical section is rarely occupied.

Semaphore Implementation with No Busy Waiting

Implementation of wait:
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;   // queue the caller instead of spinning
        block();                       // suspend the calling process
    }
}

Implementation of signal:
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);                     // resume one waiting process
    }
}
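The semaphore structure assumed by the two routines above is typically declared along these lines (a sketch; the exact type of the waiting list is an implementation choice):

typedef struct {
    int value;              /* negative value = number of waiting processes */
    struct process *list;   /* queue of processes blocked on this semaphore */
} semaphore;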

22. ALLOCATION METHODS FOR DISK SPACE:

1) Allocation Methods - Contiguous
An allocation method refers to how disk blocks are allocated for files.
Contiguous allocation: each file occupies a set of contiguous blocks.
- Best performance in most cases.
- Simple: only the starting location (block #) and length (number of blocks) are required.
- Problems include finding space for a file, knowing the file size in advance, external fragmentation, and the need for compaction, either off-line (downtime) or on-line.
Mapping a logical address: dividing the logical address by the block size gives a quotient Q and a remainder R; then
- Block to be accessed = Q + starting address
- Displacement into block = R
A small sketch of this mapping follows.
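A minimal sketch of the contiguous-allocation mapping just described (the block size and the example numbers are assumptions):

#include <stdio.h>

#define BLOCK_SIZE 512u

int main(void)
{
    unsigned logical = 2600;             /* example logical address      */
    unsigned start   = 19;               /* file's starting block number */
    unsigned q = logical / BLOCK_SIZE;   /* Q: block index within file   */
    unsigned r = logical % BLOCK_SIZE;   /* R: displacement into block   */
    printf("disk block %u, offset %u\n", start + q, r);
    return 0;
}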

2) Allocation Methods - Linked
Linked allocation: each file is a linked list of disk blocks.
- The file ends at a nil pointer.
- No external fragmentation, and no compaction is needed.
- Each block contains a pointer to the next block.
- The free-space management system is called when a new block is needed.
- Efficiency can be improved by clustering blocks into groups, but this increases internal fragmentation.
- Reliability can be a problem.
- Locating a block can take many I/Os and disk seeks.
FAT (File Allocation Table) variation:
- The beginning of the volume holds the table, indexed by block number.
- Much like a linked list, but faster on disk and cacheable.
- New block allocation is simple.
Each file is a linked list of disk blocks; the blocks may be scattered anywhere on the disk, and each block holds a data area plus a pointer to the next block.
(Figure: File-Allocation Table.)
Following a FAT chain is sketched below.
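A small sketch (the table size and contents are made-up assumptions) of following a file's chain through an in-memory FAT, where each entry holds the number of the next block and -1 marks end of file:

#include <stdio.h>

#define FAT_EOF (-1)

int main(void)
{
    /* fat[b] holds the number of the block that follows b in its file,
     * or FAT_EOF if b is the last block of the file.                   */
    int fat[16];
    for (int i = 0; i < 16; i++) fat[i] = FAT_EOF;

    /* Assume a file whose directory entry says it starts at block 9,
     * with data in blocks 9 -> 2 -> 12.                                */
    fat[9]  = 2;
    fat[2]  = 12;
    fat[12] = FAT_EOF;

    /* Walk the chain from the directory's starting block. */
    for (int b = 9; b != FAT_EOF; b = fat[b])
        printf("block %d\n", b);
    return 0;
}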

3) Allocation Methods - Indexed
Indexed allocation: each file has its own index block, a table of pointers to its data blocks.
(Figures: index table; example of indexed allocation.)
- Needs an index table.
- Supports random access.
- Dynamic access without external fragmentation, but with the overhead of the index block.
- Mapping from logical to physical for a file of maximum size 256K bytes and block size 512 bytes: we need only 1 block for the index table.
- Mapping from logical to physical for a file of unbounded length (block size of 512 words):
  - Linked scheme: link together blocks of the index table (no limit on size).
  - Two-level index: 4K blocks could store 1,024 four-byte pointers in the outer index -> 1,048,576 data blocks and a file size of up to 4 GB.
A short sketch of the single-level index mapping follows.
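A minimal sketch of the single-level index mapping (512-byte blocks as stated above; the index contents and logical address are made-up example values):

#include <stdio.h>

#define BLOCK_SIZE 512u

int main(void)
{
    /* Index block of the file: entry i holds the disk block that stores
     * the i-th logical block of the file (example values).             */
    unsigned index_block[4] = { 19, 7, 42, 101 };

    unsigned logical = 1300;             /* example logical address      */
    unsigned q = logical / BLOCK_SIZE;   /* entry in the index block     */
    unsigned r = logical % BLOCK_SIZE;   /* offset within that block     */
    printf("disk block %u, offset %u\n", index_block[q], r);
    return 0;
}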


22. Process Management

A process is a program in execution. It is a unit of work within the system. A program is a passive entity; a process is an active entity.

Process needs resources to accomplish its task


o CPU, memory, I/O, files
o Initialization data

Process termination requires reclaim of any reusable resources

Single-threaded process has one program counter specifying location of next


instruction to execute
o Process executes instructions sequentially, one at a time, until completion

Multi-threaded process has one program counter per thread

Typically system has many processes, some user, some operating system running
concurrently on one or more CPUs
o Concurrency by multiplexing the CPUs among the processes / threads

The operating system is responsible for the following activities in connection with
process management:


o Creating and deleting both user and system processes


o Suspending and resuming processes
o Providing mechanisms for process synchronization
o Providing mechanisms for process communication
o Providing mechanisms for deadlock handling

Memory Management

All data must be in memory before and after processing.

All instructions must be in memory in order to execute.

Memory management determines what is in memory and when,
o optimizing CPU utilization and the computer's response to users.

Memory management activities


o Keeping track of which parts of memory are currently being used and by
whom
o Deciding which processes (or parts thereof) and data to move into and out of
memory
o Allocating and deallocating memory space as needed

24. Semaphore
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation (semaphore initialize).

Binary semaphores can assume only the value 0 or the value 1; counting semaphores (also called general semaphores) can assume any nonnegative value.
The P (or wait or sleep or down) operation on semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0
          THEN S := S - 1
          ELSE (wait on S)
The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal(S), operates as follows:
V(S): IF (one or more processes are waiting on S)
          THEN (let one of these processes proceed)
          ELSE S := S + 1
Operations P and V are done as single, indivisible, atomic actions. It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore S is enforced within P(S) and V(S).
If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.
Semaphores solve the lost-wakeup problem.
Producer-Consumer Problem Using Semaphores
The solution to the producer-consumer problem uses three semaphores, namely full, empty and mutex.
The semaphore 'full' is used for counting the number of slots in the buffer that are full, 'empty' for counting the number of slots that are empty, and semaphore 'mutex' to make sure that the producer and consumer do not access the modifiable shared section of the buffer simultaneously.
Initialization:
- Set full buffer slots to 0, i.e. semaphore full = 0.
- Set empty buffer slots to N, i.e. semaphore empty = N.
- To control access to the critical section, set mutex to 1, i.e. semaphore mutex = 1.

Producer ( )
    WHILE (true)
        produce-Item ( );
        P (empty);          // wait for an empty slot
        P (mutex);          // enter critical section
        enter-Item ( );
        V (mutex);          // leave critical section
        V (full);           // one more full slot
Consumer ( )
    WHILE (true)
        P (full);           // wait for a full slot
        P (mutex);          // enter critical section
        remove-Item ( );
        V (mutex);          // leave critical section
        V (empty);          // one more empty slot
        consume-Item (Item);
A compilable POSIX version of this pseudocode is sketched below.
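A compilable sketch of the same scheme using POSIX threads and semaphores (the buffer size, item type, and iteration count are assumptions; build with -pthread on Linux):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N     8          /* buffer slots (assumed)     */
#define ITEMS 32         /* items to produce (assumed) */

static int buffer[N], in = 0, out = 0;
static sem_t full, empty, mutex;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);             /* P(empty): wait for a free slot   */
        sem_wait(&mutex);             /* P(mutex): enter critical section */
        buffer[in] = i;               /* enter-Item                       */
        in = (in + 1) % N;
        sem_post(&mutex);             /* V(mutex)                         */
        sem_post(&full);              /* V(full): one more full slot      */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);              /* P(full): wait for an item        */
        sem_wait(&mutex);             /* P(mutex)                         */
        int item = buffer[out];       /* remove-Item                      */
        out = (out + 1) % N;
        sem_post(&mutex);             /* V(mutex)                         */
        sem_post(&empty);             /* V(empty): one more empty slot    */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&full, 0, 0);            /* full  = 0 */
    sem_init(&empty, 0, N);           /* empty = N */
    sem_init(&mutex, 0, 1);           /* mutex = 1 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}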

Thrashing is caused by under-allocation of the minimum number of pages required by a process, forcing it to continuously page fault. The system can detect thrashing by evaluating the level of CPU utilization as compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.
25. Loaders
A loader is a system program which takes the object code of a program as input and prepares it for execution.
Loader functions: the loader performs the following functions:
o Allocation - The loader determines and allocates the required memory space for the program to execute properly.
o Linking - The loader analyses and resolves the symbolic references made in the object modules.
o Relocation - The loader maps and relocates the address references to correspond to the newly allocated memory space during execution.
o Loading - The loader actually loads the machine code corresponding to the object modules into the allocated memory space and makes the program ready to execute.
1) Compile-and-Go Loaders:
A compile-and-go loader is one in which the assembler itself does the compiling and then places the assembled instructions in the designated memory locations.
The assembly process is executed first, and then the assembler causes a transfer to the first instruction of the program.
E.g. the WATFOR FORTRAN compiler.
This loading scheme is also called assemble-and-go.
Advantages of compile-and-go loaders:
o Simple and easier to implement.
o No additional routines are required to load the compiled code into memory.
Disadvantages of compile-and-go loaders:
o Wastage of memory space due to the presence of the assembler.
o The code must be re-assembled every time it is to be run.
2) Absolute Loader:
The absolute loader will load the program at memory location x200:
1. The header record is checked to verify that the correct program has been presented for loading.
2. Each text record is read and moved to the indicated address in memory.
3. When the end record (EOF) is encountered, the loader jumps to the specified address to begin execution.
The four functions as performed in an absolute loader are:
1. Allocation  2. Linking  3. Relocation  4. Loading
Advantages of the absolute loader:
- Simple, easy to design and implement.
- Since the assembler is not resident, more core memory is available to the user.
Disadvantages of the absolute loader:
- The programmer must specifically tell the assembler the address where the program is to be loaded.
- When subroutines are referenced, the programmer must specify their addresses whenever they are called.

26. Protection and Security

Protection: any mechanism for controlling access of processes or users to resources defined by the OS.

Security: defense of the system against internal and external attacks.
o Huge range, including denial-of-service, worms, viruses, identity theft, and theft of service.

Systems generally first distinguish among users, to determine who can do what:
o User identities (user IDs, security IDs) include a name and an associated number, one per user.
o The user ID is then associated with all files and processes of that user, to determine access control.
o A group identifier (group ID) allows a set of users to be defined and controls to be managed; it is then also associated with each process and file.
o Privilege escalation allows a user to change to an effective ID with more rights.
