
Operating System Structure.

Monolithic Systems
The components of a monolithic operating system are organized haphazardly, and any
module can call any other module without restriction. As in other operating systems,
applications in a monolithic OS are separated from the operating system itself. That is,
the operating system code runs in a privileged processor mode (referred to as kernel
mode), with access to system data and to the hardware; applications run in a non-privileged
processor mode (called user mode), with a limited set of interfaces available and with
limited access to system data. The monolithic operating system structure, with separate
user and kernel processor modes, is shown in Figure 2.1.

When a user-mode program calls a system service, the processor traps the call and
switches the calling thread to kernel mode. When the system service completes, the
operating system switches the thread back to user mode and allows the caller to continue.
The monolithic structure does not enforce data hiding in the operating system. It
delivers better application performance, but extending such a system can be difficult,
because modifying one procedure can introduce bugs in seemingly unrelated parts of the
system.
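
To make the trap concrete, here is a minimal C sketch, assuming Linux, where the
syscall() wrapper issues the trap instruction directly; any monolithic kernel follows
the same user-to-kernel-mode pattern:

    #include <unistd.h>       /* syscall() wrapper */
    #include <sys/syscall.h>  /* SYS_write number  */

    int main(void) {
        const char msg[] = "hello from user mode\n";
        /* syscall() executes the trap instruction; the processor switches
           the calling thread to kernel mode, the kernel runs its write
           service, then switches back to user mode and returns here. */
        long n = syscall(SYS_write, 1, msg, sizeof msg - 1);
        return n < 0 ? 1 : 0;
    }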

Example Systems: CP/M and MS-DOS


Layered Systems:

The components of a layered operating system are organized into modules, layered one
on top of the other. Each module provides a set of functions that other modules can call.
Interface functions at any particular level can invoke services provided by lower layers,
but not the other way around. The layered operating system structure, with its hierarchical
organization of modules, is shown in Figure 2.2.

One advantage of a layered operating system structure is that each layer of code is given
access to only the lower-level interfaces (and data structures) it requires, thus limiting the
amount of code that wields unlimited power. That is, in this approach, the Nth layer can
access services provided by the (N-1)th layer and provides services to the (N+1)th layer.
This structure also allows the operating system to be debugged starting at the lowest
layer, adding one layer at a time until the whole system works correctly. Layering also
makes it easier to enhance the operating system: one entire layer can be replaced
without affecting other parts of the system. A layered operating system does, however,
deliver lower application performance than a monolithic one, because each request must
pass through several layers.
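
This calling discipline can be sketched in C. The layers and function names below are
purely illustrative, not from any real kernel; the point is only that each layer calls one
level down and never up:

    #include <stdio.h>

    /* Layer 1: device driver (would talk to hardware; stubbed here). */
    static void l1_read_sector(int sector, char *buf) {
        snprintf(buf, 32, "data@%d", sector);   /* pretend disk read */
    }

    /* Layer 2: block cache. May call layer 1 only, never layer 3. */
    static void l2_read_block(int block, char *buf) {
        l1_read_sector(block * 2, buf);         /* downward call: allowed */
    }

    /* Layer 3: file system. May call layer 2 only. */
    static void l3_read_file(int inode, char *buf) {
        l2_read_block(inode + 100, buf);
    }

    int main(void) {
        char buf[32];
        l3_read_file(7, buf);                   /* request flows down the layers */
        printf("%s\n", buf);
        return 0;
    }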

Example Systems: VAX/VMS, Multics, UNIX


Virtual Machines:

 A virtual machine takes the layered approach to its logical conclusion. It treats
hardware and the operating system kernel as though they were all hardware.

 A virtual machine provides an interface identical to the underlying bare hardware.

 The operating system creates the illusion of multiple processes, each executing on
its own processor with its own (virtual) memory.

 The resources of the physical computer are shared to create the virtual machines.

 CPU scheduling can create the appearance that users have their own
processor.

 Spooling and a file system can provide virtual card readers and virtual line
printers.

 A normal user time-sharing terminal serves as the virtual machine operator’s
console.

Advantages/Disadvantages of Virtual Machines

 The virtual-machine concept provides complete protection of system resources,
since each virtual machine is isolated from all other virtual machines. This isolation,
however, permits no direct sharing of resources.

 A virtual-machine system is a perfect vehicle for operating-systems research and
development. System development is done on the virtual machine instead of on a
physical machine, and so does not disrupt normal system operation.

 The virtual-machine concept is difficult to implement because of the effort required to
provide an exact duplicate of the underlying machine.

Figure: Illustration of virtual and non-virtual machines


Client-Server Model:

Figure: The client-server model.

In this model, shown in the figure, all the kernel does is handle the communication
between clients and servers; the main OS functions are provided by a number of separate
processes.

The main function of the microkernel is to provide a communication facility
between the client program and the various services that are also running in user space.
Communication is provided by message passing. The client program and the service never
interact directly. Rather, they communicate indirectly by exchanging messages with the
microkernel.

The benefits of the microkernel approach include the ease of extending the
operating system. All new services are added to user space and consequently do not
require modification of the kernel. The microkernel also provides more security and
reliability, since most services run as user processes rather than as kernel processes.
If a service fails, the rest of the operating system remains untouched.
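
The indirection can be sketched in C. The message format and the kern_send/kern_receive
primitives below are hypothetical stand-ins (real microkernels such as Mach or L4 each
define their own), and the "server" is simulated by a plain function call so that the
sketch runs in a single process:

    #include <stdio.h>

    /* Hypothetical message format and kernel primitives; client and
       server exchange messages through the kernel, never calling each
       other directly. */
    struct message { int type; char data[64]; };

    static struct message mailbox;                /* stand-in for a kernel queue */
    static void kern_send(struct message *m)    { mailbox = *m; }
    static void kern_receive(struct message *m) { *m = mailbox; }

    enum { MSG_READ = 1, MSG_REPLY = 2 };

    static void file_server(void) {               /* would run as a user process */
        struct message m;
        kern_receive(&m);                         /* wait for a request */
        if (m.type == MSG_READ) {
            struct message r = { MSG_REPLY, "contents of file" };
            kern_send(&r);                        /* reply goes back via the kernel */
        }
    }

    int main(void) {                              /* the client */
        struct message req = { MSG_READ, "/etc/motd" }, reply;
        kern_send(&req);                          /* request message to the kernel */
        file_server();                            /* simulated server turn */
        kern_receive(&reply);
        printf("client got: %s\n", reply.data);
        return 0;
    }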
A distributed system can be thought of as an extension of the client-server concept where
the servers are remote.

Figure: The client-server model in a distributed system.

 The communication between client and server is often by message passing.
 Client and server can run on the same machine or on different machines,
connected by a local or wide area network.
 A client request can be handled by a server on the same machine or on another machine.
 The client does not need to know which computer is serving its request, so the
client-server model is an abstraction that can be used for a single machine or for a
network of machines. For example:

The kernel coordinates the message passing between client applications and
application servers. The client-server structure of Windows NT is:

Fig: Windows NT client-server structure.


Process Management
The Process Model

In this model, all the runnable software on the computer, sometimes including the
operating system, is organized into a number of sequential processes, or just processes
for short. A process is just an executing program, including the current values of the
program counter, registers, and variables. In multiprogramming, the CPU switches from
process to process.

Figure: (a) Multiprogramming of four programs. (b) Conceptual model of four
independent, sequential processes. (c) Only one program is active at once.

In Fig. (a) we see a computer multiprogramming four programs in memory.

In Fig. (b) we see four processes, each with its own flow of
control (i.e., its own logical program counter), and each one running independently of the
other ones. Of course, there is only one physical program counter, so when each process
runs, its logical program counter is loaded into the real program counter. When it is
finished for the time being, the physical program counter is saved in the process's logical
program counter in memory.

In Fig. (c) we see that, viewed over a long enough time interval, all the
processes have made progress, but at any given instant only one process is actually
running.
Process State

A batch system executes jobs, while a time-sharing system executes user programs or
tasks. These activities are similar, so we call them all processes. A program is a passive
entity, but a process is an active entity. A process has a program, input, output, and a
state (its current activity). As a process executes, it changes state. The state of a process
is defined in part by the current activity of that process. Each process may be in one of the
following states:
 New: The process is being created
 Ready: The process is waiting to be assigned to a processor.
 Running: Instructions are being executed.
 Waiting/blocked: The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
 Terminated: The process has finished execution.

Ready to running:

A ready process goes to the running state when the scheduler selects it, i.e., when the
other processes have used their share of CPU time and it is this process's turn to run.

Running to ready:

A running process goes to the ready state when the scheduler decides
that the running process has run long enough, and it is time to let another process have
some CPU time.
Running to waiting/block:

A running process goes to the waiting/blocked state when it discovers that it cannot
continue. In some systems the process must execute a system call, such as block or pause,
to get into the blocked state. In other systems, including UNIX, when a process reads from
a pipe or special file (e.g., a terminal) and there is no input available, the process is
blocked automatically.

Waiting/block to ready:

A waiting process goes to the ready state when the external event for which it was
waiting (such as the arrival of some input) happens. If no other process is running at
that instant, the ready-to-running transition is triggered and the process starts running.
Otherwise it may have to wait in the ready state for a little while until the CPU is available
and its turn comes.
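
These states and transitions might be recorded in kernel data structures along the
following lines; this is a minimal C sketch with illustrative names, not taken from any
particular OS:

    /* Per-process state bookkeeping. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct proc { int pid; enum proc_state state; };

    /* Ready -> running: the scheduler selects this process for the CPU. */
    void dispatch(struct proc *p) { p->state = RUNNING; }

    /* Running -> waiting/blocked: the process must wait for an event (e.g., I/O). */
    void block(struct proc *p)    { p->state = WAITING; }

    /* Waiting/blocked -> ready: the awaited event (e.g., I/O completion) occurred. */
    void wakeup(struct proc *p)   { p->state = READY; }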

Process Creation
The processes in the system can execute concurrently, and they must be created and
deleted dynamically. Thus, the operating system must provide a mechanism (or facility) for
process creation and termination. There are four principal events that cause processes to
be created:
1. System initialization.
2. Execution of a process creation system call by a running process.
3. A user request to create a new process.
4. Initiation of a batch job.
A process may create several new processes, via a create-process system call, during the
course of execution. The creating process is called a parent process, whereas the new
processes are called the children of that process. Each of these new processes may in turn
create other processes, forming a tree of processes (Figure).

Fig: Process creation. (In the figure, a child has only one parent, but a parent may have many children.)
When a process creates a subprocess, that subprocess may be able to obtain its resources
directly from the operating system, or it may be constrained to a subset of the resources of
the parent process. The parent may have to partition its resources among its children, or it
may be able to share some resources (such as memory or files) among several of its
children. Restricting a child process to a subset of the parent's resources prevents any
process from overloading the system by creating too many subprocesses.

When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.

In UNIX, processes are created by the fork system call, which creates an identical copy of
the calling process.
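
For instance, the following C program uses the standard UNIX fork interface; the child is
an identical copy of the parent, and the two execute concurrently:

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();        /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");        /* creation failed */
        } else if (pid == 0) {
            /* child: an identical copy; fork() returned 0 here */
            printf("child:  pid=%d\n", getpid());
        } else {
            /* parent: continues concurrently; fork() returned the child's PID */
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
        }
        return 0;
    }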

Process Termination
A process terminates when it finishes executing its final statement and asks the
operating system to delete it by using the exit system call. At that point, the process may
return data (output) to its parent process (via the wait system call). All the resources of the
process (including physical and virtual memory, open files, and I/O buffers) are deallocated
by the operating system.
A process can cause the termination of another process via an appropriate system
call (for example, abort). Usually, only the parent of the process that is to be terminated
can invoke such a system call. Otherwise, users could arbitrarily kill each other's jobs. A
parent therefore needs to know the identities of its children. Thus, when one process
creates a new process, the identity of the newly created process is passed to the parent.
A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
 The child has exceeded its usage of some of the resources that it has
been allocated. This requires the parent to have a mechanism to
inspect the state of its children.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child
to continue if its parent terminates. On such systems, if a process
terminates (either normally or abnormally), then all its children must
also be terminated. This phenomenon, referred to as cascading
termination, is normally initiated by the operating system.
In UNIX, a process can terminate itself by using the exit system call; its parent process may
wait for the termination of a child process by using the wait system call.
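
A short C example of this exit/wait handshake, using the standard UNIX calls:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {                /* child: finish and report a status */
            printf("child %d exiting\n", getpid());
            exit(42);                  /* status is returned to the parent */
        }
        int status;
        waitpid(pid, &status, 0);      /* parent blocks until the child ends */
        if (WIFEXITED(status))
            printf("child exit status: %d\n", WEXITSTATUS(status));
        return 0;
    }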

Implementation of Processes
To implement the process model, the operating system maintains a table (an array of
structures) called the process table; each entry in it is a process control block (PCB),
also called a task control block or switch frame. The activity of a process is controlled
by its PCB. A PCB is created every time a program is loaded to be executed. Each process
has its own PCB to represent it in the operating system. The PCB is the central store of
information that allows the operating system to locate all key information about the
process. It contains everything about the process that must be saved when the process is
switched from the running state to the ready state, so that it can be restarted later as if it
had never been stopped.

Process management: registers, program counter, program status word, stack pointer,
process state, priority, scheduling parameters, process ID, parent process, process group,
signals, time when process started, CPU time used, children's CPU time, time of next alarm.

Memory management: pointer to text segment, pointer to data segment, pointer to stack
segment.

File management: root directory, working directory, file descriptors, user ID, group ID.

Figure: Some of the fields of a typical process table entry.
The following is the information stored in a PCB.

Process state: The state may be new, ready, running, waiting, halted, and
so on.
Process number: Each process is identified by its process number, called
the process identification number (PID).
Program counter: The counter indicates the address of the next instruction
to be executed for this process.
CPU registers: They include accumulators, index registers, stack pointers,
and general-purpose registers, plus any condition-code information. Along
with the program counter, this state information must be saved when an
interrupt occurs, to allow the process to be continued correctly afterward.
CPU-scheduling information: This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.

Memory-management information: This may include the values of the base and limit
registers, the page tables, or the segment tables, depending on the memory system used
by the OS.

Accounting information: This information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers, and so on.

I/O status information: The information includes the list of I/O devices allocated to this
process, a list of open files, and so on.

These PCBs are chained into a number of lists. For example, all processes in
the ready state are in the ready queue.
As processes switch in and out of the running state, their PCBs are saved and reloaded, as
shown in the following diagram:
Fig: Showing CPU switch from process to process.
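
In C, a greatly simplified PCB might look like the sketch below; the fields are
illustrative and far fewer than in a real kernel's process structure (compare Linux's
struct task_struct):

    /* A minimal process control block; one such entry per process
       would sit in the process table. */
    struct pcb {
        int            pid;             /* process identification number   */
        int            state;           /* new/ready/running/waiting/...   */
        unsigned long  program_counter; /* next instruction to execute     */
        unsigned long  registers[16];   /* saved general-purpose registers */
        int            priority;        /* CPU-scheduling information      */
        void          *page_table;      /* memory-management information   */
        int            open_files[16];  /* I/O status information          */
        struct pcb    *next;            /* link for ready/waiting queues   */
    };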

Threads

 A thread is a lightweight process. A thread is contained inside a process, and different
threads in the same process share some resources (most commonly memory), while
different processes do not. A traditional (or heavyweight) process has a single thread of
control. If a process has multiple threads of control, it can do more than
one task at a time. Figure (a) illustrates the difference between a traditional single-
threaded process and a multithreaded process.

Single Threading Multi-Threading

 Single thread
 Has a single thread of control
 Allows the process to perform only one task at a time
 Multi-thread
 Has many threads
 Allows simultaneous execution of different tasks

Multithreading vs. single threading

 Multithreading: The OS supports multiple threads of execution within a single
process.
 Single threading: The OS does not recognize the separate concept of a thread.
• MS-DOS supports a single user process and a single thread.
• Traditional UNIX supports multiple user processes but only one thread per
process.
• Solaris and Windows 2000 support multiple threads.

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. A process with multiple threads makes a great server, for example a print server.
2. Because threads can share common data, they do not need to use interprocess
communication (see the sketch after this list).
3. Because of their very nature, threads can take advantage of multiprocessors.
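
As a concrete illustration, the following C program uses the POSIX Pthreads library: two
threads in one process write to a shared array directly, with no interprocess
communication (compile with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Shared memory: both threads see the same array. */
    int results[2];

    void *worker(void *arg) {
        int id = *(int *)arg;
        results[id] = id * 10;         /* write directly to shared data */
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        int ids[2] = {0, 1};
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, &ids[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);  /* wait for both threads */
        printf("results: %d %d\n", results[0], results[1]);
        return 0;
    }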

Thread:

 A thread is a subset of the process.

 It is termed a ‘lightweight process’, since it is similar to a real process but
executes within the context of a process and shares the resources allotted to
the process by the kernel.

 Usually, a process has only one thread of control – one set of machine instructions
executing at a time.

 A process may also be made up of multiple threads of execution that execute
instructions concurrently.

 Multiple threads of control can exploit the true parallelism possible on
multiprocessor systems.
 On a uni-processor system, a thread scheduling algorithm is applied and the
processor is scheduled to run each thread one at a time.

 All the threads running within a process share the same address space, file
descriptors, and other process-related attributes (each thread, however, has its own
stack and registers).

 Since the threads of a process share the same memory, synchronizing access to
the shared data within the process becomes critically important.

Process:
 An executing instance of a program is called a process.

 Some operating systems use the term ‘task’ to refer to a program that is being
executed.

 A process is always stored in the main memory also termed as the primary memory
or random access memory.

 Therefore, a process is termed an active entity. It disappears if the machine is
rebooted.

 Several processes may be associated with the same program.

 On a multiprocessor system, multiple processes can be executed in parallel.

 On a uni-processor system, though true parallelism is not achieved, a process
scheduling algorithm is applied and the processor is scheduled to execute each
process one at a time, yielding an illusion of concurrency.

 Example: Executing multiple instances of the ‘Calculator’ program. Each of the
instances is termed a process.
Threads vs. Processes

1. A thread has no data segment or heap of its own; a process has code, data, heap, and
other segments.
2. There can be more than one thread in a process (the first thread calls main and uses
the process's stack); threads within a process share code, data, heap, and I/O, but each
has its own stack and registers.
3. A thread cannot live on its own; it must live within a process. Conversely, there must
be at least one thread in a process.
4. Thread creation is inexpensive; process creation is expensive.
5. Thread context switching is inexpensive; process context switching is expensive.
6. If a thread dies, its stack is reclaimed by the process; if a process dies, its resources
are reclaimed and all its threads die.
7. Threads are considered lightweight because they use far fewer resources than
processes; a process is considered heavyweight because it uses far more resources than
a thread.
8. Threads are easier to create than processes, since they do not require a separate
address space; each process requires its own address space.
9. Threads require minimal amounts of resources; processes are heavily dependent on
the system resources available.
10. Modifying the main thread may affect the other threads of the process; changes to a
parent process will not necessarily affect its child processes.
User Threads:

 User threads are supported above the kernel and are implemented by a thread
library at the user level.

 The library (or run-time system) provides support for thread creation, scheduling
and management with no support from the kernel.

 When threads are managed in user space, each process needs its own private thread
table to keep track of the threads in that process.

 The thread table keeps track only of the per-thread items (program counter, stack
pointer, registers, state, ...).

 When a thread does something that may cause it to become blocked locally (e.g. wait
for another thread), it calls a run-time system procedure.

 If the thread must be put into blocked state, the procedure performs thread
switching.

 User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.
User-level Threads: Advantages

 The operating system does not need to support multi-threading

 Since the kernel is not involved, thread switching may be very fast.

 Each process may have its own customized thread scheduling algorithm.

 Thread scheduler may be implemented in the user space very efficiently.

User-level Threads: Problems

 The implementation of blocking system calls (where the rest of the processing must
wait until the call returns) is highly problematic, e.g., reading from the keyboard. All
the threads in the process risk being blocked!
 Possible Solutions:

• Change all system calls to non-blocking

Kernel thread:
Kernel threads are supported directly by the operating system: the kernel performs
thread creation, scheduling, and management in kernel space. Because thread management
is done by the operating system, kernel threads are generally slower to create and manage
than user threads. However, since the kernel is managing the threads, if a thread
performs a blocking system call, the kernel can schedule another thread in the application
for execution. Also, in a multiprocessor environment, the kernel can schedule threads on
different processors. Most contemporary operating systems, including Windows NT,
Windows 2000, Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX), support kernel
threads.

Some important points:

 Kernel threads are supported directly by the OS: The kernel performs thread
creation, scheduling and management in the kernel space.

 The kernel has a thread table that keeps track of all threads in the system.

 All calls that might block a thread are implemented as system calls (greater cost).

 When a thread blocks, the kernel may choose another thread from the same process,
or a thread from a different process.

 Some kernels recycle their threads; new threads use the data-structures of already
completed threads.
Advantages of threads:
The benefits of multithreaded programming can be broken down into four
major categories:

1. Responsiveness: Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation, thereby
increasing responsiveness to the user. For instance, a multithreaded web browser could
still allow user interaction in one thread while an image is being loaded in another thread.

2. Resource sharing: By default, threads share the memory and the resources of the process
to which they belong. The benefit of code sharing is that it allows an application to have
several different threads of activity all within the same address space.

3. Economy: Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more economical
to create and context-switch threads. The difference in overhead can be hard to measure,
but in general it is much more time consuming to create and manage processes than
threads. In Solaris 2, creating a process is about 30 times slower than creating a thread,
and context switching is about five times slower.

4. Utilization of multiprocessor architectures: The benefits of multithreading can be greatly
increased in a multiprocessor architecture, where each thread may be running in parallel
on a different processor. A single-threaded process can run on only one CPU, no matter
how many are available. Multithreading on a multi-CPU machine increases concurrency.
In a single-processor architecture, the CPU generally moves between threads so quickly
as to create an illusion of parallelism, but in reality only one thread is running at a time.

Thread Drawbacks
• Synchronization

 Access to shared memory or shared variables must be controlled if the
memory or variables are changed.
 Can add complexity and bugs to program code.
 E.g., one needs to be very careful to avoid race conditions, deadlocks, and other
concurrency problems (see the sketch after this list).

• Lack of independence

 Threads are not independent within a heavy-weight process (HWP).
 The RAM address space is shared; the threads have no memory protection from
each other.
 The stacks of each thread are intended to be in separate RAM, but if one
thread has a problem (e.g., with pointers or array addressing), it could write
over the stack of another thread.
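
The following C sketch (POSIX Pthreads, compile with -lpthread) shows the classic hazard
and its usual fix: two threads increment a shared counter, and without the mutex their
read-modify-write sequences interleave, a race condition that loses updates:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                    /* shared by both threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *inc(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* remove this pair and the   */
            counter++;                   /* increments interleave: the */
            pthread_mutex_unlock(&lock); /* final count comes up short */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, inc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);        /* 2000000 only with the mutex */
        return 0;
    }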
Multithreading Models
Many systems provide support for both user and kernel threads, resulting in
different multithreading models. We look at three common types of threading
implementation:

1. Many-to-One Model
2. One-to-one Model
3. Many-to-Many Model

1. Many-to-One Model

The many-to-one model (Figure) maps many user-level threads to one kernel thread.
Thread management is done in user space, so it is efficient, but the entire process will
block if a thread makes a blocking system call. In more detail:

 In the many-to-one model, many user-level threads are all mapped onto a single
kernel thread.
 Thread management is handled by the thread library in user space, which is very
efficient.
 However, if a blocking system call is made, then the entire process blocks, even if the
other user threads would otherwise be able to continue.
 Because a single kernel thread can operate on only one CPU at a time, the many-to-one
model does not allow individual processes to be split across multiple CPUs; i.e., since
only one thread can access the kernel at a time, multiple threads are unable to run in
parallel on multiprocessors.
 Green threads for Solaris and GNU Portable Threads implement the many-to-one
model.

Fig: Many-to-One Model

2. One-to-one Model

 The one-to-one model creates a separate kernel thread to handle each user
thread.
 The one-to-one model overcomes the problems noted above involving blocking
system calls and the splitting of processes across multiple CPUs: when one thread
blocks, another thread can run, and threads can run in parallel on multiple
processors.
 However, the overhead of managing the one-to-one model is more significant,
slowing down the system.
 Most implementations of this model place a limit on how many threads can
be created.
 Linux and Windows from 95 to XP implement the one-to-one model for
threads.

Fig: One-to-one Model

3. Many-to-Many Model

 The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads, combining the best features of the one-to-one and
many-to-one models.
 Users have no restrictions on the number of threads created.
 Blocking kernel system calls do not block the entire process.
 Processes can be split across multiple processors.
 Individual processes may be allocated variable numbers of kernel threads,
depending on the number of CPUs present and other factors.

Fig: Many-to-Many Model

Thread Usage

The main reasons for using threads are the four benefits described above under
"Advantages of threads": responsiveness, resource sharing, economy, and utilization of
multiprocessor architectures. In addition:

Threads are light processes: they are easier to create and destroy than processes, and
decomposing a process into multiple threads that run in quasi-parallel can simplify the
programming model.

Implementing Threads in User Space


There are two main ways to implement a threads package:

1. user space
2. kernel space

The first method is to put the threads package entirely in user space; the kernel
knows nothing about it. As far as the kernel is concerned, it is managing
ordinary, single-threaded processes. The first, and most obvious, advantage is
that a user-level threads package can be implemented on an operating system that does not
support threads. To implement threads in user space, a thread table is created in
user-space memory.
Fig: A user-level threads package.

When threads are managed in user space, each process needs its own
private thread table to keep track of the threads in that process. The thread table is
managed by the run-time system. When a thread is moved to ready state or blocked state,
the information needed to restart it is stored in the thread table.
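
Such a per-process thread table might look like the following C sketch; the field names
and sizes are illustrative only, not taken from any real run-time system:

    /* One entry per thread, kept in user-space memory and managed by
       the run-time system, invisible to the kernel. */
    enum tstate { T_READY, T_RUNNING, T_BLOCKED };

    struct uthread {
        void          *stack_ptr;   /* saved stack pointer              */
        void          *pc;          /* saved (logical) program counter  */
        unsigned long  regs[8];     /* other saved registers            */
        enum tstate    state;       /* ready / running / blocked        */
    };

    #define MAX_THREADS 32
    static struct uthread thread_table[MAX_THREADS];  /* one table per process */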

Advantages with user implemented threads:

 No changes to the operating system are needed.
 Faster thread switching, as a trap to the operating system is eliminated.
 Different applications can use different scheduling algorithms.

Disadvantages with user implemented threads:

 If a thread makes a blocking system call, all threads in the task will stop. This is
unacceptable but unavoidable if blocking system calls are the only alternative (which
is common). It can be solved, in a clumsy way, if there is a separate system call to test
whether a read will block.
 Implementation of preemptive scheduling (with signals) is usually inefficient. Non-
preemptive scheduling means that a looping thread will stop all other threads in the
same task.
 Parallel execution of threads in a multiprocessor is not possible.
 Programs that use threads usually make many system calls anyway, which makes the
system-call overhead saved on thread switching less important.
