process may need more than one time slice to complete. During a time slice a process may finish
execution or block for an I/O. The I/O in this case is usually interactive, such as a user response from the
keyboard or a display on the monitor. The CPU switches between processes at the end of a time slice. The
switching between processes is so fast that each user gets the illusion that the CPU is executing only that
user's process.
Time-sharing operating systems are more complex than multi-programmed operating systems. As in
multi-programming, several jobs must be kept simultaneously in memory, which requires some form of
memory management and protection. Jobs may also have to be swapped between main memory and
secondary memory. To achieve this goal a technique called virtual memory is used. Virtual memory
allows execution of a job that may not be completely in memory. The main idea behind virtual memory
is that a program can be larger than physical memory and still execute.
2 What is the best methodology available to create a modular kernel? List the seven types
of loadable kernel modules in Solaris.
3+7=10
Answer:
Modules
Perhaps the best current methodology for operating-system design involves using object-oriented
programming techniques to create a modular kernel. Here, the kernel has a set of core components and
dynamically links in additional services either during boot time or during run time. Such a strategy uses
dynamically loadable modules and is common in modern implementations of UNIX, such as Solaris,
Linux and MacOS. For example, the Solaris operating system structure is organized around a core kernel
with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
Such a design allows the kernel to provide core services yet also allows certain features to be implemented
dynamically. For example, device and bus drivers for specific hardware can be added to the kernel, and
support for different file systems can be added as loadable modules. The overall result resembles a layered
system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered
system in that any module can call any other module. Furthermore, the approach is like the microkernel
approach in that the primary module has only core functions and knowledge of how to load and
communicate with other modules; but it is more efficient, because modules do not need to invoke
message passing in order to communicate.
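The idea of a core that links in services at run time can be illustrated with a toy sketch. This is only an analogy in Python, not the Solaris (or Linux) kernel mechanism; the `Kernel` class is a made-up name, and the standard-library module `json` merely stands in for a loadable service.

```python
# Toy illustration of a "core kernel" that dynamically links in modules
# on demand -- an analogy for loadable kernel modules, not the real thing.
import importlib

class Kernel:
    """A core with only the knowledge of how to load and talk to modules."""
    def __init__(self):
        self.modules = {}

    def load(self, name):
        # Resolve and link a module only when it is first requested,
        # the way a modular kernel loads a driver or file system on demand.
        if name not in self.modules:
            self.modules[name] = importlib.import_module(name)
        return self.modules[name]

kernel = Kernel()
svc = kernel.load("json")            # 'json' stands in for a loadable service
print(svc.dumps({"loaded": True}))   # the core calls directly into the module
```

Note that, as the text says, the call into the loaded module is a direct function call, with no message passing involved.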
The above cycle repeats. This is called the convoy effect. Here small processes wait for one big process to
release the CPU. Since the algorithm is non-preemptive, it is not suited for time-sharing
systems.
b. Shortest-Job-First Scheduling
Another approach to CPU scheduling is the shortest job first algorithm. In this algorithm, the length of the
CPU burst is considered. When the CPU is available, it is assigned to the process that has the smallest next
CPU burst. Hence this is named shortest job first. In case there is a tie, FCFS scheduling is used to break
the tie. As an example, consider the following set of processes P1, P2, P3, P4 and their CPU burst times:
Process Burst time (ms)
P1 6
P2 8
P3 7
P4 3
Using the SJF algorithm, the processes would be scheduled as shown in the Gantt chart below.

| P4 | P1 | P3 | P2 |
0    3    9    16   24
Average waiting time = (0 + 3 + 9 + 16) / 4 = 28 / 4 = 7 ms.
Average turnaround time = (3 + 9 + 16 + 24) / 4 = 52 / 4 = 13 ms.
If the above processes were scheduled using FCFS algorithm, then
Average waiting time = (0 + 6 + 14 + 21) / 4 = 41 / 4 = 10.25 ms.
Average turnaround time = (6 + 14 + 21 + 24) / 4 = 65 / 4 = 16.25 ms.
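The averages above can be checked with a small sketch. It assumes, as in the example, that all processes arrive at time 0; `schedule` is a helper name introduced here for illustration.

```python
# Compute average waiting and turnaround times for a given run order,
# assuming all processes arrive at time 0 (as in the example above).
def schedule(bursts):
    """bursts: CPU burst times in the order the processes will run.
    Returns (average waiting time, average turnaround time)."""
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)       # time spent waiting before this burst
        clock += b
        turnaround.append(clock)    # completion time = turnaround (arrival 0)
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}   # burst times from the text

# SJF runs in increasing burst order: P4, P1, P3, P2
print(schedule(sorted(bursts.values())))   # (7.0, 13.0)
# FCFS runs in arrival order: P1, P2, P3, P4
print(schedule(list(bursts.values())))     # (10.25, 16.25)
```

The two printed pairs reproduce the SJF and FCFS averages computed above.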
The SJF algorithm produces an optimal scheduling scheme. For a given set of processes, the
algorithm gives the minimum average waiting and turnaround times. This is because shorter processes
are scheduled earlier than longer ones, and hence the decrease in waiting time for the shorter processes
exceeds the increase in waiting time for the longer processes.
The main disadvantage with the SJF algorithm lies in knowing the length of the next CPU burst. In case of
long-term or job scheduling in a batch system, the time required to complete a job as given by the user can
be used to schedule. The SJF algorithm is therefore applicable to long-term scheduling. It cannot
be implemented exactly for short-term CPU scheduling, as there is no way to accurately know in advance
the length of the next CPU burst. Only an approximation of the length can be used to implement the
algorithm.
But the SJF scheduling algorithm is provably optimal and thus serves as a benchmark to compare other
CPU scheduling algorithms. The SJF algorithm can be either preemptive or non-preemptive. In the
preemptive case, if a new process joins the ready queue with a next CPU burst shorter than what remains
of the currently executing process, the CPU is taken away and allocated to the new process. In the
non-preemptive case, the currently executing process is allowed to finish its burst, and the new process
gets the next chance, it being the process with the shortest next CPU burst.
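The preemptive variant (often called shortest-remaining-time-first) can be sketched as a per-time-unit simulation. The arrival times and burst lengths here are made-up illustration values, not from the text.

```python
# Sketch of preemptive SJF (shortest-remaining-time-first):
# at each time unit, run the ready process with the least remaining burst.
def srtf(procs):
    """procs: dict name -> (arrival time, burst length).
    Returns dict name -> completion time."""
    remaining = {name: burst for name, (_, burst) in procs.items()}
    done, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:                    # CPU idle until the next arrival
            t += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])   # shortest remaining
        remaining[cur] -= 1              # run it for one time unit
        t += 1
        if remaining[cur] == 0:
            done[cur] = t
            del remaining[cur]
    return done

# Illustrative workload: P1 arrives first but is longest.
print(srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 2)}))
# P3 finishes first (at t=4) even though it arrived last: each new,
# shorter arrival preempts the process that was running.
```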
trapped as an addressing error. If the offset is legal, the segment table gives a base value that is added to
the offset to generate the physical address.
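The limit check and base addition just described can be sketched as follows. The segment table contents are made-up illustration values.

```python
# Sketch of segment-table address translation: check the offset against
# the segment's limit, then add the base. Table values are illustrative.
segment_table = {          # segment number -> (base, limit)
    0: (1400, 1000),
    1: (6300, 400),
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                         # offset beyond the segment:
        raise MemoryError("addressing error")   # trap, as described above
    return base + offset                        # physical address

print(translate(1, 53))    # 6300 + 53 = 6353
```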
b) External fragmentation
Answer:
Segments of user programs are allocated memory by the job scheduler. This is similar to paging, where
segments could be treated as variable-sized pages. So a first-fit or best-fit allocation scheme could be used,
since this is a dynamic storage allocation problem. But, as with variable sized partition scheme,
segmentation too causes external fragmentation. When any free memory segment is too small to be
allocated to a segment to be loaded, then external fragmentation is said to have occurred. Memory
compaction is a solution to overcome external fragmentation.
The severity of the problem of external fragmentation depends upon the average size of a segment. At one
extreme, each process could be a single segment, which is nothing but the variable-sized partition scheme.
At the other extreme, every byte could be a segment, in which case external fragmentation is totally absent
but relocation through the segment table becomes a very big overhead. A compromise is fixed-sized small
segments, which is the concept of paging. Generally, if segments, even though variable, are small, external
fragmentation is also less.
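A small first-fit sketch makes external fragmentation concrete. The hole list and request sizes are made-up illustration values; `first_fit` is a helper name introduced here.

```python
# First-fit allocation over a list of free holes, illustrating how
# variable-sized requests leave small, individually unusable fragments.
def first_fit(holes, size):
    """holes: list of (start, length) free blocks.
    Returns (start, new_holes) for the first hole that fits, else None."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            leftover = (start + size, length - size)
            new_holes = holes[:i] + ([leftover] if leftover[1] else []) + holes[i + 1:]
            return start, new_holes
    return None   # no single hole is big enough

holes = [(0, 100), (300, 50), (600, 200)]
addr, holes = first_fit(holes, 180)   # only the third hole fits
print(addr, holes)                    # 600 [(0, 100), (300, 50), (780, 20)]

# External fragmentation: 170 bytes are free in total (100 + 50 + 20),
# yet a 150-byte segment cannot be placed in any single hole.
print(first_fit(holes, 150))          # None
```

Compaction, mentioned above, would slide the allocated segments together so these scattered holes merge into one usable block.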
6 What is a computer virus? List the types of viruses and their various infection methods.
2+2+6=10
Answer:
Computer Virus
A computer virus is written with the intention of infecting other programs. It is a part of a program that
piggybacks onto a valid program. It differs from a worm in the following ways:
- A worm is a complete program by itself and can execute independently, whereas a virus does not
operate independently.
- A worm consumes only system resources, but a virus causes direct harm to the system by corrupting
code as well as data.
Types of viruses
There are several types of computer viruses. New types get added every now and then. Some of the
common varieties are:
- Boot sector infectors
- Memory resident infectors
- File specific infectors
- Command processor infectors
- General purpose infectors
Infection methods
Viruses infect other programs in the following ways:
- Append: the virus code appends itself to a valid, unaffected program.
- Replace: the virus code replaces the original executable program, either completely or partially.
- Insert: the virus code is inserted into the body of the executable code to carry out some undesirable
actions.
- Delete: the virus code deletes some part of the executable program.
- Redirect: the normal flow of the program is changed to execute virus code that exists as an appended
portion of an otherwise normal program.