
http://sundaros.blogspot.com/2010/09/old-test-i.html
http://quizlet.com/26336566/csci-451-ch1-5-book-qx-answers-flash-cards/

1. What is the distinction between spatial locality and temporal locality?
Spatial locality refers to the tendency of execution to involve a number of memory locations that are
clustered. Temporal locality refers to the tendency for a processor to access memory locations that have
been used recently. Spatial locality is generally exploited by using larger cache blocks and by
incorporating prefetching mechanisms (fetching items of anticipated use) into the cache control logic.
Temporal locality is exploited by keeping recently used instruction and data values in cache memory and
by exploiting a cache hierarchy.


Consider the following code fragment:

for ( int i = 0; i < 20; i++ ) {
    for ( int j = 0; j < 10; j++ ) {
        a[i][j] = 0;
    }
}
a) Give an example of spatial locality in the code, if there is one.
Spatial locality occurs when the process uses nearby memory contents. This occurs when the array entries are
accessed more or less sequentially within the inner loop. One could also argue that spatial locality
applies to the instruction fetches themselves, since the loop's instructions occupy consecutive memory locations.
b) Give an example of temporal locality in the code, if there is one.
Temporal locality occurs when the same memory location is used repeatedly over some relatively short time
interval. This occurs when the loop index variables i and j are repeatedly incremented. The repeated accesses
to a[i], for a particular value of i, occur within a relatively short time.
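In C (assumed here, since the fragment uses C syntax), two-dimensional arrays are stored row by row, so the traversal order determines how well spatial locality is exploited. A minimal sketch contrasting the fragment's cache-friendly order with a column-order traversal (function names are illustrative):

```c
#define ROWS 20
#define COLS 10

int a[ROWS][COLS];

/* Row-major order, as in the fragment above: consecutive values of j touch
 * adjacent memory words, so each cache line brought in is used in full
 * (strong spatial locality). */
void zero_row_major(void) {
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = 0;
}

/* Column-major order: successive accesses are COLS words apart, so spatial
 * locality is much weaker; on large arrays this traversal runs measurably
 * slower even though it performs exactly the same work. */
void zero_col_major(void) {
    for (int j = 0; j < COLS; j++)
        for (int i = 0; i < ROWS; i++)
            a[i][j] = 0;
}
```

Both functions produce the same result; only the memory access pattern, and therefore cache behavior, differs.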











2. In virtually all systems that include DMA modules, DMA access to main memory is given higher priority
than processor access to main memory. Why?
If the processor is held up in attempting to read or write memory, usually no damage
occurs, except a slight loss of time. But a DMA transfer may be from/to a device that is
sending/receiving data in a continuous time sensitive stream (e.g., a network port, or a
tape drive) and cannot be stopped without the loss of data.

Put another way: the processor accesses memory very frequently, so if the DMA module did not
have priority it could starve waiting for the bus to become free.



3. A computer has a cache, main memory, and a disk used for virtual memory. If a referenced
word is in the cache, 20 ns are required to access it. If it is in main memory but
not in the cache, 60 ns are needed to load it into the cache (this includes the time to
originally check the cache), and then the reference is started again. If the word is not
in main memory, 12 ms are required to fetch the word from disk, followed by 60 ns to
copy it to the cache, and then the reference is started again. The cache hit ratio is 0.9
and the main-memory hit ratio is 0.6. What is the average time in ns required to access
a referenced word on this system? Is this design any good?

Answer:
There are three cases to consider:

Location of referenced word         Probability           Total access time (ns)
In cache                            0.90                  20
Not in cache, but in main memory    (0.10)(0.6) = 0.06    60 + 20 = 80
Not in cache or main memory         (0.10)(0.4) = 0.04    12 ms + 60 + 20 = 12,000,080

So the average access time would be:
Avg = (0.90)(20) + (0.06)(80) + (0.04)(12,000,080) = 18 + 4.8 + 480,003.2 = 480,026 ns
The design is no good: the average access time is over 24,000 times (480,026/20) the cache access
time, because the 12 ms disk miss penalty dominates even at a 4% miss rate.
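The arithmetic above can be checked directly. A small sketch (constants taken from the problem statement; the function name is illustrative):

```c
/* Expected access time in ns, mixing the three cases by their probabilities. */
double avg_access_ns(void) {
    double cache_hit = 20.0;                 /* word already in cache */
    double mem_hit   = 60.0 + 20.0;          /* load into cache, then restart the reference */
    double disk_hit  = 12e6 + 60.0 + 20.0;   /* 12 ms = 12,000,000 ns disk fetch, copy, restart */

    double p_cache = 0.90;          /* cache hit ratio */
    double p_mem   = 0.10 * 0.6;    /* cache miss, main-memory hit  = 0.06 */
    double p_disk  = 0.10 * 0.4;    /* cache miss, main-memory miss = 0.04 */

    return p_cache * cache_hit + p_mem * mem_hit + p_disk * disk_hit;
}
```

Evaluating this reproduces the 480,026 ns figure derived above.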


4. Explain the distinction between a real address and a virtual address.
A virtual address refers to a location in a process's virtual address space; its contents may reside on
disk or, at times, in main memory. A real address is an address in main memory. A program references
a word by means of a virtual address consisting of a page number and an offset within the page. Each
page of a process may be located anywhere in main memory. The paging system provides a dynamic
mapping between the virtual address used in the program and a real address, or physical address, in
main memory.
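A minimal sketch of that page-number/offset mapping, assuming hypothetical 4 KB pages and a tiny flat page table (real MMUs use multi-level tables and valid bits, omitted here):

```c
#include <stdint.h>

#define PAGE_SIZE   4096u   /* assumed page size: 4 KB */
#define PAGE_SHIFT  12      /* log2(PAGE_SIZE) */

/* Toy page table: index = virtual page number, value = physical frame number. */
uint32_t page_table[8] = {5, 2, 7, 0, 3, 1, 6, 4};

/* Translate a virtual address to a real (physical) address:
 * split it into page number + offset, look up the frame, recombine. */
uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr >> PAGE_SHIFT;        /* virtual page number */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* offset within the page */
    return (page_table[page] << PAGE_SHIFT) | offset;
}
```

For example, virtual address 0x1004 has page number 1 and offset 4; with frame 2 backing that page, the real address is 0x2004.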
Q-9. Explain the difference between a monolithic kernel and a microkernel.
Ans. Monolithic kernel: In the original form of this architecture, all the basic system services,
such as process and memory management and interrupt handling, were packaged into a single
module in kernel space. This led to some serious drawbacks:
1) The kernel was huge in size.
2) Maintainability was poor: a bug fix or the addition of a new feature required recompilation of
the whole kernel, which could consume hours.
In the modern approach to the monolithic architecture, the kernel consists of different modules
that can be dynamically loaded and unloaded. With this modular approach, maintenance becomes
much easier: only the affected module needs to be unloaded and reloaded when it is changed or a
bug is fixed, so there is no need to bring down and recompile the whole kernel for the smallest
change.
Microkernel: This architecture mainly addresses the problem of the ever-growing kernel code size
that could not be controlled in the monolithic approach. It allows basic services such as device
driver management, protocol stacks, and the file system to run in user space. This reduces the
kernel code size and also increases the security and stability of the OS, since only the bare
minimum of code runs in kernel mode. If a basic service such as the network service crashes due
to a buffer overflow, only that service's memory is corrupted, leaving the rest of the system still
functional.
In this architecture, the basic OS services that are moved to user space run as servers, which other
programs in the system use through interprocess communication (IPC).


2.5
Read the following description and answer the question below.

In IBM's mainframe operating system, OS/390, one of the major modules in the kernel
is the System Resource Manager (SRM).This module is responsible for the allocation
of resources among address spaces (processes).The SRM gives OS/390 a degree of
sophistication unique among operating systems. No other mainframe OS, and certainly
no other type of OS, can match the functions performed by SRM. The concept of resource
includes processor, real memory, and I/O channels. SRM accumulates statistics
pertaining to utilization of processor, channel, and various key data structures. Its purpose
is to provide optimum performance based on performance monitoring and analysis.
The installation sets forth various performance objectives, and these serve as
guidance to the SRM, which dynamically modifies installation and job performance
characteristics based on system utilization. In turn, the SRM provides reports that enable
the trained operator to refine the configuration and parameter settings to improve
user service.
This problem concerns one example of SRM activity. Real memory is divided into
equal-sized blocks called frames, of which there may be many thousands. Each frame
can hold a block of virtual memory referred to as a page. SRM receives control approximately
20 times per second and inspects each and every page frame. If the page
has not been referenced or changed, a counter is incremented by 1. Over time, SRM
averages these numbers to determine the average number of seconds that a page
frame in the system goes untouched.

What might be the purpose of this and what action might SRM take?



The purpose is to measure the degree of "stress" on real memory: the lower the average untouched
time, the greater the contention for page frames. The operator, or SRM itself, can respond by
reducing the number of active jobs allowed on the system, which keeps this average high. A typical
guideline is that this average should be kept above 2 minutes [IBM86]. This may seem like a lot, but it
isn't.
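Since SRM inspects frames roughly 20 times per second and bumps a frame's counter at each inspection in which the frame went untouched, a counter divided by 20 approximates seconds untouched. A sketch of the averaging step (function name and data layout are assumptions, not SRM's actual internals):

```c
/* Average untouched time across all frames, in seconds, assuming each
 * counter is incremented at each of SRM's ~20 inspections per second
 * for which the frame was neither referenced nor changed. */
double avg_untouched_seconds(const unsigned counters[], int nframes) {
    const double inspections_per_second = 20.0;
    double total = 0.0;
    for (int i = 0; i < nframes; i++)
        total += counters[i] / inspections_per_second;
    return total / nframes;
}
```

For instance, counters of 2400, 2400, 4800, and 0 correspond to 120, 120, 240, and 0 untouched seconds, averaging 120 seconds, exactly the 2-minute guideline.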


2.2
An I/O-bound program is one that, if run alone, would spend more time waiting for
I/O than using the processor. A processor-bound program is the opposite. Suppose a
short-term scheduling algorithm favors those programs that have used little processor
time in the recent past. Explain why this algorithm favors I/O-bound programs and
yet does not permanently deny processor time to processor-bound programs.
a. I/O-bound processes use little processor time; thus, the algorithm will favor
I/O-bound processes.
b. If a CPU-bound process is denied access to the processor for a while, then:
- the CPU-bound process will not have used the processor in the recent past;
- so the algorithm will eventually favor it, and the CPU-bound process is not permanently denied access.
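The selection rule can be sketched as picking whichever ready process has used the least processor time recently (the struct fields and function name are illustrative; real schedulers decay this figure over time rather than keeping a raw count):

```c
/* Minimal ready-queue entry: recent CPU usage drives selection. */
struct proc {
    int id;
    unsigned recent_cpu_ticks;   /* processor time used in the recent past */
};

/* Pick the ready process with the least recent CPU usage. An I/O-bound
 * process accumulates few ticks while it waits on I/O, so it wins often;
 * a CPU-bound process that has been denied the processor accumulates none,
 * so its figure eventually drops below the others and it runs too. */
int pick_next(const struct proc ready[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (ready[i].recent_cpu_ticks < ready[best].recent_cpu_ticks)
            best = i;
    return ready[best].id;
}
```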


2.9 Explain the difference between a monolithic kernel and a microkernel.
monolithic kernel A large kernel containing virtually the complete operating system, including
scheduling, file system, device drivers, and memory management. All the functional components
of the kernel have access to all of its internal data structures and routines. Typically, a monolithic kernel
is implemented as a single process, with all elements sharing the same address space.

microkernel A small privileged operating system core that provides process scheduling, memory
management, and communication services and relies on other processes to perform some of the
functions traditionally associated with the operating system kernel.

Monolithic kernel:

1. The kernel is a single large block of code.
2. It runs as a single process within a single address space.
3. Virtually any procedure can call any other procedure.
4. If anything is changed, all modules and functions must be recompiled, relinked, and the system
rebooted.
5. It is difficult to add new device driver or file system functions.

Microkernel:

1. Only core OS functions are in the kernel.
2. Less essential services and applications are built on the microkernel and execute in user mode.
3. This simplifies implementation, provides flexibility, and is better suited to a distributed environment.





3.4 What does it mean to preempt a process?
Reclaiming a resource from a process before the process has finished
using it.
3.5 What is swapping and what is its purpose?
Swapping interchanges the contents of an area of main storage with the contents of an area in
secondary memory. Its purpose is to free main memory, allowing more processes to be active than
can fit in main memory at one time.


3.2 Consider a computer with N processors in a multiprocessor configuration.
a. How many processes can be in each of the Ready, Running, and Blocked states at
one time?
b. What is the minimum number of processes that can be in each of the Ready,
Running, and Blocked states at one time?

a. At most N processes can be in the Running state (one per processor). The number of processes in
the Ready and Blocked states is limited only by the total number of processes in the system (and by
the sizes of the OS's ready and blocked lists).

b. The minimum number in each state is 0: if the system is idle, no process is running, and there need
be no ready or blocked processes.









3.6 Consider the state transition diagram of Figure 3.9b. Suppose that it is time for the
OS to dispatch a process and that there are processes in both the Ready state and the
Ready/Suspend state, and that at least one process in the Ready/Suspend state has
higher scheduling priority than any of the processes in the Ready state.Two extreme
policies are as follows: (1) Always dispatch from a process in the Ready state, to minimize
swapping, and (2) always give preference to the highest-priority process, even
though that may mean swapping when swapping is not necessary. Suggest an intermediate
policy that tries to balance the concerns of priority and performance.
Penalize the Ready/Suspend processes by some fixed amount, such as one or two priority levels.
Then a Ready/Suspend process is chosen next only if it has a higher priority than the highest-
priority Ready process by several levels of priority.
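The suggested compromise reduces to a single comparison. A sketch, assuming a convention where a larger number means higher priority and using an illustrative penalty parameter:

```c
enum queue { READY, READY_SUSPEND };

/* Dispatch from Ready/Suspend only when its best process, after paying a
 * fixed penalty representing the cost of swapping it in, still outranks
 * the best process already in main memory. Larger number = higher priority. */
enum queue choose_queue(int best_ready_prio, int best_suspend_prio, int penalty) {
    return (best_suspend_prio - penalty > best_ready_prio) ? READY_SUSPEND : READY;
}
```

With a penalty of 2, a suspended process at priority 6 loses to a resident process at priority 5 (avoiding an unnecessary swap), while one at priority 9 still wins.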
