Booting Process
When we switch on a computer, the BIOS (Basic Input Output System) program first takes
control. The BIOS program is firmware and is usually stored on an EPROM (Erasable
Programmable Read Only Memory) IC (integrated circuit). When the BIOS program runs, it
first interacts with all the peripheral devices attached to the system and writes a summary
report into some predefined memory locations known as the BIOS data area (specifically in
the lower memory block). For example, locations 413H and 414H contain the size of the
memory in the system, location 417H contains keyboard status information, and so on. After
completing this process the BIOS program produces a beep sound to indicate successful
completion of the POST (Power-On Self-Test) operation. After that, the BIOS executes a
bootstrap loader program to load the O.S. into the computer's memory.
While loading, the O.S. reads the BIOS data area to learn the system configuration and,
without any cross-check, configures itself accordingly. After being fully and successfully
loaded, the O.S. takes charge of memory management, disk management, I/O management,
deadlock management, process scheduling and synchronization, etc.
Nowadays the O.S. is loaded in the high memory block, so a replica of the BIOS data area is
also copied into the high memory region. This memory is called shadow memory. Thus a
computer boots.
After successful loading, the OS performs all sorts of management-related operations, for
example:
Process Management
Disk Management
I/O Management
Memory Management
CPU Scheduling
Protection and Security
Virtual Memory Management
Process Synchronization
Deadlock Management
Definitions of PROCESS
A program in execution
An asynchronous activity.
The animated spirit of a procedure.
The locus of control of a procedure in execution.
That which is manifested by the existence of a process control block in the operating
system.
That entity to which processors are assigned.
The dispatchable unit.
Process States
A process goes through a series of discrete process states. Various events can cause a process
to change state.
When a job is admitted to the system, a corresponding process is created and normally
inserted at the back of the ready list. The process gradually moves to the head of the ready
list as the processes before it complete their turns at using the CPU. When the process
reaches the head of the list and the CPU becomes available, the process starts using the CPU
and is said to make a state transition from the Ready state to the Running state. The
assignment of the CPU to the first process on the ready list is called dispatching and is
performed by a system entity called the dispatcher. We indicate this transition as follows:
Dispatch <process_name>: ready → running
To prevent any one process from monopolizing the system, either accidentally or maliciously,
the OS sets a hardware interrupting clock (or interval timer) to allow this user to run for a
specific time interval or quantum. If the process does not voluntarily relinquish the CPU
before the time interval expires, the interrupting clock generates an interrupt, causing the OS
to regain control. The OS then makes the previously running process ready and makes the
first process on the ready list running. So we have:
Timerrunout <process_name>: running → ready and
Dispatch <process_name>: ready → running
If a running process initiates an input or output operation before its quantum expires, the
running process voluntarily relinquishes the CPU. So the transition is
Block <process_name>: running → blocked
The only other allowable state transition occurs when an I/O operation completes. So we
have
Wakeup <process_name>: blocked → ready
On a multiprocessor system a running process may be suspended by another process running
at that moment on a different processor. So the transition is
Suspend <process_name>: ready → suspendedready
A suspendedready process may be made ready by another process. So the transition is
Resume <process_name>: suspendedready → ready
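The six transitions above can be encoded as a small table-driven state machine. The sketch below is only illustrative (the class and names are not from the text); any move not in the table is rejected.

```python
# Sketch: the six state transitions from the text, as a lookup table.
# ("suspendedready" is written as one word, matching the text.)

TRANSITIONS = {
    ("ready", "dispatch"): "running",
    ("running", "timerrunout"): "ready",
    ("running", "block"): "blocked",
    ("blocked", "wakeup"): "ready",
    ("ready", "suspend"): "suspendedready",
    ("suspendedready", "resume"): "ready",
}

class Process:
    def __init__(self, name):
        self.name = name
        self.state = "ready"  # a newly admitted process joins the ready list

    def apply(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition {event!r} in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state
```

A process can then be walked through dispatch, timerrunout, block, wakeup, suspend, and resume, and an out-of-order event (e.g., wakeup while ready) raises an error.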
Concurrent Processes
A process is a sequence of operations carried out one at a time. The precise definition
of an operation depends on the level of detail at which the process is described. For some
purposes we may regard a process as a single operation A, as shown at the top of the figure.
For other purposes, it may be more convenient to look upon the same process as a sequence
of simpler operations B, C, D, E. And when we examine it in still more detail, previously
recognized operations can be partitioned into still simpler ones: F, G, H and so on. So a
process is described in terms of increasingly simpler operations which are carried out in
increasingly smaller grains of time.
Processes are concurrent if their executions overlap in time. The following figure
shows three concurrent processes P, Q and R.
S1;
cobegin
    S2;
    begin
        S4;
        cobegin S5; S6 coend;
        S7;
        cobegin S8; S9 coend
    end;
    S3;
coend;
S10;
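The precedence structure above can be sketched with Python threads standing in for the concurrent branches. Here each statement Si is just a stub that records its name (an assumption for illustration), which makes the ordering constraints observable: S1 runs first, S10 last, and within the middle branch S4 precedes S5/S6, which precede S7, which precedes S8/S9.

```python
import threading

trace = []
lock = threading.Lock()

def S(name):
    """Stand-in for statement Si: record that it ran."""
    with lock:
        trace.append(name)

def run_parallel(*tasks):
    """Run the given zero-argument tasks concurrently (cobegin ... coend)."""
    threads = [threading.Thread(target=t) for t in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def middle_branch():
    # begin S4; cobegin S5; S6 coend; S7; cobegin S8; S9 coend end
    S("S4")
    run_parallel(lambda: S("S5"), lambda: S("S6"))
    S("S7")
    run_parallel(lambda: S("S8"), lambda: S("S9"))

S("S1")
run_parallel(lambda: S("S2"), middle_branch, lambda: S("S3"))
S("S10")
```

S2 and S3 may interleave with the middle branch in any order, exactly as the cobegin/coend notation permits.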
Q.1. Solve the following crossword puzzle using the given hints:
Hints:
Top-Down:
1. A program in execution.
Sideway:
1. A lightweight process.
2. Assignment of CPU to the first process on the list.
3. Data structure containing certain important information about process.
Q.2. In a time-sharing operating system, when the time slot given to a process is completed,
the process goes from the RUNNING state to the
a) BLOCKED state
b) READY state
c) SUSPENDED state
d) TERMINATED state
Q.3. Suppose that a process is in the BLOCKED state waiting for some I/O service. When the
service is completed, it goes to the
a) RUNNING state
b) READY state
c) SUSPENDED state
d) TERMINATED state
Q.4. What are the five major activities of an operating system with regard to process management?
Q.5. When a process creates a new process using the fork() operation, which of the following
states is shared between the parent process and the child process?
a. Stack
b. Heap
c. Shared memory segments
Answers:
2. (b) The quantum is divided among the n users; when a process's quantum expires, it goes
from the RUNNING state to the READY state to wait for another time slice.
3.(b)
Defines the priority of a ready process using some parameters associated with
the process
Memory requirements
Total actual time the process spends in the system since its arrival
External priorities
System load
Random choice
[Gantt chart example (garbled in extraction; only the labels P2, P1 and the time markers 3, 6, 30 survive).]
Round-Robin Scheduling
Preemptive in nature
Preemption based on time slices or time quanta
Time quantum between 10 and 100 milliseconds
All user processes treated as having the same priority
Ready queue treated as a circular queue
New processes added to the rear of the ready queue
Preempted processes added to the rear of the ready queue
Scheduler picks up a process from the head of the queue and dispatches it
with a timer interrupt set after the time quantum
CPU burst < 1 quantum: the process releases the CPU voluntarily
Timer interrupt results in context switch and the process is put at the rear of the
ready queue
No process is allocated CPU for more than 1 quantum in a row
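The rules above can be captured in a few lines of simulation. The sketch below assumes all processes arrive at time 0; the process names and burst times in the example are hypothetical, since the original example table was lost in extraction.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin for processes that all arrive at time 0.

    bursts: dict {name: CPU burst}; returns (completion_times, avg_waiting)."""
    queue = deque(bursts.items())          # ready queue, head at the left
    t, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()  # scheduler picks the head
        run = min(quantum, remaining)      # run for at most one quantum
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back to the rear
        else:
            completion[name] = t           # burst finished within this quantum
    waiting = {n: completion[n] - bursts[n] for n in bursts}  # arrival = 0
    return completion, sum(waiting.values()) / len(waiting)
```

For example (hypothetical values), round_robin({"P1": 6, "P2": 3}, quantum=3) completes P2 at time 6 and P1 at time 9, with an average waiting time of 3.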
Consider the following processes
Throughput
In communication networks, such as Ethernet or packet radio, throughput or network
throughput is the average rate of successful message delivery over a communication
channel. This data may be delivered over a physical or logical link, or pass through a certain
network node. The throughput is usually measured in bits per second (bit/s or bps), and
sometimes in data packets per second or data packets per time slot.
The system throughput or aggregate throughput is the sum of the data rates that are
delivered to all terminals in a network.
The throughput can be analyzed mathematically by means of queueing theory, where the load
in packets per time unit is denoted the arrival rate λ, and the throughput in packets per time
unit is denoted the departure rate μ.
Throughput is essentially synonymous with digital bandwidth consumption.
Q.1. Solve the following crossword puzzle using the given hints:
Hints:
Top-Down:
1. A small unit of time assigned to each process for utilizing the CPU.
Sideway:
1. Number of processes completed per time unit.
2. ---- chart is used for representing CPU scheduling.
3. Round Robin scheduling is a ---- scheduling.
Q.2.
If the CPU scheduling policy is FCFS, the average waiting time will be
a) 12.8 ms
b) 8 ms
c) 6 ms
d) none of these
Q.3.
If the CPU scheduling policy is SJF, the average waiting time (without pre-emption) will be
a) 16 ms
b) 12.8 ms
c) 6.8 ms
d) none of these
Q.4. Suppose that the following processes arrive for execution at the times indicated. Each
process will run the listed amount of time. In answering the questions, use nonpreemptive
scheduling and base all decisions on the information you have at the time the decision must
be made.
Process    Arrival Time    Burst Time
P1         0.0             8
P2         0.4             4
P3         1.0             1
a. What is the average turnaround time for these processes with the FCFS scheduling
algorithm?
b. What is the average turnaround time for these processes with the SJF scheduling
algorithm?
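Both averages in Q.4 can be computed with a small non-preemptive simulator (a sketch, not an algorithm from the text): among the processes that have arrived, it picks the earliest arrival for FCFS or the shortest burst for SJF.

```python
def avg_turnaround(procs, policy):
    """procs: list of (name, arrival, burst); policy: 'fcfs' or 'sjf'.
    Non-preemptive; returns the average turnaround time."""
    pending, t, turnaround = list(procs), 0.0, {}
    while pending:
        # Processes that have arrived by time t; if none, jump to next arrival.
        ready = [p for p in pending if p[1] <= t] or [min(pending, key=lambda p: p[1])]
        if policy == "fcfs":
            pick = min(ready, key=lambda p: p[1])   # earliest arrival first
        else:
            pick = min(ready, key=lambda p: p[2])   # shortest burst first
        name, arrival, burst = pick
        t = max(t, arrival) + burst                 # run to completion
        turnaround[name] = t - arrival
        pending.remove(pick)
    return sum(turnaround.values()) / len(turnaround)
```

For the process set of Q.4 this gives roughly 10.53 ms for FCFS and 9.53 ms for SJF.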
Q.5. Suppose that the following processes arrive for execution at the times indicated. Each
process will run the listed amount of time. Use preemptive SJF scheduling to calculate
average waiting time.
Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
2.
3.
4. a. 10.53
b. 9.53
5.
Average waiting time = ((10-1-0) + (1-1) + (17-2) + (5-3))/4 = 26/4 = 6.5 milliseconds.
Average turnaround time = ((17-0) + (5-1) + (26-2) + (10-3))/4 = 52/4 = 13 milliseconds.
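The figures in answer 5 can be reproduced by a shortest-remaining-time-first simulation that steps one time unit at a time (a sketch; integer arrivals and bursts assumed).

```python
def preemptive_sjf(procs):
    """procs: dict {name: (arrival, burst)} with integer times.
    Returns (avg_waiting, avg_turnaround) under preemptive SJF (SRTF)."""
    remaining = {n: b for n, (a, b) in procs.items()}
    t, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1                                    # CPU idle, no arrivals yet
            continue
        p = min(ready, key=lambda n: remaining[n])    # shortest remaining time
        remaining[p] -= 1                             # run p for one time unit
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    turnaround = {n: finish[n] - procs[n][0] for n in procs}
    waiting = {n: turnaround[n] - procs[n][1] for n in procs}
    k = len(procs)
    return sum(waiting.values()) / k, sum(turnaround.values()) / k
```

With the Q.5 process set it yields an average waiting time of 6.5 and an average turnaround time of 13, matching the arithmetic above.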
6. Processes that need more frequent servicing, for instance, interactive
processes such as editors, can be in a queue with a small time quantum.
Processes with no need for frequent servicing can be in a queue with a larger
quantum, requiring fewer context switches to complete the processing, and thus
making more efficient use of the computer.
Correctness criteria
A solution to the simplified problem is correct if the following criteria are satisfied:
(1) Scheduling of waiting processes: Readers can use the resource simultaneously, and so
can writers, but the number of running processes cannot exceed the number of active
processes:
0 <= rr <= ar and 0 <= rw <= aw
(2) Mutual exclusion of running processes: Readers and writers cannot use the resource
at the same time, i.e., rr = 0 or rw = 0.
Algorithms
procedure grant reading (var v: T; reading: semaphore);
begin
    with v do
        if aw = 0 then
            while rr < ar do
            begin
                rr := rr + 1;
                signal(reading);
            end
end;

procedure grant writing (var v: T; writing: semaphore);
begin
    with v do
        if rr = 0 then
            while rw < aw do
            begin
                rw := rw + 1;
                signal(writing);
            end
end;
The readers and writers problem solved with semaphores
type T = record ar, rr, aw, rw: integer end
var v: shared T; reading, writing: semaphore;
Initially ar = rr = aw = rw = 0 and reading = writing = 0
cobegin
    begin (* reader *)
        region v do
        begin
            ar := ar + 1;
            grant reading(v, reading);
        end
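The fragment above, in Brinch Hansen's shared-record notation, is truncated in this copy. As a complementary sketch, here is the classic two-semaphore "first readers-writers" solution in Python. This is a different formulation from the text's algorithm, and the class name RWLock is my own.

```python
import threading

class RWLock:
    """First readers-writers lock: readers share, writers are exclusive."""
    def __init__(self):
        self.mutex = threading.Semaphore(1)  # protects reader_count
        self.wrt = threading.Semaphore(1)    # held by a writer or the reader group
        self.reader_count = 0

    def acquire_read(self):
        with self.mutex:
            self.reader_count += 1
            if self.reader_count == 1:       # first reader locks out writers
                self.wrt.acquire()

    def release_read(self):
        with self.mutex:
            self.reader_count -= 1
            if self.reader_count == 0:       # last reader admits writers again
                self.wrt.release()

    def acquire_write(self):
        self.wrt.acquire()

    def release_write(self):
        self.wrt.release()
```

Readers overlap freely; a writer waits until the last reader leaves, and while it holds the lock, arriving readers queue behind it.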
Q.1. Solve the following crossword puzzle using the given hints:
Hints:
Top-Down:
1. A synchronization tool having two standard atomic operations.
Sideway:
1. A situation where a process is blocking progress of another process and vice-versa.
2. In ----- exclusion at least one resource must be held in a non-sharable mode.
3. Each process has a segment of code for using resources which is called a ---- section.
Q.2. A state is safe if the system can allocate resources to each process (up to its maximum)
in some order and still avoid deadlock. Then
a) deadlock state is unsafe
b) unsafe state may lead to a deadlock situation
c) deadlock state is a subset of unsafe state.
d) all of these
Process    Max    Current    Need
P1         10     5          5
P2         4      2          2
P3         9      3          6
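The table above can be fed to a Banker's-style safety check for a single resource type. The text does not state how many instances are free, so the counts below are assumptions: with, say, 12 total instances, 12 − 10 = 2 remain available and the state is unsafe, while 3 free instances would make it safe.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check for a single resource type (sketch).
    available: free instances; max_need/allocation: per-process lists."""
    need = [m - a for m, a in zip(max_need, allocation)]
    free, done = available, [False] * len(max_need)
    progress = True
    while progress:
        progress = False
        for i in range(len(done)):
            if not done[i] and need[i] <= free:
                free += allocation[i]   # i can run to completion and release
                done[i] = True
                progress = True
    return all(done)
```

The check repeatedly finishes any process whose remaining need fits in the free pool; the state is safe exactly when every process can finish this way.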
PAGING
In computer operating systems there are various ways in which the operating system can
store and retrieve data from secondary storage for use in main memory. One such memory
management scheme is referred to as paging. In the paging memory-management scheme,
the operating system retrieves data from secondary storage in same-size blocks called pages.
The main advantage of paging is that it allows the physical address space of a process to be
noncontiguous. Prior to paging, systems had to fit whole programs into storage contiguously,
which caused various storage and fragmentation problems.
Paging is an important part of virtual memory implementation in most contemporary
general-purpose operating systems, allowing them to use disk storage for data that does not
fit into physical random-access memory (RAM). Paging is usually implemented as
architecture-specific code built into the kernel of the operating system.
Overview
The main functions of paging are performed when a program tries to access pages that are
not currently mapped to physical memory (RAM). This situation is known as a page fault.
The operating system must then take control and handle the page fault, in a manner invisible
to the program. Therefore, the operating system must:
1. Determine the location of the data on disk.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to refer to the new page frame.
5. Return control to the program, transparently retrying the instruction that caused the
page fault.
Because RAM is faster than auxiliary storage, paging is avoided until there is not enough
RAM to store all the data needed. When this occurs, a page in RAM is moved to auxiliary
storage, freeing up space in RAM for use. Thereafter, whenever the page in secondary
storage is needed, a page in RAM is saved to auxiliary storage so that the requested page can
then be loaded into the space left behind by the old page. Efficient paging systems must
determine the page to swap by choosing one that is least likely to be needed within a short
time. There are various page replacement algorithms that try to do this.
Most operating systems use some approximation of the least recently used (LRU) page
replacement algorithm (exact LRU is too expensive to implement on current hardware) or a
working-set-based algorithm.
If a page in RAM has been modified (i.e., the page is dirty) and is chosen to be swapped out,
it must be written back to auxiliary storage; an unmodified page can simply be discarded.
To further increase responsiveness, paging systems may employ various strategies to predict
which pages will be needed soon, so that they can be loaded preemptively.
Thrashing
Most programs reach a steady state in their demand for memory locality both in terms of
instructions fetched and data being accessed. This steady state is usually much less than the
total memory required by the program. This steady state is sometimes referred to as the
working set: the set of memory pages that are most frequently accessed.
Virtual memory systems work most efficiently when the ratio of the working set to the total
number of pages that can be stored in RAM is low enough that the time spent resolving page
faults is not a dominant factor in the workload's performance. A program that works with
huge data structures will sometimes require a working set that is too large to be efficiently
managed by the page system resulting in constant page faults that drastically slow down the
system. This condition is referred to as thrashing: pages are swapped out and then accessed
causing frequent faults.
The Optimal Page Replacement Algorithm
The best possible page replacement algorithm is easy to describe but impossible to
implement. It goes like this. At the moment that a page fault occurs, some set of pages is in
memory. One of these pages will be referenced on the very next instruction (the page
containing that instruction). Other pages may not be referenced until 10, 100, or perhaps
1000 instructions later. Each page can be labeled with the number of instructions that will be
executed before that page is first referenced.
The Not Recently Used Page Replacement Algorithm
In order to allow the operating system to collect useful statistics about which pages are being
used and which ones are not, most computers with virtual memory have two status bits
associated with each page. R is set whenever the page is referenced (read or written). M is set
when the page is written to (i.e., modified).
The bits are contained in each page table entry. It is important to realize that these bits must
be updated on every memory reference, so it is essential that they be set by the hardware.
Once a bit has been set to 1, it stays 1 until the operating system resets it to 0 in software.
The First-In, First-Out (FIFO) Page Replacement Algorithm
The operating system maintains a list of all pages currently in memory, with the page at the
head of the list the oldest one and the page at the tail the most recent arrival. On a page fault,
the page at the head is removed and the new page added to the tail of the list.
The Least Recently Used (LRU) Page Replacement Algorithm
A good approximation to the optimal algorithm is based on the observation that pages that
have been heavily used in the last few instructions will probably be heavily used again in the
next few. Conversely, pages that have not been used for ages will probably remain unused
for a long time. This idea suggests a realizable algorithm: when a page fault occurs, throw
out the page that has been unused for the longest time. This strategy is called LRU paging.
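The FIFO and LRU policies described above can be compared directly on a reference string. The sketch below uses the well-known 20-reference textbook example string with 3 frames; the string itself is an assumption for illustration, not taken from this text.

```python
def count_faults(refs, frames, policy):
    """Count page faults for a reference string (sketch).
    policy: 'fifo' evicts the oldest arrival, 'lru' the least recently used."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "lru":        # a hit refreshes recency under LRU
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                    # page fault
        if len(memory) == frames:
            memory.pop(0)              # evict from the head of the list
        memory.append(page)
    return faults
```

The list doubles as a queue: under FIFO its order is arrival order, while under LRU hits move the page to the tail, so the head is always the least recently used page.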
Memory protection
Memory protection is a way to control memory access rights on a computer, and is a part of
most modern operating systems. The main purpose of memory protection is to prevent a
process from accessing memory that has not been allocated to it. This prevents a bug within a
process from affecting other processes, or the operating system itself.
Q.1. Solve the following crossword puzzle using the given hints:
Hints:
Top-Down:
1. The page replacement algorithm having the least page-fault rate.
Sideway:
2.
3.
4.
5.
3
5
4
none of these
Q.3. If there are 32 segments, each of size 1 k byte, then the logical address should have
a) 13 bits
b) 14 bits
c) 15 bits
d) 16 bits
Q.4. Name two differences between logical and physical addresses.
Q.5. Why are page sizes always powers of 2?
Q.6. In a paged memory, the page hit ratio is 0.35. The time required to access a page in
secondary memory is equal to 100 ns. The time required to access a page in primary
memory is 10 ns. The average time required to access a page is
a) 3.0 ns
b) 68.0 ns
c) 68.5 ns
d) 78.5 ns
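Q.6 can be checked with a one-line model that assumes the cost of a miss is just the secondary-memory access time (one common convention for such problems).

```python
def avg_access_time(hit_ratio, t_primary, t_secondary):
    # Hits are served from primary memory; a miss costs one secondary access.
    return hit_ratio * t_primary + (1 - hit_ratio) * t_secondary
```

With the figures given, avg_access_time(0.35, 10, 100) is 68.5 ns, i.e., option (c).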
Answers:
2.
3.
4. A logical address does not refer to an actual existing address; rather, it refers to an
abstract address in an abstract address space. Contrast this with a physical address that
refers to an actual physical address in memory. A logical address is generated by the CPU
and is translated into a physical address by the memory management unit (MMU).
Therefore, physical addresses are generated by the MMU.
5. Recall that paging is implemented by breaking up an address into a page and offset
number. It is most efficient to break the address into X page bits and Y offset bits, rather than
perform arithmetic on the address to calculate the page number and offset. Because each bit
position represents a power of 2, splitting an address between bits results in a page size that is
a power of 2.
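The bit-splitting described in answer 5 can be shown directly. The page size here (4 KB, i.e., 12 offset bits) is chosen purely for illustration.

```python
def split_address(addr, offset_bits):
    """Split a logical address into (page number, offset) with bit operations.
    This works precisely because the page size is 2**offset_bits."""
    page = addr >> offset_bits                 # high-order bits: page number
    offset = addr & ((1 << offset_bits) - 1)   # low-order bits: offset in page
    return page, offset
```

No division or modulo arithmetic is needed: a shift and a mask extract both fields.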
6. (c) Average access time = 0.35 × 10 + 0.65 × 100 = 68.5 ns.
File system
A file system is a method of storing and organizing computer files and their data. Essentially,
it organizes these files into a database for the storage, organization, manipulation, and
retrieval by the computer's operating system.
File systems are used on data storage devices such as hard disks or CD-ROMs to maintain
the physical location of the files. Beyond this, they might provide access to data on a file
server by acting as clients for a network protocol (e.g., NFS, SMB, or 9P clients), or they
may be virtual and exist only as an access method for virtual data (e.g., procfs). It is
distinguished from a directory service and registry.
File names
A file name is a name assigned to a file in order to identify its storage location; the file is
subsequently accessed by this name. Whether the file system has an
underlying storage device or not, file systems typically have directories which associate file
names with files, usually by connecting the file name to an index in a file allocation table of
some sort, such as the FAT in a DOS file system, or an inode in a Unix-like file system.
Directory structures may be flat, or allow hierarchies where directories may contain
subdirectories. In some file systems, file names are structured, with special syntax for
filename extensions and version numbers. In others, file names are simple strings, and
per-file metadata is stored elsewhere.
Metadata
Other bookkeeping information is typically associated with each file within a file system.
The length of the data contained in a file may be stored as the number of blocks allocated for
the file or as an exact byte count. The time that the file was last modified may be stored as
the file's timestamp. Some file systems also store the file creation time, the time it was last
accessed, and the time that the file's meta-data was changed. (Note that many early PC
operating systems did not keep track of file times.) Other information can include the file's
device type (e.g., block, character, socket, subdirectory, etc.), its owner user-ID and
group-ID, and its access permission settings (e.g., whether the file is read-only, executable,
etc.).
Arbitrary attributes can be associated on advanced file systems, such as NTFS, XFS,
ext2/ext3, some versions of UFS, and HFS+, using extended file attributes. This feature is
implemented in the kernels of Linux, FreeBSD and Mac OS X operating systems, and allows
metadata to be associated with the file at the file system level. This, for example, could be the
author of a document, the character encoding of a plain-text document, or a checksum.
Facilities
Traditional file systems offer facilities to create, move and delete both files and directories.
They lack facilities to create additional links to a directory (hard links in Unix), rename
parent links (".." in Unix-like OS), and create bidirectional links to files.
Traditional file systems also offer facilities to truncate, append to, create, move, delete and
in-place modify files. They do not offer facilities to prepend to or truncate from the
beginning of a file, let alone arbitrary insertion into or deletion from a file. The operations
provided are highly asymmetric and lack the generality to be useful in unexpected contexts.
For example, interprocess pipes in Unix have to be implemented outside of the file system
because the pipes concept does not offer truncation from the beginning of files.
Secure access
Secure access to basic file system operations can be based on a scheme of access control lists
or capabilities. Research has shown access control lists to be difficult to secure properly,
which is why research operating systems tend to use capabilities. Commercial file systems
still use access control lists.
Erasing blocks: Flash memory blocks have to be explicitly erased before they can be
rewritten. The time taken to erase blocks can be significant, thus it is beneficial to
erase unused blocks while the device is idle.
Random access: Disk file systems are optimized to avoid disk seeks whenever
possible, due to the high cost of seeking. Flash memory devices impose no seek
latency.
Wear levelling: Flash memory devices tend to wear out when a single block is
repeatedly overwritten; flash file systems are designed to spread out writes evenly.
Log-structured file systems have many of the desirable properties for a flash file system.
Such file systems include JFFS2 and YAFFS.
name, even if it appeared to be in a separate folder. MFS was quickly replaced with
Hierarchical File System, which supported real directories.
A recent addition to the flat file system family is Amazon's S3, a remote storage service,
which is intentionally simplistic to allow users the ability to customize how their data is
stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects
(similar, but not identical to the standard concept of a file). Advanced file management is
allowed by being able to use nearly any character (including '/') in the object's name, and the
ability to select subsets of the bucket's content based on identical prefixes.
4. Progressive Unix-like systems have also introduced a concept called supermounting;
see, for example, the Linux supermount-ng project. For example, a floppy disk that
has been supermounted can be physically removed from the system. Under normal
circumstances, the disk should have been synchronized and then unmounted before its
removal. Provided synchronization has occurred, a different disk can be inserted into
the drive. The system automatically notices that the disk has changed and updates the
mount point contents to reflect the new medium. Similar functionality is found on
Windows machines.
5. A similar innovation preferred by some users is the use of autofs, a system that, like
supermounting, eliminates the need for manual mounting commands. The difference from
supermount (apart from apparently supporting a greater range of applications, such as
access to file systems on network servers) is that devices are mounted transparently when
requests to their file systems are made, as would be appropriate for file systems on
network servers, rather than relying on events such as the insertion of media, as would be
appropriate for removable media.
FAT
The File Allocation Table (FAT) filing system, supported by all versions of Microsoft
Windows, was an evolution of that used in Microsoft's earlier operating system (MS-DOS
which in turn was based on 86-DOS). FAT ultimately traces its roots back to the short-lived
M-DOS project and Standalone disk BASIC before it. Over the years various features have
been added to it, inspired by similar features found on file systems used by operating systems
such as Unix.
Older versions of the FAT file system (FAT12 and FAT16) had file name length limits, a limit
on the number of entries in the root directory of the file system and had restrictions on the
maximum size of FAT-formatted disks or partitions. Specifically, FAT12 and FAT16 had a
limit of 8 characters for the file name, and 3 characters for the extension (such as .exe). This
is commonly referred to as the 8.3 filename limit. VFAT, which was an extension to FAT12
and FAT16 introduced in Windows NT 3.5 and subsequently included in Windows 95,
allowed long file names (LFN).
FAT32 also addressed many of the limits in FAT12 and FAT16, but remains limited compared
to NTFS.
exFAT (also known as FAT64) is the newest iteration of FAT, with certain advantages over
NTFS with regards to file system overhead. But unlike prior versions of FAT, exFAT is only
compatible with newer Windows systems, such as Windows 2003 and Windows 7.
NTFS
NTFS, introduced with the Windows NT operating system, allowed ACL-based permission
control. Hard links, multiple file streams, attribute indexing, quota tracking, sparse files,
encryption, compression, reparse points (directories working as mount-points for other file
systems, symlinks, junctions, remote storage links) are also supported, though not all these
features are well-documented.
Unlike many other operating systems, Windows uses a drive letter abstraction at the user
level to distinguish one disk or partition from another. For example, the path C:\WINDOWS
represents a directory WINDOWS on the partition represented by the letter C. The C drive is
most commonly used for the primary hard disk partition, on which Windows is usually
installed and from which it boots. This "tradition" has become so firmly ingrained that bugs
came about in older applications which made assumptions that the drive that the operating
system was installed on was C. The tradition of using "C" for the drive letter can be traced to
MS-DOS, where the letters A and B were reserved for up to two floppy disk drives. This in
turn derived from CP/M in the 1970s, which however used A: and B: for hard drives, and C:
for floppy disks, and ultimately from IBM's CP/CMS of 1967.
Network drives may also be mapped to drive letters.
In contrast, in a shared disk file system all nodes have equal access to the block storage
where the file system is located. On these systems the access control must reside on the
client.
Distributed file systems may include facilities for transparent replication and fault tolerance.
That is, when a limited number of nodes in a file system go offline, the system continues to
work without any data loss.
The difference between a distributed file system and a distributed data store can be vague,
but DFSes are generally geared towards use on local area networks.
Two-Level Directory
Create a separate directory for each user
- Efficient searching: file search confined to user directory
- Different users can have files with the same name
To support file sharing, one user must be able to name a file in another user's
directory
- Every file has a path name = user name + file name
E.g., /spell/mail/ptr/last
Q.1. Solve the following crossword puzzle using the given hints:
Hints:
Top-Down:
1. -------- contains information about the file.
2. A sequence of logical records.
Sideway:
1. A graph representing a directory scheme with no cycle.
2. A table used as a linked list and has one entry for each disk block.
Q.2. In which of the following directory systems is it possible to have multiple complete
paths for a file, starting from the root directory?
a) Single level directory
b) Two level directory
c) Tree structured directory
d) Acyclic graph directory
Disk sections
Partition table
Boot sector with
Boot signature (55AAH)
FAT # 1
FAT # 2
Root sector
Cluster
Disk Partition
Logical partition
(Using S/W e.g. Fdisk)
Physical partition
(Multiple HDD)
Bad block Management
Interleave factor
The main difference between FD and HD is that in FD the data is stored in the form of
magnetic particles, and in HD in the form of magnetic bubbles.
Two types of HDDs are common:
IDE (Integrated Device Electronics)
SCSI (Small Computer System Interface).
In IDE, bad blocks are marked as occupied by software. But in SCSI, bad blocks are replaced
by good blocks taken from a reserve pool of good blocks.
The portion of the track enclosed within a sector is called an allocation unit. Partial
allocation of an allocation unit is not possible. In FD and HD the sizes of allocation units
are 512 bytes and 2048 bytes respectively.
INPUT/OUTPUT DEVICES:
BLOCK DEVICES: Store information in fixed-size blocks (128 bytes to 1024 bytes)
that can be read/written independently. They may suffer from seek/latency time
(e.g., disks).
CHARACTER DEVICES: Deliver/accept a stream of characters without any block
structure. There is no seek time here (e.g., mouse, printer, terminals, network
interface).
Device Controllers
I/O units typically consist of a mechanical component and an electronic component called
the device controller or adapter. A controller can handle 2, 4, or 8 identical devices. The
O.S. deals with device controllers through standard interfaces (ANSI, IEEE, ISO, or de facto
ones), not with the devices directly.
A model for connecting the CPU, memory, controllers and I/O devices
Each controller has a few registers that are used for communicating with the CPU. On some
computers, these registers are the part of the regular memory address space. This scheme is
called Memory-mapped I/O.
With I/O-mapped I/O, the CPU performs I/O by writing commands (e.g. READ, WRITE,
SEEK, FORMAT, etc.) with parameters to the controller's registers. After accepting the
command, the controller goes off on its own and completes the job. After completion, the
controller generates an interrupt to inform the O.S. so it can regain control. The CPU then
gets the required data and status information from prespecified registers in the controller.
A DMA transfer is done entirely by the controller. The DMA controller stores the read block in its buffer, verifies the checksum, and copies it to the specified memory address. It then increments the DMA address and decrements the DMA count. The process is repeated until the count is exhausted. On completion the DMA controller informs the O.S. through an interrupt and the O.S. takes action accordingly. DMA improves I/O performance. To synchronize with the CPU, DMA uses two signals: HOLD and HLDA (HOLD Acknowledge). DMA operation is possible because of the buffer present in the controller circuit.
Spooling (Simultaneous Peripheral Operation Online):
It is a way of dealing with dedicated I/O devices in a multiprogramming system. It creates a special process, called a daemon, and a special directory, called a spooling directory.
Spooling is used not only for printers but also for network file transfer by a network daemon: e-mails are first stored in a spool directory and transmitted later.
Principles of I/O Software:
The primary goal is device independence, so that a program can access any file from any HDD or FDD without modification.
Uniform naming means the name of a file or a device should simply be a string or an integer and not depend on the device in any way. Any FDD, HDD, or CD-ROM can be mounted into the file system so that its files are accessed uniformly.
Errors should be handled as close to the hardware as possible. Read errors caused by dust on the read/write head often go away if the operation is repeated.
Both synchronous (blocking) and asynchronous (interrupt-driven) transfers are possible. In an asynchronous transfer the CPU starts the transfer and does other work until the interrupt arrives. In a synchronous transfer, after a READ command the program is suspended until the data are in the buffer.
I/O devices may be sharable (e.g. disks, which may be accessed from different terminals) or dedicated (e.g. a printer, which can be used by only one user at a time).
Page- 50 -
The main disadvantages are software complexity and the buffer requirement, and the technique is of no use if the CPU asks DMA to do the job.
RAM DISKS
A RAM disk uses a pre-allocated portion of main memory for storing its blocks. It has no seek or rotational delay, so it is suitable for frequently accessed data and programs. It is volatile in nature, not permanent.
Clocks
It is essential to the operation of any time-sharing system.
Clock Hardware
It has three H/W components: a crystal oscillator mounted under tension, a down counter, and a holding register holding the initial count. When the counter gets to zero, it causes a CPU interrupt, and the O.S. then deals with the situation. The clock has two modes of operation. In one-shot mode the circuit produces a single interrupt after the counter counts down to zero from the given initial count. In square-wave mode, after getting to zero and causing the interrupt, the holding register is automatically copied into the counter, and the whole process is repeated indefinitely, producing clock ticks.
Clock Software
The clock S/W or driver performs the following duties:
1. Maintaining the time of day
2. Preventing processes from running longer than they are allowed to
3. Accounting for CPU usage
4. Handling the ALARM system call made by user processes
5. Providing watchdog timers for parts of the system itself
6. Doing profiling, monitoring, and statistics gathering.
Page- 51 -
Let us consider an example in which the current time is 4200 and signals are pending for 4203, 4207, 4213, 4215, and 4216. These can be implemented as a linked list in which each entry stores the number of ticks after the previous entry: 3, 4, 6, 2, 1. On every clock tick only the head entry is decremented; when it reaches zero, its signal fires and the next entry becomes the head.
Terminals
Terminal Types
Terminal Hardware
Page- 52 -
Computers and terminals work with characters, but the communication line transmits one bit at a time. A UART (Universal Asynchronous Receiver/Transmitter) is therefore used for character-to-serial and serial-to-character conversion. UARTs are attached to the computer by plugging RS-232 interface cards into the bus.
Input Software
It operates in two different modes: (1) raw mode (character oriented) and (2) cooked mode (line oriented). In raw mode the characters are passed on for processing as they are obtained from the keyboard, after being kept in a memory buffer. In cooked mode, characters must be stored until an entire line has been accumulated, because the user may subsequently decide to erase part of it. Buffering is common here; in cooked mode special characters such as kill, erase, and filler characters are available.
Hints:
Top-Down:
1. It is a way of dealing with dedicated I/O devices in a multiprogramming system.
2. ---- was invented to free the CPU from I/O bound low-level work.
Sideway:
1. It is essential to the operation of any time-sharing system.
2. DMA controller initiates this signal to interrupt the CPU.
Page- 56 -
Page- 57 -
For example, the standard C compiler, cc, is in the outermost layer: it invokes a C preprocessor, a two-pass compiler, an assembler, and a loader (link-editor), all separate lower-level programs. Although a two-level hierarchy of application programs is seen here, users can extend the hierarchy to whatever levels are required.
Page- 58 -
The Inode
An inode contains the owner's user identification and group identification, protection bits, physical disk addresses for the contents of the file, the file size, the time of creation, the time of last use, the time of last modification to the file, the time of last modification to the inode, the number of links to the file, and an indication of whether the file is a directory, an ordinary file, a character special file (which represents a character-oriented device), or a pipe.
File blocks on disk can be contiguous, but it is more likely that they will be dispersed. The inode contains the information needed to locate a block in a file. The inode data structure tends to provide fast access for small files; larger files, by far in the minority on typical UNIX systems, can require longer access times.
The inode contains the information needed to locate all of a file's physical blocks. The information points directly to the first blocks of a file, and then uses various levels of indirection to point to the remaining blocks.
The block locator information in the inode consists of 13 fields. Fields 0 through 9 are direct block addresses; they point successively to the first 10 blocks of a file. Since the vast majority of UNIX files are small, these ten fields often suffice to locate all of a file's information directly.
Field 10 is an indirect block address; it points to a block containing the next group of block addresses. This block can contain as many block addresses as will fit (blocksize / blockaddresssize), normally several hundred entries.
Page- 59 -
Page- 60 -