
Table of Contents :

Chapter - 1  Basic Structure of Computer .................... (1 - 1) to (1 - 58)
Chapter - 2  Fixed Point and Floating Point Operations
Chapter - 3  Basic Processing Unit .......................... (3 - 1) to (3 - 60)
Chapter - 4  Pipelining ..................................... (4 - 1) to (4 - 26)
Chapter - 5  Memory System .................................. (5 - 1) to (5 - 114)
Chapter - 6  I/O Organization ............................... (6 - 1) to (6 - 140)
Appendix - A  Proofs ........................................ (A - 1) to (A - 2)

Features of Book
- Use of clear, plain and lucid language making the understanding very easy.
- Book provides detailed insight into the subject.
- Approach of the book resembles class room teaching.
- Excellent theory well supported with the practical examples.
- Neat and to the scale diagrams for easier understanding of concepts.


(Computer Organization and Architecture)

1. Basic Structure of Computers

(Chapters - 1, 2)

Functional units - Basic operational concepts - Bus structures - Performance and metrics - Instructions and instruction sequencing - Hardware-software interface - Instruction set architecture - Addressing modes - RISC - CISC. ALU design - Fixed point and floating point operations.

2. Basic Processing Unit

(Chapter - 3)

Fundamental concepts - Execution of a complete instruction - Multiple bus organization - Hardwired control - Micro programmed control - Nano programming.

3. Pipelining

(Chapter - 4)

Basic concepts - Data hazards - Instruction hazards - Influence on instruction sets - Data path and control considerations - Performance considerations - Exception handling.

4. Memory System

(Chapter - 5)

Basic concepts - Semiconductor RAM - ROM - Speed - Size and cost - Cache memories - Improving cache performance - Virtual memory - Memory management requirements - Associative memories - Secondary storage devices.

5. I/O Organization

(Chapter - 6)

Accessing I/O devices - Programmed Input/Output - Interrupts - Direct memory access - Buses - Interface circuits - Standard I/O interfaces (PCI, SCSI, USB), I/O devices and

Table of Contents


1.1 Introduction .......................................................... 1 - 1
1.2 Computer Types ........................................................ 1 - 2
1.3 Functional Units ...................................................... 1 - 4
    1.3.1 Input Unit ...................................................... 1 - 5
    1.3.2 Memory Unit ..................................................... 1 - 5
    1.3.3 Arithmetic and Logic Unit ....................................... 1 - 6
    1.3.4 Output Unit ..................................................... 1 - 7
    1.3.5 Control Unit .................................................... 1 - 7
1.4 Basic Operational Concepts ............................................ 1 - 7
1.5 BUS Structures ........................................................ 1 - 11
    1.5.1 Single Bus Structure ............................................ 1 - 13
    1.5.2 Multiple Bus Structures ......................................... 1 - 14
1.6 Software .............................................................. 1 - 15
1.7 Performance ........................................................... 1 - 19
    1.7.1 Processor Clock ................................................. 1 - 21
    1.7.2 CPU Time ........................................................ 1 - 21
    1.7.3 Performance Metrics ............................................. 1 - 22
        1.7.3.1 Hardware-Software Interface ............................... 1 - 22
        1.7.3.2 Other Performance Measures ................................ 1 - 24
    1.7.4 Performance Measurement ......................................... 1 - 26
1.8 Instructions and Instruction Sequencing ............................... 1 - 29
    1.8.1 Register Transfer Notation ...................................... 1 - 32
    1.8.2 Assembly Language Notation ...................................... 1 - 32
    1.8.3 Basic Instruction Types ......................................... 1 - 33
        1.8.3.1 Three Address Instructions ................................ 1 - 33
        1.8.3.2 Two Address Instructions .................................. 1 - 33
        1.8.3.3 One Address Instructions .................................. 1 - 34
        1.8.3.4 Zero Address Instructions ................................. 1 - 34
    1.8.4 Instruction Execution and Straight-line Sequencing .............. 1 - 35
    1.8.5 Branching ....................................................... 1 - 37
    1.8.6 Condition Codes ................................................. 1 - 39
    1.8.7 Generating Memory Addresses ..................................... 1 - 40
1.9 Instruction Set Architecture .......................................... 1 - 40
1.10 RISC - CISC .......................................................... 1 - 42
    1.10.1 RISC Versus CISC ............................................... 1 - 43
1.11 Addressing Modes ..................................................... 1 - 44
    1.11.1 Implementation of Variables and Constants ...................... 1 - 44
    1.11.2 Indirection and Pointers ....................................... 1 - 45
    1.11.3 Indexing and Arrays ............................................ 1 - 45
    1.11.4 Relative Addressing ............................................ 1 - 47
    1.11.5 Additional Modes ............................................... 1 - 47
    1.11.6 RISC Addressing Modes .......................................... 1 - 48
Review Questions .......................................................... 1 - 49
University Questions with Answers ......................................... 1 - 50

2.1 Introduction .......................................................... 2 - 1
2.2 Addition and Subtraction of Signed Numbers ............................ 2 - 1
    2.2.1 Adders .......................................................... 2 - 7
        2.2.1.1 Half Adder ................................................ 2 - 7
    2.2.2 Serial Adder .................................................... 2 - 10
    2.2.3 Parallel Adder .................................................. 2 - 11
    2.2.4 Parallel Subtractor ............................................. 2 - 12
    2.2.5 Addition / Subtraction Logic Unit ............................... 2 - 12
2.3 Design of Fast Adders ................................................. 2 - 14
    2.3.1 Carry-lookahead Adders .......................................... 2 - 15
2.4 Multiplication of Positive Numbers .................................... 2 - 19
2.5 Signed Operand Multiplication ......................................... 2 - 22
    2.5.1 Booth's Algorithm ............................................... 2 - 22
2.6 Fast Multiplication ................................................... 2 - 29
    2.6.1 Bit-Pair Recoding of Multipliers ................................ 2 - 29
2.7 Integer Division ...................................................... 2 - 31
    2.7.1 Restoring Division .............................................. 2 - 31
    2.7.2 Non-restoring Division .......................................... 2 - 35
    2.7.3 Comparison between Restoring and Non-restoring Division Algorithm 2 - 38
2.8 Floating Point Numbers and Operations ................................. 2 - 39

Basic Structure of Computer

1.1 Introduction
At the beginning of the text, I felt it is necessary to make a distinction between computer organisation and architecture. Although it is difficult to give precise definitions for these terms, we define them as follows :

Computer architecture refers to those attributes of a system visible to a

programmer. In other words, we can also say that the computer architecture
refers to the attributes that have a direct impact on the logical execution of
the program.

Computer organisation refers to the operational units and interconnections that realize the architectural specifications.

The architectural attributes include the instruction set, data types, the number of bits used to represent data types, the I/O mechanism, and techniques for addressing memory. On the other hand, the organisational attributes include those hardware details transparent to the programmer, such as control signals and the interfaces between the computer, memory and I/O peripherals. For example, it is an architectural issue whether a computer will have multiply and division instructions. It is an organisational issue whether to implement multiplication and division by a special unit or by a mechanism that makes repeated use of the add and subtract unit to perform multiplication and division, respectively. The organisational decision on which approach to use may depend on the speed of operation, the cost and size of the hardware, and the memory required to perform the operation.

Many computer manufacturers offer a family of computer models with different price and performance characteristics. These computers have the same architecture but different organisations. Their architectures have survived for many years, but their organisations change with changing technology. For example, IBM has introduced many new computer models with improved technology to replace older models, offering the customer greater speed, lower cost, or both. These newer models have the same architecture with advanced computer organisations. The common architecture

Computer Organization & Architecture 1 - 2

Basic Structure of Computer

between the computer model maintains the software compatibility between them and
hence protects the software investments of the customers. The computer models were
designed lo be software compatible with one another, meaning that all models in the
series shared a common instruction set. In other words we can say that, the programs
written for one model could be run without modification on any other model.
However, the execution time, memory usage, and the 1/0 usage for the progra.m mny
change from model to model . The organisational changes also try lo maintain the
hardwatt compatibility with in the computer models. Many times it is not possible to
maintain the hardware compatibility. In such cases computer model may require
advance 1/0 and memory interface modules; the 1/0 and memory interface modules
designed for previous organisation may not work with advanced organisation.
In microcomputers, the relationship between computer nrchitccture and
organisation is very close. ln which, changes in technology not only influence
organisation but also result 'in the introduction of new powerful nrchltecture. The RISC
(Reduced Instruction Set) machine is a good example of this.

This text is about the computer organisation. Its purpose is to prepare clear and
complete understanding of the nature and characteristics of modem-day computer
systems. We besin this text with the basic structures of computers. '

1.2 Computer Types

A digital computer, or simply computer, in its simplest form is a fast electronic calculating machine that accepts digitized information from the user, processes it according to a sequence of instructions stored in its internal storage, and provides the processed information to the user. The sequence of instructions stored in the internal storage is called a computer program, and the internal storage is called computer memory.

According to size, cost, computational power and application, computers are classified as :



Microcomputers
Minicomputers
Desktop computers
Personal computers
Portable notebook computers
Workstations
Mainframes or enterprise systems
Servers
Supercomputers




Microcomputers : As the name implies, microcomputers are smaller computers. They contain only one central processing unit. One distinguishing feature of a microcomputer is that the CPU is usually a single integrated circuit called a microprocessor. A microcomputer is the integration of a microprocessor and supporting peripherals (memory and I/O devices). The word length depends on the microprocessor used and is in the range of 8 bits to 32 bits. These types of computers are used for small industrial control, process control and applications where storage and speed requirements are small.

Minicomputers : Minicomputers are the scaled-up version of the microcomputers, with moderate speed and storage capacity. These are designed to process smaller data words, typically 32-bit words. This type of computer is used for scientific calculations, research, data processing applications and many others.

Desktop Computers : The desktop computers are the computers which are usually found on a home or office desk. They consist of a processing unit, a storage unit, a visual display and audio output units, and keyboard and mouse as input units. Usually the storage unit of such a computer consists of hard disks, CD-ROMs, and diskettes.

Personal Computers : The personal computers are the most common form of desktop computers. They found wide use in homes, schools and business offices.

Portable Notebook Computers : Portable notebook computers are the compact version of personal computers. The laptop computers are a good example of portable notebook computers.
Workstations : Workstations have higher computation power than personal computers. They have high resolution graphics terminals and improved input/output capabilities. Workstations are used in engineering applications and in interactive graphics applications.

Mainframes or Enterprise Systems : Mainframe computers are implemented using two or more central processing units (CPUs). These are designed to work at very high speeds with large data word lengths, typically 64 bits or greater. The data storage capacity of these computers is very high. This type of computer is used for complex scientific calculations, large data processing applications, military defence control and complex graphics applications (e.g. for creating walkthroughs with the help of animation software).

Servers : These computers have a large storage unit and faster communication links. The large storage unit allows storage of sizable databases, and the fast communication links allow faster communication of data blocks with computers connected in the network. These computers serve a major role in internet communication.


Supercomputers : These computers are basically multiprocessor computers used for the large-scale numerical calculations required in applications such as weather forecasting, robotic engineering, aircraft design and simulation.

1.3 Functional Units

The computer consists of five functionally independent units : input, memory, arithmetic and logic, output and control units. Fig. 1.1 (a) shows these five functional units of a computer, and Fig. 1.1 (b) shows their physical locations in the computer hardware.

Fig. 1.1 Basic functional units of computer : (a) functional units (b) computer hardware
The input unit accepts digital information from the user with the help of input devices such as keyboard, mouse, microphone etc. The information received from the input unit is either stored in the memory for later use or immediately used by the


arithmetic and logic unit to perform the desired operations. The program stored in the memory decides the processing steps, and the processed output is sent to the user with the help of output devices or stored in the memory for later reference. All the above mentioned activities are co-ordinated and controlled by the control unit. The arithmetic and logic unit in conjunction with the control unit is commonly called the Central Processing Unit (CPU). Let us discuss the functional units in detail.

1.3.1 Input Unit

A computer accepts digitally coded information through the input unit using input devices. The most commonly used input devices are the keyboard and mouse. The keyboard is used for entering text and numeric information. On the other hand, the mouse is used to position the screen cursor and thereby enter information by selecting options. Apart from the keyboard and mouse, many other input devices are available, including joysticks, trackballs, spaceballs, digitizers and scanners.

Fig. 1.2 Input devices : (a) Keyboard (b) Mouse (c) Track ball (d) Joystick (e) Tablet or digitizer (f) Scanner

1.3.2 Memory Unit

The memory unit is used to store programs and data. Usually, two types of memory devices are used to form a memory unit : primary storage memory devices and secondary storage memory devices. The primary memory, commonly called main memory, is a fast memory used for the storage of programs and active data (the data currently in process). The main memory is a semiconductor memory. It consists of a large number of semiconductor storage cells, each capable of storing one bit of information. These cells are read or written by the central processing unit in a group of fixed size called a word. The main memory is organized such that the contents of one word, containing n bits, can be stored or retrieved in one write or read operation.


To access data in a particular word of main memory, each word in the main memory has a distinct address. This allows access to any word in the main memory by specifying the corresponding address. The number of bits in each word is referred to as the word length of the computer. Typically, the word length varies from 8 to 64 bits. The number of such words in the main memory decides the size or capacity of the memory. This is one of the specifications of the computer. The size of computer main memory varies from a few million words to tens of millions of words.

An important characteristic of a memory is its access time (the time required to access one word). The access time for main memory should be as small as possible. Typically, it is of the order of 10 to 100 nanoseconds. This access time also depends on the type of memory. In randomly accessed memories (RAMs), a fixed time is required to access any word in the memory. However, in sequential access memories this time is not fixed.

The main memory consists of only randomly accessed memories. These memories are fast, but they are small in capacity and expensive. Therefore, the computer uses secondary storage memories such as magnetic tapes and magnetic disks for the storage of large amounts of data.
Stored program concept

Today's computers are built on two key principles :

1. Instructions are represented as numbers.
2. Programs can be stored in memory to be read or written just like numbers.

These principles lead to the stored-program concept. According to the stored-program concept, memory can contain the program (source code), the corresponding compiled machine code, the editor program and even the compiler that generated the machine code.
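The stored-program concept can be sketched in a few lines of Python. This is an illustrative model, not anything from the book: the opcodes and the memory layout below are invented, but the key point matches the two principles above, as instructions are just numbers living in the same memory as the data they operate on.

```python
# Invented opcodes for a toy stored-program machine (assumption, not a real ISA).
LOAD, ADD, STORE, HALT = 1, 2, 3, 0

def run(memory):
    """Execute (opcode, operand) pairs held in memory; return the memory."""
    acc = 0   # accumulator register
    pc = 0    # program counter: address of the next instruction
    while True:
        opcode, operand = memory[pc]
        pc += 1
        if opcode == LOAD:       # acc <- memory[operand]
            acc = memory[operand]
        elif opcode == ADD:      # acc <- acc + memory[operand]
            acc += memory[operand]
        elif opcode == STORE:    # memory[operand] <- acc
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Program occupies words 0-3; its data lives in words 4-6 of the SAME memory.
mem = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 10, 32, 0]
run(mem)
print(mem[6])   # -> 42
```

Because program and data share one memory, the result is written back into the very array that holds the instructions, which is exactly what the stored-program concept permits.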
1.3.3 Arithmetic and Logic Unit

The arithmetic and logic unit (ALU) is responsible for performing arithmetic operations such as addition, subtraction, division and multiplication, and logical operations such as ANDing, ORing, inverting etc. To perform these operations, operands from the main memory are brought into high speed storage elements called registers of the processor. Each register can store one word of data, and they are used to store frequently used operands. The access times to registers are typically 5 to 10 times faster than access times to memory. After performing an operation, the result is either stored in a register or in a memory location.


1.3.4 Output Unit

The output unit sends the processed results to the user using output devices such as video monitor, printer, plotter, etc. The video monitors display the output on the CRT screen whereas printers and plotters give the hard-copy output. Printers are classified according to their printing methodology : impact printers and non-impact printers. Impact printers press formed character faces against an inked ribbon. Non-impact printers and plotters use laser techniques, ink-jet sprays, xerographic processes, electrostatic methods, and electrothermal methods to get images onto the paper. Laser printers and ink-jet printers are examples of non-impact printers.

1.3.5 Control Unit

As mentioned earlier, the control unit co-ordinates and controls the activities amongst the functional units. The basic function of the control unit is to fetch the instructions stored in the main memory, identify the operations and the devices involved in them, and accordingly generate control signals to execute the desired operations.

The control unit uses control signals or timing signals to determine when a given action is to take place. It controls input and output operations, and data transfers between the processor, memory and input/output devices, using timing signals.

The control and the arithmetic and logic units of a computer are usually many times faster than other devices connected to a computer system. This enables them to control a number of external input/output devices.

1.4 Basic Operational Concepts

The basic function of a computer is to execute a program, a sequence of instructions. These instructions are stored in the computer memory. The instructions are executed to process data which is already loaded into the computer memory through the input unit. After processing the data, the result is either stored back into the computer memory for further reference or sent to the outside world through the output port. Therefore, all functional units of the computer contribute to the execution of a program. Let us summarize the functions of the different computer units.

The input unit accepts data and instructions from the outside world to the machine. It is operated by the control unit.

The memory unit stores both data and instructions.

The arithmetic-logic unit performs arithmetic and logical operations.

The control unit fetches and interprets the instructions in memory and causes them to be executed.

The output unit transmits final results and messages to the outside world.


To perform execution of instructions, in addition to the arithmetic logic unit and control unit, the processor contains a number of registers used for temporary storage of data and some special function registers, as shown in Fig. 1.3. The special function registers include the program counter (PC), instruction register (IR), memory address register (MAR) and memory data register (MDR).

The program counter is one of the most important registers in the CPU. As mentioned earlier, a program is a series of instructions stored in the memory. These instructions tell the CPU exactly how to get the desired result. It is important that these instructions are executed in a proper order to get the correct result. This sequence of instruction execution is monitored by the program counter. It keeps track of which instruction is being executed and what the next instruction will be.





Fig. 1.3 Connections between the processor and the main memory (PC, IR, MAR, MDR and general purpose registers connected to the main memory)

The instruction register (IR) is used to hold the instruction that is currently being executed. The contents of IR are available to the control unit, which generates the timing signals that control the various processing elements involved in executing the instruction.


The two registers MAR and MDR are used to handle the data transfer between the main memory and the processor. The MAR holds the address of the main memory location to or from which data is to be transferred. The MDR contains the data to be written into or read from the addressed word of the main memory.
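The roles of MAR and MDR described above can be sketched as a small model. This is a simplified illustration, not the book's design: only the register names follow the text, and the word size and interface are assumptions.

```python
# A sketch of memory access through MAR and MDR: every read or write
# routes the address through MAR and the data through MDR.
class Memory:
    def __init__(self, size):
        self.words = [0] * size
        self.mar = 0   # memory address register: which word to touch
        self.mdr = 0   # memory data register: data in transit

    def read(self, address):
        self.mar = address
        self.mdr = self.words[self.mar]   # addressed word -> MDR
        return self.mdr

    def write(self, address, data):
        self.mar = address
        self.mdr = data                   # MDR -> addressed word
        self.words[self.mar] = self.mdr

m = Memory(16)
m.write(5, 99)
print(m.read(5))   # -> 99
```

Note that the processor never touches `words` directly; all traffic funnels through the two registers, which is the point the paragraph above makes.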

Sometimes it is necessary to have a computer system which can automatically execute one of a collection of special routines whenever certain conditions exist within a program or in the computer system, e.g. it is necessary that the computer system should give a response to devices such as the keyboard, sensors and other components when they request service. When the processor is asked to communicate with devices, we say that the processor is servicing the devices. For example, each time we type a character on a keyboard, a keyboard service routine is called. It transfers the character we typed from the keyboard I/O port into the processor and then to a data buffer in memory.

When you have one or more I/O devices connected to a computer system, any one of them may demand service at any time. The processor can service these devices in one of two ways. One way is to use a polling routine. The other way is to use an interrupt.

In polling, the processor's software simply checks each of the I/O devices every so often. During this check, the processor tests to see if any device needs servicing. A more desirable method would be one that allows the processor to execute its main program and stop to service I/O devices only when it is told to do so by the device itself. In effect, this method would provide an external asynchronous input that would inform the processor that it should complete whatever instruction is currently being executed and fetch a new routine that will service the requesting device. Once this servicing is completed, the processor would resume exactly where it left off. This method is called the interrupt method.
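The two servicing strategies can be contrasted in a short sketch. Everything here is invented for illustration (the device names and the interfaces are assumptions): in polling the processor's code does the checking, while with interrupts the device announces itself and the processor only reacts.

```python
class Device:
    """A hypothetical I/O device that may request service at any time."""
    def __init__(self, name):
        self.name = name
        self.needs_service = False

def poll(devices):
    # Polling: the processor's software checks each device "every so often".
    serviced = []
    for d in devices:
        if d.needs_service:
            serviced.append(d.name)
            d.needs_service = False
    return serviced

class InterruptLine:
    """Interrupt: the device itself tells the processor it needs service."""
    def __init__(self):
        self.pending = []
    def raise_interrupt(self, device):
        self.pending.append(device.name)   # asynchronous external input
    def service_all(self):
        serviced, self.pending = self.pending, []
        return serviced

kbd, disk = Device("keyboard"), Device("disk")
kbd.needs_service = True
polled = poll([kbd, disk])        # processor spends time checking both
print(polled)                     # -> ['keyboard']

line = InterruptLine()
line.raise_interrupt(disk)        # device initiates; processor was free
interrupted = line.service_all()
print(interrupted)                # -> ['disk']
```

The cost difference is visible even in the sketch: `poll` visits every device whether or not it needs attention, whereas the interrupt path does work only when a request actually arrives.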

Providing services such as polling and interrupt processing is one of the major functions of the processor. We know that many I/O devices are connected to the computer system. It may happen that more than one input device requests I/O service simultaneously. In such cases the I/O device having the highest priority is serviced first. Therefore, to handle multiple interrupts (more than one interrupt) the processor uses priority logic. Thus, handling multiple interrupts is also a function of the processor. Fig. 1.4 (a) shows how program execution flow gets modified when an interrupt occurs. Fig. 1.4 (b) shows that an interrupt service routine itself can be interrupted by a higher priority interrupt. Processing of such an interrupt is called nested interrupt processing and the interrupt is called a nested interrupt.

Fig. 1.4 (a) Program flow for single interrupt (b) Program flow for nested interrupt (main program interrupted by interrupt service routines 1 and 2)
We have seen that the processor provides the requested service by executing an appropriate interrupt service routine. However, due to the change in the program sequence, the internal state of the processor may change, and therefore it is necessary to save it in the memory before servicing the interrupt. Normally, the contents of the PC, the general registers, and some control information are stored in memory. When the interrupt service routine is completed, the state of the processor is restored so that the interrupted program may continue. Therefore, saving the state of the processor at the time of interrupt is also one of the functions of the computer system.

Let us see a few examples which will make it easier to understand the basic operations of a computer.

Example 1.1 : State the operations involved in the execution of Add R1, R0.

Solution : The instruction Add R1, R0 adds the operand in register R1 to the operand in register R0 and stores the sum into register R0. Let us see the steps involved in the execution of this instruction.

1. Fetch the instruction from the memory into the IR register of the processor.
2. Decode the instruction.
3. Add the contents of R1 and R0 and store the result in R0.
Example 1.2 : State the operations involved in the execution of Add LOCA, R0.

Solution : The instruction Add LOCA, R0 adds the operand at memory location LOCA to the operand in register R0, and stores the result in register R0. The steps involved in the execution of this instruction are :


1. Fetch the instruction from the memory into the IR register of the processor.
2. Decode the instruction.
3. Fetch the second operand from memory location LOCA and add the contents of LOCA and the contents of register R0.
4. Store the result in R0.
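The steps above can be sketched in Python. This is only a demonstration of the sequence, not real hardware: the instruction address (0x100), the address standing in for LOCA (0x200), and the register contents are all assumptions, and the "decode" is a string split rather than bit-field decoding.

```python
# The four steps of Example 1.2, one step per section.
memory = {0x100: "Add LOCA, R0",   # instruction kept as text for clarity
          0x200: 7}                # 0x200 plays the role of LOCA
registers = {"R0": 35, "R1": 0}

# Step 1 : fetch the instruction from memory into IR.
IR = memory[0x100]

# Step 2 : decode it (a string split here; real hardware decodes bit fields).
op, operands = IR.split(maxsplit=1)
src, dst = [s.strip() for s in operands.split(",")]

# Step 3 : fetch the operand at LOCA and add it to the contents of R0.
# Step 4 : store the result back in R0.
registers[dst] = memory[0x200] + registers[dst]
print(registers["R0"])   # -> 42
```

Notice that the destination register name comes out of the decode step; the add itself only runs once both operands (one from memory, one from the register file) are available, mirroring steps 3 and 4.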

1.5 BUS Structures

We know that the central processing unit, memory unit and I/O unit are the hardware components/modules of the computer. They work together by communicating with each other, and paths are needed for connecting the modules together. The collection of paths connecting the various modules is called the interconnection structure. The design of this interconnection structure will depend on the exchanges that must be made between modules. A group of wires, called a bus, is used to provide the necessary signals for communication between modules. A bus that connects major computer components/modules (CPU, memory, I/O) is called a system bus. The system bus is a set of conductors that connects the CPU, memory and I/O modules. Usually, the system bus is separated into three functional groups :

Data Bus

Address Bus

Control Bus

1) Data Bus : The data bus consists of 8, 16, 32 or more parallel signal lines. These lines are used to send data to memory and output ports, and to receive data from memory and input ports. Therefore, data bus lines are bi-directional. This means that the CPU can read data on these lines from memory or from a port, as well as send data out on these lines to a memory location or to a port. The data bus is connected in parallel to all peripherals. The communication between a peripheral and the CPU is activated by giving an output enable pulse to the peripheral. Outputs of peripherals are floated when they are not in use.
2) Address Bus : It is a unidirectional bus. The address bus consists of 16, 20, 24 or more parallel signal lines. On these lines the CPU sends out the address of the memory location or I/O port that is to be written to or read from. Here, the communication is one way; the address is sent from the CPU to memory and I/O ports, and hence these lines are unidirectional.
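A sketch of what happens at the receiving end of the address bus: a decoder looks at the address the CPU drove and selects which module should respond. The address map below is entirely invented for illustration; real systems fix these ranges in their memory map.

```python
# Hypothetical address map: (low, high, module) ranges are assumptions.
ADDRESS_MAP = [
    (0x0000, 0x7FFF, "RAM"),
    (0x8000, 0xBFFF, "ROM"),
    (0xC000, 0xFFFF, "I/O"),
]

def select_module(address):
    """Return which module a decoded bus address selects."""
    for low, high, module in ADDRESS_MAP:
        if low <= address <= high:
            return module
    raise ValueError("unmapped address")

print(select_module(0x1234))   # -> RAM
print(select_module(0xC010))   # -> I/O
```

The one-way nature of the bus is visible here: the address flows from the CPU into the decoder and a module is enabled, but nothing ever drives an address back toward the CPU.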


3) Control Bus : The control lines regulate the activity on the bus. The CPU sends signals on the control bus to enable the outputs of addressed memory devices or ports. Typical control bus signals are :

Memory Read (MEMR)

Memory Write (MEMW)

I/O Read (IOR)

I/O Write (IOW)

Bus Request (BR)

Bus Grant (BG)

Interrupt Request (INTR)

Interrupt Acknowledge (INTA)

Clock (CLK)

Hold Acknowledge (HLDA)

Fig. 1.5 shows the bus interconnection scheme.

Fig. 1.5 Bus interconnection scheme


1.5.1 Single Bus Structure

Another way to represent the same bus connection scheme is shown in Fig. 1.6. Here, the address bus, data bus and control bus are shown as a single bus called the system bus. Hence such an interconnection bus structure is called a single bus structure.












Fig. 1.6 Single bus structure

In a single bus structure all units are connected to a common bus called the system bus.
However, with a single bus only two units can communicate with each other at a time.
The bus control lines are used to arbitrate multiple requests for use of the bus. The
main advantage of the single bus structure is its low cost and its flexibility for attaching
peripheral devices.

The complexity of the bus control logic depends on the amount of translation needed
between the system bus and CPU, the timing requirements, whether or not interrupt
management is included, and the size of the overall system. For a small system, control
signals of the CPU could be used directly to reduce handshaking logic. Also, drivers
and receivers would not be needed for the data and address lines. But large systems
with several interfaces would need bus driver and receiver circuits connected to the
bus in order to maintain adequate signal quality. In most of the processors,
multiplexed address and data buses are used to reduce the number of pins. During the
first part of the bus cycle, the address is present on this bus. Afterwards, the same bus is
used for data transfer purposes. So latches are required to hold the address sent by the
CPU initially. Interrupt priority management is optional in a system. It is not required in
systems which use software priority management. A complex system includes
hardware for managing the I/O interrupts to increase the efficiency of the system. Many
manufacturers have made priority management devices. The programmable interrupt
controller (PIC) is an IC designed to fulfil this task.
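The address/data multiplexing described above can be sketched as follows. The AddressLatch class and the strobe call (standing in for an ALE-style address latch enable pulse) are hypothetical names used only for illustration.

```python
# Sketch of a multiplexed address/data bus cycle: in phase 1 the shared
# lines carry the address, which an external latch captures on a strobe
# pulse, so that in phase 2 the same lines can carry data.
# Class and signal names are illustrative assumptions.

class AddressLatch:
    def __init__(self):
        self.value = None

    def strobe(self, lines):
        # ALE-like pulse: capture whatever is on the shared lines now
        self.value = lines

latch = AddressLatch()
ad_lines = 0x2F00        # phase 1: multiplexed lines hold the address
latch.strobe(ad_lines)   # latch holds the address for the rest of the cycle
ad_lines = 0x55          # phase 2: the same lines now hold data
```

After the strobe, the latched address selects the memory location while the shared lines are free to transfer the data byte.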


1.5.2 Multiple Bus Structures

The performance of a computer system suffers when a large number of devices are
connected to the bus. This is because of two major reasons :

1. When more devices are connected to the common bus we need to share the bus
amongst these devices. The sharing mechanism co-ordinates the use of the bus by
different devices. This co-ordination requires finite time called propagation
delay. When control of the bus passes from one device to another frequently,
these propagation delays are noticeable and affect the performance of the computer.

2. When the aggregate data transfer demand approaches the capacity of the bus,
the bus may become a bottleneck. In such situations we have to increase the
data rate of the bus or we have to use a wider bus.

Nowadays the data transfer rates for video controllers and network interfaces are
growing rapidly. The need for a high speed shared bus is impractical to satisfy with a
single bus. Thus, most computer systems use multiple buses. These buses have a
hierarchical structure.
Fig. 1.7 shows two bus configurations. The traditional bus connection uses three
buses : local bus, system bus and expansion bus. The high-speed bus configuration
uses a high-speed bus along with the three buses used in the traditional bus connection.
Here, the cache controller is connected to the high-speed bus. This bus supports connection to
high-speed LANs, such as Fiber Distributed Data Interface (FDDI), video and graphics
workstation controllers, as well as interface controllers to local peripheral buses
including SCSI and P1394.
Fig. 1.7 (a) Traditional bus configuration (b) High-speed bus configuration

1.6 Software
Microcomputer software is divided into two broad categories, system software and
user software.
System Software

Typical microcomputer system software includes monitor, operating system,
editors, assemblers, linker, loader, compilers, interpreters and debuggers. It is the
collection of programs which are needed in the creation, preparation, and execution of
other programs. So system software in a microcomputer allows one to develop
application/user programs for microprocessor-based systems. User software consists of
programs generated by the various users. Basic functions provided by system software
are as follows :


Receive and interpret user commands.

Enter and edit user application programs and store them as files in the hard
disk or floppy disk.
File management, i.e. storage and retrieval of files in secondary storage
devices such as hard disk or floppy disk.

I/O handling using standard device drivers.
Translation of programs from assembly language to machine language or
higher level language to machine language.

Link and run user written application programs.

Debug the user written application programs.
Let us see the system software programs which are used to perform the above listed functions.

The editor is a program which is used to create and modify source programs/text
(letters, numbers, punctuation marks, assembly language programs, higher level
language programs such as PASCAL, C, FORTRAN etc.). The editor has commands
to change, delete or insert lines or characters.

The assembler translates an assembly language source file that was created using the
editor into machine language such as binary or object code. The assembler reads the
source file of your program from the disk where you saved it after editing. An
assembler usually reads your source file more than once.
The assembler generates two files on the floppy or hard disk during these two
passes. The first file is called the object file. The object file contains the binary codes
for the instructions and information about the addresses of the instructions. The
second file generated by the assembler is called assembler list file. This file contains
the assembly language statements, the binary code for each instruction, and the offset
for each instruction.
In the first pass, the assembler performs the following operations :
1. Reading the source program instructions.

2. Creating a symbol table in which all symbols used in the program, together
with their attributes, are stored.
3. Replacing all mnemonic codes by their binary codes.
4. Detecting any syntax error in the source program.
5. Assigning relative addresses to instructions and data.
On a second pass through the source program, the assembler extracts the symbol
from the operand field and searches for it in the symbol table. If the symbol does not
appear in the table, the corresponding statement is obviously erroneous. If the symbol
does appear in the table, the symbol is replaced by its address or value.


Macro Assembler

A very useful facility provided by many assemblers is the use of macros. A macro
is a sequence of instructions to which a name is assigned. When the macro is
referenced by specifying its name, the macro assembler replaces the macro call by the
sequence of instructions that define the macro. The macro assembler functions in a
similar manner to the assembler described earlier. However, it has to perform an
additional task of macro expansion before the assembly program is translated into an
equivalent machine language program.
Cross Assembler

The distinguishing feature of a cross assembler is that it is not written in the same
language used by the microprocessor that will execute the machine code generated by the
assembler. Cross assemblers are usually written in a high-level language such as
FORTRAN, PASCAL or C, which makes them machine independent. For example, a
Z80 assembler may be written in C and then the assembler may be executed on another
machine such as the Motorola 6800.
Meta Assembler

The most powerful assembler is the meta assembler because it supports many
different microprocessors.
A linker is a program used to join together several object files into one large object
file. When writing large programs, it is usually much more efficient to divide the large
program into smaller modules. Each module can be individually written, tested and
debugged. When all the modules work, they can be linked together to form a large
functioning program.

The linker produces a link file which contains the binary codes for all the
combined modules. The linker also produces a link map which contains the address
information about the link files. The linker, however, does not assign absolute
addresses to the program; it only assigns relative addresses starting from zero. This
form of the program is said to be relocatable as it can be put anywhere in memory for
execution.

A locator is a program used to assign the specific addresses, at which the object
code is to be loaded into memory. A locator program that comes with the IBM PC
Disk Operating System (DOS) is called EXE2BIN.


Interpreter and Compiler

An interpreter processes higher level language programs. At a time, an interpreter
executes one statement of the higher level language. Unlike an interpreter, a compiler
takes the source program written in a higher level language and translates the whole
program into machine language. Fig. 1.8 shows the operation of an interpreter. The
interpreter reads a high level language statement of the source program, translates
the statement into machine code and executes the code for that statement
immediately. It then reads the next high level language source statement, translates
it, and executes it. BASIC programs are often executed in this way.
The advantage of using an interpreter is that if an error is found, you can just correct
the source program and immediately rerun it. The major disadvantage of the interpreter
approach is that an interpreted program runs 5 to 25 times slower than the same program
will run after being compiled. The reason is that with an interpreter each statement must
be translated to machine code every time the program is run.

Fig. 1.8 Operation of interpreter


Fig. 1.9 shows how compilers do the translation and execution processes. A compiler
program reads through the entire high-level language source program, and in two or
more passes through it, translates it all to relocatable machine code programs (object
modules). The linker links these relocatable object modules. The output file from the
linker is then located to get absolute addresses. Finally the located program is loaded
into memory. Once the located program is loaded into memory, the entire program can
be run without any further translation.

Fig. 1.9 Operation of compiler


Therefore, it will run much faster than it would if executed by an interpreter. The
major disadvantage of the compiler approach is that when an error is found, it usually
must be corrected in the source program and the entire compile-load sequence must be
repeated.

A debugger is a program which allows you to load your object code program into
system memory, execute the program, and debug it.
How does a debugger help in debugging a program ?

1. The debugger allows you to look at the contents of registers and memory
locations after your program runs.
2. It allows you to change the contents of register and memory locations and
rerun the program.
3. Some debuggers allow you to stop execution after each instruction so you can
check or alter memory and register contents.

4. A debugger also allows you to set a breakpoint at any point in your program.
When you run a program, the system will execute instructions up to this
breakpoint and stop. You can then examine register and memory contents to
see if the results are correct at that point. If the results are correct, you can
move the breakpoint to a later point in your program. If results are not
correct, you can check the program up to that point to find out why they are
not correct.
In short, debugger tools can help you to isolate problems in your program.

Operating System
An operating system performs resource management and provides an interface
between the user and the machine. A resource may be the microprocessor, memory, or
an I/O device. Basically, an operating system is a collection of system programs that
tells the machine what to do under a variety of conditions. Major operating system
functions include efficient sharing of memory, I/O peripherals, and the microprocessor
among several users. Along with DOS, UNIX and WINDOWS are the popular
operating systems used today.

1.7 Perfonnance
When we say one computer is faster than another, we compare their speeds and
observe that the faster computer runs a program in less time than other computers.
The computer center manager running a large server system may say a computer is
faster when it completes more jobs in an hour. The computer user is always interested
in reducing the time between the start and the completion of the program or event,

i.e. reducing the execution time. The execution time is also referred to as response
time. Reduction in response time increases the throughput (the total amount of work
done in a given time). The performance of the computer is directly related to
throughput and hence it is the reciprocal of execution time.

Performance_A = 1 / Execution time_A
This means that for two computers A and B, if the performance of A is greater
than the performance of B, we have

Performance_A > Performance_B

1 / Execution time_A > 1 / Execution time_B

Execution time_B > Execution time_A
That is, the execution time on B is longer than that on A, if A is faster than B.

In discussing a computer design, we often want to relate the performance of two
different computers quantitatively. We will use the phrase "A is n times faster than B",
or equivalently "A is n times as fast as B", to mean

Performance_A / Performance_B = n

If A is n times faster than B, then the execution time on B is n times longer than it
is on A :

Performance_A / Performance_B = Execution time_B / Execution time_A = n

Example 1.3 : If computer A runs a program in 10 seconds and computer B runs the
same program in 25 seconds, how much faster is A than B ?

Solution : We know that A is n times faster than B if

Performance_A / Performance_B = Execution time_B / Execution time_A = n

Thus the performance ratio is

25 / 10 = 2.5

and A is therefore 2.5 times faster than B.

In the above example, we could also say that computer B is 2.5 times slower than
computer A, since

Performance_A / Performance_B = 2.5

Computer Organization & Architecture 1 - 21

Basic Structure of Computer

means that

Performance_A / 2.5 = Performance_B

For simplicity, we will normally use the terminology "faster than" when we try to
compare computers quantitatively. Because performance and execution time are
reciprocals, increasing performance requires decreasing execution time. To avoid the
potential confusion between the terms increasing and decreasing, we usually say
"improve performance" or "improve execution time" when we mean "increase
performance" and "decrease execution time".
The ideal performance of a computer system is achieved when we have a perfect
match between the machine capability and the program behaviour. The machine
capability can be enhanced with better hardware technology, innovative architectural
features, and efficient resource management. However, program behaviour is difficult
to predict since it heavily depends on application and run-time conditions. The
program behaviour also depends on the algorithm design, data structures used,
language efficiency, programmer skill and compiler technology. Let us see the factors
for projecting the performance of a computer.

1.7.1 Processor Clock

In today's digital computer, the CPU or simply the processor is driven by a clock
with a constant cycle time called the processor clock. The time period of the processor
clock is denoted by P. The period P of one clock cycle is an important parameter that
affects processor performance. The clock rate is given by R = 1/P, which is measured in
cycles per second (CPS). The electrical unit for this measurement is hertz (Hz).
Today's personal computers and workstations have clock rates in the range of
megahertz (MHz) and gigahertz (GHz). The computers having a clock rate of 800 MHz
have 800 million cycles per second.
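The relation R = 1/P can be checked with a one-line computation; the 1.25 ns period below is an assumed value chosen to reproduce the 800 MHz figure mentioned above.

```python
# Clock period P and clock rate R = 1/P, as stated above.
P = 1.25e-9        # clock period in seconds (1.25 ns, assumed value)
R = 1 / P          # clock rate in cycles per second (Hz)
R_mhz = R / 1e6    # 800 MHz, i.e. 800 million cycles per second
```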

1.7.2 CPU Time

CPU execution time, or simply CPU time, is the time the CPU spends computing
for a particular task and does not include time spent waiting for I/O or running other
programs. CPU time can be divided into the CPU time spent in the program, called
user CPU time, and the CPU time spent in the operating system performing tasks on
behalf of the program, called system CPU time. Differentiating between system and
user CPU time is difficult to do accurately because it is often hard to assign
responsibility for operating system activities to one user program rather than another
and because of the functionality differences among operating systems. We use CPU
performance to refer to user CPU time.

Computer Organization & Architecture 1 - 22

Basic StnJcture of Computer

1.7.3 Performance Metrics

Users and designers often examine performance using different metrics. If we
could relate these different metrics, we could determine the effect of a design change
on the performance as seen by the user. Since we are confining ourselves to CPU
performance at this point, the bottom-line performance measure is CPU execution
time. A simple formula relates the most basic metrics (clock cycles and clock cycle
time) to CPU time :

CPU execution time for a program = CPU clock cycles for a program × Clock cycle time

Alternatively, because clock rate and clock cycle time are inverses,

CPU execution time for a program = CPU clock cycles for a program / Clock rate

This formula makes it clear that the hardware designer can improve performance
by reducing either the length of the clock cycle or the number of clock cycles required
for a program.

Hardware Software Interface

The previous equations do not include any reference to the number of instructions
needed for the program. However, since the compiler clearly generated instructions to
execute and the computer had to execute the instructions to run the program, the
execution time must depend on the number of instructions in a program.

For the execution of a program, the processor has to execute a number of machine
language instructions. This number is denoted by N. The number N is the actual
number of instructions executed by the processor and is not necessarily equal to the
number of machine instructions in the machine language program. This is because
some instructions may be executed more than once in a loop and others may not be
executed at all. Each machine instruction takes one or more cycles for execution.
This time is required to perform various steps needed to execute the machine instruction.
The average number of basic steps required to execute one machine instruction is
denoted by S, where each basic step is completed in one clock cycle. Thus, the
program execution time is given by

T = (N × S) / R    ... (1)

where N is the actual number of instructions executed by the processor for
execution of a program, R is the clock rate measured in cycles per second and S is the
average number of steps needed to execute one machine instruction. The above
equation is known as the basic performance equation.
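The basic performance equation translates directly into code; the function name and the numeric values below are assumptions used only for illustration.

```python
# Basic performance equation T = (N x S) / R from the text.
def exec_time(N, S, R):
    """N: instructions executed, S: average basic steps (clock cycles)
    per instruction, R: clock rate in cycles per second."""
    return N * S / R

# Assumed values: one million instructions, 4 steps each, 500 MHz clock.
T = exec_time(N=1_000_000, S=4, R=500e6)   # 0.008 s, i.e. 8 ms
```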


When machine instruction execution time is measured in terms of cycles per
instruction (CPI), the program execution time is given as

T = (N × CPI) / R    ... (2)
We know that each instruction execution involves a cycle of events involving the
instruction fetch, decode, operand(s) fetch, execution and store results. We need to
access memory to perform instruction fetch, to perform operand(s) fetch or to store
results.

The memory cycle is the time needed to complete one memory reference. Usually,
a memory cycle is k times the processor cycle, P. The value of k depends on the speed
of the memory technology and the interconnection scheme used to interface memory
and processor.
The CPI of an instruction type can be divided into two component terms
corresponding to the total processor cycles and memory cycles needed to complete the
execution of the instruction. Therefore, we can rewrite equation (2) as

T = N × (p + m × k) / R    ... (3)

where p is the number of processor cycles required for the instruction decode and
execute, m is the number of memory references needed, k is the ratio between
memory cycle and processor cycle, N is the machine instruction count, and R is the
clock rate. The above performance parameters, i.e. N, p, m, k, R, are affected by four
system attributes : instruction set architecture, compiler technology, CPU
implementation and control, and cache and memory hierarchy, as shown in Table 1.1.

System attribute                     | Instruction count (N) | Processor cycles per instruction (p) | Memory references per instruction (m) | Memory access latency (k)
Instruction set architecture         |           X           |                  X                   |                                       |
Compiler technology                  |           X           |                  X                   |                   X                   |
Processor implementation and control |                       |                  X                   |                                       |
Cache and memory hierarchy           |                       |                                      |                                       |             X

Table 1.1


The instruction set architecture affects the machine instruction count (N), i.e. the
program length, and the average processor cycles required per instruction (p). The
compiler technology affects the value of N, p and the memory reference count (m).
The processor implementation and control determine the total processor time (p/R)
required. Finally, the memory technology and hierarchy design affect the memory
access latency (k/R).
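Equation (3) can likewise be sketched in code. The parameter values below are assumed; the comparison simply shows that a smaller memory-to-processor cycle ratio k shortens T for the same program.

```python
# Equation (3): T = N x (p + m x k) / R, splitting CPI into processor
# cycles p and memory cycles m x k. All numeric values are assumed.
def exec_time(N, p, m, k, R):
    return N * (p + m * k) / R

# Same program (N, p, m, R fixed); only the memory latency ratio k varies.
slow = exec_time(N=1e6, p=2, m=1, k=4, R=500e6)   # slower memory
fast = exec_time(N=1e6, p=2, m=1, k=2, R=500e6)   # faster memory
```

This mirrors the statement that the cache and memory hierarchy affect performance only through the k term.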

Example 1.4 : Let us assume that two computers use the same instruction set architecture.
Computer A has a clock cycle time of 250 ps and a CPI of 2.0 for some program and
computer B has a clock cycle time of 500 ps and a CPI of 1.2 for the same program.
Which computer is faster for this program and by how much ?
Solution : We know that each computer executes the same number of instructions for
the program; let's call this number N. First, find the number of processor clock cycles
for each computer :

CPU clock cycles_A = N × 2.0

CPU clock cycles_B = N × 1.2

The CPU time for each machine will be

CPU time_A = CPU clock cycles_A × Clock cycle time_A = N × 2.0 × 250 ps = 500 N ps

CPU time_B = CPU clock cycles_B × Clock cycle time_B = N × 1.2 × 500 ps = 600 N ps

Thus we can say that computer A is faster. The amount faster is given by the ratio
of the execution times.

CPU performance_A / CPU performance_B = Execution time_B / Execution time_A
                                      = 600 N ps / 500 N ps = 1.2
We can conclude that computer A is 1.2 times faster than computer B for this
program.

Other Performance Measures
MIPS is another way to measure the processor speed. The processor speed can
be measured in terms of million instructions per second (MIPS). It is given as

MIPS rate = 1 / (Average time required for the execution of an instruction × 10^6)
          = R / (CPI × 10^6) = (N × R) / (N × CPI × 10^6)    ... (4)


Substituting the value of T from equation (2) we get,

MIPS rate = N / (T × 10^6)

Referring to equation (2) we can also write

MIPS rate = (N × R) / (C × 10^6)    ... (5)

where C is the total number of clock cycles required to execute a given program.
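A quick sketch, with assumed numbers, confirms that equations (4) and (5) give the same MIPS rate:

```python
# MIPS rate from equations (4) and (5). All values are assumed.
R = 400e6                       # clock rate in Hz (assumed 400 MHz)
CPI = 2.0                       # average cycles per instruction (assumed)
mips = R / (CPI * 1e6)          # equation (4)

N = 1_000_000                   # instructions executed (assumed)
C = N * CPI                     # total clock cycles for the program
mips_alt = (N * R) / (C * 1e6)  # equation (5), same result
```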
Throughput Rate

Another important measure of throughput is known as throughput rate. It
indicates the number of programs a system can execute per unit time. It is often
specified as programs/second. Throughput can be further measured separately for the
system (Ws) and for the processor (Wp). The processor throughput is given as

Wp = Number of machine instructions executed per second / Number of machine instructions per program
   = (MIPS rate × 10^6) / N    ... (6)
It is often greater than the system throughput because in system throughput we
have to consider the system overheads caused by the I/O, compiler, and OS
(operating system) when multiple programs are interleaved for processor execution by
multiprogramming or time sharing. If the processor is kept busy in a perfect program
interleaving fashion, then Ws = Wp. This will probably never happen, since the system
overhead often causes an extra delay and the processor may be left idle for some time.

The 1970s and 1980s marked the growth of the supercomputer industry, which was
defined by high performance on floating-point-intensive programs. Average instruction
time and MIPS were clearly inappropriate metrics for this industry. Hence another
popular alternative to execution time was invented. It is million floating-point
operations per second, abbreviated MFLOPS but always pronounced
"megaflops". MFLOPS can be defined as
MFLOPS = Number of floating point operations in a program / (Execution time × 10^6)

where a floating-point operation is an addition, subtraction, multiplication, or
division operation applied to a number in a single or double precision floating point
representation. Such data items are heavily used in scientific calculations and are
specified in programming languages using key words like float, real, double or double
precision.
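The MFLOPS definition translates directly into code; the operation count and execution time below are assumed values for illustration.

```python
# MFLOPS = floating-point operations / (execution time x 10^6),
# as defined above. Numbers are assumed.
def mflops(fp_ops, exec_time_s):
    return fp_ops / (exec_time_s * 1e6)

rating = mflops(fp_ops=50_000_000, exec_time_s=0.5)   # 100 MFLOPS
```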


The MFLOPS rating is dependent on the program. Different programs require the
execution of different numbers of floating point operations. Since MFLOPS were
intended to measure floating-point performance, they are not applicable outside that
range.

MFLOPS is based on operations in the program rather than on instructions, hence
it has a stronger claim than MIPS to being a fair comparison between different
machines. The key point is that the same program running on different computers
may execute a different number of instructions but will always execute the same
number of floating-point operations. Unfortunately, MFLOPS is not dependable
because the set of floating-point operations is not consistent across machines, and the
number of actual floating-point operations performed may vary. For example, the
processor which does not provide a division instruction requires several floating-point
operations to perform floating-point division, whereas the processor which provides a
division instruction requires only one floating point operation to perform floating
point division.
Another major problem is that the MFLOPS rating changes according not only to
the mixture of integer and floating-point operations but also to the mixture of fast and
slow floating-point operations. For example, a program with floating-point add
operations has a higher rating than a program with floating-point division
operations. This problem can be solved by giving more weight to the complex
floating-point operations while measuring the performance. These MFLOPS might be
called normalized MFLOPS. Of course, because of the counting and weighting, these
normalized MFLOPS may be very different from the actual rate at which a machine
executes floating-point operations.
1.7.4 Performance Measurement

When we compare the performance of different computers, say A, B and C, we
may observe that some programs run faster on computer A, some on computer B and
some on computer C. In this situation they present a confusing picture and we cannot
have a clear idea of which computer is faster. This happens because each computer
has an ability to execute a particular instruction or step in the instruction execution
faster than others.
We know that, processing of an instruction involves several steps :

Fetch the instruction from main memory M.

Decode the instruction opcode.

Load the operands from the main memory if they are not in the CPU registers.


Execute the instruction using an appropriate functional unit, such as a floating
point adder or fixed point adder.

Store the results in the main memory unless they are to be retained in CPU registers.

All instructions do not require to perform all steps listed above. When an instruction
has all its operands in CPU registers, it will run faster, whereas the instruction which
requires multiple memory accesses takes more time to execute. Let us consider two
programs P1 and P2, with instructions having all operands in the CPU and with
instructions having all operands in the memory, respectively. Also consider two
computers C1 and C2. The clock speed of C1 is greater than the clock speed of C2;
however, the memory access time in C2 is less than the memory access time in C1.
With these computer conditions we can easily understand that C1 will execute the
program P1 faster than C2 and C2 will execute the program P2 faster than C1. In such
a situation it is difficult to decide which computer is faster. Therefore, measures of
instruction execution performance are based on average figures, which are usually
determined experimentally by measuring the run times of representative programs called
benchmark programs. In recent years, it has become popular to put together collections
of benchmarks to try to measure the performance of processors with a variety of
applications. The benchmark programs are different for checking the performance of a
processor for different applications. According to applications the benchmark programs
are classified as :

Desktop Benchmark
Server Benchmark and

Embedded Benchmark

Desktop Benchmarks

Desktop benchmarks divide into two broad classes : CPU-intensive benchmarks
and graphics-intensive benchmarks. These two classes of benchmark programs measure
the CPU and graphics performance of the processor, respectively.
Server Benchmarks

We know that servers have to perform many functions, so there are multiple types
of benchmark programs for servers.
CPU throughput oriented benchmark : This benchmark program can be used to
measure the processing rate of a multiprocessor by running multiple copies, one for
each CPU, of the benchmark and converting the CPU time into a rate. This particular
measurement is known as SPEC rate.

Web server benchmark : This benchmark program simulates multiple clients
requesting both static and dynamic pages from a server, as well as clients posting data
to the server.


File system benchmark : It is used to measure network file system (NFS)
performance using a script of file server requests. It also tests the performance of the
I/O system (both disk and network I/O) as well as the CPU.

Transaction processing benchmark : It is used to measure the ability of a system
to handle transactions, which consist of database accesses and updates. In the mid
1980s, a group of concerned engineers formed the vendor-independent Transaction
Processing Council (TPC) to try to create a set of realistic and fair benchmark
programs for transaction processing. Following this TPC benchmark program there
were many benchmarks published, namely TPC-A, TPC-C, TPC-H, TPC-R, and TPC-W.
All these benchmarks measure performance in transactions per second. In addition,
they include a response time requirement, so that throughput performance is
measured only when the response time limit is met.
Embedded Benchmarks

Embedded applications have enormous variety and their performance requirements
are also different. Thus, it is unrealistic to have a single set of benchmark programs
for embedded systems. In practice, many designers of embedded systems rely on
benchmark programs that reflect their application, either as kernels or as
stand-alone versions of the entire application.

A new set of standardised benchmark programs from the EEMBC (Embedded
Microprocessor Benchmark Consortium) is available for embedded applications which
are characterised well by kernel performance. These benchmark programs are divided
into five different classes :

Automotive/industrial

Consumer

Networking

Office automation

Telecommunications
Automotive/industrial benclunark programs include miaoberu:lunark programs for

arithmetic operations, pointer chasing, memory performance, matrix arithmetic. table
lookup and bit manipulation. They also include automobile control benchmarks and
FFT benchmarks. The consumer benchmark programs include mainly multimedia
benchmarks like JPEG compress/decompress, filtering, and RGB conversions.
Networking benchmark is the collection of programs for shortest path calcula tions, IP
routing, and packet flow operations. Office automation benchmark programs include
graphics and text benchmarks such as Bezier curve calculation, dithering, image
rotation and text processing. Finally, teJec.ommunication benchmark programs include
filtering and DSP benchmarks.


The selected benchmark programs are compiled for the computer under test, and
the running time on a real computer is measured. The same benchmark program is
also compiled and run on the reference computer. A nonprofit organisation called
System Performance Evaluation Corporation (SPEC) specified the benchmark programs
and reference computers in 1995 and again in 2000. For SPEC95, the reference
computer is the SUN SPARCstation 10/40 and for SPEC2000, the reference computer
is an UltraSPARC10 workstation with a 300 MHz UltraSPARC-II processor.
The running time of a benchmark program is compared for the computer under test
and the reference computer to decide the SPEC rating of the computer under test. The
SPEC rating is given by

SPEC rating = (Running time on the reference computer) / (Running time on the computer under test)

The SPEC rating for all selected programs is individually calculated and then the
geometric mean of the results is computed to determine the overall SPEC rating for
the computer under test. It is given by

SPEC rating = (SPEC_1 x SPEC_2 x ... x SPEC_n)^(1/n)

where n is the number of benchmark programs used for determining the SPEC rating
and SPEC_i is the rating for the i-th program. The computers providing higher
performance have higher SPEC ratings.
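As a worked illustration, the geometric-mean calculation above can be sketched in Python; the running times used here are made up, not real SPEC measurements.

```python
def spec_rating(ref_times, test_times):
    """Overall SPEC rating: the geometric mean of the per-program
    ratios (reference running time / test-machine running time)."""
    assert len(ref_times) == len(test_times) and ref_times
    product = 1.0
    for ref, test in zip(ref_times, test_times):
        product *= ref / test          # SPEC rating of one program
    return product ** (1.0 / len(ref_times))

# A machine that runs every benchmark twice as fast as the reference
# machine gets an overall SPEC rating of about 2.
print(spec_rating([100, 200, 400], [50, 100, 200]))  # ~2.0
```

Because a geometric mean is used, doubling the speed on one benchmark while halving it on another leaves the overall rating unchanged, which is exactly why SPEC prefers it over an arithmetic mean.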

1.8 Instructions and Instruction Sequencing

Each instruction of the CPU contains specific information fields, which are required
to execute it. These information fields of instructions are called elements of instruction.
These are :
Operation code : The operation code field in the instruction specifies the
operation to be performed. The operation is specified by a binary code, hence
the name operation code or simply opcode. For the 8085 processor, the operation
code for the ADD B instruction is 80H.

Source I Destination operand : The source/ destination operand field directly

specifies the source/destination operand for the instruction. In 8085, the
instruction MOV A,B has B register as a source operand and A register as a
destination operand, because this instruction copies the contents of register B
to register A.

Source operand address : We know that the operation specified by the

instruction may require one or more operands. The source operand may be
in the CPU register or in the memory. Many times the instruction specifies
the address of the source operand so that operand(s) can be accessed and
operated by the CPU according to the instruction.


In 8085, the source operand address for the instruction ADD M is given by the HL
register pair.
Destination operand address : The operation executed by the CPU may
produce a result. Most of the time the result is stored in one of the operands.
Such an operand is known as the destination operand. The instruction which
produces a result specifies the destination operand address. In 8085, the
destination operand address for the instruction INR M is given by the HL register
pair, because the INR M instruction increments the contents of the memory location
specified by the HL register pair and stores the result in the same memory location.
Next instruction address : The next instruction address tells the CPU from
where to fetch the next instruction after completion of execution of the current
instruction. For JUMP and BRANCH instructions the address of the next
instruction is specified within the instruction. However, for other
instructions, the next instruction to be fetched immediately follows the
current instruction. For example, in 8085, the instruction JMP 2000H specifies
the next instruction address as 2000H.
We have seen that each instruction in a program specifies the operation to be
performed and the data to be processed. For this reason, an instruction is divided into
two parts : its operation code (opcode) and its operands. Operand is another
name for data. It may appear in different forms :

Addresses
Numbers
Characters
Logical Data

Addresses : The addresses are in fact a form of data. In many situations, some
calculation must be performed on the operand reference in an instruction to determine
the physical address.

Numbers : All computers support numeric data types. The common numeric data
types are :

Integer or Fixed Point
Floating Point

Characters : For documentation, a common form of data is text or character strings.
Today, most computers use the ASCII (American Standard Code for Information
Interchange) code, in which each character is represented by a unique 7-bit pattern;
thus, 128 different characters can be represented. However, the ASCII encoded
characters are always stored and transmitted using 8 bits per character. The eighth
bit may be set to 0 or used as a parity bit for error detection.
Another code used to encode characters is the Extended Binary Coded Decimal
Interchange Code (EBCDIC).
Logical Data : Most processors interpret data as a bit, byte, word, or double
word. These are referred to as units of data. When data is viewed as n 1-bit items,
each item having the value 0 or 1, it is considered logical data. Logical
data is used to store an array of Boolean or binary data items, and with logical data
we can manipulate the bits of data items.

A computer has a set of instructions that allows the user to formulate any
data-processing task. To carry out tasks, regardless of whether a computer has 100
instructions or 300 instructions, its instructions must be capable of performing
following basic operations :

Data transfers between the memory and the processor registers.

Arithmetic and logic operations on data.

Program sequencing and control.

I/O control.
Data Transfer Instructions : Data transfer instructions include the instructions for
data transfer between the memory and the processor register. The instructions may
include the byte transfer or word transfer instructions.
Arithmetic or Logical Instructions : These instructions are also known as data
processing instructions. The arithmetic instructions provide computational capabilities
for processing numeric data, whereas logic instructions provide capabilities of
performing logical operations on the bits of a word.
Program Sequencing and Control Instructions : This instruction type mainly
includes test and branch instructions. Test instructions are used to test the value of a
data word or the status of a computation. Branch instructions are used to branch to a
different set of instructions depending on the decision made.

I/O Control : The I/O control instructions include the instructions for data transfer
between processor and input/output devices. The instructions may include the byte
transfer or word transfer instructions.


Before discussing these instructions, let us first understand some notations used to
describe them.



1.8.1 Register Transfer Notation

We have seen that in a computer system data transfer takes place between
processor registers and memory and between processor registers and 1/0 system.
These data transfers can be represented by standard notations given below :

Processor registers are represented by notations R0, R1, R2, ... and so on.

The addresses of the memory locations are represented by names such as
LOC, PLACE, A, B, and so on.

I/O registers are represented by names such as DATAIN, DATAOUT, and so on.

The contents of a register or memory location are denoted by placing square
brackets around the name of the register or memory location.
Let us see the following examples for clear understanding.

Example : R2 <- [LOC]

This expression states that the contents of memory location LOC are transferred
into the processor register R2.

Example : R3 <- [R1] + [R2]

This expression states that the contents of processor registers R1 and R2 are added
and the result is stored into the processor register R3.

Example : [LOC] <- [R1] - [R2]

This expression states that the contents of processor register R2 are subtracted
from those of processor register R1 and the result is stored into the memory location LOC.
The notations explained above are commonly known as register transfer notations
(RTN). In these notations, the data represented by the right-hand side of the
expression is transferred to the location specified by the left-hand side of the
expression, overwriting the old contents of that location.

1.8.2 Assembly Language Notation

Assembly language notations are another type of notations used to represent
machine instructions and programs. These notations use assembly language formats.
However, register names and names of memory locations are the same as those of
register transfer notations. Let us see some examples.

Example : MOVE R2, R1

This expression states that the contents of processor register R2 are transferred to
processor register R1. Thus the contents of register R2 remain unchanged but the
contents of register R1 are overwritten.
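The meaning of these notations can be sketched in Python, with dictionaries standing in for the register file and the memory; the register names follow the text, while the stored values are illustrative.

```python
# Registers and memory modeled as dictionaries; [X] means "contents of X".
regs = {"R1": 5, "R2": 7, "R3": 0}
mem = {"LOC": 42}

# R2 <- [LOC] : load the contents of memory location LOC into R2,
# overwriting R2's old contents
regs["R2"] = mem["LOC"]

# R3 <- [R1] + [R2] : add the contents of R1 and R2, result into R3
regs["R3"] = regs["R1"] + regs["R2"]

print(regs["R2"], regs["R3"])  # 42 47
```

Note how each assignment overwrites only the destination on the left-hand side, exactly as the register transfer notation prescribes.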

Example : ADD R1, R2, R3

This expression states that the contents of processor registers R1 and R2 are added
and the result is stored in the register R3.
It is important to note that the above expressions written in the assembly language
notation have three fields : operation, source and destination, having their positions
from left to right. This order is followed by many computers. But there are many
computers in which the order of source and destination operands is reversed.

1.8.3 Basic Instruction Types

The processor instructions can be classified according to their operations and
according to the number of address references required by them. In this section we are
going to discuss basic instruction types according to the number of address references
required by the instructions.
According to address references, there are three address, two address, one address
and zero address reference instructions. Let us see examples of each of them.

Three Address Instructions

The three address instruction can be represented symbolically as

ADD A, B, C

where A, B, C are the variables. These variable names are assigned to distinct
locations in the memory. In this instruction, operands A and B are called source
operands, operand C is called the destination operand, and ADD is the operation to be
performed on the operands. Thus the general instruction format for a three address
instruction is

Operation Source 1, Source 2, Destination

The number of bits required to represent such an instruction includes :
1. Bits required to specify the three memory addresses of the three operands. If
n bits are required to specify one memory address, 3n bits are required to
specify three memory addresses.
2. Bits required to specify the operation.

Two Address Instructions

The two address instruction can be represented symbolically as

ADD A, B
This instruction adds the contents of variables A and B and stores the sum in
variable B, destroying its previous contents. Here, operand A is the source operand;
however, operand B serves as both source and destination operand. The general
instruction format for a two address instruction is

Operation Source, Destination


To represent this instruction, fewer bits are required as compared to a three
address instruction. The number of bits required to represent a two address instruction
includes :
1. Bits required to specify the two memory addresses of the two operands, i.e. 2n bits.
2. Bits required to specify the operation.

One Address Instructions

The one address instruction can be represented symbolically as

ADD A

This instruction adds the contents of variable A into the processor register called
the accumulator and stores the sum back into the accumulator, destroying the previous
contents of the accumulator. In this instruction the second operand is assumed
implicitly in a unique location, the accumulator. The general instruction format for a one
address instruction is

Operation Source

A few more examples of one address instructions are :

LOAD A : This instruction copies the contents of memory location A into the
accumulator.

STORE B : This instruction copies the contents of the accumulator into memory
location B.

In a one address instruction, it is important to note that the operand specified in the
instruction can be either a source operand or a destination operand, depending on the
instruction. For example, in the LOAD A instruction, the operand specified in the
instruction is a source operand, whereas the operand specified in the STORE B
instruction is a destination operand. Similarly, in a one address instruction the implied
operand (accumulator) can be either source or destination, depending on the
instruction.

Zero Address Instructions

In these instructions, the locations of all operands are defined implicitly. Such
instructions are found in machines that store operands in a structure called a
pushdown stack.

From the above discussion we can easily understand that an instruction with only one
address will require fewer bits to represent it, and an instruction with three
addresses will require more bits to represent it. Therefore, to fetch the entire
instruction from the memory, the instruction with three addresses requires more
memory accesses while the instruction with one address requires fewer memory accesses.
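As a sketch, evaluating A = B + C on a zero address (stack) machine looks like this in Python, with a list standing in for the pushdown stack; the variable values are made up.

```python
# A = B + C on a stack machine: operand locations are implicit,
# always the top of the pushdown stack.
mem = {"A": 0, "B": 3, "C": 4}
stack = []

stack.append(mem["B"])                   # PUSH B
stack.append(mem["C"])                   # PUSH C
stack.append(stack.pop() + stack.pop())  # ADD : both operands implicit
mem["A"] = stack.pop()                   # POP A : store the result

print(mem["A"])  # 7
```

Only PUSH and POP name a memory address; the ADD instruction itself carries zero addresses, which is what makes such instructions so short.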


The speed of instruction execution mainly depends on how many memory accesses it
requires for the execution. If memory accesses are more, more time is required to
execute the instruction. Therefore, the execution time for three address instructions is
more than the execution time for one address instructions.
To have a shorter execution time we have to use instructions with minimum memory
accesses. For this, instead of referring to the operands from memory it is advised to refer
to operands from processor registers. When machine level language programs are
generated by compilers from high-level languages, the intelligent compilers see that
the maximum references to the operands lie in the processor registers.

1.8.4 Instruction Execution and Straight-Li ne Sequencing

Up till now we have seen that an instruction consists of an opcode, or opcode and
operand/s, or opcode and operand address. Every processor has some basic types of
instructions such as data transfer instructions, arithmetic instructions, logical
instructions, branch instructions and so on. To perform a particular task on the
computer, it is the programmer's job to select and write appropriate instructions one after
the other, i.e. the programmer has to write instructions in a proper sequence. This job of the
programmer is known as instruction sequencing. The instructions written in a proper
sequence to execute a particular task are called a program.
The processor executes a program with the help of the program counter (PC). The PC holds the
address of the instruction to be executed next. To begin execution of a program, the
address of its first instruction is placed into the PC. Then, the processor control
circuits use the information (address of memory) in the PC to fetch and execute
instructions, one at a time, in the order of increasing addresses. This is called
straight-line sequencing. During the execution of an instruction, the PC is incremented
by the length of the current instruction in execution. For example, if the currently
executing instruction length is 3 bytes, the PC is incremented by 3 so that it points to
the instruction to be executed next.
Let us see how an instruction is executed. The complete instruction cycle involves
three operations : instruction fetching, opcode decoding and instruction execution.

[Figure : fetch the next instruction (fetch cycle), then decode instruction and
execute instruction (execute cycle), repeated for each instruction]

Fig. 1.10 Basic instruction cycle


The Fig. 1.10 shows the basic instruction cycle. After each instruction cycle, the
central processing unit checks for any valid interrupt request. If so, the central
processing unit fetches the instructions from the interrupt service routine and, after
completion of the interrupt service routine, the central processing unit starts the
new instruction cycle from where it has been interrupted. The Fig. 1.11 shows the
instruction cycle with the interrupt cycle.

[Figure : fetch the next instruction (fetch cycle), decode instruction, execute
instruction (execute cycle), then process interrupts (interrupt cycle) before the
next fetch]

Fig. 1.11 Basic instruction cycle with interrupt

Instruction Fetch Cycle :

In this cycle, the instruction is fetched from the memory location whose address is
in the PC. This instruction is placed in the instruction register (IR) in the processor.

Instruction Decode Cycle :

In this cycle, the opcode of the instruction stored in the instruction register is
decoded/examined to determine which operation is to be performed.

Instruction Execution Cycle :

In this cycle, the specified operation is performed by the processor. This often
involves fetching operands from the memory or from processor registers, performing
an arithmetic or logical operation, and storing the result in the destination location.
During the instruction execution, the PC contents are incremented to point to the next
instruction. After completion of execution of the current instruction, the PC contains
the address of the next instruction, and a new instruction fetch cycle can begin.
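The fetch-decode-execute sequence can be sketched as a Python loop over a toy program; the miniature instruction set (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to any real processor.

```python
# A toy program held in "memory" as a list of (opcode, operand) pairs.
program = [("LOAD", 10), ("ADD", 5), ("ADD", 5), ("HALT", None)]
pc, acc, running = 0, 0, True

while running:
    opcode, operand = program[pc]   # fetch: read the instruction at PC
    pc += 1                         # PC now points to the next instruction
    if opcode == "LOAD":            # decode + execute
        acc = operand               # load a value into the accumulator
    elif opcode == "ADD":
        acc += operand              # add to the accumulator
    elif opcode == "HALT":
        running = False             # stop the cycle

print(acc)  # 20
```

Notice that the PC is advanced during each cycle, before execution completes, just as described above; a branch instruction would simply overwrite the PC instead.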


1.8.5 Branching

Every time it is not possible to store a program in consecutive memory
locations. After execution of a decision making instruction we have to follow one of
two program sequences. In such cases we cannot use straight-line sequencing. Here,
we have to use branch instructions to transfer the program control from one
straight-line sequence to another straight-line sequence of instructions, as shown in the
following program.

For example, see the program given for the operation |A - B|. In this program, we
have to check whether A > B or B > A and accordingly we have to perform the operation
A - B or B - A.

      MOV R0, NUM1    ; Get the number 1 into R0
      MOV R1, NUM2    ; Get the number 2 into R1
      CMP R0, R1      ; Compare NUM1 with NUM2
      JB NEXT         ; If NUM1 < NUM2, jump to another program sequence
      SUB R0, R1      ; NUM1 <- NUM1 - NUM2
      MOV R2, R0      ; Store the result in R2
      JMP DONE
NEXT: SUB R1, R0      ; NUM2 <- NUM2 - NUM1
      MOV R2, R1      ; Store the result in R2
DONE: ...

In the above program we have used the JB NEXT instruction to transfer the program
control to the instruction SUB R1, R0 if NUM1 is less than NUM2. Thus we have
decided to branch the program control after checking the condition. Such branch
instructions are called conditional branch instructions. We discuss more about them in
section 1.8.6. In branch instructions the new address, called the target address or branch
target, is loaded into the PC and the instruction is fetched from the new address, instead of
the instruction at the location that follows the branch instruction in the sequential address
order.

The conditional branch instructions are used for program looping. In looping, the
program is instructed to execute a certain set of instructions repeatedly to execute a
particular task a number of times. For example, to add ten numbers stored in
consecutive memory locations we have to perform addition ten times.

The program loop is the basic structure which forces the processor to repeat a
sequence of instructions. Loops have four sections :
1. Initialization section.
2. Processing section.
3. Loop control section.
4. Result section.


1. The initialization section establishes the starting values of
   loop counters for counting how many times the loop is executed,
   address registers for indexing, which give pointers to memory locations, and
   other variables.

2. The actual data manipulation occurs in the processing section. This is the
section which does the work.

3. The loop control section updates counters and indices (pointers) for the next
iteration.

4. The result section analyzes and stores the results.

[Flowchart 1 : initialization section -> processing section -> loop control
section -> result section. Flowchart 2 : initialization section -> loop control
section -> processing section -> result section.]

Note : The processor executes the initialization section and result section only once,
while it may execute the processing section and loop control section many times. Thus,
the execution time of the loop will mainly depend on the execution time of the
processing section and loop control section. Flowchart 1 shows a typical program
loop. The processing section in this flowchart is always executed at least once. If you
interchange the positions of the processing and loop control sections, then it is possible
that the processing section may not be executed at all, if necessary. Refer to flowchart 2.
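The four sections can be seen in a small Python loop that adds ten numbers; the data values are arbitrary.

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

total = 0                 # initialization section: clear the result,
i = 0                     # set up the loop counter / index

while i < len(numbers):   # loop control section: test the counter
    total += numbers[i]   # processing section: the actual work
    i += 1                # loop control section: update the index

print(total)              # result section: store/report the result (55)
```

The initialization and result sections run exactly once, while the processing and loop control sections run ten times here, which is why they dominate the loop's execution time.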


1.8.6 Condition Codes

The condition code flags are used to store the results of certain conditions when
certain operations are performed during execution of the program. The condition code
flags are stored in the status register. The status register is also referred to as the flag
register. ALU operations and certain register operations may set or reset one or more
bits in the status register. Status bits lead to a new set of microprocessor instructions.
These instructions permit the execution of a program to change flow on the basis of
the condition of bits in the status register. So the condition bits in the status register
can be used to take logical decisions within the program. Some of the common
condition code flags are :
1) Carry/Borrow : The carry bit is set when the summation of two 8-bit numbers is
greater than 1111 1111 (FFH). A borrow is generated when a larger number is
subtracted from a smaller number.

2) Zero : The zero bit is set when the contents of the register are zero after any
operation. This happens not only when you decrement the register, but also when any
arithmetic or logical operation causes the contents of the register to be zero.
3) Negative or Sign : In 2's complement arithmetic, the most significant bit is a sign
bit. If this bit is logic 1, the number is a negative number; otherwise it is a positive
number. The negative bit or sign bit is set when any arithmetic or logical operation
gives a negative result.

4) Auxiliary Carry : The auxiliary carry bit of the status register is set when an
addition in the first 4 bits causes a carry into the fifth bit. This is often referred to
as half carry or intermediate carry. This is used in BCD arithmetic.

[Fig. 1.12 : an 8-bit number with the most significant bit S as the sign bit and the
remaining 7 bits as the magnitude]

Fig. 1.12
5) Overflow Flag : In 2's complement arithmetic, the most significant bit is used to
represent the sign and the remaining bits are used to represent the magnitude of a
number (see Fig. 1.12). This flag is set if the result of a signed operation is too large to
fit in the number of bits available (7 bits for an 8-bit number) to represent it.
For example, suppose you add the 8-bit signed number 01110110 (+118 decimal) and the
8-bit signed number 00110110 (+54 decimal). The result will be 10101100 (+172
decimal), which is the correct binary result, but in this case it is too large to fit in the
7 bits allowed for the magnitude in an 8-bit signed number. The overflow flag will be
set after this operation to indicate that the result of the addition has overflowed into
the sign bit.
6) Parity : When the result of an operation leaves the indicated register with an even
number of 1's, the parity bit is set.
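The flag rules above can be collected into one Python function; the conventions chosen here (for example, parity computed over all 8 result bits) are common ones in the spirit of the text, not the definition used by any particular processor.

```python
def add8_flags(a, b):
    """Add two 8-bit values and compute the condition code flags."""
    full = a + b
    result = full & 0xFF
    flags = {
        "carry": full > 0xFF,                 # carry out of bit 7
        "zero": result == 0,                  # all result bits are 0
        "sign": (result & 0x80) != 0,         # MSB of the result
        # overflow: both operands share a sign, but the result's differs
        "overflow": ((a ^ result) & (b ^ result) & 0x80) != 0,
        "aux_carry": ((a & 0x0F) + (b & 0x0F)) > 0x0F,  # carry into bit 4
        "parity": bin(result).count("1") % 2 == 0,       # even number of 1s
    }
    return result, flags

# The text's example: +118 + +54 overflows the 7-bit magnitude
res, f = add8_flags(0b01110110, 0b00110110)
print(format(res, "08b"), f["overflow"], f["sign"])  # 10101100 True True
```

Running the text's example confirms the discussion: no carry out of the byte occurs, yet the overflow and sign flags are both set because the sum spilled into the sign bit.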


1.8.7 Generating Memory Addresses

The address of the memory can be specified directly within the instruction, for
example, MOV [2000H], R1. In this instruction the memory address is fixed; it cannot be
dynamically changed in the program itself. There are some situations where we need
to change the memory address dynamically. Let us see the example program. In this
program the contents of an array of data are added to get the total sum of all
array elements. We know that for this we have to repeat the add instruction a number of
times equal to the number of array elements, and each time we have to add a number from a
new successive memory location. Every time the address of memory is different. So to
change the address of memory each time we enter the loop, an address variable is
used. Such addressing is called indirect addressing. For example, ADD R1, (R2). Here,
the contents of the R2 register are used as the address of a memory location. By
incrementing the contents of register R2 it is possible to change the memory address
each time we enter the loop.
Note : The instruction used in the program given below specifies first the destination
operand and then the source operand.
MOV R2, ARRAY_START   ; Load the starting address of the array
MOV R0, COUNT         ; Initialize the counter
MOV R1, 0             ; Result <- 0
LOOP: ADD R1, (R2)    ; Result <- Result + array element
INC R2                ; Increment memory pointer
DEC R0                ; Decrement count
JNZ LOOP              ; If count != 0, repeat
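The effect of this loop can be simulated in Python, with a dictionary standing in for memory; the addresses (100 to 103) and data values are made up for illustration.

```python
# The array occupies four consecutive memory locations starting at 100.
memory = {100: 4, 101: 8, 102: 15, 103: 16}

r2 = 100          # MOV R2, ARRAY_START : pointer to the array
r0 = 4            # MOV R0, COUNT       : number of elements
r1 = 0            # MOV R1, 0           : clear the result

while True:
    r1 += memory[r2]   # ADD R1, (R2) : indirect - R2 holds the address
    r2 += 1            # increment the memory pointer
    r0 -= 1            # decrement the count
    if r0 == 0:        # if count != 0, repeat
        break

print(r1)  # 43
```

Because R2 is incremented on every pass, the same ADD instruction reads a different memory location each time, which is exactly what indirect addressing makes possible.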

1.9 Instruction Set Architecture

The Instruction Set Architecture (ISA) is the part of the processor that is visible to
the programmer or compiler writer. The ISA serves as the boundary between software
and hardware. We will briefly describe the instruction sets found in many of the
microprocessors used today. The ISA of a processor can be described using 5
categories :

1. Operand storage in the CPU
2. Number of explicitly named operands
3. Operand location
4. Operations
5. Type and size of operands

Of all the above, the most distinguishing factor is the operand storage. The three
most common types of ISAs are :


1. Stack : The operands are implicitly on top of the stack.

2. Accumulator : One operand is implicitly the accumulator.

3. General Purpose Register (GPR) : All operands are explicitly mentioned; they
are either registers or memory locations.

Let us look at the assembly code of A = B + C; in all 3 architectures :

Stack          Accumulator     GPR
PUSH B         LOAD B          LOAD R1, B
PUSH C         ADD C           ADD R1, C
ADD            STORE A         STORE A, R1
POP A

Not all processors can be neatly tagged into one of the above categories. The Intel
8086 has many instructions that use implicit operands although it has a general
register set. The Intel 8051 is another example; it has 4 banks of GPRs but most
instructions must have the A register as one of its operands.
Let us see the advantages and disadvantages of the above instruction set architectures.

Stack
Advantages : Simple model of expression evaluation (reverse Polish). Short
instructions.
Disadvantages : A stack cannot be randomly accessed. This makes it hard to
generate efficient code. The stack itself is accessed on every operation and becomes a
bottleneck.

Accumulator
Advantages : Short instructions.
Disadvantages : The accumulator is only temporary storage, so memory traffic is the
highest for this approach.

GPR
Advantages : Makes code generation easy. Data can be stored for long periods in
registers.
Disadvantages : All operands must be named, leading to longer instructions.

Earlier CPUs were of the first 2 types, but in the last 15 years all CPUs made are
GPR processors. The 2 major reasons are that registers are faster than memory, and the
more data that can be kept internally in the CPU the faster the program will run. The
other reason is that registers are easier for a compiler to use.


As we mentioned before, most modern CPUs are of the GPR (General Purpose
Register) type. A few examples of such CPUs are the IBM 360, DEC VAX, Intel 80x86
and Motorola 68xxx. But while these CPUs were clearly better than previous stack and
accumulator based CPUs, they were still lacking in several areas :

1. Instructions were of varying length from 1 byte to 6-8 bytes. This causes
problems with the pre-fetching and pipelining of instructions.

2. ALU (Arithmetic Logical Unit) instructions could have operands that were
memory locations. Because the number of cycles it takes to access memory
varies, so does the whole instruction. This isn't good for compiler writers,
pipelining and multiple issue.

3. Most ALU instructions had only 2 operands, where one of the operands is also
the destination. This means this operand is destroyed during the operation, or
it must be saved before somewhere.
Thus in the early 80's the idea of RISC was introduced. The SPARC project was
started at Berkeley and the MIPS project at Stanford. RISC stands for Reduced
Instruction Set Computer. The ISA is composed of instructions that all have exactly
the same size, usually 32 bits. Thus they can be pre-fetched and pipelined successfully.
All ALU instructions have 3 operands which are only registers. The only memory
access is through explicit LOAD/STORE instructions.
Thus A = B + C will be assembled as :

LOAD R1, B
LOAD R2, C
ADD R3, R1, R2
STORE A, R3

Although it takes 4 instructions, we can reuse the values in the registers.

RISC processors have faster clock rates. The clock rates range from 20 to 120 MHz,
determined by the implementation technology employed.
Most RISC processors use hardwired control. The architectural details of RISC
processors are covered in the next point, which compares RISC with CISC. Most
RISC processors use 32-bit instructions. There are very few instructions. The
instructions are predominantly register-based. A limited number (3 to 5) of addressing
modes is used by these processors. The memory access cycle is broken into pipelined access
operations. This involves the use of caches and working registers. A large register file
and separate instruction and data caches are used. This benefits internal data
forwarding and eliminates unnecessary storage of intermediate results. Using
hardwired control, the clock cycles per instruction (CPI) are reduced to 1 for most
RISC instructions. This is the advantage of having all instructions of equal size.


1.10.1 RISC Versus CISC

Future processors may be designed with features from both types. Therefore the
boundary between RISC and CISC architectures has become blurred in recent years.
Fig. 1.13 shows the architectural distinctions between RISC and CISC.

[Fig. 1.13 : (a) The RISC architecture with hardwired control and split instruction
cache and data cache. (b) The CISC architecture with microprogrammed control
(control memory) and a unified cache.]

Fig. 1.13

As shown in Fig. 1.13, RISC architecture uses separate instruction and data caches.
Their access paths are also different (Note : exceptions do exist). In a CISC processor,
there is a unified cache for holding both instructions and data. Therefore they have to
share the same path for data and instructions.
The hardwired control is found in most RISC processors while the traditional CISC
processors use microprogrammed control. Thus the control memory is needed in these
processors. This may significantly slow down the instruction execution. However, the
modern CISC processors may also use hardwired control. So split caches and
hardwired control are not exclusive to RISC machines.

Let us compare the characteristics of RISC and CISC processors. Table 1.2 shows
the comparison between them.



RISC : Clock rate is 50 - 150 MHz in 1993.
CISC : Clock rate is 33 - 50 MHz in 1992.

RISC : Simple instructions taking one cycle. The average CPI is less than 1.5.
CISC : Complex instructions taking multiple cycles. The average CPI is between 2 and 15.

RISC : Very few instructions refer to memory.
CISC : Most of the instructions may refer to memory.

RISC : Instructions are executed by hardware.
CISC : Instructions are executed by microprogram.

RISC : Fixed format instructions.
CISC : Variable format instructions.

RISC : Few instructions.
CISC : Many instructions.

RISC : Few addressing modes, and most instructions have register to register
addressing mode. Complex addressing modes are synthesized in software.
CISC : Many addressing modes. Supports complex addressing modes.

RISC : Multiple register sets.
CISC : Single register set.

RISC : Highly pipelined.
CISC : Not pipelined or less pipelined.

RISC : Complexity is in the compiler.
CISC : Complexity is in the microprogram.

Table 1.2 Comparison between RISC and CISC processor characteristics

1.11 Addressing Modes

Part of the programming flexibility of each processor is the number of different ways the programmer can refer to data stored in memory or an I/O device. The different ways that a processor can access data are referred to as addressing schemes or addressing modes.

1.11.1 Implementation of Variables and Constants

Variables and constants are the most commonly used data types on every computer. In assembly language, a variable is initialized by assigning a register or a memory location to hold its value. Thus, by changing the contents of the specified register or memory location, it is possible to change the value of the variable. Let us see the addressing modes used for this purpose.
1. Register Mode : The operand is the contents of a processor register. The name of the register is specified in the instruction.

Example : MOVE R1, R2

This instruction copies the contents of register R2 to register R1.

2. Absolute Mode or Direct Mode : The address of the location of the operand is given explicitly as a part of the instruction.

Example : MOVE A, 2000

The above instruction copies the contents of memory location 2000 into the A register. As shown, the address of the operand is given explicitly in the instruction.


The constants for addresses and data can be represented by the immediate addressing mode in assembly language programming. Let us see the immediate addressing mode.

3. Immediate Mode : The operand is given explicitly in the instruction.

Example : MOVE A, #20

The above instruction copies the operand 20 into register A. The sign # in front of the value of an operand is used to indicate that this value is an immediate operand.
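The three modes above can be sketched with a toy machine model. This is only an illustration: the register names, addresses and stored values below are invented, and the MOVE destination-first convention follows the examples in this section.

```python
# Toy model of the first three addressing modes (illustrative values only).
registers = {"A": 0, "R1": 0, "R2": 55}
memory = {2000: 77}

# 1. Register mode: MOVE R1, R2 -> operand is the contents of R2
registers["R1"] = registers["R2"]

# 2. Absolute (direct) mode: MOVE A, 2000 -> operand address is in the instruction
registers["A"] = memory[2000]
print(registers["A"])  # 77

# 3. Immediate mode: MOVE A, #20 -> operand value is in the instruction itself
registers["A"] = 20
print(registers["A"])  # 20
```

Note that only the immediate mode carries the operand itself in the instruction; the other two carry the operand's location.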

1.11.2 Indirection and Pointers

Some processor instructions do not provide an operand or its address explicitly. Instead, they provide the information from which the memory address of the operand can be determined. Such an address is referred to as the effective address (EA) of the operand.

4. Indirect Mode : The effective address of the operand is the contents of a register or a main memory location whose address is given explicitly in the instruction. When the effective address of the operand is the contents of a register, this addressing is known as register indirect addressing.

Example : MOVE A, (R0)

The above instruction copies the contents of the memory location addressed by the contents of register R0 into register A.

The indirection is denoted by placing the name of the register or the memory address given in the instruction in parentheses. For example : MOVE (LOC), R1. This instruction copies the contents of register R1 into the memory location whose address is stored at the memory location LOC.

The register or memory location which provides the address of the operand is known as a pointer.
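Register indirect addressing can be sketched in the same toy-model style (the address 3000 and the stored value are invented for illustration): the register holds a pointer, and the memory word it points at is the operand.

```python
# Register indirect: MOVE A, (R0) -> the EA is the contents of R0
registers = {"A": 0, "R0": 3000}
memory = {3000: 42}

ea = registers["R0"]          # effective address comes from the register (pointer)
registers["A"] = memory[ea]   # the pointed-at word is the operand
print(registers["A"])  # 42
```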

1.11.3 Indexing and Arrays

Indexing is a technique that allows the programmer to refer to data (operands) stored in sequential memory locations one by one.

5. Index Mode : The effective address of the operand is generated by adding a constant value (specified in the instruction) to the contents of a register.

Example : MOVE 20(R1), R2


The above instruction loads the contents of register R2 into the memory location whose address is calculated by adding the contents of register R1 and the constant value (offset or displacement) 20.

Arrays are the most common way to structure data. The array data structure is easy to handle and, moreover, it is supported by almost all programming languages such as 'C', Pascal, Fortran etc. Many programming problems can be solved using arrays.

For example : We need the information about the marks obtained by the students in a particular subject, or we want to know the salary of every employee in some company. Such information can be collectively stored in an array data structure.
We can visualize an array as :










Fig. 1.14 Array a(10) — here a(2) = 40, a(8) = 75, and so on

Note that the elements in array are always stored in contiguous memory locations.
These array elements can be accessed by using indirect or index addressing modes.
A two-dimensional array is something which you can compare with a two-storied building: extra space arranged in rows and columns. Fig. 1.15 represents a two-dimensional array.

Fig. 1.15 Two-dimensional array


The two-dimensional array elements can be accessed by the index addressing mode. Using index addressing mode we can specify the row address in the register, and the offset gives the column address. For example : MOV R2, 10(R1). In this instruction R1 specifies the row address and the offset 10 gives the column address.
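Index-mode array access can be sketched as below, using the known entries a(2) = 40 and a(8) = 75 from Fig. 1.14. The base address, one-word element size, and the other element values are assumptions made up for the illustration.

```python
# Index mode: EA = offset + contents of the index register.
a = [25, 30, 40, 35, 60, 55, 70, 65, 75, 90]   # only a(2)=40 and a(8)=75 come
base = 1000                                     # from Fig. 1.14; the rest and the
memory = {base + i: v for i, v in enumerate(a)} # base address are invented

registers = {"R1": base}                        # R1 points at the start of a(10)

def ea(offset, reg):
    # The address computed by a MOVE offset(R1), R2 style instruction
    return offset + registers[reg]

print(memory[ea(2, "R1")])  # 40  -> element a(2)
print(memory[ea(8, "R1")])  # 75  -> element a(8)
```

Stepping the offset (or the register) by the element size walks through the consecutive elements, which is exactly what makes this mode convenient for arrays.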

1.11.4 Relative Addressing

We have seen the index addressing mode using general-purpose processor registers. If we use this addressing mode with the program counter instead of a general-purpose register, we get the relative addressing mode.

6. Relative Mode : The effective address is determined by the index mode using the program counter in place of a general-purpose processor register.
This addressing mode is commonly used to specify the target address in branch instructions. For example, a conditional branch instruction causes program execution to go to the branch target location identified by the name BACK if the branch condition is satisfied. The branch target location can be determined by specifying it as an offset from the current value of the program counter. Since the branch target location may be either before or after the branch instruction, the offset is specified as a signed number.
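The target computation itself is just signed addition, as this minimal sketch shows (the PC value and offsets are invented; word-granularity addresses are assumed):

```python
# PC-relative branch: new PC = current PC + signed offset.
def branch_target(pc, signed_offset):
    return pc + signed_offset

print(branch_target(1000, 24))   # 1024 -> target lies after the branch
print(branch_target(1000, -16))  # 984  -> target lies before the branch
```

A negative offset reaches backwards, which is why loops that branch back to an earlier label (like BACK above) encode a negative displacement.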

1.11.5 Additional Modes

So far we have discussed six addressing modes : register, absolute (direct), immediate, indirect, index and relative addressing modes. In this section we will see two more addressing modes : autoincrement and autodecrement.

7. Autoincrement Mode : The effective address of the operand is the contents of a register specified in the instruction. After accessing the operand, the contents of this register are incremented to address the next location.

Example : MOVE (R2)+, R0

The above instruction copies the contents of register R0 into the memory location whose address is specified by the contents of register R2. After the copy operation, the contents of register R2 are automatically incremented by 1.
8. Autodecrement Mode : The contents of a register specified in the instruction are decremented, and then they are used as an effective address to access a memory location.

Example : MOVE R1, -(R0)


This instruction initially decrements the contents of register R0, and then the decremented contents of register R0 are used to address the memory location. Finally, the contents of the addressed memory location are copied into register R1.
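Both modes can be sketched together. The register contents and addresses are invented; the step of 1 follows the text above, though real processors usually step by the operand size in bytes.

```python
# Toy sketch of autoincrement and autodecrement (values invented).
registers = {"R0": 3000, "R1": 0, "R2": 4000, "R5": 7}
memory = {4000: 0, 2999: 88}

# Autoincrement, e.g. MOVE (R2)+, R5: use R2 as the EA, then increment it
memory[registers["R2"]] = registers["R5"]
registers["R2"] += 1                 # R2 now addresses the next location

# Autodecrement, MOVE R1, -(R0): decrement R0 first, then use it as the EA
registers["R0"] -= 1
registers["R1"] = memory[registers["R0"]]

print(registers["R2"], registers["R1"])  # 4001 88
```

The asymmetry (increment after use, decrement before use) is what lets the pair implement a stack or walk an array in either direction.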

1.11.6 RISC Addressing Modes

The small instruction set of a typical RISC processor consists mostly of register-to-register operations, and simple load and store operations for memory access. Each operand is brought into a processor register with a load instruction, and results are transferred to memory by means of a store instruction. In this architecture almost all instructions have simple register addressing, and thus it uses only a few addressing modes. Usually, the RISC processor has three basic addressing modes :

Register addressing

Immediate operand addressing, and

Relative to PC addressing for branch instructions

Register Addressing : In register addressing, the instruction usually consists of three fields : an opcode field which specifies an operation, one or two source register fields, and one destination register field. The operation is performed with the data specified in the source register fields and the result is stored in the destination register field. For example : ADD R1, R2, R3 ; R3 ← R1 + R2. In case of memory access, one source register specifies the memory address and the second source register specifies the offset. For example, LD (R1)R2, R3 ; R3 ← M[R1 + R2].
Immediate Operand Addressing : In immediate operand addressing mode, the second source is an immediate operand. The operation is performed with the data specified in the source register field and the immediate operand, and the result is stored in the destination register field. For example : ADD R1, #100, R2 ; R2 ← R1 + 100.
Relative to PC Addressing : In relative to PC addressing, the instruction usually consists of three fields : an opcode field, a condition field and an address field. The opcode field specifies the operation. The condition field specifies one of many possible branch conditions, and the address field specifies the signed offset which is added to the contents of the PC to calculate the new address when the branch condition is satisfied. For example, JMP COND, R1(R2) ; PC ← R1 + R2 if the condition is satisfied.
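The three examples above reduce to simple register arithmetic, as this toy sketch shows (register values invented, and the branch condition hard-coded to true for illustration):

```python
# Toy sketch of the three basic RISC addressing modes (values invented).
regs = {"R1": 5, "R2": 9, "R3": 0}

# Register addressing: ADD R1, R2, R3  ->  R3 = R1 + R2
regs["R3"] = regs["R1"] + regs["R2"]

# Immediate operand:   ADD R1, #100, R2 -> R2 = R1 + 100
regs["R2"] = regs["R1"] + 100

# Branch addressing:   JMP COND, R1(R2) -> PC = R1 + R2 when COND holds
cond = True                      # assume the branch condition is satisfied
pc = regs["R1"] + regs["R2"] if cond else 0

print(regs["R3"], regs["R2"], pc)  # 14 105 110
```

Because every effective address is formed from registers and small constants, no extra memory access is needed just to compute an address — one reason these modes pipeline well.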


Review Questions

1. Explain the various types of computers and their applications.
2. Draw and explain the block diagram of a simple computer with five functional units. [CSE Nov./Dec.-2004, CSE April/May-2004]
3. Summarize the operation of a computer.
4. Define computer memory and computer program.
5. Explain the function of each functional unit in the computer system.
6. Explain the stored program concept.
7. Explain the use of program counter and instruction register.
8. What is the role of program counter in addressing ? [CSE Nov./Dec.-2003]
9. Draw and explain the connections between the processor and the main memory.
10. What do you mean by interrupt ?
11. What is the need of interrupt ?
12. Explain the single bus structure.
13. Explain the multiple bus structure.
14. Define : a) Execution time b) Response time
15. Define processor clock and clock rate.
16. Explain the relation of throughput with execution time and response time.
17. Define MIPS rate and throughput rate.
18. What is MFLOPS ? What is its significance ?
19. Define CPI.
20. How do you determine the instruction execution speed of a processor ? [I.T. April/May-2003]
21. State and explain the basic performance equation.
22. Define CPU time.
23. What is straight-line sequencing ?
24. Explain the process of instruction execution.
25. Specify the sequence of operations involved when an instruction is executed. [CSE April/May-2000]
26. What do you mean by branching ?
27. What is branch target ?
28. What are condition codes ? Explain the use of them.
29. Write a note on instruction sequencing.
30. Write a short note on RISC architecture.
31. Give the comparison between RISC and CISC architectures.
32. Explain the advantages and disadvantages of RISC architecture.
33. Explain the addressing modes of RISC architecture.
34. Comment on VLSI realisation of RISC processor.
35. Describe the features which increase computing speed in RISC machines.
36. Explain how design cost decreases and reliability increases in RISC machines.
37. Comment on HLL support provided by RISC machines.
38. List the addressing modes supported by RISC architecture.


University Questions with Answers

Q.1 Give an example each of zero-address, one-address, two-address and three-address instructions. [May/June-2006, CSE/IT, 2 Marks]

Ans. : Refer section 1.8.3.

Q.2 Which data structures can be best supported using (a) indirect addressing mode (b) indexed addressing mode ? [May/June-2006, CSE/IT, 2 Marks]

Ans. : Indirect addressing mode supports pointers, and indexed addressing mode supports arrays.

Q.3 Explain in detail the different types of instructions that are supported in a typical processor. [May/June-2006, CSE/IT, 10 Marks]

Ans. : Refer section 1.8.


Q.4 Registers R1 and R2 of a computer contain the decimal values 1200 and 2400 respectively. What is the effective address of the memory operand in each of the following instructions ? [May/June-2006, CSE/IT, 6 Marks]
1) Load 20(R1), R5   2) Add -(R2), R5   3) Move #3000, R5   4) Sub (R1)+, R5

Ans. : R1 = 1200, R2 = 2400

Sr. No. | Instruction     | Effective address of the operand
1       | Load 20(R1), R5 | 1200 + 20 = 1220
2       | Add -(R2), R5   | 2400 - 1 = 2399
3       | Move #3000, R5  | Operand is immediate; no memory operand address
4       | Sub (R1)+, R5   | 1200


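These effective addresses can be checked mechanically; this quick sketch assumes the decrement/increment step of 1 used in section 1.11.5.

```python
# Effective addresses for Q.4 with R1 = 1200, R2 = 2400.
R1, R2 = 1200, 2400

ea_load = 20 + R1   # Load 20(R1), R5 : index mode, offset + register
ea_add  = R2 - 1    # Add -(R2), R5   : autodecrement (decrement, then use)
ea_sub  = R1        # Sub (R1)+, R5   : autoincrement (use, then increment)
# Move #3000, R5 uses an immediate operand, so no memory address is formed.

print(ea_load, ea_add, ea_sub)  # 1220 2399 1200
```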

Q.5 What is a bus ? What are the different buses in a CPU ? [Nov./Dec.-2006, CSE/IT, 2 Marks]

Ans. : Refer section 1.5.

Q.6 What are the four basic types of operations that need to be supported by an instruction set ? [Nov./Dec.-2006, CSE/IT, 2 Marks]

Ans. : Refer section 1.8.

Q.7 Give the different instruction formats of a CPU in general. [Nov./Dec.-2006, CSE/IT, 6 Marks]

Ans. : Refer section 1.8.3.


Q.8 Define addressing mode. Classify addressing modes and explain each type with an example. [Nov./Dec.-2006, CSE/IT, 10 Marks]

Ans. : Refer section 1.11.

Q.9 Explain instruction set and instruction sequencing. [Nov./Dec.-2006, CSE/IT, 10 Marks]

Ans. : Refer section 1.8.

Q.10 Why is the data bus bidirectional and the address bus unidirectional in most microprocessors ? [May/June-2007, CSE/IT, 2 Marks]

Ans. : Refer section 1.5.


Q.11 Describe different types of addressing modes in detail. [May/June-2007, CSE/IT, 8 Marks]

Ans. : Refer section 1.11.

Q.12 The memory unit of a computer has 256 K words of 32 bits each. The computer has an instruction format with four fields : an operation code field, a mode field to specify one of seven addressing modes, a register address field to specify one of 60 processor registers, and a memory address. Specify the instruction format and the number of bits in each field if the instruction is in one memory word. [May/June-2007, ECE, 2 Marks]

Ans. :
Total memory size = 256 K x 32 bits = 1024 Kbytes
∴ Address bits = 20
Mode field = 3 bits (2^3 = 8 > 7)
Register address field = 6 bits (2^6 = 64 > 60)
Opcode field = 32 - 20 - 3 - 6 = 3 bits
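The bit allocation can be verified with a short calculation, assuming (as the answer does) byte addressing of the 1024-Kbyte memory and seven addressing modes:

```python
import math

# 256 K words of 32 bits = 256 K x 4 bytes of byte-addressable memory
memory_bytes = 256 * 1024 * (32 // 8)        # 1 048 576 bytes
address_bits = int(math.log2(memory_bytes))  # 20
mode_bits    = math.ceil(math.log2(7))       # 3, since 2^3 = 8 >= 7 modes
reg_bits     = math.ceil(math.log2(60))      # 6, since 2^6 = 64 >= 60 registers
opcode_bits  = 32 - address_bits - mode_bits - reg_bits

print(address_bits, mode_bits, reg_bits, opcode_bits)  # 20 3 6 3
```

With word addressing instead of byte addressing the address field would need only 18 bits, leaving 5 bits for the opcode; the answer above follows the byte-addressed reading.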

Q.13 What is meant by the stored program concept ? Discuss. [May/June-2007, ECE, 2 Marks]

Ans. : Refer section 1.3.2.

Q.14 What are the various types of Instruction Set Architectures (ISAs) possible ? Discuss. [May/June-2007, ECE, 8 Marks]

Ans. : Refer section 1.9.

Q.15 Discuss the various issues to be considered while designing the ISA of a processor. [May/June-2007, ECE, 8 Marks]

Ans. : We should consider the following issues in designing the ISA of a processor.


Instruction format : Fixed or variable.

Addressing modes support : Many or few, and whether to support complex addressing modes.

Number of CPU registers : Less or more.

Instruction execution time : One cycle or variable cycles.

Separate caches for data and instruction : Yes or no.

Pipeline support : High, less or no.

Instruction implementation : Hardwired or microprogrammed.

Instruction implementation complexity : High or less.

Number of instructions : Few or more.

Q.16 How many 128 x 8 RAM chips are needed to provide a memory capacity of 2048 bytes ? [May/June-2007, ECE, 2 Marks]

Ans. : 16
Q.17 What is the information conveyed by addressing modes ? [Nov./Dec.-2007, CSE, 4 Marks]

Ans. : Refer section 1.11.


Q.18 Write notes on instruction formats. [Nov./Dec.-2007, June-2008, CSE, 4 Marks]

Ans. : An instruction format defines the layout of the bits of an instruction. It must include an opcode, zero or more operands, and an addressing mode for each operand. The instruction length is usually kept in a multiple of the character length, or memory transfer length, which is usually 8 bits. With this length we will always get an integral number of instructions during a fetch cycle. Once the instruction length is fixed, it is necessary to allocate the number of bits for the opcode, operand(s) and addressing modes. For an instruction format of a given length, if more bits are allocated to the opcode field then fewer bits are available for addressing. The bit allocation for addressing can be determined by the following factors, which simplifies the task of allocating bits in the instruction :

Number of addressing modes

Number of operands

Number of CPU registers

Number of register sets

Address range or number of address lines

Address granularity (an address can refer to a byte, word or double word)


Fig. 1.17 shows the general IA-32 instruction format. The format consists of four fields : an opcode field (1 or 2 bytes), an addressing mode field (1 or 2 bytes), a displacement field (1 or 4 bytes) and an immediate field (1 or 4 bytes).

Fig. 1.17 IA-32 instruction format

The opcode field consists of one or two bytes. The addressing mode information is contained in one or two bytes immediately following the opcode. For instructions that involve the use of only one register in generating the effective address of an operand, only one byte is needed in the addressing mode field. The addressing modes base with index, and base with index and displacement, require two registers to generate the effective address of an operand; hence the addressing mode field for these two addressing modes is two bytes.

If a displacement value is used in computing an effective address for a memory operand, it is encoded into either one or four bytes in a field that immediately follows the addressing mode field.

If one of the operands is an immediate value, then it is placed in the last field of the instruction and it occupies either one or four bytes.

The instruction format of a processor also changes according to the address references in the instruction.

Q.19 List the various addressing modes. Give a brief explanation of each of them with an example. [Nov./Dec.-2007, CSE, 8 Marks]

Ans. : Refer section 1.11.

Q.20 Explain the instruction cycle, highlighting the sub-cycles and the sequence of steps to be followed. [Nov./Dec.-2007, ECE, 8 Marks]

Ans. : Refer section 1.8.4.


Q.21 Draw the single bus and three bus organization of the data path inside a processor. [Nov./Dec.-2007, ECE, 4 Marks]

Ans. : Refer section 1.5.

Q.22 Draw the structure of an alternative two-bus structure. [Nov./Dec.-2007, ECE, 2 Marks]

Ans. : Refer section 1.5.

Q.23 What is meant by straight-line sequencing ? [Nov./Dec.-2007, ECE, 2 Marks]

Ans. : Refer section 1.8.4.



Q.24 Explain in detail about functional units and bus structures of computers. [Nov./Dec.-2007, ECE, 16 Marks]

Ans. : Refer sections 1.3 and 1.5.

Q.25 Describe various addressing modes with suitable examples. [Nov./Dec.-2007, ECE, 16 Marks]

Ans. : Refer section 1.11.


Q.26 Explain in detail about instruction execution characteristics. [Nov./Dec.-2007, ECE, 16 Marks]

Ans. : Refer section 1.8.4.

Q.27 Why is the data bus in most microprocessors bidirectional while the address bus is unidirectional ? [May/June-2007, CSE, 2 Marks]

Ans. : Refer section 1.5.


Q.28 With a neat diagram explain Von Neumann computer architecture. [May/June-2007, CSE, 12 Marks]

Ans. : The idea of having a computer wired for general computations with the program stored in memory was introduced by John Von Neumann when he was working as a consultant at the Moore School. He and the originators of ENIAC designed the first stored program computer, named EDVAC (Electronic Discrete Variable Automatic Computer). The stored program concept in EDVAC facilitated the users to enter and alter various programs and do a variety of computations.

The EDVAC project was further developed by Von Neumann with his collaborators at the Institute for Advanced Studies (IAS) in Princeton. They came up with a new machine referred to as the IAS or Von Neumann machine. It has now become the usual frame of reference for many modern computers.
Fig. 1.18 shows the general structure of a Von Neumann machine. It consists of five basic units whose functions can be summarized as follows :

Fig. 1.18 A Von Neumann machine

The input unit transmits data and instructions from the outside world to the machine. It is operated by the control unit.

The memory unit stores both data and instructions.

The arithmetic-logic unit performs arithmetic and logical operations.

The control unit fetches and interprets the instructions in memory and causes them to be executed.

The output unit transmits final results and messages to the outside world.

In the original IAS machine (Von Neumann machine), the memory unit consists of 4096 storage locations (2^12 = 4096) of 40 bits each, referred to as words. These memory locations are used to store data as well as instructions.
memory locations are used to store data as well as instructions.
Q.29 What are the major instruction design issues ? [May/June-2008, CSE, 4 Marks]

Ans. : The major instruction design issues are as follows :

1. Instruction length : It is the most basic design issue. This decision affects, and is affected by, memory size, memory organization, bus structure, CPU complexity and CPU speed. The instruction length is usually kept in a multiple of the character length, or memory transfer length, which is usually 8 bits.

2. Length of opcode, operands and addressing modes : Once the instruction length is fixed, it is necessary to allocate the number of bits for the opcode, operand(s) and addressing modes. For an instruction format of a given length, if more bits are allocated to the opcode field then fewer bits are available for addressing. The bit allocation for addressing can be determined by the following factors, which simplifies the task of allocating bits in the instruction :

Number of addressing modes

Number of operands

Number of CPU registers

Number of register sets

Address range or number of address lines

Address granularity (an address can refer to a byte, word or double word)

3. Address references : The instruction design is also affected by the address references in the instruction : one-address, two-address, three-address or zero-address instructions.

Q.30 Registers R1 and R2 of a computer contain the decimal values 1200 and 4600. What is the effective address of the memory operand in each of the following instructions ?
a) Load 20(R1), R5
b) Add -(R2), R5 [May/June-2008, ECE, 2 Marks]

Ans. : a) EA = 1200 + 20 = 1220
b) EA = 4600 - 1 = 4599


Q.31 Explain in detail the different instruction types and instruction sequencing. [May/June-2008, ECE, 16 Marks]

Ans. : Refer sections 1.8.3 and 1.8.4.

Q.32 Explain the different types of addressing modes with suitable examples. [May/June-2008, ECE, 16 Marks]

Ans. : Refer section 1.11.

Q.33 Draw the block diagram of a basic processor. [Nov./Dec.-2008, CSE/IT, 2 Marks]

Ans. : Refer section 1.3.

Q.34 Differentiate direct and indirect addressing mode. [Nov./Dec.-2008, CSE/IT, 2 Marks]

Ans. : Refer section 1.11.

Q.35 Write the basic performance equation and, using this equation, explain how the performance of a system can be improved. [Nov./Dec.-2008, CSE/IT, 16 Marks]

Ans. : Refer section 1.7.3.

Q.36 Name the functional units of a computer and how they are interrelated. [Nov./Dec.-2008, ECE, 2 Marks]

Ans. : Refer section 1.3.

Q.37 When will addressing modes be viewed critically ? [Nov./Dec.-2008, ECE, 2 Marks]

Ans. : In case of pipelining, addressing modes will be viewed critically.


Q.38 What is meant by a multiple bus ? Where is it organised ? [Nov./Dec.-2008, ECE, 2 Marks]

Ans. : Refer section 1.6.1.


Q.39 What are the softwares used in a computer to operate all the functional units ? Discuss briefly on the bus structures. [Nov./Dec.-2008, ECE, 6 Marks]

Ans. : Refer sections 1.6 and 1.5.

Q.40 Discuss the operations of a bus. [Nov./Dec.-2008, ECE, 6 Marks]

Ans. : Refer section 1.5.


Q.41 Define SPEC rating. [May/June-2009, ECE, 2 Marks]

Ans. : Refer section 1.7.4.

Q.42 What are the major functions of system software in a typical computer ? [May/June-2009, ECE, 2 Marks]

Ans. : Refer section 1.6.


Q.43 Describe the addressing modes for accessing memory content. [May/June-2009, ECE, 8 Marks]

Ans. : Refer section 1.11.

Q.44 Name and explain the various special registers in a typical computer. [May/June-2009, ECE, 8 Marks]

Ans. : Refer section 1.4.

Q.45 Explain zero, one, two and three address instructions with examples. [May/June-2009, ECE, 8 Marks]

Ans. : Refer section 1.8.3.

Q.46 What is a system bus ? [May/June-2009, CSE, 2 Marks]

Ans. : Refer section 1.5.

Q.47 Describe the functional units of the computer system. [May/June-2009, CSE/IT, 8 Marks]

Ans. : Refer section 1.3.

Q.48 Explain briefly the various types of addressing modes with examples. [May/June-2009, CSE/IT, 16 Marks]

Ans. : Refer section 1.11.