
NARAYANA ENGINEERING COLLEGE::NELLORE

DEPARTMENT OF CSE
ACADEMIC YEAR: 2017-2018
Objective Questions
Class : IV B.Tech.
Branch : CSE
Staff / Dept. : M.JEEVAN KUMAR / CSE
Subject : Advanced Computer Architecture

UNIT I
1. Computer architecture involves both ------------ [C]
a) S/W b) H/W c) both d) none
2. Classification of various computer architectures is based on ---- [C]
a) instructions b) data streams c) both a&b d) none
3. Conventional sequential machines are called --------- computers [A]
a) SISD b) SIMD c) MIMD d) both a&b
4. Vector computers are equipped with ------------ [C]
a) scalar H/W b) vector H/W c) both a&b d) none
5. Parallel computers are reserved for -------- [C]
a) SISD b) SIMD c) MIMD d) none
6. Performance of a computer system demands a perfect match between ---- [C]
a) machine capability b) program behavior c) both a&b d) none
7. The simplest measure of program performance is --- [A]
a) turnaround time b) throughput c) both a&b d) none
8. How many shared-memory multiprocessor models are there? [B]
a) 2 b) 3 c) 4 d) 1
9. In the -------- model, physical memory is uniformly shared by all the processors [A]
a) UMA model b) NUMA model c) COMA model d) none
10. The message-passing network provides point-to-point ---- connections among the nodes [A]
a) static b) dynamic c) both a&b d) none
11. PRAM stands for ______________ [A]
a) parallel random access machine b) programmable random access memory
c) parallel read-only access memory d) none
12. The time complexity of a serial algorithm is simply called ----- [A]
a) serial complexity b) parallel complexity c) polynomial complexity d) none
13. The time complexity of a parallel algorithm is simply called ----- [B]
a) serial complexity b) parallel complexity c) polynomial complexity d) none
14. The ordering relationship between statements is indicated by -- dependence [A]
a) data b) size c) instruction d) both a&b
15. How many types of data dependence are there? [C]
a) 3 b) 4 c) 5 d) 2
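For question 15, the usual classification lists five types: flow, anti, output, I/O, and unknown dependence. The first three can be sketched with ordinary assignments (the statement labels S1/S2 are illustrative, not from any particular program):

```python
# Flow (read-after-write): S2 reads the value S1 writes.
a = 5          # S1 writes a
b = a + 1      # S2 reads a  -> flow dependence S1 -> S2

# Anti (write-after-read): S2 writes a variable S1 reads; swapping them changes c.
c = b * 2      # S1 reads b
b = 0          # S2 writes b -> anti-dependence S1 -> S2

# Output (write-after-write): both write the same variable; order fixes the result.
d = 1          # S1 writes d
d = 2          # S2 writes d -> output dependence S1 -> S2

print(a, b, c, d)  # prints 5 0 12 2
```

Reordering any of these statement pairs would change the printed values, which is exactly why a scheduler must preserve the dependence ordering.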
16. Parallelism appears in various forms, such as ---- [D]
a) pipelining b) concurrency c) overlapping d) all
17. ______ parallelism is a function of cost and performance tradeoffs [C]
a) S/W b) H/W c) both d) none
18. ------ is a measure of the amount of computation involved in a software process [A]
a) grain size b) latency c) both a&b d) none
19. ------ is a time measure of the communication overhead incurred between machine subsystems [B]
a) grain size b) latency c) both a&b d) none
20. Demand-driven computation chooses a ___________ approach [A]
a) top-down b) bottom-up c) hybrid d) none
UNIT II
1. ---- describes the instruction execution rate and floating-point capability of a parallel computer [C]
a) MIPS b) Mflops c) both a&b d) none
2. The MIPS rating depends on the ---------- [A]
a) instruction set b) data set c) both a&b d) none
3. Any machine having hundreds or thousands of processors is a ----- [A]
a) massively parallel computer b) massively parallel processor
c) massively vector computer d) massively vector processor
4. Scalable computers are used for solving ------ [B]
a) portable problems b) scalable problems c) both a&b d) none
5. Granularity decides the ---- in computation [C]
a) size of data items b) program modules c) both a&b d) none
6. Amdahl's law is based on a -------------- [C]
a) fixed workload b) fixed problem size c) both a&b d) none
7. Fixed-time speedup was originally developed in -------- law [B]
a) Amdahl's b) Gustafson's c) both a&b d) none
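Questions 6 and 7 can be made concrete with the two speedup formulas: Amdahl's fixed-workload speedup S(n) = 1 / ((1 - f) + f/n) and Gustafson's fixed-time (scaled) speedup S'(n) = (1 - f) + f·n, where f is the parallelizable fraction and n the number of processors. A minimal sketch (function and variable names are my own):

```python
def amdahl_speedup(f, n):
    """Fixed-workload speedup: the serial fraction (1 - f) caps the gain at 1/(1-f)."""
    return 1.0 / ((1.0 - f) + f / n)

def gustafson_speedup(f, n):
    """Fixed-time (scaled) speedup: the workload grows with machine size n."""
    return (1.0 - f) + f * n

# With 95% parallel code on 1024 processors:
print(amdahl_speedup(0.95, 1024))     # about 19.6, bounded near 1/(1-f) = 20
print(gustafson_speedup(0.95, 1024))  # about 972.85
```

The contrast is the point of question 7: Amdahl's bound stays near 20 no matter how large n grows, while Gustafson's scaled speedup keeps climbing with n.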
8. Scalability of a computer is affected by ------------ [D]
a) machine size b) clock rate c) problem size d) all
9. ------ is a request from I/O or other devices to the processor for service [A]
a) interrupt b) transaction c) priority d) none
10. A priority interrupt bus is used to pass -------- [A]
a) interrupt signals b) messages c) both a&b d) none
11. The process of selecting the next bus master is called ---- [A]
a) arbitration b) transaction c) interrupt d) both a&b
12. The duration of a master's control of the bus is called --- [B]
a) transaction b) bus tenure c) interrupt d) both a&b
13. -------- splits the request and response into separate bus transactions [A]
a) split transaction b) connected transaction c) both a&b d) none
14. --------- is used to carry out a master's request and a slave's response in a single bus transaction [B]
a) split transaction b) connected transaction c) both a&b d) none
15. How many types of address formats are there for memory interleaving? [A]
a) 2 b) 4 c) 3 d) 5
16. Sequential addresses are assigned in ---- interleaved memory [A]
a) high-order b) low-order c) both a&b d) none
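Questions 15 and 16 refer to the two address formats: low-order interleaving uses the low-order address bits to select the module (consecutive addresses rotate across modules), while high-order interleaving uses the high-order bits (consecutive addresses stay within one module). A minimal sketch with made-up sizes of 4 modules and 256 words per module:

```python
M = 4           # number of memory modules (assumed for illustration)
WORDS = 256     # words per module (assumed for illustration)

def low_order_module(addr):
    # low-order bits select the module: consecutive addresses spread across modules
    return addr % M

def high_order_module(addr):
    # high-order bits select the module: consecutive addresses stay in one module
    return addr // WORDS

print([low_order_module(a) for a in range(4)])   # [0, 1, 2, 3]
print([high_order_module(a) for a in range(4)])  # [0, 0, 0, 0]
```

Low-order interleaving is what supports pipelined access to sequential data; high-order interleaving keeps a contiguous block inside a single module.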
17. ---- is the process of moving blocks of information between the levels of a memory hierarchy [B]
a) memory mapping b) memory swapping c) both a&b d) none
18. The portion of the kernel which handles the allocation and deallocation of main memory to executing processes is called the ---------- [A]
a) memory manager b) memory leak c) both a&b d) none
19. ------ is used to declare whether a memory event is legal or illegal [B]
a) process event b) event ordering c) both a&b d) none
20. The behavior of a shared-memory system as observed by processors is called the ---- [C]
a) memory manager b) memory leak c) memory model d) none

UNIT III
1. ---- are linearly connected to perform a fixed function over a stream of data flowing from one end to the other [A]
a) linear pipeline processors b) non-linear pipeline processors
c) both a&b d) none
2. How many types of linear pipelines are there? [B]
a) 3 b) 2 c) 5 d) 6
3. Data flow between adjacent stages in an asynchronous pipeline is controlled by a --- protocol [A]
a) handshaking b) TCP/IP c) FTP d) HTTP
4. A linear pipeline processor is constructed with --------- processing stages [A]
a) k b) k+1 c) 6 d) 15
5. --- are used to interface between stages in the synchronous model [B]
a) latency b) clocked latches c) both a&b d) none
6. The pipeline stages are combinational ------------ circuits [A]
a) logic b) relational c) both a&b d) bitwise
7. The pipeline frequency is defined as the --- of the clock period [A]
a) inverse b) proportional c) both a&b d) none
8. A linear pipeline of k stages can process n tasks in ----- clock cycles [B]
a) k-(n-1) b) k+(n-1) c) k+(n+1) d) k-(n+1)
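The count in question 8 follows because the first task needs k cycles to flow through all k stages, and each of the remaining n-1 tasks completes one cycle later, giving k + (n - 1) cycles total; the speedup over a non-pipelined machine (which needs nk cycles) is nk / (k + n - 1). A quick check (function names are my own):

```python
def pipeline_cycles(k, n):
    # first task fills the k stages; each later task finishes one cycle apart
    return k + (n - 1)

def pipeline_speedup(k, n):
    # non-pipelined execution of n tasks takes n*k cycles
    return (n * k) / pipeline_cycles(k, n)

print(pipeline_cycles(4, 10))   # 13 cycles for 10 tasks in a 4-stage pipeline
print(pipeline_speedup(4, 10))  # 40/13, approaching k = 4 as n grows
```

As n becomes large the speedup approaches k, which is why deeper pipelines promise (but do not guarantee) higher throughput.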
9. A ---- pipeline allows feedforward and feedback connections in addition to the streamline connections [B]
a) static b) dynamic c) both a&b d) none
10. In a ---- it is easy to partition a given function into a sequence of linearly ordered subfunctions [A]
a) static pipeline b) dynamic pipeline c) both a&b d) none
11. The combined set of permissible and forbidden latencies can easily be displayed by a ----- vector [A]
a) collision b) non-collision c) both a&b d) none
12. ------ implies resource conflicts between two initiations in the pipeline [A]
a) collision b) non-collision c) both a&b d) none
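Questions 11 and 12 can be illustrated by deriving the forbidden latencies and the collision vector from a reservation table: a latency d is forbidden if some stage row has two marks d columns apart, and bit i of the collision vector (i = 1 .. n-1, written from the highest bit down) is 1 when latency i is forbidden. A minimal sketch using a made-up 3-stage, 5-cycle table (not taken from any specific textbook example):

```python
# Reservation table: rows = stages, columns = clock cycles; 1 marks stage usage.
table = [
    [1, 0, 0, 0, 1],   # stage 1 used at cycles 0 and 4
    [0, 1, 0, 1, 0],   # stage 2 used at cycles 1 and 3
    [0, 0, 1, 0, 0],   # stage 3 used at cycle 2
]

def forbidden_latencies(table):
    # latency d is forbidden if any stage is used at two cycles d apart
    forbidden = set()
    for row in table:
        used = [t for t, mark in enumerate(row) if mark]
        for i in range(len(used)):
            for j in range(i + 1, len(used)):
                forbidden.add(used[j] - used[i])
    return forbidden

def collision_vector(table):
    # bit for latency d is 1 when d is forbidden; highest latency first
    n = len(table[0])
    forbidden = forbidden_latencies(table)
    return "".join("1" if d in forbidden else "0" for d in range(n - 1, 0, -1))

print(sorted(forbidden_latencies(table)))  # [2, 4]
print(collision_vector(table))             # 1010
```

Initiating a new task at a forbidden latency (here 2 or 4 cycles after the previous one) would cause a collision, i.e. two initiations demanding the same stage in the same cycle.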
13. MAL stands for ---- [B]
a) minimal average lower b) minimal average latency
c) minimal agree lower d) minimal average load
14. ------ is used to modify the reservation table [A]
a) delay insertion b) delay latency c) both a&b d) none
15. A stream of instructions can be executed by a pipeline in an -------- manner [A]
a) overlapped b) linear c) non-linear d) both a&c
16. The instructions are executed in ---- stages [C]
a) one b) several c) both a&b d) 8
17. Sequential instructions are loaded into a pair of ---- for in-sequence pipelining [B]
a) sequential buffers b) parallel buffers c) both a&b d) none
18. The ------------- scheme is supported by an optimizing compiler [A]
a) static scheduling b) dynamic scheduling c) both a&b d) none
19. Dynamic scheduling is achieved with --- schemes [C]
a) Tomasulo's register-tagging b) scoreboarding c) both a&b d) none
20. How many types of branch strategies are there? [B]
a) 1 b) 2 c) 3 d) 5

UNIT IV

1. ---- is an ordered set of scalar data items [A]
a) vector b) struct c) union d) none
2. Vector processing occurs when ------ operations are applied to vectors [C]
a) arithmetic b) logical c) both a&b d) none
3. The conversion from scalar code to vector code is called -------- [A]
a) vectorization b) vectorizer c) both a&b d) none
4. How many types of vector instructions are there? [C]
a) 5 b) 4 c) 6 d) 8
5. A vector operand may have ------ length [B]
a) word b) arbitrary c) bit d) both a&b
6. Compound vector functions depend on -------- arrays [A]
a) one-dimensional b) two-dimensional c) three-dimensional d) both a&b
7. Compound vector functions include ----- [D]
a) load b) store c) multiply d) all
8. ---- are an integral part of all vector processors [C]
a) vector pipelining b) chaining c) both a&b d) none
9. Vector register length is ----- [A]
a) fixed b) variable c) both a&b d) none
10. ----- are an internal part of all vector processors [C]
a) pipelining b) chaining c) both a&b d) none
11. When a vector has a length greater than that of the vector registers, segmentation of the long vector into fixed-length segments is called ------ [A]
a) strip-mining b) vector loop c) both a&b d) none
12. The program construct for processing long vectors is a ------ [B]
a) strip-mining b) vector loop c) both a&b d) none
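Questions 11 and 12 describe the same idiom from two sides: strip-mining segments a long vector into register-length pieces, and the vector loop is the construct that walks over those segments. A minimal sketch assuming a hypothetical maximum vector length of 64; `process_segment` stands in for one vectorized operation over a register-length segment:

```python
MVL = 64  # hypothetical maximum vector (register) length

def process_segment(seg):
    # stands in for a single vector instruction applied to one segment
    return [2 * x for x in seg]

def strip_mined(data):
    # the vector loop: handle the long vector MVL elements at a time
    out = []
    for start in range(0, len(data), MVL):
        out.extend(process_segment(data[start:start + MVL]))
    return out

print(strip_mined(list(range(150)))[:5])  # [0, 2, 4, 6, 8]
```

A 150-element vector is handled here as segments of 64, 64, and 22 elements; real vectorizing compilers generate exactly this loop structure, with the leftover short segment handled first or last.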


13. A ---- is formed with a network of functional units which are locally connected and operate synchronously with multidimensional pipelining [A]
a) systolic array b) systolic computation c) both a&b d) none
14. A two-level pipeline architecture is seen in a ---------- [A]
a) pipeline net b) multinet c) both a&b d) none
15. A pipenet is constructed from interconnecting multiple ------------- [B]
a) functional data b) functional pipelines c) both a&b d) none
16. A pipenet is constructed from interconnecting multiple functional pipelines through ---- BCNs [C]
a) 5 b) 6 c) 2 d) 1
17. The program graph represents the ---- pattern in a given CVF [B]
a) program flow b) data flow c) both a&b d) none
18. Vector processing is also carried out by --------- computers [A]
a) SIMD b) SISD c) MIMD d) MISD
19. Most SIMD computers use a ----- control unit and distributed memories [B]
a) two b) single c) three d) four
20. The instruction set of an SIMD computer is decoded by the ----- control unit [A]
a) array b) uni c) both a&b d) none

UNIT V

1. Latency hiding can be accomplished through ---- complementary approaches [B]
a) 3 b) 4 c) 5 d) 2
2. -------- techniques bring instructions or data close to the processor before they are actually needed [A]
a) prefetching b) postfetching c) both a&b d) none
3. --------- reduces cache misses [A]
a) coherent caches b) relaxed memory c) multiple contexts d) prefetching
4. ---- consistency models allow buffering and pipelining of memory references [B]
a) coherent caches b) relaxed memory c) multiple contexts d) prefetching
5. ----- support allows a processor to switch from one context to another [C]
a) coherent caches b) relaxed memory c) multiple contexts d) prefetching
6. Single-address-space multiprocessors/multicomputers must use ------ virtual memory [A]
a) shared b) distributed c) both a&b d) none
7. The Dash architecture was a -------- multiprocessor system [D]
a) large-scale b) cache-coherent c) NUMA d) all
8. Cache coherence was maintained using a -------- protocol [C]
a) individual b) distributed directory-based c) both a&b d) none
9. -------- levels of cache were used per processing node [C]
a) three b) one c) two d) four
10. Loads and writes were separated with the use of ------ for implementing weaker memory consistency [B]
a) read buffers b) write buffers c) both a&b d) none
11. Prefetching can be classified based on whether it is ----- [C]
a) binding b) nonbinding c) both a&b d) none
12. Prefetching can be controlled by ---- [C]
a) H/W b) S/W c) both a&b d) none
13. The caching of ----- read-write data provided substantial gains in performance [A]
a) shared b) distributed c) both a&b d) none
14. The caching of shared read-write data provided substantial gains in performance, with benefits ranging from ---- to ---- [B]
a) 2.2 to 2.5 b) 2.2 to 2.7 c) 2.2 to 3.7 d) 3.7 to 2.7
15. The scalable coherence interface is specified in ---------- standards [A]
a) IEEE b) IJRE c) IEIE d) all
16. SCI stands for ------- [B]
a) scalable cache interface b) scalable coherence interface
c) scalable cache interconnection d) scalable coherence interconnection
17. SCI supports ----------- connections [A]
a) unidirectional point-to-point b) bidirectional point-to-point c) both a&b d) none
18. Cache-to-cache transactions are ------- [B]
a) prepend b) postpend c) both a&b d) none
19. MC stands for -------- [B]
a) multiple cache b) multiple context c) multiple consistency d) none
20. The most relaxed memory model is the -------------- model [D]
a) relaxed consistency b) sequential consistency c) weak consistency d) all

SIGNATURE OF STAFF HOD PRINCIPAL

NEC NELLORE:
