
V.M.K.V. ENGINEERING COLLEGE
DEPARTMENT OF CSE
M.E. CSE QUESTION BANK
COMPUTER ARCHITECTURE

UNIT I
PART-A

1. Define response time, execution time, throughput, wall clock time, and elapsed time.
2. Mobile devices care more about battery life than power, so energy is the proper metric. Give the equation for dynamic energy.
3. What is CPU time? How can it be classified? Give one example.
4. What is the use of synthetic benchmarks? Give two examples.
5. Define SPEC. What is its use?
6. In addition to specifying registers and constant operands, addressing modes specify the address of a memory object. Specify the various addressing modes.
7. Define desktop benchmarks. What are their types?
8. Give the cost of a packaged integrated circuit and the cost of a die.
9. Amdahl's Law defines the speedup that can be gained by using a particular feature. Give the speedup ratio.
10. Define reproducibility, arithmetic mean, and geometric mean.
11. Mention the use of transaction-processing benchmarks.
12. Four implementation technologies, which change at a dramatic pace, are critical to modern implementations. Mention them.
13. Explain the different types of locality.
14. List out the addressing modes.
15. Suppose that we want to enhance the processor used for web serving. The new processor is 10 times faster on computation in the web-serving application than the original processor. Assuming that the original processor is busy with computation 40% of the time and is waiting for I/O 60% of the time, what is the overall speedup gained by incorporating the enhancement?
16. List out the basic operations in the instruction set.
17. Define pipeline interlock and mention its use.
18. Define pipeline, pipe segment, and processor cycle.
19. Suppose that we are considering an enhancement to the processor of a server system used for web serving. The new CPU is 10 times faster on computation in the web-serving application than the original processor. Assuming that the original CPU is busy with computation 40% of the time and is waiting for I/O 60% of the time, what is the overall speedup gained by incorporating the enhancement? (A worked sketch follows at the end of this unit.)

20. CPU time is equally dependent upon what characteristics? Why?
21. Give the alternative to a structural hazard.
22. What is the CPI of a processor without structural hazards?
23. If 30 seconds of the execution time of a program that takes 60 seconds in total can use the enhancement, find the fraction enhanced. (See the worked sketch at the end of this unit.)

PART-B
1. Discuss the different ways in which an instruction set architecture can be classified.
2. Briefly explain the trends in technology in computer design.
3. Explain memory addressing and discuss the different addressing modes in instruction set architecture.
4. Discuss in detail data hazards and explain the techniques used to overcome them.
5. Briefly explain the trends in cost in computer design.
6. Discuss the trends in power in integrated circuits.
7. Explain in detail about benchmarks.
8. How will you summarize the performance results of a computer? Explain in brief.
9. Explain the quantitative principles of computer design.
10. With examples, explain the various hazards in pipelining.
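Worked sketch for Part-A questions 15/19 and 23, using the standard Amdahl's Law formulation (the fractions and the factor of 10 come directly from the questions):

\[
\text{Speedup}_{\text{overall}} = \frac{1}{(1 - \text{Fraction}_{\text{enhanced}}) + \dfrac{\text{Fraction}_{\text{enhanced}}}{\text{Speedup}_{\text{enhanced}}}} = \frac{1}{0.6 + \dfrac{0.4}{10}} = \frac{1}{0.64} \approx 1.56
\]

For question 23, only the enhanced fraction is asked for: \( \text{Fraction}_{\text{enhanced}} = 30/60 = 0.5 \).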

UNIT II
PART-A
1. List out the various data dependencies.
2. RAW hazards through memory are maintained by two restrictions. What are they?
3. Give an example of control dependence.
4. There are two types of name dependences between an instruction i that precedes instruction j in program order. What are they?
5. What are the constraints imposed by control dependences?
6. Mention the concepts illustrated by the case study modeling the branch predictor.
7. Briefly explain the goal of a multiple-issue processor.
8. Give the value of the CPI for a pipelined processor.
9. Mention the major flavors of multiple-issue processors.
10. To obtain the final unrolled code we had to make certain decisions and transformations. Give them.
11. Imprecise exceptions can occur because of two possibilities. What are they?
12. Tomasulo's scheme was unused for many years after the 360/91, but was widely adopted in multiple-issue processors starting in the 1990s for several reasons. List them.
13. Mention the steps involved in instruction execution using Tomasulo's algorithm.
14. What do you mean by a branch-target cache?
15. List the steps involved in handling an instruction with a branch-target buffer.
16. Give the purpose of return address predictors.
17. Determine the total branch penalty for a branch-target buffer, assuming the penalty cycles for individual mispredictions given in the table below. Make the following assumptions about the prediction accuracy and hit rate: prediction accuracy is 90% (for instructions in the buffer) and the hit rate in the buffer is 90% (for branches predicted taken). (A worked sketch follows at the end of Part-A.)

   Instruction in buffer | Prediction | Actual branch | Penalty cycles
   Yes                   | Taken      | Taken         | 0
   Yes                   | Taken      | Not taken     | 2
   No                    | -          | Taken         | 2
   No                    | -          | Not taken     | 0

18. List out several functions that are integrated by an integrated instruction fetch unit.
19. How do we know which registers are the architectural registers if they are constantly changing?
20. Comment on speculating through multiple branches.
21. Mention the concepts illustrated by the case study exploring the impact of microarchitectural techniques.
22. What are the advantages of using dynamic scheduling?
23. Briefly explain the idea behind using a reservation station.
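Worked sketch for question 17, following the usual Hennessy-Patterson treatment, which assumes that essentially all branches missing from the buffer are taken; both mispredicted-taken and missed-but-taken branches pay the 2-cycle penalty from the table:

\[
\text{Branch penalty} = \underbrace{(0.90 \times 0.10)}_{\text{in buffer, mispredicted}} \times 2 \;+\; \underbrace{(1 - 0.90)}_{\text{taken, not in buffer}} \times 2 = 0.18 + 0.20 = 0.38 \ \text{cycles per branch}
\]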

PART-B
1. What is instruction-level parallelism? Explain in detail the various dependences that arise in ILP.
2. Discuss Tomasulo's algorithm for overcoming data hazards using dynamic scheduling.
3. Explain how to reduce branch cost with dynamic hardware prediction.
4. Explain how hardware-based speculation is used to overcome control dependence.
5. Discuss briefly the limitations of ILP.
6. Explain in detail exploiting ILP using multiple issue and static scheduling.
7. Discuss the advanced techniques for instruction delivery and speculation.
8. Explain briefly exploiting ILP using dynamic scheduling, multiple issue, and speculation.
9. Explain in detail the limitations of ILP for realizable processors.

UNIT III
PART A
1. What is loop unrolling?
2. When are static branch predictors used?
3. Write an example of a loop-carried dependence in the form of a recurrence.
4. Mention the different methods to predict branch behavior.
5. Explain the VLIW approach.
6. Mention the techniques to compact the code size of instructions.
7. List the various advantages of using a multiple-issue processor.
8. What are loop-carried dependences?
9. Mention the tasks involved in finding dependences in instructions.
10. Use the GCD test to determine whether a dependence exists in the following loop (a worked sketch follows at the end of this unit):
    for (i = 1; i <= 100; i++)
        x[2*i + 3] = x[2*i] * 5.0;
11. What is software pipelining?
12. List the simple hardware-fixed direction mechanisms for static branch prediction.
13. What do you mean by global code scheduling?
14. Mention the steps followed in trace scheduling.
15. List out the various advantages of predicated instructions.
16. Give the limitations of predicated instructions.
17. What do you mean by a poison bit?
18. Mention the methods for preserving exception behavior.
19. What is an instruction group?
20. List the disadvantages of supporting speculation in hardware.
21. What happens when a branch is predicted?
22. Mention the issues affecting accurate branch prediction.
23. What are the various causes of mispredictions?
24. The parallelization techniques for loops normally follow three steps. What are they?
25. There are two limitations that affect our ability to do accurate dependence analysis for large programs. Mention them.

PART B
1. With an example, explain the concept of loop unrolling.
2. Discuss briefly the VLIW approach.
3. What are the different techniques to exploit and expose more parallelism using compiler support? Explain each.
4. Describe hardware support for exposing more parallelism at compile time.
5. Differentiate hardware and software speculation mechanisms.
6. Explain in detail static branch prediction.
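A minimal sketch of the GCD test from Part A, question 10, assuming the usual formulation for a store to x[a*i + b] and a load from x[c*i + d]; the helper names gcd and may_depend are illustrative only, not from any library. Here a = 2, b = 3, c = 2, d = 0: GCD(2, 2) = 2 does not divide d - b = -3, so no loop-carried dependence exists.

#include <stdio.h>

/* Euclid's algorithm for the greatest common divisor (positive arguments here). */
static int gcd(int a, int c) {
    while (c != 0) { int t = a % c; a = c; c = t; }
    return a;
}

/* GCD test: for a store to x[a*i + b] and a load from x[c*i + d] in the same loop,
   a loop-carried dependence can exist only if GCD(a, c) divides (d - b). */
static int may_depend(int a, int b, int c, int d) {
    return (d - b) % gcd(a, c) == 0;
}

int main(void) {
    /* Question 10: x[2*i + 3] = x[2*i] * 5.0  ->  a = 2, b = 3, c = 2, d = 0 */
    printf("dependence possible: %s\n", may_depend(2, 3, 2, 0) ? "yes" : "no");
    return 0;  /* prints "no" because GCD(2, 2) = 2 does not divide -3 */
}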

UNIT IV
PART A
1. What are multiprocessors? Mention the categories of multiprocessors.
2. Mention the various disadvantages of using symmetric shared memory.
3. What do you mean by directory-based coherence?
4. List out the factors that contributed to the rise of MIMD multiprocessors.
5. Give the various disadvantages of simultaneous multithreading.
6. Draw the basic structure of a centralized shared-memory multiprocessor.
7. What is multiprocessor cache coherence?
8. Mention the models that are used for consistency.
9. What do you mean by a write-update protocol?
10. List the conditions for a memory system to be coherent.
11. When is a multiprocessor system sequentially consistent?
12. There are two classes of protocols in use, which apply different techniques to track the sharing status. What are they?
13. What do you mean by a write-invalidate protocol?
14. Mention the factors which reinforce the trend towards more reliance on multiprocessing.
15. Draw the basic architecture of a distributed shared-memory multiprocessor.
16. When a block is in the shared state, the memory value is up to date; what requests can occur?
17. What are the various issues in distributed shared-memory architecture?
18. Give the basic idea of processor consistency.
19. List the different kinds of coherence protocols that implement release consistency.
20. Comment on the merits and demerits of symmetric multiprocessing.
21. What are the design challenges in SMT fetch?

PART B
1. Explain in detail symmetric shared-memory architecture.
2. Describe the following in detail:
   a) Basic schemes for enforcing coherence.
   b) Snooping protocols.
3. Explain in detail distributed shared-memory architecture.
4. Discuss the limitations of symmetric shared-memory multiprocessors and snooping protocols.
5. With an example, describe directory-based cache coherence protocols.
6. Explain in detail the models of memory consistency.
7. Discuss briefly the synchronization issues in multiprocessors.
8. Briefly discuss hardware and software multithreading.
9. Explain briefly the CMP architecture.

UNIT V
PART A
1. What are a cache miss and a cache hit?
2. List out the unusual characteristics possessed by transaction-processing benchmarks.
3. Mention the various problems with disk arrays.
4. Improvement in capacity is customarily expressed as improvement in areal density, measured in bits per square inch. Give it.
5. Mention the factors that measure I/O performance.
6. List out the drawbacks of shadowing.
7. An interaction, or transaction, with a computer is divided into three parts. What are they?
8. Write the steps to design an I/O system.
9. Briefly discuss the classification of buses.
10. Give the equation for the mean number of tasks in the system.
11. List out the various advantages of using a bus master.
12. Mention the different properties of errors.
13. Gray and Siewiorek classify faults into four categories according to their cause. What are they?
14. What do you mean by transaction-processing benchmarks?
15. State and explain Little's law.
16. Define the following terms:
    a. Time_server
    b. Arrival rate
17. Discuss the M/M/1 queuing model.
18. Suppose a processor sends 40 disk I/Os per second, these requests are exponentially distributed, and the average service time of an older disk is 20 ms. Answer the following questions (a worked sketch follows this list):
    a. On average, how utilized is the disk?
    b. What is the average time spent in the queue?
19. What do you mean by a split transaction?
20. Discuss bus transactions.
21. How can cache miss penalty and miss rate be reduced?
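Worked sketch for question 18, using the standard M/M/1 relations (the arrival rate and service time are taken directly from the question):

\[
\text{Server utilization} = \text{Arrival rate} \times \text{Time}_{\text{server}} = 40/\text{sec} \times 0.02\ \text{sec} = 0.8
\]
\[
\text{Time}_{\text{queue}} = \text{Time}_{\text{server}} \times \frac{\text{Server utilization}}{1 - \text{Server utilization}} = 20\ \text{ms} \times \frac{0.8}{0.2} = 80\ \text{ms}
\]

So the disk is 80% utilized on average, and a request spends 80 ms waiting in the queue on average.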

PART B
1. Describe the various techniques to reduce cache miss penalty.
2. Explain the different techniques to reduce miss rate.
3. Discuss how main memory is organized to improve performance.
4. With a neat sketch, explain the various levels of RAID.
5. Briefly explain the various ways to measure I/O performance.
6. Explain in detail the M/M/1 queuing model.

7. Discuss memory technology and optimizations.
8. Define and explain, with examples, real faults and failures.
9. Explain in detail about buses.
10. Describe the following:
    a) Throughput vs. response time.
    b) Transaction-processing benchmarks.
