
CS-421 Parallel Processing BE (CIS) Batch 2004-05

Handout_2
Parallelism in a Uniprocessor System
1. Multiprogramming & Timesharing
a. In multiprogramming, several processes reside in main memory and the CPU switches from one process (say
P1) to another (say P2) when the currently running process (P1) blocks for an I/O operation. The I/O
operation for P1 is handled by a DMA unit while the CPU runs P2.
b. In timesharing, processes are assigned slices of the CPU's time. The CPU executes the processes in
round-robin fashion, as illustrated below.
[Figure: round-robin scheduling — processes wait in a queue for the CPU; a running job leaves the CPU either when it finishes or when its time quantum expires, in which case it rejoins the queue]
• It appears that every user (process) has its own processor (multiple virtual processors).
• Averts the monopoly of a single (computation-intensive) process, as can occur in pure multiprogramming.
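Below is a minimal Python sketch of round-robin time slicing (the process names, burst times, and quantum are made-up illustrative values; it models CPU time only and ignores I/O):

from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling; returns each process's completion time."""
    queue = deque(burst_times)             # ready queue of process names
    remaining = dict(burst_times)          # CPU time still needed per process
    clock, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])   # run until quantum expires or job finishes
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = clock              # job finishes and leaves the system
        else:
            queue.append(p)                # quantum expired: rejoin the queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
# {'P2': 9, 'P1': 12, 'P3': 16}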
2. Multiplicity of Functional Units
Using multiple functional units (e.g., multiple adders, multipliers, or even multiple ALUs) to provide
concurrency is not a new idea in the uniprocessor environment; it has been around for decades.
3. Harvard Architecture
a. This provides separate memory units for instructions and data, effectively doubling the memory
bandwidth and saving CPU time. An example is a split cache, in which instructions are kept in the I-cache
and data in the D-cache.
b. In contrast, when instructions and data are kept in the same memory, the architecture is called the
Princeton Architecture. Examples are a unified cache, main memory, etc.
4. Memory Hierarchy
A parallel processing mechanism supported by the memory hierarchy is the simultaneous transfer of
instructions/data between the (CPU, cache) pair and the (main memory, secondary memory) pair.
5. Pipelining
a. Arithmetic Pipelining
[Figure: 4-stage floating-point adder — operands X and Y flow through Exponent Comparison → Mantissa Alignment → Significand Addition → Normalize to produce the result]
b. Instruction Pipelining

IF → ID → EX → M → WB

5-stage instruction pipeline

A time-space diagram is used to describe the progress of instructions through the pipeline.

Stage |    1    2    3    4    5    6    7    8    9   10   ← Clock Cycles
WB    |                       I1   I2   I3   I4   I5   I6
M     |                  I1   I2   I3   I4   I5   I6   I7
EX    |             I1   I2   I3   I4   I5   I6   I7   I8
ID    |        I1   I2   I3   I4   I5   I6   I7   I8   I9
IF    |   I1   I2   I3   I4   I5   I6   I7   I8   I9  I10

(Pipelined Execution)

We’ve assumed that every stage takes one clock cycle and there are no hazards in the instruction stream.
Instruction Latency = 5 cycles
Instruction Throughput = 6/10 IPC = 0.6 IPC
To gain a better appreciation of pipelined execution, we draw the time-space diagram for non-pipelined
execution, as shown below:
Stage |    1    2    3    4    5    6    7    8    9   10   ← Clock Cycles
WB    |                       I1                       I2
M     |                  I1                       I2
EX    |             I1                       I2
ID    |        I1                       I2
IF    |   I1                       I2

(Non-Pipelined Execution)
Instruction Latency = 5 cycles
Instruction Throughput = 2/10 IPC = 0.2 IPC
It is evident that pipelined execution improves instruction throughput. However, it does not improve
instruction latency. (In practice, pipelining slightly increases instruction latency due to the delay of
the pipeline registers, as will be explained subsequently.)
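A short Python sketch that regenerates such a time-space diagram under the same assumptions (one cycle per stage, no hazards; the stage names follow the 5-stage pipeline above):

def time_space_diagram(stages, n):
    """Print the time-space diagram of an ideal pipeline: instruction i
    (1-based) occupies stage s (0-based) during clock cycle i + s."""
    k = len(stages)
    cycles = k - 1 + n                     # k - 1 cycles to fill, then one finishes per cycle
    for s in range(k - 1, -1, -1):         # print the last stage (WB) first
        row = [f"I{c - s}" if 0 < c - s <= n else "." for c in range(1, cycles + 1)]
        print(f"{stages[s]:>2}  " + "  ".join(f"{x:>3}" for x in row))
    print("    " + "  ".join(f"{c:>3}" for c in range(1, cycles + 1)))

time_space_diagram(["IF", "ID", "EX", "M", "WB"], n=6)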
Speedup
Suppose that a k-stage instruction pipeline executes a program containing n instructions. Let τ be the
cycle time.
Execution time on the non-pipelined computer is given as
t_np = nkτ ----------(1)
Execution time on the pipelined computer is given as
t_p = (k − 1 + n)τ ----------(2)
where k − 1 cycles are required to fill up the pipeline (also called the pipeline setup time).
By definition, speedup S of pipelined execution over non-pipelined execution is given as

S = time before enhancement / time after enhancement
  = t_np / t_p
  = nkτ / ((k − 1 + n)τ)
  = nk / (k − 1 + n) ----------(3)
Clearly, for a given pipeline, greater speedup is achieved as more and more instructions are executed. We
can compute the upper bound on speedup as follows:
S_ideal = lim (n → ∞) nk / (k − 1 + n)
        = lim (n → ∞) k / ((k − 1)/n + 1)
        = k
We regard it as ideal speedup because its derivation is based on the assumption of no pipeline hazards. As
can be seen, even ideal speedup cannot go beyond pipeline depth (i.e. number of pipeline stages).
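For example, a 5-stage pipeline (k = 5) executing n = 100 instructions achieves S = (100 × 5)/(5 − 1 + 100) = 500/104 ≈ 4.8, already close to the ideal speedup of 5.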
Instruction Throughput
Instruction throughput ω is defined as the number of instructions executed per unit time. This is
calculated as:
ω = n / ((k − 1 + n)τ) ----------(4)

Multiplying numerator and denominator of (4) by k, we can express ω in terms of speedup S as:
ω = nk / ((k − 1 + n)kτ) = S / (kτ)
The upper bound on ω is found similarly:
ω_ideal = lim (n → ∞) 1 / (((k − 1)/n + 1)τ)
        = 1/τ
CPI
Cycles per instruction (CPI) of pipelined execution can be found as:
CPI = (k − 1 + n) / n
    = (k − 1)/n + 1
The lower bound on CPI is
CPI_ideal = lim (n → ∞) ((k − 1)/n + 1)
          = 1
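The three metrics can be checked numerically; a small Python sketch (the parameter values in the example call are made-up illustrative figures):

def pipeline_metrics(k, n, tau):
    """Speedup, throughput, and CPI of an ideal k-stage pipeline
    executing n instructions with cycle time tau, per (3) and (4)."""
    cycles = k - 1 + n                  # fill time plus one completion per cycle
    speedup = n * k / cycles            # eq. (3)
    throughput = n / (cycles * tau)     # eq. (4), instructions per unit time
    cpi = cycles / n                    # (k - 1)/n + 1
    return speedup, throughput, cpi

# Hypothetical values: 5-stage pipeline, 1 ns cycle time, 1000 instructions
s, w, c = pipeline_metrics(k=5, n=1000, tau=1e-9)
print(f"S = {s:.3f}, omega = {w:.3e} instr/s, CPI = {c:.3f}")
# S = 4.980, omega = 9.960e+08 instr/s, CPI = 1.004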

Multiple Issue Architectures
These architectures are able to execute multiple instructions in one clock cycle (i.e. performance beyond
just pipelining). An N-way or N-issue architecture can achieve an ideal CPI of 1/N.
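For example, an ideal 4-issue processor can complete 4 instructions per cycle, i.e. CPI = 1/4 = 0.25.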
There are two major methods of implementing a multiple issue processor.
• Static multiple issue
• Dynamic multiple issue
Static Multiple Issue Architecture
• The scheduling of instructions into issue slots is done by the compiler.
• We can think of instructions issued in a given clock cycle forming an instruction packet.
• It is useful to think of the issue packet as a single instruction allowing several operations in
predefined fields. This was the reason behind the original name for this approach: Very Long
Instruction Word (VLIW) architecture.
• Intel has its own name for this technique, EPIC (Explicitly Parallel Instruction Computing), used in the
Itanium series.
• If it is not possible to find operations that can be executed at the same time on all functional units,
the instruction may contain a NOP in the group of fields for the unneeded units (see the illustrative
packet after this list).
• Because most instruction words contain some NOPs, VLIW programs tend to be very long.
• The VLIW architecture requires the compiler to be very knowledgeable of implementation details
of the target computer, and may require a program to be recompiled if moved to a different
implementation of the same architecture.
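For illustration, a hypothetical 3-slot issue packet (the slot layout and operations are made up for this example) might be {integer slot: ADD r1, r2, r3 | floating-point slot: NOP | memory slot: LW r4, 0(r5)}, where a NOP fills the floating-point slot because no such operation could be scheduled in that cycle.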
Dynamic Multiple Issue Architecture
– Also known as superscalars
– The processor (rather than the compiler) decides whether zero, one, or more instructions can be
issued in a given clock cycle.
– Compiler support is even more crucial for the performance of superscalars, because a superscalar
processor can only look at a small window of the program. A good compiler schedules code in a way that
facilitates the processor's scheduling decisions.

******

Granularity
Qualitative Definition
The level at which work is done in parallel
Examples
• Job / Program Level
- The highest level of parallelism, conducted among programs through multiprogramming, timesharing, and
multiprocessing
- Coarsest granularity
• Task / Procedure Level
Conducted among tasks of a common program (problem) e.g. multithreading
• Interinstruction Level
Conducted among instructions through superscalar techniques
• Intrainstruction Level
- Conducted among different phases of an instruction through pipelining
- Finest granularity
Quantitative Definition
Granularity = (Time spent on Computation) / (Time spent on Communication)
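For example, if a task computes for 10 ms between communication points and each communication takes 1 ms, its granularity is 10/1 = 10; if the computation per message drops to 2 ms, the granularity falls to 2 and the application becomes finer-grained.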
• Fine-Grained Applications
- Low granularity, i.e. more communication and less computation
- Less opportunity for performance enhancement
- Facilitates load balancing
• Coarse-Grained Applications
- High granularity, i.e. a large number of instructions between synchronization and communication points
- More opportunity for performance enhancement
- Harder to balance load
******
