Course Objectives:
1. To introduce the fundamentals of parallel and distributed computing including parallel and
distributed architectures and paradigms
2. To understand the technologies, system architecture, and communication architecture that
propelled the growth of parallel and distributed computing systems
3. To develop and execute basic parallel and distributed applications using basic
programming models and tools.
We can use the formula π ≈ 4 × number_in_circle / number_of_tosses (four times the
fraction of random tosses that land inside the unit circle inscribed in a 2 × 2
square, since that fraction approaches the area ratio π/4) to estimate the value
of π with a random number generator:
number_in_circle = 0;
for (toss = 0; toss < number_of_tosses; toss++)
{
    x = random double between -1 and 1;
    y = random double between -1 and 1;
    distance_squared = x * x + y * y;
    if (distance_squared <= 1)
        number_in_circle++;
}
pi_estimate = 4 * number_in_circle / (double) number_of_tosses;
This is called a “Monte Carlo” method, since it uses randomness. Write a
program that uses the above Monte Carlo method to estimate π
(MPI / Pthreads / OpenMP).
3. (3 hours) Conway’s Game of Life is played on a rectangular grid of cells that
may or may not contain an organism. The state of the cells is updated at each
time step by applying the following set of rules:
1. Every organism with two or three neighbours survives.
2. Every organism with four or more neighbours dies from
overpopulation.
3. Every organism with zero or one neighbour dies from isolation.
4. Every empty cell adjacent to three organisms gives birth to a new one.
Create an MPI program that evolves a board of arbitrary size
(dimensions could be specified on the command line) over several
iterations. The board could be randomly generated or read from a file.
Try applying the geometric decomposition pattern to partition the work
among your processes.
4. (2 hours) Use OpenMP to implement a producer-consumer program in which
some of the threads are producers and others are consumers. The
producers read text from a collection of files, one per producer. They
insert lines of text into a single shared queue. The consumers take the
lines of text and tokenize them. Tokens are “words” separated by white
space. When a consumer finds a token, it writes it to stdout.
5. (2 hours) Write a program that sets a real variable on each of N
processors equal to the MPI rank (task ID) of the task. Then write your own
routine to perform a reduction operation over all processors to sum the values
using only MPI_Send and MPI_Recv calls. Do this global reduction operation
using the following communication algorithms:
a. Communications in a ring.
b. Hypercube communications.
Insert timing calls using MPI_Wtime to compare the timing of your routines
against the MPI routine MPI_Allreduce doing the same computation.