
BM-308

Introduction to Parallel Programming

Spring 2018
(Lecture 4)
(Asst. Prof. Dr. Deniz Dal)
MPI = Message Passing Interface
• MPI is a specification for the developers and users of
message passing libraries.
• The goal of the Message Passing Interface is to provide
a widely used standard for writing message passing
programs. The interface attempts to be:
– practical
– portable
– efficient
– flexible
• Interface specifications have been defined for C/C++
and Fortran programs.
Reasons for Using MPI
• Standardization: MPI is the only message passing
library which can be considered a standard. (November
1993)
• Portability: There is no need to modify your source
code when you port your application to a different
platform that supports (and is compliant with) the MPI
standard.
• Functionality: Over 115 routines are defined in MPI-1
alone.
• Availability: A variety of implementations are
available, both vendor and public domain.
Programming Model
• Hardware platforms:
• Distributed Memory: Originally, MPI was targeted
for distributed memory systems.
• Shared Memory: As shared memory systems
became more popular, particularly SMP/NUMA
architectures, MPI implementations for these
platforms appeared.
• Hybrid: MPI is now used on just about any common
parallel architecture including massively parallel
machines, SMP clusters, workstation clusters and
heterogeneous networks.
Programming Model

• All parallelism is explicit: the programmer is
responsible for correctly identifying parallelism and
implementing parallel algorithms using MPI constructs.
• The number of tasks dedicated to run a parallel
program is static. New tasks can not be dynamically
spawned during run time.
MPI Function Categories

• Initialization/finalization
• Point-to-point communication functions
• Collective communication functions
• Communicator topologies
• User-defined data types
• Utilities (e.g., timing)
Single Program Multiple Data Model (SPMD)

• Creating parallelism with the SPMD computational model is
ideal for MPI programming.
• SPMD Model:
• A single program is executed by all tasks simultaneously.
Single Program Multiple Data Model (SPMD)

• At any moment in time, tasks can be executing the same
or different instructions within the same program.
• Tasks do not necessarily have to execute the entire
program - perhaps only a portion of it.
Minimum MPI Program Skeleton

#include "mpi.h"

Initialize the MPI environment (MPI_Init)

Do work and make message passing calls

Terminate the MPI environment (MPI_Finalize)


Getting Information

• MPI uses objects called communicators and groups
to define which collection of processes may
communicate with each other.
• Most MPI routines require you to specify a communicator
as an argument.
• Simply use MPI_COMM_WORLD whenever a communicator
is required.
• MPI_COMM_WORLD includes all of your MPI processes.
Getting Information

• MPI_Comm_size: Determines the size of the group
associated with a communicator.
• MPI_Comm_rank: Determines the rank of the calling
process in the communicator.

[Figure: ten processes with ranks 0-9 inside MPI_COMM_WORLD; the rank-0 process is the one we call the "master" or "root".]
Getting Information

• Rank within a communicator:
• Every process has its own unique integer identifier
assigned by the system when the process initializes.
• A rank is sometimes also called a "process ID".
• Ranks are contiguous and begin at zero.
Environment Management Routines

• Purposes of MPI environment management routines:
• Initializing the MPI environment,
• Terminating the MPI environment,
• Querying the environment,
• Identity, etc.
Environment Management Routines

MPI_Init
• Initializes the MPI execution environment.
• Must be called in every MPI program.
• Must be called before any other MPI functions.
• Must be called only once in an MPI program.
• For C programs, MPI_Init may be used to pass the
command line arguments to all processes.

int MPI_Init(int *argc, char ***argv)


Environment Management Routines

MPI_Finalize

• Terminates the MPI execution environment.
• No other MPI routines may be called after this routine.

int MPI_Finalize()
Environment Management Routines

MPI_Comm_size

• Determines the number of processes in the group
associated with a communicator.
• Generally used within the communicator
MPI_COMM_WORLD to determine the number of processes
being used by your application.

int MPI_Comm_size(MPI_Comm comm,int *size)


Environment Management Routines

MPI_Comm_rank

• Determines the rank of the calling process within the
communicator.
• The rank is a unique integer between 0 and (#OfProcs-1)
within the communicator MPI_COMM_WORLD.
• The rank is sometimes also referred to as a task ID.

int MPI_Comm_rank(MPI_Comm comm,int *rank)


Environment Management Routines

MPI_Get_processor_name

• Returns the processor name.
• Also returns the length of the name.

int MPI_Get_processor_name(char *name,int *length)


MPI_Wtime

• Returns an elapsed wall-clock time in seconds (double
precision) on the calling processor.

double MPI_Wtime()
MPI_Abort

• Terminates all MPI processes associated with the
communicator.
• In most MPI implementations it terminates ALL processes,
regardless of the communicator specified.

int MPI_Abort(MPI_Comm comm,int errorcode)


Compiling and Running C++ Programs Containing MPI Routines
STEP 1: We compile our programs with mpicxx/mpic++, the
MPICH2 compiler wrapper that can compile C++ programs
containing MPI routines.
$mpicxx HelloWorld.cpp -o hello.x
STEP 2: We test whether our executable runs with the mpirun
command. (This is only a virtual test on a single machine.)
$mpirun -np 5 ./hello.x (run virtually as if on 5 machines)
STEP 3: We run the mpiexec program, which executes C++
programs containing MPI routines on the cluster nodes.
$mpiexec -f ~/cluster_hosts -n 5 ./hello.x (run with 5
machines from the cluster) (mpiexec -host cn01,cn06,cn11 ./a.out)
APPLICATIONS

1 - 01_Hello_World.cpp
2 - 02_Hello_World_With_Ranks.cpp
3 - 03_Processor_Name.cpp
4 - 04_Measure_Time.cpp
5 - 05_Abort.cpp
