MPI Tutorial
Dhanashree N P
February 24, 2016
Dhanashree N P: MPI Tutorial
MPI - Message Passing Interface
A standard for message-passing libraries
Efficient, portable, scalable, vendor independent
Language bindings for C and Fortran
Message-passing parallel programming model
Current version of the standard: MPI-3
Works with distributed, shared, and hybrid memory
Some implementations - Open MPI, MPICH2, IBM Platform MPI, etc.
Different implementations support different versions and
functionalities of the standard
Use MPI with C for Assignment 2
MPI Routines
MPI_Init(), MPI_Finalize()
MPI_Comm_size(), MPI_Comm_rank()
MPI_Get_processor_name()
MPI_Abort(), MPI_Get_version(), MPI_Initialized()
Compilation and Program Execution
mpicc -o test test.c
mpirun -n 4 ./test
Multiple hosts
To run on multiple hosts:
mpirun -n 4 -host hostname1,hostname2 ./test
mpirun -n 4 -hostfile filename ./test
Hostnames can be resolved via (a) the /etc/hosts file or (b) a hostfile
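A hostfile is a plain text file listing one host per line; in Open MPI syntax, slots gives the number of processes to start on that host (the hostnames below are placeholders):

```
# hostfile - hostnames are placeholders
hostname1 slots=2
hostname2 slots=2
```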
Some Points to Note
Only one MPI_Init() and one MPI_Finalize() call per program
Do not declare functions or variables whose names start with
MPI_ or PMPI_ in the program
Communicators, Groups and Ranks
A communication domain is the set of processes allowed to
communicate with each other.
Represented by variables of type MPI_Comm, e.g., MPI_COMM_WORLD
Passed as a parameter to all message-passing primitives
Each process can belong to many different communication
domains
The rank (task ID) of the calling process is a unique integer
identifier in the range 0 to n-1.
Unicast Communication Primitives
Types of Operations
Synchronous, blocking, non-blocking, buffered, combined, etc.
Blocking vs. non-blocking
System buffer vs. application buffer
Order and fairness
Unicast Communication Primitives
Blocking - MPI_Send(), MPI_Recv()
MPI_Send(buffer,count,type,dest,tag,comm)
MPI_Recv(buffer,count,type,source,tag,comm,status)
Non-blocking - MPI_Isend(), MPI_Irecv()
MPI_Isend(buffer,count,type,dest,tag,comm,request)
MPI_Irecv(buffer,count,type,source,tag,comm,request)
buffer - reference to the data to be sent/received
count - number of data elements of the given type
source - rank of the sender; dest - rank of the receiver
tag - MPI_ANY_TAG or any non-negative integer
comm - the communicator to use, commonly MPI_COMM_WORLD
***Make sure to avoid deadlocks!***
Unicast Communication Routines
Other flavors - blocking: MPI_Ssend(), MPI_Bsend(),
MPI_Rsend()
Attaching a buffer - size in bytes
MPI_Buffer_attach(&buffer,size)
MPI_Buffer_detach(&buffer,size)
Send a message and post a receive before blocking
MPI_Sendrecv(&sendbuf,sendcount,sendtype,dest,sendtag,
&recvbuf,recvcount,recvtype,source,recvtag,comm,&status)
Wait functions - MPI_Wait, MPI_Waitany, MPI_Waitall,
MPI_Waitsome
Other flavors - non-blocking: MPI_Issend(), MPI_Ibsend(),
MPI_Irsend()
Status check functions - MPI_Test, MPI_Testany, MPI_Testall,
MPI_Testsome
Collective Communication and Computation Routines
The unicast routines are primarily for communication. Some of
the collective operations also perform computation. All
processes that are part of a communicator have to participate in
a collective communication.
Three types of collective operations:
Synchronization - wait until all members have reached the
synchronization point
Data movement - broadcast, scatter, gather
Computation - reductions
These can be used with MPI primitive data types.
Collective Communication Routines
MPI_Barrier(comm) - each task executing this is blocked until
all the other tasks of the same group have reached this point.
MPI_Bcast(&buffer,count,datatype,root,comm) - the process
with rank 'root' broadcasts to all other processes in the group.
Collective Communication Routines
MPI_Scatter(&sendbuf,sendcnt,sendtype,&recvbuf,
recvcnt,recvtype,root,comm) - the process with rank 'root'
distributes distinct chunks of sendbuf, one chunk to each
process in the group.
MPI_Gather(&sendbuf,sendcnt,sendtype,&recvbuf,
recvcnt,recvtype,root,comm) - reverse of scatter; 'root'
collects a chunk from every process.
Collective Communication Routines
MPI_Allgather(&sendbuf,sendcnt,sendtype,&recvbuf,
recvcnt,recvtype,comm) - gathers and concatenates the data
to all tasks in a group.
Collective Computation Routines
MPI_Reduce(&sendbuf,&recvbuf,count,datatype,op,root,comm)
- applies a reduction operation across all tasks in the group and
places the result in the 'root' task.
MPI_Allreduce - combined collective computation and data
movement; the result is available on every task.
Predefined operations: MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD,
MPI_BOR, MPI_BAND, etc.
Data Types - MPI Datatype (C Data type)
MPI_CHAR (signed char)
MPI_SHORT (signed short int)
MPI_INT (signed int)
MPI_LONG (signed long int)
MPI_UNSIGNED_CHAR (unsigned char)
MPI_UNSIGNED_SHORT (unsigned short int)
MPI_UNSIGNED_LONG (unsigned long int)
MPI_UNSIGNED (unsigned int)
MPI_FLOAT (float)
MPI_DOUBLE (double)
MPI_LONG_DOUBLE (long double)
MPI_BYTE
MPI_PACKED
Derived Data Types
The following constructors are used to create derived data types:
Contiguous - MPI_Type_contiguous
Vector - MPI_Type_vector
Indexed - MPI_Type_indexed
Struct - MPI_Type_struct
Derived Data Types - Example using Contiguous
References
Message Passing Interface (MPI) -
https://computing.llnl.gov/tutorials/mpi/
Inter-group communications - http://www.mpi-forum.org/
docs/mpi-1.1/mpi-11-html/node114.html
Tutorials - http://mpitutorial.com/tutorials/
Introduction to Parallel Computing by Ananth Grama et al. -
Section 6.3 to Section 6.7
