Inside MPI - Lab
First, download the tarball mpi-4.tgz that contains various programs for this lab.
bibw
Analyze the program bibw.c to understand what it does. Then, run the program and observe its behavior.
The program does not work with messages larger than 64 KB.
This is due to the part of the program where both MPI ranks send a message, and then receive the other rank's message:
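A minimal sketch of this pattern (with hypothetical buffer names; the actual code in bibw.c may differ):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, other;
        int size = 1 << 20;              /* 1 MB: large enough for rendez-vous */
        char *send_buf = malloc(size);
        char *recv_buf = malloc(size);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        other = 1 - rank;                /* assumes exactly 2 ranks */

        /* Both ranks send first, then receive.  This works in eager
           mode, but deadlocks once MPI switches to rendez-vous. */
        MPI_Send(send_buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD);
        MPI_Recv(recv_buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        MPI_Finalize();
        free(send_buf);
        free(recv_buf);
        return 0;
    }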
Since MPI uses a rendez-vous protocol for large messages, both MPI ranks get stuck in MPI_Send, each waiting for the other rank to reach MPI_Recv.
This can be fixed by sending the message with a non-blocking send:
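For instance, replacing the Send/Recv pair of the sketch above (again a sketch, not necessarily the exact fix expected for bibw.c):

    MPI_Request req;

    /* Post the send without blocking, receive the peer's message,
       then complete our own send. */
    MPI_Isend(send_buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD, &req);
    MPI_Recv(recv_buf, size, MPI_CHAR, other, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);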
pingpong
Analyze the program pingpong.c, and run it.
Explain the behavior of the program for large messages.
The program measures the duration of MPI_Send. Small messages are sent in eager mode, so MPI_Send returns quickly. For large messages, MPI uses a rendez-vous protocol, which makes the sending process synchronize with the receiving process. Since the receiving rank is busy computing, it does not "see" the rendez-vous request, and only replies to it when it reaches MPI_Wait.
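A sketch of the structure that produces this behavior (names such as compute() are hypothetical; pingpong.c may differ in the details):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for the receiver's computation phase. */
    static void compute(void)
    {
        volatile double x = 0.0;
        for (long i = 0; i < 100000000L; i++) x += 1.0;
    }

    int main(int argc, char **argv)
    {
        int rank;
        int size = 1 << 20;                    /* large: rendez-vous mode */
        char *buf = malloc(size);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double t0 = MPI_Wtime();
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            /* For large messages this includes the time spent waiting
               for rank 1 to answer the rendez-vous request. */
            printf("MPI_Send took %g s\n", MPI_Wtime() - t0);
        } else {
            MPI_Request req;
            MPI_Irecv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &req);
            compute();                         /* busy: no MPI progress */
            MPI_Wait(&req, MPI_STATUS_IGNORE); /* rendez-vous answered here */
        }

        MPI_Finalize();
        free(buf);
        return 0;
    }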
stencil
Analyze and run (with 2 MPI ranks) the program stencil_mpi.c while varying the value of N.
- With large values of N, the program stalls. Find the value of N that causes the problem.
- Compute the size of the MPI messages for this value of N.
- Explain the cause of the problem, and fix the program (one possible fix is sketched after this list).
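If the stall is again a Send/Send deadlock under the rendez-vous protocol, one classic fix is MPI_Sendrecv, which pairs the send and the receive so that neither rank blocks the other. A sketch with hypothetical buffer names (adapt to the actual arrays of stencil_mpi.c):

    /* Exchange ghost rows with the neighboring rank in a single call:
       MPI_Sendrecv cannot deadlock the way Send-then-Recv can. */
    MPI_Sendrecv(border_row, N, MPI_DOUBLE, neighbor, 0,
                 ghost_row,  N, MPI_DOUBLE, neighbor, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);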
MPI+OpenMP
- Parallelize the program stencil_mpi.c with OpenMP.
- Since you are mixing MPI with threads, you should initialize MPI properly (see the sketch after this list).
- Your program may run fine while still being incorrect (because of a race condition, for example).
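When threads are involved, MPI_Init_thread replaces MPI_Init. If the OpenMP threads never call MPI themselves (only the master thread does), MPI_THREAD_FUNNELED is typically enough. A minimal sketch:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Ask for FUNNELED: only the master thread will call MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "MPI library does not support threads\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... OpenMP-parallelized stencil code goes here ... */

        MPI_Finalize();
        return 0;
    }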
MPI+CUDA
- Parallelize the program stencil_mpi.c with CUDA.