CSC 5001 – High Performance Systems

Portail informatique

MPI part 2 - Lab

Min/Max

The goal of this exercise is to experiment with MPI collective communication.

The program min_max creates an array of integers initialized with random values. It then searches for the minimum and maximum values.

In this exercise, we want to parallelize the application with MPI, using only collective communication. In this program, rank 0 broadcasts the array; then each MPI rank searches for the minimum and maximum values in a subset of the array. Finally, all the ranks transfer their results to rank 0.

Modify min_max.c to initialize MPI and to compute, for each MPI rank, the lower and upper bounds of the portion of the array it will process.

First, print the MPI rank and the bounds of the array. Make sure that all the array entries will be processed.
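Here is a minimal sketch of this step. The array size N and the variable names (chunk, lower, upper) are placeholders, not taken from min_max.c; the point is to show an even split where the first N % size ranks take one extra entry, so that every entry is processed exactly once.

#include <mpi.h>
#include <stdio.h>

#define N 1024  /* hypothetical array size */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split N entries as evenly as possible: the first (N % size) ranks
     * get one extra entry so that all entries are covered. */
    int chunk = N / size;
    int remainder = N % size;
    int lower = rank * chunk + (rank < remainder ? rank : remainder);
    int upper = lower + chunk + (rank < remainder ? 1 : 0);

    printf("Rank %d processes indices [%d, %d)\n", rank, lower, upper);

    MPI_Finalize();
    return 0;
}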

Then, modify the program so that rank 0 initializes the array and broadcasts it to the other processes.
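A possible sketch of this step, continuing the skeleton above (array is a placeholder name; rand() requires <stdlib.h>):

int array[N];
if (rank == 0) {
    /* Only rank 0 fills the array with random values. */
    for (int i = 0; i < N; i++)
        array[i] = rand();
}
/* After this call, every rank holds the same N integers. */
MPI_Bcast(array, N, MPI_INT, 0, MPI_COMM_WORLD);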

The last step is to collect the result computed by each rank, and to print the minimum and maximum values of the array. To do so, perform two reductions with the operators MPI_MIN and MPI_MAX.
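Continuing the same sketch, each rank scans its own slice, then the partial results are combined on rank 0 with MPI_Reduce (INT_MAX and INT_MIN require <limits.h>):

int local_min = INT_MAX, local_max = INT_MIN;
for (int i = lower; i < upper; i++) {
    if (array[i] < local_min) local_min = array[i];
    if (array[i] > local_max) local_max = array[i];
}

int global_min, global_max;
/* Combine the per-rank results: rank 0 receives the global minimum and maximum. */
MPI_Reduce(&local_min, &global_min, 1, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
MPI_Reduce(&local_max, &global_max, 1, MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);

if (rank == 0)
    printf("min = %d, max = %d\n", global_min, global_max);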

Project

The goal of this exercise is to work on the parallelization of your project with MPI. To do so, analyze the application and identify the values that need to be synchronized.

Depending on the program to parallelize, several approaches are possible. Estimate the frequency of the communication and the quantity of data to transfer. Is it possible to "hide" the cost of communication using non-blocking communication? Is it better to use point-to-point communication or collective communication?
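As an illustration of hiding communication cost (not specific to any project), a non-blocking exchange can be started, independent work can run while the messages are in flight, and the wait happens only when the received data is needed. The names neighbor, send_halo, recv_halo, HALO_SIZE, compute_interior and compute_border are hypothetical:

MPI_Request reqs[2];
/* Start the exchange without blocking. */
MPI_Irecv(recv_halo, HALO_SIZE, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(send_halo, HALO_SIZE, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &reqs[1]);

/* Work that does not depend on the incoming data overlaps with the transfer. */
compute_interior();

/* Wait only when the received data is actually needed. */
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
compute_border(recv_halo);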