MPI merge multiple intercomms into a single intracomm

I realize I'm a year out of date with this answer, but I thought other people might want to see an implementation of this. As the original respondent said, there is no way to merge three (or more) communicators in a single call; you have to build up the new intra-comm one merge at a time. Here is the code I use. This version frees the original intra-comm; you may or may not want to do that, depending on your particular application:

#include <mpi.h>

// The Borg routine: given
//   (1) a (quiesced) intra-communicator with one or more members, and
//   (2) a (quiesced) inter-communicator with exactly two members, one
//       of which is rank zero of the intra-communicator, and
//       the other of which is an unrelated spawned rank,
// return a new intra-communicator which is the union of both inputs.
//
// This is a collective operation.  All ranks of the intra-
// communicator, and the remote rank of the inter-communicator, must
// call this routine.  Ranks that are members of the intra-comm must
// supply the proper value for the "intra" argument, and MPI_COMM_NULL
// for the "inter" argument.  The remote inter-comm rank must
// supply MPI_COMM_NULL for the "intra" argument, and the proper value
// for the "inter" argument.  Rank zero (only) of the intra-comm must
// supply proper values for both arguments.
//
// N.B. It would make a certain amount of sense to split this into
// separate routines for the intra-communicator processes and the
// remote inter-communicator process.  The reason we don't do that is
// that, despite the relatively few lines of code, what's going on here
// is really pretty complicated, and requires close coordination of the
// participating processes.  Putting all the code for all the processes
// into this one routine makes it easier to be sure everything "lines up"
// properly.
MPI_Comm
assimilateComm(MPI_Comm intra, MPI_Comm inter)
{
    MPI_Comm peer = MPI_COMM_NULL;
    MPI_Comm newInterComm = MPI_COMM_NULL;
    MPI_Comm newIntraComm = MPI_COMM_NULL;

    // The spawned rank will be the "high" rank in the new intra-comm
    int high = (MPI_COMM_NULL == intra) ? 1 : 0;

    // If this is one of the (two) ranks in the inter-comm,
    // create a new intra-comm from the inter-comm
    if (MPI_COMM_NULL != inter) {
        MPI_Intercomm_merge(inter, high, &peer);
    } else {
        peer = MPI_COMM_NULL;
    }

    // Create a new inter-comm between the pre-existing intra-comm
    // (all of it, not only rank zero), and the remote (spawned) rank,
    // using the just-created intra-comm as the peer communicator.
    int tag = 12345;
    if (MPI_COMM_NULL != intra) {
        // This task is a member of the pre-existing intra-comm
        MPI_Intercomm_create(intra, 0, peer, 1, tag, &newInterComm);
    }
    else {
        // This is the remote (spawned) task
        MPI_Intercomm_create(MPI_COMM_SELF, 0, peer, 0, tag, &newInterComm);
    }

    // Now convert this inter-comm into an intra-comm
    MPI_Intercomm_merge(newInterComm, high, &newIntraComm);

    // Clean up the intermediaries
    if (MPI_COMM_NULL != peer) MPI_Comm_free(&peer);
    MPI_Comm_free(&newInterComm);

    // Delete the original intra-comm
    if (MPI_COMM_NULL != intra) MPI_Comm_free(&intra);

    // Return the new intra-comm
    return newIntraComm;
}
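
To make the calling convention concrete, here is a rough sketch of one spawn-and-assimilate round driven from both sides. Everything beyond the routine itself is an assumption (a self-spawning binary, one worker per round, no error checking); the key detail is that rank zero spawns over MPI_COMM_SELF, so the inter-comm really does have exactly two members:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm intra, parent;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (MPI_COMM_NULL != parent) {
        // Freshly spawned worker: merge in via the inter-comm to the parent
        intra = assimilateComm(MPI_COMM_NULL, parent);
    } else {
        MPI_Comm inter = MPI_COMM_NULL;
        int rank;
        MPI_Comm_dup(MPI_COMM_WORLD, &intra);
        MPI_Comm_rank(intra, &rank);
        if (0 == rank) {
            // Rank zero alone spawns, over MPI_COMM_SELF, so the
            // resulting inter-comm has exactly two members
            MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                           0, MPI_COMM_SELF, &inter, MPI_ERRCODES_IGNORE);
        }
        // Collective over the old intra-comm plus the new worker;
        // assimilateComm frees the old intra-comm itself
        intra = assimilateComm(intra, inter);
        if (MPI_COMM_NULL != inter) MPI_Comm_free(&inter);
    }

    // ... to absorb more workers, repeat the round above; note that
    // workers already absorbed must join every later round too ...

    MPI_Comm_free(&intra);
    MPI_Finalize();
    return 0;
}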

If you are going to do this by calling MPI_COMM_SPAWN multiple times, then you'll have to do it more carefully. After the first call to SPAWN, the spawned process also needs to take part in the next call to SPAWN; otherwise it will be left out of the communicator you're merging. It ends up looking like this:

[Figure: Individual Spawns]

The problem is that only two processes participate in each MPI_INTERCOMM_MERGE, and since you can't merge three communicators, you'll never end up with one big communicator that way.

If you instead have each process participate in the merge as it goes, you end up with one big communicator in the end:

[Figure: Group Spawns]
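
For instance, here is a minimal sketch of that pattern. The specifics are assumptions, not from the original answers: a single initial process, one worker added per round, a hard-coded total, and a binary that respawns itself. The point it illustrates is that MPI_COMM_SPAWN is collective over the whole merged communicator, so every process that has already been absorbed takes part in the next spawn and merge:

#include <mpi.h>
#include <stdio.h>

#define NSPAWNS 3   /* total workers to add: an assumption */

int main(int argc, char *argv[])
{
    MPI_Comm everyone, parent;
    int size;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (MPI_COMM_NULL == parent) {
        // Original process: start from a copy of MPI_COMM_WORLD
        MPI_Comm_dup(MPI_COMM_WORLD, &everyone);
    } else {
        // New worker: merge into the group that spawned it ("high" side)
        MPI_Intercomm_merge(parent, 1, &everyone);
    }

    // The current size says how many rounds are already done
    // (one original process plus one worker per round)
    MPI_Comm_size(everyone, &size);
    for (int round = size - 1; round < NSPAWNS; round++) {
        MPI_Comm inter, merged;
        // Collective over ALL current members, so nobody is left out
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                       0, everyone, &inter, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(inter, 0, &merged);
        MPI_Comm_free(&inter);
        MPI_Comm_free(&everyone);
        everyone = merged;
    }

    MPI_Comm_size(everyone, &size);
    printf("member of a communicator of size %d\n", size);

    MPI_Comm_free(&everyone);
    MPI_Finalize();
    return 0;
}

Each round grows the merged intra-comm by one, and because the spawn is collective over that growing communicator, the next merge automatically includes everyone.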

Of course, you can just spawn all of your extra processes at once, but it sounds like you might have other reasons for not doing that.
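
For completeness, the all-at-once version collapses to a single collective spawn followed by a single merge; again a sketch, with an assumed worker count of three and a self-spawning binary:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, inter, everyone;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (MPI_COMM_NULL == parent) {
        // Parent side: spawn all the workers in one call
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 3 /* assumed count */,
                       MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                       &inter, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(inter, 0, &everyone);
        MPI_Comm_free(&inter);
    } else {
        // Worker side: merge into the parents on the "high" end
        MPI_Intercomm_merge(parent, 1, &everyone);
    }

    // ... use "everyone" as one big intra-comm ...

    MPI_Comm_free(&everyone);
    MPI_Finalize();
    return 0;
}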
