ComputationalRadiationPhysics / redGrapes

Resource-based, Declarative task-Graphs for Parallel, Event-driven Scheduling :grapes:

Home Page: https://redgrapes.rtfd.io

Further MPI helpers

michaelsippel opened this issue

The MPI helpers included with redGrapes currently only provide a way to poll MPI_Requests and retrieve the resulting MPI_Status. This requires the user to manually call get_status after each asynchronous MPI call (e.g. MPI_Isend, MPI_Irecv). Using a blocking MPI_Wait instead does not integrate with redGrapes and might provoke a distributed deadlock.
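To illustrate the problem, the following anti-pattern sketch (buffer, count and tag details elided) blocks the worker thread inside the task; if the matching operation on the peer rank is queued behind a similarly blocked worker, neither rank can make progress:

// ANTI-PATTERN (sketch): do not block inside a task
mgr.emplace_task(
    []( ... )
    {
        MPI_Request request;
        MPI_Irecv( ... , &request );

        // blocks the worker thread until the message arrives;
        // other tasks on this worker cannot run in the meantime,
        // which can lead to a distributed deadlock
        MPI_Status status;
        MPI_Wait( &request, &status );
    },
    ... );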

The idea is to make the usage of MPI inside redGrapes safer by providing a special MPI task factory.

Currently it looks like this:

mgr.emplace_task(
    [mpi_request_pool]( ... )
    {
        // start the non-blocking receive
        MPI_Request request;
        MPI_Irecv(  ... , &request );

        // hand the request to the pool manually and wait for its status
        MPI_Status status = mpi_request_pool->get_status( request );

        int recv_data_count;
        MPI_Get_count( &status, MPI_CHAR, &recv_data_count );
    },
    ... );

and could be turned into something like this:

auto status_fut = emplace_mpi_task(
    []( MPI_Request & request, ... )
    {
        // the factory provides and tracks the request
        MPI_Irecv(  ... , &request );
    },
    ... );

// the future is fulfilled once the request has completed
MPI_Status status = status_fut.get();
int recv_data_count;
MPI_Get_count( &status, MPI_CHAR, &recv_data_count );
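Conceptually the factory only has to wrap the existing pattern. A rough sketch of how emplace_mpi_task could be layered on top of emplace_task and the request pool (names and signatures here are illustrative, not the actual redGrapes API; it assumes emplace_task returns a future of the functor's result, as status_fut above suggests):

#include <mpi.h>
#include <memory>
#include <utility>

// hypothetical sketch only -- not the real redGrapes helper
template < typename Manager, typename RequestPool, typename F >
auto emplace_mpi_task( Manager & mgr, std::shared_ptr< RequestPool > mpi_request_pool, F && f )
{
    return mgr.emplace_task(
        [mpi_request_pool, f = std::forward<F>(f)]
        {
            // the factory owns the MPI_Request instead of the user
            MPI_Request request;
            f( request );

            // register the request with the pool; the task waits cooperatively
            // until the polling loop has observed its completion
            return mpi_request_pool->get_status( request );
        });
}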

This is now implemented on dev. The make_mpi_scheduler function returns a struct which also contains a function to create an MPI task.

    /*
     * Initialization
     */
    MPI_Init( nullptr, nullptr );

    rg::Manager<
        TaskProperties,
        rg::ResourceEnqueuePolicy
    > mgr;

    auto default_scheduler = rg::scheduler::make_default_scheduler( mgr );
    auto mpi_scheduler = rg::helpers::mpi::make_mpi_scheduler(
              mgr,
              // all MPI tasks should carry these task properties
              TaskProperties::Builder().scheduling_tags({ SCHED_MPI }) );

    // initialize main thread to execute tasks from the mpi-queue and poll
    rg::thread::idle =
        [mpi_scheduler]
        {
            mpi_scheduler.fifo->consume();
            mpi_scheduler.request_pool->poll();
        };

    mgr.set_scheduler(
        rg::scheduler::make_tag_match_scheduler( mgr )
            .add({}, default_scheduler)
            .add({ SCHED_MPI }, mpi_scheduler.fifo));

    /*
     * Create an MPI task
     */
    auto status_fut =
        mpi_scheduler.emplace_mpi_task(
            []( MPI_Request & request )
            {
                MPI_Irecv( ..., &request );
            }
        );

    // do something else

    MPI_Status status = status_fut.get();
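For context, the request pool that the idle hook polls essentially walks its outstanding requests with a non-blocking MPI_Test and completes the corresponding status once a request has finished. A simplified, stand-alone sketch of that idea (std::future is used here in place of redGrapes' own task events; names are hypothetical):

#include <mpi.h>
#include <future>
#include <list>
#include <mutex>
#include <utility>

// simplified stand-alone request pool sketch; the real helper in redGrapes
// integrates with the task/event system instead of std::promise
struct RequestPoolSketch
{
    std::mutex mutex;
    std::list< std::pair< MPI_Request, std::promise< MPI_Status > > > requests;

    // register a request and obtain a future for its status
    std::future< MPI_Status > add( MPI_Request request )
    {
        std::lock_guard< std::mutex > lock( mutex );
        requests.emplace_back( request, std::promise< MPI_Status >{} );
        return requests.back().second.get_future();
    }

    // called repeatedly from the idle loop: test each request without blocking
    void poll()
    {
        std::lock_guard< std::mutex > lock( mutex );
        for( auto it = requests.begin(); it != requests.end(); )
        {
            int flag = 0;
            MPI_Status status;
            MPI_Test( &it->first, &flag, &status );

            if( flag )
            {
                it->second.set_value( status );
                it = requests.erase( it );
            }
            else
                ++it;
        }
    }
};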