How do I stop Julia optimising out a variable? (or how to use `MPI.Ibarrier()`?)

I want to use MPI.Ibarrier() to synchronize processes where one rank needs to complete some work before the others can start, without that first rank having to wait at the barrier itself. Pseudo-code:

using MPI

MPI.Init()

comm = MPI.COMM_WORLD
comm_rank = MPI.Comm_rank(comm)

for i in 1:100
    if comm_rank == 0
        # ... do some work ...
        dummy_req = MPI.Ibarrier(comm)
    else
        req = MPI.Ibarrier(comm)
        MPI.Wait(req)
        # ... do some other work ...
    end
end

I get a weird error like

error in running finalizer: MPI.MPIError(code=7)
MPI_Request_free at /home/.../.julia/packages/MPI/TKXAj/src/api/generated_api.jl:2331 [inlined]
free at /.../.julia/packages/MPI/TKXAj/src/nonblocking.jl:161
...

which I think is due to the finalizer running immediately on the Request object returned by MPI.Ibarrier() on rank 0. The error goes away if I print dummy_req, so it seems MPI needs the Request to stay alive for a while. My guess is that the compiler treats dummy_req as dead as soon as it is assigned, because it is never used again (the same as if I did not store the return value of MPI.Ibarrier() at all), and that this is what triggers the problem. I only need the request to persist until the end of the if-statement, or one iteration of the loop, but I don't know how to arrange that without actually doing something with dummy_req.
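
As a plain-Julia illustration of what seems to be happening (Handle and demo are made-up names for this sketch): an object that is never used again may be collected, and its finalizer run, before the enclosing function even returns.

mutable struct Handle end   # hypothetical stand-in for the MPI Request

function demo()
    h = Handle()
    finalizer(_ -> println("finalizer ran"), h)
    # `h` is never used again below this point, so the compiler may treat it as
    # dead immediately, and a GC pass can collect it (and run its finalizer)
    # before demo() even returns, just like the unused Request from
    # MPI.Ibarrier() on rank 0.
    GC.gc()
    println("still inside demo()")
end

demo()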

You can use GC.@preserve dummy_req to make sure that dummy_req is not finalised within a block of code.
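
As a minimal illustration (the Token type here is just a stand-in for any object that has a finalizer attached), nothing preserved by GC.@preserve can be finalised inside the block, even across an explicit GC.gc():

mutable struct Token end   # stand-in for any object with a finalizer attached

t = Token()
finalizer(_ -> println("finalised"), t)

GC.@preserve t begin
    GC.gc()  # `t` is rooted inside this block, so its finalizer cannot run here
end
# after the block, `t` may be collected and finalised at any time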

Your particular case is a bit tricky because you want all 100 requests to be preserved. One solution would be to use an MPI.MultiRequest to store all of them, which could look like:

using MPI

MPI.Init()

comm = MPI.COMM_WORLD
comm_rank = MPI.Comm_rank(comm)

dummy_reqs = MPI.MultiRequest(100)

GC.@preserve dummy_reqs begin
    for i in 1:100
        if comm_rank == 0
            # ... do some work ...
            MPI.Ibarrier(comm, dummy_reqs[i])
        else
            req = MPI.Ibarrier(comm)
            MPI.Wait(req)
            # ... do some other work ...
        end
    end
    MPI.Barrier(comm)  # make sure dummy_reqs hasn't been finalised up to this point
end
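
As a side note, not part of the suggestion above: simply keeping the requests in an ordinary vector that stays in scope should also root them and prevent the early finalisation, at the cost of allocating a new Request on every iteration. A rough sketch, assuming the same setup as the snippet above:

using MPI

MPI.Init()

comm = MPI.COMM_WORLD
comm_rank = MPI.Comm_rank(comm)

# As long as this vector stays reachable, the requests stored in it cannot be
# garbage-collected, so their finalizers cannot run prematurely.
dummy_reqs = MPI.Request[]

for i in 1:100
    if comm_rank == 0
        # ... do some work ...
        push!(dummy_reqs, MPI.Ibarrier(comm))
    else
        req = MPI.Ibarrier(comm)
        MPI.Wait(req)
        # ... do some other work ...
    end
end

MPI.Barrier(comm)  # by now every Ibarrier has been posted on all ranks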

Thanks @jipolanco, the MPI.MultiRequest was what I needed! For anyone else who comes across a similar problem: if you want to re-use the MultiRequest, you need to free it first, e.g. something like

request_store = MPI.MultiRequest(8)
for i in 1:100
    for j in 1:8
        # ... do some stuff ...
        MPI.Ibarrier(comm, request_store[j])
    end
    MPI.Barrier(comm)
    MPI.Free(request_store)
end