How to use FFTW MPI routines with MPI.jl

I’d like to use FFTW’s distributed-memory routines, and I have several questions about how to achieve this. I guess one should be able to build that on top of MPI.jl and FFTW.jl, right?
I’m still stuck on fairly conceptual questions, so I’d be really grateful if someone could point me to articles or documentation that could help me understand these issues better.

Here are some of my questions so far:

1) How does Julia handle library dependencies?

FFTW produces a separate shared library for its MPI-related routines, libfftw3_mpi, which depends on FFTW’s serial library, libfftw3. Of course, it must also be linked against the MPI library.

Although there is no explicit Libdl.dlopen(lib) in MPI.jl or FFTW.jl (where lib is the appropriate shared library), my understanding is that the dlopen happens implicitly when one ccalls the library directly, right?

Does that make the symbols exported by each shared library globally available to other libraries opened in Julia?

My understanding from the docs is that, with appropriate flags, that is indeed the case.
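
For example (a minimal sketch; the library path below is an assumption you would adapt to your own FFTW installation), a ccall that names a library loads it on first use, which you can check with Libdl.dllist():

```julia
using Libdl

# Assumption: path to your serial FFTW library; adjust for your system.
const libfftw3 = "/usr/local/lib/libfftw3"

# A ccall with a (symbol, library) tuple dlopens the library implicitly
# on first use; fftw_cleanup is a real no-argument FFTW routine.
ccall((:fftw_cleanup, libfftw3), Cvoid, ())

# The library now appears among the loaded shared objects.
filter(p -> occursin("fftw", p), dllist())
```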

2) MPI.jl linked library

It seems MPI.jl builds its own library in /MPI/deps/usr/lib/libjuliampi.dylib (I’m on a Mac), and that is the library used by the package. Why is building this library necessary, as opposed to linking to the MPI libraries directly?
Also, this library doesn’t seem to export any MPI functions:

> nm libjuliampi.dylib                
                 U _MPI_Finalize
                 U _MPI_Finalized
                 U _MPI_Initialized
                 U _atexit
0000000000002e90 T _finalize_atexit
0000000000002ee0 T _install_finalize_atexit_hook
0000000000002ef2 T _juliampi_empty_
                 U _mpi_finalize_
                 U _mpi_init_
                 U dyld_stub_binder

I have zero knowledge of MPI’s internals, and I’m very confused about what is done in this package.

The FFTW.jl package does not compile the fftw3_mpi library, so you’d probably have to compile FFTW yourself rather than using the one that comes with FFTW.jl.
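
If you do build FFTW yourself with MPI support enabled (./configure --enable-mpi && make), a rough sketch of getting started from Julia might look like the following; the library path is an assumption, and fftw_mpi_init is the FFTW routine that must be called after MPI is initialized and before any other FFTW MPI function:

```julia
using MPI

MPI.Init()

# Assumption: path to the libfftw3_mpi you compiled yourself.
const libfftw3_mpi = "/usr/local/lib/libfftw3_mpi"

# Per the FFTW manual, fftw_mpi_init() must be called after MPI_Init
# and before any other FFTW MPI routine.
ccall((:fftw_mpi_init, libfftw3_mpi), Cvoid, ())
```

Depending on how your FFTW and MPI were linked, you may also need to dlopen libfftw3 (and the MPI library) with RTLD_GLOBAL first, as in the sketch further below.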

my understanding is that [the dlopen] happens implicitly when one ccalls the library directly, right?

Yes. If you want to dlopen with different flags, then you can call dlopen manually before the ccalls.
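
For instance (a sketch with hypothetical paths), opening the serial FFTW library with RTLD_GLOBAL before loading libfftw3_mpi makes its symbols visible to the MPI library when the latter is loaded:

```julia
using Libdl

# Assumption: paths to your own FFTW build; adjust for your system.
const libfftw3     = "/usr/local/lib/libfftw3"
const libfftw3_mpi = "/usr/local/lib/libfftw3_mpi"

# Open the serial library with RTLD_GLOBAL so its symbols are visible
# when libfftw3_mpi, which depends on it, is loaded afterwards.
dlopen(libfftw3, RTLD_LAZY | RTLD_GLOBAL)
dlopen(libfftw3_mpi, RTLD_LAZY | RTLD_GLOBAL)

# Subsequent ccalls into libfftw3_mpi reuse the already-opened handles.
```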

It seems MPI.jl builds its own library in /MPI/deps/usr/lib/libjuliampi.dylib (I’m on a Mac), and that is the library used by the package. Why is building this library necessary, as opposed to linking to the MPI libraries directly?

Requiring the user to ensure that the correct libraries are installed is problematic. It’s a lot less error-prone for the package to provide binary dependencies itself.

In general, dealing with binary dependencies is one of the most difficult things about package management in any language. In Julia, we’ve experimented with a few different approaches (BinDeps.jl, leveraging Anaconda via Conda.jl, …) and a typical first choice these days is to use BinaryProvider.jl with pre-compiled binaries hosted on GitHub via BinaryBuilder.jl.

Of course, if you’re just doing something for your own use, it is easy enough to compile and link whatever you want. The hard part is making such a package available to other users.


This library ensures that MPI_Finalize is called before the process exits; there is some discussion here. MPI.jl does not build its own MPI libraries; it links to whichever ones it can find (setting the CC and FC environment variables before building MPI.jl will give it a hint about where to look). libjuliampi is dlopened with flags that make both its own symbols and those of the MPI libraries it is linked to available from Julia.
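
In Julia terms, the effect of that hook is roughly the following sketch (this is only an illustration of the idea; MPI.jl implements it inside the compiled libjuliampi shim rather than in Julia code):

```julia
using MPI

MPI.Init()

# Finalize MPI when the process exits, unless it was already finalized.
atexit() do
    MPI.Finalized() || MPI.Finalize()
end
```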

In general, if you want to use Julia + some other MPI-aware software + MPI.jl, all you have to do is make sure they are all linked to the same MPI library, and everything will work out (you can pass communicators etc. from Julia to the other software). Note that MPI.jl uses the Fortran communicator, not the C communicator, and provides the appropriate conversion methods.
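
As a sketch of that interop (the conversion call is an assumption; the exact name, e.g. MPI.CComm, depends on your MPI.jl version, so check its documentation):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD

# MPI.jl stores the Fortran handle internally; most C libraries, including
# FFTW's MPI interface, expect a C MPI_Comm, so convert before passing it on.
ccomm = MPI.CComm(comm)  # assumption: name may differ across MPI.jl versions

# ccomm can now be handed to ccalls into MPI-aware C libraries.
```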
