ANN: MPI.jl v0.10.0: new build process and CUDA-aware support

Wow, that’s awesome; that might be exactly what I need. Agreed, it seems like this implements a superset of MPIArrays, and that package looks abandoned, so could this one take over?

@kose-y I have a few questions / feature requests:

  • Are nested MPIArrays supported/planned? I’m thinking of an array, distributed along a communicator, each element of which is again an MPIArray along another communicator (see the sketch after this list).
  • Are arrays distributed along two dimensions supported/planned?
  • Is indexing supported, or is it intentionally disabled?
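
To make the first two bullets concrete, here is a rough sketch. It is not the MPIArrays API, just plain MPI.jl communicator splitting, and `nrows`/`ncols` are chosen purely for illustration; exact MPI.jl signatures may differ slightly between releases.

```julia
using MPI

MPI.Init()

comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
nproc = MPI.Comm_size(comm)

# Hypothetical 2 x (nproc ÷ 2) process grid, purely for illustration;
# assumes nproc is even.
nrows = 2
ncols = div(nproc, nrows)

row = div(rank, ncols)   # grid row of this rank
col = mod(rank, ncols)   # grid column of this rank

# Ranks sharing a `row` value end up in the same row communicator,
# ranks sharing a `col` value in the same column communicator.
row_comm = MPI.Comm_split(comm, row, col)
col_comm = MPI.Comm_split(comm, col, row)

# An outer distributed array could live on `row_comm`, with each of its blocks
# itself distributed along `col_comm` (the nesting in the first bullet);
# the same split also maps onto a 2D block distribution (the second bullet).

MPI.Finalize()
```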

Wow. Super cool to see all these updates on the distributed array and FFT support. Thanks to everyone involved! I can only agree: it would be super cool to bring the various efforts under the same hood.

Wow. That leads me to ask about topology awareness of the cluster network.
The cluster scheduler will have topology knowledge, i.e. which compute nodes are close to each other on the network. In a simple fat tree those are groups of nodes on the same switch.

So could we have those communicators launch the nested communicators on nodes that share a switch?

P.S. I am not limiting the discussion to one topology.

That is theoretically the point of the MPI topology functions, some of which are currently available in MPI.jl (as always, PRs are welcome if you would like to add more functionality).
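
As a rough, hedged illustration, a Cartesian topology could look like the snippet below. The function names follow the MPI standard wrappers in MPI.jl, but the exact keyword/positional form has changed between MPI.jl releases, so treat it as a sketch rather than copy-paste code. With `reorder=true` the MPI library is allowed to renumber ranks so that neighbours in the process grid sit close together on the real network, which is the hook for the switch-level placement asked about above.

```julia
using MPI

MPI.Init()

comm  = MPI.COMM_WORLD
nproc = MPI.Comm_size(comm)

dims = [2, div(nproc, 2)]                    # hypothetical 2 x (nproc/2) grid
cart = MPI.Cart_create(comm, dims;
                       periodic=[false, false], reorder=true)

coords = MPI.Cart_coords(cart)               # this rank's coordinates in the grid

MPI.Finalize()
```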

Global indexing is intentionally disabled, as it was not an important part of my application.
Distribution along two or more dimensions is planned, and I think it can be connected to communication-efficient linear algebra.