MPI is the state of the art for multiprocessing on HPC clusters, but its syntax can be unfamiliar to beginners. MPI.jl provides Julia wrappers for MPI that simplify its use considerably. This package provides wrappers around MPI.jl that simplify the process further, in a form that may be more familiar to Julia users.

A sample script to evaluate a parallel map-reduce:

```julia
using MPIMapReduce
using MPI
MPI.Init()
y1 = pmapreduce(x -> ones(2) * x, +, 1:5)
if MPI.Comm_rank(MPI.COMM_WORLD) == 0
    show(stdout, MIME"text/plain"(), y1)
    println()
end
```
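
A script like this is meant to be launched across multiple processes with `mpiexec` (or the `mpiexecjl` wrapper that MPI.jl can install); the filename `mapreduce_demo.jl` below is just a placeholder for wherever you saved the script:

```shell
# Launch the script above on 2 MPI ranks.
# "mapreduce_demo.jl" is a placeholder filename; substitute your own script.
mpiexec -n 2 julia mapreduce_demo.jl
```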

This produces the output

```
2-element Array{Float64,1}:
 15.0
 15.0
```

Note that the reduction is applied elementwise, so this is equivalent to `mapreduce(x -> ones(2)*x, (x,y) -> x .+ y, 1:5)` in Julia.
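
For comparison, here is a runnable serial version of that equivalence (no MPI needed), with the addition broadcast over the mapped arrays:

```julia
# Serial equivalent of the parallel call above: map each x to ones(2)*x,
# then reduce by elementwise addition.
y1 = mapreduce(x -> ones(2) * x, (a, b) -> a .+ b, 1:5)
# y1 == [15.0, 15.0], matching the parallel result
```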

The package also exports another function, `pmapgatherv`, that performs a map followed by a concatenation. The sample script

```julia
using MPIMapReduce
using MPI
MPI.Init()
y = pmapgatherv(x -> ones(2) * x^2, hcat, 1:5)
if MPI.Comm_rank(MPI.COMM_WORLD) == 0
    show(stdout, MIME"text/plain"(), y)
    println()
end
```

produces the output

```
2×5 Array{Float64,2}:
 1.0  4.0  9.0  16.0  25.0
 1.0  4.0  9.0  16.0  25.0
```

In this function the reduction is not performed elementwise, so this is actually equivalent to Julia's `mapreduce`. The naming reflects the fact that the function uses MPI's `Gatherv` under the hood.
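
A serial sketch of that equivalence (again without MPI) uses `hcat` directly as the reduction operator, whereas the parallel version concatenates the per-rank results via `Gatherv`:

```julia
# Serial equivalent: map each x to a 2-element column, then hcat the columns.
y = mapreduce(x -> ones(2) * x^2, hcat, 1:5)
# y is a 2×5 matrix whose columns are [x^2; x^2] for x = 1..5
```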

This package is quite new and has not been tested extensively, but basic tests appear to work. Issues and PRs are welcome!

Link to package: GitHub - jishnub/MPIMapReduce.jl: An MPI-based distributed map-reduce function for Julia