I am relatively new to Julia, but I have ported one of my older programs from Fortran to Julia. I picked a program from my collection which doesn't have any parallelism in it, with the intention of implementing the parallelism from scratch, in a Julian way.
So here I am, trying to speed the code up in Julia. I know that there is the standard package Distributed.jl, which should allow me to do exactly what I want: parallelize the code in a Julian way. (Whether using a Julia standard package makes an approach Julian is a different question, which I will leave aside for the sake of brevity.) However, there is also
MPI.jl, which seems to offer the same possibilities as Distributed.jl, although I would see it as a step back: MPI as a library was needed by languages developed before parallel computers, languages unaware of the parallel environment in which they would eventually end up running. But Julia is not like that; Julia was designed with the inherent capability to run on multiple processing units.
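To make the comparison concrete, here is a minimal sketch of the kind of Distributed.jl pattern I have in mind (the worker count and the function `f` are just placeholders, not my actual program):

```julia
using Distributed
addprocs(4)                      # spawn 4 local worker processes

@everywhere f(x) = x^2           # define f on the master and every worker

# pmap farms the calls out to the workers and collects the results
results = pmap(f, 1:100)

# or a distributed reduction over a loop:
total = @distributed (+) for i in 1:100
    i^2
end
```

The appeal to me is that this stays inside one Julia session and feels like ordinary Julia code.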
So I wonder: what was the motivation to introduce MPI into Julia through a package, when the same or very similar functionality already exists in Distributed.jl? Has anyone tried both approaches, and what were your experiences? Does Distributed.jl lack some functionality present in MPI.jl? Does using MPI.jl bring more potential for better performance? Does either of these libraries render your code less platform independent? Since my long-term aim is to run simulations on multiple GPUs, is either of these packages easier to integrate with
CUDA.jl? I am pretty sure that CUDA.jl is not aware of MPI, but it might be of Distributed.jl.
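For comparison, this is a rough sketch of how I understand the equivalent MPI.jl (SPMD) style; every rank runs the same file, and it must be launched with mpiexec, so I have not been able to test it the same way:

```julia
using MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)       # this rank's id, 0-based
nranks = MPI.Comm_size(comm)       # total number of ranks

# each rank computes a partial sum over its strided slice of 1:100
local_sum = sum(i^2 for i in (rank + 1):nranks:100)

# combine the partial sums on rank 0
total = MPI.Reduce(local_sum, +, 0, comm)

rank == 0 && println("total = ", total)
MPI.Finalize()
```

As I understand it, with a CUDA-aware MPI build MPI.jl can even pass CUDA.jl device arrays directly to communication calls, but I have no first-hand experience with that.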
Any comments or opinions would be very welcome.