I’m developing a package `MyModule` that uses distributed operations internally. The following code works perfectly fine:
```julia
using Distributed
addprocs(10)
using MyModule
function_that_uses_pmap_inside()  # imported from MyModule, works fine
```
This is expected, as the Julia docs state:

> Finally, if `DummyModule.jl` is not a standalone file but a package, then `using DummyModule` will load `DummyModule.jl` on all processes, but only bring it into scope on the process where `using` was called.
However, if I run the following code instead:

```julia
using Distributed
using MyModule
addprocs(10)
function_that_uses_pmap_inside()
```
I get an exception: `On worker 2: key MyModule not found`. This also makes sense, since I imported `MyModule` before adding the worker processes. However, running `using MyModule` again after adding the processes does not fix the error. It seems that Julia does not load the module on the new worker processes if `using MyModule` has already been run once.
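To make the failure mode concrete, here is a sketch of the exact sequence described above (with `MyModule` standing in for any user package, so this is not directly runnable as-is):

```julia
using Distributed
using MyModule                    # loads MyModule on the master process only
addprocs(10)                      # new workers start without MyModule loaded
using MyModule                    # no-op here: the module is already loaded locally,
                                  # so nothing is sent to the new workers
function_that_uses_pmap_inside()  # fails with: On worker 2: key MyModule not found
```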
This means that if I decide to add more processes partway through my Jupyter notebook workflow, I have to restart the entire notebook so that I can add all the processes I want before running `using MyModule`. This takes a lot of time: is there any way of loading a module on newly added worker processes without restarting the notebook?