Loading package on newly added processors

I’m developing a package MyModule that uses distributed operations internally. The following code works as expected:

using Distributed
addprocs(10)
using MyModule
function_that_uses_pmap_inside(); # imported from MyModule, works fine

This is expected, as the Julia docs state:

Finally, if DummyModule.jl is not a standalone file but a package, then using DummyModule will load DummyModule.jl on all processes, but only bring it into scope on the process where using was called.

However, if I run the following code instead,

using Distributed
using MyModule
addprocs(10)
function_that_uses_pmap_inside(); 

I get an exception: `On worker 2: key MyModule not found`. This also makes sense, since I imported MyModule before adding the processors. However, running using MyModule again after adding the processors does not fix the error. It seems that Julia does not load the module on the new processors if using MyModule has already been run once.

This means that whenever I decide to add more processors during my Jupyter notebook workflow, I have to restart the entire notebook so that I can add all the processors I want before running using MyModule. This takes a lot of time: is there any way to load a module on the new processors without restarting my notebook?

Is this what you are looking for?

@everywhere using MyModule

see here

You may also refer to the manual, especially:

Finally, if DummyModule.jl is not a standalone file but a package, then using DummyModule will load DummyModule.jl on all processes, but only bring it into scope on the process where using was called.
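Putting it together, a minimal session sketch (reusing function_that_uses_pmap_inside from your question) would look like this:

```julia
using Distributed
using MyModule              # loads MyModule only on the processes that exist now

addprocs(10)                # the new workers do not have MyModule loaded

@everywhere using MyModule  # load MyModule on every process, including the new workers

function_that_uses_pmap_inside()  # pmap can now dispatch to all workers
```

Note that @everywhere using MyModule is also safe to run again after a later addprocs call, so you can grow the worker pool mid-notebook without restarting.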