I have a proposal I could implement, but I wanted to see if there are any relevant things I might be missing before embarking on it.
As you know, when you have multiple Julia processes and do `using Foo` or even `@everywhere using Foo`, Julia first loads Foo on the master and then on the workers, so it takes twice as long (which, for me personally, turns some already painful load times into really unbearable ones). My understanding is that this is done because, if `using Foo` triggered precompilation and all the workers tried precompiling at once, you could run into race conditions with the cache files.
However, afaict, no such race conditions exist if everything is already precompiled. One can check that this is the case by running `@everywhere @eval using Foo`, which actually loads everything in parallel on all workers at once. If Foo is precompiled, at least for all the random packages I’ve tried, everything works fine.
So it seems like if we rewrote `@everywhere using Foo` to expand to `(Pkg.precompile("Foo"); @actually_run_everywhere_in_parallel using Foo)`, you would get a 2X speed boost for parallel loading when Foo is already precompiled, and otherwise you would still get a small boost over the current situation, because once precompilation is done, loading still happens in parallel.
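As a rough sketch of what I have in mind (note that the single-package `Pkg.precompile(name)` method is hypothetical and does not exist yet, and `remotecall_eval` is just the machinery that `@everywhere` itself lowers to):

```julia
using Distributed, Pkg

# Hypothetical two-step parallel load: precompile serially on the master,
# then load on all processes concurrently, which is race-free once all
# cache files are known to be fresh.
function parallel_using(name::AbstractString)
    Pkg.precompile(name)           # hypothetical single-package method
    ex = :(using $(Symbol(name)))  # e.g. :(using Foo)
    # evaluate the `using` on every process at once
    Distributed.remotecall_eval(Main, procs(), ex)
end
```

The real version would presumably live inside the `@everywhere` macro rather than a helper function, but the split into "precompile once, then load everywhere in parallel" is the core idea.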
The problem is that `Pkg.precompile("Foo")` doesn’t exist; only `Pkg.precompile()` exists, which does all packages.
So my proposal would be to implement `Pkg.precompile(pkg)` (which seems doable from scanning over the source code of `Pkg.precompile()`), and then modify the `@everywhere` macro. I’m not sure plain `using Foo` could be made to work the same way unless the package-callback machinery was changed to allow pre-hooks, although that may be beyond my ability.
Let me know if anyone has comments; I’m kind of familiar with Distributed but much less so with Pkg, so I can imagine I’ve missed something.
There is `Base.compilecache(::PkgId)`, but I don’t think that’s what you need, as it creates the precompilation cache even if it is not stale. I think you need a “compilecache_if_stale” function.
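For reference, this is roughly what the unconditional rebuild looks like (`"Example"` is just a stand-in package name here):

```julia
# Unconditionally rebuild the cache file for a package, even when the
# existing cache is perfectly fresh -- hence not quite what we want.
pkg = Base.identify_package("Example")  # -> Base.PkgId, or nothing if not found
pkg === nothing || Base.compilecache(pkg)
```

The missing piece is the "if stale" check in front of that call.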
How a stale cache is detected is complex, so I may be saying something wrong, but, last time I tried, stale-cache detection and package loading are tightly coupled, and you cannot just detect whether the cache is stale without loading the package itself (or, I guess, at least its dependencies).
That’s exactly right; I should have specified above that that’s what `Pkg.precompile(pkg)` would do, i.e. it would do nothing if the cache is not stale (similarly to how `Pkg.precompile()` currently only precompiles stale caches).
That’s true, but looking over the code for `Pkg.precompile`, it seems the logic is there, just that it also loops over all packages. It seems possible the logic could be extracted to handle a single package, although I can’t say I’m 100% sure of this.
All the information is (presumably) there in the cache header, so I wouldn’t be surprised if this is already implementable. I just wanted to bring up that you might need to re-implement some of the logic behind `Base.require`.
Also, I don’t think what `Pkg.precompile` is doing is enough, because it allows false negatives. For example:
```
(tmp.RM00KTz0Sl) pkg> add InitialValues@0.2.0 BangBang@0.3.19  # BangBang depends on InitialValues
...

julia> using BangBang  # precompile BangBang@0.3.19 and InitialValues@0.2.0
[ Info: Precompiling BangBang [198e06fe-97b7-11e9-32a5-e1d131e6ad66]

(tmp.RM00KTz0Sl) pkg> add InitialValues@0.2.1
   Resolving package versions...
    Updating `/tmp/tmp.RM00KTz0Sl/Project.toml`
  [22cec73e] ↑ InitialValues v0.2.0 ⇒ v0.2.1
    Updating `/tmp/tmp.RM00KTz0Sl/Manifest.toml`
  [22cec73e] ↑ InitialValues v0.2.0 ⇒ v0.2.1

(tmp.RM00KTz0Sl) pkg> rm InitialValues
    Updating `/tmp/tmp.RM00KTz0Sl/Project.toml`
  [22cec73e] - InitialValues v0.2.1
    Updating `/tmp/tmp.RM00KTz0Sl/Manifest.toml`
  [no changes]

julia> exit()

$ julia --startup-file=no --project=.
...

julia> using Pkg

julia> Pkg.precompile()  # should precompile, but it doesn't
Precompiling project...

julia> using BangBang
[ Info: Precompiling BangBang [198e06fe-97b7-11e9-32a5-e1d131e6ad66]
```
I guess it’s OK for `Pkg.precompile`, as it’s just a user-facing interface and only has to cover the important cases. However, if you need to use it via `@everywhere using ...`, I think it needs to be more robust.
Hmm, thanks, that’s a super helpful example. So it seems like the logic in `precompile()` may not be quite enough. I will have to think through what needs to be done instead.
I stared at the code-loading functions long enough while writing these comments that I now feel like I can implement this.
Here is my shot at this:
```julia
using Pkg

isstale(m::Module) = isstale(Base.PkgId(m))

function isstale(pkg::Base.PkgId)
    if pkg.uuid === nothing
        @debug "Ignoring top-level module `$pkg`." maxlog=1 _id=(isstale, pkg)
        return false
    elseif Pkg.Types.is_stdlib(pkg.uuid)
        return false
    end
    sourcepath = Base.locate_package(pkg)
    sourcepath === nothing && error("Package not found: $pkg")
    paths = Base.find_all_in_cache_path(pkg)
    for path_to_try in paths::Vector{String}
        staledeps = Base.stale_cachefile(sourcepath, path_to_try)
        staledeps === true && continue
        any(staledeps) do dep
            if dep isa Module
                return isstale(dep)
            else
                return isstale(dep[2]::Base.PkgId)
            end
        end || return false
    end
    return true
end
```
I think this is still incorrect, as `stale_cachefile` seems to treat packages that are already loaded as not stale (even if the files were edited afterwards). That makes sense in the single-process case but, for Distributed, I don’t think it’s enough.
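For what it’s worth, usage of the sketch above would look something like this (`Example` here is a stand-in for any installed package, not a name from the thread):

```julia
# Query a package by PkgId without loading it first, then decide
# whether a (hypothetical) single-package precompile is needed.
pkg = Base.identify_package("Example")  # -> Base.PkgId, or nothing
if pkg !== nothing && isstale(pkg)
    Base.compilecache(pkg)  # rebuild only because we know it is stale
end
```

Combining `isstale` with `Base.compilecache` like this is essentially the “compilecache_if_stale” function mentioned earlier, modulo the already-loaded-package caveat above.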