I could not see a significant difference; on my laptop it takes 28-36 s for the language server to start without downloading the cache files.
Edit: If I switch to performance mode it takes 15 s.
Sorry if this is a dumb question. I’m a relatively new Julia user, but one thing I have learned is that compilation in Julia takes a lot longer the first time you run something than in subsequent runs. I just wanted to double-check that this speed increase isn’t due to that effect. For example, if I run the command twice, the first time takes ~7 sec and the second time less than 0.01 sec. This is what I get (and I’m currently running Julia 1.7.3).
There is no need to compile anything the second time it runs, since the compiled code from the first run is kept for the rest of the session. Also, the second time you run it you don’t need to load the package again.
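A minimal illustration, assuming a plot call like the one used elsewhere in this thread (the timings are roughly the ones you reported; exact numbers vary by machine):

julia> using GMT

julia> @time plot(rand(5,2));    # first call in the session: ~7 sec, dominated by compilation

julia> @time plot(rand(5,2));    # second call: < 0.01 sec, the already-compiled code is reused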
Version 1.10.0-DEV.207 (2022-12-28)
Commit 9448c89652 (1 day old master)
julia> cd("C:/v"); @time using GMT
2.665190 seconds (3.92 M allocations: 240.520 MiB, 3.23% gc time, 38.43% compilation time: 89% of which was recompilation)
Which is still a bit slower than what I get with Julia 1.7.
Many other functions undoubtedly start faster, but I have modules whose TTFP dropped from 16 s to only ~8 s, and SnoopPrecompile does not help (it does, however, substantially increase the precompile time).
Well, Julia 1.9 does not magically solve all problems regarding the load and compilation time of packages. It implements binary caching, which helps a lot, but for some packages the load time slightly increases.
So there is more work to be done, on the Julia side, but also by the package authors…
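If you want to check whether a given package actually produced a native-code cache (“pkgimage”), one rough way, assuming the default depot layout and using GMT as the example, is to list its compiled-cache directory:

julia> cachedir = joinpath(DEPOT_PATH[1], "compiled", "v$(VERSION.major).$(VERSION.minor)", "GMT");

julia> readdir(cachedir)   # expect *.ji files and, on 1.9+, *.so/*.dylib/*.dll pkgimages next to them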
The speedups reported here have little to nothing to do with package load time; they come from packages that use precompile (or equivalent) to cache native code, which is now possible. So if all you are measuring is package load time, or you use a function that has not been precompiled, I wouldn’t expect much difference.
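For concreteness, here is a minimal sketch (hypothetical package and function names) of the kind of SnoopPrecompile workload that benefits from the new native-code caching:

module MyPkg                        # hypothetical package

using SnoopPrecompile

somework(x) = sum(abs2, x)          # stand-in for the package's real API

@precompile_setup begin
    data = rand(5, 2)               # setup code runs at precompile time but is not itself cached
    @precompile_all_calls begin
        somework(data)              # everything executed here is compiled and, on 1.9+,
    end                             # its native code lands in the package's cache file
end

end # module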
I know that it depends on precompilation and, believe me, I’ve tried a lot of variations. Still, I was also expecting an improvement in load time, since it says that it’s recompiling things. And again, in Julia 1.7 that start time is below 2 sec. In >= 1.9 the load time is almost always ~3 sec.
And I don’t see what I can do when the TTFP of the plot function, as I reported here before, suddenly jumped from ~8 sec to > 11 sec in the middle of the 1.9 development cycle. Now it has dropped to ~4.5 sec (still a bit high). It is < 7 sec in Julia 1.7.
Yes, I saw that, both in the load time and in other modules, but I really don’t know what I can do when SnoopCompile says there are no invalidations:
julia> using SnoopCompileCore; invalidations = @snoopr(plot(rand(5,2))); using SnoopCompile
julia> trees = invalidation_trees(invalidations)
SnoopCompile.MethodInvalidations[]
and for inference I have a list of 7 triggers and no idea what to do with that info:
julia> tinf = @snoopi_deep plot(rand(5,2))
InferenceTimingNode: 2.239190/6.083681 on Core.Compiler.Timings.ROOT() with 7 direct children
julia> itrigs = inference_triggers(tinf)
7-element Vector{InferenceTrigger}:
Inference triggered to call GMT.plot(::Matrix{Float64}) from eval (.\boot.jl:370) inlined into REPL.eval_user_input(::Any, ::REPL.REPLBackend, ::Module) (C:\workdir\usr\share\julia\stdlib\v1.10\REPL\src\REPL.jl:152)
Inference triggered to call GMT.add_opt_cpt(::Dict{Symbol, Any}, ::String, ::Matrix{Symbol}, ::Char, ::Int64, ::Matrix{Float64}, ::Nothing, ::Bool, ::Bool, ::String, ::Bool) from common_plot_xyz (C:\Users\joaqu\.julia\dev\GMT\src\psxy.jl:175) with specialization GMT.common_plot_xyz(::String, ::Matrix{Float64}, ::String, ::Bool, ::Bool)
Inference triggered to call GMT._helper_psxy_line(::Dict{Symbol, Any}, ::String, ::String, ::Bool, ::Matrix{Float64}, ::Vararg{Any}) from common_plot_xyz (C:\Users\joaqu\.julia\dev\GMT\src\psxy.jl:183) with specialization GMT.common_plot_xyz(::String, ::Matrix{Float64}, ::String, ::Bool, ::Bool)
...
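(As an aside, SnoopCompile also provides a suggest helper that categorizes such triggers and hints at possible remedies; a minimal sketch, assuming the itrigs vector above:)

julia> suggest.(itrigs)     # one Suggested entry per trigger, each with a hint about a possible fix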
If you’re using SnoopPrecompile to compile a workload and you’re not seeing a big speedup on 1.9, your first stop should be to check for invalidations. Checking for invalidations is easy; most Julia users should be able to do it. Fixing them is much harder than any of us would like, but if you open a new topic someone might be able to help you.
As a reminder, the recipe is to start a fresh session (even with --startup-file=no so it’s “clean”) and then:
using SnoopCompileCore             # lightweight recorder: load this first so the measurement itself doesn't perturb things
invs = @snoopr using ThePackageICareAbout;                     # record invalidations triggered by loading the package
tinf = @snoopi_deep some_workload_I_want_to_be_fast(args...);  # profile inference while running the workload you care about
using SnoopCompile                 # the analysis tools are loaded only after the data is collected
trees = invalidation_trees(invs);                              # organize the raw invalidations into trees
staletrees = precompile_blockers(trees, tinf)                  # keep only the invalidations that block precompiled code this workload needs
and see what is in staletrees. If it’s empty, you can rule out invalidation; if not, you have a problem you need to fix. Note that this is workload-dependent: some_workload_I_want_to_be_fast... can be a begin...end block that exercises all the functionality you want to be fast. If you later discover more stuff you want to be fast, you should repeat this analysis for that new workload. (If you just want to fix all invalidations, look at trees instead, but often that’s a pretty large list…)
The “fault” is only rarely in the method listed as causing the invalidation; instead, the usual fix is in the things that got invalidated by it. SnoopCompile’s documentation goes over strategies to analyze them more deeply and fix them, but again that treads into more difficult waters.
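To make “the fix is in the things that got invalidated” a bit more concrete, here is a purely hypothetical sketch (not GMT code) of a pattern the SnoopCompile docs highlight: values that only infer as Any leave downstream compiled code vulnerable to invalidation by new method definitions, and a concrete type assertion removes that vulnerability:

opts = Dict{Symbol,Any}(:symbol => "c", :size => 0.2)   # hypothetical options dict

marker(d)       = d[:symbol]            # inferred as Any; callers compiled against this
                                        # can be invalidated when new methods appear
marker_fixed(d) = d[:symbol]::String    # asserting the concrete type lets callers infer String,
                                        # so unrelated new methods can no longer invalidate them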
One doubt/curiosity: if one loads a package in a fresh Julia session, and then later opens Julia again in the same environment, should the loading time now be very fast? I mean, is there any reason for recompilation to occur between two subsequent loads of a package, if nothing has been changed in the meantime?
If the package needs precompilation, then yes, using MyPkg will be very slow the first time. But loading is still slow (less so than in the first case) once it’s been precompiled. The second form of slowness has nothing to do with I/O (that’s fast); instead, almost all of it is about checking whether the loaded code is still valid.
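For the record, the two cases are easy to observe (GMT is just the package from this thread; any package works):

julia> @time using GMT      # first load after install/update: slow, includes precompilation

# restart Julia, then in the fresh session:

julia> @time using GMT      # already precompiled: faster, but still pays for the validity checks

# on Julia >= 1.8, @time_imports using GMT in a fresh session also breaks the time down per dependency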
For those who curse invalidations, keep in mind they are a feature, not a bug, even though they cause headaches. Julia has three remarkable properties:
we attempt to guarantee that the code that runs is the same no matter how you got there: no matter whether you loaded everything in your “universe of code” before triggering compilation, or whether you built up your system incrementally at the REPL with usage, you get the same outcome.
The last property is critical and is what makes the REPL actually work. Otherwise you’d get different outcomes when you used the REPL vs something “hardbaked” in a package. But it’s also what requires us to check for invalidations, and what makes package loading slow.
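A tiny, self-contained example of that guarantee (hypothetical functions, just to show the mechanism):

julia> f(x) = 1;

julia> g(x) = f(x) + 1;

julia> g(2)                # compiles g(::Int) assuming f returns the Int 1
2

julia> f(x::Int) = 2.0;    # a more specific method shows up later: g's compiled code is now stale

julia> g(2)                # invalidation + recompilation give the same answer as if
3.0                        # f(x::Int) had existed before g was ever compiled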