Julia v1.9.0-beta2 is fast

First test on Ubuntu 20.04:

Code, save it in bench.jl:

using CSV

const input = """
time,ping
1,25.7
2,31.8
"""

function read_csv(in)
    io = IOBuffer(in)
    file = CSV.File(io)
    return file
end

file = read_csv(input)

and execute it with:


@time @eval include("bench.jl")

# Julia-1.8.4:      7.17s
# Julia-1.9 beta2 : 0.47s

So when using the CSV package, the TTFX (time to first execution) is about 15 times shorter with Julia 1.9.0-beta2 than with Julia 1.8.4.

I installed the new beta with jill.py using the commands:

jill install 1.9 --unstable
jill switch 1.9

The pre-release version 1.40.1 of the VS Code plugin works with Julia v1.9.0-beta2.


Does the beta already have native caching backported? I didn’t see it in the linked news file.


Does LanguageServer.jl (Julia’s VSCode Plugin) start faster in Julia 1.9.0-beta2?


The pull request https://github.com/JuliaLang/julia/pull/47184 is included.


I could not see a significant difference; on my laptop it takes 28–36 s for the language server to start without downloading the cache files.
Edit: If I switch to performance mode it takes 15 s.


Second test case, save it in plot.jl:

# example for using plotting in Julia
using Plots

x  = 0:0.01:100
y1 = sin.(x)

p = plot(x, y1, size=(1200,700), legend=false)

and execute it with:

@time @eval include("plot.jl")

# Julia v1.8.4      8.25s
# Julia v1.9-beta2  3.22s

So in this test case the time to first plot (TTFP) is about 2.5 times shorter.

Nice belated Christmas present! :grinning:
Thanks a lot to @tim.holy and all the other developers for the good work!


Sorry if this is a dumb question. I’m a relatively new Julia user, but one thing I have learned is that compilation makes the first run of something in Julia much slower than subsequent runs. I just wanted to double-check that this speed increase isn’t due to that effect, right? For example, if I run the command twice, the first time takes ~7 s and the second time less than 0.01 s. This is what I get (I’m currently running Julia 1.7.3):

julia> @time @eval include("bench.jl")
  6.717367 seconds (20.20 M allocations: 1.007 GiB, 3.62% gc time, 94.06% compilation time)
2-element CSV.File:
 CSV.Row: (time = 1, ping = 25.7)
 CSV.Row: (time = 2, ping = 31.8)

julia> @time @eval include("bench.jl")
  0.009645 seconds (4.25 k allocations: 262.817 KiB, 68.93% compilation time)
2-element CSV.File:
 CSV.Row: (time = 1, ping = 25.7)
 CSV.Row: (time = 2, ping = 31.8)

The times I mention here are all the first execution times after starting a fresh session. The second time is always much faster.


There is no need to compile anything the second time it runs, since the compiled code from the first run is cached in the session. Also, the second time you run it you don’t need to load the package again.
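This in-session caching is easy to see with any throwaway function (`square_sum` below is just an illustrative name, not taken from the benchmark):

```julia
# The first call compiles a specialized method for Vector{Float64};
# the second call reuses the already-compiled code.
square_sum(x) = sum(x .^ 2)

@time square_sum(rand(1000))   # includes compilation time
@time square_sum(rand(1000))   # pure execution time, far faster
```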


I wish that was more true :slight_smile:

On the second run, with Version 1.10.0-DEV.207 (2022-12-28), commit 9448c89652 (1-day-old master):

julia> cd("C:/v"); @time using GMT
  2.665190 seconds (3.92 M allocations: 240.520 MiB, 3.23% gc time, 38.43% compilation time: 89% of which was recompilation)

Which is still a bit slower than what I get with Julia 1.7.

It is undoubtedly faster to start many other functions, but I have modules whose TTFP dropped from 16 s to only ~8 s, and SnoopPrecompile does not help (while substantially increasing the precompilation time).

Well, Julia 1.9 does not magically solve all problems regarding package load and compilation times. It implements caching of native code, which helps a lot, but for some packages the load time increases slightly.
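For package authors, the usual way to fill that cache is to execute a representative workload during precompilation, e.g. with SnoopPrecompile. A minimal sketch (the module and function names here are made up for illustration):

```julia
module TinyPkg  # hypothetical package

using SnoopPrecompile

# Stand-in for real package functionality.
wave(x) = sin.(x) .+ cos.(2 .* x)

# The code in this block runs while the package is being precompiled,
# so on Julia 1.9 the native code it triggers is stored in the
# package's cache file and does not need recompiling at first use.
@precompile_all_calls begin
    wave(0:0.1:1.0)
end

end
```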

So there is more work to be done, on the Julia side, but also by the package authors…

All told, I’ve spent weeks trying to improve that, but …


The speedups reported here have little to nothing to do with package load time; they come from packages that use precompile statements (or equivalent) to cache, as of 1.9, native code. So if all you are measuring is package load time, or you use a function that has not been precompiled, I wouldn’t expect much difference.

You’re likely experiencing lots of invalidations (which defeat the purpose of caching…), see the almost 90% of recompilation time.


I know that it depends on precompilation, and believe me, I’ve tried *a lot* of variations. Still, I was also expecting an improvement in load time, since it says it is recompiling things. And again, in Julia 1.7 that start time is below 2 s; in >= 1.9 the load time is almost always ~3 s.
And I don’t see what I can do when the TTFP of the plot function, as I reported here before, suddenly jumped from ~8 s to > 11 s in the middle of the 1.9 development cycle. Now it has dropped to ~4.5 s (still a bit high); it is < 7 s in Julia 1.7.

Yes, I saw that, both in the load time and in other modules, but I really don’t know what I can do when SnoopCompile says there are no invalidations:

julia> using SnoopCompileCore; invalidations = @snoopr(plot(rand(5,2))); using SnoopCompile

julia> trees = invalidation_trees(invalidations)

and for inference triggers I have a list of 7 functions and no idea what to do with that info:

julia> tinf = @snoopi_deep plot(rand(5,2))
InferenceTimingNode: 2.239190/6.083681 on Core.Compiler.Timings.ROOT() with 7 direct children

julia> itrigs = inference_triggers(tinf)
7-element Vector{InferenceTrigger}:
Inference triggered to call GMT.plot(::Matrix{Float64}) from eval (.\boot.jl:370) inlined into REPL.eval_user_input(::Any, ::REPL.REPLBackend, ::Module) (C:\workdir\usr\share\julia\stdlib\v1.10\REPL\src\REPL.jl:152)
 Inference triggered to call GMT.add_opt_cpt(::Dict{Symbol, Any}, ::String, ::Matrix{Symbol}, ::Char, ::Int64, ::Matrix{Float64}, ::Nothing, ::Bool, ::Bool, ::String, ::Bool) from common_plot_xyz (C:\Users\joaqu\.julia\dev\GMT\src\psxy.jl:175) with specialization GMT.common_plot_xyz(::String, ::Matrix{Float64}, ::String, ::Bool, ::Bool)
 Inference triggered to call GMT._helper_psxy_line(::Dict{Symbol, Any}, ::String, ::String, ::Bool, ::Matrix{Float64}, ::Vararg{Any}) from common_plot_xyz (C:\Users\joaqu\.julia\dev\GMT\src\psxy.jl:183) with specialization GMT.common_plot_xyz(::String, ::Matrix{Float64}, ::String, ::Bool, ::Bool)

If you’re using SnoopPrecompile to compile a workload and you’re not seeing a big speedup on 1.9, your first stop should be to check for invalidations. Checking for invalidations is easy, most Julia users should be able to do it. Fixing them is much harder than any of us would like, but if you open a new topic someone might be able to help you.

As a reminder, the recipe is to start a fresh session (even with --startup=no so it’s “clean”) and then:

using SnoopCompileCore
invs = @snoopr using ThePackageICareAbout;
tinf = @snoopi_deep some_workload_I_want_to_be_fast(args...);
using SnoopCompile
trees = invalidation_trees(invs);
staletrees = precompile_blockers(trees, tinf)

and see what is in staletrees. If it’s empty, you can rule out invalidation; if not, you have a problem you need to fix. Note that this is workload-dependent: some_workload_I_want_to_be_fast... can be a begin...end block that exercises all the functionality you want to be fast. If you later discover more stuff you want to be fast, you should repeat this analysis for that new workload. (If you just want to fix all invalidations, look at trees instead, but often that’s a pretty large list…)

The “fault” is only rarely in the method listed as causing the invalidation, instead the usual fix is in the things that got invalidated by it. SnoopCompile’s documentation goes over strategies to analyze them more deeply and fix them, but again that treads into more difficult waters.


One doubt/curiosity: if one loads a package in a fresh Julia session, and then later opens Julia again in the same environment, should the loading now be very fast? I mean, is there any reason for recompilation to occur between two subsequent loadings of a package, if nothing has been done in the meantime?

Invalidations happen when you load the package, not when you run the workload. You’re snooping the wrong step.


If the package needs precompilation, then yes, using MyPkg will be very slow the first time. Loading is still slow (though less so than in the first case) once it has been precompiled. This second form of slowness has nothing to do with I/O (that’s fast); almost all of it is spent checking whether the loaded code is still valid.

For those who curse invalidations, keep in mind they are a feature, not a bug, even though they cause headaches. Julia has three remarkable properties:

  • it’s aggressively compiled & automatically specialized
  • it’s interactive
  • we attempt to guarantee that the code that runs is the same no matter how you got there: no matter whether you loaded everything in your “universe of code” before triggering compilation, or whether you built up your system incrementally at the REPL with usage, you get the same outcome.

The last property is critical and is what makes the REPL actually work. Otherwise you’d get different outcomes when you used the REPL vs something “hardbaked” in a package. But it’s also what requires us to check for invalidations, and what makes package loading slow.
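A tiny illustration of that guarantee in action (all names made up): redefining a method invalidates previously compiled callers, so the next call behaves exactly as it would in a fresh session.

```julia
# g() is compiled against the only existing method of describe_thing().
describe_thing(x) = "something generic"
g() = describe_thing(1)
g()                           # "something generic"

# Adding a more specific method invalidates the compiled code for g();
# Julia recompiles it so dispatch matches a from-scratch session.
describe_thing(x::Int) = "an Int"
g()                           # "an Int"
```

Without invalidation, the second call to `g()` would keep returning the stale result, and the outcome would depend on the order in which code was loaded.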