I don’t think that is the reason. Even if I load a very large package, where Julia’s startup time plays only an insignificant role, I get similar results:
Benchmark 1: julia +1.10 --startup-file=no -e "using KiteModels"
Time (mean ± σ): 5.530 s ± 0.015 s [User: 5.168 s, System: 0.928 s]
Range (min … max): 5.507 s … 5.549 s 10 runs
Benchmark 2: julia +1.11 --startup-file=no -e "using KiteModels"
Time (mean ± σ): 6.256 s ± 0.015 s [User: 6.307 s, System: 0.515 s]
Range (min … max): 6.222 s … 6.273 s 10 runs
Summary
julia +1.10 --startup-file=no -e "using KiteModels" ran
1.13 ± 0.00 times faster than julia +1.11 --startup-file=no -e "using KiteModels"
In this example, Julia 1.10 loads the package 13% faster than Julia 1.11. Note that this is only the package load time and does not include the pre-compilation time. That said, for me the load time is much more relevant than the pre-compilation time.
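For what it's worth, one way to check that these numbers really are dominated by load time rather than precompilation is to force precompilation first and then time the load in a fresh session. A minimal sketch, assuming KiteModels is installed in the active environment:

```julia
# Run in a fresh `julia +1.10 --startup-file=no --project` session,
# then repeat with `+1.11`.
using Pkg
Pkg.precompile()        # pay any pending precompilation cost up front
@time using KiteModels  # roughly what the hyperfine runs above measure:
                        # loading the already-precompiled package images
```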
Another data point. Time-to-first-gradient with Mooncake + DifferentiationInterface is 40% higher in Julia 1.11 vs 1.10:
> hyperfine --warmup 3 'julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"' 'julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"'
Benchmark 1: julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
Time (mean ± σ): 43.036 s ± 0.159 s [User: 42.316 s, System: 0.651 s]
Range (min … max): 42.892 s … 43.365 s 10 runs
Benchmark 2: julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
Time (mean ± σ): 60.154 s ± 0.400 s [User: 59.122 s, System: 0.876 s]
Range (min … max): 59.707 s … 61.109 s 10 runs
Summary
julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])" ran
1.40 ± 0.01 times faster than julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
Here, plain julia runs Julia 1.11.5. For some reason julia +1.11 fails with "ERROR: 1.11 is not installed. Please run juliaup add 1.11 to install channel or version." I opened an issue about this.
As a side note, computing the first gradient of a basic quadratic function of three variables (3D vector) takes one minute??
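To see where that minute goes, it helps to split it within a single session into package load, first (compiling) gradient call, and subsequent calls. A minimal sketch using the same calls as in the benchmark above:

```julia
@time using Mooncake, DifferentiationInterface  # package load time
f(x) = sum(x .^ 2)
backend = AutoMooncake(config=nothing)
@time gradient(f, backend, [1.0, 2.0, -1.4])    # first call: dominated by compilation
@time gradient(f, backend, [1.0, 2.0, -1.4])    # second call: essentially pure runtime
```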
What I find much worse than the load/compile time is the regression in TTFP (at least in this case), where 1.11 is ~2.5 times slower and 1.12 ~4 times slower than 1.10. And that example uses the exact same command, one that has been pre-compiled with PrecompileTools.
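For readers who haven't used it: PrecompileTools lets a package execute a representative workload at precompile time and cache the resulting code in the package image, roughly like this (a minimal sketch with hypothetical names, not the actual package code):

```julia
module MyPackage

using PrecompileTools

run_command(x) = sum(abs2, x)  # hypothetical stand-in for the real, expensive command

@setup_workload begin
    data = rand(100)           # setup code that should not itself be cached
    @compile_workload begin
        run_command(data)      # compiled during precompilation and stored in the pkgimage
    end
end

end # module
```

The observation above is that even with such a cached workload, the same call still regressed substantially on 1.11 and 1.12.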
Measuring that by subtracting the runtime of the using Base process from the runtime of the using Downloads process (or rather comparing the intervals) assumes that starting and exiting the process takes the same time in both cases. That seems reasonable for startup, because the state is identical, but exit does not happen from the same state and can take arbitrarily long, e.g. while running finalizers: julia --startup-file=no -E "obj = Ref(3); finalizer(x -> Libc.systemsleep(x[]), obj)". No idea how to benchmark exit() in isolation, though.
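One rough workaround, just a sketch rather than a precise measurement: print the in-process elapsed time as the last statement and compare it with the wall time hyperfine reports; the gap is approximately startup plus exit, including any finalizer work.

```julia
# Save as e.g. load_downloads.jl (hypothetical name) and benchmark with
#   hyperfine 'julia --startup-file=no load_downloads.jl'
# The difference between hyperfine's wall time and the printed value is
# roughly the startup + exit overhead (finalizers, atexit hooks, ...).
t0 = time()
using Downloads
println("in-process elapsed: ", round(time() - t0; digits = 3), " s")
```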
Following up on some of the comments here regarding consistent code performance across Julia versions — I wanted to share some observations from our own experience with Rocket.jl, where we still support versions as far back as Julia 1.3.
We’re running exactly the same test suite and code across all supported Julia versions, with the same payload, and we’ve noticed similar patterns to those described by Fons. For example, we’ve seen test runtimes improve significantly between 1.3 and 1.9 — dropping from nearly 5 minutes to about 3.5 minutes. A great improvement! However, starting with 1.11, the performance regresses substantially — test time jumps back to 4.5 minutes, and the nightly build performs even worse than 1.3.
You can see an even more drastic version of this in the test suite for RxInfer.jl, where we support Julia 1.10 and above. There, the jump from 1.10 to 1.11 is dramatic.
Seeing test runtimes go from ~17 minutes to ~23 minutes is quite concerning. The logs suggest that most of this additional time is spent in compilation too.
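One way to quantify the compilation share on a given Julia version is to run the test suite in-process, so that @time can report its "% compilation time" figure (printed since Julia 1.6). A rough sketch, assuming you are in a local clone of the package and the test-only dependencies are available in the active environment (otherwise something like TestEnv.jl is needed):

```julia
# Start with: julia --startup-file=no --project=.
using Pkg
Pkg.instantiate()                  # make sure dependencies are present
@time include("test/runtests.jl")  # reports total time and the fraction spent compiling
```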