I don’t think that is the reason. Even if I load a very large package, where Julia’s startup time plays only an insignificant role, I get similar results:
```
Benchmark 1: julia +1.10 --startup-file=no -e "using KiteModels"
  Time (mean ± σ):      5.530 s ±  0.015 s    [User: 5.168 s, System: 0.928 s]
  Range (min … max):    5.507 s …  5.549 s    10 runs

Benchmark 2: julia +1.11 --startup-file=no -e "using KiteModels"
  Time (mean ± σ):      6.256 s ±  0.015 s    [User: 6.307 s, System: 0.515 s]
  Range (min … max):    6.222 s …  6.273 s    10 runs

Summary
  julia +1.10 --startup-file=no -e "using KiteModels" ran
    1.13 ± 0.00 times faster than julia +1.11 --startup-file=no -e "using KiteModels"
```
In this example, Julia 1.10 is 13% faster than Julia 1.11. But this is only the package load time and does not include the pre-compilation time. For me, though, the load time is much more relevant than the pre-compilation time.
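To separate the two costs yourself, one option (a minimal sketch; it assumes KiteModels is already installed) is to force precompilation up front and then time the bare load:

```julia
import Pkg

# Precompile first, so the timed load below measures loading only
# (package name follows the example above):
Pkg.precompile("KiteModels")

# Then, ideally in a fresh `julia --startup-file=no` session:
@time using KiteModels
```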
Another data point. Time-to-first-gradient with Mooncake + DifferentiationInterface is 40% higher in Julia 1.11 vs 1.10:
```
> hyperfine --warmup 3 'julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"' 'julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"'
Benchmark 1: julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
  Time (mean ± σ):     43.036 s ±  0.159 s    [User: 42.316 s, System: 0.651 s]
  Range (min … max):   42.892 s … 43.365 s    10 runs

Benchmark 2: julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
  Time (mean ± σ):     60.154 s ±  0.400 s    [User: 59.122 s, System: 0.876 s]
  Range (min … max):   59.707 s … 61.109 s    10 runs

Summary
  julia +1.10 --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])" ran
    1.40 ± 0.01 times faster than julia --startup-file=no -e "using Mooncake, DifferentiationInterface; f(x) = sum(x .^2); gradient(f, AutoMooncake(config=nothing), [1., 2., -1.4])"
```
Here `julia` runs Julia 1.11.5. For some reason `julia +1.11` fails with "ERROR: 1.11 is not installed. Please run juliaup add 1.11 to install channel or version." I opened an issue about this.
As a side note, computing the first gradient of a basic quadratic function of three variables (3D vector) takes one minute??
What I find much worse than load/compile time is the regression in TTFP (at least in this case), where 1.11 is ~2.5× slower and 1.12 ~4× slower than 1.10. And that example is for the exact same command that has been pre-compiled with PrecompileTools.
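For context, the PrecompileTools pattern referred to here looks roughly like this (a minimal sketch; `MyPackage` and the workload body are placeholders):

```julia
module MyPackage  # hypothetical package name

using PrecompileTools

# Calls made inside @compile_workload are compiled while the package itself
# precompiles, so the same calls should hit cached code on first use:
@setup_workload begin
    x = [1.0, 2.0, -1.4]         # setup code runs but is not precompiled
    @compile_workload begin
        sum(abs2, x)             # placeholder for the package's real workload
    end
end

end # module
```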
Measuring that by subtracting the `using Base` process’s runtime from the `using Downloads` process’s runtime (or rather the intervals) assumes that starting and exiting the process take the same time. That seems reasonable for the start because of the identical state, but exiting doesn’t happen from the same state and can take arbitrarily long, e.g. when running finalizers: `julia --startup-file=no -E "obj = Ref(3); finalizer(x -> Libc.systemsleep(x[]), obj)"`. No idea how to benchmark `exit()` in isolation though.
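If only the load itself matters, one workaround (a sketch; it deliberately excludes process startup and teardown rather than modeling them) is to time the `using` inside the process:

```julia
# Measure the load inside the process and print it, so process teardown
# is never part of the number. (It also omits whatever happens before
# this first line runs.)
t0 = time_ns()
using Downloads
println("load time: ", (time_ns() - t0) / 1e9, " s")
```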
Following up on some of the comments here regarding consistent code performance across Julia versions — I wanted to share some observations from our own experience with Rocket.jl, where we still support versions as far back as Julia 1.3.
We’re running exactly the same test suite and code across all supported Julia versions, with the same payload, and we’ve noticed similar patterns to those described by Fons. For example, we’ve seen test runtimes improve significantly between 1.3 and 1.9 — dropping from nearly 5 minutes to about 3.5 minutes. A great improvement! However, starting with 1.11, the performance regresses substantially — test time jumps back to 4.5 minutes, and the nightly build performs even worse than 1.3.
You can see an even more drastic version of this in the test suite for RxInfer.jl, where we support Julia 1.10 and above. There, the jump from 1.10 to 1.11 is dramatic.
Seeing test runtimes go from ~17 minutes to ~23 minutes is quite concerning. The logs suggest that most of this additional time is spent in compilation too.
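One way to check where the extra time goes locally (a sketch using only Base tooling; the include path is an assumption) relies on `@time`, which has reported the share of compilation time since Julia 1.8:

```julia
# @time prints the percentage of elapsed time spent compiling, e.g.
#   12.34 seconds (... allocations: ..., 85.1% compilation time)
@time using Rocket

# The same works for the test suite (path assumed, run from the package root):
@time include("test/runtests.jl")
```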
@fonsp It would be interesting, if possible, to take that into account. If more deps of the packages you are installing require precompilation, the setup time would directly be greater, but hopefully you would get a corresponding boost later on.
@fonsp you are more familiar with your code, but could you maybe add the option to specify a representative workflow for each package? Maybe `plot(sin)` to start with, and then we could add workflows for other packages via PRs?
If there were a base structure where we could just add them, we could all help (see the sketch below).
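To make the idea concrete, a per-package workflow file in such a structure might be as small as this (a sketch; the file layout and naming are assumptions):

```julia
# snippets/Plots/snippet.jl: a representative TTFX workload for Plots.jl
using Plots

# The classic time-to-first-plot example mentioned above:
plot(sin)
```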
That would be amazing!
It seems to me that there’s scope for a “TTFX tests” repository that aggregates TTFX code snippets for popular packages… I’ll happily spin one up if that’s of interest.
It’s ambiguous what this means. There are at least a couple of ways to read it:

1. Across all Julia versions, only one version X of a given package Y is measured. If that version is not compatible with earlier Julia versions, it is omitted. That is what Symbolics.jl seems to do in that graph; its current version requires v1.10+, so its line only starts there.
2. The latest version of a given package Y that is compatible with a given Julia version Z is measured, so the package version varies with Z. That is implied by the notebook:

> Include Julia versions before 1.10. Comparing with old Julia versions might not be valid, since different package versions might be loaded.
That would be amazing! What do you have in mind, a kind of general registry with one (or more) file per package, and the maintainers provide the snippets?
We’d probably want:

- a Project.toml listing the dependencies needed by the snippet, including compat bounds (see the sketch below)
- the source / origin of the snippet (documentation, readme, developer, etc.)
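Such a per-snippet Project.toml could be as simple as this sketch (Plots.jl chosen only as an example; the UUID should be taken from the General registry):

```toml
[deps]
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"

[compat]
Plots = "1"
julia = "1.10"
```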
Essentially, but I’m thinking we might as well make it open for anybody to add snippets. I’ve got an idea around this, and I’ll probably just see about implementing it then sharing it rather than describing it in full here.
I’ve been thinking about the Project.toml/Manifest.toml, however:

- I think each snippet should probably only involve a single package, to remain as focused as possible. Maybe a separate family of multi-package snippets if there’s demand/value.
- Given the different ways you might want to resolve package/dep versions, I feel like adding full manifest information is a bit much. I’m leaning towards just a minimum Julia + package version.
That may not be possible for some of the most used packages, which require a combination of dependencies to do anything at all. Some examples I have in mind (see the sketch after this list):

- DifferentiationInterface.jl goes hand in hand with an autodiff backend like ForwardDiff.jl
- JuMP.jl is useless without a solver like HiGHS.jl
- Turing.jl usually calls Distributions.jl
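For instance, a DifferentiationInterface snippet needs a backend package too; a sketch, modeled on the Mooncake example earlier in the thread:

```julia
using DifferentiationInterface, ForwardDiff

f(x) = sum(abs2, x)

# AutoForwardDiff() is an ADTypes backend re-exported by
# DifferentiationInterface, used the same way as AutoMooncake above:
gradient(f, AutoForwardDiff(), [1.0, 2.0, -1.4])
```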
In addition, many snippets will involve standard libraries like LinearAlgebra.jl or SparseArrays.jl, which might be versioned separately in the future.
That’s why I’m leaning towards at least a Project.toml. Even for your proposal of “just a minimum Julia + package version”, the standard way of encoding this is a Project.toml file.
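A runner could then activate each snippet’s own environment before executing it, so the Project.toml (and its compat bounds) decides which versions get resolved. A sketch, with directory names as assumptions:

```julia
import Pkg

# Resolve and install the snippet's dependencies within its compat bounds:
Pkg.activate("snippets/Plots")
Pkg.instantiate()

# Run the snippet itself:
include("snippets/Plots/snippet.jl")
```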