I modified the script to make sure the CSV file is saved as well. I finished doing that 3 minutes ago, came to Discourse to post that the aggregate data is now also available on the repo, and saw that you had already announced it 2 minutes ago.
I wish my students were as magically efficient and quick as this community (they are great anyway, though).
The named versions (alpha, lts, etc.) probably need to be filtered out, or changed to include only the last handful of results, because lts includes all versions that were historically LTS, not just the current one (same for the other named versions).
The median might be better than the mean, because there are a ton of extreme outliers.
If someone gets around to submitting this as a PR, it will be accepted quickly and will show up in the autogenerated plots (see ttfx_snippets_vis.jl).
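For whoever picks this up, a minimal sketch of both suggestions (assuming the CSV has columns named version and time and rows in chronological order; the actual file and column names may differ):

using CSV, DataFrames, Statistics

df = CSV.read("ttfx_aggregate.csv", DataFrame)   # hypothetical file name

# keep only the last handful of rows for each moving specifier
recent = combine(groupby(df, :version)) do g
    g.version[1] in ("lts", "alpha", "nightly") ? last(g, 5) : g
end

# median instead of mean, to be robust against the extreme outliers
stats = combine(groupby(recent, :version), :time => median => :median_time)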
Maybe it would make sense to store commit hashes or something for the changing specifiers? Otherwise it gets a bit complicated to map them back, I guess.
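One way to do that, sketched below: the running Julia can report the commit it was built from, so the benchmark script could store it next to the specifier (the field names here are made up):

commit = Base.GIT_VERSION_INFO.commit    # full SHA of this Julia build
row = (specifier = "nightly", version = string(VERSION), commit = commit)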
They need 26 min with Julia 1.10 and 31.5 min with Julia 1.12, a difference of about 20%. But this is, of course, only one measurement and not necessarily representative.
I just switched our benchmark CI over to 1.12. On 1.12, import time is a 10% improvement, precompilation is a slight regression, and TTFX (compiling an empty kernel) takes 55% longer.
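For reference, that TTFX measurement is roughly the following (a sketch assuming CUDA.jl; the actual CI benchmark surely differs):

@time using CUDA                 # import time
@time @eval begin                # TTFX: define, compile and launch an empty kernel
    empty_kernel() = nothing
    @cuda empty_kernel()
end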
Not this one in particular, but for example #59494, where, with the very helpful contributions of Kristoffer and Tim, the TTFP of the plot function dropped to the amazing:
julia> @time using GMT
0.707362 seconds (1.11 M allocations: 64.533 MiB, 5.06% gc time, 3.01% compilation time)
julia> @time @eval plot(rand(5,2))
0.002897 seconds (721 allocations: 34.281 KiB)
This is very likely the fastest TTFP of a plot() function in Julia. But unfortunately this does not propagate, and a call that is almost identical (the called functions differ by only one line) compiles again. Sure, I can add this one too to the precompile workload, but just that extra line adds 500 kB to the precompiled cache. The size of the precompiled cache is what stops me from adding much more to the PrecompileTools workload. I have code with a couple hundred LOC that adds ~10 MB to the precompile cache. I really don't understand what happens in this land, only that the caches are huge.
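For readers unfamiliar with it, such a workload follows the usual PrecompileTools pattern, which normally lives inside the package module itself (a generic sketch, not GMT's actual workload):

using PrecompileTools, GMT

@setup_workload begin
    data = rand(5, 2)          # setup code itself is not cached
    @compile_workload begin
        plot(data)             # every call exercised here is compiled into
    end                        # the package's precompile cache
end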
What you describe here is probably not directly related to the compile times of packages. I am still looking for a simple, small, reproducible test case that shows a compile-time increase of more than 20%. Then we would have a chance to create an issue that might get addressed.
This is likely influenced by how you specialize on kwargs in your code.
In Makie we try to convert the kwargs to untyped dicts as early in the call chain as possible, so that the inner functions don't specialize on the kwargs.
I'm not sure whether that helps with GMT, but without it you'll certainly get larger precompile times for slightly different kwargs!
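Concretely, the pattern is something like this (the function names are hypothetical):

# public entry point: collapse the kwargs into one untyped container right away
function plot(args...; kwargs...)
    attrs = Dict{Symbol,Any}(kwargs)
    return _plot_impl(args, attrs)
end

# inner functions only ever see Dict{Symbol,Any}, so they are compiled once,
# not once per combination of kwarg names and types
function _plot_impl(args, attrs::Dict{Symbol,Any})
    # ... parsing and plotting work ...
end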
I think what you mean is more or less what I do here.
_common_plot_xyz() is the big function that does most of the parsing work. I still have to keep a Dict{Symbol, Any} because the input can take different forms. But one thing that puzzles me is that other functions that rely almost entirely on calls to this main function still compile sub-functions that should have been precompiled already (at least they do not show up as invalidations with SnoopCompile).
It is not just startup times that are increasing; the runtimes are getting worse too. I have a couple of tests in GeoStatsFunctions.jl that are now failing on Julia v1.12:
CompositeVariogram: Test Failed at /home/runner/work/GeoStatsFunctions.jl/GeoStatsFunctions.jl/test/theoretical/composite.jl:102
Expression: #= /home/runner/work/GeoStatsFunctions.jl/GeoStatsFunctions.jl/test/theoretical/composite.jl:102 =# @elapsed(sill(γ)) < 1.0e-5
Evaluated: 0.007924943 < 1.0e-5
I didn't change a single line of code.
P.S.: the elapsed test is placed after a warmup call, so it is not measuring compilation.
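For reference, the failing check boils down to this pattern (a sketch; the γ here is a hypothetical composite, the real one is built in test/theoretical/composite.jl):

using Test, GeoStatsFunctions

γ = GaussianVariogram() + SphericalVariogram()   # hypothetical composite variogram
sill(γ)                                          # warmup: first call pays compilation
@test @elapsed(sill(γ)) < 1.0e-5                 # second call should be pure runtime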