Potential performance regressions in Julia 1.8 for special un-precompiled type dispatches and how to fix them

The only effect in Julia v1.8 that I’m aware of is that the loading time of packages may increase a bit because more code is cached, which should, however, result in overall slightly faster TTFX (time to first X). I’ve seen this in a few small packages.

What you’re observing instead sounds similar to

Can you see if https://github.com/JuliaLang/julia/pull/46366 helps with loading time? Related issue (also opened by Marius): Large number of invalidations by this package and seems to really slow down certain jll loads · Issue #77 · SciML/Static.jl · GitHub

2 Likes

First of all: I’m writing a whole blog post on this, and there’s lots of work to do, so I’ll fill in all the details there. For now, the short version. But see:

[image: no_jit_lag]

That’s what load times are like now on OrdinaryDiffEq v6.24, so the starting point of this thread is a bit misleading for someone who doesn’t know what they are reading. Time to first solution is down an order of magnitude, and it now fully precompiles in a system image. Now, I don’t think you did that on purpose. You saw this with Trixi.jl, and your statements are true at the same time as what I just showed is true. The question is how to reconcile both, and what to do about it.

I’m not going to address the runtime stuff because the 7% is just pure Julia changes, probably inlining and the effects system.

The real thing is time to first solution and package load time. The issue is that everything now fully precompiles on “standard types”, so “most people” have much lower first-solve times. Standard types being Vector{Float64} for u0, Float64 for tspan, and NullParameters or Vector{Float64} for the parameters. I don’t think anyone will disagree that this means almost all users (more than 99%) will experience a faster first solution. But what it also means is that everyone pays to get those precompiled. Trixi.jl uses its own parameter type, and thus it bypasses this system, and hence you see the increased precompile time and load time without the benefit.
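As a toy illustration of the mechanism (all names below are made up; this is not OrdinaryDiffEq’s actual precompile code): `precompile` caches compiled code only for the exact concrete signatures the workload covers, so a custom parameter type misses the cache entirely:

```julia
# Toy sketch, not OrdinaryDiffEq internals: a "package" that
# precompiles its solver only for the standard types.
module ToySolver

struct NullParameters end

solve_toy(u0, tspan, p) = sum(u0) * (tspan[2] - tspan[1])

# Cache compiled code for the common concrete signatures:
precompile(solve_toy, (Vector{Float64}, Tuple{Float64, Float64}, NullParameters))
precompile(solve_toy, (Vector{Float64}, Tuple{Float64, Float64}, Vector{Float64}))

end # module

# A custom parameter type (as Trixi.jl effectively uses) matches none
# of the cached signatures, so its first call pays full compilation,
# while the package still paid the cost of caching the standard ones:
struct MyParams
    c::Float64
end
ToySolver.solve_toy([1.0, 2.0], (0.0, 1.0), MyParams(0.5))  # == 3.0
```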

The answer to this is multi-fold.

One, use the preferences system to determine what to precompile. For now, if you just comment out:

The load times should go back down. Please test this. If so, then a preference to disable that precompile workload is the solution.
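A sketch of what such a preference gate could look like inside the package (this assumes Preferences.jl, and the preference name `precompile_workloads` is made up, not an existing OrdinaryDiffEq option):

```julia
using Preferences

# Hypothetical preference, read at precompile time; defaults to on.
const PRECOMPILE_WORKLOADS = @load_preference("precompile_workloads", true)

if PRECOMPILE_WORKLOADS
    # ... run the representative solves on the standard types here ...
end
```

A downstream project like Trixi.jl could then opt out with something like `set_preferences!(OrdinaryDiffEq, "precompile_workloads" => false)` or a LocalPreferences.toml entry, and only pay for the precompilation it benefits from.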

Two, I have been calling for help to set up upstream SnoopPrecompile workloads, so please help. For example, RecursiveFactorization.jl/RecursiveFactorization.jl at v0.2.11 · JuliaLinearAlgebra/RecursiveFactorization.jl · GitHub: that should be changed to use SnoopPrecompile, etc. That will reduce the number of repeated compilations and reduce the precompile and load times overall. Every upstream package should probably get a small representative workload snooped.
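For reference, the SnoopPrecompile pattern itself is tiny; a hedged sketch (the workload below is a placeholder, in RecursiveFactorization.jl it would be a small representative factorization instead):

```julia
using SnoopPrecompile

@precompile_all_calls begin
    # Placeholder workload: everything reached from here, including
    # runtime-dispatched callees, gets compiled and cached in this
    # package's precompile file. (The block only actually executes
    # during package precompilation.)
    A = ones(8, 8)
    sum(A * A)
end
```

The win over plain `precompile` statements is that uninferred and runtime-dispatched calls get cached too.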

Three, see the invalidation report from this week.

The convert overloads are already taken care of, but the Static.jl ones are ongoing.

The big ones are:

Please help with the second one by writing a Cassette pass for LoopVectorization that does function replacement of ! with static_!, so that ! does not need to be overloaded. That would remove most of the recompilation.

Also, it would be helpful to see a representative Trixi invalidation report if you can generate one. Just do exactly what that DifferentialEquations.jl issue does, and share what the top 10 or so invalidators are.

9 Likes

Looking at invalidations, Static.jl with !(::False) is currently the worst when using Trixi.jl. This should hopefully be fixed by Remove invalidating `!` overloads by ChrisRackauckas · Pull Request #78 · SciML/Static.jl · GitHub.

julia> using SnoopCompileCore

julia> invalidations = @snoopr begin
           using Trixi
           trixi_include(default_example())
       end

julia> trees = invalidation_trees(invalidations);

Edit: But also see the list of PRs fixing related invalidations in the post below.

1 Like

Okay, so I went a bit invalidation hunting this morning:

  • Static.jl invalidates quite a lot, see
    fix invalidations in logging by ranocha · Pull Request #46481 · JuliaLang/julia · GitHub,
    fix invalidations for Dicts from Static.jl by ranocha · Pull Request #46490 · JuliaLang/julia · GitHub,
    fix invalidations in sort! from Static.jl by ranocha · Pull Request #46491 · JuliaLang/julia · GitHub,
    fix invalidations of `isinf` from Static.jl by ranocha · Pull Request #46493 · JuliaLang/julia · GitHub,
    fix invalidations in REPLCompletions from Static.jl by ranocha · Pull Request #46494 · JuliaLang/julia · GitHub,
    fix invalidations from Static.jl by ranocha · Pull Request #140 · JuliaIO/Tar.jl · GitHub,
    fix API invalidations from Static.jl by ranocha · Pull Request #3179 · JuliaLang/Pkg.jl · GitHub
  • Unrolled.jl invalidates a bunch of stuff, see Unrolled.jl invalidates quite a lot · Issue #12 · cstjean/Unrolled.jl · GitHub
  • HDF5.jl invalidates a bunch of stuff in the REPL: hopefully fix invalidations of REPL from HDF5.jl by ranocha · Pull Request #46486 · JuliaLang/julia · GitHub
  • ChainRulesCore.jl invalidates a bunch of stuff, see Invalidations from ChainRulesCore Tangent overload on Tail · Issue #576 · JuliaDiff/ChainRulesCore.jl · GitHub
  • FixedPointNumbers.jl invalidates quite a bit, in particular from LoopVectorization calling sum(::Vector{Any})
    inserting reduce_first(::typeof(Base.add_sum), x::FixedPointNumbers.FixedPoint) in FixedPointNumbers at ~/.julia/packages/FixedPointNumbers/HAGk2/src/FixedPointNumbers.jl:295 invalidated:
      backedges: 1: superseding reduce_first(::typeof(Base.add_sum), x) in Base at reduce.jl:394 with MethodInstance for Base.reduce_first(::typeof(Base.add_sum), ::Any) (309 children)
    
  • ArrayInterface.jl invalidates parts of Tar.jl: hopefully fix invalidations from ArrayInterface.jl by ranocha · Pull Request #138 · JuliaIO/Tar.jl · GitHub, hopefully fix more invalidations by ranocha · Pull Request #139 · JuliaIO/Tar.jl · GitHub
  • OrderedCollections.jl invalidates quite a bit
    inserting convert(::Type{OrderedCollections.OrderedDict{K, V}}, d::OrderedCollections.OrderedDict{K, V}) where {K, V} in OrderedCollections at ~/.julia/packages/OrderedCollections/PRayh/src/ordered_dict.jl:110 invalidated:
     backedges: 1: superseding convert(::Type{T}, x::AbstractDict) where T<:AbstractDict in Base at abstractdict.jl:561 with MethodInstance for convert(::Type, ::AbstractDict) (134 children)
    
  • LoopVectorization.jl invalidates some code in HDF5.jl and indexing:
    inserting convert(::Type{T}, i::LoopVectorization.UpperBoundedInteger) where T<:Number in LoopVectorization at /home/hendrik/.julia/packages/LoopVectorization/e7fJe/src/reconstruct_loopset.jl:25 invalidated:
     backedges: 1: superseding convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 with MethodInstance for convert(::Type{UInt64}, ::Integer) (3 children)
                2: superseding convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 with MethodInstance for convert(::Type{Int64}, ::Integer) (17 children)
                3: superseding convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 with MethodInstance for convert(::Type{Int32}, ::Integer) (99 children)
     17 mt_cache
    
  • GeometryBasics.jl also invalidates code in HDF5.jl etc.
    inserting convert(::Type{IT}, x::GeometryBasics.OffsetInteger) where IT<:Integer in GeometryBasics at /home/hendrik/.julia/packages/GeometryBasics/5Sb5M/src/offsetintegers.jl:40 invalidated:
     mt_backedges: 1: signature convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 (formerly convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7) triggered MethodInstance for Colors._precompile_() (1 children)
                   2: signature convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 (formerly convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7) triggered MethodInstance for parse(::Type{ColorTypes.RGB{FixedPointNumbers.N0f8}}, ::String) (1 children)
                   3: signature convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 (formerly convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7) triggered MethodInstance for Colors._parse_colorant(::String) (1 children)
     backedges: 1: superseding convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 with MethodInstance for convert(::Type{Int64}, ::Integer) (9 children)
                2: superseding convert(::Type{T}, x::Number) where T<:Number in Base at number.jl:7 with MethodInstance for convert(::Type{Int32}, ::Integer) (91 children)
    
  • ForwardDiff.jl invalidates some string code: fix type instability/invalidations from `nextind` by ranocha · Pull Request #46489 · JuliaLang/julia · GitHub
  • StaticArrays.jl also invalidates quite a bit, e.g.,
    inserting similar(::Type{A}, shape::Union{Tuple{SOneTo, Vararg{Union{Integer, Base.OneTo, SOneTo}}}, Tuple{Union{Integer, Base.OneTo}, SOneTo, Vararg{Union{Integer, Base.OneTo, SOneTo}}}, Tuple{Union{Integer, Base.OneTo}, Union{Integer, Base.OneTo}, SOneTo, Vararg{Union{Integer, Base.OneTo, SOneTo}}}}) where A<:AbstractArray in StaticArrays at ~/.julia/packages/StaticArrays/68nRv/src/abstractarray.jl:156 invalidated:
     mt_backedges: 1: signature Tuple{typeof(similar), Type{Array{Union{Int64, Symbol}, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Union{Int64, Symbol}, _A}}, ::Union{Integer, AbstractUnitRange}) where _A (0 children)
                   2: signature Tuple{typeof(similar), Type{Array{Union{Int64, Symbol}, _A}} where _A, Any} triggered MethodInstance for Base._array_for(::Type{Union{Int64, Symbol}}, ::Base.HasShape, ::Any) (0 children)
                   3: signature Tuple{typeof(similar), Type{Array{Any, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Any, _A}}, ::Union{Integer, AbstractUnitRange}) where _A (0 children)
                   4: signature Tuple{typeof(similar), Type{Array{Any, _A}} where _A, Any} triggered MethodInstance for Base._array_for(::Type{Any}, ::Base.HasShape, ::Any) (0 children)
                   5: signature Tuple{typeof(similar), Type{Array{Base.PkgId, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Base.PkgId, _A}}, ::Union{Integer, AbstractUnitRange}) where _A (0 children)
                   6: signature Tuple{typeof(similar), Type{Array{Base.PkgId, _A}} where _A, Any} triggered MethodInstance for Base._array_for(::Type{Base.PkgId}, ::Base.HasShape, ::Any) (0 children)
                   7: signature Tuple{typeof(similar), Type{Array{Union{Int64, Symbol}, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Union{Int64, Symbol}, _A}}, ::Tuple{Union{Integer, Base.OneTo}}) where _A (9 children)
                   8: signature Tuple{typeof(similar), Type{Array{Base.PkgId, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Base.PkgId, _A}}, ::Tuple{Union{Integer, Base.OneTo}}) where _A (9 children)
                   9: signature Tuple{typeof(similar), Type{Array{Any, _A}} where _A, Tuple{Union{Integer, AbstractUnitRange}}} triggered MethodInstance for similar(::Type{Array{Any, _A}}, ::Tuple{Union{Integer, Base.OneTo}}) where _A (136 children)
    
  • There are of course more invalidations, but they seem to be less severe, e.g.,
    hopefully fix invalidations in API from AbstractFFTs by ranocha · Pull Request #3180 · JuliaLang/Pkg.jl · GitHub

However, I would have expected that these invalidations also happen with Julia v1.7 - or am I missing something?

14 Likes

There are two facts that collide to make this matter more now. One is:

This means that if a precompile is missing in package X that is used/needed to precompile a call in package Y, it will now be precompiled under the ownership of package Y. This has two effects: one is that more precompilation will happen; two is that if package Z also needs the missing precompile, then package Y and package Z will each precompile their own version of the call from package X.

The extra precompilation will increase load times but normally decrease first-solve time, if types tend to match etc. But it will increase load times even more if a call is precompiled multiple times. The solution then is to try to precompile “what we know is needed” in package X, and use the system of external precompilation as sparingly as possible. It’s required to make things work (for example, Base misses precompilation of Vector(::UndefInitializer, ::Tuple), so oops, you might need that), but don’t over-rely on it.
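A minimal sketch of what precompiling “what we know is needed” in package X looks like in practice (toy names, not from any real package):

```julia
module PackageX

export meanabs

meanabs(x::AbstractVector) = sum(abs, x) / length(x)

# Explicit precompile directives: these specializations are cached
# under *this* package's ownership, so dependent packages Y and Z
# don't each re-derive and store duplicate copies of them.
precompile(meanabs, (Vector{Float64},))
precompile(meanabs, (Vector{Int},))

end # module
```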

The next is how SnoopPrecompile changes the game:

The main fact is that uninferred calls can now do precompilation effectively. Go back and re-read this issue in full:

That was the old situation. The issue was: if we could get inference to happen higher up, then precompilation would happen in RecursiveFactorization.jl, and that would bring the first solve time with implicit methods from 22 seconds down to 3. Now with SnoopPrecompile, the pre-change version probably already hits 3 (on the current release, it’s now 0.5 seconds, BTW).

Basically, this means a lot more gets precompiled. But because a lot more is precompiled, duplicated precompilation hurts more, and invalidating functions hurts even more. If you do @time using OrdinaryDiffEq, the really important stat is that 75% of the time is recompilation. That is invalidations taking what was precompiled and throwing it away, because loading a different package (Static.jl, LoopVectorization.jl) invalidates the precompiled version.

So in the end, a lot more gets precompiled, so the load time is increased (because of the ownership change and the non-inferred help). This does give a major improvement in the first solve time, but it increases using time, which then explodes because invalidations throw away more than a majority of that precompile work.

Therefore, invalidations matter a whole lot more now. It’s time to fix as much as we can there.

3 Likes

Yeah, right, that explains (at least a part of) this.

Out of curiosity: Are invalidation fixes usually backported (to release-1.8 in this case) or do we have to wait for Julia v1.9?

But do we have tools that can be used, for example in CI, to make sure invalidations aren’t brought back again in the future? Waiting for someone pissed off enough to hunt down all the invalidations can work once, but it isn’t very sustainable in the long run.

13 Likes

I think at a package level, one may assert that a PR doesn’t add invalidations. See e.g. https://github.com/JuliaArrays/OffsetArrays.jl/blob/master/.github/workflows/invalidations.yml
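Stripped of the CI plumbing, the core of such a check is just a snoop plus a budget. A rough sketch assuming SnoopCompileCore/SnoopCompile are available (the threshold of 50 is made up, and LinearAlgebra stands in for the package under test):

```julia
using SnoopCompileCore
# In CI this would be `using YourPackage`; LinearAlgebra is a stand-in.
invalidations = @snoopr using LinearAlgebra
using SnoopCompile  # load only after snooping, so it doesn't pollute the data

# Fail the job if more MethodInstances were invalidated than an
# (arbitrary) budget allows:
n = length(uinvalidated(invalidations))
n <= 50 || error("too many invalidations: $n")
```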

8 Likes

First of all, thanks to everyone who offered some helpful suggestions!

Unfortunately, no. When I try to do using Trixi with the nightly build 36aab14a97, Julia segfaults with

[1693377] signal (11): Segmentation fault
in expression starting at REPL[1]:1
ijl_array_del_end at /cache/build/default-amdci5-3/julialang/julia-master/src/array.c:1144
jl_insert_method_instances at /cache/build/default-amdci5-3/julialang/julia-master/src/dump.c:2379 [inlined]
_jl_restore_incremental at /cache/build/default-amdci5-3/julialang/julia-master/src/dump.c:3273
ijl_restore_incremental at /cache/build/default-amdci5-3/julialang/julia-master/src/dump.c:3333
_include_from_serialized at ./loading.jl:867
_require_search_from_serialized at ./loading.jl:1099
_require at ./loading.jl:1378
_require_prelocked at ./loading.jl:1260
macro expansion at ./loading.jl:1240 [inlined]
macro expansion at ./lock.jl:267 [inlined]
require at ./loading.jl:1204
jfptr_require_50250.clone_1 at /mnt/hd1/opt/julia/nightly-20220825-36aab14a97/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2447 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2629
jl_apply at /cache/build/default-amdci5-3/julialang/julia-master/src/julia.h:1854 [inlined]
call_require at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:466 [inlined]
eval_import_path at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:503
jl_toplevel_eval_flex at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:731
eval_body at /cache/build/default-amdci5-3/julialang/julia-master/src/interpreter.c:561
eval_body at /cache/build/default-amdci5-3/julialang/julia-master/src/interpreter.c:522
jl_interpret_toplevel_thunk at /cache/build/default-amdci5-3/julialang/julia-master/src/interpreter.c:751
jl_toplevel_eval_flex at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:912
jl_toplevel_eval_flex at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:856
ijl_toplevel_eval_in at /cache/build/default-amdci5-3/julialang/julia-master/src/toplevel.c:971
eval at ./boot.jl:370 [inlined]
eval_user_input at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:152
repl_backend_loop at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:248
#start_repl_backend#46 at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:233
start_repl_backend##kw at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:230 [inlined]
#run_repl#59 at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:372
run_repl at /cache/build/default-amdci5-3/julialang/julia-master/usr/share/julia/stdlib/v1.9/REPL/src/REPL.jl:357
jfptr_run_repl_57495.clone_1 at /mnt/hd1/opt/julia/nightly-20220825-36aab14a97/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2447 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2629
#1007 at ./client.jl:413
jfptr_YY.1007_36884.clone_1 at /mnt/hd1/opt/julia/nightly-20220825-36aab14a97/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2447 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2629
jl_apply at /cache/build/default-amdci5-3/julialang/julia-master/src/julia.h:1854 [inlined]
jl_f__call_latest at /cache/build/default-amdci5-3/julialang/julia-master/src/builtins.c:774
#invokelatest#2 at ./essentials.jl:810 [inlined]
invokelatest at ./essentials.jl:807 [inlined]
run_main_repl at ./client.jl:397
exec_options at ./client.jl:314
_start at ./client.jl:514
jfptr__start_30331.clone_1 at /mnt/hd1/opt/julia/nightly-20220825-36aab14a97/lib/julia/sys.so (unknown line)
_jl_invoke at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2447 [inlined]
ijl_apply_generic at /cache/build/default-amdci5-3/julialang/julia-master/src/gf.c:2629
jl_apply at /cache/build/default-amdci5-3/julialang/julia-master/src/julia.h:1854 [inlined]
true_main at /cache/build/default-amdci5-3/julialang/julia-master/src/jlapi.c:567
jl_repl_entrypoint at /cache/build/default-amdci5-3/julialang/julia-master/src/jlapi.c:711
main at julia-nightly-20220825-36aab14a97 (unknown line)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x401098)
Allocations: 22294390 (Pool: 22283733; Big: 10657); GC: 12
Segmentation fault (core dumped)

Just to re-emphasize this from my original post: I do not mean to criticize any particular package.

However, at the moment your description does not match our observations from (what I would call) a regular user’s perspective: if we install OrdinaryDiffEq and Trixi into a fresh depot on a standard Linux machine, then the package installation time, the package loading time, and the compilation time all go up from 1.7.3 to 1.8.0.

These times do not just represent “convenience” issues for us: longer package installation times mean increased development times due to higher CI wait times. Longer loading times mean it is harder to use these packages for quick demonstrations or for live experimentation when teaching university courses. Longer compilation times are problematic when running parallel jobs on supercomputers.

Actually, this is IMHO probably the biggest issue of all: I would assume that a language designed for high performance will, unless specifically announced, only ever improve execution performance with each new release. Thus, these measurements were at least a surprise to us. I would be very interested in hearing from others whether this regression has been observed for other use cases as well. Since we use Julia as an HPC language, a 7% regression in execution speed is non-negligible.

They improve, but unfortunately not completely back to Julia 1.7.3 levels:

P.S.: It seems like the title of my post was unilaterally changed by someone to something different from what I wrote. I think this is somewhat rude, especially since the title now does not fully reflect my original intent anymore (in my opinion, the performance regressions are not only a question of type dispatches, given that compilation and execution performance are affected as well).

4 Likes

Note that https://github.com/JuliaLang/julia/pull/46366 isn’t merged, so you’d have to compile julia yourself (maybe applying that patch on top of v1.8.0 tag, to minimise unrelated changes).

You’re missing the huge caveat and the whole point. Your statement only holds for cases like Trixi.jl where special parameter types are used. That’s a huge deal. The first solve time is dramatically lower if that’s not the case. That’s pretty clear from the measurements. Regular users use Vector{Float64} or nothing for the parameters: if you don’t believe that’s the case, please provide evidence (I can tell you that from thousands of Discourse posts, more than 99% of them are in this case!). Yes, for the cases that people post about less than 1% of the time, which happen to be the cases that you are looking at all of the time, there is this problem. I understand that Trixi sees increased compilation time from the v1.8 changes and increased invalidations, but there is no reason to go doom and gloom beyond what’s actually true. Recognizing this fact is what will lead us to the real issues and the real solutions.

That is convenience. I’m sorry it’s now less convenient, but we will fix this. I need your help categorizing and profiling your case in order to do this efficiently, though; my compute resources are swamped trying to survey the possible cases.

Note that you can cache the precompilation (or do a cached system image build), which would remove this, so for CI infrastructure there are some easy fixes. If you need it, we can also get you some more CI compute on the AMDCI machines. In fact, I’m curious whether @giordano has any build scripts that do a single precompilation step for a multi-group CI test.

Yes, but that’s a completely different thread. Please create a separate thread on this: it’s different profiles, different causes and effects, etc. It would just be confusing to address it here, because it has nothing to do with the compile times, which are a whole discussion of their own. As I said, I think it’s due to the effects system; we can take a look in a thread with profiles and everything, but putting all of that into a thread about precompilation would be unreadable, so it’s best to keep two completely separate topics in separate threads. I’d be happy to dig into this with you, but that’s 25+ posts with images etc. on its own. Handling this precompilation topic is already long.

The CI builds have prebuilt artifacts you can download. I just learned that the other day: it’s so much easier :sweat_smile:.

I did that and I’ll take full responsibility. I don’t think it’s rude, because someone who finds this thread will find 13 in-depth posts about the v1.8 precompilation changes and how they adversely affect the special type cases not covered by the package snooping. They will find nothing about runtime changes in v1.8, which is a completely separate topic. Keeping things organized and searchable is helpful. But again, there’s no reason not to discuss v1.8 runtime changes; it’s just a separate thread, and a tangent in a discussion about precompilation.

3 Likes

Generally these aren’t cases that just come and go. The Static.jl, LoopVectorization.jl, Symbolics.jl, and ChainRulesCore.jl invalidations have been there for a long time. It’s just that precompilation never really did much, so no one really cared.

It would be good to add invalidation testing to the infrastructure somehow, but solving the root causes in the core packages that cause the vast majority of issues is relatively maintainable. There just aren’t that many packages that are used by the majority of Julia users and which happen to overload something like !.
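To make the ! point concrete: the whole hazard can be a single line. A toy version (this is not Static.jl’s actual definition):

```julia
# Toy illustration of the hazard, not Static.jl's real code:
struct StaticFalse end

# Adding a method to a ubiquitous Base function invalidates previously
# compiled code whose call sites only inferred `!` loosely (e.g. for an
# abstract or Any-typed argument), because the new method might now be
# the one that applies there.
Base.:!(::StaticFalse) = true
```

This is exactly the kind of overload that the Static.jl PR linked earlier in the thread removes.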

1 Like

Can you share the recompilation percentage on v1.8 that you’re seeing? @time on the using should show that.

Also, share the output of @time_imports using Trixi

Sure, “brought back” wasn’t the best word choice; I really meant how to avoid introducing new ones in the future. It looks like the script suggested by Jishnu above can help with that.

Oh, I see what you were asking for now. Yeah, the script can work, but it’s overly sensitive. “Most” invalidations don’t mean very much, so getting a red test from adding one invalidation to a high-level call is a bit too conservative. You almost want to pair it with a cost model of precompilation. We might adopt it in SciML if someone would help us roll the script out across 100 repos, though there would need to be some judgement calls made on it (the output would help make said judgements, though!).

To keep people up to date: We distributed an updated variant of the script mentioned above to all SciML repos and the Trixi framework. I also made quite a few PRs to other basic packages in the ecosystem. Let’s see how things evolve from here on and let’s work together to fix invalidations!

8 Likes

And there was a major change to the function wrapping for late wrapping that should help first-solve times for all downstream users, even Trixi, if Trixi adds a SnoopPrecompile workload. That’ll get a write-up soon. Also, other changes like A bunch of ambiguity fixes by ChrisRackauckas · Pull Request #1753 · SciML/OrdinaryDiffEq.jl · GitHub

2 Likes

Regarding the output format for the invalidation reports, we’ve incorporated into ReportMetrics.jl a relatively generic way of creating tables (e.g., below) that are similar to the plots made (e.g., here) but with more information (file / line number / method names). The script looks like:

using SnoopCompileCore
invalidations = @snoopr begin
    # load packages & do representative work
    nothing
end;
import ReportMetrics
ReportMetrics.report_invalidations(;
    job_name = "invalidations",
    invalidations,
    process_filename = x -> last(split(x, "packages/")),
)

And the output table looks like, for example:

┌─────────────────────────────────────────────────────┬───────────────────┬───────────────┬─────────────────┐
│ <file name>:<line number>                           │    Method Name    │ Invalidations │ Invalidations % │
│                                                     │                   │    Number     │     (xᵢ/∑x)     │
├─────────────────────────────────────────────────────┼───────────────────┼───────────────┼─────────────────┤
│ ChainRulesCore/oBjCg/src/tangent_types/thunks.jl:29 │ ChainRulesCore.== │      179      │       63        │
│ ChainRulesCore/oBjCg/src/tangent_types/thunks.jl:28 │ ChainRulesCore.== │      104      │       36        │
│ ChainRulesCore/oBjCg/src/tangent_arithmetic.jl:105  │ ChainRulesCore.*  │       2       │        1        │
└─────────────────────────────────────────────────────┴───────────────────┴───────────────┴─────────────────┘

Could this be somehow incorporated into the GitHub action for invalidations?

3 Likes