Colleagues and I are developing Muscade.jl. Muscade deliberately puts a high workload on the Julia 1.11 compiler. I have done nothing about module precompilation so far, so I have a significant TTFX, but that is not our topic today (I'll use …).
What puzzles me is that re-running the same analysis still triggers considerable compilation. TTSX!?! I am studying this with SnoopCompile.jl. My findings are:
- There seem to be zero invalidations on the second run:

  ```julia
  invs = @snoop_invalidations using Muscade, Test, StaticArrays, SparseArrays;
  ```

  returns an empty array.
- Using `@snoop_inference`, I find that most of the recompilation concerns Muscade code (on first execution, quite a few instances of Julia methods were created). The code is mostly linear algebra (StaticArrays) and other maths, specialised for automatic differentiation. However, I am fairly convinced these are recompilations of instances already created on the first run. In my mind, this is supported by the "zero invalidations" finding (inference accounts for 6 seconds of a TTSX of 4 minutes).
- One class of entries in the flattened inference list puzzles me (look at the end of this line):
  ```
  InferenceTiming: 0.010487/0.040255 on (::StaticArrays.var"#280#281"{∂ℝ{3, 1, ∂ℝ{2, 24, ∂ℝ{1, 24, Float64}}}})(∂ℝ{3, 1, ∂ℝ{2, 24, ∂ℝ{1, 24, Float64}}}(∂ℝ{2, 24, ∂ℝ{1, 24, Float64}}(∂ℝ{1, 24, Float64}(0.0, [0.0, 0.0, 0.0, 0.0, 0.0,...
  ```
How should I interpret `StaticArrays.var"#280#281"`?
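To make the question concrete, here is a minimal example (my own toy names, nothing from Muscade or StaticArrays) of how such type names arise in general: closures and anonymous functions get compiler-generated type names of the `var"#N#M"` form, where the numbers vary between sessions:

```julia
# A closure capturing `n`; its type gets a compiler-generated (gensym'd) name.
make_adder(n) = x -> x + n

f = make_adder(3)
println(typeof(f))   # prints something like var"#1#2"{Int64} (numbers vary)
println(f(4))        # 7
```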
`∂ℝ` is my homebrew forward automatic differentiation (I know, but ForwardDiff.jl did not exist yet when I started on this…). But why does `0.0, [0.0, 0.0, 0.0, 0.0, 0.0,...`, the *value* of an automatic-differentiation variable, ever appear in the analysis of a compilation process? Is this a clue to something triggering recompilation, or is it irrelevant to the recompilation issue?
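For reference, I obtain the flattened list roughly like this (a sketch with a toy stand-in workload; the real call is a full Muscade analysis):

```julia
using SnoopCompile          # provides @snoop_inference and flatten

# Toy stand-in for the real analysis call:
workload() = sum(abs2, rand(3))

tinf = @snoop_inference workload()
flat = flatten(tinf)        # vector of InferenceTiming entries, like the one quoted above
foreach(println, flat)
```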
Do you know what triggers recompilation? Are there tools or techniques I could use to understand what triggers it?
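One diagnostic I can think of (again a sketch, with `workload()` a toy stand-in): snoop the same workload twice within a single session. If the second snoop shows (nearly) no inference entries, the method instances are being reused in-session, which would suggest the recompilation I see comes from starting fresh sessions without precompilation:

```julia
using SnoopCompile

workload() = sum(abs2, rand(10))

tinf1 = @snoop_inference workload()   # first run: inference happens
tinf2 = @snoop_inference workload()   # same session: should be nearly free

# flatten() includes the ROOT entry, hence the -1:
println(length(flatten(tinf1)) - 1, " inference entries on run 1")
println(length(flatten(tinf2)) - 1, " inference entries on run 2 (expect ~0)")
```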