Profiling compilation time

My code is heavy on parametric types and rather slow to compile. I would like to know which functions that time is being spent on. Is there a tool that reports this?

Barring that, is there a way to force compilation of fun for arg_types without executing it? And is it reasonable to approximate compilation time by running @time @code_native(fun(args...)) (assuming that this particular specialization hasn’t been compiled yet)?
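For concreteness, here is the kind of proxy I have in mind (fun here is just a stand-in):

fun(x) = x + 1

# @code_native compiles fun for these argument types without executing it,
# so timing the first invocation should roughly approximate compilation time.
@time @code_native fun(1)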

precompile ?

https://docs.julialang.org/en/stable/stdlib/base/#Base.precompile
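A minimal sketch of its use (fun is a placeholder):

fun(x) = 2x

precompile(fun, (Int,))   # compiles the specialization fun(::Int) without calling it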


Also take a look at SnoopCompile.jl which makes it easier to create a full precompile script for your package.
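A generated precompile script is essentially a list of calls like the following (a hand-written sketch, not actual SnoopCompile output; the package and function names are made up):

precompile(MyPackage.foo, (Int, Float64))
precompile(MyPackage.bar, (String,))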


I’m facing similar problems; even with precompilation the first instantiation of a type is (relatively) time consuming:

struct A
   x::Int
end
precompile(A, (Int,))   # precompile the constructor for an Int argument
@time A(1)              # first call still pays a noticeable compilation cost
@time A(1)              # second call is fast

The first run takes ~100x longer.

Most likely, precompile(A, (Int,)) only compiles the outer constructor, but it’s the inner constructor that does all the work. See also this issue.

Thank you for the suggestions @greg_plowman and @vchuravy. I turned them into a function to generate a compilation time report.

In my tests, calling precompile on all functions used in some code took longer than just running that code, so I initially thought that this wouldn’t work as a compile-time proxy. But I’m guessing that it’s because precompile takes a tuple of types, which is something the Julia compiler doesn’t handle so well. I “warmed up” precompile for those type tuples to get rid of that time. Code here
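Roughly, the warm-up looks like this (a sketch; dummy stands in for any function taking the same argument types as the function being measured):

dummy(x) = x

precompile(dummy, (Int,))            # warms up precompile itself for the tuple type (Int,)
t = @elapsed precompile(f, (Int,))   # now mostly measures compiling f(::Int)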

I’m not sure that’s the case: defining an explicit inner constructor A(x::Int) = new(x) doesn’t affect the timings.


@yuyichao Can I please ask for your opinion on this? I’d like to measure per-specialization compilation time. I realize that this is a potentially ill-defined quantity, but I would like a reasonable approximation. Consider two functions that take an integer, and that may or may not have been called already.

f(x) = g(x) + 2
g(x) = sin(x) + ...

Is it reasonable to profile them this way?

precompile(f, (Int,))
precompile(g, (Int,))   # warm up the precompilation machinery for these type tuples

f(x) = g(x) + 2         # redefine the functions to clear the specialization cache
g(x) = sin(x) + ...

@show @elapsed(precompile(g, (Int,)))   # measure the callee first
@show @elapsed(precompile(f, (Int,)))   # then the caller, so its time is measured after the callee’s

No, this is not the issue. precompile currently only generates LLVM code. That is the desired behavior for sysimg compilation, but it may not be the desired behavior for package precompilation (where one should not generate LLVM IR, since it is currently not going to be used) or at runtime (where it might be better to generate native code as well).

This will currently not include native code generation time.


Thank you for the explanation.

Is there a way to trigger native code generation? Alternatively, can I reasonably assume that native code generation is negligible, or roughly proportional in time to the rest of the compilation pipeline?

“Is there a way to trigger native code generation?”

No. Ref RFC: Change the behavior of `precompile` depending on current mode by yuyichao · Pull Request #23738 · JuliaLang/julia · GitHub

“Can I reasonably assume that native code generation is negligible?”

No.

“…or roughly proportional in time to the rest of the compilation pipeline?”

Depends.