Intermittent type inference calls during method evaluation

When profiling the first call to a longish method via StatProfilerHTML.jl, I see some type inference happening during evaluation of the method body:

(This is in Julia 1.8-rc1.)

I think this is caused by the method being too long, which makes the compiler give up on inferring the whole method. Is this assumption correct?

My guess is also that storing the MethodInstance is aborted once inference takes too long, so to ensure that more MethodInstances are stored, I just need to avoid methods where type inference happens during evaluation of the body? This could explain why some methods don’t “take” when you try to precompile them, as mentioned by @timholy in Consider `generate_precompile` prior to module close · Issue #38951 · JuliaLang/julia · GitHub

I don’t think so. It looks like some type inference is simply deferred: you’re working on arbitrary Expr values, and it can be beneficial to compile better code for that branch when it is actually reached.
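To illustrate the kind of deferred work meant here, a minimal hypothetical sketch (walk and process are made-up names, not from the profiled code): when iterating e.args of an Expr, the element type is Any, so each call on an element is a runtime dispatch whose target may only be inferred and compiled on first use.

```julia
# Hypothetical sketch of walking an arbitrary Expr.
process(x::Symbol) = 1
process(x::Int) = 2
process(x) = 0

function walk(e::Expr)
    n = 0
    for a in e.args          # eltype(e.args) is Any, so each call below is a runtime dispatch
        n += a isa Expr ? walk(a) : process(a)
    end
    return n
end

walk(:(f(x, 1) + g(y)))     # the process methods reached here are compiled lazily
```

Profiling the first walk over a new shape of Expr would show inference time inside walk, even though walk itself is already compiled.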


I think you’re right about the runtime dispatch, but I’m not so sure about the branch; it doesn’t match what I’ve seen in the profiler. Or by “compile better code for the branch there”, do you mean the function that is dispatched into? As far as I understand, LLVM code is all or nothing: either the complete function body is compiled or it isn’t. There is no partial compilation that is then optimized at runtime.

Based on the following, I would guess that the outer method explore_call! is already compiled to LLVM code, but the target of the runtime dispatch isn’t yet. Before stepping into the runtime dispatch, Julia has to run type inference on that function and then compile it. That’s why the type inference shows up in the explore_call! body: Julia hasn’t yet stepped into the JIT-compiled lower function.

# Warm up the @time/@eval machinery itself:
warmup(x) = x
precompile(warmup, (Int,))
@time @eval warmup(1)

# The inference barrier makes `out::Any`, so `out + 1` is a runtime dispatch:
function f(x)
    out = Base.inferencebarrier(x)
    return out + 1
end
precompile(f, (Int,))
@time @eval f(1)

# Same barrier, but the type assertion restores inferability:
function g(x)
    out = Base.inferencebarrier(x)::Int
    return out + 1
end
precompile(g, (Int,))
@time @eval g(1)

# Fully inferable baseline:
h(x) = x + 1
precompile(h, (Int,))
@time @eval h(1)

# Runtime dispatch into a separate, not-yet-compiled function:
@noinline sub(x) = x

function k(x)
    out = sub(Base.inferencebarrier(x))
    return out + 1
end
precompile(k, (Int,))
@time @eval k(1)
This prints, one line per @time call (warmup, f, g, h, k):

  0.003000 seconds (911 allocations: 55.395 KiB, 89.77% compilation time)
  0.000149 seconds (41 allocations: 1.906 KiB)
  0.000154 seconds (41 allocations: 1.906 KiB)
  0.000145 seconds (41 allocations: 1.906 KiB)
  0.002851 seconds (434 allocations: 29.125 KiB, 92.99% compilation time)

Note in this example that the last call, @time @eval k(1), shows compilation time because sub wasn’t compiled by the precompile call.
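For comparison, a sketch suggesting that precompiling the callee as well should remove most of that compilation time (sub2 and k2 are hypothetical copies of the functions above, renamed to avoid redefinition):

```julia
@noinline sub2(x) = x

function k2(x)
    out = sub2(Base.inferencebarrier(x))
    return out + 1
end

# Precompile both the caller and the dynamically dispatched callee;
# precompiling k2 alone does not cover the dynamic call to sub2.
precompile(k2, (Int,))
precompile(sub2, (Int,))
@time @eval k2(1)  # expected to show little or no compilation time
```

The design point is that precompile only covers the call signatures it can reach through inference; calls behind an inference barrier have to be precompiled separately.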

EDIT: I just realized that even when all types inside a method are known, Julia may still be unable to emit the final LLVM code, because a function that is dispatched into may itself still need to be compiled; that’s why there are separate interpreted and compiled modes. Only once all callee methods are compiled can the links to their compiled code be hardcoded into the LLVM code.
