I’ve been hunting down a performance problem in some code, and ran into behaviour of the inference engine that makes no sense to me whatsoever. I’ve managed to reduce it to the following minimal example:
struct Bar{L}
    nt::NTuple{L,Int}
end

function test(bar::Bar{L}) where {L}
    # first(bar.nt) == 0 && nothing
    v = rand(-5:5, L)
    r = ntuple(k -> v[k], Val(L))
    return r
end
The problem is the apparently harmless line that is commented out. It appears to do nothing, right? Well, without it, the function is type-stable, and its output is properly inferred:
@code_warntype test(Bar((1,2,3,4)))
Variables:
  bar <optimized out>
  v <optimized out>
  r <optimized out>
  t_1 <optimized out>
  t_2 <optimized out>
  t_3 <optimized out>
  t_4 <optimized out>
Body:
  ...
end::NTuple{4,Int64}
However, if I now uncomment the silly nothing line, test becomes type-unstable and very slow:
@code_warntype test(Bar((1,2,3,4)))
Variables:
  bar::Bar{4}
  #11::getfield(, Symbol("##11#12"))
  v <optimized out>
  r <optimized out>
  t_1::Any
  t_2::Any
  t_3::Any
  t_4::Any
Body:
  ...
end::NTuple{4,Any}
Any clue what might be going on here? Any suggestion is appreciated! This cropped up when including @assert statements or checks in a function. My solution for the moment is to comment out the checks, but I would like to have them back :-).
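For what it’s worth, one workaround I’m experimenting with (assuming this is the closure-capture boxing described in Julia issue #15276, where a variable captured by a closure loses its concrete type) is to rebind the captured variable in a let block right around the closure. This is just a sketch of that idea applied to my example above, with a hypothetical name test_let:

```julia
struct Bar{L}
    nt::NTuple{L,Int}
end

function test_let(bar::Bar{L}) where {L}
    first(bar.nt) == 0 && nothing
    v = rand(-5:5, L)
    # Rebinding v in a let block gives the closure its own local binding
    # to capture, which is never reassigned, so it should not get boxed
    # and inference can keep its concrete Vector{Int} type.
    r = let v = v
        ntuple(k -> v[k], Val(L))
    end
    return r
end
```

With this version @code_warntype reports a concrete NTuple{4,Int64} return for me, but I’d still like to understand why the harmless-looking check triggers the problem in the first place.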
julia> versioninfo()
Julia Version 0.7.0-DEV.4055
Commit da98033388 (2018-02-22 09:10 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin17.4.0)
  CPU: Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, skylake)
Environment: