Inference breaks as a result of a simple no-op?

bug
inference

#1

I’ve been hunting down a performance problem in some code, and ran into inference-engine behaviour that makes no sense to me whatsoever. I’ve managed to reduce it to the following minimal example:

struct Bar{L}
    nt::NTuple{L,Int}
end

function test(bar::Bar{L}) where {L}
    # first(bar.nt) == 0 && nothing
    v = rand(-5:5, L)
    r = ntuple(k -> v[k], Val(L))
    return r
end

The problem is the apparently harmless line that is commented out. It appears to do nothing, right? Well, without it, the function is type-stable and its output is properly inferred:

@code_warntype test(Bar((1,2,3,4)))
Variables:
  bar<optimized out>
  v<optimized out>
  r<optimized out>
  t_1<optimized out>
  t_2<optimized out>
  t_3<optimized out>
  t_4<optimized out>

Body:
   ...
  end::NTuple{4,Int64}

However, if I now uncomment the silly nothing line, test becomes type-unstable and very slow:

@code_warntype test(Bar((1,2,3,4)))
Variables:
  bar::Bar{4}
  #11::getfield(, Symbol("##11#12"))
  v<optimized out>
  r<optimized out>
  t_1::Any
  t_2::Any
  t_3::Any
  t_4::Any

Body:
   ...
  end::NTuple{4,Any}

Any clue what might be going on here? Any suggestion is appreciated! This cropped up when including @assert statements or other checks in a function. My solution for the moment is to comment out the checks, but I would like to have them back :-).

julia> versioninfo()
Julia Version 0.7.0-DEV.4055
Commit da98033388 (2018-02-22 09:10 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin17.4.0)
  CPU: Intel(R) Core(TM) i5-7500 CPU @ 3.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, skylake)
Environment:

#2

It seems like anything that has a short circuit makes it fail:

julia> function test(bar::Bar{L}) where {L}
           true && 2
           v = rand(-5:5, L)
           r = ntuple(k -> v[k], Val(L))
           return r
       end
test (generic function with 1 method)

Please file an issue about this.


#3

But the function is not type stable, is it? You would return either an NTuple{L, Int}
or nothing, which are two different return types. So the best the compiler could do would be to infer Union{NTuple{L, Int}, Nothing} as the return type (which it should indeed be capable of).
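For illustration, a hypothetical variant (not the code from the thread) that does contain a real early return would be inferred as exactly such a small union:

```julia
struct Bar{L}
    nt::NTuple{L,Int}
end

# Hypothetical variant with an actual early return of nothing:
function test_or_nothing(bar::Bar{L}) where {L}
    first(bar.nt) == 0 && return nothing  # early exit: returns nothing
    return bar.nt                         # otherwise: an NTuple{L,Int}
end

# Base.return_types(test_or_nothing, (Bar{4},)) should report a small
# union of NTuple{4,Int64} and Nothing, which the compiler handles fine.
```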


#4

Why would it return nothing? It should return r, which is always an NTuple{L, Int} as far as I can see.


#5

Ah sorry, I thought there was an early return. Probably shouldn’t write before my first coffee…


#6

#7

Well, as pointed out in the issue above, this seems to be just another instance of the hairy #15276 issue, related to inference problems with variables captured in closures. The essence is that as soon as v is captured in a closure by the ntuple line, inference becomes unsure about the type of r. The exact role of the commented-out line is unclear, but something analogous was already noted in the #15276 issue. Excuse the noise!
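For anyone else hitting this: the usual workaround discussed around #15276 is to rebind the captured variable in a let block, so the closure captures a fresh, unconditionally single-assigned local instead of the outer variable. A sketch (untested against this exact Julia build):

```julia
function test(bar::Bar{L}) where {L}
    first(bar.nt) == 0 && nothing   # the check can stay
    v = rand(-5:5, L)
    # Rebind v in a let block; the closure now captures the let-local v,
    # which inference can treat as a plain Vector{Int}.
    r = let v = v
        ntuple(k -> v[k], Val(L))
    end
    return r
end
```

This keeps the @assert-style checks in place while restoring the inferred NTuple{L,Int64} return type, at the cost of a slightly noisier let wrapper around the closure.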