# Why is this expression type unstable?

I have the following function, in which I’ve written two return statements that are identical in an input-output sense but differ in syntax and mechanics. I prefer the format of the first one: it sums a generator directly, so there’s less syntax and it reads more clearly. But the first return statement is not type stable until I run the second one! The second return statement is always type stable.

```julia
using Symbolics

function lagrangepolynomial(x::AbstractVector, y::AbstractVector, z::Number = Symbolics.variable(:z))

    n = length(x)

    δⱼ(z,x,j,k) = (z - x[k]) / (x[j] - x[k])
    Pⱼ(z,x,j,n) = prod(δⱼ(z,x,j,k) for k in 1:n if k != j)

    # NOT type stable...
    return sum((y[j] * Pⱼ(z,x,j,n)) for j in 1:n)

    # Type stable (unreachable as written; swap the two returns to use it)
    return sum(map(j -> y[j] * Pⱼ(z,x,j,n), 1:n))

end
```
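
For reference, here is a minimal usage sketch of the function above; the input values are my own illustration, not from the original post:

```julia
using Symbolics

# Illustrative sample points (hypothetical values)
x = [-1.0, 0.0, 1.0]
y = cos.(x)

# With the default third argument, the result is a symbolic expression in z
p = lagrangepolynomial(x, y)

# Passing a number for z evaluates the interpolant at that point instead
lagrangepolynomial(x, y, 0.5)
```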

## Minimum Working Example

```julia
using Symbolics
@variables z

x = -1:0.1:1
y = cos.(x)
n = length(x)

# Both of these are type stable!
δⱼ(z,x,j,k) = (z - x[k]) / (x[j] - x[k])
Pⱼ(z,x,j,n) = prod(δⱼ(z,x,j,k) for k in 1:n if k != j)

L1(z,x,y,n) = sum(y[j] * Pⱼ(z,x,j,n) for j in 1:n)
L2(z,x,y,n) = sum(map(j -> y[j] * Pⱼ(z,x,j,n), 1:n))
```

## The first function is type unstable…

```
julia> @code_warntype L1(z,x,y,n)
MethodInstance for L1(::Num, ::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Vector{Float64}, ::Int64)
from L1(z, x, y, n) in Main at REPL[19]:1
Arguments
#self#::Core.Const(L1)
z::Num
x::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
y::Vector{Float64}
n::Int64
Locals
#25::var"#25#26"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
Body::Any
1 ─ %1  = Main.:(var"#25#26")::Core.Const(var"#25#26")
│   %2  = Core.typeof(z)::Core.Const(Num)
│   %3  = Core.typeof(x)::Core.Const(StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
│   %4  = Core.typeof(y)::Core.Const(Vector{Float64})
│   %5  = Core.typeof(n)::Core.Const(Int64)
│   %6  = Core.apply_type(%1, %2, %3, %4, %5)::Core.Const(var"#25#26"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64})
│         (#25 = %new(%6, z, x, y, n))
│   %8  = #25::var"#25#26"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
│   %9  = (1:n)::Core.PartialStruct(UnitRange{Int64}, Any[Core.Const(1), Int64])
│   %10 = Base.Generator(%8, %9)::Core.PartialStruct(Base.Generator{UnitRange{Int64}, var"#25#26"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}}, Any[var"#25#26"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}, Core.PartialStruct(UnitRange{Int64}, Any[Core.Const(1), Int64])])
│   %11 = Main.sum(%10)::Any
└──       return %11
```

## The second function IS type stable…

```
julia> @code_warntype L2(z,x,y,n)
MethodInstance for L2(::Num, ::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Vector{Float64}, ::Int64)
from L2(z, x, y, n) in Main at REPL[20]:1
Arguments
#self#::Core.Const(L2)
z::Num
x::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
y::Vector{Float64}
n::Int64
Locals
#27::var"#27#28"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
Body::Num
1 ─ %1  = Main.:(var"#27#28")::Core.Const(var"#27#28")
│   %2  = Core.typeof(z)::Core.Const(Num)
│   %3  = Core.typeof(x)::Core.Const(StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
│   %4  = Core.typeof(y)::Core.Const(Vector{Float64})
│   %5  = Core.typeof(n)::Core.Const(Int64)
│   %6  = Core.apply_type(%1, %2, %3, %4, %5)::Core.Const(var"#27#28"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64})
│         (#27 = %new(%6, z, x, y, n))
│   %8  = #27::var"#27#28"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
│   %9  = (1:n)::Core.PartialStruct(UnitRange{Int64}, Any[Core.Const(1), Int64])
│   %10 = Main.map(%8, %9)::Vector{Num}
│   %11 = Main.sum(%10)::Num
└──       return %11
```

## And now, the first function is type stable? Why?

```
julia> @code_warntype L1(z,x,y,n)
MethodInstance for L1(::Num, ::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Vector{Float64}, ::Int64)
from L1(z, x, y, n) in Main at REPL[8]:1
Arguments
#self#::Core.Const(L1)
z::Num
x::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}
y::Vector{Float64}
n::Int64
Locals
#5::var"#5#6"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
Body::Num
1 ─ %1  = Main.:(var"#5#6")::Core.Const(var"#5#6")
│   %2  = Core.typeof(z)::Core.Const(Num)
│   %3  = Core.typeof(x)::Core.Const(StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
│   %4  = Core.typeof(y)::Core.Const(Vector{Float64})
│   %5  = Core.typeof(n)::Core.Const(Int64)
│   %6  = Core.apply_type(%1, %2, %3, %4, %5)::Core.Const(var"#5#6"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64})
│         (#5 = %new(%6, z, x, y, n))
│   %8  = #5::var"#5#6"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}
│   %9  = (1:n)::Core.PartialStruct(UnitRange{Int64}, Any[Core.Const(1), Int64])
│   %10 = Base.Generator(%8, %9)::Core.PartialStruct(Base.Generator{UnitRange{Int64}, var"#5#6"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}}, Any[var"#5#6"{Num, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, Vector{Float64}, Int64}, Core.PartialStruct(UnitRange{Int64}, Any[Core.Const(1), Int64])])
│   %11 = Main.sum(%10)::Num
└──       return %11
```
1. On v1.7.2, this phenomenon also happens when `Symbolics` is omitted and the global `z` is changed to `z=0`, so the problem doesn’t seem to be in `Symbolics`. The rest of my comment is about this `z=0` version.

2. Changing `L2` to use an array comprehension, `sum([ ... ])`, doesn’t change anything, so this looks like a matter of summing over a generator versus over an array (which `map` produces).

3. The same phenomenon also happens if you run any of `Pⱼ(z,x,1,n)`, `L1(z, x, y, n)`, or `L2(z, x, y, n)` instead of `@code_warntype L2(z, x, y, n)` between the two runs of `@code_warntype L1(z,x,y,n)`; a sketch of this warm-up sequence follows this list. Running `δⱼ(z,x,1,3)` was not enough.

4. If you run `Pⱼ(z,x,1,n)`, `L1(z, x, y, n)`, `L2(z, x, y, n)`, or `@code_warntype L2(z, x, y, n)` in a fresh session first, `@code_warntype L1(z,x,y,n)` is type-stable on its first try afterward.

5. These GitHub issues (18695, 43330) seem related, but I couldn’t find anything in them, or in the issues they link to, about `@code_warntype` “changing its mind” like this. I would really highlight that detail; you could even edit your post’s title to mention it. You could also open a new GitHub issue and see where it goes.
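
Here is a minimal sketch of the warm-up sequence from points 3 and 4, using the non-symbolic `z = 0` setup from point 1 and starting from a fresh session:

```julia
using InteractiveUtils   # for @code_warntype (already loaded at the REPL)

# Symbolics omitted, global z replaced by 0 (point 1 above)
z = 0
x = -1:0.1:1
y = cos.(x)
n = length(x)

δⱼ(z,x,j,k) = (z - x[k]) / (x[j] - x[k])
Pⱼ(z,x,j,n) = prod(δⱼ(z,x,j,k) for k in 1:n if k != j)
L1(z,x,y,n) = sum(y[j] * Pⱼ(z,x,j,n) for j in 1:n)

@code_warntype L1(z, x, y, n)   # first run: Body::Any (type unstable)
Pⱼ(z, x, 1, n)                  # run Pⱼ once (point 3); L1 or L2 also work
@code_warntype L1(z, x, y, n)   # second run: now reports a concrete Body
```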

Following up on work done in another thread about inconsistent inference, I also found:

1. If `@code_warntype L1(z,x,y,n)` reports type stability and you then redefine either `δⱼ` or `Pⱼ`, the next `@code_warntype L1(z,x,y,n)` will report type instability again. This destabilizing effect does NOT occur if you redefine `L1` or `L2`.

2. `Test.@inferred L1(z, x, y, n)` can throw `ERROR: return type Float64 does not match inferred return type Any` even when `@code_warntype` infers a `Float64` return type. `@inferred` succeeds only if `Pⱼ` is compiled before `L1` is (sketched after this list); switch the order (for example, by only running `L1`) and `@inferred` gets stuck at that error. If it does get stuck, redefining `L1` to “undo” `L1`’s compilation makes it infer successfully again.

3. All of the above is reproducible on v1.4.1, so this isn’t a new regression.
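
A sketch of the `@inferred` behaviour in point 2, again with the non-symbolic `z = 0` setup and run in a fresh session:

```julia
using Test

z = 0
x = -1:0.1:1
y = cos.(x)
n = length(x)

δⱼ(z,x,j,k) = (z - x[k]) / (x[j] - x[k])
Pⱼ(z,x,j,n) = prod(δⱼ(z,x,j,k) for k in 1:n if k != j)
L1(z,x,y,n) = sum(y[j] * Pⱼ(z,x,j,n) for j in 1:n)

Pⱼ(z, x, 1, n)                  # compile Pⱼ before L1 is ever compiled
Test.@inferred L1(z, x, y, n)   # succeeds in this order

# Calling L1 first instead (so L1 compiles before Pⱼ) makes @inferred throw
#   ERROR: return type Float64 does not match inferred return type Any
# until L1 is redefined.
```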

My initial guess was that `Pⱼ` having already been compiled is the critical factor, but I could not claim that `@code_warntype L2(z,x,y,n)` compiles `Pⱼ`, let alone that this is how it triggers `@code_warntype L1(z,x,y,n)` to report type stability. One way to probe this is sketched below.
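
A possible way to check that guess; this uses the third-party MethodAnalysis.jl package and is my own suggestion, not something established in the thread:

```julia
# Assumes MethodAnalysis.jl is installed: ] add MethodAnalysis
using MethodAnalysis

methodinstances(Pⱼ)            # compiled/inferred specializations of Pⱼ so far

@code_warntype L2(z, x, y, n)  # reflection only; does not call L2

methodinstances(Pⱼ)            # if new entries appear, inferring L2 also inferred Pⱼ
```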
