Multiplying by a variable in a constraint causing type instability in @code_warntype

I am trying to figure out how to avoid type instability in a model whose constraints and objective include a variable. I can't seem to avoid a "::Union{Val{false}, GenericAffExpr{Float64,VariableRef}}" even if I assert a type or use an expression. Am I missing an easy way to avoid this type instability?

Here is a simplified version of my code that should simply return the rows of the input array with the three highest values in the second column:

using GLPKMathProgInterface
using JuMP
using GLPK
using BenchmarkTools

function test_warntype(values, num_values)
    m = Model(with_optimizer(GLPK.Optimizer))

    @variable(m, used_values[i=1:num_values], Bin)
    @constraint(m, sum(used_values[i] for i in 1:num_values) == 3)
    @objective(m, Max, sum(values[i,2]*used_values[i] for i in 1:num_values))

    optimize!(m)
    return used_values
end

values = [1.5 4.6
          2.6 7.3
          1.1 3
          3.7 0.3
          4.4 6]

num_values = size(values, 1)  # number of rows

@code_warntype test_warntype(values, num_values)

And here is the relevant part of the @code_warntype output:

Variables
#self#::Core.Compiler.Const(test_warntype, false)
values::Array{Float64,2}
num_values::Int64
m::Model
##76793::Array{VariableRef,1}
used_values::Array{VariableRef,1}
@_7::Union{Nothing, Tuple{Int64,Int64}}
i@_8::Int64
q@_9::Union{Val{false}, GenericAffExpr{Float64,VariableRef}, VariableRef}
@_10::Union{Nothing, Tuple{Int64,Int64}}
##76798::Union{Val{false}, GenericAffExpr{Float64,VariableRef}, VariableRef}
##76797::Union{Float64, GenericAffExpr{Float64,VariableRef}}
##76796::Union{Float64, GenericAffExpr{Float64,VariableRef}}
##76795::ConstraintRef{Model,_A,ScalarShape} where _A
i@_15::Int64
q@_16::Union{Val{false}, GenericAffExpr{Float64,VariableRef}}
@_17::Union{Nothing, Tuple{Int64,Int64}}
##76799::Union{Val{false}, GenericAffExpr{Float64,VariableRef}}
i@_19::Int64

Body::Array{VariableRef,1}


5 ┄─ %52 = @_10::Tuple{Int64,Int64}::Tuple{Int64,Int64}
│ (i@_15 = Core.getfield(%52, 1))
│ %54 = Core.getfield(%52, 2)::Int64
│ %55 = q@_9::Union{Val{false}, GenericAffExpr{Float64,VariableRef}, VariableRef}
│ %56 = Base.getindex(used_values, i@_15)::VariableRef
│ (q@_9 = JuMP._destructive_add_with_reorder!(%55, %56))
│ (@_10 = Base.iterate(%47, %54))
│ %59 = (@_10 === nothing)::Bool
│ %60 = Base.not_int(%59)::Bool
└─── goto #7 if not %60
6 ── goto #5
7 ┄─ (##76798 = q@_9)
│ (##76797 = JuMP._destructive_add_with_reorder!(##76798, -1.0, 3))
│ (##76796 = JuMP._destructive_add_with_reorder!(##76797, 0))
│ %66 = m::Model
│ %67 = JuMP.build_constraint(getfield(JuMP, Symbol("#_error#55")){Symbol}(Core.Box(Any[:m, :(sum((used_values[i] for i = 1:num_values)) == 3)]), :constraint), ##76796, MathOptInterface.EqualTo{Float64}(0.0))::ScalarConstraint{GenericAffExpr{Float64,VariableRef},MathOptInterface.EqualTo{Float64}}
│ (##76795 = JuMP.add_constraint(%66, %67, ""))
│ ##76795
│ JuMP._valid_model(m, :m)
│ Core.NewvarNode(:(##76799))
│ %72 = Core.apply_type(JuMP.Val, false)::Core.Compiler.Const(Val{false}, false)
│ (q@_16 = (%72)())
│ %74 = (1:num_values)::Core.Compiler.PartialStruct(UnitRange{Int64}, Any[Core.Compiler.Const(1, false), Int64])
│ (@_17 = Base.iterate(%74))
│ %76 = (@_17 === nothing)::Bool
│ %77 = Base.not_int(%76)::Bool
└─── goto #10 if not %77
8 ┄─ %79 = @_17::Tuple{Int64,Int64}::Tuple{Int64,Int64}
│ (i@_19 = Core.getfield(%79, 1))
│ %81 = Core.getfield(%79, 2)::Int64
│ %82 = q@_16::Union{Val{false}, GenericAffExpr{Float64,VariableRef}}
│ %83 = Base.getindex(values, i@_19, 2)::Float64
│ %84 = Base.getindex(used_values, i@_19)::VariableRef
│ (q@_16 = JuMP._destructive_add_with_reorder!(%82, %83, %84))
│ (@_17 = Base.iterate(%74, %81))
│ %87 = (@_17 === nothing)::Bool
│ %88 = Base.not_int(%87)::Bool
└─── goto #10 if not %88
9 ── goto #8
10 ┄ (##76799 = q@_16)
│ %92 = m::Model
│ %93 = JuMP._throw_error_for_invalid_sense(getfield(JuMP, Symbol("#_error#57")){Symbol,Tuple{Symbol,Expr}}(:m, (:Max, :(sum(($(Expr(:escape, :(values[i, 2]))) * $(Expr(:escape, :(used_values[i]))) for i = 1:num_values))))), MathOptInterface.MAX_SENSE)::Core.Compiler.Const(MathOptInterface.MAX_SENSE, false)
│ JuMP.set_objective(%92, %93, ##76799)
│ ##76799
│ Main.optimize!(m)
└─── return used_values


Is this a bottleneck in your code? Or are you just trying to eliminate all type instabilities?

Here’s a simpler reproducer:

using JuMP
model = Model()
@variable(model, x)
test_warntype(model, x) = @expression(model, sum(x for i = 1:2))
@code_warntype test_warntype(model, x)

Digging in, the @expression expands to:

function foo(x)
    a = JuMP.Val{false}()
    for i = 1:2
        a = JuMP._destructive_add_with_reorder!(a, x)
    end
    return a
end
@code_warntype foo(x)

The issue is that foo(x) infers as Union{GenericAffExpr{Float64,VariableRef}, VariableRef}.

julia> @code_warntype foo(x)
Body::Union{GenericAffExpr{Float64,VariableRef}, VariableRef}
1 ──       goto #26 if not true
2 ┄─ %2  = φ (#1 => $(QuoteNode(Val{false}())), #25 => %57)::Union{Val{false}, GenericAffExpr{Float64,VariableRef}, VariableRef}
│    %3  = φ (#1 => 1, #25 => %63)::Int64
│    %4  = (isa)(%2, Val{false})::Bool
└───       goto #4 if not %4
3 ── %6  = (Base.getfield)(x, :model)::Model
│    %7  = (Base.getfield)(x, :index)::MathOptInterface.VariableIndex
│    %8  = %new(JuMP.VariableRef, %6, %7)::VariableRef
└───       goto #21
4 ── %10 = (isa)(%2, GenericAffExpr{Float64,VariableRef})::Bool
└───       goto #18 if not %10
5 ── %12 = π (%2, GenericAffExpr{Float64,VariableRef})
│    %13 = (Base.getfield)(%12, :terms)::OrderedCollections.OrderedDict{VariableRef,Float64}
│    %14 = (Base.getfield)(x, :model)::Model
│    %15 = (Base.getfield)(x, :model)::Model
│    %16 = (%14 === %15)::Bool
└───       goto #7 if not %16
6 ── %18 = (Base.getfield)(x, :index)::MathOptInterface.VariableIndex
│    %19 = (Base.getfield)(x, :index)::MathOptInterface.VariableIndex
│    %20 = (%18 === %19)::Bool
└───       goto #8
7 ──       goto #8
8 ┄─ %23 = φ (#6 => %20, #7 => false)::Bool
│    %24 = (Base.not_int)(%23)::Bool
└───       goto #10 if not %24
9 ── %26 = invoke Base.print_to_string(_2::VariableRef, " is not a valid key for type "::Vararg{Any,N} where N, VariableRef)::String
│    %27 = %new(Core.ArgumentError, %26)::ArgumentError
│          (OrderedCollections.throw)(%27)
└───       $(Expr(:unreachable))
10 ┄ %30 = invoke OrderedCollections.ht_keyindex2(%13::OrderedCollections.OrderedDict{VariableRef,Float64}, _2::VariableRef)::Int64
│    %31 = (Base.slt_int)(0, %30)::Bool
└───       goto #12 if not %31
11 ─ %33 = (Base.getfield)(%13, :vals)::Array{Float64,1}
│    %34 = (Base.arrayref)(true, %33, %30)::Float64
└───       goto #13
12 ─ %36 = (Base.neg_int)(%30)::Int64
│          invoke OrderedCollections._setindex!(%13::OrderedCollections.OrderedDict{VariableRef,Float64}, 0.0::Float64, _2::VariableRef, %36::Int64)
└───       goto #13
13 ┄ %39 = φ (#11 => %34, #12 => 0.0)::Float64
│    %40 = (Base.add_float)(%39, 1.0)::Float64
│          invoke Base.setindex!(%13::OrderedCollections.OrderedDict{VariableRef,Float64}, %40::Float64, _2::VariableRef)
└───       goto #14
14 ─       goto #15
15 ─       goto #16
16 ─       goto #17
17 ─       goto #21
18 ─ %47 = (isa)(%2, VariableRef)::Bool
└───       goto #20 if not %47
19 ─ %49 = π (%2, VariableRef)
│    %50 = %new(Pair{VariableRef,Float64}, %49, 1.0)::Pair{VariableRef,Float64}
│    %51 = %new(Pair{VariableRef,Float64}, x, 1.0)::Pair{VariableRef,Float64}
│    %52 = invoke JuMP._new_ordered_dict(VariableRef::Type{VariableRef}, Float64::Type{Float64}, %50::Pair{VariableRef,Float64}, %51::Pair{VariableRef,Float64})::OrderedCollections.OrderedDict{VariableRef,Float64}
│    %53 = %new(GenericAffExpr{Float64,VariableRef}, 0.0, %52)::GenericAffExpr{Float64,VariableRef}
└───       goto #21
20 ─       (Core.throw)(ErrorException("fatal error in type inference (type bound)"))
└───       $(Expr(:unreachable))
21 ┄ %57 = φ (#3 => %8, #17 => %12, #19 => %53)::Union{GenericAffExpr{Float64,VariableRef}, VariableRef}
│    %58 = (%3 === 2)::Bool
└───       goto #23 if not %58
22 ─       goto #24
23 ─ %61 = (Base.add_int)(%3, 1)::Int64
└───       goto #24
24 ┄ %63 = φ (#23 => %61)::Int64
│    %64 = φ (#22 => true, #23 => false)::Bool
│    %65 = (Base.not_int)(%64)::Bool
└───       goto #26 if not %65
25 ─       goto #2
26 ┄ %68 = φ (#24 => %57, #1 => $(QuoteNode(Val{false}())))::Union{Val{false}, GenericAffExpr{Float64,VariableRef}, VariableRef}
│    %69 = π (%68, Union{GenericAffExpr{Float64,VariableRef}, VariableRef})
└───       return %69

What’s weird is that

function bar(x)
    a = JuMP.Val{false}()
    a = JuMP._destructive_add_with_reorder!(a, x)
    a = JuMP._destructive_add_with_reorder!(a, x)
    return a
end
@code_warntype bar(x)

infers fine:

Body::GenericAffExpr{Float64,VariableRef}
1 ─ %1 = (Base.getfield)(x, :model)::Model
│   %2 = (Base.getfield)(x, :index)::MathOptInterface.VariableIndex
│   %3 = %new(JuMP.VariableRef, %1, %2)::VariableRef
│   %4 = %new(Pair{VariableRef,Float64}, %3, 1.0)::Pair{VariableRef,Float64}
│   %5 = %new(Pair{VariableRef,Float64}, x, 1.0)::Pair{VariableRef,Float64}
│   %6 = invoke JuMP._new_ordered_dict(VariableRef::Type{VariableRef}, Float64::Type{Float64}, %4::Pair{VariableRef,Float64}, %5::Pair{VariableRef,Float64})::OrderedCollections.OrderedDict{VariableRef,Float64}
│   %7 = %new(GenericAffExpr{Float64,VariableRef}, 0.0, %6)::GenericAffExpr{Float64,VariableRef}
└──      return %7

cc @blegat


Thanks for your help simplifying it further. I don't know if this is a bottleneck or exactly how to check. I just know I have a more complicated function with several of these constraints that is running slower than I would like. I have already eliminated some other type instabilities without seeing much improvement in my benchmarking, so I am trying to eliminate as many as I can.

I have already eliminated some other type instabilities without seeing much improvement in my benchmarking, so I am trying to eliminate as many as I can.

Type instabilities don't automatically lead to slow code. It's really worth spending time figuring out what the problem is before you try to optimize the run time.
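
For example, a quick sketch using BenchmarkTools (which you already load) to time the full solve, rather than looking only at inference:

using BenchmarkTools

# Interpolate the globals with $ so the benchmark times the call itself,
# not global-variable lookup.
@btime test_warntype($values, $num_values)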

It’s hard to offer other advice without the full code.

The full code is fairly long and complex and might not be easy to follow without a lot of context. The basic principle, though, is to run a more complicated version of this function, save its result to an array of solutions, then feed those solutions back in and run it again, with a constraint ensuring that no two solutions use more than a specified number of the same rows from the "values" array. Basically we want 100 solutions (sets of rows from the values array) with a maximum overlap, roughly like the sketch below.
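
A simplified sketch of that outer loop (the names find_solutions, num_solutions, and max_overlap are made up for illustration; this is not the real code):

using JuMP, GLPK

function find_solutions(values, num_solutions, max_overlap)
    num_rows = size(values, 1)
    solutions = Vector{Vector{Int}}()
    for _ in 1:num_solutions
        m = Model(with_optimizer(GLPK.Optimizer))
        @variable(m, used[1:num_rows], Bin)
        @constraint(m, sum(used) == 3)
        @objective(m, Max, sum(values[i, 2] * used[i] for i in 1:num_rows))
        # Each new solution may share at most max_overlap rows with any earlier one.
        for prev in solutions
            @constraint(m, sum(used[i] for i in prev) <= max_overlap)
        end
        optimize!(m)
        push!(solutions, [i for i in 1:num_rows if value(used[i]) > 0.5])
    end
    return solutions
end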

Is there a way to write the constraint/expression to sum without expanding to a loop of "JuMP._destructive_add_with_reorder!"? How did you find what that expression expands to? Is there an easier way to go about tracking what part of the code is slow? Thanks.

Is there a way to write the constraint/expression to sum without expanding

Use sum(used_values). We expand it into a for loop when we see sum(... for ...).
We might use a functional approach instead of for loops to avoid this type-instability issue: see Replace for loops by functional approach · Issue #26 · jump-dev/MutableArithmetics.jl · GitHub

How did you find what that expression expands to?

You can see what it expands to using @macroexpand. Julia v1.0 can handle small unions without much overhead (this is the reason findfirst was chosen to return a Union{Nothing, Int}).
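
For example, on the simpler reproducer above:

julia> @macroexpand @expression(model, sum(x for i = 1:2))

which prints the same kind of loop over JuMP._destructive_add_with_reorder! as in the foo example.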

Is there an easier way to go about tracking what part of the code is slow?

You can use ProfileView.
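
A minimal sketch of what that might look like (assuming ProfileView.jl is installed):

using Profile, ProfileView

test_warntype(values, num_values)   # run once first so compilation isn't profiled
Profile.clear()
@profile test_warntype(values, num_values)
ProfileView.view()                  # opens a flame graph of where the time is spent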


Interestingly, converting the constraints and objectives to use dot products instead of the sum(... for ...) notation does get rid of the "::Union{Val{false}, GenericAffExpr{Float64,VariableRef}}", but it replaces some of them with ::Any (specifically where I multiply by the variable), and in my actual code it increases memory allocation by about 4x with no noticeable impact on overall runtime. I also tried adding @views before the constraints that sum slices, and that did not seem to have any impact either. This may be a case where that union isn't so bad for real performance. The ProfileView output for the actual code is hard to read, but I don't see a lot of red, and none at the top, so I think this could just be as good as it is going to get.

#@constraint(m, sum(used_values[i] for i in 1:num_values) == 3)
@constraint(m, sum(used_values) == 3)

#@objective(m, Max, sum(values[i,2]*used_values[i] for i in 1:num_values))
@views @objective(m, Max, sum(values[:,2].*used_values))

What about dot(values[:, 2], used_values) instead of sum(values[:,2] .* used_values)?
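
Something like this in place of the objective above (dot is in LinearAlgebra; untested sketch):

using LinearAlgebra

@objective(m, Max, dot(values[:, 2], used_values))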

That gives a similar result in runtime but increases the memory allocation a bit compared to the original.