Error Handling in Quote Block using JuMP

I am puzzled by the following:

using JuMP
m = Model()
@variable(m,x)

code = quote
    try
        1
    catch e
        @NLconstraint($m, [1,2] .<= exp($x))
    end
end

eval(code)

ERROR: At REPL[240]:5: `@NLconstraint(Feasibility
Subject to
, [1, 2] .<= exp(x))`: expected comparison operator (<=, >=, or ==). 

Why is the @NLconstraint macro being evaluated here if the catch block should never be entered? (By the way, I know that the .<= operator is not allowed in this macro.)

Context: I ran into this while trying to set up a try/catch block that calls the JuMP.@constraints macro for a vectorized constraint expression. If that fails, I want it to call the JuMP.@NLconstraints macro in the catch block.

The following works when I don’t use vectorized constraints:

function add_con(m, con_quote)
    quote
        try
            @constraints($m,$con_quote)
        catch
            @NLconstraints($m,$con_quote)
        end
    end
end

m1 = Model()
@variable(m1,x[1:2])
con_quote1 = :(
    begin
        con[i=1:2], [1,2][i] <= x[i] <= [5,6][i]
    end
)
eval(add_con(m1,con_quote1))
print(m1)
# Feasibility
# Subject to
#  con[1] : x[1] in [1.0, 5.0]
#  con[2] : x[2] in [2.0, 6.0]

When I switch over to a vectorized constraint, for some reason it tries to call the @NLconstraints macro instead:

m2 = Model()
@variable(m2,x[1:2])
con_quote2 = :(
    begin
        con, [1,2] .<= x .<= [5,6]
    end
)
eval(add_con(m2,con_quote2))
print(m2)
# ERROR: LoadError: `@NLconstraint(Feasibility
# Subject to
# , con, [1, 2] .<= x .<= [5, 6])`: only ranged rows of the form lb <= expr <= ub are supported.

However, the @constraints macro should work with this type of constraint:

m3 = Model()
@variable(m3,x[1:2])
@constraints(m3,begin
    con, [1,2] .<= x .<= [5,6]
end)
print(m3)
# Feasibility
# Subject to
#  con : x[1] in [1.0, 5.0]
#  con : x[2] in [2.0, 6.0]

Any help/insights would be appreciated.

Thanks to @pulsipher for some insight here: it's probably an issue with when the macros get expanded versus when the code actually runs. When I replace the @NLconstraints macro with add_nonlinear_constraint, the examples now work:

function add_con(m, con_quote)
    quote
        try
            @constraints($m,$con_quote)
        catch
            for cq in $con_quote
                add_nonlinear_constraint($m,cq)
            end
        end
    end
end

What is the use case? I would avoid macros if at all possible, and I would avoid nesting macros even more so. It's hard to get right.

Isn't this just the fact that macros are expanded when the enclosing code is compiled, rather than when the block they are written in is actually run?
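
A minimal sketch of what I mean (hypothetical macro, nothing to do with JuMP): a macro in a dead branch is still expanded while the surrounding code is lowered, so an error raised by the macro itself surfaces before the branch could ever run.

macro shout()
    error("expanded!")   # runs while the macro is being expanded, not when the code runs
end

function never_taken()
    if false
        @shout()   # expansion happens while this method is lowered,
    end            # so the error is thrown here even though the branch is dead
    return nothing
end
# defining never_taken already throws: the error appears at macro-expansion time

The same thing happens with a quote block: eval(code) lowers the whole expression, which expands @NLconstraint regardless of whether the catch branch is ever entered.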

I am using this in DisjunctiveProgramming.jl. When creating a disjunction, you pass the constraints (as expressions) that belong to each disjunct. These are then converted to JuMP constraints and reformulated with Big-M or convex hull techniques. Since I don't know whether the user is going to pass a nonlinear constraint, I use the try/catch block to create the appropriate constraint. That's where this is coming from.
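
For anyone unfamiliar with the Big-M reformulation, here is a toy sketch of the idea (my own example, using the textbook convention where y[k] = 1 enforces disjunct k; the package's exact convention may differ):

using JuMP

model = Model()
@variable(model, x)
@variable(model, y[1:2], Bin)   # one indicator per disjunct
M = 1000
# Disjunction: (x <= 3) OR (x >= 5), written with Big-M:
@constraint(model, x <= 3 + M * (1 - y[1]))   # binding when y[1] == 1, relaxed by M otherwise
@constraint(model, x >= 5 - M * (1 - y[2]))   # binding when y[2] == 1, relaxed by M otherwise
@constraint(model, sum(y) == 1)               # exactly one disjunct is enforced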

One goal of the work announced in "NumFOCUS signs agreement with LANL to improve nonlinear support in JuMP" is to make this easier to support. Embedding the current nonlinear macros is not really supported or intended to work.

Is there a way of adding a constraint similar to add_nonlinear_constraint (without using macros), but for regular (i.e. ScalarConstraint or VectorConstraint) constraints? Or will it require build_constraint inside add_constraint? I want it to accept expressions of the form lb <= expr, expr <= ub, lb <= expr <= ub.

julia> using JuMP

julia> model = Model();

julia> @variable(model, x)
x

julia> f = 2.0x + x^2
x² + 2 x

julia> s = MOI.LessThan(2.0)
MathOptInterface.LessThan{Float64}(2.0)

julia> add_constraint(model, ScalarConstraint(f, s))
x² + 2 x ≤ 2.0

julia> print(model)
Feasibility
Subject to
 x² + 2 x ≤ 2.0

This helps! What if I want to pass an expression with comparison operators rather than passing the set (as we do with the @constraint macro)? That is, I want to parse the expression into func-in-set form.

The user could potentially write x <= 2x, which gets translated into -x <= 0 by the @constraint macro.
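
For what it's worth, a rough sketch of doing that normalization by hand (my own toy code; lhs, rhs, and f are just illustrative names): move every term to one side, and the leftover constant becomes the set.

using JuMP

model = Model()
@variable(model, x)

lhs, rhs = x, 2x                   # pretend the user wrote lhs <= rhs
f = lhs - rhs                      # -x: all variable terms moved to one side
set = MOI.LessThan(0.0)            # any leftover constant would move into the set
add_constraint(model, ScalarConstraint(f, set))   # adds -x <= 0, i.e. func-in-set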

I think you can leverage a lot more of the JuMP extension infrastructure, and avoid messing with macros.

julia> using JuMP

julia> struct Disjunction
           model::Model
           name::String
           M::Float64
           variables::Vector{VariableRef}
       end

julia> function JuMP.build_constraint(::Function, f, lower, upper, d::Disjunction)
           return build_constraint(error, f, MOI.Interval(lower, upper), d)
       end

julia> struct RelaxedConstraint{F,S} <: AbstractConstraint
           f::F
           M::Float64
           y::VariableRef
           set::S
       end

julia> function JuMP.build_constraint(err::Function, f, set, d::Disjunction)
           y = @variable(d.model, binary = true)
           push!(d.variables, y)
           set_name(y, "$(d.name)[$(length(d.variables))]")
           return RelaxedConstraint(f, d.M, y, set)
       end

julia> function JuMP.add_constraint(
           model::Model, 
           con::RelaxedConstraint{F,<:MOI.LessThan}, 
           name::String,
       ) where {F}
           add_constraint(
               model, 
               ScalarConstraint(con.f - con.M * con.y, con.set), 
               name,
           )
           return
       end

julia> function JuMP.add_constraint(
           model::Model, 
           con::RelaxedConstraint{F,<:MOI.GreaterThan}, 
           name::String,
       ) where {F}
           add_constraint(
               model, 
               ScalarConstraint(con.f + con.M * con.y, con.set),
               name,
           )
           return
       end

julia> function JuMP.add_constraint(
           model::Model, 
           con::RelaxedConstraint{F,<:MOI.Interval}, 
           name::String,
       ) where {F}
           add_constraint(
               model, 
               ScalarConstraint(con.f - con.M * con.y, MOI.LessThan(con.set.upper)), 
               name,
           )
           add_constraint(
               model, 
               ScalarConstraint(con.f + con.M * con.y, MOI.GreaterThan(con.set.lower)),
               name,
           )
           return
       end

julia> function add_disjunction(f, model, name; bigM)
           d = Disjunction(model, name, bigM, VariableRef[])
           f(d)
           @constraint(model, sum(d.variables) == 1)
       end
add_disjunction (generic function with 1 method)

julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.

julia> @variable(model, -5 <= x <= 10)
x

julia> add_disjunction(model, "y"; bigM = 1000) do d
           @constraint(model, 0 <= x <= 3, d)
           @constraint(model, 5 <= x <= 9, d)
       end
y[1] + y[2] = 1.0

julia> print(model)
Feasibility
Subject to
 y[1] + y[2] = 1.0
 x ≥ -5.0
 x ≤ 10.0
 y[1] binary
 y[2] binary
 x - 1000 y[1] ≤ 3
 x - 1000 y[2] ≤ 9
 x + 1000 y[1] ≥ 0
 x + 1000 y[2] ≥ 5

This is fabulous! I really wanted to use the JuMP infrastructure, but wasn’t sure how. Thanks for pointing me in this direction! I will start playing with this initial code you’ve provided to add the features I have implemented in the package (convex hull reformulation, inferring the tightest M value, etc.). I will likely be bugging you as I go about this to get your feedback. Thanks!