Can we improve JuMP.set_objective_coefficient?

I find JuMP.set_objective_coefficient quite useful, because it is a functional API and is flexible. For example, I can use it to write a (pseudo-)Benders’ decomposition algorithm:

import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
function Model(name) # e.g. "st2_lms" or "st1_bMD" or "supp_lMs"
    m = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
    JuMP.MOI.set(m, JuMP.MOI.Name(), name)
    s = name[end-1]
    s == 'm' && JuMP.set_objective_sense(m, JuMP.MIN_SENSE)
    s == 'M' && JuMP.set_objective_sense(m, JuMP.MAX_SENSE)
    last(name) == 's' && JuMP.set_silent(m)
    m
end;
function add_a_Benders_cut() return nothing end
model = Model("_lms") # Min Sense
JuMP.@variable(model, x[1:4])
JuMP.@variable(model, Y[1:2, 1:3]) # Matrix Variable
JuMP.@variable(model, e) # an epigraphical variable
JuMP.set_objective_coefficient(model, x, rand(4))
JuMP.set_objective_coefficient.(model, Y, rand(2, 3)) # 1️⃣
JuMP.optimize!(model) # Initially, we don't have a cut for `e`, therefore we exclude it to avoid unboundedness
add_a_Benders_cut(); # Assume here we add a Benders' cut for `e`
common_expr = JuMP.objective_function(model)
JuMP.set_objective_coefficient(model, e, 1) # add the ≥2 stage costs
JuMP.optimize!(model) # This is a proper optimization, as opposed to the above "Initially"
lb = JuMP.objective_bound(model) # a valid global lower bound
common_cost = JuMP.value(common_expr)
cost_ge2_ub = some_function(JuMP.value.(x), JuMP.value.(Y)) # `some_function` is a placeholder that evaluates the ≥2 stage cost
ub = common_cost + cost_ge2_ub # a valid global upper bound
@assert lb <= ub
if lb + 1e-6 >= ub
    @info "Benders' method converges to global optimality"
end

The questions include:

  1. Can we add a querying function JuMP.objective_coefficient?
  2. Currently JuMP.set_objective_coefficient supports only vectors (see (model, x, rand(4)) above), but it doesn’t work for a matrix (1️⃣) without dot-broadcasting.
  3. Can we make this API more flexible, so that we can write JuMP.set_objective_coefficient(model, x, rand(4), Y, rand(2, 3))?

(The 1st and 3rd points are the more desirable ones.)

For 1, do:

f = objective_function(model) # query the whole objective expression once
coefficient(f, x)             # then read off the coefficient of `x`

I don’t want to add objective_function_coefficient(model, x) because this implies that we can efficiently query an objective coefficient, which we cannot do. We have added normalized_coefficient for constraints, but I tend to think this was a mistake that I don’t want to repeat.
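As a minimal sketch of this query pattern (assuming only JuMP is loaded; no optimizer is needed, and the model and coefficients here are illustrative):

```julia
import JuMP

# Build a small model; no optimizer is needed just to query coefficients.
model = JuMP.Model()
JuMP.@variable(model, x[1:3])
JuMP.@objective(model, Min, 2x[1] + 5x[2]) # x[3] does not appear

f = JuMP.objective_function(model)  # one query for the whole objective
JuMP.coefficient(f, x[2])  # 5.0
JuMP.coefficient(f, x[3])  # 0.0 for variables absent from the objective
```

Note that `coefficient` returns 0.0 for a variable that does not appear in the expression, so no special-casing is needed.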

For 2: use broadcasting, or reshape into a vector. JuMP’s philosophy is to severely restrict the input types to provide a consistent and correct API experience. Allowing some sort of AbstractArray input makes MethodErrors much more likely, and we don’t think they are a good user experience.
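As a sketch, both workarounds look like this (assuming the `model` and the 2×3 variable matrix `Y` from the example above; `C` is an illustrative coefficient matrix):

```julia
C = rand(2, 3)

# Option A: broadcast the scalar method elementwise over the matrix.
JuMP.set_objective_coefficient.(model, Y, C)

# Option B: flatten both arguments and use the vector method.
# `vec` is column-major, so the elements of Y and C stay aligned.
JuMP.set_objective_coefficient(model, vec(Y), vec(C))
```

Because Julia's `vec` always flattens in column-major order, applying it to both arrays preserves the elementwise pairing.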

For 3: no. Note that this would only be faster than two separate calls if we could actually communicate it to the solver in a single function call. That would require a much bigger change throughout the entire JuMP/MathOptInterface ecosystem, which is not worth it for this single use-case. Use two separate calls, or, if you really must:

JuMP.set_objective_coefficient(
    model,
    vcat(x, vec(Y)),
    vcat(rand(4), vec(rand(2, 3))),
)

As a related comment, supporting Varargs like this is a pain. At minimum, we’d want something more regular like:

JuMP.set_objective_coefficient(model, x => rand(4), Y => rand(2, 3))

but even then, the Pairs would be different types, so dispatch will be slow etc. We’d probably require:

JuMP.set_objective_coefficient(model, x => rand(4), vec(Y) => vec(rand(2, 3)))

but then you may as well use the vcat approach.

Okay, I understand.

Why?

I think that, for a modeling language like JuMP, the objective function is a Tier 2 object.

From my perspective, Tier 1 objects include variables, scalar expressions, and scalar constraints.
The objective function is then merely a special scalar expression.
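To illustrate that view, here is a sketch (the names are illustrative): the objective can be built as an ordinary scalar expression first and then installed:

```julia
import JuMP

model = JuMP.Model()
JuMP.@variable(model, x[1:4])

# Build an ordinary scalar affine expression ...
JuMP.@expression(model, expr, sum(i * x[i] for i in 1:4))

# ... then install that same expression as the objective.
JuMP.@objective(model, Min, expr)
```

In this sense the objective is nothing more than a scalar expression that the model has been told to minimize.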

This style is rather lengthy, and I don’t want it. I would rather opt to write

@set_objective_function(model, sm(rand(4), x) + sm(rand(2, 3), Y))

Why?

Because we currently support only querying the entire ::MOI.ObjectiveFunction. A change would require extensive rework across all of JuMP and MOI, which I don’t really want to do.

See

which is very inefficient because it first queries the entire constraint object (::MOI.ConstraintFunction and ::MOI.ConstraintSet), then drops everything and returns a single number. (I added this back in “Add standard_form modifications”, jump-dev/JuMP.jl Pull Request #1935. I now think it was a mistake.)

I would rather opt to write

Yes, if you are setting a new objective, please use this style. Better yet, do not invent new macros. Just do:

JuMP.@objective(model, Min, rand(4)' * x + sum(rand(2, 3) .* Y))

I think JuMP.objective_function is not a good name.
It should have been called JuMP.objective_expression.

julia> println(model)
Min 4 x[1] + 3 x[2] + 2 x[3] + x[4]
Subject to

julia> JuMP.objective_function(model)
4 x[1] + 3 x[2] + 2 x[3] + x[4]

julia> typeof(ans)
AffExpr (alias for JuMP.GenericAffExpr{Float64, JuMP.GenericVariableRef{Float64}})

The following, by contrast, is a real function:

julia> f1(x) = 1 * x
f1 (generic function with 1 method)

julia> f1::Function # no ERROR

I’m going to link you to PSA: Julia is not at that stage of development anymore. It’s written for Julia, but the same logic applies to JuMP.
