I find `JuMP.set_objective_coefficient` somewhat favorable, because it is a functional API and is flexible. For example, I can use it to write a (pseudo-) Benders' decomposition algorithm:
```julia
import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
function Model(name) # e.g. "st2_lms" or "st1_bMD" or "supp_lMs"
    m = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
    JuMP.MOI.set(m, JuMP.MOI.Name(), name)
    s = name[end-1]
    s == 'm' && JuMP.set_objective_sense(m, JuMP.MIN_SENSE)
    s == 'M' && JuMP.set_objective_sense(m, JuMP.MAX_SENSE)
    last(name) == 's' && JuMP.set_silent(m)
    m
end;
function add_a_Benders_cut() return nothing end # placeholder
model = Model("_lms") # Min sense, silent
JuMP.@variable(model, x[1:4])
JuMP.@variable(model, Y[1:2, 1:3]) # matrix variable
JuMP.@variable(model, e) # an epigraphical variable
JuMP.set_objective_coefficient(model, x, rand(4))
JuMP.set_objective_coefficient.(model, Y, rand(2, 3)) # 1️⃣ note the dot-broadcast
JuMP.optimize!(model) # initially we have no cut for `e`, so we exclude it to avoid unboundedness
add_a_Benders_cut(); # assume this adds a Benders' cut for `e`
common_expr = JuMP.objective_function(model)
JuMP.set_objective_coefficient(model, e, 1) # add the ≥2 stage costs
JuMP.optimize!(model) # a proper optimization, as opposed to the "initially" solve above
lb = JuMP.objective_bound(model) # a valid global lower bound
common_cost = JuMP.value(common_expr)
cost_ge2_ub = some_function(JuMP.value.(x), JuMP.value.(Y)) # `some_function` evaluates the ≥2 stage cost
ub = common_cost + cost_ge2_ub # a valid global upper bound
@assert lb <= ub
if lb + 1e-6 >= ub
    @info "Benders' method converges to global optimality"
end
```
The questions include:

1. Can we add a querying function `JuMP.objective_coefficient`?
2. Currently `JuMP.set_objective_coefficient` only supports a vector (see `(model, x, rand(4))` above), but it doesn't work for a matrix (1️⃣) without dot-broadcasting.
3. Can we make this API more flexible, such that we can write `JuMP.set_objective_coefficient(model, x, rand(4), Y, rand(2, 3))`?
(The 1st and 3rd points are more desirable.)
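For reference, a minimal sketch of how the three points can be approximated with the current interface (assuming a recent JuMP version where the vectorized `set_objective_coefficient` method exists; the wrapper `set_objective_coefficients` at the end is hypothetical and not part of JuMP):

```julia
import JuMP

model = JuMP.Model()
JuMP.@variable(model, x[1:4])
JuMP.@variable(model, Y[1:2, 1:3])
JuMP.@objective(model, Min, 0 * x[1]) # make the (empty affine) objective explicit

# 1st point: there is no dedicated query yet, but a coefficient can be read
# off the objective expression with `JuMP.coefficient`:
JuMP.set_objective_coefficient(model, x, [1.0, 2.0, 3.0, 4.0])
c1 = JuMP.coefficient(JuMP.objective_function(model), x[1]) # 1.0

# 2nd point: flatten the matrix with `vec` to reuse the existing vector
# method, instead of dot-broadcasting the scalar method element-by-element:
JuMP.set_objective_coefficient(model, vec(Y), vec(rand(2, 3)))

# 3rd point: a hypothetical wrapper (NOT in JuMP) accepting alternating
# variable-container / coefficient-container pairs:
function set_objective_coefficients(model, pairs...)
    @assert iseven(length(pairs))
    for i in 1:2:length(pairs)
        vars, coefs = pairs[i], pairs[i+1]
        JuMP.set_objective_coefficient(model, vec(collect(vars)), vec(collect(coefs)))
    end
end
set_objective_coefficients(model, x, rand(4), Y, rand(2, 3))
```

Note that `vec` uses column-major order, so the variable and coefficient containers stay aligned as long as they have the same shape.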