Sometimes no objective_bound info from an LP solve by Gurobi

I solved an LP with Gurobi using

Non-default parameters:
Method  6
Crossover  0
PDHGGPU  1

And here’s the result

julia> model
A JuMP Model
├ mode: DIRECT
├ solver: Gurobi
├ objective_sense: MIN_SENSE
│ └ objective_function_type: JuMP.AffExpr
├ num_variables: 51762757
├ num_constraints: 130897631
│ ├ JuMP.VariableRef in MOI.GreaterThan{Float64}: 51759445
│ ├ JuMP.AffExpr in MOI.GreaterThan{Float64}: 13570000
│ ├ JuMP.VariableRef in MOI.LessThan{Float64}: 24619445
│ ├ JuMP.AffExpr in MOI.LessThan{Float64}: 40715428
│ └ JuMP.AffExpr in MOI.EqualTo{Float64}: 233313
└ Names registered in the model: none

julia> JuMP.solution_summary(model)
solution_summary(; result = 1, verbose = false)
├ solver_name          : Gurobi
├ Termination
│ ├ termination_status : OPTIMAL
│ ├ result_count       : 1
│ └ raw_status         : Model was solved to optimality (subject to tolerances), and an optimal solution is available.
├ Solution (result = 1)
│ ├ primal_status        : FEASIBLE_POINT
│ ├ dual_status          : FEASIBLE_POINT
│ ├ objective_value      : 6.70482e+02
│ └ dual_objective_value : 6.70480e+02
└ Work counters
  ├ solve_time (sec)   : 3.85499e+03
  ├ simplex_iterations : 0
  ├ barrier_iterations : 0
  └ node_count         : 0

But querying objective_bound fails. Why?

julia> JuMP.objective_bound(model)
ERROR: MathOptInterface.GetAttributeNotAllowed{MathOptInterface.ObjectiveBound}:

## Cause

Getting attribute MathOptInterface.ObjectiveBound() cannot be performed

## Fixing this error

An `MOI.NotAllowedError` error occurs when you have tried to do something that
is not implemented by the solver.

The most common way to fix this error is to wrap the optimizer in a
`MOI.Utilities.CachingOptimizer`.

For example, if you are using `JuMP.Model` or `JuMP.set_optimizer`, do:

model = JuMP.Model(optimizer; with_cache_type = Float64)
model = JuMP.GenericModel{T}(optimizer; with_cache_type = T)
JuMP.set_optimizer(model, optimizer; with_cache_type = Float64)

Similarly, if you are using `MOI.instantiate`, do:

model = MOI.instantiate(optimizer; with_cache_type = Float64)


Stacktrace:
 [1] get(model::Gurobi.Optimizer, attr::MathOptInterface.ObjectiveBound)
   @ Gurobi ~/.julia/packages/Gurobi/Wo3Rk/src/MOI_wrapper/MOI_wrapper.jl:3344
 [2] _moi_get_result(model::Gurobi.Optimizer, args::MathOptInterface.ObjectiveBound)
   @ JuMP ~/.julia/packages/JuMP/N7h14/src/optimizer_interface.jl:1190
 [3] get(model::JuMP.Model, attr::MathOptInterface.ObjectiveBound)
   @ JuMP ~/.julia/packages/JuMP/N7h14/src/optimizer_interface.jl:1219
 [4] objective_bound(model::JuMP.Model)
   @ JuMP ~/.julia/packages/JuMP/N7h14/src/objective.jl:78
 [5] top-level scope
   @ REPL[35]:1

Because Gurobi did not have one to return.

Querying the objective bound is simply a passthrough to the solver, so when Gurobi does not report a bound there is nothing for JuMP to return.

I’m going to improve the error message: Change error msg for GetAttributeNotAllowed when is_set_by_optimize by odow · Pull Request #2910 · jump-dev/MathOptInterface.jl · GitHub

Because Gurobi did not have one to return.

It can sometimes return one, though in this case it cannot. The behavior is somewhat nondeterministic; I don't know the reason…

Edit: it appears that it depends on the Crossover parameter.

It appears to be safe to use this setting:

JuMP.set_attribute(model, "Crossover", 0);
JuMP.set_attribute(model, "Method", 2);

but it is unsafe to set Crossover to 0 when Method=6 (using PDHG), in which case there is no objective_bound for an LP.
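Since whether objective_bound is available depends on the parameter combination, one defensive pattern is to guard the query. This is a minimal sketch; the helper name `try_objective_bound` is my own, not part of JuMP:

```julia
import JuMP
import MathOptInterface as MOI

# Return the solver's objective bound if it has one, else `nothing`.
# `MOI.GetAttributeNotAllowed` is the error surfaced above when Gurobi
# (running PDHG without crossover) has no bound to report.
function try_objective_bound(model)
    try
        return JuMP.objective_bound(model)
    catch err
        err isa MOI.GetAttributeNotAllowed && return nothing
        rethrow()
    end
end
```

With Method=2 and Crossover=0 this would presumably return the barrier bound, while under Method=6 it would return `nothing` instead of erroring.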

These three APIs refer to distinct quantities:

julia> map(f -> f(model), (
           JuMP.objective_value,
           JuMP.dual_objective_value,
           JuMP.objective_bound
       ))
(597.6414184756391, 597.6414184754293, 597.6414184752236)

Currently I have no idea where dual_objective_value can be used in daily applications. But the other two, especially objective_bound, are necessary, e.g. in the Benders decomposition algorithm.

The dual_objective_value shows up in feasibility cuts for Benders: Benders decomposition · JuMP

I don’t think objective_bound is necessary in Benders.

Having read that part, I think the usage there is odd: how can you retrieve any valid information when the termination status is INFEASIBLE? For example:

import JuMP, Gurobi
const GRB_ENV = Gurobi.Env();
const model = JuMP.direct_model(Gurobi.Optimizer(GRB_ENV));
const x0 = 3
JuMP.@variable(model, y <= 2)
JuMP.@variable(model, x == x0)
JuMP.@constraint(model, c, y >= x)  # infeasible: forces y >= 3 while y <= 2
JuMP.optimize!(model)

julia> JuMP.solution_summary(model)
solution_summary(; result = 1, verbose = false)
├ solver_name          : Gurobi
├ Termination
│ ├ termination_status : INFEASIBLE
│ ├ result_count       : 0
│ └ raw_status         : Model was proven to be infeasible.
├ Solution (result = 1)
│ ├ primal_status        : NO_SOLUTION
│ └ dual_status          : NO_SOLUTION

A modern approach is to add slack variables so that the formulation is always feasible; the solve then terminates with OPTIMAL, and we generate feasibility cuts from it.
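As a concrete sketch of that slack approach, reusing the small infeasible model above (the slack `s` and the penalty objective are my own illustrative choices, not from the tutorial):

```julia
import JuMP, Gurobi

model = JuMP.Model(Gurobi.Optimizer)
x0 = 3
JuMP.@variable(model, y <= 2)
JuMP.@variable(model, x == x0)
JuMP.@variable(model, s >= 0)           # slack keeps the subproblem feasible
JuMP.@constraint(model, c, y + s >= x)  # relaxed version of y >= x
JuMP.@objective(model, Min, s)          # minimize the constraint violation
JuMP.optimize!(model)
# Termination is now OPTIMAL; a positive optimal `s` certifies the original
# infeasibility, and the dual of `c` can be used to build a feasibility cut.
```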


I think it's essential. Benders decomposition, after all, is a global optimization method in which you keep both an upper bound and a lower (a.k.a. dual) bound. The term "upper bound" would be more precisely described as "an objective value associated with a primal-side feasible solution".

The core reason that global optimizers like Gurobi are stronger than local optimizers like Ipopt is that they provide a dual bound via the API JuMP.objective_bound. This is essential in all global optimization algorithms.

I've gained a good deal of experience over the past several years studying these subjects, and I believe few people know Benders decomposition better than I do. I don't need to be modest in this particular field.

The dual_status is INFEASIBILITY_CERTIFICATE. I'll update the docs to make this clearer.

(This is yet another example where I need to say thank you. Your close reading of the documentation does lead to improvements :smile:)

Edit: [docs] clarify dual unbounded ray in Benders decomposition tutorial by odow · Pull Request #4109 · jump-dev/JuMP.jl · GitHub

I don’t think objective_bound is necessary in Benders.

I’ll be direct: objective_bound is not necessary for Benders. For example, we don’t use it in the tutorial Benders decomposition · JuMP. I don’t want to get into a discussion about this point.

Trust me. Using objective_value in deriving the lower bound is wrong.

I’ll open a PR to revise that doc.


It doesn't make sense to look at the objective_value of model, which is the master problem. The master problem is merely a surrogate; it only makes sense to check its objective_bound. (I didn't put images on GitHub, where they cannot be rendered decently.)

After solving the master problem (even if not to OPTIMAL, i.e. suboptimally), we need to:

  1. on the primal side, retrieve the best feasible solution of the first-stage decision variables;
  2. retrieve its objective_bound, which is a valid global lower bound, i.e. the one used in the Benders algorithm's gap calculation.

These two are valid, and they are the only valid pieces of information you can retrieve from the solver.
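The bookkeeping that these two retrievals feed can be sketched with made-up numbers (all values below are hypothetical, for illustration only):

```julia
# Bounds maintained by a Benders loop for a minimization problem.
lb = -Inf   # best global lower bound, updated from objective_bound(master)
ub = +Inf   # best global upper bound, updated from primal-feasible solutions

# Suppose one iteration reports these (hypothetical) values:
master_bound = 590.0  # objective_bound(master): valid even at suboptimal termination
candidate    = 600.0  # first-stage cost + second-stage cost at the feasible point

lb = max(lb, master_bound)  # the lower bound can only rise
ub = min(ub, candidate)     # the upper bound can only fall
gap = (ub - lb) / max(abs(ub), 1e-10)  # relative optimality gap
println(gap)
```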


Imagine now that you solve the master problem with a local optimizer, e.g. Ipopt.
Then you must solve the master problem to GLOBAL optimality and then retrieve the objective_value, since only at global optimality do we have objective_value(master) == objective_bound(master).

But with Ipopt, global optimality is never ensured. Therefore you simply cannot use Ipopt to solve the Benders master problem, because with Ipopt you cannot derive a valid lower bound for the Benders algorithm.


The correct way is to use a global optimizer, e.g. Gurobi, to solve the Benders master problem. More importantly, you do not even need to solve the master problem (mixed-integer or not) to global optimality, since the lower bound obtained from objective_bound is always valid. And if the first-stage decision is feasible, the primal-side solution is also valid. In this manner you maintain validity on both the primal and dual sides.

Some other comments about Benders decomposition algorithm design:

  • Initially the master problem is unbounded below; in the literature, people often simply add an artificial constraint θ >= -SOME_BIG_M. In practical applications, e.g. in two-stage stochastic programming (2SSP), this is not actually necessary, because we can initially add a round of disaggregated cuts to ensure a finite initial bound.
  • In practical applications we don't need to adopt gap shrinkage as a termination criterion, since the upper-bound calculation might be nontrivial. We only need to monitor the ascending progress of the lower bound, and we can terminate once we are unable to add any more violated cuts.
  • A lot of the literature studies "stabilization techniques" for solving the master problem, e.g. the proximal bundle method (I think Miles Lubin knows about this). I was quite confused by this material when I entered the subject several years ago, but now I tend to believe we don't need these techniques at all: the naive cutting-plane method already works well, provided you make proper use of multithreaded async programming, which we can try out immediately in the Julia language.
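The "terminate once no more violated cuts can be added" criterion from the second bullet can be sketched as follows (the function name, signature, and tolerance are my own illustrative choices):

```julia
# Decide whether an optimality cut θ >= a'x + b is violated at the current
# master solution (xhat, theta_hat): the cut is worth adding only if the
# right-hand side exceeds theta_hat by more than a tolerance.
function is_violated(a::Vector{Float64}, b::Float64,
                     xhat::Vector{Float64}, theta_hat::Float64; tol = 1e-6)
    return sum(a .* xhat) + b > theta_hat + tol
end

println(is_violated([2.0], 3.0, [1.0], 4.0))  # 2*1 + 3 = 5 > 4: violated
println(is_violated([2.0], 3.0, [1.0], 5.0))  # 5 > 5 fails: not violated
```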

Since I'm curious, let me directly ping @miles.lubin.

I replied to your PR: Please using `objective_bound` in deriving the lower bound in Benders by WalterMadelim · Pull Request #4110 · jump-dev/JuMP.jl · GitHub

I’m not going to reply any further on this thread.