About JuMP.is_solved_and_feasible

In my experience with global optimization, these are the only situations I have encountered so far.


# ✅ The foremost indispensable querying functions are `JuMP.value` and `JuMP.objective_bound`
# Theoretically, `JuMP.objective_value` is redundant given the presence of `JuMP.value`

function JuMP_objective_bound_works_properly(model)::Bool 
    # ✅ I think JuMP.jl should have a function `JuMP.has_objective_bound()`
    # But as I didn't find it, I suppose that we have this function instead.
    # This function returns true::Bool if `JuMP.objective_bound(model)` works properly after `JuMP.optimize!(model)`,
    # which partly means @assert JuMP.objective_bound(model) <= JuMP.objective_value(model) in a Min-program,
    # where the LHS is a valid dual bound and the RHS is a valid primal bound.
end
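A minimal sketch of such a check, assuming `JuMP.objective_bound` throws when the attribute is unavailable (the `try/catch` and the tolerance are my own workaround, not an official API):

```julia
import JuMP

# Hypothetical helper, not part of JuMP: returns true if `objective_bound`
# can be queried and is consistent with the incumbent (Min-program assumed).
function JuMP_objective_bound_works_properly(model)::Bool
    bound = try
        JuMP.objective_bound(model)  # may throw MOI.GetAttributeNotAllowed
    catch
        return false
    end
    JuMP.has_values(model) || return false
    # in a Min-program, the dual bound must not exceed the primal bound
    return bound <= JuMP.objective_value(model) + 1e-8  # tolerance is my choice
end
```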

JuMP.optimize!(model)
# After optimize!, 
# we employ 8 functions that return Bools to help us judge:
# 1. if it is normal (true::Bool), we want it to be silent, then we just fetch things we need and proceed
# 2. if it is abnormal (false::Bool), we want to see an immediate red julia ERROR with useful info (better with the model's name, since we tend to build ≥2 models), then we go back to modify the corresponding model properly until it becomes normal
# In practice, we only want case 1 (normal) to happen, so that we can finish the whole algorithm
# Mnemonic:
# `p`: primal
# `d`: dual
# `s`: simple (do not set TIME_LIMIT)
# `t`: have set a TIME_LIMIT
# `l`: local solvers
# `g`: global solvers
if no_dual # if I don't need dual solutions, i.e., `JuMP.dual` related things
    if no_time_limit # if I haven't set TIME_LIMIT
        if local_solver # I only want a primal-side feasible solution, e.g. I used Ipopt
            psl(model)
        else # I want these 2 functions to work properly: `JuMP.value` and `JuMP.objective_bound`, e.g. I used Gurobi
            psg(model) 
        end
    else # e.g. if this is a practical large problem such that I had set a TIME_LIMIT
        if local_solver 
            ptl(model)
        else
            ptg(model)
        end
    end
else # if I need the `JuMP.dual` related things (e.g. do sensitivity analysis)
    if no_time_limit 
        if local_solver 
            dsl(model) 
        else
            dsg(model) 
        end
    else
        if local_solver 
            dtl(model)
        else
            dtg(model)
        end
    end
end

function psl(model) # ✅
    JuMP.has_values(model) || return false
    JuMP.termination_status(model) == JuMP.LOCALLY_SOLVED || return false
    return true
end
function dsl(model) 
    JuMP.has_duals(model) || return false
    return psl(model)
end

function psg(model) # ✅
    JuMP_objective_bound_works_properly(model) || return false
    JuMP.has_values(model) || return false
    JuMP.termination_status(model) == JuMP.OPTIMAL || return false
    return true
end
function dsg(model)
    JuMP.has_duals(model) || return false
    return psg(model)
end

function ptl(model) # ✅
    JuMP.termination_status(model) in [JuMP.LOCALLY_SOLVED, JuMP.TIME_LIMIT] || return false
    JuMP.has_values(model) || return false
    return true
end
function dtl(model)
    JuMP.has_duals(model) || return false
    return ptl(model)
end

function ptg(model) # ✅
    JuMP.termination_status(model) in [JuMP.OPTIMAL, JuMP.TIME_LIMIT] || return false
    JuMP_objective_bound_works_properly(model) || return false
    JuMP.has_values(model) || return false
    return true
end
function dtg(model)
    JuMP.has_duals(model) || return false
    return ptg(model)
end

Well, as far as I’m concerned, dual variables are useful only when the program is solved to OPTIMAL; besides, I never set a TIME_LIMIT for a local solver like Ipopt. (Here is a note about utilizing dual variables as cut coefficients.)

Irrespective of whether we are going to fetch dual variables with JuMP.dual, or whether we have set a TIME_LIMIT, the proper procedure, from the standpoint of a rigorous mathematician, is as follows (assume we are doing a Min-program).

  1. call JuMP.optimize!(model)
  2. When it returns to the julia REPL, we fetch a primal-side feasible solution via x = JuMP.value(x) (suppose it is valid).
  3. Use julia functions to double-check that x is feasible, in case the solver is unreliable. (It seems there is already such a functionality, JuMP.primal_feasibility_report, but I seldom use it because it is not handy.)
  4. As long as this x is feasible, we use julia functions to calculate the objective value ourselves, in case JuMP.objective_value is unreliable (as it depends on the solver!).
  5. Up to here we’ve obtained a “good” solution and a primal bound (denoted by ub). When the solver is Ipopt, we finish at this step.

But if we are doing global optimization (e.g. via Gurobi):
6. We fetch the dual bound via JuMP.objective_bound, denoted by lb.
7. We calculate the absolute gap and relative gap based on lb and ub.
8. We assess the results based on these. If Gurobi reports OPTIMAL, these gaps should be very small. If Gurobi reaches the TimeLimit, there may be a gap, but at least we have a valid dual bound lb and a “good” solution x (with its primal bound ub).
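Steps 6–7 above can be sketched as follows (the relative-gap denominator is one common convention; solvers differ in how they define it):

```julia
# Given a valid dual bound `lb` and a primal bound `ub` of a Min-program,
# compute the absolute gap and a relative gap (denominator convention is a choice).
function gaps(lb::Float64, ub::Float64)
    abs_gap = ub - lb
    rel_gap = abs_gap / max(abs(ub), 1e-10)  # guard against ub == 0
    return abs_gap, rel_gap
end

gaps(-3.0, -3.0)  # the OPTIMAL case: both gaps are 0.0
```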

Is there a question here?

is_solved_and_feasible does what it says it does: did the solver run successfully and find a feasible solution?

You can see the implementation here:

I should remove has_values from the documentation. It is misleading. It doesn’t confirm that there is a feasible primal solution, only that the primal status is not NO_SOLUTION. The result might be an INFEASIBLE_POINT, or a NEARLY_FEASIBLE_POINT… neither are probably what you are expecting.

No, this is a discussion post.

The standpoint of JuMP.is_solved_and_feasible comes from software design.

But my standpoint is the user’s view.

The crux is that although JuMP.is_solved_and_feasible performs what it claims correctly, it may not fit the user’s need.

e.g., when I’m using Gurobi without TimeLimit, I do not deem LOCALLY_SOLVED normal.

But theoretically speaking, this can happen with JuMP.is_solved_and_feasible, as the default allow_local is true.

Besides, I have never encountered issues with allow_almost.

From the user’s standpoint, I need an assertion (normality check) that JuMP.value and JuMP.objective_bound function (←verb) properly and consistently (e.g., lb \le ub in a Min-program).

When I want to fetch dual solutions, I need to ensure the normality of JuMP.dual, provided that the termination status is OPTIMAL.

I think in the current state, it is slightly disconcerting to use JuMP.is_solved_and_feasible, just like a T-shirt that is too big for me.

More boldly speaking, I don’t need to check termination_status when the dual solutions are not desired.

The things I care about are:

  1. the validity of JuMP.value (secondarily, the validity of JuMP.objective_value)

  2. the validity of JuMP.objective_bound

  3. the validity of JuMP.dual, provided that JuMP.termination_status(model) == JuMP.OPTIMAL

  4. the overall validity (consistency). This partly means that JuMP.objective_bound \le JuMP.objective_value always holds, in a Min-program.

  1. check primal_status(model) == FEASIBLE_POINT
  2. valid if primal_status(model) == FEASIBLE_POINT. If primal status is something else, then “it depends”. There is no universal way to check a status to see if the solver found a valid bound.
  3. check dual_status(model) == FEASIBLE_POINT
  4. see (1) and (2)
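The four answers above could be collected into one helper (my own sketch, not a JuMP API):

```julia
import JuMP

# Sketch of the recipe above; `need_dual` toggles check (3).
# Not part of JuMP: the name and structure are my own.
function results_are_usable(model; need_dual::Bool = false)
    JuMP.primal_status(model) == JuMP.FEASIBLE_POINT || return false  # (1), (2)
    if need_dual
        JuMP.dual_status(model) == JuMP.FEASIBLE_POINT || return false  # (3)
    end
    return true  # consistency (4) still needs the lb <= ub comparison
end
```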

For time limits, see this discussion Support TIME_LIMIT in is_solved_and_feasible by odow · Pull Request #3915 · jump-dev/JuMP.jl · GitHub

See also our suggested workflow: Solutions · JuMP


I see. My thoughts are:
Well, technically speaking, we don’t need a middleman like JuMP.is_solved_and_feasible.
The following workflow might compete with it:

# define this JuMP's function
function solve!_and_fetch(model)
    JuMP.optimize!(model)
    # here: all the valuable info is conveyed from the solver to JuMP
    # such that JuMP is well-prepared
end
# define this JuMP's function
function improved_value(model) # in place of the existing `JuMP.value`
    if normality
        return JuMP.value(model) # returns the existing API 
    else
        error("Sorry, this info is unavailable or invalid!")
    end
end
# similarly, define improved_objective_bound::Function and improved_dual::Function in JuMP

# The following are user's code, which is concise and intelligible
JuMP.@variable(model, x >= 1)
JuMP.@objective(model, Min, x) # here finishes modeling
JuMP.solve!_and_fetch(model)
xv = JuMP.improved_value(model) # this should be 1.0

If this function is difficult to design, then probably it is counterintuitive.
From my perspective, JuMP.is_solved_and_feasible, or earlier idioms like @assert JuMP.termination_status(model) == OPTIMAL, are consequences of software design, which should be invisible from the user’s side.

We purposefully chose not to do this. We want users to feel the pain of understanding the different solver statuses, that, for example, a solver might return termination_status(model) == OPTIMAL but primal_status(model) == INFEASIBLE_POINT.

In the happy path of “solve and get result” there is

optimize!(model)
assert_is_solved_and_feasible(model)
# ... do stuff

For all other use-cases, you need to deal with the statuses.


I think this is a shortcoming of the solver developers.
From the users’ side, the solver is like a consultant.
We ask for a value, e.g., by calling JuMP.value(x); there are only 2 possible results:

  1. “Yes, we can provide a valid (good) value.”
  2. “Sorry, this info is unavailable or invalid.”

Then we ask for another value, e.g., via JuMP.objective_bound; there are also 2 results:

  1. “Yes, here is a valuable dualBound that is consistent with all the values I gave you before.”
  2. “Sorry, I fail to meet line 1 above.”

This sounds like a reliable workflow. (My humble opinion)

I find that JuMP already has part of this nice Error message integrated into its querying functions, as the following shows.

import JuMP, Ipopt
model = JuMP.Model(Ipopt.Optimizer)
JuMP.@variable(model, x >= 1)
JuMP.@objective(model, Min, x)
JuMP.optimize!(model)
JuMP.objective_bound(model) # The following ERROR is approving 👍
# ERROR: MathOptInterface.GetAttributeNotAllowed

This embodies the idea mentioned in my post above:

function improved_value(model) # in place of the existing `JuMP.value`
    if normality
        return JuMP.value(model) # returns the existing API 
    else
        error("Sorry, this info is unavailable or invalid!")
    end
end

In view of this, the JuMP.objective_bound(CR) # 1.0e100 is weird here (does Gurobi 12 report obj_value < obj_bound in a Min-program?); it doesn’t make any sense.
The expected behavior is also an ERROR message. Do you agree? @odow

Do you agree?

No. Your problem is unbounded. The primal_status is NO_SOLUTION. Querying objective_bound is undefined behavior. JuMP defaults to whatever the solver reports. Gurobi have chosen to report 1e100. That’s their choice, and we won’t decide to error instead.

julia> using JuMP, Gurobi

julia> model = Model(Gurobi.Optimizer)
Set parameter LicenseID to value 890341
A JuMP Model
├ solver: Gurobi
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x[1:3])
3-element Vector{VariableRef}:
 x[1]
 x[2]
 x[3]

julia> set_lower_bound(x[1], exp(1)); set_upper_bound(x[1], exp(2))

julia> set_lower_bound(x[3], 1e-6)

julia> @objective(model, Min, -x[1] * x[2] - log(x[3]) + x[2]^2/4)
((-x[1]*x[2]) - log(x[3])) + (0.25 x[2]²)

julia> optimize!(model)
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (mac64[x86] - Darwin 24.1.0 24B83)

CPU model: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 0 rows, 5 columns and 0 nonzeros
Model fingerprint: 0xede27ada
Model has 1 general nonlinear constraint (3 nonlinear terms)
Variable types: 5 continuous, 0 integer (0 binary)
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e-06, 7e+00]
  RHS range        [0e+00, 0e+00]
Presolve model has 1 nlconstr
Added 5 variables to disaggregate expressions.
Presolve time: 0.00s
Presolved: 15 rows, 11 columns, 32 nonzeros
Presolved model has 2 bilinear constraint(s)
Presolved model has 1 nonlinear constraint(s)

Solving non-convex MINLP

Variable types: 11 continuous, 0 integer (0 binary)
Found heuristic solution: objective -11.9929554

Root relaxation: unbounded, 1 iterations, 0.00 seconds (0.00 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0  unbounded    0       -11.99296          -      -     -    0s
     0     0  postponed    0       -11.99296          -      -     -    0s
     0     0  postponed    0       -11.99296          -      -     -    0s
     0     2  postponed    0       -11.99296          -      -     -    0s

Explored 3 nodes (13 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 8 (of 8 available processors)

Solution count 1: -11.993 

Model is unbounded
Best objective -1.199295535608e+01, best bound -, gap -

User-callback calls 131, time in user-callback 0.00 sec

julia> solution_summary(model)
* Solver : Gurobi

* Status
  Result count       : 1
  Termination status : DUAL_INFEASIBLE
  Message from the solver:
  "Model was proven to be unbounded. Important note: an unbounded status indicates the presence of an unbounded ray that allows the objective to improve without limit. It says nothing about whether the model has a feasible solution. If you require information on feasibility, you should set the objective to zero and reoptimize."

* Candidate solution (result #1)
  Primal status      : NO_SOLUTION
  Dual status        : NO_SOLUTION
  Objective value    : -1.19930e+01
  Objective bound    : 1.00000e+100
  Dual objective value : 1.00000e+100

* Work counters
  Solve time (sec)   : 7.37810e-03
  Simplex iterations : 13
  Barrier iterations : 0
  Node count         : 3

And Ipopt errors because it cannot ever compute a valid objective bound. It is not implemented.

Gurobi 12 is somewhat insane. If the staff of Gurobi has a slight bit of math knowledge, they should at least report -1e100, since it’s a Min-program.

This is acceptable.

If the staff of Gurobi has a slight bit of math knowledge

They do :smile:

they should at least report -1e100,

But this isn’t correct either. The problem is unbounded. There is no valid objective bound.


Actually, in the field of optimization there is a set of conventions, e.g. Rockafellar’s Variational Analysis (2009) or Bertsekas’s Convex Optimization Theory (2009).

For a Min-program, Gurobi should initially set lb = -\infty, ub = \infty.
If Gurobi could prove INFEASIBLE, then it should (in the math sense) set objective_bound = Inf, indicating lb = ub = \infty.
If Gurobi could prove unboundedness, then it should (in the math sense) set objective_value = -Inf, indicating lb = ub = -\infty. In case -Inf is not defined in some programming language, the value -1e100 makes some sense. But in no way should Gurobi report any value > -\infty, which would essentially indicate that the problem is bounded below.
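Under this convention, the status-to-bounds mapping for a Min-program would be as follows (purely illustrative; no solver follows this today):

```julia
# Proposed (lb, ub) convention for a Min-program, keyed by proven status.
# Illustrative only; this is my proposal, not what any solver reports.
function proposed_bounds(status::Symbol)
    status === :INFEASIBLE && return (Inf, Inf)    # lb = ub = +∞
    status === :UNBOUNDED  && return (-Inf, -Inf)  # lb = ub = -∞
    return (-Inf, Inf)                             # nothing proven yet
end

proposed_bounds(:UNBOUNDED)  # (-Inf, -Inf)
```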

FWIW and to add my own data point: I think this line of reasoning makes sense, even from a user’s perspective. And as a user I’m perfectly happy with it.


Thanks for sharing your opinion.
I recall that when I started to learn JuMP, this API hadn’t been invented yet.
IIRC, it was @assert termination_status(model) == OPTIMAL
And I had been using that alone, until I learned this new API.
For me now, it just becomes
JuMP.assert_is_solved_and_feasible(model; allow_local = false);
A new consensus.

import JuMP, Gurobi
function objf(x, y) return (y - 2)^2/4 - x * y end
B = JuMP.Model(() -> Gurobi.Optimizer());
JuMP.@variable(B, 0 <= x <= 1);
JuMP.@variable(B, 0 <= y);
JuMP.@objective(B, Min, objf(x, y));
JuMP.optimize!(B); # see the following logging
JuMP.solution_summary(B) # see the following printing

Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))

CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 0 rows, 2 columns and 0 nonzeros
Model fingerprint: 0x5b406e53
Model has 2 quadratic objective terms
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  Objective range  [1e+00, 1e+00]
  QObjective range [5e-01, 2e+00]
  Bounds range     [1e+00, 1e+00]
  RHS range        [0e+00, 0e+00]

Continuous model is non-convex -- solving as a MIP

Found heuristic solution: objective 1.0000000
Found heuristic solution: objective 0.8925000
Presolve removed 0 rows and 2 columns
Presolve time: 0.00s
Presolve: All rows and columns removed

Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 8 available processors)

Solution count 3: -3 0.8925 1 
No other solutions better than -3

Optimal solution found (tolerance 1.00e-04)
Best objective -3.000000000000e+00, best bound -3.000000000000e+00, gap 0.0000%

User-callback calls 90, time in user-callback 0.00 sec

* Solver : Gurobi

* Status
  Result count       : 3
  Termination status : OPTIMAL
  Message from the solver:
  "Model was solved to optimality (subject to tolerances), and an optimal solution is available." 

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : -3.00000e+00
  Objective bound    : -3.00000e+00
  Relative gap       : 0.00000e+00
  Dual objective value : -3.00000e+00

* Work counters
  Solve time (sec)   : 0.00000e+00
  Simplex iterations : 0
  Barrier iterations : 0
  Node count         : 0

I find that JuMP’s solution_summary appears to be seriously incorrect.
As a comparison, Gurobi’s logging is correct:

Optimal solution found (tolerance 1.00e-04)
Best objective -3.000000000000e+00, best bound -3.000000000000e+00, gap 0.0000%

Gurobi’s is correct in that it singles out the vital standalone quantity best bound, which in the math sense is called the “best dual bound”. This quantity comes from the dual side; it is not related to any primal-side solution.
However, JuMP.solution_summary did NOT furnish this vital quantity separately. Or rather, it incorrectly put it in the block entitled Candidate solution (result #1).

The Objective bound associated with Candidate solution (result #1) is nonsensical. (It is not that the value 1e100 is nonsensical; it is that the association is incorrect.) (Did I express myself well?)

It’s this one. But it was a semi-intentional decision.

Would you prefer that we printed:

* Solver : Gurobi

* Status
  Result count       : 3
  Termination status : OPTIMAL
  Objective bound    : -3.00000e+00
  Message from the solver:
  "Model was solved to optimality (subject to tolerances), and an optimal solution is available."

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : -3.00000e+00
  Relative gap       : 0.00000e+00
  Dual objective value : -3.00000e+00

* Work counters
  Solve time (sec)   : 3.03030e-04
  Simplex iterations : 0
  Barrier iterations : 0
  Node count         : 0

But then the objective bound is quite far away from the other objective attributes when reading.

The updated one makes clear sense!
The original (existing) layout is extremely misleading.
The best dual bound is an indispensable quantity on the dual side.
In no way should it be mixed up with the first (result #1) primal solution.
This best dual bound is valid for all (#1, #2, #3, …) primal solutions.

It would also be better to add the OBJ_SENSE (Min, Max, or Feasibility) to the solution summary.

You see,
In a Min-program, best dual bound is lb, best primal objective value reached by a feasible solution is ub.
In a Max-program, best dual bound is ub, best primal objective value reached by a feasible solution is lb.
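The two cases above can be sketched as one helper (my own, not a JuMP API): given the objective sense, orient the dual bound and the best primal value into (lb, ub).

```julia
# Orient (dual_bound, best_primal) into (lb, ub) depending on objective sense.
# Illustrative helper, not part of JuMP.
function oriented_bounds(sense::Symbol, dual_bound::Float64, best_primal::Float64)
    if sense === :Min
        return (dual_bound, best_primal)  # lb = dual bound, ub = primal value
    else  # :Max
        return (best_primal, dual_bound)  # lb = primal value, ub = dual bound
    end
end

oriented_bounds(:Max, 5.0, 4.0)  # (4.0, 5.0)
```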