Would adding an "optimize_assert_optimal" functionality to JuMP be handier?

Compared with the current “is_solved_and_feasible” functionality, I think the one suggested in the title would sometimes be more convenient.

In practice (particularly in a (Mi)LP context), we almost certainly want our model to be solved to OPTIMAL.
If the status is OPTIMAL, we want it to be quiet. (Then we just fetch values and proceed.)
If it is not, we want a red ERROR message to be displayed, so that we can check the termination status to facilitate debugging.
Moreover, we want the name of the model to be reported, because there is usually more than one model when we write algorithms. For example, in a while-loop, we might optimize 2 models alternately. Printing the name of the model would be extremely helpful.

Therefore, I wrote an “optimise_assert_optimal” macro and tested its functionality. I don’t know whether this is appropriate, since I seldom use metaprogramming in Julia. But the test results successfully exhibit the intended behaviour.

import JuMP
import Gurobi
GRB_ENV = Gurobi.Env()
function GurobiDirectModel() return JuMP.direct_model(Gurobi.Optimizer(GRB_ENV)) end
function optimise(model) return (JuMP.optimize!(model); JuMP.termination_status(model)) end
macro optimise_assert_optimal(model)
    name = string(model)
    return quote
        status = optimise($model)
        status == JuMP.OPTIMAL || error($name * ": " * string(status))
    end
end

# test begins
model_123 = GurobiDirectModel(); JuMP.set_silent(model_123);
model_456 = GurobiDirectModel(); JuMP.set_silent(model_456);
JuMP.@variable(model_123, x <= 0)
JuMP.@objective(model_123, Max, x)
JuMP.@variable(model_456, y <= 0)
@optimise_assert_optimal(model_123) # be silent because optimality
@optimise_assert_optimal(model_456) # be silent because optimality
@info("in model_123, x = $(JuMP.value(x))")
@info("in model_456, y = $(JuMP.value(y))")
JuMP.@objective(model_123, Min, x)
JuMP.set_lower_bound(y, 1)
@optimise_assert_optimal(model_123) # ERROR: model_123: DUAL_INFEASIBLE
@optimise_assert_optimal(model_456) # ERROR: model_456: INFEASIBLE

I added is_solved_and_feasible for a few reasons:

  • Many questions on this forum arose because people did not check the termination status before querying results
  • Checking the termination status is a bit subtle, because users need to understand OPTIMAL and LOCALLY_SOLVED, and also the primal (and dual) statuses.
  • For the basic use case, it saves quite a few lines of code, and it makes the resulting code easier to understand.
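For reference, a minimal sketch of the basic pattern that is_solved_and_feasible enables (HiGHS is used here only as an illustrative solver; any LP solver would do):

```julia
import JuMP
import HiGHS

# Minimal sketch of the intended usage pattern.
model = JuMP.Model(HiGHS.Optimizer)
JuMP.set_silent(model)
JuMP.@variable(model, x <= 1)
JuMP.@objective(model, Max, x)
JuMP.optimize!(model)
if JuMP.is_solved_and_feasible(model)
    @info "x = $(JuMP.value(x))"   # safe to query results here
else
    error("model: $(JuMP.termination_status(model))")
end
```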

I don’t think we will add this macro. The first reason is that it hides too much from the user while saving too few lines of code. Your line:

@optimise_assert_optimal(model_123) # be silent because optimality

is essentially just:

JuMP.optimize!(model_123)
if (status = JuMP.termination_status(model_123)) != JuMP.OPTIMAL
    error("model_123: $status")
end

A second reason is that, while macros are cool and JuMP uses them a lot, they’re somewhat over-used and abused by Julia programmers. There is almost never a good reason to write a macro for user-code.

For example, at the cost of an extra argument, you could use a function:

function optimize!_and_assert_optimality(model, name)
    JuMP.optimize!(model)
    if (status = JuMP.termination_status(model)) != JuMP.OPTIMAL
        error("$name: $status")
    end
end

optimize!_and_assert_optimality(model_123, "model_123")

One place where the function approach would be better is:

models = Dict("A" => model_123, "B" => model_456)
for (name, model) in models
    @optimise_assert_optimal model
    # or
    optimize!_and_assert_optimality(model, name)
end

Your macro approach will just print ERROR: model: status for both, whereas the function will print ERROR: A: status and ERROR: B: status.

That being said, a great thing about JuMP is that because it is embedded in Julia you are more than welcome to write and use this macro yourself! We just won’t be adding it to JuMP.

On a more technical note, one reason that I strongly, strongly discourage you from writing your own macro is that they can be tricky to get correct. Consider what happens if you use @optimise_assert_optimal in a function:

julia> import JuMP

julia> import Gurobi

julia> function optimise(model) return (JuMP.optimize!(model); JuMP.termination_status(model)) end
optimise (generic function with 1 method)

julia> macro optimise_assert_optimal(model)
           name = string(model)
           return quote
               status = optimise($model)
               status == JuMP.OPTIMAL || error($name * ": " * string(status))
           end
       end
@optimise_assert_optimal (macro with 1 method)

julia> function foo()
           model_123 = JuMP.Model(Gurobi.Optimizer)
           @optimise_assert_optimal(model_123) # be silent because optimality
       end
foo (generic function with 1 method)

julia> foo()
Set parameter LicenseID to value 890341
ERROR: UndefVarError: `model_123` not defined
Stacktrace:
 [1] macro expansion
   @ ./REPL[4]:4 [inlined]
 [2] foo()
   @ Main ./REPL[5]:3
 [3] top-level scope
   @ REPL[6]:1

To fix this, you need to use esc:

julia> import JuMP

julia> import Gurobi

julia> macro optimise_assert_optimal(model)
           name_str = string(model, ": ")
           code = quote
               JuMP.optimize!($model)
               if (status = JuMP.termination_status($model)) != JuMP.OPTIMAL
                   error($name_str * string(status))
               end
           end
           return esc(code)
       end
@optimise_assert_optimal (macro with 1 method)

julia> function foo()
           model_123 = JuMP.Model(Gurobi.Optimizer)
           @optimise_assert_optimal(model_123) # be silent because optimality
       end
foo (generic function with 1 method)

julia> foo()
Set parameter LicenseID to value 890341
Gurobi Optimizer version 12.0.0 build v12.0.0rc1 (mac64[x86] - Darwin 24.1.0 24B83)

CPU model: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 0 rows, 0 columns and 0 nonzeros
Model fingerprint: 0xf9715da1
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  Objective range  [0e+00, 0e+00]
  Bounds range     [0e+00, 0e+00]
  RHS range        [0e+00, 0e+00]
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    0.0000000e+00   0.000000e+00   0.000000e+00      0s

Solved in 0 iterations and 0.00 seconds (0.00 work units)
Optimal objective  0.000000000e+00

User-callback calls 23, time in user-callback 0.00 sec

Yes, I came up with this function as well. But I thought it would be better to eliminate the extra argument, hence the proposal.
The metaprogramming method is indeed tricky, though.
It seems nearly all user code needs to use esc, at the expense of having to be wary of variable capture.
I’ll use functions instead, thanks.
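For the record, one way to limit that risk (an untested sketch on my part) is to escape only the user-supplied expression, so that variables the macro introduces, like status, stay hygienic:

```julia
import JuMP

macro optimise_assert_optimal(model)
    name = string(model)
    return quote
        # Escape only the user-supplied model expression; `status` remains
        # hygienic, so it cannot clash with variables at the call site.
        JuMP.optimize!($(esc(model)))
        local status = JuMP.termination_status($(esc(model)))
        status == JuMP.OPTIMAL || error($name * ": " * string(status))
    end
end
```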

This is an actual experiment I’m doing currently, and JuMP gives me a Warning message without mentioning the name of the model concerned. Therefore I am not able to locate the source of this Warning. @odow

julia> while true
           opt_ass_opt(trh, "trh")
           x = jvr6.(trh_x); oLh_check = JuMP.value(oLh); lb = JuMP.objective_bound(trh);
           JuMP.fix.(tr2_x, x) # 🖌️
           proper_Q = get_proper_Q(x) # ✅
           updVec .= false
           for j in 1:proper_Q[1]
               Y = proper_Q[3][:, :, j]
               JuMP.fix.(tr2_Y, Y) # 🖌️
               opt_ass_opt(tr2, "tr2")
               b2 = jvr6.(tr2_b2); oΛ2_check = JuMP.value(oΛ2) # 🥑✂️
               set_argZ_objective(x, Y, b2); opt_ass_optortime(argZ, "argZ")
               Z_new = jvr6.(argZ_Z)
               cn, px, pY, pb = JuMP.value(argZ_cn), jvr6.(argZ_c_x), jvr6.(argZ_c_Y), jvr6.(argZ_c_b2)
               if oΛ2_check < cn + ip(px, x) + ip(pY, Y) + ip(pb, b2) - UPDTOL
                   add_cut_for_oΛ2(cn, px, pY, pb)
                   updVec[2] = true
               end
               recruit(Z_list, Z_new)
               proper_P = get_proper_P(x, Y, Z_list) # ✅
               add_cut_for_oLh(x, proper_Q, proper_P, oLh_check) && (updVec[1] = true)
           end
           @info "$updVec, lb = $lb"
           all(updVec .== false) && break
       end
[ Info: Bool[1, 1], lb = -208.90723332481141
┌ Warning: The addition operator has been used on JuMP expressions a large number of times. This warning is safe to ignore but may indicate that model generation is slower than necessary. For performance reasons, you should not add expressions in a loop. Instead of x += y, use add_to_expression!(x,y) to modify x in place. If y is a single variable, you may also use add_to_expression!(x, coef, y) for x += coef*y.
└ @ JuMP K:\judepot1112\packages\JuMP\i68GU\src\operators.jl:282
[ Info: Bool[1, 1], lb = 0.6
[ Info: Bool[1, 1], lb = 1.23
[ Info: Bool[1, 1], lb = 1.23
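For context, my understanding is that the warning refers to patterns like the following toy sketch (unrelated to my actual model), where repeated += on a JuMP expression allocates a new expression each iteration:

```julia
import JuMP

model = JuMP.Model()
JuMP.@variable(model, x[1:1000])

# Repeated `+=` on a JuMP expression builds a fresh expression every
# iteration, which is what triggers the warning; add_to_expression!
# modifies the expression in place instead.
expr = JuMP.AffExpr(0.0)
for i in 1:1000
    # expr += x[i]                       # slow: allocates a new expression
    JuMP.add_to_expression!(expr, x[i])  # fast: in-place update
end
```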

As you can see (restricted to the code I’ve posted here), there are three models: trh, tr2, and argZ.
For more information about this Warning, you can check this post if you have time. Thanks.