AssertionError in Gurobi.jl with OptimalityTarget = 1 on simple nonlinear model (Gurobi 13)

Hi all,

I’m getting an AssertionError when using Gurobi 13 through JuMP with a very small nonlinear model, but only when I set OptimalityTarget = 1.

Minimal working example:

using JuMP, Gurobi

m = Model(Gurobi.Optimizer)
set_optimizer_attribute(m, "OptimalityTarget", 1)

@variable(m, 0 <= x <= 10)
@objective(m, Min, (x - 4)^2 - sin(x))

optimize!(m)

This produces:

Set parameter OptimalityTarget to value 1
Gurobi Optimizer version 13.0.0 build v13.0.0rc1 (mac64[arm] - Darwin 25.1.0 25B78)

CPU model: Apple M4 Pro
Thread count: 14 physical cores, 14 logical processors, using up to 14 threads

Non-default parameters:
OptimalityTarget  1
NonConvex  2

Optimize a model with 0 rows, 3 columns and 0 nonzeros (Min)
Model fingerprint: 0x46697c78
Model has 1 linear objective coefficients
Model has 1 general nonlinear constraint (2 nonlinear terms)
Variable types: 3 continuous, 0 integer (0 binary)
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+01, 1e+01]
  RHS range        [0e+00, 0e+00]
  NLCon coe range  [8e+00, 2e+01]
Presolve time: 0.00s
Presolved: 0 rows, 3 columns, 0 nonzeros
Presolved model has 1 nonlinear constraint(s)

Solving NLP to local optimality

... (barrier iterations) ...

NL barrier solved model in 11 iterations and 0.00 seconds (0.00 work units)
First-order optimal solution
Solution objective 5.995500436724e-01

User-callback calls 130, time in user-callback 0.00 sec
ERROR: AssertionError: 1 <= valueP[] <= 17
Stacktrace:
  [1] _raw_status(model::Gurobi.Optimizer)
    @ Gurobi ~/.julia/packages/Gurobi/Wo3Rk/src/MOI_wrapper/MOI_wrapper.jl:2921
  [2] get
    @ ~/.julia/packages/Gurobi/Wo3Rk/src/MOI_wrapper/MOI_wrapper.jl:2932 [inlined]
  [3] get(model::Gurobi.Optimizer, attr::MathOptInterface.PrimalStatus)
    @ Gurobi ~/.julia/packages/Gurobi/Wo3Rk/src/MOI_wrapper/MOI_wrapper.jl:2944
  [4] optimize!(model::Gurobi.Optimizer)
    @ Gurobi ~/.julia/packages/Gurobi/Wo3Rk/src/MOI_wrapper/MOI_wrapper.jl:2828
  [5] optimize!
    @ ~/.julia/packages/MathOptInterface/tVdNJ/src/Bridges/bridge_optimizer.jl:367 [inlined]
  [6] optimize!
    @ ~/.julia/packages/MathOptInterface/tVdNJ/src/MathOptInterface.jl:122 [inlined]
  [7] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{…})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/tVdNJ/src/Utilities/cachingoptimizer.jl:370
  [8] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::@Kwargs{})
    @ JuMP ~/.julia/packages/JuMP/e83v9/src/optimizer_interface.jl:609
  [9] optimize!(model::Model)
    @ JuMP ~/.julia/packages/JuMP/e83v9/src/optimizer_interface.jl:560
 [10] top-level scope
    @ ex_nlp_Gurobi.jl:9

So Gurobi itself solves the NLP to local optimality, but Gurobi.jl then crashes in _raw_status while translating the Gurobi status code into an MOI status (the 1 <= valueP[] <= 17 assertion). My guess is that with OptimalityTarget = 1, Gurobi 13 returns a status code outside the 1..17 range the wrapper currently knows about.
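To make the failure mode concrete, here is a tiny self-contained sketch of the kind of check that fires. The status names are the documented Gurobi status codes 1 through 17 (LOADED through MEM_LIMIT, which is the full table as of Gurobi 12); whether Gurobi 13 actually returns a new code above 17 for this solve is my guess, not something I have confirmed:

```julia
# Sketch of the status translation that asserts in Gurobi.jl.
# Codes 1..17 are Gurobi's documented status codes through Gurobi 12.
const STATUS_NAMES = [
    "LOADED", "OPTIMAL", "INFEASIBLE", "INF_OR_UNBD", "UNBOUNDED",
    "CUTOFF", "ITERATION_LIMIT", "NODE_LIMIT", "TIME_LIMIT",
    "SOLUTION_LIMIT", "INTERRUPTED", "NUMERIC", "SUBOPTIMAL",
    "INPROGRESS", "USER_OBJ_LIMIT", "WORK_LIMIT", "MEM_LIMIT",
]

function raw_status(code::Integer)
    # This is the assertion that fails: any code the wrapper does not
    # know about (e.g. a new Gurobi 13 status) throws an AssertionError.
    @assert 1 <= code <= 17
    return STATUS_NAMES[code]
end
```

If Gurobi 13 added a new terminal status for "solved to local optimality", any code of 18 or above would reproduce exactly the AssertionError in the stacktrace.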

If I remove the OptimalityTarget attribute, the same model runs fine:

using JuMP, Gurobi

m = Model(Gurobi.Optimizer)

@variable(m, 0 <= x <= 10)
@objective(m, Min, (x - 4)^2 - sin(x))

optimize!(m)

This gives a normal solve with:

Gurobi Optimizer version 13.0.0 build v13.0.0rc1 (mac64[arm] - Darwin 25.1.0 25B78)

CPU model: Apple M4 Pro
Thread count: 14 physical cores, 14 logical processors, using up to 14 threads

Non-default parameters:
NonConvex  2

Optimize a model with 0 rows, 3 columns and 0 nonzeros (Min)
...
Optimal solution found (tolerance 1.00e-04)
Best objective 5.995492605723e-01, best bound 5.995370086158e-01, gap 0.0020%

User-callback calls 153, time in user-callback 0.00 sec

Is OptimalityTarget = 1 supposed to be supported for nonlinear models in the Gurobi–JuMP integration, or is this parameter only meaningful for certain problem classes?
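In the meantime, here is the workaround I'm experimenting with: since the solve itself finishes before the status translation throws, catch the AssertionError and read the raw Status attribute through the C API. The unsafe_backend / GRBgetintattr usage is my reading of the wrapper internals, so treat this as a sketch rather than a supported recipe (it also needs a licensed Gurobi install to run):

```julia
using JuMP, Gurobi

m = Model(Gurobi.Optimizer)
set_optimizer_attribute(m, "OptimalityTarget", 1)

@variable(m, 0 <= x <= 10)
@objective(m, Min, (x - 4)^2 - sin(x))

try
    optimize!(m)
catch err
    err isa AssertionError || rethrow()
    # The Gurobi solve has already completed at this point; only the
    # status translation in the wrapper failed. Read the raw status
    # code directly from the underlying Gurobi model.
    grb = unsafe_backend(m)          # the inner Gurobi.Optimizer
    status = Ref{Cint}()
    Gurobi.GRBgetintattr(grb, "Status", status)
    @info "raw Gurobi status code" status[]
end
```

Note that solution queries like value(x) will likely hit the same assertion (they go through the same status machinery), so this only recovers the raw status code, not the solution.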

We still need to update Gurobi.jl to reflect some of the changes in Gurobi 13. We had a problem accessing the Gurobi binaries so that’s the current blocker. I may not get a chance to look at this until next week, sorry.

Thanks for the quick reply and for the context. I’ll stick with the default/global setting for now.