Gurobi's tolerance on convexity in QCP is about `1e-13`

Following this topic

I ran the following test to verify the claim in the title:

using JuMP, Gurobi

function test_convexity(d) # if no error is thrown, Gurobi deems the model convex
    model = Model(Gurobi.Optimizer)
    @variable(model, 1 - d <= x <= 1 + d)
    @variable(model, y)
    @constraint(model, x * y <= 1)
    set_attribute(model, "QCPDual", 1)
    optimize!(model)
end

test_convexity((1e-13 + 1e-14) / 2) # errors: deemed non-convex
test_convexity(1e-13 / 2) # solves: deemed convex

Coefficients less than 1e-13 are regarded as zero.

Could you leave a pointer to the Gurobi docs? :slightly_smiling_face: I haven’t found it.

In this case it’s more that a variable is resolved to a constant if the bounds differ by less than 1e-13. I don’t know if this is explicitly documented anywhere (I couldn’t find it).

I don’t think this variable (x in my first post) is simply deemed a constant.
If it were, Gurobi would solve the model as an LP and the log would show the simplex method.
But if we inspect Gurobi’s log, we find it uses barrier.
So Gurobi does not reduce x to 1; it indeed solves the model as a convex QCP (which is also easy via the barrier algorithm).

You can verify my point by inspecting the log of this 5-line program:

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
@variable(model, x == 1); @variable(model, y)
@constraint(model, x * y <= 1)
optimize!(model)

Simply speaking, Gurobi is not intelligent enough to identify it as an LP.

If you are wondering how mighty Gurobi’s presolve can be, you might be disappointed:
it apparently cannot identify a fixed variable as a constant.
To see this, add JuMP.set_attribute(model, "Presolve", 2) to my 5-line program above. The log shows it still uses the barrier algorithm (which is not supposed to be invoked on small-scale LPs); that is, Gurobi still solves that program as a convex QCP, which is not desired.

Using the default parameters:

julia> using JuMP, Gurobi

julia> model = Model(Gurobi.Optimizer)
Set parameter LicenseID to value 890341
A JuMP Model
├ solver: Gurobi
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x == 1)
x

julia> @variable(model, y)
y

julia> @constraint(model, c, x * y <= 1)
c : x*y ≤ 1

julia> @objective(model, Max, y)
y

julia> optimize!(model)
Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (mac64[x86] - Darwin 24.1.0 24B83)

CPU model: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 0 rows, 2 columns and 0 nonzeros
Model fingerprint: 0x1be0693c
Model has 1 quadratic constraint
Coefficient statistics:
  Matrix range     [0e+00, 0e+00]
  QMatrix range    [1e+00, 1e+00]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+00, 1e+00]
  RHS range        [0e+00, 0e+00]
  QRHS range       [1e+00, 1e+00]
Presolve removed 0 rows and 1 columns
Presolve time: 0.00s
Presolved: 1 rows, 1 columns, 1 nonzeros
Ordering time: 0.00s

Barrier statistics:
 AA' NZ     : 0.000e+00
 Factor NZ  : 1.000e+00
 Factor Ops : 1.000e+00 (less than 1 second per iteration)
 Threads    : 1

                  Objective                Residual
Iter       Primal          Dual         Primal    Dual     Compl     Time
   0   1.00500000e+00  1.51017673e+00  5.00e-03 5.00e-01  2.12e-01     0s
   1   9.76922662e-01  1.00464675e+00  0.00e+00 0.00e+00  2.77e-02     0s
   2   9.99966338e-01  1.00251942e+00  0.00e+00 0.00e+00  2.55e-03     0s
   3   9.99996273e-01  1.00000251e+00  0.00e+00 0.00e+00  6.24e-06     0s
   4   9.99999996e-01  1.00000000e+00  0.00e+00 0.00e+00  6.24e-09     0s

Barrier solved model in 4 iterations and 0.00 seconds (0.00 work units)
Optimal objective 9.99999996e-01


User-callback calls 45, time in user-callback 0.00 sec

shows

Presolve removed 0 rows and 1 columns
Presolve time: 0.00s
Presolved: 1 rows, 1 columns, 1 nonzeros

so Gurobi did remove the x variable.

You can also see this with

julia> begin
           model = direct_model(Gurobi.Optimizer())
           set_silent(model)
           @variable(model, x == 1)
           @variable(model, y)
           @constraint(model, c, x * y <= 1)
           @objective(model, Max, y)
           optimize!(model)
           grb = backend(model)
           presolvedP = Ref{Ptr{Cvoid}}(C_NULL)
           GRBpresolvemodel(grb, presolvedP)
           GRBwrite(presolvedP[], "presolved.lp")
           print(read("presolved.lp", String))
       end
Set parameter LicenseID to value 890341
\ Model _pre
\ LP format - for model browsing. Use MPS format to capture full model detail.
\ Signature: 0x5af67735cda1dc02
Maximize
  y
Subject To
 R0: y <= 1
Bounds
 y free
End

And we find that it still uses the barrier algorithm (which is not supposed to be invoked on small-scale LPs

You cannot assume this. In the default parameter setting, Gurobi may choose any method to solve the problem.


Another simple example showing that Gurobi’s presolve is not almighty:

import JuMP, Gurobi; GRB_ENV = Gurobi.Env()
model = JuMP.direct_model(Gurobi.Optimizer(GRB_ENV))
JuMP.set_attribute(model, "Presolve", 2)
JuMP.@variable(model, x)
JuMP.@variable(model, y)
JuMP.@constraint(model, x + y == 2)
JuMP.@objective(model, Min, x * -y)
JuMP.optimize!(model) # 🟠 nonconvex-MIP method

model_r = JuMP.direct_model(Gurobi.Optimizer(GRB_ENV)) # eliminating `y` from `model` via y = 2 - x
JuMP.@variable(model_r, x)
JuMP.@objective(model_r, Min, x * (x - 2))
JuMP.optimize!(model_r) # Convexity identified

where the log of the first solve 🟠 is

Gurobi Optimizer version 12.0.1 build v12.0.1rc0 (win64 - Windows 11.0 (26100.2))

CPU model: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Non-default parameters:
Presolve  2

Optimize a model with 1 rows, 2 columns and 2 nonzeros
Model fingerprint: 0x9e2ec78e
Model has 1 quadratic objective term
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [0e+00, 0e+00]
  QObjective range [2e+00, 2e+00]
  Bounds range     [0e+00, 0e+00]
  RHS range        [2e+00, 2e+00]

Continuous model is non-convex -- solving as a MIP

Presolve time: 0.00s
Presolved: 3 rows, 4 columns, 6 nonzeros
Presolved model has 1 bilinear constraint(s)
Warning: Model contains variables with very large bounds participating
         in product terms.
         Presolve was not able to compute smaller bounds for these variables.
         Consider bounding these variables or reformulating the model.

Variable types: 4 continuous, 0 integer (0 binary)
Found heuristic solution: objective -1.0000000
Root relaxation presolve removed 3 rows and 3 columns

Root relaxation: unbounded, 0 iterations, 0.00 seconds (0.00 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     2  postponed    0        -1.00000          -      -     -    0s
*   79     2              15      -1.0000001          -      -   0.2    0s

Explored 157 nodes (24 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 8 (of 8 available processors)

Solution count 2: -1 -1

Optimal solution found (tolerance 1.00e-04)
Best objective -9.999998658895e-01, best bound -1.000000081062e+00, gap 0.0000%

User-callback calls 484, time in user-callback 0.00 sec

We see that there is a warning, and Gurobi explored 157 nodes before returning (though the final result is correct).

You might want to reflect on the fact that the goal of presolve is not to find the smallest possible model that can be solved. It’s to help the solver, on average, solve problems that are practically relevant in a shorter amount of time.

There’s a trade-off between code implementation complexity, techniques that work on small vs large models, etc. Gurobi still solved the problem in 0.01 seconds, so there’s minimal utility in developing techniques to solve it faster.


Why does it need 157 nodes for this 2-variable case?
Does Gurobi employ binary expansion, as in the SDDiP paper?

Here is a further example, suggesting that Gurobi’s MIP method for general (nonconvex) QPs should be used cautiously and knowingly. Users had better not treat this feature as a foolproof oracle.

import JuMP, Gurobi; GRB_ENV = Gurobi.Env()
model = JuMP.direct_model(Gurobi.Optimizer(GRB_ENV))
JuMP.set_attribute(model, "Presolve", 2)
JuMP.@variable(model, x)
JuMP.@variable(model, y)
JuMP.@variable(model, z)
JuMP.@constraint(model, x + y + z == 3)
JuMP.@objective(model, Min, -(x * y + y * z + x * z))
JuMP.optimize!(model)

PS: this “nonconvex” QP is actually convex if you eliminate z via z = 3 - x - y.
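To back up that PS: substituting z = 3 - x - y into the objective gives -(x*y + y*z + x*z) = x^2 + y^2 + x*y - 3x - 3y, whose (constant) Hessian is [2 1; 1 2], which is positive definite. A minimal sketch checking this in pure Julia (no solver needed):

```julia
using LinearAlgebra

# Objective after eliminating z = 3 - x - y:
#   -(x*y + y*z + x*z) = x^2 + y^2 + x*y - 3x - 3y
f3(x, y, z) = -(x*y + y*z + x*z)
f2(x, y) = x^2 + y^2 + x*y - 3x - 3y
@assert f3(0.3, -1.7, 3 - 0.3 - (-1.7)) ≈ f2(0.3, -1.7) # spot-check the substitution

# The Hessian of f2 is constant; both eigenvalues (1 and 3) are positive,
# so the reduced QP is strictly convex.
H = [2.0 1.0; 1.0 2.0]
@show eigvals(H)
@show isposdef(H)
```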