# Undetected numerical issues with GLPK

I’ve run across an issue in some unit tests (of a JuMP model) that was hard to track down. The tests only use GLPK, since it is easy to integrate into the test setup. I noticed that models with numerical issues seem to be solved incorrectly, and that this is either not properly reported by GLPK or not properly picked up by the GLPK wrapper. I could not figure out how to extract the full GLPK log, so I am not sure which of the two is the culprit. I’d be very thankful for any pointers on where to dig further!
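For context, GLPK.jl forwards raw simplex-control parameters by name, so the full solver log should be obtainable by setting `msg_lev` before solving (a sketch; the parameter name is assumed from the `msg_lev` field of GLPK's `glp_smcp` C struct):

```julia
import GLPK
using JuMP

m = Model(GLPK.Optimizer)
# "msg_lev" maps to the msg_lev field of GLPK's glp_smcp control struct;
# GLP_MSG_ALL asks the simplex solver to print its full log to stdout.
set_optimizer_attribute(m, "msg_lev", GLPK.GLP_MSG_ALL)
```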

I’ve managed to reduce it to the following MWE:

``````julia
import GLPK
using JuMP

m = Model(GLPK.Optimizer)
@variable(m, x, lower_bound=0)
@variable(m, y, lower_bound=0)
@variable(m, z, lower_bound=0)

@constraint(m, x + y - 5 >= -z)
@constraint(m, x + y - 5 <= z)

@objective(m, Min, 1.2*x + 1.1*y + 1e9*z)

optimize!(m)
solution_summary(m, verbose=true)
``````

This leads to the following output from `solution_summary`, showing the erroneous solution (the correct solution would be `x=0, y=5, z=0` with objective `5.5`):

``````
* Solver : GLPK

* Status
Termination status : OPTIMAL
Primal status      : FEASIBLE_POINT
Dual status        : FEASIBLE_POINT
Has duals          : true
Message from the solver:
"Solution is optimal"

* Candidate solution
Objective value      : 6.0
Objective bound      : -Inf
Dual objective value : 6.0
Primal solution :
x : 5.0
y : 0.0
z : 0.0
Dual solution :

* Work counters
Solve time (sec)   : 0.01000
``````

Using Gurobi with the same model correctly calculates the solution (and, if the coefficient of `z` is further increased to `1e10`, it also picks up the bad scaling of the coefficients in the objective function: `Warning: Model contains large objective coefficients`).

Other, more complicated models of the same type solved with GLPK result in duality gaps (sometimes even `dual > primal`):

``````
Objective value      : 883.7428721483071
Dual objective value : 354.5757237030207
``````

``````
Objective value      : 17.0
Dual objective value : 17.0625
``````

Additional notes: (1) Yes, I am aware that scaling issues like this should be prevented during model creation. But when such a model is passed to a solver, potential issues in the solution should, if possible, be detected and reported back. If GLPK is indeed wrongly reporting as optimal a solution that (for an LP) violates strong duality, this could be caught (which is probably not JuMP’s job, but still). (2) This kind of problem is indeed sometimes necessary (and not arbitrarily constructed), since it encodes the condition `x + y = 5` while allowing exact equality to be violated (via `z`).
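As a stopgap for the unit tests, one could check strong duality manually after each solve and flag a large relative gap. A sketch (the helper name `check_strong_duality` and the tolerance are made up for illustration):

```julia
using JuMP

# Hypothetical helper: warn if the primal and dual objectives of a solved
# LP disagree by more than a relative tolerance (strong duality violated).
function check_strong_duality(m::Model; rtol = 1e-6)
    p = objective_value(m)
    d = dual_objective_value(m)
    gap = abs(p - d) / max(1.0, abs(p))
    gap > rtol && @warn "Large duality gap; solution may be inaccurate" p d gap
    return gap
end
```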


This is just a tolerance issue. If you tighten the dual feasibility tolerance it solves okay. GLPK would ideally warn that it hasn’t found the correct solution, but it worked as intended given the provided settings.

``````julia
import GLPK
using JuMP
m = Model(GLPK.Optimizer)
set_optimizer_attribute(m, "tol_dj", 1e-9)
@variable(m, x, lower_bound=0)
@variable(m, y, lower_bound=0)
@variable(m, z, lower_bound=0)
@constraint(m, x + y - 5 >= -z)
@constraint(m, x + y - 5 <= z)
@objective(m, Min, 1.2*x + 1.1*y + 1e9 * z)
optimize!(m)
julia> value.([x,y,z])
3-element Vector{Float64}:
 0.0
 5.0
 0.0
``````

In general, you should not expect solvers to be bug-free. Even Gurobi and CPLEX will, from time to time, return suboptimal (or even infeasible) solutions as optimal. This can be due to a bug in the solver or due to numerical issues.

You may want to try HiGHS as an open-source MILP solver for your testing.
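For the MWE above, swapping in HiGHS is essentially a one-line change (assuming the HiGHS.jl package is installed):

```julia
import HiGHS
using JuMP

# Same model as in the MWE, solved with HiGHS instead of GLPK
m = Model(HiGHS.Optimizer)
@variable(m, x, lower_bound = 0)
@variable(m, y, lower_bound = 0)
@variable(m, z, lower_bound = 0)
@constraint(m, x + y - 5 >= -z)
@constraint(m, x + y - 5 <= z)
@objective(m, Min, 1.2 * x + 1.1 * y + 1e9 * z)
optimize!(m)
```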
