Scaling both primal and dual feasibility tolerances according to the size of the problem


I have JuMP (MILP) models usually solved by HiGHS. Since some of my models are pretty constrained and have a lot of variables, some numerical issues arose and made some problems infeasible. I hence had to cross swords with the tolerance settings. Actually, I use the primal_feasibility_tolerance described here.

Then, the following questions come to mind:

  1. Should the dual_feasibility_tolerance attribute be set to the same value as primal_feasibility_tolerance?
  2. Is there a way to scale tolerance prior to the solving process by using the information from the problem / the data?

Thanks in advance


Regarding your second question:
You could set the tolerance with something like

set_optimizer_attribute(model, "primal_feasibility_tolerance", my_tolerance)

after adding variables and constraints to the model, but before solving it. The variable my_tolerance could be set via some function taking the number of added variables/constraints as arguments (or whatever problem data you want to consider).
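A minimal sketch of that idea (the function `scaled_tolerance` and its scaling rule are purely illustrative, not an official recommendation; 1e-7 is HiGHS's default primal feasibility tolerance):

```julia
using JuMP, HiGHS

# Hypothetical heuristic: loosen the tolerance slowly as the model grows.
function scaled_tolerance(model::Model; base = 1e-7)
    n = num_variables(model)
    m = num_constraints(model; count_variable_in_set_constraints = false)
    return base * max(1.0, log10(10 + n + m))
end

model = Model(HiGHS.Optimizer)
@variable(model, x[1:100] >= 0)
@constraint(model, sum(x) <= 1)
@objective(model, Max, sum(x))

# Set the tolerance after building the model, before solving it.
set_optimizer_attribute(model, "primal_feasibility_tolerance",
                        scaled_tolerance(model))
optimize!(model)
```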

It seems, however, strange to me that the tolerance setting causes infeasibility in a “normal” solving process.

  1. Not necessarily. But if you’re modifying the tolerances you should have a good understanding of what they represent and why the new value is okay.
  2. In general, needing to change the tolerances is a sign that your model has numerical issues. You’re almost always better off fixing the model, rather than changing the solver.

See the JuMP documentation's tutorial on tolerances and numerical issues for lots of tips and suggestions.

Common sources are very large or very small coefficients in your model. Try to minimize the difference between the magnitudes of the coefficients in your model, for example by changing the units of the input data and decision variables.
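For instance (a made-up budget constraint to illustrate the unit change, not code from the original model):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x[1:2] >= 0)

# Poorly scaled: measuring money in dollars mixes coefficients of
# order 1e6 with a right-hand side of order 1e9 in one constraint:
#     2.0e6 * x[1] + 5.0e6 * x[2] <= 1.0e9

# Better: measure money in millions of dollars, so the same
# constraint only spans magnitudes 1e0 to 1e3:
@constraint(model, 2.0x[1] + 5.0x[2] <= 1_000.0)
```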