I have JuMP (MILP) models that I usually solve with HiGHS. Since some of my models are heavily constrained and have many variables, numerical issues arose and made some problems infeasible. I therefore had to wrestle with the tolerance settings; in particular, I use the primal_feasibility_tolerance described here.
This raises the following questions:
Should the dual_feasibility_tolerance attribute be set to the same value as primal_feasibility_tolerance?
Is there a way to scale tolerance prior to the solving process by using the information from the problem / the data?
For example, the tolerance could be set after adding variables and constraints to the model, but before solving it. A variable my_tolerance could be computed by some function taking the number of added variables/constraints as arguments (or whatever problem data you want to consider).
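A minimal sketch of this idea (assuming HiGHS as the solver; the scaling rule itself is a made-up heuristic for illustration, not an established recipe):

```julia
using JuMP, HiGHS

# Hypothetical heuristic: loosen the tolerance slightly as the model grows.
# This formula is illustrative only; there is no standard rule for this.
function my_tolerance(model)
    n = num_variables(model) +
        num_constraints(model; count_variable_in_set_constraints = false)
    return min(1e-5, 1e-7 * sqrt(n))
end

model = Model(HiGHS.Optimizer)
@variable(model, x[1:100] >= 0)
@constraint(model, sum(x) <= 1)
@objective(model, Max, sum(x))

# Set the solver tolerance after building the model, before optimize!
set_attribute(model, "primal_feasibility_tolerance", my_tolerance(model))
optimize!(model)
```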
It seems strange to me, however, that the tolerance setting causes infeasibility in a “normal” solving process.
Not necessarily. But if you’re modifying the tolerances you should have a good understanding of what they represent and why the new value is okay.
In general, needing to change the tolerances is a sign that your model has numerical issues. You’re almost always better off fixing the model, rather than changing the solver.
Common sources are very large or very small coefficients in your model. Try to reduce the spread in magnitude of the coefficients in your model, for example by changing the units of the input data and decision variables.