Thank you both, odow and mtanneau, for your attention. I have written, discarded, and rewritten this post because of the maddening spiral of debugging. This post is already long, but it is considerably smaller than its previous version. I am posting it because there is a small chance it gives some insight to someone hitting the same problem.
> If I understand correctly […]
You understood correctly. I use `JuMP.fix(var, 0.0; force = true)`, and afterwards I `unfix` them and use `set_lower_bound`/`set_upper_bound` to set them back to their old values.
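For concreteness, the pattern looks like this (a minimal sketch; the variable and its old bounds are illustrative):

```julia
using JuMP

# Fix the variable to zero; `force = true` also removes existing bounds.
fix(x, 0.0; force = true)

# Later: remove the fix and restore the old bounds.
unfix(x)
set_lower_bound(x, 0.0)
set_upper_bound(x, 10.0)
```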
> Some of the presolve reductions may be invalidated by your change of bounds.
From what I see in the logs, and from what is explained in the Gurobi manual, no presolve phase happens after the first LP solve because an implicit basis is being given.
> If all the modifications you’re doing are relaxing variable bounds / adding variables (in a column-generation fashion), make sure you use primal simplex (cf. the `Method` parameter); Gurobi’s default is to use dual simplex.
I tried it, with… terrible results. I can provide times/logs if someone is interested, but the takeaway seems to be: my problem/formulation suffers from considerable primal degeneracy, in the sense that trying to solve the continuous relaxation with primal simplex produced messages like ‘introducing perturbation’, ‘switching to devex’, and ‘switching to steepest edge’, and took at least an order of magnitude longer than dual simplex.
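For anyone wanting to try the same experiment, switching Gurobi to primal simplex from JuMP is just a parameter change (a sketch; `Method = 0` selects primal simplex):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
# Gurobi's `Method` parameter: -1 = automatic, 0 = primal simplex,
# 1 = dual simplex, 2 = barrier.
set_optimizer_attribute(model, "Method", 0)
```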
> If you are deleting variables… this may be slow in itself anyway
In this step (a successive solution of growing LPs), I am not deleting variables. However, if it is not much of a bother, I would like you to expand on why it would be slow. I thought my PRs to Gurobi.jl had solved most of the slowness problems with `delete`.
> It could also be that the subsequent model is just harder than the initial one.
This is kind of a fact. I say ‘kind of’ because, yes, every “new” LP has all the not-fixed-to-zero variables of the “last” LP, and some of the previously fixed-to-zero variables are now unfixed (often with bounds like `0 <= x <= u`, where `u` is an integer between 1 and 10). However, this alone does not explain my problems; I will explain more below.
I have to reproduce a method from the literature that has a phase where it starts with a restricted LP (one that solves not a relaxation of my problem, but a relaxation of a restricted version of it) and grows it into an LP that actually solves the relaxation of my problem (whenever I say relaxation, I just mean the integer/continuous relaxation).
Because it is much more convenient, the way I used to do this was: generate the full model (which I kind of need anyway), fix all variables outside the restricted problem to zero, and do the “growing LP” by computing which variables to unfix next (using the duals of the constraints obtained by solving the current LP).
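A minimal sketch of that loop, assuming a hypothetical `select_variables_to_unfix` that stands in for the paper’s dual-based selection rule; `old_lb`/`old_ub` store the original bounds:

```julia
using JuMP

function grow_lp!(model, vars, in_restricted, old_lb, old_ub)
    # Fix everything outside the restricted problem to zero.
    for v in vars
        in_restricted(v) || fix(v, 0.0; force = true)
    end
    while true
        optimize!(model)
        # Hypothetical: pick variables to reactivate from constraint duals.
        to_unfix = select_variables_to_unfix(model)
        isempty(to_unfix) && break
        for v in to_unfix
            unfix(v)
            set_lower_bound(v, old_lb[v])
            set_upper_bound(v, old_ub[v])
        end
    end
end
```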
So, in the original paper, from 2016, they solve an instance of my problem in 17 seconds; I was solving it in 80 seconds. A lot of things happened: I thought I had discovered a bug in my implementation and ‘fixed it’, which reduced the initial number of variables of the LP model by a whole lot. However, the time went from 80 seconds to 11 minutes on that instance. In the end, I was wrong about the bug: the model should start with far more variables than I had thought necessary, so I was doing almost the right thing before the “fix”. What I actually needed to correct was the initial number of variables in a single LP solve followed by a single MIP solve, before the whole LP-growing phase. However, to work around the ‘basis disables presolve’ problem, I rewrote a good deal of the code so that it actually starts with a subset of the variables and grows the model by adding them (not just unfixing). This reduced the time from 80 seconds to 24~28 seconds.
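For anyone hitting the same ‘a supplied basis disables presolve’ issue, the column-adding alternative looks roughly like this (a sketch, not my actual code; the bounds, coefficient values, and `new_column` container are illustrative):

```julia
using JuMP

# Instead of unfixing a pre-built variable, create it on demand and
# splice it into the existing constraints column by column.
x_new = @variable(model, lower_bound = 0.0, upper_bound = 10.0)
set_objective_coefficient(model, x_new, 3.0)
for (con, coef) in new_column  # pairs of (constraint ref, coefficient)
    set_normalized_coefficient(con, x_new, coef)
end
```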