Adding a few variables and constraints makes the model nearly unsolvable

Hi all,

I know this is not strictly a Julia question, but although I asked the following question in the Gurobi community forum, I have not received a response yet. Could you please help me with the following issue?

I have a power systems optimization model. It is a pretty large model, but Gurobi had no problem solving it in 4 hours. That was the situation a few weeks ago. Below, you can see the number of variables and constraints before and after the presolve procedure.

I recently added a few decision variables (126 continuous and 42 binary) and constraints. Gurobi cannot solve this new model in 22 hours. Below, you can see the number of variables and constraints before and after presolve.

So, previously there were 1,541,716 continuous decision variables before presolve. I added 126 continuous variables, and the total number of continuous decision variables in the new model came out to 1,541,842. This is correct. Similarly, there were 369,640 integer variables. I added 42 binary variables, and the total number of integer variables came out to 369,682. This is correct too.

What I do not understand is the big difference in the number of integer variables after presolve between the two models. Previously, there were 251,215 integer variables after presolve. In the new model, this number is 369,115. Why is this happening? I assume this is the reason why Gurobi cannot solve the new model in a reasonable amount of time. Again, the previous model was solved by Gurobi in 4 hours, but the new model cannot be solved in 22 hours, which is the time limit I set for Gurobi. All 22 hours were spent on the root relaxation; I do not even see the log for the branch-and-bound tree.

Can you please help me with this?

Kind regards

MIPs are really hard to solve. Small changes can have a big impact. So I’d say this is expected behavior.

Gurobi support may have suggestions for settings that could help.


Notice that you have also made the problem more difficult numerically: you can see the warning in the second log, and the span of orders of magnitude has increased by 1000 in both the matrix range and the RHS range. (Are you perhaps adding big-M constraints? These can sometimes be specially reformulated using SOS-type constraints.)
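To illustrate the point above with a rough sketch (the numbers are made up, not taken from the poster's model): a big-M row introduces one very large coefficient, which alone can widen the matrix coefficient range by several orders of magnitude.

```python
# Illustrative sketch: how a single big-M coefficient widens the
# matrix coefficient range. All values here are invented.
def coeff_range(coeffs):
    """Return (min, max) of the absolute nonzero coefficients."""
    vals = [abs(c) for c in coeffs if c != 0]
    return min(vals), max(vals)

# Nonzero coefficients of a few "ordinary" constraint rows
base = [0.003, 1.0, 250.0]
# Adding a big-M row such as y - M*z <= 0 with M = 1e6
with_bigm = base + [1e6]

print(coeff_range(base))       # (0.003, 250.0)
print(coeff_range(with_bigm))  # (0.003, 1000000.0)
```

The range jumps from about 5 orders of magnitude to about 9, which is exactly the kind of growth the solver warning is pointing at.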


Thank you. I am waiting for their response. I was not expecting these new constraints to have such a big impact.

Yes, I am adding big-M constraints. So, you suggest that I look at SOS-type constraints. What else do you suggest? I feel that I have to resolve this issue before I can solve the model again.

Providing advice on a particular model is very hard without knowing the specifics. MIPs are hard to solve, and seemingly trivial changes (like rearranging the order of the variables or constraints, or adding new variables and constraints) can have a big impact.

Read the Gurobi guidelines on numerical issues. You have both big-M constraints and large coefficients. The "Warning: Model contains …" message is a hint that you should reformulate your model.

Read the JuMP documentation on indicator and SOS constraints.

You should also provide a feasible starting solution to your new formulation.
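To make the indicator-versus-big-M suggestion concrete, here is a small language-agnostic sketch (in Python, with invented numbers) of why the two formulations describe the same feasible set only when M is large enough, and why a needlessly large M is pure numerical cost:

```python
# Sketch: an indicator constraint z => (x <= b) versus its big-M
# encoding x <= b + M*(1 - z). They agree only when M bounds how far
# x can exceed b. All values are made up for illustration.
def indicator_ok(z, x, b):
    # indicator semantics: z = 1 forces x <= b, z = 0 imposes nothing
    return x <= b if z == 1 else True

def bigm_ok(z, x, b, M):
    # big-M encoding: x <= b + M * (1 - z)
    return x <= b + M * (1 - z)

b, M = 10.0, 100.0
points = [(z, x) for z in (0, 1) for x in (0.0, 10.0, 50.0)]

# With M large enough (x never exceeds 50 here), both agree everywhere.
assert all(indicator_ok(z, x, b) == bigm_ok(z, x, b, M) for z, x in points)

# With M too small, the big-M version wrongly cuts off (z=0, x=50).
assert not bigm_ok(0, 50.0, b, 20.0)
```

Indicator and SOS constraints let the solver enforce this logic directly, without putting a huge M into the constraint matrix.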


Hi,

Thank you for your reply, and sorry for my long-delayed response. I have been traveling and have not had a chance to reply, but I have been working on this issue. I played with some Gurobi parameters (MIPFocus, Method, ImproveStartTime, etc.), but it did not help much. As the warnings indicate, my model has numerical issues stemming from the large matrix coefficient range and objective range. Hence, I will try scaling to shrink these ranges.

I have a question: is there a quick way to see a summary of the model (by summary, I mean the matrix coefficient range, objective range, RHS range, etc.) without solving it? I will make several scaling attempts, and I do not want to wait until the model is solved to see their effects.

According to this link, Feature Request: printStats(model) · Issue #152 · jump-dev/JuMP.jl · GitHub, there is no such thing, but I wanted to make sure.

Thank you

No, there is no tool for that. The warnings come from Gurobi, not JuMP.
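That said, if you can collect the nonzero coefficients of the model yourself (for example by writing it to an LP/MPS file and parsing that), computing the range summary is straightforward. A generic sketch, shown in Python for illustration (the function name and inputs are invented, not part of any library):

```python
# Hypothetical helper: summarize coefficient ranges from flat lists of
# nonzero coefficients, without ever calling a solver.
def range_summary(matrix_coeffs, obj_coeffs, rhs):
    def rng(vals):
        nz = [abs(v) for v in vals if v != 0]
        return (min(nz), max(nz)) if nz else (None, None)
    return {
        "matrix": rng(matrix_coeffs),
        "objective": rng(obj_coeffs),
        "rhs": rng(rhs),
    }

# Invented example data in the spirit of the logs discussed above
summary = range_summary(
    matrix_coeffs=[1e-6, 0.003, 1.0, 2.5e3],
    obj_coeffs=[1.0, 5e4, 1e8],
    rhs=[100.0, 0.0, 1e5],
)
print(summary["matrix"])     # (1e-06, 2500.0)
print(summary["objective"])  # (1.0, 100000000.0)
```

This mirrors the "Matrix range / Objective range / RHS range" lines Gurobi prints at the start of its log, so you can check each scaling attempt immediately.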

Hi again,

I am sorry for my long-delayed message. I just wanted to give an update on what I did and close this issue.

Yes, MIP problems are really hard. I got rid of all big-M constraints, which helped me reduce the matrix coefficient range and the objective coefficient range. However, my model is still too difficult to solve: I can solve a particular instance in 25 hours, which is too long. The current matrix coefficient range is [e-6, e+3] and the objective coefficient range is [e+0, e+8]. I have spent a lot of time working through the warnings about numerical problems, possible scaling steps, and Gurobi parameter tuning to speed up the solution process. I realized that reducing the matrix coefficient range, or dividing all objective coefficients by 10,000, makes the barrier algorithm and the crossover procedure much faster. Interestingly, though, when I do both of these simultaneously, the solution process becomes much slower again. So I will stick with objective scaling, as it is faster. However, even with the scaled model, reducing the optimality gap in the branch-and-bound tree is still too slow.

I am currently trying to figure out (by tweaking Gurobi parameters) how to make the search over the B&B tree faster with the scaled model (by scaled model, I mean the model in which all objective coefficients are divided by 10,000, so the resulting objective range is [e-4, e+4]). I generally have no problem finding the first feasible solution; it is fast. However, finding further feasible (incumbent) solutions is too slow. The best bound is very good in my case, so I do not have to focus on moving the best bound.
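The scaling described above can be sketched in a few lines (illustrative numbers only; the real model has millions of coefficients). Note that the optimal objective value is divided by the same factor, so it has to be multiplied back when reporting results:

```python
# Objective scaling sketch: dividing every objective coefficient by
# s = 1e4 shrinks the range [e+0, e+8] to [e-4, e+4]. Coefficients
# here are invented stand-ins for the real model.
s = 1e4
obj = [1.0, 3.2e5, 1e8]        # original objective coefficients
scaled = [c / s for c in obj]  # becomes [1e-4, 32.0, 1e4]

print(min(scaled), max(scaled))  # 0.0001 10000.0
# Any objective value from the scaled model must be multiplied by s
# to recover the value in the original units ($).
```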

If you have any advice on how to make this process faster, I would be glad to hear it.

Kind regards

Dividing only the objective (and not the constraints) by 10,000 reduces the dual values by a factor of 10,000, which indeed improves the conditioning of the barrier system.
Apart from that, can you explain a little why the ranges in the model are so large? Why do you have coefficients in [e-6, e+3] (constraints) and [e+0, e+8] (objective)? I am also working on a project where we solve energy system models, and the modelers tend to relax some soft constraints into the objective with a large penalty coefficient, which of course amplifies this ill-conditioning effect. Is it possible that this also occurs in your model?
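The effect on the duals can be seen in a toy one-variable LP (all numbers invented): for min c·x subject to x ≥ b, the optimal dual of the constraint is c itself, so scaling the objective scales every dual by the same factor.

```python
# Toy LP: min c*x  s.t.  x >= b. At the optimum x = b, the dual price
# of the constraint equals c, so dividing the objective by s = 1e4
# divides the dual by s as well. Values are made up.
c, b, s = 1e8, 2.0, 1e4
dual_original = c      # dual of x >= b in the original model
dual_scaled = c / s    # dual after dividing the objective by s

print(dual_original, dual_scaled)  # 100000000.0 10000.0
```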

My model is a capacity expansion model with detailed power system operations. The tiny matrix coefficients come from capacity factors (the variability of solar and wind resources) expressed as a fraction of installed capacity. For instance, if the capacity factor in a given hour is 0.003, the corresponding solar or wind unit with a capacity of 1,000 MW cannot produce more than 1,000 × 0.003 = 3 MW in that hour. Some of these values are very small, on the order of e-6. Even after I replaced these tiny values with 0.01, I still see small values like e-4 in the matrix coefficients, so there must be other constraints with small values, but I have not checked which ones.
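The clean-up of tiny capacity factors mentioned above might look like this rough sketch (cutoff and values are invented; note that flooring tiny factors at the cutoff slightly changes the physics, so zeroing them out is a defensible alternative):

```python
# Sketch: floor capacity factors below a cutoff to tighten the matrix
# coefficient range; keep exact zeros as zeros. Invented data.
cutoff = 0.01
cf = [1e-6, 0.003, 0.25, 0.9]
cleaned = [max(v, cutoff) if v > 0 else 0.0 for v in cf]

print(cleaned)  # [0.01, 0.01, 0.25, 0.9]
```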

My objective function minimizes investment (generation and transmission expansion) in the power system plus operational cost (variable cost of generators, start-up cost of thermal units, etc.). The large values come from transmission expansion. For long distances, such as from the Pacific Northwest to Southern California, transmission line expansion ($ per MW) is very expensive. This explains why I see large coefficients in the objective function.

I avoid using penalty terms in my model; I do not have any right now. I am sure the situation would be worse if I did.

By the way, I opened a discussion on the Gurobi forum, just for your reference. You can find more details there.