This is probably not Julia related but could conceivably be due to an issue with CPLEX.jl, so I hope it’s OK that I ask my question here in the optimization subforum. I have a linear programming model written in JuMP that I solve with the barrier optimizer of CPLEX 12.8. A relatively small test version of my model (“only” ~1.2 million rows & columns after presolve) solves in a few minutes, but the solve time scales badly with the number of available threads. This will become a significant problem as I move to the full-size model.
| threads | barrier solve time (seconds) |
| --- | --- |
| 1 | 312 |
| 2 | 222 |
| 3 | 198 |
| 4 | 196 |
| 5 | 190 |
| 6 | 194 |
| (12) | 238 |
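For reference, the timings above come from something along these lines (rough sketch only; the exact parameter-setting syntax depends on the JuMP/CPLEX.jl versions, `build_model()` is just a placeholder for my actual model construction, and setting `CPXPARAM_SolutionType = 2` is one way to skip crossover so that `solve_time` reflects barrier only):

```julia
using JuMP, CPLEX

for n in (1, 2, 3, 4, 5, 6, 12)
    model = build_model()                       # placeholder: builds the LP as a JuMP model
    set_optimizer(model, CPLEX.Optimizer)
    set_optimizer_attribute(model, "CPXPARAM_LPMethod", 4)      # 4 = barrier
    set_optimizer_attribute(model, "CPXPARAM_Threads", n)       # thread count under test
    set_optimizer_attribute(model, "CPXPARAM_SolutionType", 2)  # non-basic solution, i.e. no crossover
    optimize!(model)
    println("threads = $n: ", solve_time(model), " s")
end
```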
Note that these are pure barrier solve times, excluding model generation and simplex crossover, so irregular crossover times are not the cause.
My first suspicion was that some of the threads were being used for concurrent optimization (i.e. running simplex algorithms in parallel alongside barrier), but I am running with CPXPARAM_LPMethod = 4, which should dedicate all threads to barrier. The solve log confirms this: “Parallel mode: using up to 6 threads for barrier”.
CPLEX only uses 7 GB of my 64 GB of RAM for the 6-thread solve, so memory is not the problem either. CPU usage increases linearly with thread count, so the threads are all working, just not helping. The CPU is a Core i7-8700K with 6 physical cores (so the 12-thread solve relies on hyperthreading).
I’ve never seen a barrier optimizer scale this badly with thread count before. Is this common? Any suggestions for how I can fix it? Could it be caused by bad numerical scaling, for example?
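On the scaling question: if it helps with diagnosis, I can run a crude check of the coefficient range directly on the JuMP model, along these lines (sketch only; it looks at nonzero constraint-matrix coefficients and ignores the objective and right-hand sides):

```julia
using JuMP

# Returns the smallest and largest nonzero |coefficient| over all
# scalar affine constraints in a JuMP model.
function coefficient_range(model::Model)
    lo, hi = Inf, 0.0
    for (F, S) in list_of_constraint_types(model)
        F <: GenericAffExpr || continue          # skip variable bounds, vector constraints, etc.
        for con in all_constraints(model, F, S)
            for coef in values(constraint_object(con).func.terms)
                a = abs(coef)
                a == 0 && continue
                lo, hi = min(lo, a), max(hi, a)
            end
        end
    end
    return lo, hi
end

# Usage (assuming `model` is the JuMP model of the LP):
# lo, hi = coefficient_range(model)
```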
Finally, is there a more appropriate forum elsewhere on the net for asking practical optimization questions like this?