JuMP optimize! uses too much time and memory to solve a CPLEX model, does anyone know why?

I am solving CPLEX models using Julia. Writing the model (in Julia) to an .lp file and solving that .lp file directly with CPLEX can be done in seconds or minutes, but when I solve the model directly in Julia using optimize! it takes hours or days. Does anyone know why, and how to fix or avoid this?
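For reference, the two workflows being compared look roughly like this (a minimal sketch with a toy model, assuming CPLEX.jl is installed and licensed; the real model-building code is omitted):

```julia
using JuMP, CPLEX

# Build a (toy) JuMP model; the real model is much larger.
m = Model(CPLEX.Optimizer)
@variable(m, x >= 0)
@objective(m, Min, x)

# Workflow 1 (fast in my case): export to an .lp file and solve it
# with the CPLEX interactive shell outside Julia.
write_to_file(m, "model.lp")

# Workflow 2 (slow in my case): solve directly from Julia.
optimize!(m)
```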

Can you provide a reproducible example? How big is the LP file? Are you running out of memory?

A reproducible example will be hard to give here, but the same model structure has been run on several instances, and this happens with the largest instance so far. The LP file is around 900 MB, and CPLEX can solve it directly in 3 minutes. I have tried running the .jl script line by line: writing the model is fast, but it hangs at the optimize! part (for small instances it could take an hour, and for the large instance I let it run for a day without optimize! finishing). By the way, I set CPLEX's time limit to 10 minutes in the .jl script, so I don't think it is the solver's problem; I assume something is going wrong inside optimize!.
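The time limit mentioned above can be set solver-independently through JuMP (a sketch, assuming CPLEX.jl):

```julia
using JuMP, CPLEX

m = Model(CPLEX.Optimizer)
# 10-minute limit, forwarded to CPLEX's own time-limit parameter.
set_time_limit_sec(m, 600.0)
```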

This is certainly unexpected. It’s probably a bug in the CPLEX wrapper that leads to exponential time instead of linear.

Things to try:

  • Use Gurobi
  • Use model = direct_model(CPLEX.Optimizer()) instead of model = Model(CPLEX.Optimizer)

Can you also try building model = Model() and then providing the output of julia> model? It should be something like:

julia> model
A JuMP Model
Feasibility problem with:
Variables: 18
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 10 constraints
`VariableRef`-in-`MathOptInterface.ZeroOne`: 18 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: x, z

What types of variables and constraints are in the model?


I have never seen something as drastic, but I can vouch that models not built with direct_model are sometimes much slower, not because the model itself is hard to solve, but because it is big and/or the JuMP wrapper somehow builds it in a very inefficient way.

Just to confirm: the Julia code never starts solving the model, right? Or does the extra time happen during the solve itself? If it is the second case, then either the problem is very sensitive to variable ordering or there is a bug in the wrapper (i.e., the same model is not being written).


Using model = direct_model(CPLEX.Optimizer()) instead of model = Model(CPLEX.Optimizer) solved the problem, but what is the difference between these two settings?

Basically, if you do not use direct_model, then your model is wrapped within an extra layer of indirection. This layer records everything you do to the model and, among other things, lets you switch solvers on the fly, because it knows everything that needs to be replayed on the new solver object to replicate your model (I think it also enables things like copying the model and exporting it to file formats, but I am not entirely sure).
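A minimal sketch of the two constructions (assuming CPLEX.jl; `mode` reports which layer is in use):

```julia
using JuMP, CPLEX

cached = Model(CPLEX.Optimizer)           # wrapped in a CachingOptimizer
direct = direct_model(CPLEX.Optimizer())  # talks to the solver directly

mode(cached)  # AUTOMATIC: changes are stored in the cache and replayed
              # to the solver, which enables solver switching etc.
mode(direct)  # DIRECT: no copy is kept, so less overhead, but features
              # that need the cache (e.g. swapping solvers) are unavailable
```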

My experience with this layer is not good. It seems to have serious performance issues with anything beyond the most straightforward model building; i.e., if you build your model in a single go without changing it afterwards, the problems often do not appear, but if you stray from this path you get bad surprises.

Can you provide the output of julia> model like I described above? It'd be nice to find out where the problem is.

I have the same issue: a solve time of ~600s with the normal Model(CPLEX.Optimizer) (btw, it's roughly 3s with Model(Gurobi.Optimizer)) vs. 1.14s with direct_model. Unfortunately, I can't share the model specification and don't currently have time to work on an MWE (I'll try later), but this is the requested output:

A JuMP Model
Maximization problem with:
Variables: 279072
Objective function type: AffExpr
AffExpr-in-MathOptInterface.EqualTo{Float64}: 18438 constraints
AffExpr-in-MathOptInterface.GreaterThan{Float64}: 864 constraints
AffExpr-in-MathOptInterface.LessThan{Float64}: 9504 constraints
VariableRef-in-MathOptInterface.GreaterThan{Float64}: 279072 constraints
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: CPLEX
Names registered in the model: …

using JuMP
using CPLEX

A = 1:100
T = 1:1000

m = Model(CPLEX.Optimizer)

@variables m begin
    0 <= x[A, T] <= 2
end

@constraints m begin
    [a in A[2:end], t in T], x[a, t] >= 0.5 * x[a-1, t]
    [a in A, t in T[2:end]], x[a, t] - x[a, t-1] >= -1
    [a in A, t in T[2:end]], -x[a, t] + x[a, t-1] <= 1
end

@objective(m, Min, sum(x))

@time optimize!(m)

Solving takes 61s on my (slow) machine, vs. 1.7s in direct mode.
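For comparison, the direct-mode variant of the same MWE only changes the construction line (a sketch):

```julia
using JuMP, CPLEX

# Same model as the MWE above, but bypassing the caching layer:
m = direct_model(CPLEX.Optimizer())
```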

UPDATE: I rushed the post; here is some package/version info:

Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin19.5.0)
  CPU: Intel(R) Core(TM) i5-5350U CPU @ 1.80GHz
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, broadwell)
  [6e4b80f9] BenchmarkTools v1.3.1
  [a076750e] CPLEX v0.9.1
  [a93c6f00] DataFrames v1.3.2
  [31c24e10] Distributions v0.25.53
  [2e9cd046] Gurobi v0.11.0
  [7073ff75] IJulia v1.23.2
  [b6b21f68] Ipopt v1.0.2
  [4076af6c] JuMP v1.0.0
  [8314cec4] PGFPlotsX v1.4.1
  [91a5bcdd] Plots v1.27.4
  [014a38d5] QuadraticToBinary v0.4.0
  [295af30f] Revise v3.3.3
  [f3b207a7] StatsPlots v0.14.33
  [0c5d862f] Symbolics v4.3.1
  [fdbf4ff8] XLSX v0.7.9
  [e88e6eb3] Zygote v0.6.37
  [10745b16] Statistics

Interesting. Can you open an issue with this at CPLEX.jl?

Opened here: Slow solution of large LPs w/o direct model · Issue #392 · jump-dev/CPLEX.jl · GitHub (for reference)

Thanks. I’ve identified the cause, Slow solution of large LPs w/o direct model · Issue #392 · jump-dev/CPLEX.jl · GitHub, and a fix is incoming: [Performance] don't pass names by default by odow · Pull Request #393 · jump-dev/CPLEX.jl · GitHub
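(For readers on newer JuMP versions: a related workaround is to stop JuMP from creating string names at all, which sidesteps the name-passing cost identified in the linked issue. A sketch, assuming a JuMP release that provides `set_string_names_on_creation`:)

```julia
using JuMP, CPLEX

m = Model(CPLEX.Optimizer)
# Do not generate string names for variables/constraints; this avoids
# the name-passing overhead identified in the linked issue.
set_string_names_on_creation(m, false)
```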