Cbc and Clp performance

Hello everyone,

I have been testing the solvers of COIN-OR, namely Cbc (MILP solver) and Clp (LP solver).

Cbc gives me very good performance; it even outperforms some commercial solvers on numerically complex problems. But, being a MILP solver, it does not deliver dual variables.

For Clp the performance I get is very poor in terms of speed: it does not find a solution even in 10-20 times the time Cbc requires.

Is anyone able to provide a way out?

  1. I could estimate the duals, but I doubt I could find a way to determine them in all cases.
  2. Is there a way to set up Clp so that its LP solve delivers the same performance as Cbc?
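On option 1: if the duals are the main thing you need, one approach is to solve an LP (e.g. the relaxation, or the model with integers fixed) with Clp and query the duals through JuMP. A minimal sketch, assuming the Clp.jl wrapper and JuMP's 0.21-style API; the tiny LP here is purely illustrative, not the actual model:

```julia
using JuMP, Clp

# Toy LP, just to illustrate the dual query.
model = Model(Clp.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, capacity, x + 2y <= 10)
@objective(model, Max, 3x + 4y)
optimize!(model)

# shadow_price gives the objective's sensitivity to the RHS;
# dual(capacity) gives the raw dual in JuMP's sign convention.
println(shadow_price(capacity))
println(dual(capacity))
```
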

Thank you!

This is kinda strange, because Clp is Cbc's default LP solver, so if you are not using any special configuration, Cbc is just running Clp under the hood and you should be observing similar performance.


Do you have a reproducible example?

Maybe it’s a question of presolve. SCIP uses SoPlex as LP solver by default, but SCIP has much more presolving implemented, so it might still be faster to call SCIP on LP problems.


Thanks, yes, this is strange indeed. I have also tried SCIP, but it does not work either (there is another post about that).

The code I am using is:

using JuMP 
using Cbc
using Clp
using MathOptFormat
using MathOptInterface

mps_model = MathOptFormat.MPS.Model()
MOI.read_from_file(mps_model, string(path, "test_1.mps"))

ucf = Model(with_optimizer(Cbc.Optimizer, logLevel = 2))
MOI.copy_to(JuMP.backend(ucf), mps_model)
@time optimize!(ucf)

ucf = Model(with_optimizer(Clp.Optimizer, LogLevel = 4))
MOI.copy_to(JuMP.backend(ucf), mps_model)
@time optimize!(ucf)

I have placed a set of three MPS files on WeTransfer so the problem can be reproduced:

https://we.tl/t-nHrmMNhben

Thank you

You can use JuMP.read_from_file here: Solvers · JuMP

using JuMP
using Clp
model = read_from_file("test_1.mps")
set_optimizer(model, Clp.Optimizer)
optimize!(model)

Try the above code with the latest version of JuMP. Is the time difference in the solver, or in the JuMP set-up?

No, the JuMP setup is fine. The time is spent in solving the problem (optimize!).

Cbc may take 100-300 seconds, whereas Clp has not finished after 1 hour.

I updated JuMP to version 0.21.2, and after running what you suggested I get:

```
UndefVarError: read_from_file not defined
in top-level scope at TestOptimizers.jl:8
```

If the order were reversed (i.e., if you called Clp before Cbc), I would wonder whether the time difference has something to do with the Clp call precompiling things that Cbc then reuses.
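One way to rule compilation in or out is to time optimize! twice in the same session: the first call pays Julia's JIT compilation cost, the second is essentially pure solve time. A sketch, assuming the test_1.mps file from this thread is in the working directory:

```julia
using JuMP, Clp

model = read_from_file("test_1.mps")
set_optimizer(model, Clp.Optimizer)

@time optimize!(model)   # first call: JIT compilation + solve
@time optimize!(model)   # second call: (re-)solve time only
```
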

This is very strange; I have JuMP 0.21.2 on my machine and this method is defined.

For people trying to replicate, it should read:

set_optimizer(model, Clp.Optimizer)

Thanks, I managed to run exactly what you were suggesting after restarting Julia.

This code does not complete in 1 hour, nor does it print any log:

using JuMP 
using Clp
model = read_from_file("test_1.mps")
set_optimizer(model, Clp.Optimizer)
set_optimizer_attribute(model, "LogLevel", 4)
@time optimize!(model) 

Whereas this other code finishes in less than 200 seconds:

using JuMP
using Cbc
model = read_from_file("test_1.mps")
set_optimizer(model, Cbc.Optimizer)
@time optimize!(model)

To prevent the effect you described, I restarted Julia before every run.

Presumably these are just hard models. Try GLPK or a commercial solver. Cbc must apply helpful presolves that Clp doesn’t, or it uses non-default parameters.
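For a quick cross-check along these lines, GLPK can be swapped in with two lines changed. This sketch assumes the GLPK.jl wrapper and the same test_1.mps file from the thread:

```julia
using JuMP, GLPK

model = read_from_file("test_1.mps")
set_optimizer(model, GLPK.Optimizer)
@time optimize!(model)
```
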


Team, the issue is apparently on the wrapper side:

“but the Cbc version does not seem to have add_row or add_column so does things in a different way. The Clp version has both add_row and add_rows. If the interface is using add_row and/or add_column, then that would explain the large time spent before Clp. There is an example in Clp/examples/addRows.cpp which is there just to explain why it is not a good idea to add one row at a time for many rows.”

I have also been evaluating performance with GLPK and with SCIP-SoPlex, but the performance is not good enough.

What do you think?

See:

https://github.com/coin-or/Cbc/issues/301

There is probably a lot of room for improvement in Clp’s copy_to method:

You could probably loop through, build the sparse A matrix with rhs vectors, and then pass to Clp directly.
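This is the same point the addRows.cpp example makes: build the whole matrix once rather than growing it row by row. A self-contained Julia sketch of the batched construction (illustrative only, not the actual wrapper code; the matrix entries are arbitrary):

```julia
using SparseArrays

# Assemble an m×n constraint matrix from (row, col, value) triplets
# in a single call, instead of adding one row at a time.
m, n = 1_000, 1_000
rows = Int[]; cols = Int[]; vals = Float64[]
for i in 1:m
    push!(rows, i); push!(cols, i);              push!(vals,  1.0)
    push!(rows, i); push!(cols, mod1(i + 1, n)); push!(vals, -1.0)
end

A = sparse(rows, cols, vals, m, n)   # single batched construction
# A, together with rhs and bound vectors, could then be passed to the
# solver in one load-problem-style call rather than via repeated add_row.
```
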

I see. Why should we focus on the copy_to method? Is that related to John Forrest's comment about add_row and add_column?

Do you think it is possible to leverage for this the existing Cbc wrapper?

I haven’t tried your models, but it seems like the speed difference is in the loading performance. copy_to is where we go from Julia → C.

How likely is it that the performance loss is in the C interface?

Following up on this for posterity: a new version of Clp has been released that resolves the performance problem.
