Slower code after migrating from MathProgBase

Hi, after updating some packages I got this error:

`ClpSolver` is no longer supported. If you are using JuMP, upgrade to the latest version and use `Clp.Optimizer` instead. If you are using MathProgBase (e.g., via `linprog`), you will need to upgrade to MathOptInterface (https://github.com/JuliaOpt/MathOptInterface.jl).

No problem, I was indeed using MathProgBase via `linprog`, so I just needed to migrate my code to JuMP. But after doing so, I am having performance problems: solving the same model can now take up to an order of magnitude longer.

I’m working with metabolic networks doing some FBA, so the optimization problem is quite simple:

\text{Maximize:} \quad x_{obj} \\
\text{subject to:} \\ Sx = b \\ lb \leq x \leq ub

Here was my code when using MathProgBase:

import MathProgBase.HighLevelInterface: linprog
import Clp

function fba_MathProgBase(S, b, lb, ub, obj_idx::Integer;
                          sense = -1.0,
                          solver = Clp.ClpSolver())
    M, N = size(S)
    sv = zeros(N)
    sv[obj_idx] = sense
    sol = linprog(
        sv, # Opt sense vector 
        S, # Stoichiometric matrix
        b, # row lb (mets)
        b, # row ub (mets)
        lb, # column lb (rxns)
        ub, # column ub (rxns)
        solver);

    return sol.sol
end

Now using JuMP:

import Clp
import JuMP

function fba_Jump(S, b, lb, ub, obj_idx::Integer; 
                 sense = JuMP.MOI.MAX_SENSE, 
                 solver = Clp.Optimizer)

    M, N, = size(S)
    model = JuMP.Model(solver)
    JuMP.set_optimizer_attribute(model, "LogLevel", 0)
    JuMP.@variable(model, x[1:N])
    JuMP.@constraint(model, balance, S * x .== b)
    JuMP.@constraint(model, bounds, lb .<= x .<= ub)
    JuMP.@objective(model, sense, x[obj_idx])
    JuMP.optimize!(model)
    return JuMP.value.(x)
end

Benchmark results (using `@btime`):

Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-8210Y CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 4
  JULIA_EDITOR = code


Model: toy_model.json size: (5, 8) -------------------

fba_JuMP-GLPK.Optimizer
  322.222 μs (1831 allocations: 112.19 KiB)
obj_val: 3.181818181818181

fba_JuMP-Clp.Optimizer
  916.346 μs (3214 allocations: 193.47 KiB)
obj_val: 3.1818181818181817

fba_MathProgBase-ClpSolver
  150.475 μs (32 allocations: 2.47 KiB)
obj_val: 3.1818181818181817

Model: e_coli_core.json size: (72, 95) -------------------

fba_JuMP-GLPK.Optimizer
  1.812 ms (10908 allocations: 533.33 KiB)
obj_val: 0.8739215069685011

fba_JuMP-Clp.Optimizer
  3.128 ms (19681 allocations: 973.48 KiB)
obj_val: 0.8739215069684311

fba_MathProgBase-ClpSolver
  688.877 μs (32 allocations: 11.58 KiB)
obj_val: 0.8739215069684311

Model: iJR904.json size: (762, 976) -------------------

fba_JuMP-GLPK.Optimizer
  41.945 ms (101450 allocations: 4.41 MiB)
obj_val: 0.5782403962872187

fba_JuMP-Clp.Optimizer
  42.815 ms (190924 allocations: 14.62 MiB)
obj_val: 0.5782403962871316

fba_MathProgBase-ClpSolver
  17.824 ms (35 allocations: 118.00 KiB)
obj_val: 0.5782403962871316

Model: HumanGEM.json size: (8461, 13417) -------------------

fba_JuMP-GLPK.Optimizer
  15.458 s (1300964 allocations: 61.45 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  5.295 s (2456570 allocations: 1.43 GiB)
obj_val: 2.334553007230854

fba_MathProgBase-ClpSolver
  568.137 ms (45 allocations: 1.35 MiB)
obj_val: 2.3345530071688514

You can find the tests here; I pushed a Manifest.toml.

Thanks!!

[EDITED] Fixed a bug in the benchmarks caused by `@btime` evaluating non-interpolated global variables.
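For context, BenchmarkTools treats non-interpolated global variables as untyped, which inflates both times and allocation counts; the fix is to interpolate benchmark arguments with `$`. A minimal sketch (the `sum(x)` call stands in for whatever is being benchmarked):

```julia
using BenchmarkTools

x = rand(1000)

# Wrong: `x` is a non-constant global, so the benchmark also measures
# dynamic dispatch and reports extra allocations.
@btime sum(x)

# Right: `$x` splices the value in as a typed local, so only `sum`
# itself is measured.
@btime sum($x)
```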

The JuMP equivalent would be:

using JuMP, Clp
model = Model(Clp.Optimizer)
set_optimizer_attribute(model, "LogLevel", 0)
@variable(model, lb[i] <= x[i = 1:N] <= ub[i])
@constraint(model, S * x .== b)
@objective(model, sense, x[obj_idx])
optimize!(model)

Is S dense? Sparse? What is a long time? How big is the problem? Hard to offer more advice without a working example.

Thanks for the reply, @odow! I was actually preparing some benchmarks.

I’m running:

Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-8210Y CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)

Benchmark results (using `@btime`):

Model: toy_model.json size: (5, 8)
fba_JuMP-GLPK.Optimizer
  422.868 μs (2183 allocations: 138.34 KiB)
fba_JuMP-Clp.Optimizer
  958.208 μs (3511 allocations: 223.52 KiB)
fba_MathProgBase-ClpSolver
  173.561 μs (32 allocations: 2.47 KiB)

Model: e_coli_core.json size: (72, 95)
fba_JuMP-GLPK.Optimizer
  413.191 μs (2183 allocations: 138.34 KiB)
fba_JuMP-Clp.Optimizer
  1.135 ms (3511 allocations: 223.52 KiB)
fba_MathProgBase-ClpSolver
  166.507 μs (32 allocations: 2.47 KiB)

Model: iJR904.json size: (762, 976)
fba_JuMP-GLPK.Optimizer
  415.022 μs (2183 allocations: 138.34 KiB)
fba_JuMP-Clp.Optimizer
  964.665 μs (3511 allocations: 223.52 KiB)
fba_MathProgBase-ClpSolver
  162.673 μs (32 allocations: 2.47 KiB)

Model: HumanGEM.json size: (8461, 13417)
fba_JuMP-GLPK.Optimizer
  448.802 μs (2183 allocations: 138.34 KiB)
fba_JuMP-Clp.Optimizer
  1.023 ms (3511 allocations: 223.52 KiB)
fba_MathProgBase-ClpSolver
  156.015 μs (32 allocations: 2.47 KiB)

You can find the tests here; I pushed a Manifest.toml.

Thanks!!

Well, S is sparse, but the data structure used is not; it is a regular Matrix{Float64}.

This is actually the main difference from the code I'm using.
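Since the matrix is mathematically sparse, one thing worth trying is handing JuMP a SparseMatrixCSC instead of a dense Matrix{Float64}, so `S * x` builds the constraint expressions from the nonzeros only. A minimal sketch (the toy matrix below is made up):

```julia
using SparseArrays

# A dense matrix that is mostly zeros, like a stoichiometric matrix.
S_dense = zeros(4, 6)
S_dense[1, 1] = 1.0
S_dense[2, 3] = -1.0
S_dense[4, 6] = 2.0

# Convert once before building the model.
S = sparse(S_dense)
println(nnz(S))  # 3 structural nonzeros
```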

Is it much different from this?

JuMP.@variable(model, x[1:N])
JuMP.@constraint(model, bounds, lb .<= x .<= ub)

@variable(model, 0 <= x <= 1) adds a variable x with bounds [0, 1].

@variable(model, x)
@constraint(model, 0 <= x <= 1)

adds a variable x and a ScalarAffineFunction-in-Interval constraint (i.e., another row to your A matrix).

See the documentation: https://jump.dev/JuMP.jl/v0.21.1/variables/#Variable-bounds-1
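Putting this together, a sketch of the FBA function with the bounds attached at variable declaration (same arguments as the original `fba_Jump`; whether this closes the whole performance gap needs benchmarking):

```julia
import Clp
import JuMP

function fba_jump_bounds(S, b, lb, ub, obj_idx::Integer;
                         sense = JuMP.MOI.MAX_SENSE,
                         solver = Clp.Optimizer)
    M, N = size(S)
    model = JuMP.Model(solver)
    JuMP.set_optimizer_attribute(model, "LogLevel", 0)
    # lb/ub become variable bounds, not extra A-matrix rows.
    JuMP.@variable(model, lb[i] <= x[i = 1:N] <= ub[i])
    JuMP.@constraint(model, balance, S * x .== b)
    JuMP.@objective(model, sense, x[obj_idx])
    JuMP.optimize!(model)
    return JuMP.value.(x)
end
```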

Please make sure you are using Clp 0.8.0. (Looks like you had 0.7.1: https://github.com/josePereiro/JuMP_issue.jl/blob/09ceaf56ba21f8cd1978cfd345eba74829c579dc/Manifest.toml#L34)

0.8 contains numerous performance improvements.


Hi @odow, I followed all your advice.

These are the results of a test that first runs everything
using Clp 0.7.1, and then repeats the JuMP part with Clp 0.8.0. The test is available on GitHub.

This is on macOS

Running test with Clp0.7.1 ------------------ 

Your branch is up to date with 'origin/clp_0.7.1'.
loaded /Users/Pereiro/.julia/config/startup.jl

versioninfo -------------------
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-8210Y CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 4
  JULIA_EDITOR = code


Project.toml -------------------
    Status `~/University/Studying/JuMP_issue/Project.toml`
  [6e4b80f9] BenchmarkTools v0.5.0
  [e2554f3b] Clp v0.7.1
  [60bf3e95] GLPK v0.13.0
  [682c06a0] JSON v0.21.0
  [4076af6c] JuMP v0.21.2
  [fdba3010] MathProgBase v0.7.8
  [b77e0a4c] InteractiveUtils 
  [44cfe95a] Pkg 
  [2f01184e] SparseArrays 


Model: toy_model.json size: (5, 8) -------------------

fba_JuMP-GLPK.Optimizer
  329.750 μs (1831 allocations: 112.19 KiB)
obj_val: 3.181818181818181

fba_JuMP-Clp.Optimizer
  930.371 μs (3214 allocations: 193.47 KiB)
obj_val: 3.1818181818181817

fba_MathProgBase-ClpSolver
  152.660 μs (32 allocations: 2.47 KiB)
obj_val: 3.1818181818181817

Model: e_coli_core.json size: (72, 95) -------------------

fba_JuMP-GLPK.Optimizer
  1.965 ms (10908 allocations: 533.33 KiB)
obj_val: 0.8739215069685011

fba_JuMP-Clp.Optimizer
  3.209 ms (19681 allocations: 973.48 KiB)
obj_val: 0.8739215069684311

fba_MathProgBase-ClpSolver
  710.152 μs (32 allocations: 11.58 KiB)
obj_val: 0.8739215069684311

Model: iJR904.json size: (762, 976) -------------------

fba_JuMP-GLPK.Optimizer
  44.393 ms (101450 allocations: 4.41 MiB)
obj_val: 0.5782403962872187

fba_JuMP-Clp.Optimizer
  49.131 ms (190924 allocations: 14.62 MiB)
obj_val: 0.5782403962871316

fba_MathProgBase-ClpSolver
  18.327 ms (35 allocations: 118.00 KiB)
obj_val: 0.5782403962871316

Model: HumanGEM.json size: (8461, 13417) -------------------

fba_JuMP-GLPK.Optimizer
  18.933 s (1300964 allocations: 61.45 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  4.307 s (2456567 allocations: 1.43 GiB)
obj_val: 2.334553007230854

fba_MathProgBase-ClpSolver
  507.105 ms (45 allocations: 1.35 MiB)
obj_val: 2.3345530071688514


Running test with Clp up to date ------------------ 

Your branch is up to date with 'origin/clp_up_to_date'.
loaded /Users/Pereiro/.julia/config/startup.jl

versioninfo -------------------
Julia Version 1.1.0
Commit 80516ca202 (2019-01-21 21:24 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin14.5.0)
  CPU: Intel(R) Core(TM) i5-8210Y CPU @ 1.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)
Environment:
  JULIA_NUM_THREADS = 4
  JULIA_EDITOR = code


Project.toml -------------------
    Status `~/University/Studying/JuMP_issue/Project.toml`
  [6e4b80f9] BenchmarkTools v0.5.0
  [e2554f3b] Clp v0.8.0
  [60bf3e95] GLPK v0.13.0
  [682c06a0] JSON v0.21.0
  [4076af6c] JuMP v0.21.3
  [fdba3010] MathProgBase v0.7.8
  [b77e0a4c] InteractiveUtils 
  [44cfe95a] Pkg 
  [2f01184e] SparseArrays 


Model: toy_model.json size: (5, 8) -------------------

fba_JuMP-GLPK.Optimizer
  327.498 μs (1831 allocations: 112.19 KiB)
obj_val: 3.181818181818181

fba_JuMP-Clp.Optimizer
  885.740 μs (3297 allocations: 197.16 KiB)
obj_val: 3.1818181818181817

Model: e_coli_core.json size: (72, 95) -------------------

fba_JuMP-GLPK.Optimizer
  1.870 ms (10908 allocations: 533.33 KiB)
obj_val: 0.8739215069685011

fba_JuMP-Clp.Optimizer
  3.016 ms (20216 allocations: 860.78 KiB)
obj_val: 0.8739215069684309

Model: iJR904.json size: (762, 976) -------------------

fba_JuMP-GLPK.Optimizer
  39.895 ms (101450 allocations: 4.41 MiB)
obj_val: 0.5782403962872187

fba_JuMP-Clp.Optimizer
  35.624 ms (203135 allocations: 7.03 MiB)
obj_val: 0.5782403962871316

Model: HumanGEM.json size: (8461, 13417) -------------------

fba_JuMP-GLPK.Optimizer
  12.222 s (1300962 allocations: 61.42 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  929.113 ms (2617068 allocations: 90.43 MiB)
obj_val: 2.3345531079323396
Your branch is up to date with 'origin/master'.

This is on Linux

Your branch is up to date with 'origin/master'.

Running test with Clp0.7.1 ------------------ 

Branch 'clp_0.7.1' set up to track remote branch 'clp_0.7.1' from 'origin'.

versioninfo -------------------
Julia Version 1.1.1
Commit 55e36cc308 (2019-05-16 04:10 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)


Project.toml -------------------
    Status `~/University/Projects/MaxEntEp/GitWorker/MaxEntEP_GWRepo/Worker2/JuMP_issue-gitworker-copy/origin/JuMP_issue.jl/Project.toml`
  [6e4b80f9] BenchmarkTools v0.5.0
  [e2554f3b] Clp v0.7.1
  [60bf3e95] GLPK v0.13.0
  [682c06a0] JSON v0.21.0
  [4076af6c] JuMP v0.21.2
  [fdba3010] MathProgBase v0.7.8
  [b77e0a4c] InteractiveUtils 
  [44cfe95a] Pkg 
  [2f01184e] SparseArrays 


Model: toy_model.json size: (5, 8) -------------------

fba_JuMP-GLPK.Optimizer
  287.398 μs (1831 allocations: 112.19 KiB)
obj_val: 3.181818181818181

fba_JuMP-Clp.Optimizer
  685.555 μs (3214 allocations: 193.47 KiB)
obj_val: 3.1818181818181817

fba_MathProgBase-ClpSolver
  90.368 μs (32 allocations: 2.47 KiB)
obj_val: 3.1818181818181817

Model: e_coli_core.json size: (72, 95) -------------------

fba_JuMP-GLPK.Optimizer
  1.661 ms (10913 allocations: 533.41 KiB)
obj_val: 0.8739215069685011

fba_JuMP-Clp.Optimizer
  2.387 ms (19691 allocations: 973.64 KiB)
obj_val: 0.8739215069684311

fba_MathProgBase-ClpSolver
  550.795 μs (32 allocations: 11.58 KiB)
obj_val: 0.8739215069684311

Model: iJR904.json size: (762, 976) -------------------

fba_JuMP-GLPK.Optimizer
  37.492 ms (101449 allocations: 4.41 MiB)
obj_val: 0.5782403962872187

fba_JuMP-Clp.Optimizer
  36.266 ms (190925 allocations: 14.62 MiB)
obj_val: 0.5782403962871316

fba_MathProgBase-ClpSolver
  15.086 ms (35 allocations: 118.00 KiB)
obj_val: 0.5782403962871316

Model: HumanGEM.json size: (8461, 13417) -------------------

fba_JuMP-GLPK.Optimizer
  9.987 s (1300959 allocations: 61.41 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  2.198 s (2456567 allocations: 1.43 GiB)
obj_val: 2.334553007226292

fba_MathProgBase-ClpSolver
  368.350 ms (45 allocations: 1.35 MiB)
obj_val: 2.334553007168189


Running test with Clp up to date ------------------ 

Branch 'clp_up_to_date' set up to track remote branch 'clp_up_to_date' from 'origin'.

versioninfo -------------------
Julia Version 1.1.1
Commit 55e36cc308 (2019-05-16 04:10 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i3-8100 CPU @ 3.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.1 (ORCJIT, skylake)


Project.toml -------------------
    Status `~/University/Projects/MaxEntEp/GitWorker/MaxEntEP_GWRepo/Worker2/JuMP_issue-gitworker-copy/origin/JuMP_issue.jl/Project.toml`
  [6e4b80f9] BenchmarkTools v0.5.0
  [e2554f3b] Clp v0.8.0
  [60bf3e95] GLPK v0.13.0
  [682c06a0] JSON v0.21.0
  [4076af6c] JuMP v0.21.3
  [fdba3010] MathProgBase v0.7.8
  [b77e0a4c] InteractiveUtils 
  [44cfe95a] Pkg 
  [2f01184e] SparseArrays 


Model: toy_model.json size: (5, 8) -------------------

fba_JuMP-GLPK.Optimizer
  290.409 μs (1831 allocations: 112.19 KiB)
obj_val: 3.181818181818181

fba_JuMP-Clp.Optimizer
  695.442 μs (3297 allocations: 197.16 KiB)
obj_val: 3.1818181818181817

Model: e_coli_core.json size: (72, 95) -------------------

fba_JuMP-GLPK.Optimizer
  1.677 ms (10913 allocations: 533.41 KiB)
obj_val: 0.8739215069685011

fba_JuMP-Clp.Optimizer
  2.409 ms (20217 allocations: 860.80 KiB)
obj_val: 0.8739215069684309

Model: iJR904.json size: (762, 976) -------------------

fba_JuMP-GLPK.Optimizer
  37.382 ms (101449 allocations: 4.41 MiB)
obj_val: 0.5782403962872187

fba_JuMP-Clp.Optimizer
  28.847 ms (203010 allocations: 7.03 MiB)
obj_val: 0.5782403962871316

Model: HumanGEM.json size: (8461, 13417) -------------------

fba_JuMP-GLPK.Optimizer
  9.924 s (1300958 allocations: 61.39 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  700.783 ms (2617005 allocations: 90.41 MiB)
obj_val: 2.334553007169208

Your branch is up to date with 'origin/master'.


In summary, for the biggest model:

On macOS

Model: HumanGEM.json size: (8461, 13417) -------------------

# Clp 0.7.1
fba_JuMP-GLPK.Optimizer
  18.933 s (1300964 allocations: 61.45 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  4.307 s (2456567 allocations: 1.43 GiB)
obj_val: 2.334553007230854

fba_MathProgBase-ClpSolver
  507.105 ms (45 allocations: 1.35 MiB)
obj_val: 2.3345530071688514

# Clp 0.8.0
fba_JuMP-GLPK.Optimizer
  12.222 s (1300962 allocations: 61.42 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  929.113 ms (2617068 allocations: 90.43 MiB)
obj_val: 2.3345531079323396

A good improvement from JuMP-Clp 0.7.1 (4.307 s) to JuMP-Clp 0.8.0 (929.113 ms), but still slower than the deprecated MathProgBase-Clp 0.7.1 (507.105 ms). Also note the allocations. Details above.

On Linux

Model: HumanGEM.json size: (8461, 13417) -------------------
# Clp 0.7.1
fba_JuMP-GLPK.Optimizer
  9.987 s (1300959 allocations: 61.41 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  2.198 s (2456567 allocations: 1.43 GiB)
obj_val: 2.334553007226292

fba_MathProgBase-ClpSolver
  368.350 ms (45 allocations: 1.35 MiB)
obj_val: 2.334553007168189

# Clp 0.8.0
fba_JuMP-GLPK.Optimizer
  9.924 s (1300958 allocations: 61.39 MiB)
obj_val: 2.334553007169305

fba_JuMP-Clp.Optimizer
  700.783 ms (2617005 allocations: 90.41 MiB)
obj_val: 2.334553007169208

Again, updating Clp improves the performance, but MathProgBase-ClpSolver is still faster. Details above.

The extra overhead is expected. It needs to allocate JuMP variables, etc. If you only want to use Clp, you can use the C API directly.

Read the source code for inspiration.

You can also try https://github.com/jump-dev/MatrixOptInterface.jl, but it is in development.
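Another option that may shave off some of the JuMP-side allocations is direct mode, which builds the problem inside the solver and skips the caching layer. Note this requires a solver that supports incremental modification; GLPK does, while Clp's copy-to-based MOI wrapper may not, so this sketch uses GLPK:

```julia
import GLPK
import JuMP

# direct_model attaches variables/constraints straight to the solver,
# avoiding the intermediate cached copy a regular Model keeps.
model = JuMP.direct_model(GLPK.Optimizer())
JuMP.@variable(model, 0 <= x <= 1)
JuMP.@objective(model, JuMP.MOI.MAX_SENSE, x)
JuMP.optimize!(model)
println(JuMP.value(x))  # expect 1.0
```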