JuMP/CPLEX: adding objective function expression in loop is very resource-consuming

Dear community,
I use JuMP 0.18.5, CPLEX 0.4.3, and Julia 1.1, and I observe strange behaviour when building an objective function expression in a loop. Here is a minimal example:

using TimerOutputs
using JuMP
using CPLEX
using MathProgBase

W = 2000
I = 100

function buildModel( m::Model )
    @variable( m, y[w=1:W, i=1:I], Bin )
    @objective( m, Max, (1/W) * sum(y) )
    @constraint( m, sum(y) <= 10 )
end

to = TimerOutput()
m = Model( solver=CplexSolver(CPX_PARAM_THREADS = 1) )
@timeit to "build_model" buildModel( m )
@timeit to "solve_model" status = solve( m )
print( to )

which works fine as can be observed from the output:

  • build_model: runtime=192ms, alloc=48.4MiB
  • solve_model: runtime=382ms, alloc=41.5MiB

However, if I use the code above but instead create the objective function like this:

    expr = AffExpr()
    for w = 1:W
        for i = 1:I
            expr += y[w,i]
        end
    end
    @objective( m, Max, (1/W) * expr )

leads to:

  • build_model: runtime=169s, alloc=299GiB
  • solve_model: runtime=284ms, alloc=41.5MiB

which is a huge difference. The runtime and memory allocations seem astronomical to me for such a small example, and I have no clue why this is the case.

The main reason I want to build the objective function in a loop is that in my “real” application the objective function looks like
\sum_{w=1}^{W} \sum_{i=1}^{I} c_i \, y_{w,i}
so I cannot simply use dot(c, y), since c is a vector while y is a matrix, so their dimensions differ. Clearly, I could stretch the vector c appropriately, but this seems a bit messy to me.

Do you have any suggestions why there is such a big difference between the two approaches? Is there a better workaround than the one I suggested?

Thank you in advance.

I think (no testing) you should be able to use:

@objective(m, Max, sum(c[i]*y[w,i] for w in 1:W, i in 1:I))
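With plain arrays you can sanity-check that this generator pattern gives the same value as stretching c to the shape of y (the numbers below are made up just for illustration, not the original model data):

```julia
# Compare the generator form with the "stretched coefficient" form
# using ordinary arrays (hypothetical values, not the model above)
W, I = 3, 2
c = [10.0, 20.0]          # one coefficient per i
y = ones(W, I)            # stand-in for the W×I variable matrix

# generator form: index c by i only, no reshaping needed
val = sum(c[i] * y[w, i] for w in 1:W, i in 1:I)

# "stretched" form: repeat c across the w dimension to match y
val2 = sum(repeat(c', W) .* y)

@assert val == val2       # both evaluate to 90.0
```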

As for why this is slow, I don’t know, but I believe that there is a related GSOC project proposal.


Thank you for that quick response. Your suggested code works fine and fast, and as far as I understand the explanations in the link, they seem plausible to me.
Thanks again!

The JuMP affine expression type is mutable, so if you just use += it will be less efficient than an in-place operation. When written inside a macro, JuMP automatically rewrites expressions to in-place operations (see Benoît Legat @ JuMP-dev 2019 - YouTube).
Note that the same is true for BigInt, which is also mutable; for this reason Base has a custom implementation of sum for BigInts:
julia/gmp.jl at 9f69b2e4c9e7b89cc00c6b1949100f299b420c09 · JuliaLang/julia · GitHub
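As a rough illustration of the mutability point (Base.GMP.MPZ is an internal module, so this is a sketch of the idea rather than recommended user code): `acc += x` desugars to `acc = acc + x`, which allocates a fresh BigInt on every iteration, while a mutating add reuses the accumulator, which is essentially what the linked specialized sum does.

```julia
# Naive accumulation: every `+` allocates a brand-new BigInt.
function sum_naive(xs)
    acc = big(0)
    for x in xs
        acc += x                    # acc = acc + x → new object each time
    end
    return acc
end

# In-place accumulation via GMP's mutating add (internal API,
# mirroring Base's specialized sum for BigInt).
function sum_inplace(xs)
    acc = big(0)
    for x in xs
        Base.GMP.MPZ.add!(acc, x)   # mutates acc, no new BigInt
    end
    return acc
end
```

Both return the same result; the in-place version just avoids the per-iteration allocations, which is exactly the kind of rewrite the JuMP macros perform for affine expressions.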