Why does `JuMP.set_objective_coefficient(m, x, c::Float64)` have allocations?

I’m very curious here. Where does this API allocate, such that it ends up with even more allocations than `@objective` (in the number of allocations, though not in total memory size) when I want to reset the objective function?

@odow Could you explain the detailed procedure behind these two methods? E.g., the macro constructs a JuMP expression as an intermediate object…

It appears that you allocate an intermediate object before the modification. Why?

struct ScalarCoefficientChange{T} <: AbstractFunctionModification
    variable::VariableIndex
    new_coefficient::T
end
function MOI.modify(
    model::Optimizer,
    ::MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}},
    chg::MOI.ScalarCoefficientChange{Float64},
)
    ret = GRBsetdblattrelement(
        model,
        "Obj",
        c_column(model, chg.variable),
        chg.new_coefficient,
    )
    _check_ret(model, ret)
    model.is_objective_set = true
    _require_update(model, model_change = true)
    return
end
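For what it’s worth, the intermediate `ScalarCoefficientChange` is unlikely to be the allocation by itself: it is an immutable struct whose fields are all isbits, so in type-stable code it can live on the stack. A quick check (using `JuMP.MOI`, as elsewhere in this thread):

```julia
import JuMP
const MOI = JuMP.MOI

# ScalarCoefficientChange{Float64} holds a VariableIndex (an Int64 wrapper)
# and a Float64, so it is an isbits type: constructing it in type-stable
# code does not heap-allocate.
isbitstype(MOI.ScalarCoefficientChange{Float64})  # true
```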

People want to use this modification API because it is allocation-free. If that is not the case, why would they need such an API?

Please provide a minimal reproducible example

import JuMP

incre(m, x, c) = for (x, c) = zip(x, c)
    JuMP.set_objective_coefficient(m, x, c)
end

function test(m, N, c)
    JuMP.@variable(m, x[1:N])
    JuMP.@objective(m, Min, 0)
    @time incre(m, x, c)
    @time JuMP.@objective(m, Min, c'x)
end
N = 300

m = Settings.Model();
test(m, N, rand(N));
# results:
0.000150 seconds (900 allocations: 23.438 KiB)
0.000055 seconds (60 allocations: 53.250 KiB)
# The incremental method takes longer
# and has more allocations (900 vs 60),
# though its total of 23 KiB is smaller

`Settings` is a module that I wrote. You can replace it with a `JuMP.direct_model(Gurobi.Optimizer())`; I guess the results are the same.
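For anyone reproducing this, a minimal stand-in for `Settings.Model()` might look like the sketch below (it assumes Gurobi.jl is installed and a valid Gurobi license is available):

```julia
import JuMP, Gurobi

# A direct model forwards modifications straight to the solver, with no
# caching layer in between.
Model() = JuMP.direct_model(Gurobi.Optimizer())
```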

It appears that I can use this allocation-free version of incremental modification:

import JuMP, Gurobi

gcc(o, x) = Gurobi.c_column(o, JuMP.index(x))
soc(o, x, c) = Gurobi.GRBsetdblattrelement(o, "Obj", gcc(o, x), c)

m = Settings.Model() # a Gurobi direct_model
o = JuMP.backend(m)
JuMP.@variable(m, -0.5 <= x <= 1.5)
JuMP.@objective(m, Min, 0)
JuMP.unset_silent(m)
soc(o, x, -1.0) # set objective coefficient (0 allocation version)
JuMP.optimize!(m)
JuMP.value(x)
soc(o, x, 1.0) # set objective coefficient (0 allocation version)
JuMP.optimize!(m)
JuMP.value(x)

I’m not aware of any potential dangers of this approach, but I guess I’m on the right track: in practice we really need an incremental method that is allocation-free, instead of the existing one.

There’s a dynamic dispatch in `set_objective_coefficient` that depends on the objective function type. You can get around it by using the vector version:
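To illustrate, the scalar path has to query the objective function type at runtime before it can build the modification. A rough sketch of that work (a hypothetical helper, not the exact JuMP source):

```julia
import JuMP
const MOI = JuMP.MOI

# The objective's MOI function type F is only known at runtime, so the
# MOI.modify call below is a dynamic dispatch.
function set_coef_sketch(model, x, c)
    F = JuMP.moi_function_type(JuMP.objective_function_type(model))
    MOI.modify(
        JuMP.backend(model),
        MOI.ObjectiveFunction{F}(),
        MOI.ScalarCoefficientChange(JuMP.index(x), c),
    )
    return
end
```

Because `F` is not known at compile time, each scalar call pays the dynamic-dispatch cost; the vector method pays it once for the whole batch.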

julia> using JuMP

julia> incre(m, x, c) = for (x, c) = zip(x, c)
           JuMP.set_objective_coefficient(m, x, c)
       end
incre (generic function with 1 method)

julia> N = 300
300

julia> c = rand(N);

julia> model = Model();

julia> @variable(model, x[1:N]);

julia> @objective(model, Min, 0)
0

julia> @time incre(model, x, c);
  0.005044 seconds (11.15 k allocations: 538.391 KiB, 96.98% compilation time)

julia> @time set_objective_coefficient(model, x, c);
  0.000085 seconds (27 allocations: 8.500 KiB)

julia> @time @objective(model, Min, c'x);
  0.000059 seconds (113 allocations: 59.844 KiB)

Note that a handful of allocations when modifying an objective coefficient is irrelevant in practice.

If you need a non-allocating version, you can use MOI directly, assume a fixed objective function type, etc. JuMP prioritizes making things easy for the user.


OK, now the `moisoc` function has 0 allocations:

julia> import JuMP

julia> m = Settings.Model(); # a Gurobi's direct_model

julia> o = JuMP.backend(m);

julia> const objtype = JuMP.MOI.ObjectiveFunction{JuMP.MOI.ScalarAffineFunction{Float64}}();

julia> moisoc(o, x, c) = JuMP.MOI.modify(o, objtype, JuMP.MOI.ScalarCoefficientChange(JuMP.index(x), c));

julia> JuMP.@variable(m, -0.5 <= x <= 1.5);

julia> JuMP.@objective(m, Min, 0);

julia> moisoc(o, x, 1.0)

julia> @time moisoc(o, x, -1.0)
  0.000009 seconds

julia> JuMP.optimize!(m)

julia> JuMP.value(x)
1.5

julia> @time JuMP.value(x)
  0.000040 seconds (10 allocations: 208 bytes)
1.5

I further want to eliminate the allocations in `JuMP.value`. How can I achieve this?

I care about this because I’m using multiple threads, and allocations degrade the efficiency of a multithreaded program due to GC overhead.


@odow Thanks, I’ve learned how to achieve 0 allocations.

Here is a simple test; take a look at the results.

import JuMP, Gurobi
using BenchmarkTools
const r = Ref{Float64}();
const m = Settings.Model();
const o = JuMP.backend(m);

const N = 1000;
const c = rand(N);

JuMP.@variable(m, x[1:N]);

myset(o, x, c) = for (x, c) = zip(x, c)
    Settings.setoc(o, x, c)
end;
macroset(m, x, c) = JuMP.@objective(m, Min, c'x);
myget(o, x, r) = for x = x
    Settings.value(o, x, r) > -1.0 || error()
end
jumpget(x) = for x = x
    JuMP.value(x) > -1.0 || error()
end
function jumpdot(x)
    v = JuMP.value.(x)
    for v = v
        v > -1.0 || error()
    end
end

@btime myset($o, $x, $c) # 42.442 μs (0 allocations: 0 bytes)
@btime macroset($m, $x, $c); # 124.206 μs (74 allocations: 165.95 KiB)

myset(o, x, zeros(N)); JuMP.optimize!(m)

@btime myget($o, $x, $r) # 38.061 μs (0 allocations: 0 bytes)
@btime jumpget($x) # 413.360 μs (9488 allocations: 195.12 KiB)
@btime jumpdot($x) # 422.070 μs (9490 allocations: 202.98 KiB)
My `Settings` module:
module Settings
import JuMP, Gurobi

const SAF = JuMP.MOI.ObjectiveFunction{JuMP.MOI.ScalarAffineFunction{Float64}}()

_gcc(o, x) = Gurobi.c_column(o, JuMP.index(x));
_gv!(r, o, x) = Gurobi.GRBgetdblattrelement(o, "X", _gcc(o, x), r);
value(o, x, r) = (_gv!(r, o, x); r.x);
setoc(o, x, c) = JuMP.MOI.modify(o, SAF, JuMP.MOI.ScalarCoefficientChange(JuMP.index(x), c))

function printinfo() 
    th = map(Threads.nthreads, (:default, :interactive))
    println("Settings> Threads=$th")
end
# The "Crossover=0" option is observed to be too vague (e.g. terminate with a 3% rGap) for certain cases, thus abandoned
const C = Dict{String, Any}("OutputFlag" => 0, "Threads" => 1, "MIPGap" => 0, "MIPGapAbs" => 0, "Method" => 2)
Env() = Gurobi.Env(C) # generate a _new_ one as defined by `Gurobi.Env`
Model() = JuMP.direct_model(Gurobi.Optimizer(Env()))

# multi-threading
function Model(tks, v)
    for i in eachindex(tks)
        tks[i] = Threads.@spawn(v[i] = Model())
    end
end
function Model(tks)
    v = similar(tks, JuMP.Model)
    Model(tks, v)
    foreach(wait, tks)
    v
end

# JuMP.set_objective_sense(m, JuMP.MIN_SENSE)

end