Why does `JuMP.set_objective_coefficient(m, x, c::Float64)` have allocations?

I’m very curious here. Where can this API lead to allocations, such that it makes even more allocations than `@objective` (in number of allocations, though not in total memory) when I want to reset the objective function?

@odow Could you explain the detailed procedure behind these two methods? E.g., the macro constructs a JuMP expression as an intermediate object…

It appears that an intermediate object is allocated before the modification. Why?

struct ScalarCoefficientChange{T} <: AbstractFunctionModification
    variable::VariableIndex
    new_coefficient::T
end
function MOI.modify(
    model::Optimizer,
    ::MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}},
    chg::MOI.ScalarCoefficientChange{Float64},
)
    ret = GRBsetdblattrelement(
        model,
        "Obj",
        c_column(model, chg.variable),
        chg.new_coefficient,
    )
    _check_ret(model, ret)
    model.is_objective_set = true
    _require_update(model, model_change = true)
    return
end
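
As a sanity check, the intermediate `ScalarCoefficientChange` is an isbits struct, so constructing it should not, by itself, be the source of the allocations, if I understand correctly:

julia> import MathOptInterface as MOI

julia> isbitstype(MOI.ScalarCoefficientChange{Float64})
true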

People want to use this modification API because it is supposed to be allocation-free. If that is not the case, why would they need such an API at all?

Please provide a minimal reproducible example

import JuMP

incre(m, x, c) = for (x, c) = zip(x, c)
    JuMP.set_objective_coefficient(m, x, c)
end

function test(m, N, c)
    JuMP.@variable(m, x[1:N])
    JuMP.@objective(m, Min, 0)
    @time incre(m, x, c)
    @time JuMP.@objective(m, Min, c'x)
end
N = 300

m = Settings.Model();
test(m, N, rand(N));
# results:
0.000150 seconds (900 allocations: 23.438 KiB)
0.000055 seconds (60 allocations: 53.250 KiB)
# The incremental method takes longer
# and makes more allocations (900 vs. 60),
# though the total memory (23 KiB) is smaller.

`Settings` is a module that I wrote. You can replace it with a `JuMP.direct_model(Gurobi.Optimizer())`; I guess the result is the same.
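
For anyone without my `Settings` module, a minimal sketch of an equivalent setup (assuming Gurobi.jl and a working license; `test` is the function defined above):

import Gurobi, JuMP

m = JuMP.direct_model(Gurobi.Optimizer())
JuMP.set_silent(m)      # keep the solver quiet during the timing test
test(m, 300, rand(300))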

I find that I can apparently use this allocation-free version of incremental modification:

import Gurobi, JuMP

gcc(o, x) = Gurobi.c_column(o, JuMP.index(x))
soc(o, x, c) = Gurobi.GRBsetdblattrelement(o, "Obj", gcc(o, x), c)

m = Settings.Model() # a Gurobi direct_model
o = m.moi_backend
JuMP.@variable(m, -0.5 <= x <= 1.5)
JuMP.@objective(m, Min, 0)
JuMP.unset_silent(m)
soc(o, x, -1.0) # set objective coefficient (0 allocation version)
JuMP.optimize!(m)
JuMP.value(x)
soc(o, x, 1.0) # set objective coefficient (0 allocation version)
JuMP.optimize!(m)
JuMP.value(x)

Although I’m not sure about its potential dangers (for one, it skips the `_check_ret` and `_require_update` steps that the `MOI.modify` method above performs).

But I guess I’m on the right track: in practice we really need an incremental method that is allocation-free, rather than the existing one.

There’s a dynamic dispatch in set_objective_coefficient that depends on the objective function type. You can get around it by using the vector version:

julia> using JuMP

julia> incre(m, x, c) = for (x, c) = zip(x, c)
           JuMP.set_objective_coefficient(m, x, c)
       end
incre (generic function with 1 method)

julia> N = 300
300

julia> c = rand(N);

julia> model = Model();

julia> @variable(model, x[1:N]);

julia> @objective(model, Min, 0)
0

julia> @time incre(model, x, c);
  0.005044 seconds (11.15 k allocations: 538.391 KiB, 96.98% compilation time)

julia> @time set_objective_coefficient(model, x, c);
  0.000085 seconds (27 allocations: 8.500 KiB)

julia> @time @objective(model, Min, c'x);
  0.000059 seconds (113 allocations: 59.844 KiB)

Note that a few small allocations when modifying an objective coefficient are irrelevant in practice.

If you need a non-allocating version, you can use MOI directly, assume a fixed objective function type, etc. JuMP makes things easy for the user.
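
For example, a minimal sketch of what “use MOI directly with a fixed objective function type” could look like on a direct model (the helper name `set_coef!` is just for illustration):

import JuMP
const MOI = JuMP.MOI

# Assumes the objective is already a ScalarAffineFunction{Float64}.
const OBJ = MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}()

set_coef!(backend, x, c) =
    MOI.modify(backend, OBJ, MOI.ScalarCoefficientChange(JuMP.index(x), c))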


OK, now the `moisoc` function has 0 allocations:

julia> import JuMP

julia> m = Settings.Model(); # a Gurobi direct_model

julia> o = JuMP.backend(m);

julia> const objtype = JuMP.MOI.ObjectiveFunction{JuMP.MOI.ScalarAffineFunction{Float64}}();


julia> moisoc(o, x, c) = JuMP.MOI.modify(o, objtype, JuMP.MOI.ScalarCoefficientChange(JuMP.index(x), c));

julia> JuMP.@variable(m, -0.5 <= x <= 1.5);

julia> JuMP.@objective(m, Min, 0);

julia> moisoc(o, x, 1.0)

julia> @time moisoc(o, x, -1.0)
  0.000009 seconds

julia> JuMP.optimize!(m)

julia> JuMP.value(x)
1.5

julia> @time JuMP.value(x)
  0.000040 seconds (10 allocations: 208 bytes)
1.5

I further want to eliminate the allocations of `JuMP.value`. How can I achieve this?

I care about this because I’m using multiple threads, and allocations hurt the efficiency of a multithreaded program due to GC overhead.


Using MOI directly still has 2 allocations, albeit better than `JuMP.value`:

o = JuMP.backend(m);
const res1 = JuMP.MOI.VariablePrimal(1);
moival(o, x) = JuMP.MOI.get(o, res1, JuMP.index(x));
julia> @time moival(o, x)
  0.000020 seconds (2 allocations: 32 bytes)
julia> @time JuMP.value(x)
  0.000031 seconds (10 allocations: 208 bytes)

I’m very curious: it appears that only the in-place version of getting the value has 0 allocations.
I hope someone can tell me the reason.

import Gurobi, JuMP
gcc(o::Gurobi.Optimizer, x::JuMP.VariableRef) = Gurobi.c_column(o, JuMP.index(x))
grbval(o, x, r) = Gurobi.GRBgetdblattrelement(o, "X", gcc(o, x), r)
function grbval(o, x)
    r = Ref{Cdouble}()
    grbval(o, x, r)
    r[]
end
m = Settings.Model(); # a Gurobi direct_model
o = JuMP.backend(m);
const r = Ref{Cdouble}();
JuMP.@variable(m, -0.5 <= x <= 1.5);
JuMP.@objective(m, Min, 1.0 * x);
JuMP.optimize!(m)

julia> @time grbval(o, x, r)
  0.000034 seconds

julia> @time grbval(o, x) # why does this have allocations??
  0.000014 seconds (2 allocations: 32 bytes)
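
In the meantime, here is a minimal sketch of how I could reuse a preallocated buffer in a multithreaded setting (one buffer per task, since sharing a single `const` Ref across threads is not safe). The `ValueBuffer` struct and `value_of` function are just illustrative names, not part of Gurobi.jl:

import Gurobi, JuMP

# Illustrative helper: a preallocated result buffer reused across calls,
# so the C query writes into existing memory instead of a fresh Ref.
struct ValueBuffer
    r::Base.RefValue{Cdouble}
end
ValueBuffer() = ValueBuffer(Ref{Cdouble}())

function value_of(buf::ValueBuffer, o::Gurobi.Optimizer, x::JuMP.VariableRef)
    col = Gurobi.c_column(o, JuMP.index(x))
    ret = Gurobi.GRBgetdblattrelement(o, "X", col, buf.r)
    ret == 0 || error("GRBgetdblattrelement failed")
    return buf.r[]
end

# Usage: create one ValueBuffer per task and reuse it inside the hot loop.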