Specifying objective coefficient when creating a JuMP variable

What about adding a kwarg option obj to JuMP.@variable so that we can specify objective coefficients, just like the existing lower_bound?

julia> import JuMP

julia> model = JuMP.Model();

julia> JuMP.set_objective_sense(model, JuMP.MIN_SENSE);

julia> # [I propose this] JuMP.@variable(model, x[i=1:2, j=1:3], lower_bound=i-j, obj=i+j);

julia> JuMP.@variable(model, x[i=1:2, j=1:3], lower_bound=i-j); # existing

julia> for i=1:2, j=1:3 # currently I have to write this part separately
           JuMP.set_objective_coefficient(model, x[i,j], i+j)
       end

julia> print(model)
Min 2 x[1,1] + 3 x[1,2] + 4 x[1,3] + 3 x[2,1] + 4 x[2,2] + 5 x[2,3]
Subject to
 x[1,1] ≥ 0
 x[2,1] ≥ 1
 x[1,2] ≥ -1
 x[2,2] ≥ 0
 x[1,3] ≥ -2
 x[2,3] ≥ -1
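For comparison, the whole loop can also be collapsed into a single call to @objective (a sketch, reusing the model and x defined above):

```julia
julia> JuMP.@objective(model, Min, sum((i + j) * x[i, j] for i = 1:2, j = 1:3));
```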

(I happened to think of this today; it’s a different issue, but I put it here for convenience.)

I think the design of JuMP’s containers is somewhat inconsistent. Look at

julia> JuMP.@variable(model, x[i=1:2, j=1:3])
2×3 Matrix{JuMP.VariableRef}:
 x[1,1]  x[1,2]  x[1,3]
 x[2,1]  x[2,2]  x[2,3]

julia> JuMP.@variable(model, y[i=1:2, j=ifelse(i>1, 2:3, 1:1)])
JuMP.Containers.SparseAxisArray{JuMP.VariableRef, 2, Tuple{Int64, Int64}} with 3 entries:
  [1, 1]  =  y[1,1]
  [2, 2]  =  y[2,2]
  [2, 3]  =  y[2,3]

For x, from Julia’s (column-major storage) standpoint, j is the “major” or “outer” index.
But for y, i becomes the “major” or “outer” index.

What I’m thinking is that if we changed JuMP’s grammar for y’s construction to

JuMP.@variable(model, y[j=ifelse(i>1, 2:3, 1:1), i=1:2])

it would seem more consistent to me, resembling the math textbook style, e.g.

" some assertion about s holds, \forall s \in S(n), n \in \mathbb{Z}"

What about adding a kwarg option obj to JuMP.@variable so that we can specify objective coefficients, just like the existing lower_bound?

I don’t think I want to add this. Use @objective or set_objective_coefficient instead.

The reason that lower_bound etc. exist is to support anonymous variables with bounds, e.g. @variable(model, lower_bound = 0). I strongly prefer that people use the >= syntax where possible.
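To illustrate the distinction with a minimal sketch (variable names are arbitrary):

```julia
julia> import JuMP

julia> model = JuMP.Model();

julia> JuMP.@variable(model, x >= 0);                 # named variable: the >= syntax works

julia> y = JuMP.@variable(model, lower_bound = 0);    # anonymous: only the kwarg can attach the bound
```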

it’s a different issue, but I put it here for convenience

In the future, please make separate posts for separate issues.

I think the design of JuMP’s containers is somewhat inconsistent

The loops are unrolled left-to-right, following Julia’s convention:

julia> [(i, j) for i in 1:2, j in 1:3]
2×3 Matrix{Tuple{Int64, Int64}}:
 (1, 1)  (1, 2)  (1, 3)
 (2, 1)  (2, 2)  (2, 3)

Changing this is a breaking change that we will not be making.

I think it’s not unreasonable to have an obj kwarg option, given there is a similar option
start::Float64: specify the value passed to set_start_value for each variable.

This will be helpful for linear programs.

e.g. My realistic code

Δ_ub(h, Frac, PMax) = ifelse(h == :l, identity, x -> 1 - x)(Frac) * PMax
Δ_obj(h, Mul, Slope) = ifelse(h == :l, 1, Mul) * Slope

function add_p(model, T, Other)
    p = JuMP.@variable(model, [z=Other.Zone, o=eachindex(Other.N[z]), t=1:T, h=(:h, :l)],
        lower_bound = 0,
        upper_bound = Δ_ub(h, Other.pCost[z].KneeFraction[o], Other.PMax[z][o])
    )
    for z=Other.Zone, o=eachindex(Other.N[z]), t=1:T, h=(:h, :l)
        JuMP.set_objective_coefficient(model, p[z,o,t,h], Δ_obj(h, Other.pCost[z].Mul[o], Other.pCost[z].Slope[o]))
    end
    p
end

function add_p0_δ0_δ(model, T, S, Reserve)
    p0 = JuMP.@variable(model, [z=Reserve.Zone, r=eachindex(Reserve.N[z]), t=1:T, h=(:h, :l)],
        lower_bound = 0,
        upper_bound = Δ_ub(h, Reserve.p0Cost[z].KneeFraction[r], Reserve.PMax[z][r])
    )
    for z=Reserve.Zone, r=eachindex(Reserve.N[z]), t=1:T, h=(:h, :l)
        JuMP.set_objective_coefficient(model, p0[z,r,t,h], Δ_obj(h, Reserve.p0Cost[z].Mul[r], Reserve.p0Cost[z].Slope[r]))
    end
    ...
end

It would save me those for loops to have an obj kwarg.
(I don’t know whether there’s a better design.)

Not sure if you were referring to Performance problems with sum-if formulations · JuMP


I did some tests and my findings are:

  • Creating anonymous variables (and thus using the upper_bound keyword inside the macro) performs no worse than the normal style @variable(model, x[...] <= ...).
  • The suggestion “write code inside macros” does make sense. I’m very curious how the JuMP macros can achieve better performance than calling the functional API directly. (Can you tell me why?)

I don’t want to add it because it adds more complexity to JuMP in exchange for a small benefit to a small number of users.

It’s also probably faster in most cases to build a single objective function instead of setting the objective coefficients variable-by-variable.

There are also many ambiguities that can make the code more difficult to reason about. As an example, what happens if you call @objective after objective_coefficient = c?

The most useful case for specifying the objective coefficient when adding a variable is in linear column generation. But we’ve already discussed this and decided that set_objective_coefficient is okay.

creating anonymous variables (and thus using the upper_bound keyword inside the macro) performs no worse than the normal style

Anonymous variables are unrelated to performance. See Variables · JuMP

I’m very curious how the JuMP macros can achieve better performance than calling functional API directly

I’ll reply to your other thread.

I surmise that some optimized formulation is integrated into the macros, which accounts for this phenomenon. In a lower-level language (e.g. C with Gurobi’s C API) I don’t believe using the functional API to set the objective is any slower. I also think that, aesthetically speaking, using the functional API to set the objective variable-by-variable is preferable.


Now I tend to think that in Julia it’s better to leave only the simplest for-loop logic to JuMP’s macros.
As such, I rewrote my code from post #3 as

prepare_ub(GD) = (
    l = [[GD.pCost[z].KneeFraction[g] * GD.PMax[z][g] for g=eachindex(GD.N[z])] for z=GD.Zone],
    h = [[(1 - GD.pCost[z].KneeFraction[g]) * GD.PMax[z][g] for g=eachindex(GD.N[z])] for z=GD.Zone]
)
prepare_h_cost(GD) = [[GD.pCost[z].Slope[g] * GD.pCost[z].Mul[g] for g=eachindex(GD.N[z])] for z=GD.Zone]
build_pCost(model, GD, T, p, H_Cost) = JuMP.@expression(model, sum(
    p[z, g, t, :l] * GD.pCost[z].Slope[g] +
    p[z, g, t, :h] * H_Cost[z][g]
    for z=GD.Zone for g=eachindex(GD.N[z]) for t=1:T
))
add_pwl_power(model, GD, T, UB) = JuMP.@variable(model,
    [z=GD.Zone, g=eachindex(GD.N[z]), 1:T, h=(:h, :l)],
    lower_bound = 0, upper_bound = UB[h][z][g]
)
add_pwl_power(model, GD, T) = add_pwl_power(model, GD, T, prepare_ub(GD))
build_pCost(model, GD, T, p) = build_pCost(model, GD, T, p, prepare_h_cost(GD))

This idea is also known as a “function barrier” in Julia.
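For readers unfamiliar with the term, here is a minimal, self-contained sketch of a function barrier (hypothetical names, no JuMP involved): untyped data is passed through a function call so that the inner function gets compiled for the concrete type.

```julia
# d[:v] is inferred as Any inside caller, so using it directly would be slow.
# The call kernel(v) is the “barrier”: Julia specializes kernel on typeof(v).
kernel(v) = sum(abs2, v)

function caller(d::Dict{Symbol,Any})
    v = d[:v]          # type Any as far as the compiler knows
    return kernel(v)   # fast, type-stable code runs inside kernel
end

caller(Dict{Symbol,Any}(:v => [1.0, 2.0, 3.0]))  # → 14.0
```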