Inner-product syntax is neater than sum(index) in JuMP modeling, but triggers a Warning?

Continuing the discussion from JuMP slow model generation:

I figured out a novel, neat way to write a particular kind of constraint; however, it triggers the same warning as the post above did. Can we somehow improve this?

See the following code:

############## A motivating example ##############
import JuMP
I = 4 # change 4 to 20004 to see the Warning
M_vector = [rand(2, 3) for _ in 1:I];
M_mean = rand(2, 3);
model = JuMP.Model();
# The following line creates a probability decision vector
JuMP.@variable(model, p[1:I] >= 0); JuMP.@constraint(model, sum(p) == 1);
# The following are meant to be 4 equivalent ways to write a Mean constraint
# 1st: usable, fast
JuMP.@constraint(model, sum(p[i] * M_vector[i] for i in 1:I) .== M_mean) # correct, but cumbersome!
# 2nd: unusable currently
import LinearAlgebra: dot as ip
JuMP.@constraint(model, ip(p, M_vector) .== M_mean) # elegantly neat, but LinearAlgebra doesn't support this combination (regrettably)
# 3rd (my novel idea): usable, but currently slow, with a Warning when I is large
function my_ip(p, vector) return mapreduce(prod, +, zip(p, vector)) end # generic inner product: sum of p[i] * vector[i]
JuMP.@constraint(model, my_ip(p, M_vector) .== M_mean)
# 4th: If we want to piggyback on ip (inner product) like the 2nd method above, we could write
foreach([[M_vector[i][e] for i in eachindex(p)] for e in eachindex(M_mean)], M_mean) do l, r
    JuMP.@constraint(model, ip(p, l) == r)
end 

# if I = 20004, the 1st and 4th methods are fast; the 3rd method is slow
# Can we somehow improve the 3rd method?

############## Another example ##############
I = 4
M_vector = rand(2, 3, I);
M_mean = rand(2, 3);
model = JuMP.Model();
JuMP.@variable(model, p[1:I] >= 0); JuMP.@constraint(model, sum(p) == 1);
# instead of writing 
JuMP.@constraint(model, sum(p[i] * M_vector[:, :, i] for i in 1:I) .== M_mean)
# we could write
JuMP.@constraint(model, my_ip(p, eachslice(M_vector; dims = 3)) .== M_mean)

An unexpected new finding is that this warning persists even if I opt not to use mapreduce (I only used LinearAlgebra.dot).

The whole code has 310 lines (copy-paste it into the REPL if you want to try it). The warning should appear within 2 minutes.

import JuMP
import Gurobi
import Random
import Distributions: Uniform as UDistri
import LinearAlgebra: dot as ip
import LinearAlgebra: norm as norm

const UPDTOL, argZ_TimeLimit = 0.01, 8.0
GRB_ENV = Gurobi.Env()
Random.seed!(3)
function GurobiDirectModel() return JuMP.direct_model(Gurobi.Optimizer(GRB_ENV)) end
function jvr6(x) return round(JuMP.value(x); digits = 6) end # primal value, rounded to 6 digits
function jdr6(x) return round(JuMP.dual(x); digits = 6) end # dual value, rounded to 6 digits
function optimise(m) return (JuMP.optimize!(m); JuMP.termination_status(m)) end # solve and report status
# function my_ip(x::AbstractVector{JuMP.VariableRef}, y)
#     JuMP.@expression(JuMP.owner_model(first(x)), sum(xi * yi for (xi, yi) in zip(x, y)))
# end
function opt_ass_opt(m, str)
    sta = optimise(m)
    sta == JuMP.OPTIMAL || error(str * ": " * string(sta))
end
function opt_ass_optortime(m, str)
    sta = optimise(m)
    sta == JuMP.OPTIMAL && return
    sta != JuMP.TIME_LIMIT && error(str * ": " * string(sta))
    JuMP.has_values(m) || error(str * ": no primal solution")
end
function brcs(v) return ones(T) * transpose(v) end # broadcast a vector v into a T × length(v) matrix
function gen_load_pattern(fLM)
    fL = length(fLM)
    fLP = rand(UDistri(0.7 * fLM[1], 1.4 * fLM[1]), T, 1)
    for i in 2:fL
        fLP = [fLP rand(UDistri(0.7 * fLM[i], 1.4 * fLM[i]), T)]
    end
    return fLP
end

begin # base case: such that the first stage solution is to turn on all the generators
    G = 2;
    UT = DT = 3;
    PI = [0.45, 0.375, 0.5];
    PS = [4.5, 4, 5.5];
    ZS = [0, 0,   1];
    CST = [0.63, 0.60, 0.72];
    CG = [0.8, 0.68, 0.72];
    CL_BASE = [1.6, 1.3776, 1.4442];
    NW = [2, 3];
    W = 2;
    NL = [4, 5, 6];
    L = 3;
    LM = [4.0, 3.5, 3.0];
    T = 24;
    (CL = brcs(CL_BASE); CL[end, :] *= 2);
end
MZ = gen_load_pattern(LM)
MY, yM, NY = let
    YABSMAX = [1.6, 2.3]
    Pr = UDistri.(0.2 * rand(W), YABSMAX / 1.1)
    Ybaseline = [rand(Pr[w]) for t in 1:T, w in 1:W]
    YMIN, YMAX = 0.9 * Ybaseline, 1.1 * Ybaseline # used to construct vertices
    all0() = Bool.(zero(YMIN))
    vertexList = [all0() for _ in eachindex(YMIN)]
    for i in eachindex(YMIN)
        vertexList[i][i] = true
    end
    pushfirst!(vertexList, all0())
    for i in 1:1+length(YMIN)
        push!(vertexList, .!(vertexList[i]))
    end
    ref = vertexList[1]
    yM = [(ref[i,j] ? YMAX[i,j] : YMIN[i,j]) for i in eachindex(eachrow(YMIN)), j in eachindex(eachcol(YMIN))]
    yM = reshape(yM, (T, W, 1))
    popfirst!(vertexList)
    for ref in vertexList
        yM = cat(yM, [(ref[i,j] ? YMAX[i,j] : YMIN[i,j]) for i in eachindex(eachrow(YMIN)), j in eachindex(eachcol(YMIN))]; dims = 3)
    end # ✅ after this line, yM has been fully generated
    NY = size(yM, 3)
    PrVec = rand(UDistri(.4, .6), NY)
    PrVec = PrVec / sum(PrVec)
    MY = sum(yM[:, :, i] * PrVec[i] for i in 1:NY)
    @assert all(MY .< YMAX) && all(MY .> YMIN)
    function is_strict_in(MY, yM)
        NY = size(yM, 3)
        ø = GurobiDirectModel()
        JuMP.set_silent(ø)
        JuMP.@variable(ø, c[1:NY] >= 1e-5)
        JuMP.@constraint(ø, sum(c) == 1)
        JuMP.@constraint(ø, sum(yM[:, :, i] * c[i] for i in 1:NY) .== MY)
        status = optimise(ø)
        status == JuMP.OPTIMAL && return true
        return false
    end
    @assert is_strict_in(MY, yM)
    MY, yM, size(yM)[3]
end;
Z_list = [MZ]; # must involve MZ to ensure feasibility
function recruit(Z_list, Z_new)
    for Z in Random.shuffle(Z_list)
        norm(Z .- Z_new, Inf) < 5e-5 && return false
    end
    push!(Z_list, Z_new); return true
end

trh = GurobiDirectModel(); JuMP.set_objective_sense(trh, JuMP.MIN_SENSE); JuMP.@variable(trh, oLh); JuMP.set_silent(trh);
JuMP.@variable(trh, trh_u[t = 1:T, g = 1:G+1], Bin);
JuMP.@variable(trh, trh_v[t = 1:T, g = 1:G+1], Bin);
JuMP.@variable(trh, trh_x[t = 1:T, g = 1:G+1], Bin);
let # 1st stage feasible region
    JuMP.@constraint(trh, trh_x .- [transpose(ZS); trh_x[1:T-1, :]] .== trh_u .- trh_v)
    JuMP.@constraint(trh, [g = 1:G+1, t = 1:T-UT+1], sum(trh_x[i, g] for i in t:t+UT-1) >= UT * trh_u[t, g])
    JuMP.@constraint(trh, [g = 1:G+1, t = T-UT+1:T], sum(trh_x[i, g] - trh_u[t, g] for i in t:T) >= 0)
    JuMP.@constraint(trh, [g = 1:G+1, t = 1:T-DT+1], sum(1 - trh_x[i, g] for i in t:t+DT-1) >= DT * trh_v[t, g])
    JuMP.@constraint(trh, [g = 1:G+1, t = T-DT+1:T], sum(1 - trh_x[i, g] - trh_v[t, g] for i in t:T) >= 0)
end;
u, v, x = let
    JuMP.set_objective_function(trh, ip(brcs(CST), trh_u)) # to derive a feas (u, v, x)
    opt_ass_opt(trh, "trh")
    u = jvr6.(trh_u);
    v = jvr6.(trh_v);
    x = jvr6.(trh_x);
    u, v, x
end

fx2Z = GurobiDirectModel(); JuMP.set_objective_sense(fx2Z, JuMP.MAX_SENSE); JuMP.set_silent(fx2Z);
f_c_x, f_c_Y, f_c_Z = let
    JuMP.@variable(fx2Z, Dbal[t = 1:T]);
    JuMP.@variable(fx2Z, Dvp[t = 1:T, w = 1:W] >= 0);
    JuMP.@variable(fx2Z, Dzt[t = 1:T, l = 1:L] >= 0);
    JuMP.@variable(fx2Z, Dvr[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@variable(fx2Z, Dpi[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@variable(fx2Z, Dps[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@constraint(fx2Z, [t = 1:T, g = 1:G+1], Dps[t, g] - Dpi[t, g] - Dvr[t, g] + CG[g] == 0);
    JuMP.@constraint(fx2Z, [t = 1:T, g = 1:G+1], Dbal[t] - CG[g] + Dvr[t, g] >= 0);
    JuMP.@constraint(fx2Z, [t = 1:T, w = 1:W], Dvp[t, w] + Dbal[t] >= 0);
    JuMP.@constraint(fx2Z, [t = 1:T, l = 1:L], Dzt[t, l] - CL[t, l] - Dbal[t] >= 0);
    JuMP.@expression(fx2Z, f_c_x, brcs(PI) .* Dpi .- brcs(PS) .* Dps); # in x2
    JuMP.@expression(fx2Z, f_c_Y, -Dvp); # in x2
    JuMP.@expression(fx2Z, f_c_Z, CL .- Dzt)
    f_c_x, f_c_Y, f_c_Z
end;
function set_f_objective(x, Y, Z)
    JuMP.set_objective_function(fx2Z,
          ip(f_c_x, x)
        + ip(f_c_Y, Y)
        + ip(f_c_Z, Z)
    )
end
function get_f_value(x, Y, Z)
    set_f_objective(x, Y, Z)
    opt_ass_opt(fx2Z, "fx2Z")
    JuMP.objective_value(fx2Z)
end
function get_f_cut(x, Y, Z)
    set_f_objective(x, Y, Z)
    opt_ass_opt(fx2Z, "fx2Z")
    cn = ip(jvr6.(f_c_Z), Z) + ip(jvr6.(f_c_Y), Y)
    px = jvr6.(f_c_x)
    return cn, px
end
function add_cut_for_oLh(cn, px)
    JuMP.@constraint(trh, oLh >= cn + ip(px, trh_x))
end
function get_cut_oLh_coeff(x, Q, P)
    cn, px = 0., zero(x)
    for j in 1:Q[1]
        for i in 1:P[1]
            p = Q[2][j] * P[2][i]
            this_cn, this_px = get_f_cut(x, Q[3][:, :, j], P[3][i])
            cn += p * this_cn
            px += p * this_px
        end
    end
    return cn, px
end
function add_cut_for_oLh(x, Q, P, oLh_check)
    cn, px = get_cut_oLh_coeff(x, Q, P)
    if oLh_check < cn + ip(px, x) - UPDTOL
        add_cut_for_oLh(cn, px)
        return true
    end
    return false
end

argZ = GurobiDirectModel(); JuMP.set_objective_sense(argZ, JuMP.MAX_SENSE); JuMP.set_attribute(argZ, "TimeLimit", argZ_TimeLimit); JuMP.set_silent(argZ);
JuMP.@variable(argZ, argZ_Z[t = 1:T, l = 1:L] == MZ[t, l]); # TBD
argZ_c_x, argZ_c_Y, argZ_c_b2, argZ_cn = let
    JuMP.@variable(argZ, Dbal[t = 1:T]);
    JuMP.@variable(argZ, Dvp[t = 1:T, w = 1:W] >= 0);
    JuMP.@variable(argZ, Dzt[t = 1:T, l = 1:L] >= 0);
    JuMP.@variable(argZ, Dvr[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@variable(argZ, Dpi[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@variable(argZ, Dps[t = 1:T, g = 1:G+1] >= 0);
    JuMP.@constraint(argZ, [t = 1:T, g = 1:G+1], Dps[t, g] - Dpi[t, g] - Dvr[t, g] + CG[g] == 0);
    JuMP.@constraint(argZ, [t = 1:T, g = 1:G+1], Dbal[t] - CG[g] + Dvr[t, g] >= 0);
    JuMP.@constraint(argZ, [t = 1:T, w = 1:W], Dvp[t, w] + Dbal[t] >= 0);
    JuMP.@constraint(argZ, [t = 1:T, l = 1:L], Dzt[t, l] - CL[t, l] - Dbal[t] >= 0);
    JuMP.@expression(argZ, argZ_c_x, brcs(PI) .* Dpi .- brcs(PS) .* Dps); # in x2
    JuMP.@expression(argZ, argZ_c_Y, -Dvp); # in x2
    JuMP.@expression(argZ, argZ_c_b2, -argZ_Z); # is beta2
    JuMP.@expression(argZ, argZ_cn, ip(CL .- Dzt, argZ_Z)); # the rest
    argZ_c_x, argZ_c_Y, argZ_c_b2, argZ_cn
end;
function set_argZ_objective(x, Y, b2) # (x2, b2)
    JuMP.set_objective_function(argZ, argZ_cn
        + ip(argZ_c_x, x)
        + ip(argZ_c_Y, Y)
        + ip(argZ_c_b2, b2)
    )
end
tr2 = GurobiDirectModel(); JuMP.@variable(tr2, oΛ2); JuMP.set_silent(tr2); # ✅
JuMP.@variable(tr2, tr2_b2[eachindex(eachrow(MZ)), eachindex(eachcol(MZ))]);
JuMP.@variable(tr2, tr2_x[t = 1:T, g = 1:G+1] == x[t, g]);
JuMP.@variable(tr2, tr2_Y[t = 1:T, w = 1:W] == MY[t, w]);
function add_cut_for_oΛ2(cn, px, pY, pb)
    JuMP.@constraint(tr2, oΛ2 >= cn + ip(px, tr2_x) + ip(pY, tr2_Y) + ip(pb, tr2_b2)) 
end
let
    set_argZ_objective(x, (local Y = MY), (local b2 = zero(MZ)));
    opt_ass_optortime(argZ, "argZ")
    cn, px, pY = JuMP.value(argZ_cn), jvr6.(argZ_c_x), jvr6.(argZ_c_Y)
    add_cut_for_oΛ2(cn, px, pY, (local pb2 = -JuMP.fix_value.(argZ_Z)))
    JuMP.@objective(tr2, Min, ip(MZ, tr2_b2) + oΛ2); # ❄️ stable obj
    try
        opt_ass_opt(tr2, "tr2")
    finally # finish establishing Z's support
        JuMP.unfix.(argZ_Z)
        (JuMP.set_lower_bound.(argZ_Z, 0.9 * MZ); JuMP.set_upper_bound.(argZ_Z, 1.1 * MZ))
        JuMP.@variable(argZ, argZ_a[eachindex(eachrow(MZ)), eachindex(eachcol(MZ))])
        JuMP.@constraint(argZ, argZ_a .>= argZ_Z .- MZ)
        JuMP.@constraint(argZ, argZ_a .>= MZ .- argZ_Z)
        JuMP.@constraint(argZ, [t = 1:T], sum(argZ_a[t, :] ./ (0.1 * MZ[t, :])) <= L/3)
    end
end
function psi_via_19(x, i)
    JuMP.fix.(tr2_x, x)
    JuMP.fix.(tr2_Y, yM[:, :, i])
    opt_ass_opt(tr2, "tr2")
    JuMP.objective_value(tr2)
end
function get_proper_Q(x)
    JuMP.set_objective_function(fr9, ip(fr9_p, [psi_via_19(x, i) for i in 1:NY]));
    opt_ass_opt(fr9, "fr9")
    prob_Y = jvr6.(fr9_p)
    itmp = prob_Y .> 1e-6
    proper_Q_p = prob_Y[itmp]
    proper_Q_n = length(proper_Q_p)
    proper_Q_Y = yM[:, :, itmp]
    proper_Q_n, proper_Q_p, proper_Q_Y
end
function get_proper_P(x, Y, Z_list)
    fr10 = GurobiDirectModel(); JuMP.set_silent(fr10); JuMP.set_objective_sense(fr10, JuMP.MAX_SENSE); # JuMP.set_attribute(fr10, "Presolve", 0);
    JuMP.@variable(fr10, p[eachindex(Z_list)] >= 0); JuMP.@constraint(fr10, sum(p) == 1);
    # JuMP.@constraint(fr10, my_ip(p, Z_list) .== MZ) # this style triggers Warning
    JuMP.@constraint(fr10, sum(p[i] * Z_list[i] for i in eachindex(p)) .== MZ) # this equivalent style also triggers
    JuMP.set_objective_function(fr10, ip(p, [get_f_value(x, Y, Z) for Z in Z_list]));
    opt_ass_opt(fr10, "fr10")
    prob_Z = jvr6.(p)
    itmp = prob_Z .> 1e-6
    proper_P_p = prob_Z[itmp]
    proper_P_n = length(proper_P_p)
    proper_P_Z = Z_list[itmp]
    proper_P_n, proper_P_p, proper_P_Z
end
fr9 = GurobiDirectModel(); JuMP.set_objective_sense(fr9, JuMP.MAX_SENSE); JuMP.set_silent(fr9);
JuMP.@variable(fr9, fr9_p[1:NY] >= 0); JuMP.@constraint(fr9, sum(fr9_p) == 1);
# JuMP.@constraint(fr9, my_ip(fr9_p, [yM[:, :, i] for i in 1:size(yM, 3)]) .== MY);
JuMP.@constraint(fr9, sum(fr9_p[i] * yM[:, :, i] for i in 1:NY) .== MY);
proper_Q = get_proper_Q(x) # ✅
JuMP.fix.(tr2_x, x);
i = 1; # for one Y
Y = proper_Q[3][:, :, i];
JuMP.fix.(tr2_Y, Y);
opt_ass_opt(tr2, "tr2");
b2 = jvr6.(tr2_b2); # 🥑
set_argZ_objective(x, Y, b2);
opt_ass_optortime(argZ, "argZ");
Z_new = jvr6.(argZ_Z);
recruit(Z_list, Z_new);
proper_P = get_proper_P(x, Y, Z_list) # ✅
cn, px = get_cut_oLh_coeff(x, proper_Q, proper_P); add_cut_for_oLh(cn, px)
JuMP.set_objective_function(trh, ip(brcs(CST), trh_u) + oLh) # ❄️ stable obj
opt_ass_opt(trh, "trh")
@info "trh's initialization succeed, main loop begins"

updVec = trues(2)
while true
    opt_ass_opt(trh, "trh")
    x = jvr6.(trh_x); oLh_check = JuMP.value(oLh); lb = JuMP.objective_bound(trh);
    JuMP.fix.(tr2_x, x) # 🖌️
    proper_Q = get_proper_Q(x) # ✅
    updVec .= false
    for j in 1:proper_Q[1]
        Y = proper_Q[3][:, :, j]
        JuMP.fix.(tr2_Y, Y) # 🖌️
        opt_ass_opt(tr2, "tr2")
        b2 = jvr6.(tr2_b2); oΛ2_check = JuMP.value(oΛ2) # 🥑✂️
        set_argZ_objective(x, Y, b2); opt_ass_optortime(argZ, "argZ")
        Z_new = jvr6.(argZ_Z)
        cn, px, pY, pb = JuMP.value(argZ_cn), jvr6.(argZ_c_x), jvr6.(argZ_c_Y), jvr6.(argZ_c_b2)
        if oΛ2_check < cn + ip(px, x) + ip(pY, Y) + ip(pb, b2) - UPDTOL
            add_cut_for_oΛ2(cn, px, pY, pb)
            updVec[2] = true
        end   
        recruit(Z_list, Z_new)
        proper_P = get_proper_P(x, Y, Z_list) # ✅
        add_cut_for_oLh(x, proper_Q, proper_P, oLh_check) && (updVec[1] = true)
    end
    @info "$updVec, lb = $lb"
    all(updVec .== false) && break
end



The result should look like:

[ Info: Bool[1, 1], lb = -208.90723332481141
┌ Warning: The addition operator has been used on JuMP expressions a large number of times. This warning is safe to ignore but may indicate that model generation is slower than necessary. For performance reasons, you should not add expressions in a loop. Instead of x += y, use add_to_expression!(x,y) to modify x in place. If y is a single variable, you may also use add_to_expression!(x, coef, y) for x += coef*y.
└ @ JuMP K:\judepot1112\packages\JuMP\i68GU\src\operators.jl:282
[ Info: Bool[1, 1], lb = 0.6
...

The warning happens when you use operators that do not use the efficient in-place mutable arithmetics.

We have dot specialized for dot(::Vector, ::Vector), but not dot(::Vector, ::Vector{<:Matrix}).
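
For example, the plain-vector case does hit a fast specialized method (a sketch; model, p, and I are as in the first example above):

c = rand(I)
JuMP.@constraint(model, ip(p, c) == 1.0) # dot(::Vector{VariableRef}, ::Vector{Float64}) builds the AffExpr in place: fast, no warning even for large I

whereas ip(p, M_vector) finds no specialized method and falls back to LinearAlgebra's generic code.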

One suggestion would be

function ip(model, x, y)
    JuMP.@expression(model, sum(xi * yi for (xi, yi) in zip(x, y)))
end

JuMP.@constraint(model, ip(model, p, M_vector) .== M_mean)

or

function ip(x::AbstractVector{JuMP.VariableRef}, y)
    model = JuMP.owner_model(first(x))
    JuMP.@expression(model, sum(xi * yi for (xi, yi) in zip(x, y)))
end

JuMP.@constraint(model, ip(p, M_vector) .== M_mean)

@blegat do we want to cover this case? I don’t know if it’s very common.

Another option that we could consider fixing is:

julia> using JuMP

julia> model = Model()
A JuMP Model
├ solver: none
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none

julia> @variable(model, x[1:3])
3-element Vector{VariableRef}:
 x[1]
 x[2]
 x[3]

julia> y = [rand(2, 2) for _ in 1:3]
3-element Vector{Matrix{Float64}}:
 [0.5701080035016627 0.6022404547340912; 0.5605957743951222 0.6061453626876424]
 [0.362357957737371 0.5970481709139359; 0.03292266575902236 0.34169043415484046]
 [0.8061135686585252 0.764917425247723; 0.7067819448426711 0.12129596496862716]

julia> @which x' * y
*(u::Union{LinearAlgebra.Adjoint{T, var"#s997"}, LinearAlgebra.Transpose{T, var"#s997"}} where {T, var"#s997"<:(AbstractVector)}, v::AbstractVector)
     @ LinearAlgebra ~/.julia/juliaup/julia-1.10.7+0.x64.apple.darwin14/share/julia/stdlib/v1.10/LinearAlgebra/src/adjtrans.jl:464

julia> x' * y
ERROR: MethodError: no method matching zero(::Type{Matrix{AffExpr}})

Closest candidates are:
  zero(::Type{Union{}}, Any...)
   @ Base number.jl:310
  zero(::Type{Pkg.Resolve.FieldValue})
   @ Pkg ~/.julia/juliaup/julia-1.10.7+0.x64.apple.darwin14/share/julia/stdlib/v1.10/Pkg/src/Resolve/fieldvalues.jl:38
  zero(::Type{Dates.Date})
   @ Dates ~/.julia/juliaup/julia-1.10.7+0.x64.apple.darwin14/share/julia/stdlib/v1.10/Dates/src/types.jl:439
  ...

Stacktrace:
 [1] neutral_element(::typeof(+), T::Type)
   @ MutableArithmetics ~/.julia/packages/MutableArithmetics/BLlgj/src/reduce.jl:25
 [2] fused_map_reduce(::typeof(MutableArithmetics.add_mul), ::LinearAlgebra.Adjoint{VariableRef, Vector{…}}, ::Vector{Matrix{…}})
   @ MutableArithmetics ~/.julia/packages/MutableArithmetics/BLlgj/src/reduce.jl:44
 [3] _dot_nonrecursive(lhs::LinearAlgebra.Adjoint{VariableRef, Vector{VariableRef}}, rhs::Vector{Matrix{Float64}})
   @ MutableArithmetics ~/.julia/packages/MutableArithmetics/BLlgj/src/dispatch.jl:49
 [4] *(u::LinearAlgebra.Adjoint{VariableRef, Vector{VariableRef}}, v::Vector{Matrix{Float64}})
   @ LinearAlgebra ~/.julia/juliaup/julia-1.10.7+0.x64.apple.darwin14/share/julia/stdlib/v1.10/LinearAlgebra/src/adjtrans.jl:464
 [5] top-level scope
   @ REPL[8]:1
Some type information was truncated. Use `show(err)` to see complete types.

julia> z = rand(3)
3-element Vector{Float64}:
 0.11627605066264912
 0.3058601513730612
 0.44928823383281546

julia> z' * y
2×2 Matrix{Float64}:
 0.539298  0.596308
 0.392802  0.229487

When I was a noob I used to write the inner product as x' * y, but nowadays I find ip(x, y) neater, where ip is imported from LinearAlgebra.dot.

I’ve updated my 2nd post, yet I cannot figure out where that warning comes from.
Please tell me once you’ve found it.

Here’s the code for the warning:

There is a model-level variable that tracks how many times two AffExpr are added together, and the warning is thrown once this reaches 20,000. So you won’t find the warning on small examples.
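
You can trip the counter deliberately with a toy loop (a sketch; the 20,000 threshold is the one just mentioned, and the warning prints at most once per model):

import JuMP
function slow_sum(x) # rebinds expr every iteration
    expr = JuMP.AffExpr(0.0)
    for xi in x
        expr = expr + 1.0 * xi # AffExpr + AffExpr allocates a fresh expression and, once expr is large, bumps the per-model counter
    end
    return expr
end
function fast_sum(x) # mutates expr in place, as the warning text recommends
    expr = JuMP.AffExpr(0.0)
    for xi in x
        JuMP.add_to_expression!(expr, 1.0, xi)
    end
    return expr
end
model = JuMP.Model(); JuMP.@variable(model, x[1:30_000]);
slow_sum(x); # triggers the Warning
fast_sum(x); # silent, and much faster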

I’m guessing your issue is:

JuMP.set_objective_function(fx2Z,
          ip(f_c_x, x)
        + ip(f_c_Y, Y)
        + ip(f_c_Z, Z)
    )

Just do:

JuMP.@objective(fx2Z, Max,
          ip(f_c_x, x)
        + ip(f_c_Y, Y)
        + ip(f_c_Z, Z)
    )

Although your finding seems workable, isn’t the former functional API more fundamental, intuitive, and concise?
I’m going to guess you want to fix this problem in JuMP, will you? :smiling_face_with_tear:
Is there any motivation for me to resort to the macro API?

So if I’m dealing with an optimization problem whose parameters appear in the objective (e.g. the dual problem of a program with right-hand-side uncertainty), then the style should be like:

model = JuMP.Model()
JuMP.@objective(model, Max, JuMP.objective_function(model)) # set the sense of the model initially
for i in 1:3
    JuMP.@objective(model, JuMP.objective_sense(model), i) # then change the function later
end

I’m going to guess you want to fix this problem in JuMP, will you?

I think you might mean “you won’t fix this problem”, in which case, no, we won’t be changing the current behavior.

For the motivation, see Performance tips · JuMP

Note that if you followed the text in the warning, it would suggest:

expr = ip(f_c_x, x)
add_to_expression!(expr, ip(f_c_Y, Y))
add_to_expression!(expr, ip(f_c_Z, Z))
JuMP.set_objective_function(fx2Z, expr)

The key is that as you have written the original function, JuMP doesn’t know that it can modify the results of ip(f_c_x, x) in-place. Therefore, it needs to allocate memory for each temporary expression.

I didn’t look through your code in detail, but this probably happens in a bunch of other places as well.
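
For instance, set_argZ_objective from your script could get the same treatment (a sketch, reusing the argZ, argZ_cn, argZ_c_x, argZ_c_Y, and argZ_c_b2 defined above):

function set_argZ_objective(x, Y, b2)
    expr = copy(argZ_cn) # copy, so the stored expression itself is not mutated
    JuMP.add_to_expression!(expr, ip(argZ_c_x, x))
    JuMP.add_to_expression!(expr, ip(argZ_c_Y, Y))
    JuMP.add_to_expression!(expr, ip(argZ_c_b2, b2))
    JuMP.set_objective_function(argZ, expr)
end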

Yes, I take your point: @expression should be used for performance.
So is it true that LinearAlgebra.dot is now well supported, with performance guarantees, inside the JuMP macros like
@expression, @objective, @constraint, and @build_constraint?