I always struggle in JuMP to model NLPs involving arrays, summations, polynomials, custom multi-argument functions, etc.
Yeah, this is a known problem. For now you’d need to do some ugly hack like:
using JuMP, Ipopt
begin
    # Grid of fixed query points.
    Q = -0.8:0.4:0.8
    model = Model(Ipopt.Optimizer)
    @variable(model, -2 <= p[1:5] <= 2)
    @variable(model, -1 <= w <= 3)
    @variable(model, -1 <= q <= 3)
    @objective(model, Min, w)
    # The legacy nonlinear interface only builds expressions inside the @NL
    # macros, so we key a Dict of @NLexpressions by query point instead of
    # constructing the expressions programmatically.
    total = Dict(
        _q => @NLexpression(
            model,
            sum(_p / sqrt(2π) * exp(-(i - _q)^2 / 2) for (i, _p) in enumerate(p))
        )
        for _q in Any[Q; q; 0.5]
    )
    l1 = Dict(
        _q => @NLexpression(model, 1 - total[_q] + 0.5 * total[0.5])
        for _q in Any[Q; q]
    )
    # One constraint per fixed query point in Q.
    @NLconstraint(
        model,
        [_q in Q],
        w * (l1[q] - l1[_q]) + (1 - w) * (total[q] - 1) <= 0
    )
    optimize!(model)
end
We’re working on fixing this, but it’s not ready yet: https://github.com/jump-dev/JuMP.jl/pull/3106#issuecomment-1592091623
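For a taste of what that rewrite is aiming for: once nonlinear expressions are first-class, the plain `@expression` / `@constraint` macros should accept nonlinear terms directly, so the Dict-of-`@NLexpression` workaround above goes away. A minimal sketch, assuming the rewrite works roughly as the linked PR describes (syntax may differ in the final release):

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
@variable(model, -2 <= p[1:5] <= 2)
@variable(model, -1 <= q <= 3)

# Hypothetical: a nonlinear sum in a plain @expression, no @NLexpression needed.
@expression(
    model,
    total,
    sum(_p / sqrt(2π) * exp(-(i - q)^2 / 2) for (i, _p) in enumerate(p)),
)

@objective(model, Min, q)
# Hypothetical: nonlinear expression reused directly in a plain @constraint.
@constraint(model, total <= 1)
optimize!(model)
```

The key difference is that `total` becomes an ordinary expression object you can index, store in containers, and combine algebraically, rather than an opaque `@NL` handle.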