I have a non-linear user-defined function, y = myfun(x), whose output y is a 1×2 array.
I want to use the output y to construct constraints. Since y has 2 components and each component yields one constraint equation, I will have 2 constraints. In my case they look like: y[1] >= 1, y[2] >= 1.
When the output length was n = 1, I did the following and it worked fine:
using JuMP
using Ipopt
dim = 2
x0 = ones((1,dim)).*35
# This is where I am confused, I have to
# return y[1] in order to make it work.
# And I don't know what to do if I want to return
# both y[1] and y[2]
function myfun(x...)
    y = zeros(eltype(x), 2)
    y[1] = sin(x[1]) + x[2]^2
    y[2] = 1/x[1] + x[2]^2
    return y[1]
end
model = Model(Ipopt.Optimizer)
@variable(model, 0.1 <= x[i = 1:dim] <= 35.0, start = x0[i])
register(model, :myfun, dim, myfun, autodiff = true)
@NLconstraint(model, nlcon, myfun(x...) >= 1)
@objective(model, Min, sum(x[i] for i in 1:dim))
optimize!(model)
@show value.(x);
@show objective_value(model);
Notice that in myfun(x...) the output y has two components, but I have to return y[1] to make this work, and that gives me one constraint only. So what should I do if I want to keep both y[1] and y[2]? Any help will be appreciated. Thank you in advance.
Got it, and thanks for your reply. But here is the catch…
The myfun(x) in this post is just a toy function I created for demonstration. The actual myfun(x) computes the whole 1×n array at once, so it cannot be separated as easily as this one. One option is to create two near-identical functions that return y[1] and y[2], respectively, but each one would redo the full computation, which I don't think is efficient. Any ideas? Thanks.
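For comparison, the two-wrapper split mentioned above would look like the sketch below (the names myfun1/myfun2 are mine, for illustration; note that the full myfun runs twice per evaluation point, which is exactly the inefficiency in question):

```julia
function myfun(x...)
    y = zeros(eltype(x), 2)
    y[1] = sin(x[1]) + x[2]^2
    y[2] = 1/x[1] + x[2]^2
    return y
end

# Two thin wrappers, each returning one component.
# Each call re-runs the entire myfun computation.
myfun1(x...) = myfun(x...)[1]
myfun2(x...) = myfun(x...)[2]

# Then register both single-output wrappers:
# register(model, :myfun1, dim, myfun1, autodiff = true)
# register(model, :myfun2, dim, myfun2, autodiff = true)
```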
The suggested workaround is to use memoization, like so:
using JuMP
using Ipopt
function myfun(x)
    return sum(xi for xi in x), sum(xi^2 for xi in x)
end

function memoized()
    cache = Dict{UInt, Any}()
    fi = (i, x) -> begin
        h = hash((x, typeof(x)))
        if !haskey(cache, h)
            cache[h] = myfun(x)
        end
        return cache[h][i]::Real
    end
    return (x...) -> fi(1, x), (x...) -> fi(2, x)
end
model = Model(Ipopt.Optimizer)
f1, f2 = memoized()
register(model, :f1, 3, f1; autodiff = true)
register(model, :f2, 3, f2; autodiff = true)
@variable(model, x[1:3] >= 0, start = 0.1)
@NLconstraint(model, f1(x...) <= 2)
@NLconstraint(model, f2(x...) <= 1)
@objective(model, Max, sum(x))
optimize!(model)
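To confirm that the cache actually saves the repeated evaluation, here is a small standalone check. The counter and the myfun_counted/memoized_counted names are mine, not part of the answer above; the memoization logic is the same.

```julia
const ncalls = Ref(0)

function myfun_counted(x)
    ncalls[] += 1                 # count full evaluations of the expensive function
    return sum(xi for xi in x), sum(xi^2 for xi in x)
end

function memoized_counted()
    cache = Dict{UInt, Any}()
    fi = (i, x) -> begin
        h = hash((x, typeof(x)))
        if !haskey(cache, h)
            cache[h] = myfun_counted(x)
        end
        return cache[h][i]::Real
    end
    return (x...) -> fi(1, x), (x...) -> fi(2, x)
end

g1, g2 = memoized_counted()
g1(1.0, 2.0, 3.0)    # computes both outputs and caches them
g2(1.0, 2.0, 3.0)    # same point: served from the cache
@show ncalls[]       # 1, not 2
```

So for a given point, both registered functions share one evaluation of the underlying n-output function.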