 # How to set up NLconstraints from multi-output user-defined function?

I have a nonlinear user-defined function, y = myfun(x), where the output y is a 1×2 array.
I want to use the output y to construct constraints. Since y has 2 components and each component gives one constraint equation, I will have 2 constraints. In my case, the constraints will be something like: y[1] >= 1, y[2] >= 1.

When the output length is n = 1, I did the following and it worked fine.

``````julia
using JuMP
using Ipopt

dim = 2
x0 = ones(1, dim) .* 35

# This is where I am confused: I have to
# return y[1] in order to make it work,
# and I don't know what to do if I want to
# return both y[1] and y[2].
function myfun(x...)
    y = zeros(eltype(x), 2)
    y[1] = sin(x[1]) + x[1]^2
    y[2] = 1 / x[2] + x[2]^2
    return y[1]
end

model = Model(Ipopt.Optimizer)
@variable(model, 0.1 <= x[i = 1:dim] <= 35.0, start = x0[i])

register(model, :myfun, dim, myfun, autodiff = true)
@NLconstraint(model, nlcon, myfun(x...) >= 1)

@objective(model, Min, sum(x[i] for i in 1:dim))

optimize!(model)

@show value.(x);
@show objective_value(model);

``````

Notice that in myfun(x...) the output y has two components, but I have to return y[1] to make this work, and then I get one constraint only. So what should I do if I want to keep both y[1] and y[2]? Any help will be appreciated. Thank you in advance.

> So what should I do if I want to keep both y[1] and y[2]?

You cannot. User-defined nonlinear functions must return a single scalar value.

If you can, you're better off writing these out algebraically:

``````julia
@NLconstraint(model, sin(x[1]) + x[1]^2 >= 1)
@NLconstraint(model, 1 / x[2] + x[2]^2 >= 1)
``````

Got it, and thanks for your reply. But here is the truth…
The myfun(x) I am using in this post is just a toy function I created for demonstration. The actual myfun(x) returns a 1×n array all at once, so it cannot be separated as easily as this one. One way is to create two nearly identical functions that return y[1] and y[2], respectively. But I don't think this is efficient. Any ideas? Thanks.

Here is what I meant by creating two identical functions; it worked this way.

``````julia
y1(x...) = myfun(x)[1]
y2(x...) = myfun(x)[2]

register(model, :y1, dim, y1, autodiff = true)
@NLconstraint(model, y1(x...) >= -2)
register(model, :y2, dim, y2, autodiff = true)
@NLconstraint(model, y2(x...) >= -2)
``````

But I don't know whether the program will evaluate myfun(x) once or twice on each iteration.

It will call your function twice.
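You can verify this with a small call counter (a self-contained sketch; `twofun`, `g1`, and `g2` are illustrative names, not the code above):

```julia
# Count how many times the underlying multi-output function runs.
calls = Ref(0)

function twofun(x)
    calls[] += 1                       # record each evaluation
    return sum(x), sum(xi^2 for xi in x)
end

# Two scalar wrappers, as in the work-around above.
g1(x...) = twofun(x)[1]
g2(x...) = twofun(x)[2]

g1(1.0, 2.0)   # → 3.0
g2(1.0, 2.0)   # → 5.0
calls[]        # → 2: twofun ran once per wrapper
```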

The suggested work-around is to use memoization like so:

``````julia
using JuMP
using Ipopt

function myfun(x)
    return sum(xi for xi in x), sum(xi^2 for xi in x)
end

function memoized()
    cache = Dict{UInt, Any}()
    fi = (i, x) -> begin
        h = hash((x, typeof(x)))
        # Only evaluate myfun if this input has not been seen before.
        if !haskey(cache, h)
            cache[h] = myfun(x)
        end
        return cache[h][i]::Real
    end
    return (x...) -> fi(1, x), (x...) -> fi(2, x)
end

model = Model(Ipopt.Optimizer)
f1, f2 = memoized()
register(model, :f1, 3, f1; autodiff = true)
register(model, :f2, 3, f2; autodiff = true)
@variable(model, x[1:3] >= 0, start = 0.1)
@NLconstraint(model, f1(x...) <= 2)
@NLconstraint(model, f2(x...) <= 1)
@objective(model, Max, sum(x))
optimize!(model)
``````
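To check that the cache really does avoid the second evaluation, the same counter trick works on the memoized version (a self-contained sketch; `expensive` and `memoized_pair` are illustrative names, not part of the answer above):

```julia
# Stand-in for an expensive multi-output function, with a call counter.
calls = Ref(0)

function expensive(x)
    calls[] += 1
    return sum(x), sum(xi^2 for xi in x)
end

# Same memoization pattern: hash the input, evaluate once, index the cached tuple.
function memoized_pair()
    cache = Dict{UInt, Any}()
    fi = (i, x) -> begin
        h = hash((x, typeof(x)))
        if !haskey(cache, h)
            cache[h] = expensive(x)
        end
        return cache[h][i]::Real
    end
    return (x...) -> fi(1, x), (x...) -> fi(2, x)
end

g1, g2 = memoized_pair()
g1(1.0, 2.0, 3.0)   # → 6.0, evaluates expensive once
g2(1.0, 2.0, 3.0)   # → 14.0, cache hit: no second evaluation
calls[]             # → 1
```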

Thanks, Oscar. This is very helpful.