The JuMP documentation and this answer suggest that nonlinear user-defined functions with vector outputs need to be defined using several @operators. (The documentation actually says “User-defined operators …”.)
This post suggests that nonlinear functions can be used directly, although the date and syntax of that post point to an old version of JuMP.
The first example below also suggests that the nonlinear function can be used directly.
This works, calling the function directly inside @constraint:
import Pkg; Pkg.activate(temp=true)
Pkg.add(name="JuMP", version="1.15.1");
Pkg.add("Ipopt");
Pkg.instantiate();
import JuMP
import Ipopt
model = JuMP.Model(Ipopt.Optimizer)
function dy(x)
return [cos(x[1])*x[2]; sin(x[1]) + sin(x[2])]
end
x0 = [.1;.1]
xc = [1;0.5]
JuMP.@variable(model, x[i=1:2], start = x0[i])
JuMP.@constraint(model, dy(x) == xc)
JuMP.optimize!(model)
print("x: ", JuMP.value.(x))
print("dy(x): ", dy(JuMP.value.(x)))
...
EXIT: Optimal Solution Found.
x: [-0.39325899095511446, 1.0826434588266454]dy(x): [0.9999999999993564, 0.5000000000001374]
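My understanding (hedged, not something the documentation states for this case) is that the direct call works because dy only uses operations JuMP can trace (cos, sin, *, +), so calling it on the decision variables builds a vector of nonlinear expressions rather than numbers. A quick check along these lines, continuing from the snippet above (the exact type name is my assumption):
expr = dy(x)          # traced expressions, not Float64 values
println(typeof(expr)) # I would expect Vector{JuMP.NonlinearExpr} on JuMP 1.15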
So does this, defining one @operator per output:
import Pkg; Pkg.activate(temp=true)
Pkg.add(name="JuMP", version="1.15.1");
Pkg.add("Ipopt");
Pkg.instantiate();
import JuMP
import Ipopt
function dy(x)
return [cos(x[1])*x[2]; sin(x[1]) + sin(x[2])]
end
x0 = [.1;.1]
xc = [1;0.5]
model = JuMP.Model(Ipopt.Optimizer)
JuMP.@variable(model, x[i=1:2], start = x0[i])
JuMP.@operator(model, op_f1, 2, (x...) -> dy(collect(x))[1])
JuMP.@operator(model, op_f2, 2, (x...) -> dy(collect(x))[2])
JuMP.@constraint(model, op_f1(x...) == xc[1])
JuMP.@constraint(model, op_f2(x...) == xc[2])
JuMP.optimize!(model)
print("x: ", JuMP.value.(x))
print("dy(x): ", dy(JuMP.value.(x)))
...
EXIT: Optimal Solution Found.
x: [-0.39325899095511446, 1.0826434588266454]dy(x): [0.9999999999993564, 0.5000000000001374]
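For reference, the “User-defined operators with vector outputs” part of the documentation suggests memoizing the function so that dy is evaluated once per point rather than once per output operator. A sketch along the lines of that pattern (not tested here; memoize, op_m1, and op_m2 are my names), which would replace the op_f1/op_f2 block above:
# memoize(foo, n) returns n single-output closures that share one cached
# evaluation of foo, with a separate cache for the dual numbers used by
# automatic differentiation
function memoize(foo::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing
    last_dx, last_dfdx = nothing, nothing
    function foo_i(i, x::T...) where {T<:Real}
        if T == Float64
            if x !== last_x
                last_x, last_f = x, foo(x...)
            end
            return last_f[i]::T
        else
            if x !== last_dx
                last_dx, last_dfdx = x, foo(x...)
            end
            return last_dfdx[i]::T
        end
    end
    return [(x...) -> foo_i(i, x...) for i in 1:n_outputs]
end
memoized_dy = memoize((x...) -> dy(collect(x)), 2)
JuMP.@operator(model, op_m1, 2, memoized_dy[1])
JuMP.@operator(model, op_m2, 2, memoized_dy[2])
JuMP.@constraint(model, op_m1(x...) == xc[1])
JuMP.@constraint(model, op_m2(x...) == xc[2])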
Is there a preferred syntax?
Is there a reason to prefer the @operator macro over calling the user-defined function directly?
If we reach a nice conclusion, I will open a PR to add it to the documentation.
PS:
This for nonlinear and this for linear are possibly related.