Below is a simplified part of my code, which attempts to do nonlinear optimization using JuMP. I have both a cost function and constraints that require vector input. However, I am having issues with @NLconstraint. Would anyone know how I could make this piece of code run? I read somewhere that there are limitations when using splatting (you cannot do linear algebra with it), but maybe there is a way around it. I am new to Julia, so any advice would be helpful.
using Zygote
using LinearAlgebra
using JuMP
using Ipopt
N=100
d=4
a = (N+1)*d
b = (N+1)*d
c = (N+1)*d
size_overall = a+b+c
print(size_overall, "\n") # say, for this example
A = Matrix(I, d, d)
B = 2 .* Matrix(I, d, d)
Hi, thank you for the response! Yes, my initial func1 could be something other than quadratic, including a general nonlinear function.
If I change the line x = y to x = collect(y), an error still appears: MethodError: Cannot convert an object of type VariableRef to an object of type Float64…
I do not think the error is in @NLconstraint(model, [i = 1:size_overall], func_con(x...)[i] == 0), as I have tested it with other, simpler constraints and it worked. I could be wrong, though.
I tried to make a simpler example with 3 variables and two constraints (I know they are convex), to make the discussion easier.
model = Model(Ipopt.Optimizer)
@variable(model, x[1:3])
# define the cost function
function cost_func(x::T...) where {T <: Real}
    return dot(x, x) + dot([1.0; 2.0; 3.0], x)
end
# minimize x^⊤x + q^⊤x
JuMP.register(model, :f, 3, cost_func, autodiff = true)
@NLobjective(model, Min, f(x...))
# make constraint terms as function with vector output
# func_con[1] = x1 - x2
# func_con[2] = x3 - (x1 + x2)
function func_con(x::T...) where {T <: Real}
    f_constr = zeros(T, 2)
    f_constr[1] = x[1] - x[2]
    f_constr[2] = x[3] - (x[1] + x[2])
    return f_constr
end
# define separate functions for each component of vector valued function
func_con_1(x...) = func_con(x...)[1]
func_con_2(x...) = func_con(x...)[2]
# register each component individually
register(model, :func_con_1, 3, func_con_1; autodiff = true)
register(model, :func_con_2, 3, func_con_2; autodiff = true)
@NLconstraint(model, func_con_1(x...) <= 0)
@NLconstraint(model, func_con_2(x...) <= 0)
optimize!(model)
This works for 2 constraints, but how would you do it if your function outputs a vector of dimension 100?
@odow I think the question is:
Given a vector-valued function with n components in its output, can you generate the n nonlinear constraints automatically, without manually defining and registering a separate function for each component?
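One way to do this is to generate the scalar wrappers in a loop: for each output component, create a closure, register it under a programmatically generated name, and add the constraint through JuMP's raw-expression input. This is a sketch assuming JuMP's legacy nonlinear interface (register and add_nonlinear_constraint in JuMP 1.x; on older JuMP versions the latter is spelled add_NL_constraint), applied to the 3-variable example above:

```julia
using JuMP, Ipopt, LinearAlgebra

n = 2  # number of constraint components; the same loop works for 100

model = Model(Ipopt.Optimizer)
@variable(model, x[1:3])

function cost_func(x::T...) where {T <: Real}
    return dot(x, x) + dot([1.0; 2.0; 3.0], x)
end
register(model, :f, 3, cost_func; autodiff = true)
@NLobjective(model, Min, f(x...))

function func_con(x::T...) where {T <: Real}
    f_constr = zeros(T, 2)
    f_constr[1] = x[1] - x[2]
    f_constr[2] = x[3] - (x[1] + x[2])
    return f_constr
end

# Register one scalar wrapper per output component, then add each
# constraint via a raw Julia expression, all inside a loop.
for i in 1:n
    op = Symbol("func_con_$i")          # generated name, e.g. :func_con_1
    f_i = (y...) -> func_con(y...)[i]   # scalar-valued wrapper; the closure captures i
    register(model, op, 3, f_i; autodiff = true)
    add_nonlinear_constraint(model, :($(op)($(x...)) <= 0))
end

optimize!(model)
```

Note that each wrapper evaluates the full func_con and discards the other components, so if the function is expensive you may want to memoize it.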