I’m trying to solve an optimization problem with N+1 parameters subject to a set of N nonlinear constraints, each of which depends on the whole parameter vector.
A simplified version of my problem looks like this:
using JuMP, Ipopt

m = Model(solver=IpoptSolver(print_level=0, tol=1e-5))
@variables m begin
    q_b[i=1:N] >= 0.0
    0 <= gamma <= 0.001
end
@NLobjective(m, Min, sum(q_b[i]^2 for i in 1:N))
## Value of constraint i: q_b[i] - y[i] * exp(gamma * dot(q_b, X)), using the whole q_b vector
function my_constraint_i(q_b_index, gamma, q_b_arr...)
    q_b = [q_b_arr...]                      ## collect the splatted scalars back into a vector
    exp_of_sum = exp(gamma * dot(q_b, X))   ## X is an N-d vector of data
    lhs = q_b[q_b_index]
    rhs = exp_of_sum * y[q_b_index]         ## y is another N-d vector of data
    constraint_i = lhs - rhs
    return constraint_i
end
JuMP.register(m, :my_constraint_i, 2 + N, my_constraint_i, autodiff=true)
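For reference, the registered function expects 2 + N scalar arguments: the constraint index, gamma, and then the N components of q_b splatted out one by one. Called on plain numbers it behaves like an ordinary Julia function; the values below (N = 3, made-up X and y) are placeholders purely for illustration, not my actual data:

## Illustration only: placeholder data, just to show the 2 + N argument layout
X = [1.0, 2.0, 3.0]
y = [0.5, 0.5, 0.5]
my_constraint_i(1, 0.0005, 0.1, 0.2, 0.3)   ## index, gamma, then the N values of q_b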
When I try to use the constraint macros, I get a MethodError complaining that it can’t parse the inputs:
foc_constraint = @NLexpression(m, [i=1:N], my_constraint_i(i, gamma, q_b...))
@NLconstraint(m, [i=1:N], foc_constraint[i] == 0)
Note, however, that it does work if I write out the variables explicitly, i.e. q_b[1], q_b[2], etc.
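For example, with N = 3 the hand-written version that does go through looks something like this:

## Spelling out each q_b[i] instead of splatting does parse (N = 3 just for illustration):
foc_constraint = @NLexpression(m, [i=1:N], my_constraint_i(i, gamma, q_b[1], q_b[2], q_b[3]))
@NLconstraint(m, [i=1:N], foc_constraint[i] == 0)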
I saw the threads on using splatting with user-defined functions here and here, but I’m a novice and haven’t been able to adapt those solutions to my problem. Any help would be tremendously appreciated.