I apologize if someone has already raised questions like this, and I understand the topic sounds similar to some previous topics. I did do a search, but I didn't find specific answers to all the questions I have.
I understand that Gurobi is not a nonlinear programming solver, but it is one of the leading solvers these days. Is it worth going through all the trouble of reformulating the problem into an approximate convex quadratic problem, or would a nonlinear solver such as Ipopt work just as well?
Another question is about user-defined functions in nonlinear constraints. I read in a post that the function needs to be registered and that, due to JuMP syntax, the arguments have to be scalars, not vectors. Since this restriction comes only from the JuMP syntax, a wrapper should be fine. So I assume that if I have something like
function myP(kk, n, x, alpha, Q, w, S, T)
and a wrapper function
function outP(kk, x...)
I could do something like
@variable(model, x[1:n] >= 0.0)
JuMP.register(model, :outP, 5, outP, autodiff=true)
@variable(model, y >= 0.0)
for i = 1:K
    @NLconstraint(model, y >= outP(i, x[1], x[2], x[3], x[4]))
end
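To make this concrete, here is a minimal, self-contained version of the idea. Everything beyond the structure above is a placeholder: the data (n, K, alpha, Q, w, S, T) and the body of myP are made up just so the snippet runs, and I attached Ipopt only because a registered nonlinear function needs a nonlinear solver. Running it triggers the index error I describe next.

using JuMP, Ipopt

# Placeholder data, just so the example is self-contained
n, K = 4, 3
alpha, S, T = 1.0, 2.0, 3.0
Q = rand(n, n)
w = rand(K)

# Placeholder body; my real myP is more involved, but it indexes
# into arrays with kk in the same way
function myP(kk, n, x, alpha, Q, w, S, T)
    return alpha * w[kk] * sum(Q[j, kk] * x[j] for j = 1:n) + S * T
end

# Wrapper so the registered function only sees scalar arguments
outP(kk, x...) = myP(kk, n, collect(x), alpha, Q, w, S, T)

model = Model(solver=IpoptSolver())
@variable(model, x[1:n] >= 0.0)
@variable(model, y >= 0.0)
JuMP.register(model, :outP, n + 1, outP, autodiff=true)  # n + 1 = 5 here
for i = 1:K
    @NLconstraint(model, y >= outP(i, x[1], x[2], x[3], x[4]))
end
@objective(model, Min, y)
status = solve(model)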
So, I gave this a try. Here are the errors I saw. The first argument of outP and myP somehow arrived as a Float64, not the integer supposedly passed in by i, so there was an indexing error (invalid index 1.0). If I force the function definitions to
function myP(kk::Int, n, x, alpha, Q, w, S, T)
function outP(kk::Int, x...)
I got rid of the indexing error. However, when the code continues to
@objective(model, Min, y)
status = solve(model)
I got a method error - ERROR: MethodError: no method matching outP(::Float64, ::Float64, ::Float64, ::Float64, ::Float64)
Is there some default in register that makes all the arguments Float64? If so, is there a way to specify the argument types explicitly?
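The only workaround I have found so far is to leave the signatures untyped and convert the index back inside the wrapper, along these lines. I am not sure how safe this conversion is during derivative evaluation, when the arguments may not be plain Float64 values, which is partly why I am asking:

# Keep the arguments untyped and recover the integer index inside the wrapper.
# (Assuming the rounding also works on whatever number type the automatic
# differentiation passes in; I have only checked it with Float64.)
function outP(kk, x...)
    i = round(Int, kk)  # 1.0 -> 1, 2.0 -> 2, ...
    return myP(i, n, collect(x), alpha, Q, w, S, T)
end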
Of course, this is all just a workaround for a small example. What if x is high-dimensional? Is there any way to get around the scalar-argument restriction, or some other clever way to handle such nonlinear constraints?
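One idea I had, if I understand JuMP's raw expression input (JuMP.addNLconstraint) correctly, is to register one function per index via a closure (so the index never has to pass through the registered function at all) and to build each constraint expression programmatically instead of writing out x[1], x[2], and so on by hand. The names fi and outP_i below are just ones I made up. Roughly:

for i = 1:K
    # Close over i so the registered function only sees the n components of x
    fi = (xs...) -> myP(i, n, collect(xs), alpha, Q, w, S, T)
    fname = Symbol("outP_", i)
    JuMP.register(model, fname, n, fi, autodiff=true)
    # Build the call outP_i(x[1], ..., x[n]) programmatically and add it
    # through the raw expression interface
    call = Expr(:call, fname, x...)
    JuMP.addNLconstraint(model, :($(y) >= $(call)))
end

Is something like this the recommended way, or is there a cleaner approach?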
Thanks!