This question has certainly been asked before, but I couldn't find a clear answer.
I have the following problem:
using JuMP, Ipopt

n = 10_000
k = rand(n, n)
ke = 0.5 * (k + k')  # symmetrize so the quadratic form is well defined

function func(x::T...) where {T}
    y = collect(x)
    return y' * ke * y
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:n])
@objective(model, Min, func(x...))
Generating the model is really painful when n > 10^4, and I would like to go up to n = 10^6. In my real problem ke is sparse and the optimization itself is quite quick, but for n = 10^5 I had to wait 11 hours for @objective(model, Min, func(x...)) to execute, whereas the optimization took 8 minutes. Is there another way to speed things up? One idea I keep coming back to is sketched below.
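Since the objective is a plain quadratic form, I assume JuMP can also take it directly as a quadratic expression, without splatting x through a user-defined function. A minimal sketch of what I have in mind, assuming a sparse ke (the sprandn density here is a made-up placeholder for my real matrix):

using JuMP, Ipopt, SparseArrays

n = 10_000
k = sprandn(n, n, 1e-4)  # placeholder: my real ke is sparse, this density is made up
ke = 0.5 * (k + k')      # symmetrizing keeps the matrix sparse

model = Model(Ipopt.Optimizer)
@variable(model, x[1:n])
# no splatting: JuMP should see this as a native quadratic expression
@objective(model, Min, x' * ke * x)

Is this formulation expected to scale better than the operator-based one?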
I thought it was because of the automatic differentiation, but providing the gradient and Hessian makes everything even slower, e.g.:
function func_grad(g::AbstractVector, x...)
    y = collect(x)
    g .= 2 .* (ke * y)  # gradient of y' * ke * y is (ke + ke') * y = 2 * ke * y, since ke is symmetric
    return
end

function func_hess(h::AbstractMatrix, x...)
    # Hessian of y' * ke * y is ke + ke' = 2 * ke; per the JuMP docs only the
    # lower triangle of h is required, here the full symmetric matrix is assigned
    h .= 2 .* ke
    return
end

model = Model(Ipopt.Optimizer)
@variable(model, x[1:n])
@operator(model, op_f, n, func, func_grad, func_hess)
@objective(model, Min, op_f(x...))
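For completeness, this is roughly how I separate model-generation time from solve time (plain @elapsed around the two phases, nothing beyond the code above):

using JuMP, Ipopt

build_time = @elapsed begin
    model = Model(Ipopt.Optimizer)
    @variable(model, x[1:n])
    @operator(model, op_f, n, func, func_grad, func_hess)
    @objective(model, Min, op_f(x...))
end
solve_time = @elapsed optimize!(model)
println("build: $(build_time) s, solve: $(solve_time) s")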
By the way: if ke is sparse, will h be sparse as well?
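A quick way to check what actually arrives in the callback might be something like this (func_hess_debug is just a hypothetical probe, not part of my model):

function func_hess_debug(h::AbstractMatrix, x...)
    @show typeof(h) size(h)  # inspect what JuMP actually passes for h
    h .= 2 .* ke
    return
end

registered in place of func_hess via @operator.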
I would expect that providing the gradient and Hessian would speed things up, but the opposite happens. Is there something I can do about it?