Comparison between Nonconvex.jl, JuMP and CasADi for large sparse nonlinear optimization

My experience using these libraries is consistent with yours. I think the short answer is that if you want to take advantage of sparsity in Julia NLP frameworks other than JuMP, you need to code your own oracles for the sparse Jacobian and Hessian and provide those functions to the modeling layer. There are some special cases where this is not required; for example, have a look at this list: https://galacticoptim.sciml.ai/stable/API/optimization_function/#Defining-Optimization-Functions-Via-AD
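To make the "code your own oracles" point concrete, here is a minimal, library-agnostic sketch of what such oracles typically look like: a fixed sparsity pattern built once, plus a callback that fills in only the nonzero values. The constraint function and names here are hypothetical, and the exact callback signatures a given modeling layer expects will differ; this only illustrates the pattern.

```julia
using SparseArrays

# Hypothetical constraints g(x) for a 3-variable problem:
# g1(x) = x1*x2 - 1,  g2(x) = x2^2 + x3 - 2
g(x) = [x[1] * x[2] - 1.0, x[2]^2 + x[3] - 2.0]

# Fixed sparsity pattern of the 2x3 Jacobian: row/column indices of the nonzeros.
jac_rows = [1, 1, 2, 2]
jac_cols = [1, 2, 2, 3]

# Oracle that fills the nonzero Jacobian values in place, matching the pattern above.
function jac_values!(vals, x)
    vals[1] = x[2]        # d g1 / d x1
    vals[2] = x[1]        # d g1 / d x2
    vals[3] = 2.0 * x[2]  # d g2 / d x2
    vals[4] = 1.0         # d g2 / d x3
    return vals
end

# Assemble the sparse Jacobian from the oracle. A solver wrapper would usually
# consume the triplet (rows, cols, vals) form directly rather than the matrix.
x0 = [1.0, 2.0, 3.0]
vals = zeros(length(jac_rows))
J = sparse(jac_rows, jac_cols, jac_values!(vals, x0), 2, 3)
```

The same idea applies to the Hessian of the Lagrangian: declare its (typically lower-triangular) nonzero pattern once and supply a callback that writes the values for given primal and dual points. These are the functions you would then hand to the modeling layer or solver wrapper.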

You might be interested in following the ongoing discussions in the threads below, where I have been exploring some similar questions:
