As a follow-up to my NLP modeling survey post, I am pleased to announce the release of the Rosetta OPF project. The goal of this project is to highlight the basic technical requirements for solving the kinds of continuous non-convex mathematical optimization problems that I regularly encounter. To that end, Rosetta OPF provides example implementations of the AC Optimal Power Flow (AC-OPF) problem in five of Julia’s nonlinear optimization modeling layers. Based on the findings of the NLP modeling survey, Rosetta OPF currently includes AC-OPF implementations in GalacticOptim, JuMP, ADNLPModels, Nonconvex, and Optim. If I have missed any applicable optimization framework, please let me know. Other implementations are welcome!

At this time I would appreciate feedback and implementation improvement suggestions from the package developers. Below is a brief report on each implementation (in alphabetical order) and some specific inquiries for the developers.

GalacticOptim (@ChrisRackauckas, @Vaibhavdixit02): This implementation is working with GalacticOptim#master / ForwardDiff v0.10.25 and is converging to the expected solution. The primary limitation is that the AD system I am using does not take advantage of the problem’s sparsity. Suggestions for improvements are welcome.

JuMP (@odow, @miles.lubin): This implementation is working with JuMP v0.23.2 / Ipopt v1.0.2 and scales to large problem sizes. Suggestions for implementation improvements are welcome.
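For readers unfamiliar with JuMP’s nonlinear interface, here is a minimal toy sketch of the modeling pattern the JuMP implementation builds on. This is *not* AC-OPF itself; the variables, objective, and constraint below are illustrative placeholders only:

```julia
using JuMP, Ipopt

# Toy nonlinear program illustrating the JuMP + Ipopt modeling pattern
# (the objective and constraint are stand-ins, not the AC-OPF model).
model = Model(Ipopt.Optimizer)
set_silent(model)

@variable(model, x >= 0)
@variable(model, y)

# Minimize a smooth nonlinear objective subject to a nonlinear constraint.
@NLobjective(model, Min, (x - 1)^2 + (y - 2)^2)
@NLconstraint(model, x^2 + y^2 <= 4)

optimize!(model)
println(termination_status(model), "  x = ", value(x), "  y = ", value(y))
```

The real AC-OPF model follows the same `@variable` / `@NLobjective` / `@NLconstraint` structure, just with bus voltage and power-flow variables and the full set of network constraints.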

NLPModels (@abelsiqueira): This implementation is working with the releases ADNLPModels v0.3.1 / NLPModelsIpopt v0.9.1. The primary limitation is that the AD system does not take advantage of the problem’s sparsity. Suggestions for improvements are welcome.

Nonconvex (@mohamed82008): So far I have not been able to get a basic implementation working. This is partly due to the Zygote issue, but the problem information reported to Ipopt also seems off (e.g., it reports 0 terms in the Jacobian and Hessian). I have also noticed some differences in the solver traces between `Model` and `DictModel` on a small test problem. If you could make a PR with a working implementation, that would be much appreciated.

Optim (@pkofod): It is difficult for me to verify whether this implementation is correct (tested with Optim v1.6.2). At first glance the `IPNewton` algorithm seems to work and converges. However, when I check the details of the converged point, it has significant constraint violations. I did verify that if I provide `IPNewton` with a near-optimal starting point, it converges with minimal constraint violations. This makes me *suspect* that the algorithm, rather than the model’s implementation, is the issue. I also tried `LBFGS` and `NelderMead`, but they yielded stack overflow errors. In any case, please let me know if there are any obvious changes I should make or other directions to investigate.