AC Optimal Power Flow in Various Nonlinear Optimization Frameworks

When it comes to AC-OPF, faster is always better and more useful, but <1 hour is already interesting and useful for “day-ahead” operations, while <10 minutes is required for things like “real-time” operations.

Current state of the art for this case, as far as I know, is OPOMO at 303 seconds.

I have a PR to get it onto the SciMLBenchmarks system so that it can be auto-run on a standard machine through CI, making it easier for everyone to get involved:

I noticed Optimization.jl is using ForwardDiff.jl with no sparsity. These days it has Enzyme with sparse Hessians. I presume that would make a major difference, so after this merges @Vaibhavdixit02 should look into adding that.
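To illustrate why sparse Hessians should make such a difference here, a stdlib-only sketch with a toy coupling pattern (not a real network): in OPF, a variable interacts only with variables at the same bus or on adjacent branches, so the Lagrangian Hessian is overwhelmingly structurally zero.

```julia
using SparseArrays

# Toy stand-in for an OPF Hessian: a chain "network" where variable i
# couples only to i-1 and i+1, giving a tridiagonal sparsity pattern.
n = 1_000
H = spdiagm(-1 => ones(n - 1), 0 => 2.0 * ones(n), 1 => ones(n - 1))

density = nnz(H) / n^2
# Roughly 3n nonzeros out of n^2 entries (~0.3% here), which is why
# exploiting sparsity in both the AD and the linear solver pays off.
```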

From memory, we tried the various sparsity and AD options and ran into memory issues. This was the fastest, but perhaps recent versions are better. :+1: for putting this in the benchmark suite so you have control over which options are used, etc.

Avik’s new and improved sparse interface, which makes sparse reverse mode simpler, only merged a week ago (New High Level Interface for Sparse Jacobian Computation by avik-pal · Pull Request #253 · JuliaDiff/SparseDiffTools.jl · GitHub), so I presume it wasn’t using the latest stuff. Not all of this has made it into Optimization.jl’s high-level AD system yet, but by making this the benchmark we can do benchmark-driven development to ensure that what gets merged is efficient. I hate building in a vacuum.
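For reference, a sketch of how the AD backend choice looks from the Optimization.jl side, using backend names from its documentation (the problem here is a toy Rosenbrock stand-in, and treat the sparse variant as the thing to benchmark, not a settled recommendation):

```julia
using Optimization, OptimizationMOI, Ipopt

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

# Dense forward mode, roughly what the benchmark has been using:
# f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())

# Sparse reverse mode, the direction discussed above:
f = OptimizationFunction(rosenbrock, Optimization.AutoSparseReverseDiff())

prob = OptimizationProblem(f, zeros(2), [1.0, 100.0])
sol = solve(prob, Ipopt.Optimizer())
```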


Update: the benchmark is now running on the SciMLBenchmarks! You can find it here:

Anyone can trigger the benchmark to run just by changing the code here:

It makes a cute table at the end with the current results:

Things to note:

  1. I know the table formatting can be improved. If anyone knows how to make Documenter show the whole thing, I’d be happy to have help formatting it.
  2. There is a parameter SIZE_LIMIT which tells the benchmark the maximum number of variables to use in the benchmark files. It’s set to 100 right now just to play around with formatting, but I’ll bump it up and do a bigger run. I did do a bigger run a while back, so I know it works; it’s just :person_shrugging: annoying to wait a day for results while fixing table formatting, haha.
  3. Optimization.jl code was horrendously allocating and type-unstable. A common issue in the code was indexing a dictionary with d["$i"] instead of just using an integer. The dictionaries are still there, so there’s still some indirection that should be removed for more performance, but at least the code is now type-stable and non-allocating. I’ll actually optimize it in the near future, but it’s already a lot simpler than the original code and won’t interfere with the AD tooling.
  4. Optimization.jl is now using sparse reverse mode, but it’s not optimized yet. This is a case we’re using to drive those optimizations. See:

which are low-hanging fruit identified from this.

  5. I added a lot of validation cases, so if you change the code around, assertions will throw failures if you break something.
  6. I couldn’t get the CasADi code to run; I’d like some help figuring that out.
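The type-stability issue from point 3 can be sketched in isolation (function and variable names here are hypothetical; only the d["$i"] indexing pattern is taken from the benchmark code):

```julia
# Looking values up with interpolated string keys allocates a fresh
# String on every access, on top of the hashing cost, and the pattern
# tends to hide type information from the compiler:
function total_load_slow(d::Dict{String,Float64}, n::Int)
    s = 0.0
    for i in 1:n
        s += d["$i"]          # allocates a String each iteration
    end
    return s
end

# Integer indexing into a plain Vector avoids the per-lookup allocation
# and keeps everything type-stable:
function total_load_fast(v::Vector{Float64})
    s = 0.0
    for x in v
        s += x
    end
    return s
end

v = [2.0, 3.0, 5.0]
d = Dict(string(i) => v[i] for i in eachindex(v))
total_load_slow(d, 3) == total_load_fast(v)   # same answer, very different cost
```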

This looks like a problem that FastDifferentiation.jl can solve.

It handles sparsity well and is likely to be faster than the alternatives for this type of problem. If you decide to try it I’ll be happy to help you get it working.
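For anyone wanting to try it, a minimal FastDifferentiation.jl sketch on a toy scalar objective (names and calls assumed from its documentation; the real OPF wiring would of course be much larger):

```julia
using FastDifferentiation

# Symbolic variables and a small scalar objective standing in for an OPF cost:
@variables x1 x2
obj = x1^2 + sin(x1 * x2)

# Symbolic Hessian, compiled to an executable function of a point vector:
H = hessian(obj, [x1, x2])
hess = make_function(H, [x1, x2])

hess([1.0, 0.5])   # evaluates the 2×2 Hessian numerically
```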

@brianguenter we have it set up to MTK-trace already, and it produces branch-free code, so once the Symbolics integration is done this was going to be one of the first cases I wanted to test.