AC Optimal Power Flow in Various Nonlinear Optimization Frameworks

Thanks, @ccoffrin, for an amazing project and for sharing detailed results.

Here’s a quick summary of how ExaModels works.

What is ExaModels?

ExaModels.jl is an algebraic modeling system embedded in the Julia language, tailored for what we call “SIMD abstraction”. By itself, ExaModels has no optimization solver capability; you need to interface it with an optimization solver (e.g., Ipopt) to solve problems. ExaModels provides a user front-end for specifying the nonlinear model and an automatic differentiation (AD) capability for evaluating first- and second-order derivatives in sparse form. The total solution time reported in this benchmark roughly consists of (1) data parsing time, (2) model creation time, and (3) solver time, where (3) can be further broken down into (3-1) derivative evaluation time, (3-2) linear solver time, and (3-3) solver internal time. Of these, ExaModels is responsible for (2) and (3-1), and it is quite efficient in both model creation and derivative evaluation.
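
For instance, the intended workflow looks roughly like this. This is a minimal sketch based on the examples in the ExaModels documentation (exact API details, such as keyword names, may differ across versions): ExaModels builds the model and its derivative callbacks, and NLPModelsIpopt hands the result to Ipopt.

using ExaModels, NLPModelsIpopt

c = ExaCore()                                    # empty model core
x = variable(c, 10; start = (0.5 for i = 1:10))  # 10 decision variables
objective(c, (x[i] - 1)^2 for i = 1:10)          # objective terms given as an iterator
constraint(c, x[i] + x[i+1] for i = 1:9)         # constraints given as an iterator
m = ExaModel(c)                                  # assemble an NLPModels-compatible model
result = ipopt(m)                                # (3) solve with Ipopt; ExaModels handles (3-1)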

So, what is “SIMD abstraction”?

SIMD abstraction is the key idea behind the high performance of ExaModels on OPFs. ExaModels always requires the user to specify the model using iterators. Let’s take an example.

Let’s say that our constraint takes the following form:

function cons!(x, y)
    for i = 1:length(x)-2
        y[i] = 3x[i+1]^3 + 2x[i+2] - 5 +
               sin(x[i+1] - x[i+2]) * sin(x[i+1] + x[i+2]) +
               4x[i+1] - x[i] * exp(x[i] - x[i+1]) - 3
    end
end

Instead of taking the constraint information in this way,

constraint(model, cons!)

we take the constraint information as

con(x, i) = 3x[i+1]^3 + 2x[i+2] - 5 +
            sin(x[i+1] - x[i+2]) * sin(x[i+1] + x[i+2]) +
            4x[i+1] - x[i] * exp(x[i] - x[i+1]) - 3

constraint(model, con(x, i) for i = 1:length(x)-2)

This allows the model to process the data in the form of an iterator, which lets us capture the repetitive patterns in the model. These repetitive patterns can be exploited when applying automatic differentiation (AD): instead of applying AD to the whole cons!, whose internal structure is opaque, we can apply AD to con alone and run a for loop over it.
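
In actual ExaModels syntax, this iterator-based specification would look roughly as follows. This is a sketch modeled on the Luksan-Vlcek example in the ExaModels documentation (an objective would be attached to the same core in the same iterator style):

using ExaModels

c = ExaCore()
N = 10
x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))

# One constraint per index i, all sharing the same algebraic pattern:
constraint(
    c,
    3x[i+1]^3 + 2x[i+2] - 5 +
    sin(x[i+1] - x[i+2]) * sin(x[i+1] + x[i+2]) +
    4x[i+1] - x[i] * exp(x[i] - x[i+1]) - 3 for i = 1:N-2
)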

What more does ExaModels do to make AD faster?

An additional trick used in ExaModels is to assume that con has a simple structure, which is the case for OPFs, and to create expression trees with concrete types when performing AD. AD on these concrete expression trees can fully take advantage of Julia’s JIT compilation, so highly optimized callbacks can be put together, with zero runtime type inference and zero memory allocations. Even nicer, these callbacks can be turned into GPU kernels using KernelAbstractions.jl, so AD can be performed on various GPU accelerators. Some GPU results can be found in the ExaModels repository.
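
To make the “concrete types” point more tangible, here is a hypothetical illustration (this is not ExaModels’ actual implementation): when every node of an expression tree carries its own parametric type, the type of the whole tree is known at compile time, and Julia can compile a fully specialized, allocation-free callback for it.

# Hypothetical sketch of a concretely typed expression tree;
# not ExaModels' actual internals.
struct Var end                     # a variable node
struct Sin{T}; arg::T; end         # sin(arg)
struct Mul{A,B}; a::A; b::B; end   # a * b

# Evaluation dispatches on concrete node types, so the compiler can
# inline the whole tree: no runtime type inference, no allocations.
evaluate(::Var, x) = x
evaluate(n::Sin, x) = sin(evaluate(n.arg, x))
evaluate(n::Mul, x) = evaluate(n.a, x) * evaluate(n.b, x)

tree = Mul(Sin(Var()), Var())      # concrete type: Mul{Sin{Var}, Var}
evaluate(tree, 0.5)                # compiles to a specialized sin(0.5) * 0.5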

Limitations

Of course, it is important to note that this improved performance comes from assuming more structure and exploiting it. Clearly, ExaModels serves a narrower class of problems than the other modeling systems in this benchmark. The key requirements for ExaModels to shine are that the problem has (1) repeated patterns in the objective/constraints (meaning it can be specified with iterators) and (2) each pattern is relatively simple (meaning it depends only on a handful of variables). Many large-scale problems in mathematical programming are like this, e.g., optimal power flow, PDE-constrained optimization, and optimal control and parameter estimation problems formulated with direct transcription methods. But problems with more complex function structures, e.g., those embedding neural nets, DAE solvers, etc., are not suitable for modeling in this way.

Takeaways and suggestions

One takeaway here is that the more structure you exploit, the better performance you get. So, we might want to specify how much structure is exploited in each modeling approach. Polar-form AC OPF problems have several distinct structures, including:

  • Objective and constraint functions are smooth nonlinear functions whose first- and second-order derivatives can be cheaply evaluated.
  • The constraint Jacobian and Lagrangian Hessian are extremely sparse, and their sparsity patterns are known.
  • The algebraic structure of the objective and constraint functions is simple and highly repetitive (see the branch flow equations below).
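
As an illustration of the last point, the branch power flow constraints in polar form (simplified here: tap ratios, shunts, and line charging are omitted) are one small algebraic template instantiated once per branch:

P_{ij} = g_{ij} v_i^2 - v_i v_j ( g_{ij} \cos(\theta_i - \theta_j) + b_{ij} \sin(\theta_i - \theta_j) )
Q_{ij} = -b_{ij} v_i^2 - v_i v_j ( g_{ij} \sin(\theta_i - \theta_j) - b_{ij} \cos(\theta_i - \theta_j) )

Here v and \theta are bus voltage magnitudes and angles, and g_{ij}, b_{ij} are the branch conductance and susceptance; every branch contributes the same two expressions with different data, which is exactly the iterator structure discussed above.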

Thus, we might want to classify the compared frameworks based on the following criteria, so that one can see whether we are making an apples-to-apples comparison.

  • Does the modeling system exploit second-order derivatives?
  • Does the modeling system exploit sparsity?
  • Does the modeling system exploit repeated patterns?

Also, to improve the signal-to-noise ratio of the results, I’d suggest:

  • Exclude data parsing time, as it doesn’t have much to do with the capabilities of the optimization frameworks.
  • Separately report model creation time and solution time. Also, if possible, separately report derivative evaluation time, linear solver time, and solver-internal time (a rough sketch of one way to do this follows below).
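
For example, with the ExaModels/Ipopt pipeline sketched earlier, a rough separation could be obtained as follows. This is a hedged sketch: print_timing_statistics is a standard Ipopt option, and NLPModelsIpopt forwards keyword arguments as solver options, but the exact reporting may vary by version.

using ExaModels, NLPModelsIpopt

t_model = @elapsed m = ExaModel(c)  # (2) model creation time; c is an ExaCore as above
t_solve = @elapsed result = ipopt(m; print_timing_statistics = "yes")
# Ipopt's printed timing statistics then separate function/derivative
# evaluation time (3-1) from linear solver time (3-2) and solver internals (3-3).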

A bit more detailed description of each configuration might also be helpful, as this may not be captured by ]pkg st. For example,

  • ExaModels, JuMP, ADNLPModels, Nonconvex, and Optimization are configured with Ipopt and ma27.
  • For Optimization.jl and ADNLPModels.jl (I’d suggest renaming NLPModels to ADNLPModels, as NLPModels itself is simply a template and doesn’t have modeling capability of its own), I’d also suggest specifying the AD backends, as it seems they can be configured with different AD backends.