JuMP: Constraint with a huge expression

I am developing a package for direct optimal control using JuMP. Function tracing is used heavily to formulate the problems, and it works well for small to medium-sized problems. However, I recently tried one largish example from the GPOPS manual, and it got stuck formulating the problem when using the Hermite-Simpson method. I have created a self-contained example (I could not think of a way to make it smaller). It gets stuck formulating the constraint after the @infiltrate macro:
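For reference, the Hermite-Simpson defect constraints being built in the loop below are

$$\bar{x}_j = \frac{x_j + x_{j+1}}{2} + \frac{\Delta t}{8}\,(f_j - f_{j+1}), \qquad x_{j+1} = x_j + \frac{\Delta t}{6}\,(f_j + 4\bar{f}_j + f_{j+1}),$$

where $f_j = f(x_j, u_j)$, $\bar{u}_j = (u_j + u_{j+1})/2$, and $\bar{f}_j = f(\bar{x}_j, \bar{u}_j)$, so each defect contains three traced copies of $f$.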

MWE
using JuMP
import Ipopt
using Infiltrator

dubins = Model(Ipopt.Optimizer)

# #### Define the models and cost functions
# Now let us define the parameters and functions which make up the model
T = 4.446618e-3 # [lb]
Isp = 450 # [s]
mu = 1.407645794e16 # [ft^3/s^2]
gs = 32.174 # [ft/s^2]
Re = 20925662.73 # [ft]
J2 = 1082.639e-6
J3 = -2.565e-6
J4 = -1.608e-6
p0 = 21837080.052835 # [ft]
f0 = 0
g0 = 0
h0 = -0.25396764647494
k0 = 0
L0 = pi
w0 = 1
tau = -25;
t0 = 0;
tf = 90000;

tmin = t0; tfmin = 50000; tmax = 100000;
pf = 40007346.015232;

pmin = 20000000; pmax = 60000000;
fmin = -1; fmax = +1;
gmin = -1; gmax = +1;
hmin = -1; hmax = +1;
kmin = -1; kmax = +1;
Lmin = L0; Lmax = 9*2*pi;
wmin = 0.1; wmax = w0;
urmin = -1; urmax = +1;
utmin = -1; utmax = +1;
uhmin = -1; uhmax = +1;
taumin = -50; taumax = 0;

x0 = [p0, f0, g0, h0, k0, L0, w0]           # Initial state
n = 20
ns = 7
nu = 3
@variable(dubins, kp[1:1])
function cross(a, b)
    return [a[2]*b[3]-a[3]*b[2], a[3]*b[1]-a[1]*b[3],a[1]*b[2]-a[2]*b[1]]
end

function f(x, u)
    # Extract state, control, and parameter for problem
    p = x[1]
    f = x[2]
    g = x[3]
    h = x[4]
    k = x[5]
    L = x[6]
    w = x[7]
    ur = u[1]
    ut = u[2]
    uh = u[3]
    tau = kp[1]

    # Gravitational disturbing acceleration
    q = 1 + f * cos(L) + g * sin(L)
    r = p / q
    alpha2 = h * h - k * k
    chi = sqrt(h * h + k * k)
    s2 = 1 + chi * chi
    rX = (r / s2) * (cos(L) + alpha2 * cos(L) + 2 * h * k * sin(L))
    rY = (r / s2) * (sin(L) - alpha2 * sin(L) + 2 * h * k * cos(L))
    rZ = (2 * r / s2) * (h * sin(L) - k * cos(L))
    rVec = [rX, rY, rZ]
    rMag = sqrt(rX^2 + rY^2 + rZ^2)
    rXZMag = sqrt(rX^2 + rZ^2)
    vX = -(1 / s2) * sqrt(mu / p) * (sin(L) + alpha2 * sin(L) - 2 * h * k * cos(L) + g - 2 * f * h * k + alpha2 * g)
    vY = -(1 / s2) * sqrt(mu / p) * (-cos(L) + alpha2 * cos(L) + 2 * h * k * sin(L) - f + 2 * g * h * k + alpha2 * f)
    vZ = (2 / s2) * sqrt(mu / p) * (h * cos(L) + k * sin(L) + f * h + g * k)
    vVec = [vX, vY, vZ]
    rCrossv = cross(rVec, vVec)
    rCrossvMag = sqrt(rCrossv[1]^2 + rCrossv[2]^2 + rCrossv[3]^2)
    rCrossvCrossr = cross(rCrossv, rVec)
    ir1 = rVec[1] / rMag
    ir2 = rVec[2] / rMag
    ir3 = rVec[3] / rMag
    ir = [ir1, ir2, ir3]
    it1 = rCrossvCrossr[1] / (rCrossvMag * rMag)
    it2 = rCrossvCrossr[2] / (rCrossvMag * rMag)
    it3 = rCrossvCrossr[3] / (rCrossvMag * rMag)
    it = [it1, it2, it3]
    ih1 = rCrossv[1] / rCrossvMag
    ih2 = rCrossv[2] / rCrossvMag
    ih3 = rCrossv[3] / rCrossvMag
    ih = [ih1, ih2, ih3]
    enir = ir3
    enirir1 = enir * ir1
    enirir2 = enir * ir2
    enirir3 = enir * ir3
    enenirir1 = 0 - enirir1
    enenirir2 = 0 - enirir2
    enenirir3 = 1 - enirir3
    enenirirMag = sqrt(enenirir1^2 + enenirir2^2 + enenirir3^2)
    in1 = enenirir1 / enenirirMag
    in2 = enenirir2 / enenirirMag
    in3 = enenirir3 / enenirirMag
    # Geocentric latitude
    sinphi = rZ / rXZMag
    cosphi = sqrt(1 - sinphi^2)
    # Legendre polynomials
    P2 = (3 * sinphi^2 - 2) / 2
    P3 = (5 * sinphi^3 - 3 * sinphi) / 2
    P4 = (35 * sinphi^4 - 30 * sinphi^2 + 3) / 8
    dP2 = 3 * sinphi
    dP3 = (15 * sinphi - 3) / 2
    dP4 = (140 * sinphi^3 - 60 * sinphi) / 8
    # Oblate earth perturbations
    sumn = (Re / r)^2 * dP2 * J2 + (Re / r)^3 * dP3 * J3 + (Re / r)^4 * dP4 * J4
    sumr = (2 + 1) * (Re / r)^2 * P2 * J2 + (3 + 1) * (Re / r)^3 * P3 * J3 + (4 + 1) * (Re / r)^4 * P4 * J4
    ∆gn = -(mu * cosphi / (r^2)) * sumn
    ∆gr = -(mu / (r^2)) * sumr
    ∆gnin1 = ∆gn * in1
    ∆gnin2 = ∆gn * in2
    ∆gnin3 = ∆gn * in3
    ∆grir1 = ∆gr * ir1
    ∆grir2 = ∆gr * ir2
    ∆grir3 = ∆gr * ir3
    ∆g1 = ∆gnin1 - ∆grir1
    ∆g2 = ∆gnin2 - ∆grir2
    ∆g3 = ∆gnin3 - ∆grir3
    Deltag1 = ir[1] * ∆g1 + ir[2] * ∆g2 + ir[3] * ∆g3
    Deltag2 = it[1] * ∆g1 + it[2] * ∆g2 + it[3] * ∆g3
    Deltag3 = ih[1] * ∆g1 + ih[2] * ∆g2 + ih[3] * ∆g3
    # Thrust acceleration
    DeltaT1 = ((gs * T * (1 + 0.01 * tau)) / w) * ur
    DeltaT2 = ((gs * T * (1 + 0.01 * tau)) / w) * ut
    DeltaT3 = ((gs * T * (1 + 0.01 * tau)) / w) * uh
    # Total acceleration
    Delta1 = Deltag1 + DeltaT1
    Delta2 = Deltag2 + DeltaT2
    Delta3 = Deltag3 + DeltaT3
    # Differential equations of motion
    dp = (2 * p / q) * sqrt(p / mu) * Delta2
    df = sqrt(p / mu) * sin(L) * Delta1 + sqrt(p / mu) * (1 / q) * ((q + 1) * cos(L) + f) * Delta2 - sqrt(p / mu) * (g / q) * (h * sin(L) - k * cos(L)) * Delta3
    dg = -sqrt(p / mu) * cos(L) * Delta1 + sqrt(p / mu) * (1 / q) * ((q + 1) * sin(L) + g) * Delta2 + sqrt(p / mu) * (f / q) * (h * sin(L) - k * cos(L)) * Delta3
    dh = sqrt(p / mu) * (s2 * cos(L) / (2 * q)) * Delta3
    dk = sqrt(p / mu) * (s2 * sin(L) / (2 * q)) * Delta3
    dL = sqrt(p / mu) * (1 / q) * (h * sin(L) - k * cos(L)) * Delta3 + sqrt(mu * p) * ((q / p)^2)
    dw = -(T * (1 + 0.01 * tau) / Isp)

    return [dp, df, dg, dh, dk, dL, dw]
end


@variables(dubins, begin
    Δt              # Time step
    x[1:7, 1:n]     # State variables
    u[1:3, 1:n]     # Control variables (thrust)
end)


# Dynamic constraints: hermite_simpson
for j in 1:n-1
    f1 = f(x[1:ns, j], u[1:nu, j])
    f2 = f(x[1:ns, j+1], u[1:nu, j+1])

    # hermite-simpson
    ub = (u[1:nu, j] + u[1:nu, j+1]) / 2

    xb = (x[1:ns, j] + x[1:ns, j+1]) / 2 + Δt * (f1 - f2) / 8
    fb = f(xb, ub)

    dx = Δt * (f1 + 4 * fb + f2) / 6

    # # trapezoidal
    # dx = Δt * (f1 + f2) / 2

    # Formulating the constraint after @infiltrate takes a long time (does not finish)
    @infiltrate
    @constraint(dubins, x[1:ns, j+1] .== x[1:ns, j] + dx)
end

# # Objective
# function obj_f(x, u, Δt)
#     return Δt
# end
# @objective(dubins, Min, obj_f(x[1:7,1:n], u[1:3,1:n], Δt))

# #Initial conditions
# @constraint(dubins, x[1:7, 1] .== x0 )

Is there any way to make this faster?
Tagging @odow.

JuMP isn’t really built for expressions like this. You can simplify things a lot by introducing an intermediate variable for f.

using JuMP, Ipopt
n, ns = 20, 7  # reusing the dimensions and the `f` defined in the original post
dubins = Model(Ipopt.Optimizer)
@variables(dubins, begin
    kp[1:1]
    Δt
    x[1:7, 1:n]
    u[1:3, 1:n]
    f_var[1:7, 1:n]
end)
@constraint(dubins, [j in 1:n], f_var[:, j] .== f(x[:, j], u[:, j]));
for j in 1:n-1
    ub = (u[:, j] + u[:, j+1]) / 2
    xb = (x[:, j] + x[:, j+1]) / 2 + Δt * (f_var[:, j] - f_var[:, j+1]) / 8
    dx = Δt * (f_var[:, j] + 4 * f(xb, ub) + f_var[:, j+1]) / 6
    @constraint(dubins, x[1:ns, j+1] .== x[1:ns, j] + dx)
end
optimize!(dubins)

You can also tidy up your f function a lot.

For example, instead of

    ir1 = rVec[1] / rMag
    ir2 = rVec[2] / rMag
    ir3 = rVec[3] / rMag
    ir = [ir1, ir2, ir3]

do

ir = rVec ./ rMag
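
and similarly for the other unit vectors:

it = rCrossvCrossr ./ (rCrossvMag * rMag)
ih = rCrossv ./ rCrossvMag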

Defining another variable speeds it up, but if I increase n it gets stuck again. I will try to optimize the f function and see if it helps.
Adding more variables will increase the size of the optimization problem, right?

Your issue is not the number of variables and constraints, but the size of the expressions.

Try adding intermediate variables for rX, rY, and rZ, as well as vX, vY, vZ.
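
A minimal sketch of that lifting pattern (hypothetical names; rX_of etc. stand in for the sub-expressions currently computed inside f, not real helpers):

@variables(model, begin
    rX_v[1:n]
    rY_v[1:n]
    rZ_v[1:n]
    rMag_v[1:n] >= 0
end)
# Defining constraints keep each traced expression small:
@constraint(model, [j in 1:n], rX_v[j] == rX_of(x[:, j]))   # rX_of: hypothetical helper
@constraint(model, [j in 1:n], rY_v[j] == rY_of(x[:, j]))
@constraint(model, [j in 1:n], rZ_v[j] == rZ_of(x[:, j]))
# rMag can then be defined without a sqrt:
@constraint(model, [j in 1:n], rMag_v[j]^2 == rX_v[j]^2 + rY_v[j]^2 + rZ_v[j]^2)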


The f function acts as an input to the package DirectOptimalControl.jl. The user enters the system dynamics, dimensions, and objective functions, and the package provides the solution. See the rocket example using this package (the function analogous to f there is dyn).
The function f also has to operate on Float64 for computing errors, mesh recomputation, etc., and as such must be generic, so I cannot introduce additional variables inside f. However, optimizing the Hermite-Simpson code here is fine.
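For example, the same generic f must support both call patterns below (a sketch; tau is passed as a plain number here, as in the three-argument f(x, u, tau) that appears later in this thread):

x_num, u_num = ones(7), zeros(3)   # hypothetical Float64 iterates from mesh refinement
f(x_num, u_num, -25.0)             # numeric path: error estimation on Float64
f(x[:, 1], u[:, 1], tau)           # traced path: builds JuMP expressions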
The particular example I have taken is only for testing the package on large problems, and I have added it here as an MWE.
Writing the package with JuMP using function tracing is very convenient, and it works nicely for small to medium-sized problems, but it might not be scalable. Some examples using the package:

Multi-phase problems (Dubins vehicle).

Mesh refinement (Rocket example): starts with a coarse grid and adds points as required.

The problem is really that the new interface does not cache common subexpressions, so you get lots of very complicated expressions appearing multiple times. @ccoffrin and I have discussed this, but nothing concrete yet.
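
A toy illustration of the blow-up (a sketch, not from the thread):

using JuMP
m = Model()
@variable(m, z)
e = sin(z) * z^2
big = e + e           # `e` is duplicated in the flattened expression, not cached
length(string(big))   # roughly doubles with every reuse of `e`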


It would be nice if this could be made to work. In the meanwhile, I will try the Ipopt C interface directly and see how far I can get. Hand-rolling gradients and Hessians for this is a pain :sweat_smile:

Hey @A_C
I think the devs of OptimalControl.jl had similar issues and used ADNLPModels.jl + NLPModelsIpopt.jl instead of recoding the interface to Ipopt.

OptimalControl.jl: https://github.com/control-toolbox/OptimalControl.jl / https://www.youtube.com/watch?v=RYUtVnzLj5k&t=4s
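
A minimal sketch of that stack on a toy unconstrained problem (not the orbit-transfer dynamics above):

using ADNLPModels, NLPModelsIpopt
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2, 1.0])
stats = ipopt(nlp)   # derivatives come from AD; Ipopt solves the NLP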


Hi @tmigot
The package looks very interesting, especially the indirect shooting part. Is there a plan to add (1) multiple phases, (2) mesh refinement, and (3) scaling to the package?

It is mentioned here that ADNLPModels.jl is not very efficient for large models. Also, in the benchmark here, NLPModels seems to be slower than JuMP. So it would be interesting to see how the package performs on some largish problems.

These questions should probably be asked directly to the developers of OptimalControl.jl; feel free to open an issue on their package.

For the rest, the comment in the documentation is slightly outdated, sorry for that. My point is more that if JuMP doesn't fit (as temporarily seems to be the case here), this is a good alternative.


NLPModels seems to be slower than JuMP

This is only true if you can write out the problem analytically for JuMP. Each tool has trade-offs. If you could write the problem in JuMP, it could generate fast oracles, but the expressions are too large :smile:.

It’s a case of try and see what works.


Yup, it would be nice to compare the different approaches for optimal control:

  1. JuMP with function tracing
  2. ADNLPModels.jl
  3. Nonconvex.jl
  4. Optimization.jl
  5. Casadi.jl
  6. Directly with Ipopt.jl

I am currently trying with Ipopt.jl directly. Once I finish that, I will try the other approaches.

I think if you plan such a comparison, you should differentiate the modeling tools from the solvers. For instance, in the AC optimal power flow benchmark the comparison is only between the modeling tools, and the solver used is always Ipopt.
It could be interesting to try out solvers other than Ipopt too.

If you know the derivatives and want to use Ipopt, one way is to use ManualNLPModels.jl.


We’ve already set this up. Notice that the folder for OptimizationFrameworks (https://github.com/SciML/SciMLBenchmarks.jl/tree/master/benchmarks/OptimizationFrameworks) is separate from the folder for Optimization (https://github.com/SciML/SciMLBenchmarks.jl/tree/master/benchmarks/Optimizaton). OPF is currently in the OptimizationFrameworks category only, and Optimization just has a simple 2D Rosenbrock (https://github.com/SciML/SciMLBenchmarks.jl/blob/master/benchmarks/Optimizaton/2drosenbrock.jmd), but we’re growing that pretty soon once the new set of solvers is close to release. And if you check the built benchmarks:

https://docs.sciml.ai/SciMLBenchmarksOutput/dev/OptimizationFrameworks/optimal_powerflow/

You will notice that these correspond to separate sections, i.e., Optimization Framework Benchmarks and Nonlinear Optimization Solver Benchmarks are the two titles.

Note that the Parameter Estimation and Inverse Problem benchmarks, like the FitzHugh-Nagumo parameter estimation benchmarks, are also effectively nonlinear optimization solver benchmarks, showcasing the difference between using Optim.jl, NLopt.jl, and soon a few of the others hooked into Optimization.jl (Metaheuristics.jl and again the new Julia global optimizers we’re working on), all in the same interface with the same function definition, to be more of a solver head-to-head.

And yet another set of benchmarks that is solver vs. solver is the physics-informed neural network (PINN) training benchmarks, for example the Diffusion Equation PINN Optimizer Benchmarks. Those only focus on local optimizers for clear reasons, and we haven’t added Ipopt to them yet, though we should in the next round.

If you have any optimization problems you’d like added to the system, open an issue and let us know. Most of the ones that we have planned are things along the lines of PDE-constrained optimization, controls, and parameter estimation problems.


NLopt does not support sparsity (it is mentioned here).
Theoretically, the algorithm used in Optim.jl should support sparsity. However, the way it is implemented using PositiveFactorizations.jl will, I think, destroy the Hessian structure, and PositiveFactorizations.jl does not support sparse matrices. Traditionally it would use a sparse Bunch-Kaufman factorization, but that routine is not available in Julia.
Since sparsity is not supported, they will not perform well, at least for optimal control problems. There are other options like Knitro, SNOPT, and Bonmin, but those are commercial. There is MadNLP.jl, but that is a work in progress. So we are left only with Ipopt. I guess a similar argument applies to AC optimal power flow.

You have more alternatives, like Percival.jl, which is factorization-free and so well adapted to large-scale problems.
If you have only equality constraints (so no inequalities and no bounds), FletcherPenaltySolver.jl or DCISolver.jl might work too.
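
A sketch of trying one of those on an NLPModel (assuming an `nlp` built, e.g., with ADNLPModels.jl as in the snippet above):

using Percival
stats = percival(nlp)   # factorization-free augmented-Lagrangian solve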

This model is not large-scale; it has only ~800 variables. The problem isn’t the speed of solving, but the cost of constructing the derivatives.

julia> model
A JuMP Model
Minimization problem with:
Variables: 798
Objective function type: VariableRef
`NonlinearExpr`-in-`MathOptInterface.EqualTo{Float64}`: 273 constraints
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 190 constraints
`QuadExpr`-in-`MathOptInterface.EqualTo{Float64}`: 266 constraints
`VariableRef`-in-`MathOptInterface.EqualTo{Float64}`: 7 constraints
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 195 constraints
`VariableRef`-in-`MathOptInterface.LessThan{Float64}`: 195 constraints
Model mode: AUTOMATIC
CachingOptimizer state: ATTACHED_OPTIMIZER
Solver name: Ipopt
Names registered in the model: d_x, f_b, f_i, tau, u, u_b, x, x_b, Δt

Here’s one JuMP approach that builds quickly, but it requires introducing some additional variables:

using JuMP
import Ipopt

cross(a, b) = [a[2]*b[3]-a[3]*b[2], a[3]*b[1]-a[1]*b[3], a[1]*b[2]-a[2]*b[1]]

function f(x, u, tau)
    T = 4.446618e-3
    Isp = 450
    mu = 1.407645794e16
    gs = 32.174
    Re = 20925662.73
    J2 = 1082.639e-6
    J3 = -2.565e-6
    J4 = -1.608e-6
    # Extract state, control, and parameter for problem
    p, f, g, h, k, L, w = x
    ur, ut, uh = u
    # Gravitational disturbing acceleration
    q = 1 + f * cos(L) + g * sin(L)
    r = p / q
    alpha2 = h * h - k * k
    chi = sqrt(h * h + k * k)
    s2 = 1 + chi * chi
    rX = (r / s2) * (cos(L) + alpha2 * cos(L) + 2 * h * k * sin(L))
    rY = (r / s2) * (sin(L) - alpha2 * sin(L) + 2 * h * k * cos(L))
    rZ = (2 * r / s2) * (h * sin(L) - k * cos(L))
    rVec = [rX, rY, rZ]
    rMag = sqrt(rX^2 + rY^2 + rZ^2)
    rXZMag = sqrt(rX^2 + rZ^2)
    vX = -(1 / s2) * sqrt(mu / p) * (sin(L) + alpha2 * sin(L) - 2 * h * k * cos(L) + g - 2 * f * h * k + alpha2 * g)
    vY = -(1 / s2) * sqrt(mu / p) * (-cos(L) + alpha2 * cos(L) + 2 * h * k * sin(L) - f + 2 * g * h * k + alpha2 * f)
    vZ = (2 / s2) * sqrt(mu / p) * (h * cos(L) + k * sin(L) + f * h + g * k)
    vVec = [vX, vY, vZ]
    rCrossv = cross(rVec, vVec)
    rCrossvMag = sqrt(rCrossv[1]^2 + rCrossv[2]^2 + rCrossv[3]^2)
    rCrossvCrossr = cross(rCrossv, rVec)
    ir1 = rVec[1] / rMag
    ir2 = rVec[2] / rMag
    ir3 = rVec[3] / rMag
    ir = [ir1, ir2, ir3]
    it1 = rCrossvCrossr[1] / (rCrossvMag * rMag)
    it2 = rCrossvCrossr[2] / (rCrossvMag * rMag)
    it3 = rCrossvCrossr[3] / (rCrossvMag * rMag)
    it = [it1, it2, it3]
    ih1 = rCrossv[1] / rCrossvMag
    ih2 = rCrossv[2] / rCrossvMag
    ih3 = rCrossv[3] / rCrossvMag
    ih = [ih1, ih2, ih3]
    enir = ir3
    enirir1 = enir * ir1
    enirir2 = enir * ir2
    enirir3 = enir * ir3
    enenirir1 = 0 - enirir1
    enenirir2 = 0 - enirir2
    enenirir3 = 1 - enirir3
    enenirirMag = sqrt(enenirir1^2 + enenirir2^2 + enenirir3^2)
    in1 = enenirir1 / enenirirMag
    in2 = enenirir2 / enenirirMag
    in3 = enenirir3 / enenirirMag
    # Geocentric latitude
    sinphi = rZ / rXZMag
    cosphi = sqrt(1 - sinphi^2)
    # Legendre polynomials
    P2 = (3 * sinphi^2 - 2) / 2
    P3 = (5 * sinphi^3 - 3 * sinphi) / 2
    P4 = (35 * sinphi^4 - 30 * sinphi^2 + 3) / 8
    dP2 = 3 * sinphi
    dP3 = (15 * sinphi - 3) / 2
    dP4 = (140 * sinphi^3 - 60 * sinphi) / 8
    # Oblate earth perturbations
    sumn = (Re / r)^2 * dP2 * J2 + (Re / r)^3 * dP3 * J3 + (Re / r)^4 * dP4 * J4
    sumr = (2 + 1) * (Re / r)^2 * P2 * J2 + (3 + 1) * (Re / r)^3 * P3 * J3 + (4 + 1) * (Re / r)^4 * P4 * J4
    ∆gn = -(mu * cosphi / (r^2)) * sumn
    ∆gr = -(mu / (r^2)) * sumr
    ∆gnin1 = ∆gn * in1
    ∆gnin2 = ∆gn * in2
    ∆gnin3 = ∆gn * in3
    ∆grir1 = ∆gr * ir1
    ∆grir2 = ∆gr * ir2
    ∆grir3 = ∆gr * ir3
    ∆g1 = ∆gnin1 - ∆grir1
    ∆g2 = ∆gnin2 - ∆grir2
    ∆g3 = ∆gnin3 - ∆grir3
    Deltag1 = ir[1] * ∆g1 + ir[2] * ∆g2 + ir[3] * ∆g3
    Deltag2 = it[1] * ∆g1 + it[2] * ∆g2 + it[3] * ∆g3
    Deltag3 = ih[1] * ∆g1 + ih[2] * ∆g2 + ih[3] * ∆g3
    # Thrust acceleration
    DeltaT1 = ((gs * T * (1 + 0.01 * tau)) / w) * ur
    DeltaT2 = ((gs * T * (1 + 0.01 * tau)) / w) * ut
    DeltaT3 = ((gs * T * (1 + 0.01 * tau)) / w) * uh
    # Total acceleration
    Delta1 = Deltag1 + DeltaT1
    Delta2 = Deltag2 + DeltaT2
    Delta3 = Deltag3 + DeltaT3
    # Differential equations of motion
    dp = (2 * p / q) * sqrt(p / mu) * Delta2
    df = sqrt(p / mu) * sin(L) * Delta1 + sqrt(p / mu) * (1 / q) * ((q + 1) * cos(L) + f) * Delta2 - sqrt(p / mu) * (g / q) * (h * sin(L) - k * cos(L)) * Delta3
    dg = -sqrt(p / mu) * cos(L) * Delta1 + sqrt(p / mu) * (1 / q) * ((q + 1) * sin(L) + g) * Delta2 + sqrt(p / mu) * (f / q) * (h * sin(L) - k * cos(L)) * Delta3
    dh = sqrt(p / mu) * (s2 * cos(L) / (2 * q)) * Delta3
    dk = sqrt(p / mu) * (s2 * sin(L) / (2 * q)) * Delta3
    dL = sqrt(p / mu) * (1 / q) * (h * sin(L) - k * cos(L)) * Delta3 + sqrt(mu * p) * ((q / p)^2)
    dw = -(T * (1 + 0.01 * tau) / Isp)
    return [dp, df, dg, dh, dk, dL, dw]
end

n = 20
x_0 = [21837080.052835, 0, 0, -0.25396764647494, 0, pi, 1]
x_l = [20_000_000, -1, -1, -1, -1, x_0[6], 0.1]
x_u = [60_000_000, 1, 1, 1, 1, 9*2*pi, x_0[7]]

model = Model(Ipopt.Optimizer)
@variables(model, begin
    0 <= Δt <= 100_000
    x_l[i] <= x[i = 1:7, 1:n] <= x_u[i]
    -1 <= u[1:3, 1:n] <= 1
    -50 <= tau <= 0
    f_i[1:7, 1:n]
    u_b[1:3, 1:n-1]
    x_b[1:7, 1:n-1]
    f_b[1:7, 1:n-1]
    d_x[1:7, 1:n-1]
end)
@objective(model, Min, Δt)
fix.(x[1:7, 1], x_0; force = true)
@constraints(model, begin
    [j in 1:n], f_i[:, j] .== f(x[:, j], u[:, j], tau)
    [j in 1:n-1], 2 * u_b[:, j] .== u[:, j] + u[:, j+1]
    [j in 1:n-1], x_b[:, j] .== (x[:, j] + x[:, j+1]) / 2 + Δt * (f_i[:, j] - f_i[:, j+1]) / 8
    [j in 1:n-1], f_b[:, j] .== f(x_b[:, j], u_b[:, j], tau)
    [j in 1:n-1], d_x[:, j] .== Δt * (f_i[:, j] + 4 * f_b[:, j] + f_i[:, j+1]) / 6
    [j in 1:n-1], x[:, j+1] .== x[:, j] + d_x[:, j]
end);
optimize!(model)

The issue is that the f function is very complicated. It can probably be simplified.

The other issue is that solving currently results in:

julia> optimize!(model)
This is Ipopt version 3.14.13, running with linear solver MUMPS 5.6.2.

Number of nonzeros in equality constraint Jacobian...:     4950
Number of nonzeros in inequality constraint Jacobian.:        0
Number of nonzeros in Lagrangian Hessian.............:    12319

The Jacobian for the equality constraints contains an invalid number

Number of Iterations....: 0

Number of objective function evaluations             = 0
Number of objective gradient evaluations             = 0
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 0
Number of equality constraint Jacobian evaluations   = 1
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations             = 0
Total seconds in IPOPT                               = 0.301

EXIT: Invalid number in NLP function or derivative detected.

Likely because the nasty expression has some points at which it is not differentiable.
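One possible workaround (a sketch; the guesses are arbitrary and may not be sufficient on their own) is to give Ipopt a starting point away from the non-differentiable points, since the default start of zero puts p = 0 inside sqrt(p / mu):

for j in 1:n
    set_start_value.(x[:, j], x_0)       # start states at the initial state
end
for j in 1:n-1
    set_start_value.(x_b[:, j], x_0)     # x_b is also fed through f
end
set_start_value(Δt, 1_000.0)             # arbitrary positive guess
set_start_value(tau, -25.0)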


Yes, the problem here is the cost of constructing derivatives. The discussion about the solvers was for benchmarking purposes. Your code is quite fast, and if you replace the last few lines with the trapezoidal method and keep n = 10, it converges to a solution:

@objective(model, Max, x[4, n])
fix.(x[1:7, 1], x_0; force = true)
@constraints(model, begin
    [j in 1:n], f_i[:, j] .== f(x[:, j], u[:, j], tau)
    # [j in 1:n-1], u_b[:, j] .== (u[:, j] + u[:, j+1])/2
    # [j in 1:n-1], x_b[:, j] .== (x[:, j] + x[:, j+1]) / 2 + Δt * (f_i[:, j] - f_i[:, j+1]) / 8
    # [j in 1:n-1], f_b[:, j] .== f(x_b[:, j], u_b[:, j], tau)
    # [j in 1:n-1], d_x[:, j] .== Δt * (f_i[:, j] + 4 * f_b[:, j] + f_i[:, j+1]) / 6
    [j in 1:n-1], d_x[:, j] .== Δt * (f_i[:, j] + f_i[:, j+1]) / 2
    [j in 1:n-1], x[:, j+1] .== x[:, j] + d_x[:, j]
end);

optimize!(model)

So, no idea why the Hermite-Simpson formulation does not work. It works for some other examples that I have tried.

The NLP problems in optimal control are not very large, I suppose. Checking the examples in Betts’ book, one of the largest problems has 5149 variables, but the problem needs to be solved around 10 to 15 times for mesh refinement. The Jacobian and Hessian are sparse, so solvers which support sparsity should perform better.


There is a significant slowdown with Ipopt.jl v1.6.0, which uses MUMPS 5.6.2. The previous version, which used MUMPS 5.6.1, was running fine. You can test this with:

using PowerModels, Ipopt, PGLib
solve_ac_opf("pglib_opf_case3012wp_k.m", optimizer_with_attributes(Ipopt.Optimizer))

It is not Ipopt’s problem, but the MUMPS 5.6.2 that ships with it. In fact, it is not even a problem with MUMPS 5.6.2 itself, because I use it with Ipopt from MATLAB and it runs fine there. I believe the BLAS it uses is a statically compiled vanilla BLAS that was included in the library by mistake, or something similar. On my laptop this example takes 62 iterations and approximately 10 seconds to converge before upgrading Ipopt; after upgrading, 80 seconds are spent in Ipopt. On Julia 1.8.5 everything is fine: somewhat slower than the defaults in Julia 1.10, but Ipopt was not this slow there. Downgrading to Ipopt.jl v1.5.1, which uses MUMPS 5.6.1, solves the problem.
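For reference, a sketch of pinning the older stack in the active environment:

import Pkg
Pkg.add(name = "Ipopt", version = "1.5.1")
Pkg.pin("Ipopt")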


This comes as a bit of a surprise to me. I have not noticed any notable change in Ipopt’s performance in Julia in the last couple of years. If you can put together an MWE that highlights the issue, I am sure @odow and others would be interested in understanding what is going on.