Is there a Symbolics.jl interface to Ipopt (or similar)?

Hello, is there an interface to Ipopt which can accept Symbolics functions?

There is NLPModelsIpopt, which accepts NLPModels. Going via ADNLPModels seems unnecessary, as Symbolics can already provide all the Hessians, Jacobians, etc.
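
For instance, something along these lines (a minimal sketch of what I mean, using only Symbolics' derivative utilities; the expressions are made up):

using Symbolics

@variables x y
obj = (x - 3)^2 + x * y^2
cons = [x^2 - 1, y^2 - 1]

Symbolics.gradient(obj, [x, y])    # symbolic gradient
Symbolics.hessian(obj, [x, y])     # symbolic Hessian
Symbolics.jacobian(cons, [x, y])   # symbolic constraint Jacobian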

I tried to implement a (VERY) quick proof-of-concept SymNLPModels.jl package and it seems reasonably straightforward. Should I continue with it, or am I reinventing the wheel?


Try ADNLPModels.


ADNLPModels seems to accept only callable functions, not Symbolics.jl expressions.

You can create an optimization problem with ModelingToolkit (Modeling Optimization Problems · ModelingToolkit.jl), which uses Symbolics under the hood. Then, instead of the Optim wrapper shown there, use the OptimizationMOI sub-package from Optimization.jl to call Ipopt (as shown in the examples there).

Interesting! Two questions:

  1. If the Symbolics expressions already exist – e.g. being changed throughout the course of an iterative algorithm – can ModelingToolkit accept them?
  2. Can MOI really handle arbitrary Symbolics expressions, or will I have to fight it?

Yes

Yes, we give MOI functions.

That is a rather overwhelming collection of packages. Could I get a little hint, please? It seems I am missing a level of indirection.

using ModelingToolkit
using OptimizationMOI
using Ipopt

vars = @variables x y

objective = 5*x*y^2 - 2*x^2 + y^3 - 1
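
# `≲` denotes an inequality (≤) constraint in ModelingToolkit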
constraints = [x^2 ≲ 1, y^2 ≲ 1]

@named os = OptimizationSystem(objective, vars, []; constraints)

u0 = vars .=> 0.
prob = OptimizationProblem(os, u0; grad = true, hess = true)

sol = solve(prob, Ipopt.Optimizer())

@Vaibhavdixit02 is going to write a tutorial on this. That’s probably the most productive way to share this information given that indeed the documentation on this is currently too sparse.

Since Ipopt needs the constraints' Jacobian and Hessians, you also need to switch those on in the OptimizationProblem:

prob = OptimizationProblem(os, u0; grad = true, hess = true, cons_j = true, cons_h = true)
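
With those enabled, the solve call goes through, and the result can be read off Optimization.jl's solution object:

sol = solve(prob, Ipopt.Optimizer())
sol.u          # minimizer, in the order of `vars`
sol.objective  # objective value at the minimizer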

The tutorial for this case already exists: Modeling Optimization Problems · ModelingToolkit.jl


That works great, thank you! 🙂

What does the current error message say? It might need to be more explicit if the user didn’t figure this one out.


Oh, it’s probably down to my inexperience with the project, so I did not want to bother you with it.

It says Use OptimizationFunction to pass the derivatives or automatically generate them with one of the autodiff backends, which I interpreted to mean that the OptimizationMOI package implements an OptimizationFunction as some kind of adapter class to MOI and wants me to use it instead of OptimizationProblem.

I would say a clearer message would be

Enable automatic generation of derivatives in $(TypeName), or pass them directly.

Here TypeName would be the type name of whatever was passed in (OptimizationProblem), if it supports it (or a list of the ones that do).

Hey @votroto!

Note that we are adding several backends to compute derivatives in ADNLPModels; in particular, it is possible to compute the Jacobian and Hessian with Symbolics (so adding the gradient wouldn't be difficult, if you are interested in contributing).

The following works:

using ADNLPModels, Symbolics, NLPModels

@variables x
objective = (x - 3)^2

variables = Symbolics.get_variables(objective)
build(f) = Symbolics.build_function(f, variables; expression = false)
f = build(objective)  # compiled callable taking a vector argument

T = Float64
# `objective` has a single variable, so the starting point has length 1
nlp = ADNLPModel(f, zeros(T, 1), hessian_backend = ADNLPModels.SparseSymbolicsADHessian)
hess(nlp, nlp.meta.x0)
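
To close the loop with NLPModelsIpopt from the original question, the resulting model can then be handed to Ipopt directly (a sketch; `ipopt` is NLPModelsIpopt's solver entry point):

using NLPModelsIpopt

stats = ipopt(nlp)               # solve the NLPModel with Ipopt
stats.solution, stats.objective  # minimizer and objective value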

I hope you understand that even though the model is specified with a Symbolics expression, Ipopt does not perform symbolic computations internally.
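
Concretely, the symbolic work happens once, up front, and what Ipopt calls afterwards is just a compiled numeric function. A sketch of that compile-once step, using only Symbolics' hessian and build_function:

using Symbolics

@variables x1 x2
obj = (x1 - 3)^2 + x1 * x2

# symbolic differentiation happens once, ahead of time...
H_sym = Symbolics.hessian(obj, [x1, x2])

# ...and is compiled to a plain numeric callback, which is all the
# solver ever evaluates
H_num = Symbolics.build_function(H_sym, [x1, x2]; expression = Val{false})[1]
H_num([1.0, 2.0])   # ordinary 2×2 Float64 matrix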

I’ll also leave the JuMP example here for completeness:

using JuMP, Ipopt
model = Model(Ipopt.Optimizer)
@variable(model, x, start = 0)
@variable(model, y, start = 0)
@NLobjective(model, Min, 5*x*y^2 - 2*x^2 + y^3 - 1)
@constraint(model, x^2 <= 1)
@constraint(model, y^2 <= 1)
optimize!(model)
value(x), value(y)

@odow, JuMP cannot directly send DynamicPolynomials or Symbolics expressions to Ipopt, can it?

But even in static cases, suppose the example is closer to this (the example is made up and has no particular meaning):

using ModelingToolkit, OptimizationMOI, Ipopt
ADPARAMS = (;grad = true, hess = true, cons_j = true, cons_h = true)

f(p, q) = (1 / sqrt(2π)) * exp(-((p - q)^2) / 2)
total(p, q) = sum(_p * f(i, q) for (i, _p) in enumerate(p))
l1(p, q) = 1 - total(p, q) + 0.5 * total(p, 0.5)
l2(p, q) = total(p, q) - 1
lhs(p, q, _q) = l1(p, q) - l1(p, _q)

function approximate(Q)
    @variables p[1:5] [bounds = (-2.0, 2.0)]
    @variables w [bounds = (-1.0, 3.0)] 
    @variables q [bounds = (-1.0, 3.0)]
    variables = [p; w; q]

    constraints = [w*lhs(p, q, _q) + (1-w)*l2(p, q) ≲ 0 for _q in Q]
    @named os = OptimizationSystem(w, variables, []; constraints)
    prob = OptimizationProblem(os, randn(length(variables)); ADPARAMS...)
    return solve(prob, Ipopt.Optimizer())
end

approximate(-0.8:0.4:0.8)

I always struggle in JuMP to model NLPs involving arrays, summations, polynomials, custom multi-argument functions, etc. Maybe I'm doing it wrong, I don't know…

I always struggle in JuMP to model NLPs involving arrays, summations, polynomials, custom multi-argument functions, etc.

Yeah this is a known problem. For now you’d need to do some ugly hack like:

using JuMP, Ipopt
begin
    Q = -0.8:0.4:0.8
    model = Model(Ipopt.Optimizer)
    @variable(model, -2 <= p[1:5] <= 2)
    @variable(model, -1 <= w <= 3)
    @variable(model, -1 <= q <= 3)
    @objective(model, Min, w)
    total = Dict(
        _q => @NLexpression(
            model, 
            sum(_p / sqrt(2π) * exp(-(i - _q)^2 / 2) for (i, _p) in enumerate(p))
        )
        for _q in Any[Q; q; 0.5]
    )
    l1 = Dict(
        _q => @NLexpression(model, 1 - total[_q] + 0.5 * total[0.5])
        for _q in Any[Q; q]
    )
    @NLconstraint(
        model, 
        [_q in Q], 
        w * (l1[q] - l1[_q]) + (1 - w) * (total[q] - 1) <= 0
    )
    optimize!(model)
end
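
For the multi-argument user functions in particular, one stopgap (a sketch using JuMP's register mechanism for user-defined scalar functions, not something from this thread) is:

using JuMP, Ipopt

my_f(p, q) = (1 / sqrt(2π)) * exp(-((p - q)^2) / 2)

model = Model(Ipopt.Optimizer)
@variable(model, -2 <= a <= 2)
@variable(model, -1 <= b <= 3)

# register the scalar function with its argument count so the @NL* macros
# can call it; derivatives come from ForwardDiff via autodiff = true
register(model, :my_f, 2, my_f; autodiff = true)
@NLobjective(model, Min, my_f(a, b))
optimize!(model)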

We’re working on fixing this, but it’s not ready yet: https://github.com/jump-dev/JuMP.jl/pull/3106#issuecomment-1592091623
