Trust-region constrained optimization

Hello,

I was wondering if there exists a Julia package that implements a trust-region constrained optimization method. The packages I have seen so far are for unconstrained or bound-constrained problems. I am looking for general non-linear equality and inequality constrained minimization.

I see that SciPy has it (Optimization (scipy.optimize) — SciPy v1.11.0.dev0+1313.bb673f2 Manual). Is there a similar package or wrapper in Julia?

Thank you,

shce


I’d use Ipopt; you can use this solver through Optimization.jl.
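
For instance, a minimal sketch of that route (hedged: this follows my reading of the constrained-optimization tutorial in the Optimization.jl docs, via the OptimizationMOI wrapper; check the current docs for the exact keyword names):

    using Optimization, OptimizationMOI, Ipopt

    # Rosenbrock objective; p is an (unused) parameter slot
    rosenbrock(x, p) = (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2
    # In-place constraint function: res holds the constraint values
    cons(res, x, p) = (res .= [x[1]^2 + x[2]^2]; nothing)

    optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
    # One inequality constraint: -Inf ≤ x₁² + x₂² ≤ 1
    prob = OptimizationProblem(optf, [-1.2, 1.0]; lcons = [-Inf], ucons = [1.0])
    sol = solve(prob, Ipopt.Optimizer())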

Percival is an augmented Lagrangian method whose internal solver is tron, a trust-region solver.
DCISolver.jl uses something similar to trust regions, but it is restricted to equality constraints.
These are all in pure Julia.

You can also access Ipopt through NLPModelsIpopt.jl if you want something more traditional, but it is a line-search method, not a trust-region one.

Since they are all JSO solvers, you can follow this tutorial and just swap IPOPT for the other solvers (if the problem is accepted); see the sketch below.

JSO solver list
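
For instance (a sketch assuming ADNLPModels.jl for the modeling, as in the tutorial; the same nlp object can be handed to any of the solvers above):

    using ADNLPModels, NLPModelsIpopt, Percival

    # Minimize x₁ + x₂ on the unit circle (one nonlinear equality constraint)
    nlp = ADNLPModel(x -> x[1] + x[2], [0.5; 0.5], x -> [x[1]^2 + x[2]^2], [1.0], [1.0])

    stats = ipopt(nlp)      # line-search interior-point method
    stats = percival(nlp)   # augmented Lagrangian with trust-region subsolver (tron)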


Thank you for your suggestion. I’m currently using Ipopt, but I would like to limit the step size between consecutive iterations. Unfortunately, I cannot find a parameter to control it. That’s one of the reasons I would like to use a trust-region method.

Percival looks interesting. After a quick reading of the research paper, I have a couple of questions you may know the answer to.

  1. Is it possible to specify the radius for the trust-region optimization part?
  2. I understand that the Hessian is approximated. Is it possible to use exact derivatives for the objective function and the constraints?

Thank you very much,

shce

That is not usually desired behaviour, because the trust-region radius controls the “progress” of the solver. If you fix the radius and a step is bad, there is no way to recover. And if a step is good, the radius could be too small.

You can pass an argument max_radius to tron, but it is only used for the initial trust region.
So something like

    percival(nlp; subsolver_kwargs = Dict(:max_radius => 123.4))

should work. But as I said, only the first iteration will use this radius. If the steps are good, the radius will increase.
Do you want to always limit this radius?

  2. I thought that by default the exact derivatives were used for the Hessian, but I don’t remember. @tmigot, can you answer?

You are right. My mistake. I meant the maximum radius. It is a pity it is only used in the first step… I was looking for something similar to delta_hat in Optim.jl.

Thank you very much

It uses exact derivatives, so the Hessian is not approximated. It computes Hessian-vector products only, so we don’t need to store the Hessian matrix.
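
For example, a small sketch through the NLPModels API (hprod is the Hessian-vector product; here the derivatives come from ADNLPModels.jl):

    using ADNLPModels, NLPModels

    # Unconstrained Rosenbrock model; derivatives provided by AD
    nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])
    v = ones(nlp.meta.nvar)
    Hv = hprod(nlp, nlp.meta.x0, v)   # exact ∇²f(x0) * v, no Hessian matrix formed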

Hi tmigot,

What do you mean by exact derivatives? Do you compute the derivatives by means of packages like ReverseDiff? Or do you use user-provided derivatives?

Since both of you seem to work/have worked on the project, do you think it is possible to add an additional parameter to control the maximum radius?

Thank you

Percival takes any instance of an AbstractNLPModel as input (Home · NLPModels.jl), which defines a set of functions that a solver can rely on.

The derivative part depends on how you model your problem: you can model it with JuMP (and convert it to an NLPModel with NLPModelsJuMP.jl), use automatic differentiation with ADNLPModels.jl (and choose the backend you prefer), or provide the functions manually with ManualNLPModels.jl, … A sketch of the JuMP route is below.
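
For example (a sketch; MathOptNLPModel is the NLPModelsJuMP constructor, and the problem is just an illustration):

    using JuMP, NLPModelsJuMP, Percival

    model = Model()
    @variable(model, x[1:2], start = -1.0)
    @NLobjective(model, Min, (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2)
    @NLconstraint(model, x[1]^2 + x[2]^2 <= 1)

    nlp = MathOptNLPModel(model)   # exact derivatives via JuMP's AD
    stats = percival(nlp)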

I see that the keyword max_radius is there, but not really used in the algorithm. Let me modify it and give you an example.


Thank you very much for your time and dedication.


Another solver interfacing with NLPModels is this one: GitHub - pjssilva/NLPModelsAlgencan.jl

The solver (Algencan) is very well established and robust; the Julia interface is newer.


I didn’t know about this solver. I will explore this solution too. Thank you.

Both NLopt.jl and Nonconvex.jl also include algorithms for arbitrary nonlinear equality/inequality constraints.


Hey @shce
You can now with Percival 0.6.3 use max_radius as follows:

    using ADNLPModels, Percival, Test

    max_radius = 0.00314

    # Callback checking that the trust-region radius never exceeds max_radius
    function cb(nlp, solver, stats)
      @test solver.sub_solver.tr.radius ≤ max_radius
    end
    subsolver_kwargs = Dict(:max_radius => max_radius)

    # Rosenbrock objective with one inequality constraint x₁² + x₂² ≤ 1
    nlp = ADNLPModel(
      x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2,
      [-1.2; 1.0],
      x -> [x[1]^2 + x[2]^2],
      [-Inf],
      [1.0],
    )
    stats = percival(nlp, subsolver_kwargs = subsolver_kwargs, callback = cb)

I added a callback just to test that the radius of the trust-region is indeed restricted, but obviously you don’t necessarily need it.


Thank you for your reply, Steven.

As far as I know, NLopt implements Newton with trust region for unconstrained or bound-constrained problems, but not for general non-linear equality and inequality constraints. Am I right?

I am looking for packages implementing a trust-region constrained optimization method.

That’s impressive! Thank you very much for your quick commit. I will try it with my specific problem.

Thank you @abelsiqueira too.

Hi @tmigot, I might be doing something wrong, but I can’t see what. The following code:

    println("x0 = ", x0)
    nlp = MathOptNLPModel(model)
    nlp.meta.x0 .= x0
    println("nlp.meta.x0 = ", nlp.meta.x0)
    println("nlp.meta.nvar, nlp.meta.ncon = ", nlp.meta.nvar, " ", nlp.meta.ncon)
    max_radius = 0.001
    subsolver_kwargs = Dict(:max_radius => max_radius)
    function cb(nlp, solver, stats)
        @assert solver.sub_solver.tr.radius ≤ max_radius
        println("(cb) x0 = ", nlp.meta.x0)
        println("(cb) x = ", solver.x)
    end
    output = percival(nlp, subsolver_kwargs = subsolver_kwargs, verbose = 1, callback = cb)

outputs for the first iteration:

    x0 = [0.25, 0.25, 0.25, 0.25, 0.8535533905936621, 0.6038554030714633]
    nlp.meta.x0 = [0.25, 0.25, 0.25, 0.25, 0.8535533905936621, 0.6038554030714633]
    nlp.meta.nvar, nlp.meta.ncon = 6 7
    [ Info:   iter        fx    normgp    normcx         μ     normy    sumc     inner_status        iter_type
    [ Info:      0   1.2e+34   1.7e+40   5.1e+00   1.0e+01   2.7e+51       3
    (cb) x0 = [0.25, 0.25, 0.25, 0.25, 0.8535533905936621, 0.0, 0.0, 0.0]
    (cb) x = [0.25, 0.25, 0.25, 0.25, 0.8535533905936621, 0.0, 0.0, 0.0]

What are those three zero components at the end of x? Why is the last component of the initial condition not present in the vector?

Moreover, when I try to explicitly set the initial guess with the x = x0 keyword argument, the following error occurs:

    ERROR: DimensionMismatch: array could not be broadcast to match destination
    Stacktrace:
      [1] check_broadcast_shape
        @ ./broadcast.jl:540 [inlined]
      [2] check_broadcast_axes
        @ ./broadcast.jl:543 [inlined]
      [3] instantiate
        @ ./broadcast.jl:284 [inlined]
      [4] materialize!
        @ ./broadcast.jl:871 [inlined]
      [5] materialize!
        @ ./broadcast.jl:868 [inlined]
      [6] solve!(solver::Percival.PercivalSolver{Vector{Float64}}, nlp::NLPModelsModifiers.SlackModel{Float64, Vector{Float64}, NLPModelsJuMP.MathOptNLPModel}, stats::SolverCore.GenericExecutionStats{Float64, Vector{Float64}, Vector{Float64}, Any}; callback::IntegrationOptimizationBarycentricSymmetry.var"#cb#306", x::Vector{Float64}, μ::Float64, max_iter::Int64, max_time::Float64, max_eval::Int64, atol::Float64, rtol::Float64, ctol::Float64, subsolver_logger::Base.CoreLogging.NullLogger, inity::Nothing, subproblem_modifier::typeof(identity), subsolver_max_eval::Int64, subsolver_kwargs::Dict{Symbol, Int64}, verbose::Int64)
        @ Percival ~/.julia/packages/Percival/u9dGX/src/method.jl:251

Am I doing anything wrong?

Edit: The same model with JuMP and Ipopt is solved successfully.

Thank you

shce

SLSQP is a quasi-Newton method with nonlinear constraints, but it uses a backtracking line search rather than a trust region, if I recall correctly. MMA/CCSA supports nonlinear inequality constraints and has a form of trust region, but it is not a quasi-Newton method.

I’m not sure why you care about whether it is a trust-region method specifically, rather than about what kind of problems it can solve?


I would like to constrain the maximum displacement between two iterations to ensure smooth dynamics. For unconstrained optimization, I have used trust-region methods and obtained beautiful dynamics. Now my problem has constraints, and I would like the same behavior.

I’m sorry. I meant Optim here: