Adaptive barrier method for constrained optimization

Hi all, I’m refactoring some old R code into Julia. In R I was using constrOptim (Linearly Constrained Optimization), which accepts constraints of the form U \theta - c \geq 0, where \theta is the vector of parameters to be optimized, U is a matrix, and c is a vector. The objective function involves an ODE solve, so I’m not able to use JuMP to specify the model structure. I see that Optim.jl has an interior-point Newton method (Interior point Newton · Optim), but it requires a Hessian evaluation as well. Are there any adaptive barrier methods available that would let me use something like BFGS with linear equality/inequality constraints, conveniently and without the JuMP modeling language?
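To make the constraint encoding concrete, here is a tiny made-up example (the bounds are illustrative only): the constraints \theta_1 \geq 0 and \theta_2 \leq 1 would be written as

```julia
# constrOptim-style encoding U*θ - c ≥ 0 for θ[1] ≥ 0 and θ[2] ≤ 1
U = [1.0  0.0;      # row 1:  θ[1] - 0 ≥ 0
     0.0 -1.0]      # row 2: -θ[2] + 1 ≥ 0, i.e. θ[2] ≤ 1
c = [0.0, -1.0]
all(U * [0.3, 0.7] .- c .≥ 0)   # true: this θ is feasible
```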

1 Like

There are lots of libraries out there for nonconvex local optimization with inequality constraints that only require you to supply gradients, and don’t require you to use JuMP. For example, Ipopt.jl, NLopt.jl, Nonconvex.jl, and more…

(They don’t necessarily work by the specific barrier algorithm you mentioned, but do you care?)
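For instance, a minimal NLopt.jl sketch for linear inequality constraints U \theta - c \geq 0 might look like the following; the quadratic objective, U, c, and starting point are placeholders, so swap in your ODE-based objective and its gradient:

```julia
using NLopt, LinearAlgebra

# Placeholder problem data: minimize f(θ) subject to U*θ - c ≥ 0
U = [1.0 0.0; 0.0 -1.0]
c = [0.0, -1.0]

# Objective in NLopt's (θ, grad) form; replace with the ODE-based objective
function f(θ, grad)
    if length(grad) > 0
        grad .= 2 .* θ           # in-place gradient
    end
    return sum(abs2, θ)
end

opt = Opt(:LD_SLSQP, 2)          # gradient-based local solver that handles constraints
min_objective!(opt, f)

# NLopt expects g(θ) ≤ 0, so row i of U*θ - c ≥ 0 becomes c[i] - U[i,:]'θ ≤ 0
for i in eachindex(c)
    inequality_constraint!(opt, (θ, grad) -> begin
        if length(grad) > 0
            grad .= -U[i, :]
        end
        c[i] - dot(U[i, :], θ)
    end, 1e-8)
end

minf, minθ, ret = optimize(opt, [0.5, 0.5])
```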

2 Likes

Thanks for the suggestions. You’re right that I don’t specifically care about that barrier method, though I may reimplement it in Julia just to compare against the others. I think something like Nonconvex.jl is what I’m looking for: an abstract interface where I can test out different solvers.

I’m having a hard time weighing the merits of the various interfaces, though. I also found:

  • Optimization.jl: my differential equations are already in the SciML-verse; I’m not sure whether that gives some advantage when differentiating through a call to the ODE solver if I stay in the ecosystem (see the sketch after this list).
  • NLPModels.jl: this one is interesting to me because I also found that UnoSolver.jl implements its interface, although I would need to dig more to see if it’s the right route for me to take. Maybe @cvanaret would be able to comment.
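For reference, here is a rough sketch of how the linear constraints might be expressed with Optimization.jl’s cons/lcons/ucons interface and solved with Ipopt through OptimizationMOI; the objective, U, c, and starting point are placeholders, not my actual problem:

```julia
using Optimization, OptimizationMOI, Ipopt

# Placeholder problem: minimize f(θ) subject to 0 ≤ U*θ - c (no upper bound)
U = [1.0 0.0; 0.0 -1.0]
c = [0.0, -1.0]

f(θ, p) = sum(abs2, θ)                    # stand-in for the ODE-based objective
cons(res, θ, p) = (res .= U * θ .- c)     # in-place constraint residuals

optf = OptimizationFunction(f, Optimization.AutoForwardDiff(); cons = cons)
prob = OptimizationProblem(optf, [0.5, 0.5];
                           lcons = zeros(length(c)),
                           ucons = fill(Inf, length(c)))
sol = solve(prob, Ipopt.Optimizer())

# Ipopt's quasi-Newton mode can be requested via optimizer attributes, e.g.
# OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer,
#     "hessian_approximation" => "limited-memory")
```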

I think this is understandable to some extent, but not completely: I’m not sure whether you mean that the objective function is defined as the solution of some ODE.

To be honest, I only know that UnoSolver.jl has an interface to NLPModels.jl (and to MathOptInterface.jl), but I’m not really familiar with the package.

1 Like

You could have a look at InfiniteOpt.jl?

1 Like

@slwu89 I suggest implementing your own NLPModel; the API is quite simple: GitHub - JuliaSmoothOptimizers/NLPModels.jl: Data Structures for Optimization Models
We have examples here:

Afterwards, you can provide your model to Ipopt, KNITRO, or Uno (via the interfaces NLPModelsIpopt.jl, NLPModelsKnitro.jl, and UnoSolver.jl) and specify that you want a quasi-Newton approximation.

Example with Ipopt: Tutorial · NLPModelsIpopt.jl
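To illustrate the idea (not the exact tutorial code): instead of hand-writing all the obj/grad!/cons! methods, ADNLPModels.jl can build an NLPModel from plain Julia functions, which can then be passed to Ipopt through NLPModelsIpopt.jl with a limited-memory quasi-Newton Hessian. The objective, U, c, and starting point below are placeholders:

```julia
using ADNLPModels, NLPModelsIpopt

# Placeholder problem data: minimize f(θ) subject to 0 ≤ U*θ - c
U = [1.0 0.0; 0.0 -1.0]
c = [0.0, -1.0]
f(θ) = sum(abs2, θ)                          # stand-in for the ODE-based objective

nlp = ADNLPModel(f, [0.5, 0.5],              # objective and starting point
                 θ -> U * θ .- c,            # constraint function
                 zeros(length(c)),           # lower bounds on the constraints
                 fill(Inf, length(c)))       # upper bounds on the constraints

stats = ipopt(nlp, hessian_approximation = "limited-memory")
```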

1 Like