Trust-region constrained optimization

Wild guess, but the three extra variables could be slack variables that transform inequality constraints into equalities.


Hi @shce!

What are those three zero components at the end of x?
As @cvanaret guessed, Percival.jl is actually adding slack variables to your inequality constraints before solving, which explains why the dimension is augmented.
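
To make the reformulation concrete, here is a minimal sketch (the problem and names are invented for illustration): an inequality c(x) <= 0 becomes the equality c(x) + s = 0 with a new bounded variable s >= 0, so one slack is appended per inequality constraint.

```julia
# Illustrative only: a 2-variable problem with one inequality constraint.
# The slack reformulation appends one extra variable per inequality,
# so the solver works with 2 + 1 = 3 variables instead of 2.
c(x)    = x[1]^2 + x[2]^2 - 1.0   # original constraint: c(x) <= 0
c_eq(z) = c(z[1:2]) + z[3]        # equality form: c(x) + s = 0, with s = z[3] >= 0
```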

Why is the last component of the initial condition not present in the vector?
Well spotted, that was a bug. I think if you update your packages now, this should be fixed.

Moreover, when I try to specifically set the initial guess with the x=x0 keyword argument
Again, well spotted: Percival was expecting a vector of the augmented size (with the slack variables included). I will try to fix this for the next release (the main branch should already work, though).
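
In the meantime, a possible workaround (assuming, as above, that the slacks are simply appended to the end of the variable vector) is to pad the initial guess yourself:

```julia
# Hypothetical workaround: extend the user-supplied initial guess with zeros
# for the slack variables so it matches the augmented problem size.
n_ineq = 3                       # number of inequality constraints in this example
x0_aug = vcat(x0, zeros(n_ineq))
```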


It does work now! :partying_face: I have marked your previous answer as the solution.

Thank you very much for your quick replies and fixes.

If anyone thinks there is a better approach to what I want to accomplish, feel free to reply (even if it is already solved) or contact me privately.


I would have been curious to see how Percival compares to filterSQP, a state-of-the-art trust-region filter SQP method. I don't think there's a Julia binding, though.

If the constraint defines a differentiable manifold, then I would use Manopt.jl. It does not allow inequality constraints, of course, but I find that exploiting the manifold structure of the constraint gives a huge advantage in efficiency.
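
For example, a unit-norm constraint can be absorbed into the search space itself. Here is a minimal sketch with Manopt.jl and Manifolds.jl (the cost function is just a placeholder I made up):

```julia
# Minimize f(p) = p' * A * p subject to ||p|| = 1 by optimizing directly
# on the unit sphere, instead of treating the norm as an explicit constraint.
using Manopt, Manifolds, LinearAlgebra

A = Diagonal([3.0, 2.0, 1.0])
M = Sphere(2)                              # unit sphere embedded in R^3
f(M, p) = p' * A * p
grad_f(M, p) = project(M, p, 2A * p)       # Riemannian gradient: project the Euclidean gradient
p0 = normalize([1.0, 1.0, 1.0])
p_opt = gradient_descent(M, f, grad_f, p0)
```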


Hi, if you need any help with NLPModelsAlgencan.jl, please let me know. It should work well with NLPModels and with JuMP via NLPModelsJuMP.jl.
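
Roughly, the JuMP route looks like the sketch below (a sketch only: the `algencan` entry point and NLPModelsJuMP's `MathOptNLPModel` wrapper are my assumptions here, so please check the package docs for the exact names; the toy problem is made up).

```julia
# Sketch: build a JuMP model, convert it to an NLPModel, then hand it to Algencan.
using JuMP, NLPModelsJuMP, NLPModelsAlgencan

model = Model()
@variable(model, x[1:2], start = 1.0)
@objective(model, Min, (x[1] - 1)^2 + (x[2] - 2)^2)
@constraint(model, x[1]^2 + x[2]^2 <= 1.5)

nlp   = MathOptNLPModel(model)   # NLPModels view of the JuMP model
stats = algencan(nlp)            # solve with Algencan (assumed entry point)
```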


Hi @pjssilva , where can I find a more detailed description of GENCAN (both the optimization method and the package)?

I'm surprised this hasn't been mentioned yet, but Artelys KNITRO (GitHub - jump-dev/KNITRO.jl: Julia interface to the Artelys Knitro solver) handles whatever nonlinear constraints you want. It has several trust-region algorithms to choose from. I solve low-dimensional but ugly problems, and algorithm 4 has a special place in my heart. It also has a lot of tools for pre-solving certain parts of your problem, so the JuMP wrapper linked above is a great option: JuMP is smart enough to identify quadratic expressions in your objective and constraints, which makes it easy to exploit all these fancy options.
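
A minimal sketch of what that looks like through JuMP (this assumes you have a KNITRO license; the toy quadratic problem is just for illustration):

```julia
# Toy example: pick one of KNITRO's algorithms and let JuMP pass the
# structured (here quadratic) objective and constraint to the solver.
using JuMP, KNITRO

model = Model(KNITRO.Optimizer)
set_attribute(model, "algorithm", 4)            # select KNITRO algorithm 4
@variable(model, x[1:2], start = 1.0)
@objective(model, Min, (x[1] - 1)^2 + (x[2] - 2)^2)
@constraint(model, x[1]^2 + x[2]^2 <= 1.5)      # quadratic inequality constraint
optimize!(model)
value.(x)
```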

Even the academic licenses are pretty expensive (although people pay much more for MATLAB without blinking…), but if 300 unknowns is enough for you, then there is a six-month trial you could use.

I'm not sure if I'm able to use commercial code in this project (I would need to check), but I will definitely give it a try for my personal projects and prototyping. Thank you very much.

The main references for GENCAN are:

E. G. Birgin and J. M. Martínez, A box-constrained optimization algorithm with negative curvature directions and spectral projected gradients, Computing [Suppl] 15, pp. 49–60, 2001.

E. G. Birgin and J. M. Martínez, Large-scale active-set box-constrained optimization method with spectral projected gradients, Computational Optimization and Applications 23, pp. 101–125, 2002.

M. Andretta, E. G. Birgin and J. M. Martínez, Practical active-set Euclidian trust-region method with spectral projected gradients for bound-constrained minimization, Optimization 54, pp. 305–325, 2005.

GENCAN is now part of Algencan and is called whenever the problem has only bound constraints. Algencan also uses it to solve the inner subproblems of its augmented Lagrangian method when the problem has general constraints.
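
Schematically (the standard augmented Lagrangian, written here for equality constraints $c(x) = 0$ and bounds $l \le x \le u$; inequalities are handled analogously), each outer iteration asks GENCAN to approximately solve

$$
\min_{l \le x \le u} \; f(x) + \lambda^\top c(x) + \frac{\rho}{2}\,\lVert c(x) \rVert^2,
$$

with the multiplier estimate $\lambda$ and the penalty parameter $\rho$ updated between outer iterations.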

The website for the original Algencan library that we wrap in NLPModelsAlgencan.jl is

https://www.ime.usp.br/~egbirgin/tango/codes.php#algencan

Best,

Paulo

Thank you very much. I was struggling to find the documentation of the package. It turns out it is available in this directory: NLPModelsAlgencan.jl/docs/src at master · pjssilva/NLPModelsAlgencan.jl · GitHub

Have you considered including it in a wiki?

I guess the link is: Home · NLPModelsAlgencan.jl

A link to it is missing from the README.md file, though, @pjssilva.

Just added a simple docs badge. Thanks.