Does anyone have experience with Augmented Lagrangian methods in Julia?

Most of the posts I can find seem to be old. My code is currently in R and I'm thinking about porting it to Julia. Does it work well?

This question is very vague; are you talking about some particular optimization framework in Julia?
Augmented Lagrangian is a very general method that can be applied to more or less any constrained problem. Possibly you don't even have to port your code: you might be able to use a constrained solver that already exists in one of Julia's offerings for constrained optimization.

2 Likes

In the past I used NLopt for basic optimization problems, which is why it would be my preferred choice right now. I left the question open because I'd like to hear about others' experience in Julia. The porting is no problem; that's done within a couple of hours, or maybe even minutes, depending on what needs to be ported. I have a nonlinear function that is to be minimized subject to inequality constraints. Unfortunately I'm not (yet?) deep enough into optimization to specify the question more precisely.
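For the record, the basic augmented Lagrangian loop for that kind of problem really is short. Here is a minimal pure-Julia sketch on a toy inequality-constrained problem; this is an illustration of the general method, not code from any package, and the objective, constraint, step size, and iteration counts are all made up for the example:

```julia
# Toy problem: minimize f(x) = (x1-2)^2 + (x2-1)^2
#              subject to c(x) = x1 + x2 - 2 <= 0
# (The minimizer is the projection of (2,1) onto the half-space: (1.5, 0.5).)
f(x)  = (x[1] - 2)^2 + (x[2] - 1)^2
∇f(x) = [2(x[1] - 2), 2(x[2] - 1)]
c(x)  = x[1] + x[2] - 2
∇c(x) = [1.0, 1.0]

function auglag(x; ρ = 10.0, λ = 0.0, outer = 20, inner = 500, η = 0.05)
    for _ in 1:outer
        # Inner loop: rough gradient descent on the augmented Lagrangian
        #   L(x, λ) = f(x) + (ρ/2) * max(0, c(x) + λ/ρ)^2   (constant term dropped)
        for _ in 1:inner
            g = ∇f(x) .+ ρ * max(0.0, c(x) + λ/ρ) .* ∇c(x)
            x = x .- η .* g
        end
        # Outer loop: multiplier update for the inequality constraint
        λ = max(0.0, λ + ρ * c(x))
    end
    return x, λ
end

x, λ = auglag([0.0, 0.0])
```

A real solver would of course replace the fixed-step gradient descent with a proper unconstrained subsolver and add stopping tests, but the outer structure is exactly this.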

I have a student working on Augmented Lagrangian as part of her Master's: https://github.com/CiDAMO/AugLag.jl. It is not released, and will eventually be part of JuliaSmoothOptimizers, although it may take a couple of months before she finishes it.

5 Likes

There's also a Julia interface to NLopt, https://github.com/JuliaOpt/NLopt.jl,
so the porting should be very straightforward if you make use of it.

2 Likes

Out of curiosity, why Augmented Lagrangian and not ADMM?

1 Like

The code looks great so far!

I used that in the past for optimization without constraints! The API is really great. Thanks!

I am not sure about Augmented Lagrangian, and I was not aware of ADMM. Why would you prefer ADMM? Which package would you use: ProximalOperators, COSMO, or another one?

Is ADMM really applicable to general constrained problems the way the augmented Lagrangian method is?

1 Like

I would say it is.
ADMM, once we set aside the convex context (which gives it a nice structure via proximal operators), is just the idea of variable splitting applied to ALM.

Since I think there are many ADMM implementations, and even if not, it is usually only a few lines of code, I thought it might be a good thing to look into.
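To illustrate the "few lines" claim, here is a hypothetical ADMM sketch for the lasso problem, minimize (1/2)‖Ax − b‖² + γ‖z‖₁ with the splitting x = z. None of this comes from an existing package; ρ and the iteration count are arbitrary choices for the example:

```julia
using LinearAlgebra

# Proximal operator of t*|·|: elementwise soft-thresholding
soft(v, t) = sign(v) * max(abs(v) - t, 0.0)

function admm_lasso(A, b, γ; ρ = 1.0, iters = 200)
    n = size(A, 2)
    x = zeros(n); z = zeros(n); u = zeros(n)    # u is the scaled dual variable
    F = cholesky(Symmetric(A'A + ρ*I))          # factor once, reuse every iteration
    for _ in 1:iters
        x = F \ (A'b .+ ρ .* (z .- u))          # x-update: quadratic subproblem
        z = soft.(x .+ u, γ/ρ)                  # z-update: cheap closed-form prox
        u = u .+ x .- z                         # dual (multiplier) update
    end
    return z
end
```

With `A = I` this reduces to plain soft-thresholding of `b`, which is a handy sanity check.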

Well, you must also be able to perform both prox operations efficiently. Sure, you can probably run the algorithm, but if you remove the convexity and the efficient prox, there is not much left of the attractiveness of ADMM.
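For readers unfamiliar with the terminology: an "efficient prox" means the proximal mapping prox_{tg}(v) = argmin_x g(x) + (1/2t)‖x − v‖² has a closed form or is cheap to evaluate. Two classic examples, sketched purely for illustration (not taken from any package):

```julia
# g = ||·||_1  →  elementwise soft-thresholding
prox_l1(v, t) = sign.(v) .* max.(abs.(v) .- t, 0.0)

# g = indicator of the box [lo, hi]  →  elementwise projection (clamping)
prox_box(v, lo, hi) = clamp.(v, lo, hi)
```

When g has no such closed form, each prox evaluation is itself an optimization problem, and the per-iteration cost advantage of ADMM disappears.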

Unless each subproblem, thanks to the splitting, is much easier to solve,
which in the context of image processing / signal processing is often the case.

Moreover, ADMM has much better convergence properties, even in nonconvex cases.

See Wotao Yin (UCLA Math), "Nonconvex ADMM: Convergence and Applications."

Anyhow, I just wanted to point out that ADMM, as a special case of ALM, might also be an alternative.

3 Likes