Getting Oriented

The introduction to the optimization topic on Discourse refers to http://www.juliaopt.org/, but that site declares itself dead and obsolete and points to https://jump.dev/. The latter, however, does not appear to be a home base for Julia optimizers in general, only for those involved in JuMP, though it does have pointers to some other sites.

In particular `Optim`, which I have been using, doesn’t seem to be there.

So where’s the best place to get an overview of the alternatives?

I’ve been using `Optim` since it allows me to take advantage of auto-differentiation. But I have a new problem that adds linear equality constraints to the old problem (smooth nonlinear optimization), and I am wondering what the best way to approach that is.


The package list on that site has many alternatives: Packages

Probably that got lost when JuMP got more attention.

See this post for some of the background on the JuliaOpt/JuMP-dev split:

We point to other options on jump.dev:

I will update the introduction for this category.

Here’s a JuMP NLP example: https://jump.dev/JuMP.jl/stable/examples/mle/. Ipopt is an excellent solver for constrained (convex) nonlinear problems.
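For the shape of problem in the original post (a smooth nonlinear objective plus linear equality constraints), a minimal JuMP + Ipopt sketch might look like the following. The constraint matrix `C` and the objective here are made up purely for illustration:

```julia
using JuMP, Ipopt

# Made-up problem data: minimize a smooth objective over b subject to C*b = 0.
C = [1.0 1.0 1.0]                # q = 1 constraint on p = 3 parameters
p = size(C, 2)

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, b[1:p], start = 0.1)
@constraint(model, C * b .== 0)  # linear equality constraints
@NLobjective(model, Min, sum(exp(b[i]) - b[i] for i in 1:p))
optimize!(model)
value.(b)
```

JuMP handles the differentiation of the nonlinear objective itself, and the linear constraints are passed to Ipopt directly rather than being eliminated by hand.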

Edit: I updated the intro: About the Optimization (Mathematical) category


It sounds like those constraints will define a manifold. Can you use the manifolds feature of Optim, define the `project_tangent!` method to be an orthogonal projector, and define `retract!` in terms of the same projector? I suspect there is a simple way to implement that with QR factorization, but I’d have to think about the details.
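A sketch of that idea, assuming Optim’s `Manifold` interface (`retract!` and `project_tangent!`) and a hypothetical `FlatManifold` type for the affine subspace `{b : C*b = 0}`. The orthogonal projector is built once from a QR factorization of `C'`:

```julia
using LinearAlgebra, Optim

# Hypothetical manifold for the linear-equality set {b : C*b = 0}.
# P is the orthogonal projector onto null(C).
struct FlatManifold{T} <: Optim.Manifold
    P::Matrix{T}
end

function flat_manifold(C)
    q, p = size(C)
    Q = qr(C').Q * Matrix(I, p, p)   # materialize the full p×p orthogonal factor
    N = Q[:, (q + 1):p]              # columns span null(C)
    FlatManifold(N * N')             # projector onto the null space
end

# For a flat (affine) manifold, retraction and tangent projection
# are the same orthogonal projection.
Optim.retract!(M::FlatManifold, x) = copyto!(x, M.P * x)
Optim.project_tangent!(M::FlatManifold, g, x) = copyto!(g, M.P * g)
```

One would then pass this via something like `ConjugateGradient(manifold = flat_manifold(C))`, though I’d double-check the exact interface against the current Optim docs.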


Thank you for the suggestion. I don’t think manifolds are quite what I need, which is just to pick the optimal value of a vector b subject to Cb = 0 for a given constraint matrix C.

If there are q constraints and p parameters, then C is q × p, and the problem can be turned into an unconstrained optimization over p − q parameters using a QR decomposition of C^T to move between the constrained and unconstrained spaces. It would be nice not to have to do this by hand, though, since I’d also need to apply the transform to the Hessian to get the covariance matrix; the optimization is a conditional maximum likelihood estimation.
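A minimal sketch of that reparametrization, with a made-up C. The last p − q columns of the full QR factor of C^T give an orthonormal basis N of null(C), and b = Nz moves the unconstrained coordinates z back to the constrained space:

```julia
using LinearAlgebra

# Made-up constraint matrix: q = 1 constraint on p = 3 parameters.
C = [1.0 1.0 1.0]
q, p = size(C)

# Full QR of C': the last p - q columns of Q span the null space of C.
Q = qr(C').Q * Matrix(I, p, p)   # materialize the full p×p orthogonal factor
N = Q[:, (q + 1):p]              # p × (p - q) null-space basis, C*N ≈ 0

# Any b = N*z is feasible; z = N'*b recovers the unconstrained coordinates.
z = randn(p - q)
b = N * z
@assert maximum(abs.(C * b)) < 1e-10

# For the covariance: if H is the Hessian of the negative log-likelihood in b,
# the reduced Hessian is N' * H * N, and Cov(b̂) ≈ N * inv(N' * H * N) * N'.
```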

The problem could also be approached with Lagrange multipliers, but that seems less desirable computationally.


Thanks for the pointer to the MLE example in JuMP. That looks like what I’m trying to do, and the JuMP docs indicate it does auto-differentiation. I do need to provide a custom nonlinear objective function, but apparently that’s supported.


Yes, see here: Nonlinear Modeling · JuMP
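A sketch of registering a custom objective, using the `register` function from JuMP’s (older) nonlinear interface with `autodiff = true`; the objective function here is made up for illustration:

```julia
using JuMP, Ipopt

# Made-up custom objective; register exposes it to JuMP's autodiff machinery.
my_loss(a, b) = (a - 1)^2 + (b - 2)^2

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
register(model, :my_loss, 2, my_loss; autodiff = true)
@NLobjective(model, Min, my_loss(x, y))
optimize!(model)
```

Newer JuMP versions spell this differently (user-defined operators), so it’s worth checking the linked docs for the syntax matching your installed version.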