Lagrangian Function

Note that, whatever optimization algorithm you use, you can easily compute the Lagrange multipliers yourself once you have the optimum: the KKT stationarity condition is just a system of linear equations in the multipliers, given the gradients of the objective and constraints at the optimum, which you can solve with a backslash operation in Julia.

For example, if you are minimizing f_0(x) subject to h_i(x)=0 for i=1\ldots p, and your optimization algorithm gives you an approximate minimum x^*, then to find the Lagrange multipliers \nu_i for your constraints you would solve

\nu = \begin{bmatrix} \nabla h_1 & \cdots & \nabla h_p \end{bmatrix} \setminus \left( -\nabla f_0 \right)

where the matrix's columns are the constraint gradients (evaluated at x^*) and \ denotes the Julia backslash operation. Assuming you have fewer constraints than variables x, this system is overdetermined, and backslash will solve it in the least-squares sense. (This is necessary since x^* will only be an approximate minimum, so the equations won't hold exactly.)
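
For concreteness, here is a minimal sketch in Julia (the objective, constraint, and approximate minimum below are made up for illustration, not taken from any particular problem above):

```julia
using LinearAlgebra

# Made-up example: minimize f₀(x) = x'x subject to h₁(x) = x₁ + x₂ - 1 = 0.
# The exact solution is x* = [0.5, 0.5] with ν₁ = -1, but suppose a solver
# only returned an approximate minimum:
xstar = [0.50001, 0.49998]

∇f₀(x) = 2x                  # gradient of the objective
∇h₁(x) = [1.0, 1.0]          # gradient of the (single) equality constraint

# Columns of A are the constraint gradients at x*; backslash then solves
# A ν ≈ -∇f₀(x*) in the least-squares sense:
A = hcat(∇h₁(xstar))         # n×p matrix (here 2×1)
ν = A \ (-∇f₀(xstar))        # ≈ [-1.0]
```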

If you have inequality constraints, it is the same thing except that you need only include active constraints in your linear system; the Lagrange multipliers of inactive constraints are zero (“complementary slackness”).
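
A sketch of the same idea with inequality constraints (again with a made-up problem and a made-up activity tolerance): keep only the constraints that are numerically active at x^*, solve for their multipliers, and set the rest to zero.

```julia
# Made-up example with inequalities: minimize (x₁ - 2)² + (x₂ - 2)²
# subject to g₁(x) = x₁ + x₂ - 2 ≤ 0 and g₂(x) = -x₁ ≤ 0.
# At the true optimum x* = [1, 1], g₁ is active and g₂ is inactive.
xstar = [1.00002, 0.99999]                 # approximate minimum from a solver

∇f₀(x) = [2(x[1] - 2), 2(x[2] - 2)]
g  = [x -> x[1] + x[2] - 2,  x -> -x[1]]   # constraint functions gᵢ
∇g = [x -> [1.0, 1.0],       x -> [-1.0, 0.0]]

tol = 1e-3                                 # activity tolerance (problem-dependent)
active = [i for i in eachindex(g) if abs(g[i](xstar)) ≤ tol]

μ = zeros(length(g))                       # inactive constraints get μᵢ = 0
if !isempty(active)
    Aact = hcat((∇g[i](xstar) for i in active)...)
    μ[active] = Aact \ (-∇f₀(xstar))       # least-squares solve, as above
end
# μ ≈ [2.0, 0.0]
```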