CasADi has lam_x and lam_g outputs. I can't find what these are called in Ipopt, or I'd try digging into the Ipopt.jl code to see where they come from. I haven't had any luck with the jump-dev or Ipopt documentation or web searches…
JuMP calls these "dual" solutions: Solutions · JuMP. Up to a sign convention (see the note below), lam_g corresponds to calling dual on a constraint reference, and lam_x to calling dual on the variable-bound references (LowerBoundRef / UpperBoundRef):
julia> using JuMP
julia> using Ipopt
julia> model = Model(Ipopt.Optimizer)
A JuMP Model
├ solver: Ipopt
├ objective_sense: FEASIBILITY_SENSE
├ num_variables: 0
├ num_constraints: 0
└ Names registered in the model: none
julia> @variable(model, x >= 0)
x
julia> @variable(model, 0 <= y <= 3)
y
julia> @objective(model, Min, 12x + 20y)
12 x + 20 y
julia> @constraint(model, c1, 6x + 8y >= 100)
c1 : 6 x + 8 y ≥ 100
julia> @constraint(model, c2, 7x + 12y >= 120)
c2 : 7 x + 12 y ≥ 120
julia> optimize!(model)
This is Ipopt version 3.14.16, running with linear solver MUMPS 5.7.3.
Number of nonzeros in equality constraint Jacobian...:        0
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:        0

Total number of variables............................:        2
                     variables with only lower bounds:        1
                variables with lower and upper bounds:        1
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:        2
        inequality constraints with only lower bounds:        2
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  3.1999968e-01 1.20e+02 7.60e-01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  4.7893512e+00 1.17e+02 1.42e+01  -1.0 3.44e+01    -  1.82e-03 2.88e-02h  1
   2  1.9573531e+01 1.08e+02 1.32e+01  -1.0 1.19e+01    -  9.92e-03 7.08e-02h  1
   3  1.3373464e+02 4.17e+01 5.40e+00  -1.0 1.43e+01    -  1.00e+00 6.18e-01h  1
   4  2.0522904e+02 0.00e+00 2.09e-02  -1.0 5.55e+00    -  7.19e-01 1.00e+00h  1
   5  2.0503895e+02 0.00e+00 2.00e-07  -1.7 4.52e-01    -  1.00e+00 1.00e+00f  1
   6  2.0500506e+02 0.00e+00 2.83e-08  -2.5 6.69e-02    -  1.00e+00 1.00e+00f  1
   7  2.0500030e+02 0.00e+00 1.50e-09  -3.8 7.95e-03    -  1.00e+00 1.00e+00f  1
   8  2.0500000e+02 0.00e+00 1.84e-11  -5.7 5.95e-04    -  1.00e+00 1.00e+00f  1
   9  2.0500000e+02 0.00e+00 2.71e-14  -8.6 7.38e-06    -  1.00e+00 1.00e+00f  1
Number of Iterations....: 9
                                   (scaled)                 (unscaled)
Objective...............:   2.0499999795501228e+02    2.0499999795501228e+02
Dual infeasibility......:   2.7149984354261548e-14    2.7149984354261548e-14
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Variable bound violation:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   2.5088486144185382e-09    2.5088486144185382e-09
Overall NLP error.......:   2.5088486144185382e-09    2.5088486144185382e-09

Number of objective function evaluations             = 10
Number of objective gradient evaluations             = 10
Number of equality constraint evaluations            = 0
Number of inequality constraint evaluations          = 10
Number of equality constraint Jacobian evaluations   = 0
Number of inequality constraint Jacobian evaluations = 1
Number of Lagrangian Hessian evaluations             = 1
Total seconds in IPOPT                               = 0.001
EXIT: Optimal Solution Found.
julia> dual(c1)
0.25000000012313955
julia> dual(c2)
1.499999999870584
julia> dual(LowerBoundRef(x))
1.6710061615730542e-10
julia> dual(LowerBoundRef(y))
5.678764080540814e-10
julia> dual(UpperBoundRef(y))
0.0
Note that there may be a factor of -1 in there, depending on the convention you expect. JuMP's convention is: the dual of a >= constraint is non-negative, and the dual of a <= constraint is non-positive.
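If you want the classical sensitivity sign instead, JuMP also provides shadow_price, which reports the change in the objective from an infinitesimal relaxation of the constraint; for the model above it should agree with dual up to sign:

shadow_price(c1)    # same magnitude as dual(c1); the sign may differ from dual's convention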
With any optimization software, you can always recover the Lagrange multipliers / dual variables from the objective and constraint gradients at the optimum just by solving a linear system (from the KKT conditions).
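For the model above, both inequality constraints are active at the optimum, so the stationarity condition ∇f(x*) = J(x*)ᵀλ pins down the multipliers. A minimal sketch (the names ∇f, J, and λ are just illustrative):

using LinearAlgebra

∇f = [12.0, 20.0]    # gradient of the objective 12x + 20y
J  = [6.0  8.0;      # row 1: gradient of c1, 6x + 8y >= 100
      7.0 12.0]      # row 2: gradient of c2, 7x + 12y >= 120
λ  = J' \ ∇f         # solve the stationarity system Jᵀλ = ∇f
# λ ≈ [0.25, 1.5], matching dual(c1) and dual(c2) above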
Thanks, that makes perfect sense in retrospect; I just hadn't hit on that search term.