Lagrange multipliers of a `VectorNonlinearOracle` after solve

I have a nonlinear optimization problem that relies on MOI.VectorNonlinearOracle for the constraints. After solving, I would like to compute the Hessian of the Lagrangian at the optimum, or, in other words, fetch the ret value at the optimum from the function eval_hessian_lagrangian(ret, x, μ). For that I need the optimal Lagrange multiplier μ. I think JuMP.dual is intended for this?

For Ipopt.jl, why does dual called on the VectorNonlinearOracle constraint return a vector whose length equals the number of decision variables? Shouldn’t it have as many elements as there are nonlinear constraints in the set, if it’s truly the vector of Lagrange multipliers μ?

Here’s a MRE with two decision variables and one nonlinear constraint:

using JuMP, Ipopt, MathOptInterface
set = MathOptInterface.VectorNonlinearOracle(;
    dimension = 2,
    l = [-Inf],
    u = [1.0],
    eval_f = (ret, x) -> (ret[1] = x[1]^2 + x[2]^2),
    jacobian_structure = [(1, 1), (1, 2)],
    eval_jacobian = (ret, x) -> ret .= 2.0 .* x,
    hessian_lagrangian_structure = [(1, 1), (2, 2)],
    eval_hessian_lagrangian = (ret, x, u) -> ret .= 2.0 .* u[1],
)
model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@objective(model, Max, x + y)
@constraint(model, c, [x, y] in set)
optimize!(model)
@show value(x), value(y)
@show dual(c)

giving a dual vector with 2 elements instead of the expected 1 element:

(value(x), value(y)) = (0.707106783471979, 0.707106783471979)
dual(c) = [-0.9999999949684055, -0.9999999949684055]
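For context, the KKT multiplier for this MRE can be worked out by hand, which is why I expected a 1-element dual. A quick sketch (my own, using the stationarity condition ∇f = μ ∇g; sign conventions aside):

```julia
# max x + y  s.t.  x^2 + y^2 <= 1  has its optimum at x = y = 1/sqrt(2).
# Stationarity: ∇(x + y) = μ * ∇(x^2 + y^2)  ⇒  1 = μ * 2x  ⇒  μ = 1 / (2x)
x_opt = 1 / sqrt(2)
μ_expected = 1 / (2 * x_opt)
@show μ_expected  # ≈ 0.7071, a single scalar, not a 2-element vector
```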

The dual we return from VectorNonlinearOracle is \mu^\top J(x).

Our general convention has been that the dual has the same dimension as the input. The input here is the 2-element vector [x, y], so \mu^\top J(x) seemed like the most relevant “dual” to return from the MOI point of view.
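Assuming the returned dual is indeed d = J(x)^\top \mu, you can recover μ itself from it with a least-squares solve, since J(x) is known from your oracle. A sketch for the MRE above (the optimum and dual values are copied from the printed output; the recovery step is my own):

```julia
using LinearAlgebra

# Values from the MRE at the optimum:
x_opt = [0.707106783471979, 0.707106783471979]
d = [-0.9999999949684055, -0.9999999949684055]  # dual(c)

# Jacobian of the single constraint x[1]^2 + x[2]^2, a 1×2 matrix:
J = [2 * x_opt[1]  2 * x_opt[2]]

# Recover μ from J(x)' * μ ≈ d via least squares:
μ = J' \ d
@show μ  # 1-element vector, ≈ -0.7071, matching 1/(2x) up to sign convention
```

The recovered μ is what you would then pass back into eval_hessian_lagrangian to evaluate the Hessian of the Lagrangian at the optimum.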

I’ll open an issue to discuss this further: The dual of VectorNonlinearOracle · Issue #2887 · jump-dev/MathOptInterface.jl · GitHub
