where \lambda and \nu are known as the dual variables of the inequality and equality constraints of the primal problem, and \lambda is required to be nonnegative, i.e., \lambda \succeq 0.

In JuMP I see there are three related functions to query the value of the dual variables after the primal model is solved: dual, shadow_price and reduced_cost. But after reading the documentation, I’m still not quite sure about the difference between them, especially the sign issue.

What I most want to confirm is the exact way to get the values of \lambda (the dual variable on the inequality constraints) and \nu (the dual variable on the equality constraints).

the sign of JuMP.dual(constraint) does not depend on whether we are minimizing or maximizing

the sign of JuMP.dual(constraint) is non-negative for \ge and non-positive for \le

In more detail:

the dual of a \le constraint is non-positive, because f(x) \le y is rewritten to f(x) - y \in \mathbb{R}_- (the MOI.Nonpositives cone), and the dual cone of MOI.Nonpositives is MOI.Nonpositives

the dual of a \ge constraint is non-negative, because f(x) \ge y is rewritten to f(x) - y \in \mathbb{R}_+ (the MOI.Nonnegatives cone), and the dual cone of MOI.Nonnegatives is MOI.Nonnegatives.

the dual of a = constraint is free, because f(x) = y is rewritten to f(x) - y \in \{0\} (the MOI.Zeros cone), and the dual of the MOI.Zeros cone is MOI.Reals.

Notably, for a \le constraint this is the opposite of your statement that \lambda is required to be nonnegative, so to convert to your assumed Lagrangian you need to flip the sign, i.e., \lambda = -dual(c).
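Here is a minimal sketch of those conventions on a toy LP, assuming the HiGHS solver is installed (any LP solver that supports duals would do). The constraint names and objective are made up for illustration:

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@variable(model, z)
@constraint(model, c_le, x <= 2)   # rewritten as x - 2 in MOI.Nonpositives
@constraint(model, c_ge, y >= 1)   # rewritten as y - 1 in MOI.Nonnegatives
@constraint(model, c_eq, z == 3)   # rewritten as z - 3 in MOI.Zeros
@objective(model, Min, y + z - x)
optimize!(model)

dual(c_le)   # -1.0: non-positive for a <= constraint
dual(c_ge)   #  1.0: non-negative for a >= constraint
dual(c_eq)   #  1.0: free for a == constraint
λ = -dual(c_le)   # flip the sign to match a Lagrangian with λ ⪰ 0
ν = dual(c_eq)    # the equality dual can be used as-is
```

For a minimization problem, `dual` here agrees with the derivative of the objective with respect to the right-hand side, which is why the ≤ dual comes out non-positive.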

shadow_price and reduced_cost are mostly useful for people coming from a background in linear programming. There, shadow_price is the change in the objective value as the constraint is relaxed, so its sign depends on whether we are maximizing or minimizing; reduced_cost is the analogous quantity for a variable's bounds.
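The contrast can be seen on a one-variable example; again a hedged sketch assuming HiGHS is installed:

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@constraint(model, con, x <= 1)

@objective(model, Max, 2x)
optimize!(model)
dual(con)          # -2.0: conic convention, non-positive regardless of sense
shadow_price(con)  #  2.0: relaxing x <= 1 increases the Max objective

@objective(model, Min, -2x)
optimize!(model)
dual(con)          # -2.0: same sign convention as before
shadow_price(con)  # -2.0: relaxing the constraint decreases the Min objective
```

So `dual` is sense-independent (it follows the conic-duality sign rules above), while `shadow_price` is the LP-textbook quantity whose sign flips with the optimization sense.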