Consider this simple example:

```julia
import JuMP, Gurobi
model = JuMP.Model(Gurobi.Optimizer)
JuMP.@variable(model, x[1:3])
JuMP.@constraint(model, c1, x .<= 2)
JuMP.@constraint(model, c2, x .>= -2)
JuMP.@objective(model, Min, 0.4 * sum(x))
JuMP.optimize!(model); JuMP.assert_is_solved_and_feasible(model)
JuMP.value.(x)
JuMP.dual.(c1)
```
We know that JuMP can manipulate primal variables/expressions flexibly via `@expression`. For example, before `optimize!` I can build `@expression(model, expr, -x[3] - x[2])`, and after `optimize!` I can query it with `JuMP.value(expr)`. Note that building `expr` is a one-off: without this functionality, we would have to write the cumbersome `-value(x[3]) - value(x[2])` every time after an `optimize!`.
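For concreteness, here is the full primal-side pattern on the model above (just a small recap sketch):

```julia
JuMP.@expression(model, expr, -x[3] - x[2])  # built once, before optimize!
JuMP.optimize!(model)
JuMP.value(expr)  # queried after each solve; no need to retype -value(x[3]) - value(x[2])
```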
Now the question is: can I also enjoy a similar convenience with dual variables?
One thing I don't fathom is this: since I can call `JuMP.dual(c1[1])`, shouldn't `c1[1]` be deemed a dual variable? Yet it turns out to be a `JuMP.ConstraintRef`. This implies that we can also call `JuMP.value(c1[1])` after `optimize!`, which is somewhat unusual. (Do people really intend to do this? Is it common? I don't quite understand the point of this query.)
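As far as I can tell, `JuMP.value` on a `ConstraintRef` returns the primal value of the constraint's function, so on this model it would look like:

```julia
JuMP.optimize!(model)
JuMP.value(c1[1])  # primal value of the constraint function, here equal to JuMP.value(x[1])
JuMP.dual(c1[1])   # the dual (multiplier) attached to the same constraint
```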
The current behavior is that we can only query the raw dual values via `JuMP.dual` after `optimize!`, e.g. I can do

```julia
JuMP.optimize!(model)
c1v = -1 * JuMP.dual.(c1) # to make the resulting values positive
c2v = JuMP.dual.(c2)
cv = [c1v; c2v]           # store them in one compact array to carry around
```
The hope is: can we do this before `optimize!`? Something like

```julia
JuMP.@dual_expression(model, cv, [-c1; c2]) # setting this up is one-off
JuMP.optimize!(model)                       # we may do this multiple times
JuMP.dual.(cv)                              # this style is concise and desirable
```

As of JuMP v1.25, we cannot do this yet.
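For the time being, the closest workaround I can think of is to wrap the query in an ordinary Julia function once, so the cumbersome part is written only one time. This is just a sketch, not a JuMP feature, and the helper name `dual_cv` is mine:

```julia
# Hand-rolled stand-in for the hypothetical @dual_expression above:
# define the "dual expression" once as a function and call it after every solve.
dual_cv() = [-JuMP.dual.(c1); JuMP.dual.(c2)]

JuMP.optimize!(model)  # may be repeated as many times as needed
cv = dual_cv()         # concise query after each solve, in the same spirit as JuMP.dual.(cv)
```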