Example
Here is a simple example. Suppose the primal problem is the following (x and y are decisions):
@objective(model, Min, x + y)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, x == y)
where the optimal objective value is 0, attained at x = y = 0.
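As a quick sanity check (a plain-Python sketch, independent of JuMP): the constraint x == y forces both decisions into [0, 2] ∩ [-1, 1] = [0, 1], so the objective reduces to 2x, and a coarse grid search confirms the optimum:

```python
# Under x == y, feasibility means x = y in [0, 1] and the objective x + y equals 2x.
xs = [i / 1000 for i in range(1001)]   # grid over the reduced feasible set [0, 1]
obj_star = min(x + x for x in xs)      # minimize x + y with y = x
print(obj_star)  # 0.0, attained at x = y = 0
```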
Now suppose we introduce one more free decision z and reformulate the problem as
@objective(model, Min, x + y)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, a, z == x)
@constraint(model, b, z == y)
Then the Lagrangian is x + y + a * x + b * y - (a + b) * z, in which a and b are the Lagrange multipliers. Note that we must enforce the constraint a + b == 0 at the outer layer, for otherwise the inner min-program (in which z is free) diverges to -Inf, which is undesired (we want a finite Max-Min outcome).
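Spelled out, dualizing the two coupling constraints with multipliers a (on x - z) and b (on y - z) gives:

```latex
L(x, y, z; a, b) = x + y + a\,(x - z) + b\,(y - z)
                 = x + y + a\,x + b\,y - (a + b)\,z .
```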
Therefore, by weak duality, we can obtain a lower bound on the original optimal value via this Max-Min problem:
Max_{a} Min_{x ∈ [0, 2], y ∈ [-1, 1]} x + y + a * (x - y)
We notice that a = 1 is a feasible solution, at which the valid lower bound 0 is derived, already tight to the original optimal value.
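This can be checked directly: for fixed a the inner minimum factorizes, since x + y + a * (x - y) = (1 + a) * x + (1 - a) * y, and each one-dimensional minimum over a box has a closed form. A plain-Python sketch (the dual-function name g is ours, not from the source):

```python
# Dual function of the Max-Min problem: the inner min splits across x and y because
# x + y + a*(x - y) = (1 + a)*x + (1 - a)*y.
def g(a):
    inner_x = min(0.0, 2 * (1 + a))  # min of (1 + a)*x over x in [0, 2]
    inner_y = -abs(1 - a)            # min of (1 - a)*y over y in [-1, 1]
    return inner_x + inner_y

print(g(1.0))  # 0.0 -> the bound at a = 1 is tight
print(max(g(i / 100) for i in range(-300, 301)))  # 0.0 -> no better bound on a scan of [-3, 3]
```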
PS: An alternative view is that the original problem can also be reformulated as
@objective(model, Min, 2 * z)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, a, z == x)
@constraint(model, b, z == y)
In view of this, the final Max-Min problem becomes
Max_{a, b: a + b == 2} Min_{x ∈ [0, 2], y ∈ [-1, 1]} a * x + b * y
Notice that the inner objective is a pure linear expression in (a, b).
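The same closed-form evaluation applies here: eliminating b via b = 2 - a, the inner minimum of a * x + b * y again splits across the two boxes. A plain-Python sketch (h is our name for this dual function):

```python
# Dual function of the alternative Max-Min: with b = 2 - a, the inner min of
# a*x + b*y over x in [0, 2], y in [-1, 1] evaluates in closed form.
def h(a):
    b = 2 - a
    return min(0.0, 2 * a) - abs(b)  # min over x plus min over y

print(h(2.0))  # 0.0 -> (a, b) = (2, 0) recovers the tight bound
print(max(h(i / 100) for i in range(-300, 301)))  # 0.0 on a scan of a in [-3, 3]
```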