Write the Lagrangian relaxation with JuMP concisely

I discovered a new syntax for writing the Lagrangian relaxation. It looks elegant, but will it also be efficient?

# The following two constraints are to be relaxed (and penalized in the objective)
# via Lagrangian multipliers `a` and `b`, respectively
JuMP.@constraint(m, a[i = 1:N, j = 1:N], x[i, j] == X[i]);
JuMP.@constraint(m, b[i = 1:N, j = 1:N], y[i, j] == Y[j]);
JuMP.@objective(m, Min, # the following will be the obj_expr upon relaxation
    dot(C, z) + # this is the primal objective expression, which you can neglect here
    dot(X, sum(a; dims = 2)) - dot(x, a) +
    dot(Y, sum(b; dims = 1)) - dot(y, b)
);

This isn’t valid syntax because `a` and `b` are constraint references.

Can you provide a reproducible example?

More generally: if you’re wondering whether something is efficient, benchmark it. If it works well enough for you, go for it.
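For what it’s worth, the conventional approach is to keep the multipliers as plain numeric arrays and build the penalized objective from them. Here is a minimal sketch; N, C, X, Y, λa, and λb are dummy placeholders, not taken from your model:

using JuMP, LinearAlgebra

# Minimal sketch: the multipliers λa, λb and the outer-layer values X, Y enter
# as plain numbers, so the relaxed objective is an ordinary affine expression.
N  = 3
C  = rand(N, N)
X  = rand(N); Y = rand(N)
λa = zeros(N, N)  # multiplier for the relaxed constraint x[i, j] == X[i]
λb = zeros(N, N)  # multiplier for the relaxed constraint y[i, j] == Y[j]

m = JuMP.Model()
JuMP.@variable(m, x[1:N, 1:N])
JuMP.@variable(m, y[1:N, 1:N])
JuMP.@variable(m, z[1:N, 1:N])

# primal cost plus multiplier times the violation of each relaxed constraint
JuMP.@objective(m, Min,
    dot(C, z) +
    sum(λa[i, j] * (X[i] - x[i, j]) for i in 1:N, j in 1:N) +
    sum(λb[i, j] * (Y[j] - y[i, j]) for i in 1:N, j in 1:N)
)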


I rethought this process and concluded that the term dot(X, sum(a; dims = 2)) in my original post was needless, because it won’t appear in practice (it belongs to the outer layer). See the details below.

Example

Here is a simple example:
suppose the primal problem is as follows (x and y are decisions):

@objective(model, Min, x + y)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, x == y)

where the optimal objective value is 0, attained at x = y = 0.
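For completeness, this toy primal can be checked directly (the solver HiGHS here is just one choice):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@objective(model, Min, x + y)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, x == y)
optimize!(model)
objective_value(model)  # ≈ 0.0, attained at x = y = 0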
Now suppose we introduce one more free decision variable z and reformulate the problem as

@objective(model, Min, x + y)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, a, z == x)
@constraint(model, b, z == y)

Relaxing z == x and z == y with multipliers a and b gives the Lagrangian x + y + a * (x - z) + b * (y - z) = x + y + a * x + b * y - (a + b) * z.
Since z is free, we must enforce a + b == 0 at the outer layer; otherwise the inner min-program diverges to -Inf, which is useless for the Max-Min bound we are after.
Therefore, by weak duality, we can obtain a lower bound on the original problem from this Max-Min problem (substituting b = -a):

Max_{a} Min_{x ∈ [0, 2], y ∈ [-1, 1]} x + y + a * (x - y)

We notice that a = 1 yields a valid lower bound of 0, which is already tight to the original optimal value.
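As a sanity check, the inner minimization has a simple closed form, so the dual function can be evaluated directly (the name g and the grid below are just for illustration):

# g(a) = min over x ∈ [0, 2], y ∈ [-1, 1] of x + y + a * (x - y)
#   min_{x ∈ [0, 2]}  (1 + a) * x = 2 * min(1 + a, 0)
#   min_{y ∈ [-1, 1]} (1 - a) * y = -abs(1 - a)
g(a) = 2 * min(1 + a, 0) - abs(1 - a)

g(1.0)                  # 0.0, the tight lower bound
maximum(g, -3:0.01:3)   # ≈ 0.0, attained at a = 1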

PS
An alternative view is that the original problem can also be reformulated as

@objective(model, Min, 2 * z)
@constraint(model, 0 <= x <= 2)
@constraint(model, -1 <= y <= 1)
@constraint(model, a, z == x)
@constraint(model, b, z == y)

In view of this, the final Max-Min problem becomes

Max_{a, b: a + b == 2} Min_{x ∈ [0, 2], y ∈ [-1, 1]} a * x + b * y

Notice that the inner objective is a pure linear expression in the multipliers (a, b), with no multiplier-free term (unlike the x + y in the first formulation).
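The same closed-form check applies here after eliminating b = 2 - a (again, g2 is only an illustrative name):

# with b = 2 - a:
#   min_{x ∈ [0, 2]}  a * x       = 2 * min(a, 0)
#   min_{y ∈ [-1, 1]} (2 - a) * y = -abs(2 - a)
g2(a) = 2 * min(a, 0) - abs(2 - a)

maximum(g2, -3:0.01:5)  # ≈ 0.0, attained at a = 2 (i.e. b = 0)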