Can we define a summation indexed by other variables?

Hi guys!

For a network problem with N nodes, P people, and some binary variables:

x(i, j, p)  # p is a person; i, j are network nodes
y(i, p)     # p is a person; i is a network node
z(i)        # i is a network node

I would like to define a set similar to passed_nodes(p) = {i in N : x(i,j,p) = 1} and then use these sets in constraints (on a summation) like:

@constraint(model, [i in N, p in P],
            sum(y[j, p] for j in passed_nodes(p)) >= z[i])

Is such a thing possible? JuMP would have to assign a value to x and then use it right away, and I cannot find a way to code this. So I was wondering whether it is even a valid statement.

The short answer is no, because such a thing isn’t allowed by mathematical optimization solvers.

The longer answer is that there is probably a reformulation into a linear program.

One option that comes to mind, assuming I understand your problem correctly, is:

using JuMP

model = Model()
# N = number of nodes, P = number of people (assumed to be defined elsewhere)
@variable(model, x[i=1:N, j=1:N, p=1:P], Bin)
@variable(model, y[i=1:N, p=1:P], Bin)
@variable(model, z[i=1:N], Bin)
# What you want to write, but cannot, because the sum depends on the value of x:
# @constraint(model, [i in 1:N, p in 1:P],
#             sum(y[j, p] for j in passed_nodes(p)) >= z[i])
# @constraint(model, [i in 1:N, p in 1:P],
#             sum(y[j, p] for j in 1:N if x[i, j, p] == 1) >= z[i])
# The same constraint written as a (quadratic) product of binaries:
# @constraint(model, [i in 1:N, p in 1:P],
#             sum(x[i, j, p] * y[j, p] for j in 1:N) >= z[i])
# Linearize the product with an auxiliary variable xy[i,j,p] = x[i,j,p] * y[j,p]:
@variable(model, xy[i=1:N, j=1:N, p=1:P], Bin)
@constraint(model, [i=1:N, j=1:N, p=1:P], xy[i,j,p] <= x[i,j,p])
@constraint(model, [i=1:N, j=1:N, p=1:P], xy[i,j,p] <= y[j,p])
@constraint(model, [i=1:N, j=1:N, p=1:P], xy[i,j,p] >= x[i,j,p] + y[j,p] - 1)
# Now substitute in the xy variable:
@constraint(model, [i in 1:N, p in 1:P], sum(xy[i, j, p] for j in 1:N) >= z[i])
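As a quick sanity check (a small standalone sketch, not part of the model), you can enumerate the binary combinations and confirm that the three inequalities admit only xy = x * y:

# For each binary (x, y) pair, the only xy in {0, 1} satisfying
#   xy <= x, xy <= y, xy >= x + y - 1
# is xy = x * y.
for x in (0, 1), y in (0, 1)
    feasible = [xy for xy in (0, 1) if xy <= x && xy <= y && xy >= x + y - 1]
    @assert feasible == [x * y]
end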

You might be able to get away with this, depending on the rest of your
formulation:

model = Model()
@variable(model, x[i=1:N, j=1:N, p=1:P], Bin)
@variable(model, y[i=1:N, p=1:P], Bin)
@variable(model, z[i=1:N], Bin)
# If x[i,j,p] = 0 for any i, force y[j,p] = 0:
@constraint(model, [i=1:N, j=1:N, p=1:P], y[j,p] <= x[i,j,p])
@constraint(model, [i in 1:N, p in 1:P], sum(y[j, p] for j in 1:N) >= z[i])

Hi @odow, your idea of forming x*y and then linearizing it is very clever. Thanks for that. I was able to follow your reasoning up to this point:

# Now substitute in the xy variable:
@constraint(model, [i in 1:N, p in 1:P], sum(xy[i, j, p] for j in 1:N) >= z[i])

But after that I could not fully follow. Do you mean that the rest (below the line "You might be able to get away ...") is equivalent to the part above it? If I understood correctly, there is then no need to multiply and linearize at all. Right?

The second formulation is not equivalent in general. It depends on whether x[i,j,p] = 0 forces y[j,p] = 0. If that's the case, then great. Otherwise, if x[i,j,p] = 0 but y[j,p] = 1 is allowed, then you'll need the first formulation.
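To make the difference concrete, here is a small standalone sketch with toy values (x[i,j,p] = 0, y[j,p] = 1; values I made up, not from your model) showing how each formulation treats that point:

# Toy values standing in for a single x[i,j,p] and y[j,p]:
x_val, y_val = 0, 1

# First formulation: the auxiliary variable is forced to 0, so y = 1 stays
# feasible but simply contributes nothing to the sum.
xy_feasible = [v for v in (0, 1) if v <= x_val && v <= y_val && v >= x_val + y_val - 1]
@show xy_feasible      # [0]

# Second formulation: y <= x reads 1 <= 0, so this point is infeasible.
@show y_val <= x_val   # false

In other words, the first formulation only zeroes out the term in the sum, while the second removes the point from the feasible set entirely.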


Hi again @odow! Yes, this is exactly the case. If x[i,j,p] = 0, then y[j,p] should automatically be zero; it does not make sense for it to be one in that case. You have very good insight despite never having seen my model. Thank you so much, I appreciate it!
