Updating expressions for indicator constraints in JuMP

Hello, I am trying to implement a constraint with a conditional statement that I am making piecewise affine by introducing binary variables and using the big M method. I know that JuMP supports indicator constraints but I have not been able to find any relevant examples for more complicated use cases such as this one. My question may seem long, but I wanted to give enough information so that it is clear.

When I run my code and inspect how the expressions are built, it does not look like it is doing what it should. I would really appreciate it if someone could guide me on:
What changes do I need to make to the code? Could this constraint be expressed using indicator constraints in JuMP? If indicator variables can be used, it will make the code more readable.

The constraint requires selecting between two different parameters, say factor1 and factor2, depending on whether the value at the previous period is positive or negative:
The constraint requires defining an expression, let’s call it Linear_exp
Linear_exp = @expression(model, A*x-b) (where A and b are constants, and x are variables)
Now define another expression: Affine_exp

Affine_exp[1] = Linear_exp[1]
# Without (piecewise) linearizing, the constraint would look like:
for k = 2:length(Affine_exp)
    Affine_exp[k] = Linear_exp[k] + (Affine_exp[k-1] > 0 ? Affine_exp[k-1] * factor1[k] : Affine_exp[k-1] * factor2[k])
end
@constraint(model, zero_balance, Affine_exp[end] == 0)
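To make the intended semantics concrete, here is the recursion evaluated in plain Julia on toy numbers (my own sketch outside the model; the data values are made up):

```julia
# Toy data, purely illustrative
Linear_exp = [1.0, -2.0, 3.0]
factor1 = [0.0, 0.1, 0.2]   # applied when the previous value is positive
factor2 = [0.0, 0.5, 0.5]   # applied when the previous value is non-positive

Affine_exp = similar(Linear_exp)
Affine_exp[1] = Linear_exp[1]
for k in 2:length(Affine_exp)
    # The ternary picks factor1 or factor2 based on the sign of the previous period
    Affine_exp[k] = Linear_exp[k] +
        (Affine_exp[k-1] > 0 ? Affine_exp[k-1] * factor1[k] : Affine_exp[k-1] * factor2[k])
end
```

This is exactly the logic the binary variables need to emulate; the if/else cannot appear directly in the model, because the sign of Affine_exp[k-1] depends on the decision variables.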

(I don’t think I can write LaTeX here, so I am pasting an image with some equations.)

My MWE:

using JuMP, Cbc
function some_data()
    cost = [500,400,300,600,450]
    A= [100 120 130 140 150; 100 150 200 300 90; 100 200 150 140 150; 100 120 130 140 150; 200 300 400 500 200; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0]
    b = [200, 300, 400, 1000, 500, 20, 30, 10]
    factor1 = [0.001, 0.002, 0.003, 0.004, 0.0045, 0.005,0.0055, 0.006]
    factor2 = [0.0015, 0.0025, 0.0035, 0.0048,0.0057, 0.007, 0.0065,0.0075]
    t = length(b)
    n = size(A,2)

    return cost, A, b, factor1, factor2,n , t
end

function bigM(A, b, factor1, factor2, n, t)
    # Calculate suitable values of big M
    aux_M = A * ones(n) - b
    M = Vector{Float64}(undef, length(aux_M))
    M[1] = aux_M[1]
    for i in 2:length(M)
        M[i] = aux_M[i] + (M[i-1] > 0 ? M[i-1] * factor1[i] : M[i-1] * factor2[i])
    end
    M = round.(abs.(M)) .+ 100
    return M
end


function indicator_model(cost,A,b,factor1,factor2,n,t)
    model = Model(Cbc.Optimizer)
    @variable(model,0<= x[i=1:n] <= 1)
    @variable(model,y[i=1:t], Bin)
    @objective(model, Min, sum(cost[i]*x[i] for i in 1:n))

    Linear_exp = @expression(model, A*x-b)
    Affine_exp =[AffExpr(0.0) for i = 1:length(Linear_exp)]
    aux_Affine_exp = [AffExpr(0.0) for i = 1:length(Linear_exp)]
    Affine_exp[1] = Linear_exp[1]
    aux_Affine_exp[1] = 0.0

    for k = 2:length(Linear_exp)
        @constraint(model, Affine_exp[k-1] <= (1-y[k]) *M[k])
        @constraint(model,  0 <= aux_Affine_exp[k] - factor1[k] * Affine_exp[k-1] + (1-y[k]) *M[k] - (1-y[k]) *M[k] <= 0)
        @constraint(model, Affine_exp[k-1] >= -y[k] *M[k])
        @constraint(model, -y[k] *M[k] <= aux_Affine_exp[k] - factor2[k])
        @constraint(model, 0 <= aux_Affine_exp[k] - factor2[k] * Affine_exp[k-1] + (1-y[k]) *M[k] - y[k] *M[k] <= 0)
        Affine_exp[k] = Linear_exp[k]+aux_Affine_exp[k]
    end

    @constraint(model,con_zero, Affine_exp[t] == 0.0)

    return x, model

end

cost, A, b, factor1, factor2,n , t = some_data()
M = bigM(A,b,factor1,factor2, n,t)

x, model = indicator_model(cost,A,b,factor1,factor2,n,t)

You’re close, but your big-M reformulation isn’t quite right.

Mosek has a good section on this trick: 9 Mixed integer optimization — MOSEK Modeling Cookbook 3.3.0
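The core of the trick below is splitting a free quantity e into e = y_p - y_n with y_p, y_n >= 0, where the constraints y_p <= M*z and y_n <= M*(1 - z) force at most one of the pair to be nonzero. A quick plain-Julia sanity check of the split (my own illustration, not taken from the Cookbook):

```julia
# For any value e, split it into a nonnegative pair (y_p, y_n):
#   y_p = max(e, 0), y_n = max(-e, 0)
# Then y_p - y_n == e and at most one of the pair is nonzero --
# exactly what y_p <= M*z and y_n <= M*(1 - z) enforce via the binary z.
pn_split(e) = (max(e, 0.0), max(-e, 0.0))

for e in (-3.5, 0.0, 2.25)
    y_p, y_n = pn_split(e)
    @assert y_p - y_n == e
    @assert y_p >= 0 && y_n >= 0
    @assert y_p * y_n == 0       # complementarity: at most one nonzero
end
```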

Here’s something to point you in the right direction (I haven’t tested it; there might be typos):

A = rand(3, 2)
b = rand(3)
Factor1 = rand(3)
Factor2 = rand(3)

model = Model()
@variables(model, begin
    x[1:2]
    y[1:3]
    y_p[1:3] >= 0
    y_n[1:3] >= 0
    z[1:3], Bin
end)
@expression(model, expr[i=1:3], A[:, i]' * x - b[i] + y[i])
fix(y[1], 0.0)
for i in 2:3
    @constraints(model, begin
        y_p[i] - y_n[i] == expr[i - 1]
        y[i] == Factor1[i] * y_p[i] + Factor2[i] * y_n[i]
        y_p[i] <= M * z[i]
        y_n[i] <= M * (1 - z[i])
    end)
end
@constraint(model, expr[3] == 0)

Thank you @odow, much appreciated. You are right, my big-M formulation is not correct.

I would like to check a few points with you, I hope that is OK.

The code with your sample data works, but for anything else I try the solver returns an infeasible problem (even problems I know are feasible). This could be down to how I have defined the expressions.
If I change this line of code: y[i] == Factor1[i] * y_p[i] + Factor2[i] * y_n[i] to
y[i] == Factor1[i] * y_p[i] + Factor2[i] * y_n[i] + y[i-1], will this effectively change the constraint? I suspect I am missing another line of code that needs to change as well to implement this.

In your code, you have used one big M rather than two. Is that deliberate, or is it just for simplicity?

I think you intended A = rand(2,3)

Don’t you just want

y[i] == (1 + Factor1[i]) * y_p[i] + (1 + Factor2[i]) * y_n[i]

Thanks again, and also for pointing me to the MOSEK Modeling Cookbook.

I also thought about this one, but for some reason decided to add y[i-1] instead.
I will spend some time understanding why the solver returns infeasible for different data. I will mark this as solved, but I may have to come back with questions later :smiley:

Hi @odow, I have tried your suggestion, but it doesn’t seem to update the constraints correctly:
y[i] == (1 + Factor1[i]) * y_p[i] + (1 + Factor2[i]) * y_n[i]

However, if I change this to y[i] == Factor1[i-1] * y_p[i] + Factor2[i-1] * y_n[i] + expr[i-1], then I sometimes get a feasible result, when the solver doesn’t take forever.

I do not understand why there should be any difference between these two lines of code. Could this be a bug in JuMP, or is there something wrong in the formulation?

Cbc takes forever on a larger problem, so I tried Bonmin and Couenne, but they both finish with the message “The LP is infeasible or too expensive”.

A full working example is here:

using JuMP, Cbc

function some_data()
    cost = [500, 400, 300, 600, 450]
    A = [100 120 130 140 150; 100 150 200 300 90; 100 200 150 140 150; 100 120 130 140 150; 200 300 400 500 200]
    b = [200, 300, 400, 1000, 500, 20, 30, 3]
    # Pad with zeros to ensure the dimensions of A and b match for any operations on the expression
    if size(A, 2) < length(b)
        A = hcat(A, zeros(size(A, 1), length(b) - size(A, 2)))
    elseif length(b) < size(A, 2)
        b = vcat(b, zeros(size(A, 2) - length(b)))
    end
    Factor1 = [0.001, 0.002, 0.003, 0.004, 0.0045, 0.005, 0.0055, 0.006]
    Factor2 = [0.0015, 0.0025, 0.0035, 0.0048, 0.0057, 0.007, 0.0065, 0.0075]
    t = length(b)
    n = size(A, 1)

    return cost, A, b, Factor1, Factor2, n, t
end

function indicator_cons(A, b, Factor1, Factor2, M, n, t)
    model = Model(Cbc.Optimizer)

    @variables(model, begin
        0 <= x[1:n] <= 1
        y[1:t]
        y_p[1:t] >= 0
        y_n[1:t] >= 0
        z[1:t], Bin
    end)

    @expression(model, expr[i=1:t], A[:, i]' * x - b[i] + y[i])
    fix(y[1], 0.0)

    for i in 2:t
        @constraints(model, begin
            y_p[i] - y_n[i] == expr[i - 1]
            y[i] == (1 + Factor1[i-1]) * y_p[i] + (1 + Factor2[i-1]) * y_n[i]
            # change the previous line to
            # y[i] == Factor1[i-1] * y_p[i] + Factor2[i-1] * y_n[i] + expr[i-1]
            y_p[i] <= M * z[i]
            y_n[i] <= M * (1 - z[i])
        end)
    end
    @constraint(model, con_zero, expr[t] == 0)

    return x, y, expr, con_zero, model
end


function audit(x_r, A, b, Factor1, Factor2, t)
    # Evaluate the piecewise recursion at the solver's solution to check the result
    lastvalue = Vector{Float64}(undef, length(b))
    lastvalue[1] = A[:, 1]' * x_r - b[1]
    for i in 2:t
        if lastvalue[i-1] > 0
            lastvalue[i] = A[:, i]' * x_r - b[i] + lastvalue[i-1] * (1 + Factor1[i-1])
        else
            lastvalue[i] = A[:, i]' * x_r - b[i] + lastvalue[i-1] * (1 + Factor2[i-1])
        end
    end
    return lastvalue[t]
end

# Test code
cost, A, b, Factor1, Factor2,n , t = some_data()

M=1000000
x, y, expr, con_zero, model = indicator_cons(A,b,Factor1,Factor2,M, n, t)
optimize!(model)

x_r = value.(x)
audit(x_r,A,b,Factor1,Factor2,t)
# This return -377.71088024605297
# Retry by changing to y[i] == Factor1[i-1] * y_p[i] + Factor2[i-1] * y_n[i] + expr[i-1]
# audit(x_r,A,b,Factor1,Factor2,t)  returns -1.3500311979441904e-13

I made a typo. It should be

y[i] == (1 + Factor1[i]) * y_p[i] - (1 + Factor2[i]) * y_n[i]
                                  ^
                              -, not +
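For the record, a plain-Julia check that the corrected expression reproduces the piecewise definition (my own verification, with made-up factors):

```julia
# With e = y_p - y_n, y_p = max(e, 0), y_n = max(-e, 0):
#   e > 0:  (1 + f1) * y_p - (1 + f2) * y_n = (1 + f1) * e   (factor1 branch)
#   e < 0:  (1 + f1) * y_p - (1 + f2) * y_n = (1 + f2) * e   (factor2 branch)
f1, f2 = 0.1, 0.5

function next_value(e)
    y_p, y_n = max(e, 0.0), max(-e, 0.0)
    return (1 + f1) * y_p - (1 + f2) * y_n
end

@assert next_value(2.0) == 2.0 * (1 + f1)    # positive branch uses factor1
@assert next_value(-2.0) == -2.0 * (1 + f2)  # negative branch uses factor2
```

With a + instead of the -, the negative branch would flip the sign of the carried value, which explains the infeasibilities above.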

Thank you, much appreciated