Optimization constraints

I’m working on an optimization model in JuMP where I need to implement conditional constraints based on the sign of the right-hand side (RHS) of certain expressions. This is typically handled with the Big-M method, which activates constraints via binary variables depending on the RHS’s sign. However, I’m exploring an alternative approach that uses the maximum of zero and the RHS to enforce conditional constraints. My goal is to activate one constraint when the RHS is positive and another when it is negative. I’m considering two methodologies for this in JuMP and am seeking advice on their equivalence and effectiveness.

  1. Maximum of Zero and RHS Approach:

I aim for the purchase constraint to be active specifically when the RHS, defined as a \cdot x - b, is positive. This can be expressed by activating the constraint \text{purchase\_price} \cdot y \leq \max(0, a \cdot x - b). In JuMP, I’m attempting to model this using auxiliary variables to capture the maximum of zero and the RHS, alongside binary variables for conditional constraint activation.


using JuMP

model = Model()

@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@variable(model, 0 <= z <= 1)
@variable(model, aux1 >= 0)  # For capturing max(0, RHS) for purchase
@variable(model, aux2 >= 0)  # For capturing max(0, -RHS) for sale
@variable(model, δ1, Bin)    # Binary variable for purchase constraint activation
@variable(model, δ2, Bin)    # Binary variable for sale constraint activation

a = 80
b = 100
purchase_price = 20
sale_price = 15

# Constraints to model the conditional activation
@constraint(model, aux1_ge_rhs, aux1 >= a * x - b)
@constraint(model, purchase_constraint, purchase_price * y <= aux1)
@constraint(model, purchase_activation, y <= δ1)
@constraint(model, aux2_le_rhs, aux2 <= b - a * x)
@constraint(model, sale_constraint, sale_price * z <= aux2)
@constraint(model, sale_activation, z <= δ2)

# Ensure that one and only one of the constraints is active
@constraint(model, one_constraint_active, δ1 + δ2 == 1)

  2. Big-M Method Approach:

For comparison, I’m also contemplating the Big-M method as an alternative for enforcing these conditional constraints.


using JuMP

model = Model()

@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@variable(model, 0 <= z <= 1)
@variable(model, δ1, Bin)  # Binary variable for purchase constraint activation
@variable(model, δ2, Bin)  # Binary variable for sale constraint activation

a = 80
b = 100
purchase_price = 20
sale_price = 15
M = 1000  # Large constant for the Big-M method

# Purchase constraint activated by δ1
@constraint(model, purchase_constraint, purchase_price * y <= a * x - b + M * (1 - δ1))

# Sale constraint activated by δ2, ensuring it's enforced when the RHS is negative
@constraint(model, sale_constraint, sale_price * z <= -(a * x - b) + M * δ2)

# Ensure that one and only one of the constraints is active
@constraint(model, one_constraint_active, δ1 + δ2 == 1)

Question: Is the approach using the maximum of zero and the RHS correct for enforcing conditional constraints in JuMP? I would like to understand whether both methods are equivalent and which is more advantageous.


Have you tried solving your first model to see if it gets the results you expect? For the current data, a * x - b is always less than zero. Try some different data. You have the constraint:

@constraint(model, aux2_le_rhs, aux2 <= b - a * x)

Given that aux2 >= 0, can a * x - b ever be positive?
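
For example, evaluating the RHS at the endpoints of x’s domain (it is linear in x, so the extremes are attained there):

a, b = 80, 100
extrema(a * x - b for x in (0.0, 1.0))  # (-100.0, -20.0): a * x - b is never positive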

Your big-M model is better, but probably also not what you want. It is feasible to have both y and z non-zero, and I don’t know if your M * (1 - δ1) is correct.
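
For example, here is a hand-picked point (not solver output) that satisfies all three big-M constraints with both y = 1 and z = 1:

a, b, M = 80, 100, 1000
purchase_price, sale_price = 20, 15
x, y, z, δ1, δ2 = 0.0, 1.0, 1.0, 0, 1
@assert purchase_price * y <= a * x - b + M * (1 - δ1)  # 20 <= 900
@assert sale_price * z <= -(a * x - b) + M * δ2         # 15 <= 1100
@assert δ1 + δ2 == 1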

I’d write your model like:

using JuMP, HiGHS
a, b = 80, 100
purchase_price, sale_price = 20, 15
# A better estimate of `M`
M = max(b + purchase_price, sale_price + a - b)
model = Model(HiGHS.Optimizer)
@variables(model, begin
    0 <= x <= 1
    0 <= y <= 1
    0 <= z <= 1
    δ[1:2], Bin
end)
@constraints(model, begin
    y <= 1 - δ[1]
    purchase_price * y <= a * x - b + M * δ[1]
    z <= 1 - δ[2]
    sale_price * z <= -(a * x - b) + M * δ[2]
    sum(δ) == 1
end)
optimize!(model)
value.([x, y, z])

Thanks. I don’t get the expected result with the first model; I have tested it with different data, but I’m not sure what is wrong with that approach.

If I want to change the big-M constraints to represent the equality purchase_price*y == -(a*x-b), how would this answer change? I understand that it requires adding another constraint of the form purchase_price*y <= -(a*x-b) + M*δ[1], but I am uncertain.

You’d need to do something like:

# y == max(0, a * x - b)
a, b = 1, 1
M = 1000
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, y >= 0)
@variable(model, z, Bin)
@constraint(model, y >= a * x - b)
# if z == 0 then y == 0
@constraint(model, y <= z)
# if z == 1 then y <= a * x - b
@constraint(model, y <= a * x - b + M * (1 - z))
# if a * x - b > 0 then z == 1
@constraint(model, a * x - b <= M * z)

The MOSEK Modeling Cookbook has a number of tricks: 9 Mixed integer optimization — MOSEK Modeling Cookbook 3.3.0

See also Tips and tricks · JuMP

Thank you.

In the code you have provided, y serves both to represent the maximum value and as an auxiliary variable. However, wouldn’t the constraint @constraint(model, y <= z) invalidate its intended use by forcing y to be less than or equal to 1, especially when I anticipate that values greater than one are possible for a \cdot x - b?

In the question I defined the variable with the range 0 \leq y \leq 1, but the actual intended constraint was purchase\_price \cdot y = \max(0, a \cdot x - b). I’m unsure whether simply replacing y with purchase_price * y in all the constraints of the code you provided would be correct. Could you please clarify?

Yes, I was just following your original definition.

You need to replace it with an appropriate big-M:

@constraint(model, y <= M * z)

(If the upper bound of y is 1, then M = 1.)
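
Putting the pieces together with purchase_price * y == max(0, a * x - b) and the data from the original question, something like (here M = max(a, b) is a safe bound, since it covers both a * x - b and b - a * x for 0 <= x <= 1):

using JuMP
a, b = 80, 100
purchase_price = 20
M = max(a, b)  # bounds both a * x - b and b - a * x on 0 <= x <= 1
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, y >= 0)
@variable(model, z, Bin)
@constraint(model, purchase_price * y >= a * x - b)
# if z == 0 then purchase_price * y == 0
@constraint(model, purchase_price * y <= M * z)
# if z == 1 then purchase_price * y <= a * x - b
@constraint(model, purchase_price * y <= a * x - b + M * (1 - z))
# if a * x - b > 0 then z == 1
@constraint(model, a * x - b <= M * z)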

Thanks.
