I’m working on an optimization model in JuMP where I need to enforce conditional constraints based on the sign of the right-hand side (RHS) of certain expressions. This is typically handled with the Big-M method, which uses binary variables to activate constraints depending on the sign of the RHS. However, I’m exploring an alternative approach that instead uses the maximum of zero and the RHS to enforce the conditional constraints. My goal is to activate one constraint when the RHS is positive and another when it is negative. I’m considering two formulations in JuMP and would like advice on whether they are equivalent and how effective each is.

**Maximum of Zero and RHS Approach**:

I want the purchase constraint to be active precisely when the RHS, defined as $a \cdot x - b$, is positive. Mathematically, this amounts to enforcing $\text{purchase\_price} \cdot y \leq \max(0, a \cdot x - b)$. In JuMP, I’m attempting to model this using an auxiliary variable to capture the maximum of zero and the RHS, alongside binary variables for conditional constraint activation.

```julia
using JuMP
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@variable(model, 0 <= z <= 1)
@variable(model, aux1 >= 0) # For capturing max(0, RHS) for purchase
@variable(model, aux2 >= 0) # For capturing max(0, b - a * x), i.e. max(0, -RHS), for sale
@variable(model, δ1, Bin) # Binary variable for purchase constraint activation
@variable(model, δ2, Bin) # Binary variable for sale constraint activation
a = 80
b = 100
purchase_price = 20
sale_price = 15
# Constraints to model the conditional activation
@constraint(model, aux1_ge_rhs, aux1 >= a * x - b)
@constraint(model, purchase_constraint, purchase_price * y <= aux1)
@constraint(model, purchase_activation, y <= δ1)
@constraint(model, aux2_le_rhs, aux2 <= b - a * x)
@constraint(model, sale_constraint, sale_price * z <= aux2)
@constraint(model, sale_activation, z <= δ2)
# Ensuring that one and only one of the constraints is active
@constraint(model, one_constraint_active, δ1 + δ2 == 1)
```
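One thing I’m unsure about: as declared, `aux1` only has a lower bound (`aux1 >= a * x - b` and `aux1 >= 0`), so the solver is free to push `aux1` above $\max(0, a \cdot x - b)$ and thereby relax the purchase constraint. To pin `aux1` to the max exactly, I believe two additional upper-bound constraints tied to `δ1` are needed. A sketch, continuing the model above, where `M_rhs` is an assumed valid bound on $|a \cdot x - b|$ over the domain of `x` (here $|80x - 100| \le 100$ for $0 \le x \le 1$):

```julia
# Sketch (assumption): force aux1 == max(0, a * x - b) exactly.
# M_rhs must bound |a * x - b|; with a = 80, b = 100, 0 <= x <= 1, 100 suffices.
M_rhs = 100
# When δ1 = 1: aux1 <= a * x - b, so combined with aux1_ge_rhs, aux1 == a * x - b (RHS must be >= 0).
@constraint(model, aux1 <= a * x - b + M_rhs * (1 - δ1))
# When δ1 = 0: aux1 <= 0, so aux1 == 0 (and aux1_ge_rhs forces the RHS to be <= 0).
@constraint(model, aux1 <= M_rhs * δ1)
```

With these, `δ1` is forced to match the sign of the RHS rather than being freely chosen, which is what I want the "max" formulation to capture.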

**Big-M Method Approach**:

For comparison, I’m also contemplating the Big-M method as an alternative for enforcing these conditional constraints.

```julia
using JuMP
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@variable(model, 0 <= z <= 1)
@variable(model, δ1, Bin) # Binary variable for purchase constraint activation
@variable(model, δ2, Bin) # Binary variable for sale constraint activation
a = 80
b = 100
purchase_price = 20
sale_price = 15
M = 1000 # Large constant for Big-M method; must bound the slack of the relaxed constraint
# Purchase constraint activated by δ1
@constraint(model, purchase_constraint, purchase_price * y <= a * x - b + M * (1 - δ1))
# Sale constraint activated by δ2, ensuring it's enforced when the RHS is negative
@constraint(model, sale_constraint, sale_price * z <= -(a * x - b) + M * (1 - δ2))
# Ensuring that one and only one of the constraints is active
@constraint(model, one_constraint_active, δ1 + δ2 == 1)
```
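For completeness, I’m also aware that JuMP can express this activation logic directly with indicator constraints, which avoids choosing `M` entirely, though it requires a solver that supports them (e.g. Gurobi or CPLEX). A self-contained sketch of the same two conditions:

```julia
using JuMP
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@variable(model, 0 <= z <= 1)
@variable(model, δ1, Bin)
@variable(model, δ2, Bin)
a, b = 80, 100
purchase_price, sale_price = 20, 15
# Each constraint is enforced only when its binary variable equals 1; no M constant needed.
@constraint(model, δ1 => {purchase_price * y <= a * x - b})
@constraint(model, δ2 => {sale_price * z <= b - a * x})
@constraint(model, δ1 + δ2 == 1)
```

I mention this mainly to ask whether it changes the comparison between the two formulations above.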

**Question:** Is the approach using the maximum of zero and the RHS a correct way to enforce conditional constraints in JuMP? Are the two formulations equivalent, and which is preferable in practice?