How to model an implication

Suppose we have a two-dimensional binary variable x, say with n rows and m columns.
a) How can I express, in linear terms, a constraint of the following type?


# sum(x[1,:])== 5 => sum(x[3,:])==2

is something like this okay?

sum(x[1,:])== 5 --> {sum(x[3,:])==2}

or do we need something like this (maybe done better)?

# s>=0 ; y in {0,1}
# sum(x[3,:]) + s ==2
# sum(x[1,:]) - 3y >= s+(1-y)*3

# y==1 => s==0 => sum(x[3,:])==2
# y==0 => (0  <= s <= 2) => 2 >= sum(x[3,:]) >=0

# maybe this one is better?
# sum(x[1,:]) - 5y >= s-(1-y)*(m-5)
# y==0 => (0  <= s <= m) => m >= sum(x[3,:]) >=0

b) Can I impose a nonlinear objective function? A quadratic one, for example?

I hope the question is well posed, even if I didn’t specify the context well.

b) Can I impose a nonlinear objective function? A quadratic one, for example?

This question is isolated (not related to the other content in your post).

I hope the question is well posed

This can be improved; e.g., you are introducing additional information here:

# s>=0
# sum(x[3,:]) + s ==2

Finally, an n-by-m matrix is said to have two axes; I think that sounds better. “Dimension” is generally used for Euclidean spaces.

I think it helps to introduce a physical background, rather than pure math equations.

I would like to be able to use a quadratic objective function in a context like the following

using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
# set_silent(model)
@variable(model, x[i in 1:nr, j in 1:ng, k in turni ], Bin)

# @objective(model, Min, sum(x[2:nr,:,"N"]))

@constraints(model, begin
 # each day must have exactly one Ti or one Fi
    fer1[k in union(t_feriali,t_card_urg), g in g_feriali], sum(x[:,g,k]) == 1
    fer2[k in t_festivi, g in g_feriali], sum(x[:,g,k]) == 0

    fest1[k in t_festivi, g in g_festivi], sum(x[:,g,k]) == 1
    fest2[k in union(t_feriali, t_card_urg), g in g_festivi], sum(x[:,g,k]) == 0
    #card_urg[k in t_card_urg, g in g_feriali], sum(x[:,g,k]) == 1
...
end
...

If you associate sum(x[1, :]) with a variable, and likewise for the other expression, then your constraint is an implication constraint

  • xc == some_value implies xe == some_other_value

Without loss of generality, we can study

  • xc == 0 implies xe == 0

So the continuous variable xc has two states: tight (== 0) or relaxed (otherwise). To this end we introduce an auxiliary 0-1 variable bc. Doing the same for xe, we can write the following program:

using JuMP
import Gurobi
Mc1, Mc2, Me1, Me2 = 11, 12, 13, 14 # some big-M's that you should decide
model = Model(Gurobi.Optimizer)

@variable(model, -Mc1 ≤ xc ≤ Mc2)
@variable(model, bc, Bin)
@constraint(model, xc ≤ Mc2 * bc)
@constraint(model, -Mc1 * bc ≤ xc)

@variable(model, -Me1 ≤ xe ≤ Me2)
@variable(model, be, Bin)
@constraint(model, xe ≤ Me2 * be)
@constraint(model, -Me1 * be ≤ xe)

Then, that implication constraint can be precisely added via

@constraint(model, be ≤ bc)

So my answer is Yes—you can use linear inequalities to build that implication constraint.
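A quick brute-force sanity check of those linking constraints can be done in plain Python (a sketch with small illustrative big-M values, not the ones above, and xc restricted to integers purely for enumeration):

```python
# Enumerate which (bc, xc) pairs satisfy the linking constraints
# xc <= Mc2 * bc and -Mc1 * bc <= xc.
Mc1, Mc2 = 3, 3  # hypothetical bounds, just for this enumeration

feasible = [
    (bc, xc)
    for bc in (0, 1)
    for xc in range(-Mc1, Mc2 + 1)
    if xc <= Mc2 * bc and -Mc1 * bc <= xc
]

# bc == 0 forces xc == 0 (the "tight" state) ...
assert all(xc == 0 for bc, xc in feasible if bc == 0)
# ... but bc == 1 does NOT force xc != 0: the pair (1, 0) is also feasible.
assert (1, 0) in feasible
```

Note the asymmetry in the second assertion: bc can be 1 even when xc == 0, which matters for the direction of the implication (this point is revisited later in the thread).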


A math program comprises a feasible set and an objective.
The feasible set, defined by a set of constraints, represents the physical aspect, whereas the objective represents the economic aspect; they are independent of each other.
Your program is already nonconvex, so adding a quadratic objective won’t significantly complicate the original problem in the complexity sense. You can give it a try.


JuMP can directly model implications through indicator constraints. See: indicator constraints.

Not every solver supports indicator constraints directly, though. I am not sure what JuMP will do for solvers which don’t support it. Oscar will probably give a better answer to that :slight_smile:

Big-M constraints can also work, as suggested above. But my experience with them is that they introduce undesirable numerical properties.


I know this form: “Indicator constraints consist of a binary variable and a linear constraint.”
How can I reduce my implication to this form?

Can I use the quadratic function with model = Model(HiGHS.Optimizer)?
or do I have to change something?

The tips and tricks section can be helpful here: Tips and tricks · JuMP

@constraint(model, sum(x[1, :]) - 5 == xc)
@constraint(model, sum(x[3, :]) - 2 == xe)

try it, see what you get.

Thank you.
I fully understood your proposal, which is detailed and very clear.
The question was directed at @langestefan’s suggestion to use indicator constraints.

If I understand your constraint correctly then it should just be

@variable(model, x)
@variable(model, a, Bin)

@constraint(model, a --> {sum(x[1,:]) == 5})
@constraint(model, a --> {sum(x[3,:]) == 2})

(I haven’t tested this)

I guess there is a logical issue with your formulation:

In your formulation, a == false and sum(x[1,:]) == 5 can happen simultaneously; there is no restriction preventing this state. In that case the sum(x[3,:]) == 2 constraint is relaxed and might not be met, so the original implication is not guaranteed.
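The gap can be illustrated by enumerating the indicator semantics in plain Python (a sketch that treats the two sums as scalar stand-ins s1 and s3 over small illustrative ranges):

```python
# a == 1 forces s1 == 5 and s3 == 2; a == 0 imposes nothing.
points = [
    (a, s1, s3)
    for a in (0, 1)
    for s1 in range(0, 7)   # stand-in for sum(x[1,:])
    for s3 in range(0, 4)   # stand-in for sum(x[3,:])
    if a == 0 or (s1 == 5 and s3 == 2)
]

# The unwanted state survives: a == 0 with s1 == 5 and s3 != 2 is feasible,
# so "s1 == 5 implies s3 == 2" is not actually enforced.
assert (0, 5, 3) in points
```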


I also want to know about this.:upside_down_face: waiting for odow…

Could this work?
I haven’t tested it yet.

y==sum(x[1,:])-5

!y --> {sum(x[3,:])==2}
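One caveat with this proposal, checked by a quick enumeration in plain Python (a sketch treating sum(x[1,:]) as a scalar s1 with an illustrative range): if y is binary, the equality y == s1 - 5 only admits s1 in {5, 6}, which may cut off otherwise feasible solutions.

```python
# With binary y, the equality y == s1 - 5 restricts s1 to {5, 6}.
allowed = sorted({
    s1
    for s1 in range(0, 11)  # illustrative range for sum(x[1,:])
    for y in (0, 1)
    if y == s1 - 5
})
assert allowed == [5, 6]
```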

How big is n? You could do something like this:

using JuMP
model = Model()
n, N = 5, 3
@variable(model, x[1:2, 1:n], Bin)
@variable(model, z[1:n], Bin)
@constraint(model, sum(x[1, :]) == sum(i * z[i] for i in 1:n))
@constraint(model, z[N] --> {sum(x[2, :]) == 2})
@constraint(model, sum(z) <= 1)
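A brute-force check of this formulation in plain Python (a sketch mirroring the construction above with small illustrative sizes; x and z enumerated exhaustively) suggests the implication sum(x[1,:]) == N => sum(x[2,:]) == 2 is indeed enforced:

```python
from itertools import product

n, N = 4, 2  # small illustrative sizes

ok = True
for x1 in product((0, 1), repeat=n):        # row x[1, :]
    for x2 in product((0, 1), repeat=n):    # row x[2, :]
        for z in product((0, 1), repeat=n):
            if sum(z) > 1:
                continue  # sum(z) <= 1
            if sum(x1) != sum((i + 1) * zi for i, zi in enumerate(z)):
                continue  # sum(x[1,:]) == sum(i * z[i])
            if z[N - 1] == 1 and sum(x2) != 2:
                continue  # indicator: z[N] --> sum(x[2,:]) == 2
            # every point reaching here is feasible; check the implication
            if sum(x1) == N and sum(x2) != 2:
                ok = False

assert ok  # no feasible point violates the implication
```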

For b): HiGHS supports only linear problems if there are integer variables.


Yes, @odow’s answer is correct in its idea.

The crux here is to restrict the “cause” value sum(x[1, :]) to a set of enumerated values via

== sum(i * z[i] for i in 1:n)

where z is a vector of Boolean decisions.

So the “cause” condition sum(x[1, :]) == some_enum_value holds if and only if z[N] == true for the corresponding N (i.e. one of the enumerated states).

This constraint is essential for establishing the enumeration. At the same time, it also restricts the range of sum(x[1, :]), so you need to make sure that this is valid for your original problem, i.e. that it won’t cut off any other feasible solutions.

On reassessment, my earlier idea was wrong. The bc there can lazily take true, which does not rule out the cause condition (i.e. == some_cause_value) holding. Therefore the implication was not properly enforced.

I think my misconception was due to not realizing that

  • The cause should be Boolean or Enum, it cannot be continuous.

There are many other possible formulations. It always helps to have some test cases that enumerate all possible values. Otherwise it can be very easy to make a logic mistake.

julia> using JuMP, HiGHS, Test

julia> function test_subproblem(build_fn; n, N)
           model = Model(HiGHS.Optimizer)
           set_silent(model)
           @variable(model, x[1:2, 1:n], Bin)
           @objective(model, Max, sum(x))
           build_fn(n, N, model, x)
           for y in Iterators.product(ntuple(i -> 0:1, n)...)
               if sum(y) != N
                   continue
               end
               @testset "$y" begin
                   fix.(x[1, :], y; force = true)
                   optimize!(model)
                   @test is_solved_and_feasible(model)
                   @test isapprox(sum(value.(x[2, :])), 2, atol = 1e-4)
               end
           end
           return
       end
test_subproblem (generic function with 1 method)

julia> function main(build_fn)
           @testset "main" begin
               @testset "$n-$N" for n in 2:10, N in 1:n
                   test_subproblem(build_fn; n, N)
               end
           end
           return
       end
main (generic function with 1 method)

julia> main() do n, N, model, x
           @variable(model, z[1:n], Bin)
           @constraint(model, sum(x[1, :]) == sum(i * z[i] for i in 1:n))
           @constraint(model, z[N] --> {sum(x[2, :]) == 2})
           @constraint(model, sum(z) <= 1)
       end
Test Summary: | Pass  Total  Time
main          | 4070   4070  0.3s

julia> main() do n, N, model, x
           @variable(model, z[1:3], Bin)
           @variable(model, 0 <= s[1:2] <= n)
           @constraint(model, [i in 1:2], s[i] <= n * z[i])
           @constraint(model, [i in 1:2], s[i] >= z[i])
           @constraint(model, z[3] == z[1] + z[2])
           @constraint(model, sum(x[1, :]) + s[1] - s[2] == N)
           @constraint(model, !z[3] --> {sum(x[2, :]) == 2})
       end
Test Summary: | Pass  Total  Time
main          | 4070   4070  0.4s

The "physical" model is this:

turni = ["M", "P", "N", "GF","NF","CUM","CUP"]
model = Model(HiGHS.Optimizer)

# set_silent(model)

@variable(model, x[i in 1:nr, j in 1:ng, k in turni ], Bin)  # nr = 13 now (maybe it will get to 25) ; ng = 30 (31)

# @objective(model, Min, something))

@constraints(model, begin

# a long list of constraints

    # range of N and NF for row for month
    [r in ops_lim], sum(x[r,j,"N"]+x[r,j,"NF"] for j in 1:ng) <= 4
    [r in ops_lim], sum(x[r,j,"N"]+x[r,j,"NF"] for j in 1:ng) >= 3
    
    [r in ops_lim], sum(x[r,j,"N"] for j in 1:ng) <= 2
    [r in ops_lim], sum(x[r,j,"N"] for j in 1:ng) >= 1

   # I want to add the following constraint to these constraints:
   [r in ops_lim],  sum(x[r,j,"NF"] for j in 1:ng) == 2  --> { sum(x[r,j,"N"] for j in 1:ng) <= 1 }

   # ops_lim is a subset of 1:nr

Given the physical model described above, which optimizer should I use to impose a nonlinear objective function?
For example, quadratic.

See Supported-solvers.

Some commercial solvers need licenses.
