Vectorized Modulo Function for JuMP

I’m trying to write a vectorized modulo function for JuMP to be invoked via:
e = reformulate_mod_vec(model, A*[x;y], v)
Here A is a fixed J×(n+m) matrix, x is an n×1 vector of JuMP variables, y is an m×1 vector of JuMP variables, and v is the integer modulus.
Is x::Vector{AffExpr} the correct second argument for the definition of the vectorized mod function? Should the first argument in the definition instead be model::Model?

function reformulate_mod_vec(model, x::Vector{AffExpr}, v::Int)
    n = length(x)
    mult = @variable(model, [1:n], integer = true)
    rem = @variable(model, [1:n], integer = true, lower_bound = 0, upper_bound = v - 1)
    @constraint(model, x .== v .* mult + rem)
    return rem
end

Annotating arguments with their type in Julia is optional:

using JuMP
function reformulate_mod_vec(model, x, v)
    n = length(x)
    mult = @variable(model, [1:n], integer = true)
    rem = @variable(
        model, 
        [1:n], 
        integer = true, 
        lower_bound = 0, 
        upper_bound = v - 1,
    )
    @constraint(model, x .== v .* mult + rem)
    return rem
end

m, n = 2, 2
A = [1 1 1 1; 1 1 1 1; 1 1 1 1]
v = 3
model = Model()
@variable(model, x[1:n])
@variable(model, y[1:m])
rem = reformulate_mod_vec(model, A * [x; y], v)
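As a quick sanity check on the annotation question (a sketch; run it in your own session to confirm the exact printed alias), the product A * [x; y] really does produce a Vector{AffExpr}, since multiplying integer coefficients by VariableRefs promotes to affine expressions with Float64 coefficients:

```julia
using JuMP

model = Model()
@variable(model, x[1:2])
@variable(model, y[1:2])
A = [1 1 1 1; 1 1 1 1; 1 1 1 1]

expr = A * [x; y]
typeof(expr)  # Vector{AffExpr}, i.e. Vector{GenericAffExpr{Float64,VariableRef}}
```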

Annotations have no impact on performance, but they have two main benefits:

  • They help document the function.
  • They can be used for multiple dispatch.

As an example, you might have something like this:

function reformulate_mod_vec_2(model, x, v)
    mult = @variable(model, integer = true)
    rem = @variable(model, integer = true, lower_bound = 0, upper_bound = v - 1)
    @constraint(model, x == v * mult + rem)
    return rem
end

rem = reformulate_mod_vec_2.(model, A * [x; y], v)

Here reformulate_mod_vec_2 needs a scalar-valued x, so we must use the broadcasting call reformulate_mod_vec_2.( for it to work on our vector. But that’s hard to tell just from the signature.

You might do instead:

function reformulate_mod_vec_3(model, x::AbstractJuMPScalar, v)
    mult = @variable(model, integer = true)
    rem = @variable(model, integer = true, lower_bound = 0, upper_bound = v - 1)
    @constraint(model, x == v * mult + rem)
    return rem
end
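Taking the multiple-dispatch point one step further, you could give the same name both a scalar and a vector method, so callers never need to remember the broadcast dot. This is my own sketch, not from the answer above; reformulate_mod is a hypothetical name:

```julia
using JuMP

# Scalar method: reformulate one expression.
function reformulate_mod(model, x::AbstractJuMPScalar, v)
    mult = @variable(model, integer = true)
    rem = @variable(model, integer = true, lower_bound = 0, upper_bound = v - 1)
    @constraint(model, x == v * mult + rem)
    return rem
end

# Vector method: broadcasts onto the scalar method element-wise.
function reformulate_mod(model, x::AbstractVector{<:AbstractJuMPScalar}, v)
    return reformulate_mod.(model, x, v)
end
```

Now reformulate_mod(model, A * [x; y], v) and a scalar call both dispatch to the right method, and each signature documents what it accepts.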

so that not using broadcasting fails:

julia> rem = reformulate_mod_vec_3(model, A * [x; y], v)
ERROR: MethodError: no method matching reformulate_mod_vec_3(::Model, ::Vector{AffExpr}, ::Int64)
Closest candidates are:
  reformulate_mod_vec_3(::Any, ::AbstractJuMPScalar, ::Any) at REPL[12]:1
Stacktrace:
 [1] top-level scope
   @ REPL[14]:1

julia> rem = reformulate_mod_vec_3.(model, A * [x; y], v)
3-element Vector{VariableRef}:
 _[18]
 _[20]
 _[22]

Is there a difference in performance and efficiency between reformulate_mod_vec(model, A * [x; y], v) and reformulate_mod_vec_2.(model, A * [x; y], v)?

Is x::Vector{AffExpr} the correct annotation for the second argument of the vectorized implementation, reformulate_mod_vec?

Is there a difference in performance and efficiency between reformulate_mod_vec(model, A * [x; y], v) and reformulate_mod_vec_2.(model, A * [x; y], v)?

In theory, if you benchmark them they’ll come out slightly differently, but it shouldn’t make much of a difference. You should consider them equivalent for now.
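If you ever do want to measure it, BenchmarkTools.jl is the usual tool. A minimal sketch, assuming the reformulate_mod_vec and reformulate_mod_vec_2 definitions from above are in scope (each run builds a fresh model so constraints don’t accumulate):

```julia
using JuMP, BenchmarkTools

A = [1 1 1 1; 1 1 1 1; 1 1 1 1]
v = 3

function bench_vec()
    model = Model()
    @variable(model, x[1:2])
    @variable(model, y[1:2])
    return reformulate_mod_vec(model, A * [x; y], v)
end

function bench_broadcast()
    model = Model()
    @variable(model, x[1:2])
    @variable(model, y[1:2])
    return reformulate_mod_vec_2.(model, A * [x; y], v)
end

@btime bench_vec()
@btime bench_broadcast()
```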

Is x::Vector{AffExpr} the correct annotation for the second argument of the vectorized implementation, reformulate_mod_vec?

Sure, if you want to pass a Vector{AffExpr}. But if you passed a vector of variables, you’d need Vector{VariableRef} instead. You could also just use x::Vector, and it will accept any Vector{T} that you pass.
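If you want one annotation that covers both the Vector{VariableRef} and Vector{AffExpr} cases while still rejecting, say, a Vector{Float64}, a middle ground (my suggestion, not from the answer above) is to constrain the element type:

```julia
using JuMP

# Accepts Vector{VariableRef}, Vector{AffExpr}, or any other
# vector of JuMP scalars, but rejects plain numeric vectors.
function reformulate_mod_vec(model, x::Vector{<:AbstractJuMPScalar}, v::Int)
    n = length(x)
    mult = @variable(model, [1:n], integer = true)
    rem = @variable(model, [1:n], integer = true, lower_bound = 0, upper_bound = v - 1)
    @constraint(model, x .== v .* mult .+ rem)
    return rem
end
```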