JuMP summation constraint error

I’m trying to add a constraint to a JuMP model that sums the elements of a constant matrix c times binary variables x. For i=1:I, x[i,j] is a binary variable for j=1:len_J_i[i]. That is, for each i=1:I, x[i,:] is a vector of binary variables whose length depends on i.

The commented-out constraint achieves the desired summation but takes a very long time to construct for the particular values of I and len_J_i in the actual model. The uncommented constraint, which I thought might be more efficient, generates the error below. Is there a more efficient way to build the commented-out constraint for large values of I (15,000) and M (5,000)?

Sample Code:

using JuMP
m = Model()

I = 10
M = 500 # M is the maximum possible length of J_i[i].
len_J_i = Array{Int64,1}(undef,I) # This stores the length of J_i[i] for i=1:I.
for i = 1:I
    len_J_i[i] = rand(1:M)
end

max_c = 1000
c = Array{Int64,2}(undef,I,M)
for i = 1:I
    for j = 1:len_J_i[i]
        c[i,j] = rand(0:max_c)
    end
end

@variable(m,x[i=1:I,j=1:len_J_i[i]],Bin)
@variable(m,200 <= z <= 300,Int)

#@constraint(m,z == sum(c[i,j]*x[i,j] for i=1:I, j=1:len_J_i[i])) # This constraint works.
@constraint(m,z == sum(c[i,1:len_J_i[i]]*x[i,:]' for i=1:I)) # This constraint generates the error below.



$ julia Wotan2.jl
ERROR: LoadError: `Base.size` is not implemented for `SparseAxisArray` because although it is a subtype of `AbstractArray`, it is conceptually closer to a dictionary with `N`-dimensional keys. If you encounter this error and you didn't call `size` explicitly, it is because you called a method that is unsupported for `SparseAxisArray`s. Consult the JuMP documentation for a list of supported operations.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] size(#unused#::JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}})
    @ JuMP.Containers ~/.julia/packages/JuMP/H2SWp/src/Containers/SparseAxisArray.jl:56
  [3] axes
    @ ./abstractarray.jl:98 [inlined]
  [4] axes(v::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ LinearAlgebra ~/julia-1.9.1/share/julia/stdlib/v1.9/LinearAlgebra/src/adjtrans.jl:298
  [5] combine_axes
    @ ./broadcast.jl:512 [inlined]
  [6] instantiate
    @ ./broadcast.jl:294 [inlined]
  [7] materialize
    @ ./broadcast.jl:873 [inlined]
  [8] broadcast(::typeof(*), ::Vector{Int64}, ::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ Base.Broadcast ./broadcast.jl:811
  [9] *(u::Vector{Int64}, v::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ LinearAlgebra ~/julia-1.9.1/share/julia/stdlib/v1.9/LinearAlgebra/src/adjtrans.jl:439
 [10] sub_mul(::VariableRef, ::Vector{Int64}, ::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ MutableArithmetics ~/.julia/packages/MutableArithmetics/h0wjj/src/MutableArithmetics.jl:39
 [11] operate(op::typeof(MutableArithmetics.sub_mul), x::VariableRef, y::Vector{Int64}, args::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ MutableArithmetics ~/.julia/packages/MutableArithmetics/h0wjj/src/interface.jl:206
 [12] operate_fallback!!(::MutableArithmetics.IsNotMutable, ::typeof(MutableArithmetics.sub_mul), ::VariableRef, ::Vector{Int64}, ::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ MutableArithmetics ~/.julia/packages/MutableArithmetics/h0wjj/src/interface.jl:586
 [13] operate!!(::typeof(MutableArithmetics.sub_mul), ::VariableRef, ::Vector{Int64}, ::LinearAlgebra.Adjoint{VariableRef, JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}})
    @ MutableArithmetics ~/.julia/packages/MutableArithmetics/h0wjj/src/rewrite.jl:89
 [14] macro expansion
    @ ~/.julia/packages/MutableArithmetics/h0wjj/src/rewrite.jl:322 [inlined]
 [15] macro expansion
    @ ~/.julia/packages/JuMP/H2SWp/src/macros.jl:986 [inlined]
 [16] top-level scope
    @ ~/julia_code/Wotan/Wotan2.jl:23
in expression starting at /home/stuart/julia_code/Wotan/Wotan2.jl:23

The issue is that c[i,1:len_J_i[i]]*x[i,:]' isn’t defined: because the index set of x depends on i, the slice x[i,:] is a SparseAxisArray, and Vector * Adjoint{SparseAxisArray} isn’t supported because a SparseAxisArray is conceptually closer to a dictionary than to a matrix.
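
You can confirm this from the types involved (they match the stack trace):

typeof(x[1,:])    # JuMP.Containers.SparseAxisArray{VariableRef, 1, Tuple{Int64}}
typeof(x[1,:]')   # LinearAlgebra.Adjoint{VariableRef, <that SparseAxisArray type>}

The adjoint wrapper itself is constructed without complaint; the failure comes when Vector * Adjoint falls back to broadcasting, which needs the size/axes that SparseAxisArray deliberately does not implement.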

What you have written should be efficient. The problem is just the number of variables. If I = 15,000 and M = 5,000, then you’re going to have on the order of 10^7 variables (up to 15,000 × 5,000 = 7.5 × 10^7 assignment pairs), and a single dense constraint with that many terms! JuMP is built assuming that most constraints are highly sparse.

Why so many variables and why the dense constraint?

c[i,1:len_J_i[i]]'*x[i,:] fixes the error. Are you saying that sum(c[i,j]*x[i,j] for i=1:I, j=1:len_J_i[i]) should be equivalent to sum(c[i,1:len_J_i[i]]'*x[i,:] for i=1:I) in terms of time to construct the constraint?

@constraint(m,z == sum(c[i,1:len_J_i[i]]'*x[i,:] for i=1:I))
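
For reference, here is how I would compare the two (a rough sketch; timings vary with I, M, and the random lengths, and the first @time includes compilation):

m1 = Model()
@variable(m1, x1[i=1:I, j=1:len_J_i[i]], Bin)
@variable(m1, z1)
@time @constraint(m1, z1 == sum(c[i,j]*x1[i,j] for i=1:I, j=1:len_J_i[i]))

m2 = Model()
@variable(m2, x2[i=1:I, j=1:len_J_i[i]], Bin)
@variable(m2, z2)
@time @constraint(m2, z2 == sum(c[i,1:len_J_i[i]]'*x2[i,:] for i=1:I))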

The variable z represents the objective function value of a minimization problem. However, instead of minimizing, I want to solve a feasibility problem (no objective) with lower and upper bounds on z in order to prove optimality of a best known solution. For example, the lower bound is the best known lower bound, e.g., from an LP relaxation or a Benders decomposition, and the upper bound is one less than the best known upper bound, e.g., from a metaheuristic: if the resulting problem is infeasible, the best known solution is optimal.

@variable(m,200 <= z <= 300,Int)
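
Concretely, the pattern I have in mind is something like this sketch (LB and UB are hypothetical best known bounds, and some MIP solver is attached via set_optimizer):

LB = 200                    # best known lower bound (hypothetical)
UB = 301                    # best known upper bound (hypothetical)
feas = Model()              # plus set_optimizer(feas, <some MIP solver>)
@variable(feas, y[i=1:I, j=1:len_J_i[i]], Bin)
@variable(feas, LB <= z <= UB - 1, Int)
@constraint(feas, z == sum(c[i,j]*y[i,j] for i=1:I, j=1:len_J_i[i]))
# ... plus the rest of the model's constraints ...
optimize!(feas)             # no @objective: a pure feasibility check
termination_status(feas)    # INFEASIBLE would prove the value-UB solution optimal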

Sure, but what problem has a dense objective with 10^7 variables?

It’s a facility location problem with I = 15,000 customers and M = 5,000 candidate facilities. c is the cost matrix giving the cost c[i,j] of assigning customer i to facility j. x determines to which facility each customer is assigned.

If you have a variable for every possible assignment, then that is a very large problem. I’d suggest making the problem sparser by pre-filtering: for each customer, keep only the N closest (or cheapest) candidate facilities, as in the sketch below.
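
A minimal sketch of that idea, assuming c is a fully populated I-by-M cost matrix (in the sample code it is only partially initialized) and N is a cutoff you choose (both hypothetical):

N = 10                                            # hypothetical cutoff
J = [partialsortperm(c[i,:], 1:N) for i in 1:I]   # N cheapest facilities per customer

mf = Model()
@variable(mf, x[i=1:I, j=J[i]], Bin)              # variables only for allowed pairs
@constraint(mf, [i=1:I], sum(x[i,j] for j in J[i]) == 1)  # each customer assigned once
@expression(mf, cost, sum(c[i,j]*x[i,j] for i=1:I, j in J[i]))

This keeps the model at I*N variables instead of up to I*M.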

I don’t really understand the need for an explicit z variable. Can’t you just solve and see what the optimum is? Adding bounds likely doesn’t help the solution process, and might even make it worse.
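
That is, with a solver attached, just minimize directly (a sketch; HiGHS is only an example solver):

using HiGHS
set_optimizer(m, HiGHS.Optimizer)
delete_lower_bound(z)     # drop the 200 <= z bound
delete_upper_bound(z)     # drop the z <= 300 bound
@objective(m, Min, z)
optimize!(m)
objective_value(m)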