Take as an example a vector v of type Boolean with n elements. Based on this vector we build a variable array x of the same length as v, but in which only the indices corresponding to the non-zero elements of v are actually variables.

With JuMP we could do something like this:

v = Bool[1, 1, 1, 0, 0, 1]
n = size(v,1)
@variable(model, x[i=1:n; v[i] != 0])

The problem for me is the interaction of this variable x with the rest of the model building. If we wanted to create the following objective function:

Throws the following: ERROR: DimensionMismatch("first array has length 6 which does not match the length of the second, 4.")
I would have expected the variable to behave like a sparse array, with the "non-variable" entries of x as zeros. But x[4] doesn't exist instead of being 0.

Is there any workaround to fill the empty entries with zeros? I would like to avoid for-loops, since the model I'm trying to build is already clearly written in matrix-vector form (and is much more complex than the example).

Despite the name, SparseAxisArray isn't really a sparse matrix; it acts more like a dictionary. Why not just make x dense and then fix the unused entries to zero? Solvers will presolve them out.
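A minimal sketch of that suggestion, using the same v as above (fix works directly here because x carries no bounds):

```julia
using JuMP

v = Bool[1, 1, 1, 0, 0, 1]
model = Model()
@variable(model, x[1:length(v)])  # dense: one variable per entry of v
for i in eachindex(v)
    if !v[i]
        fix(x[i], 0.0)            # pin unused entries to zero; presolve removes them
    end
end
# x now supports ordinary dense indexing: x[4] is a real (fixed) variable,
# so matrix-vector expressions over the full length work without special cases.
```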

I have to use sparse arrays (in particular from the SparseArrays stdlib, but also some custom array types) with JuMP all the time. I'm pleased to report that this works great, but unfortunately JuMP is still missing a lot of methods for making this convenient; in particular, last I checked it was missing pretty much all efficient sparse matrix multiplications. The good news is that the default methods aren't always that bad, and it's pretty easy to add more.

I keep intending to make PRs to JuMP to make it more convenient to work with sparse variables. One of the things that has stopped me is that I'm not entirely clear on where in the JuMP API these would go.

Things that are needed include:

The aforementioned efficient multiplication methods

Perhaps some macros that simplify some expressions (e.g. there are cases in which a single three-argument multiplication for a matrix product is much more efficient than two separate pairwise multiplications).

Convenience functions for defining sparse variables (I often use one where I pass a binary matrix that specifies where the non-zero elements are).

Incorporating the sparse variable definitions into the existing macros.
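As an illustration of the last two items, here is a hypothetical helper (sparse_variables is my own name, not an existing JuMP API) that takes a binary pattern matrix and returns a sparse matrix with a fresh variable at each structural non-zero:

```julia
using JuMP, SparseArrays

# Hypothetical helper: S[i, j] == true marks where a variable should live.
function sparse_variables(model::Model, S::AbstractMatrix{Bool})
    I, J, _ = findnz(sparse(S))             # row/column indices of the pattern
    vars = @variable(model, [1:length(I)])  # one scalar variable per non-zero
    # Store as AffExpr so the implicit zero entries are valid expressions.
    return sparse(I, J, convert.(AffExpr, vars), size(S)...)
end
```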

@ExpandingMan the issue is that operations on Variables don't preserve output types, so Variable + Variable isn't a Variable. This contradicts assumptions spread throughout base Julia code, e.g. Float64 + Float64 = Float64.
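Concretely (a small sketch; the sum comes back as JuMP's affine expression type rather than a variable):

```julia
using JuMP

model = Model()
@variable(model, a)
@variable(model, b)
typeof(a)      # VariableRef
typeof(a + b)  # AffExpr (alias for GenericAffExpr{Float64, VariableRef}), not VariableRef
```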

You can easily just do SparseArrays.findnz and then make a loop that defines scalar JuMP variables and sticks them into a matrix.

Yes, I should have mentioned that when you create the sparse variable matrices, one should create matrices of GenericAffExpr, not of VariableRef. This works really well for linear problems; I've yet to work it out for quadratic problems.
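A sketch of why the element type matters, under the same assumptions: the zero of a VariableRef is not itself a VariableRef (it is an empty AffExpr), so sparse storage with VariableRef elements breaks in many operations, while AffExpr has a well-defined zero.

```julia
using JuMP, SparseArrays

model = Model()
x = @variable(model, [1:3])
# Store the variables as AffExpr so the structural zeros are representable:
X = sparse([1, 2, 3], [1, 2, 3], convert.(AffExpr, x), 3, 3)
A = sprandn(3, 3, 0.5)
y = A * X  # falls back to generic (possibly slow, but correct) matmul
```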

To be honest, the last time I really looked at the JuMP sparse dictionary stuff I found it super annoying, because they did not behave enough like a normal AbstractArray in the cases I needed. Granted, it's been a while since I did anything with these, so the situation may have improved.