JuMP and SparseArrays

The background is

using SparseArrays, JuMP
m = Model();
@variable(m, x[1:2]);

My question is, can I create a sparse 2-by-2 object (whatever it is) denoted A, such that A[1, 1] returns x[1] and A[2, 1] returns x[2]? These two are the “nonzero” entries of the sparse object, like those of a SparseMatrixCSC in SparseArrays.jl; A[1, 2] and A[2, 2] are the “undefined zeros”, i.e., the dots shown when a SparseMatrixCSC is displayed in the REPL.

When A is used, e.g. in a constraint @constraint(m, A .== rhs), A[1, 2] and A[2, 2] should be recognized as a numeric 0. Whether that 0 is false, 0::Int, or 0.0::Float64 makes no difference once it is delivered to the solver. By “numeric” I mean that these entries are treated as data, not as decisions like A[1, 1] and A[2, 1].

A dense version of this desired A can be

julia> dense_version_A = Union{VariableRef, Bool}[x[1] false; x[2] false]
2×2 Matrix{Union{Bool, VariableRef}}:
 x[1]  false
 x[2]  false

But what about a sparse counterpart?
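For concreteness, a naive attempt (sketched here; how the structural zeros behave is exactly what is in question, and may vary by version) would be to hand `sparse` the stored entries directly:

```julia
using SparseArrays, JuMP

m = Model()
@variable(m, x[1:2])

# store x[1] at (1, 1) and x[2] at (2, 1);
# (1, 2) and (2, 2) are left as structural zeros
A = sparse([1, 2], [1, 1], [x[1], x[2]], 2, 2)
```

Whether indexing a structural zero of such an object then yields the numeric 0 I want, or some JuMP-expression zero, is precisely the question.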

additional question

Why do we need an AffExpr zero? Wouldn’t a plain numeric zero be simpler?
A numeric value is always easier to handle than a decision variable in an optimization problem.

julia> z = zero(x[1])
0

julia> z::Int
ERROR: TypeError: in typeassert, expected Int64, got a value of type AffExpr
Stacktrace:
 [1] top-level scope
   @ REPL[14]:1

And an undesirable consequence is

julia> zero(x[1]) * zero(x[2])
0

julia> typeof(ans)
QuadExpr (alias for GenericQuadExpr{Float64, GenericVariableRef{Float64}})

julia> @constraint(m, zero(x[1]) * zero(x[2]) >= x[1])
-x[1] >= 0

julia> typeof(ans)
ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarQuadraticFunction{Float64}, MathOptInterface.GreaterThan{Float64}}, ScalarShape}

julia> zero(x[1])^2 * zero(x[2])
(0) * (0)

julia> typeof(ans)
NonlinearExpr (alias for GenericNonlinearExpr{GenericVariableRef{Float64}})

Another example:

julia> using LinearAlgebra, JuMP

julia> m = Model();

julia> @variable(m, e[1:2]);

julia> @variable(m, d[1:3]);

julia> S = existing_S = SymTridiagonal(d, e)
3×3 SymTridiagonal{VariableRef, Vector{VariableRef}}:
 d[1]  e[1]  ⋅
 e[1]  d[2]  e[2]
 ⋅     e[2]  d[3]

julia> S[1, 3] * S[1, 2]
0

julia> typeof(ans)
QuadExpr (alias for GenericQuadExpr{Float64, GenericVariableRef{Float64}})

julia> 0 * S[1, 2]
0

julia> typeof(ans)
AffExpr (alias for GenericAffExpr{Float64, GenericVariableRef{Float64}})

julia> 0
0

julia> typeof(ans) # This is the desired, I think
Int64

I assume that it could be better for users to create a sparse container themselves. Do you think so? Like the following:

using JuMP
m = Model();
@variable(m, d[1:4]);
@variable(m, u[1:3]);
# the desired sparse matrix (SymTridiagonal in this context)
# can be embodied in this user-defined entry function
function e(i, j)
    if j == i
        return d[i]
    elseif j == i + 1
        return u[i]
    elseif j > i
        return 0 # the desired numeric zero
    else
        return e(j, i)
    end
end;
[e(i, j) for i in 1:4, j in 1:4] # just take a look. (we won't create this Matrix in practice)
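Continuing the snippet above, the entry function can be used directly inside constraint macros, so the numeric zeros stay plain data and no dense matrix is ever materialized (the right-hand side `b` below is a made-up example):

```julia
b = ones(4)
# one constraint per row; e(i, j) returns a plain 0 off the band,
# so each sum is a linear expression over d and u only
@constraint(m, [i in 1:4], sum(e(i, j) for j in 1:4) == b[i])
```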

So, a larger question is

  • would it be meaningful to let SparseArrays.jl contain non-numeric objects?

At least according to the above tests, my answer would be “No”.

My question is, can I create a sparse 2-by-2 object (whatever it is) denoted A, such that

See this part of the documentation: Variables · JuMP

Why do we need a AffExpr zero? Wouldn’t a plain numeric zero be simpler?

The main problem is that SparseArrays.jl assumes zero(T)::T, which is not true for JuMP variables. This leads to all manner of issues.
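The violated assumption is easy to see without SparseArrays at all; a minimal check, using the model from the question:

```julia
using JuMP

m = Model()
@variable(m, x[1:2])

# zero(VariableRef) returns an AffExpr, not a VariableRef,
# so SparseArrays' zero(T)::T assumption does not hold here
z = zero(x[1])
z isa VariableRef   # false
z isa AffExpr       # true
```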

And an undesirable consequence is

Yes. This is expected behavior. There’s no solution or work-around.

I assume that it could be better for the user themselves to create a sparse container

See this open issue: SparseMatrix support · Issue #4005 · jump-dev/JuMP.jl · GitHub


About the type conversion:

julia> false::Bool * 0::Int
0

julia> typeof(ans)
Int64

julia> [false, 0]
2-element Vector{Int64}:
 0
 0

julia> eltype(ans)
Int64

I think the above are “okay” conversions, as Ints are only slightly more cumbersome than Bools, and modern computers can even work fluently with Float64s.
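These conversions follow Julia's ordinary promotion rules, which can be inspected directly:

```julia
# Bool promotes to Int, and Int promotes to Float64; the result
# of each promotion is still plain numeric data
promote_type(Bool, Int)      # Int64
promote_type(Int, Float64)   # Float64
promote(false, 0)            # (0, 0), both Int64
```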

This is not the case in operations research.
Promoting a numeric/data zero to a VariableRef or AffExpr zero would obfuscate the problem formulation. Unlike promoting an Int to a Float64, it is a bad idea to promote a linear constraint to a quadratic one: the latter is much harder, and might even be unsolvable (e.g., before Gurobi 8.0).

There are a number of undesirable consequences of JuMP's choice of zero(VariableRef) = AffExpr, but we're not changing it now because that would be a breaking change.