The background is:

```julia
using SparseArrays, JuMP
m = Model();
@variable(m, x[1:2]);
```
My question is: can I create a sparse 2-by-2 object (whatever its type), denoted `A`, such that `A[1, 1]` returns `x[1]` and `A[2, 1]` returns `x[2]`? (These two entries would be the "nonzeros" of the sparse object, like the stored entries of a `SparseMatrixCSC` in SparseArrays.jl.) `A[1, 2]` and `A[2, 2]` would be the "undefined zeros", i.e., the dots you see when a `SparseMatrixCSC` is displayed in the REPL.

When `A` is used, e.g., in a constraint `@constraint(m, A .== rhs)`, `A[1, 2]` and `A[2, 2]` should be recognized as a numeric `0`. Whether that zero is `false`, `0::Int`, or `0.0::Float64` makes no difference; they are all the same once delivered to the solver. By "numeric" I mean that they are treated as data, not as decisions like `A[1, 1]` and `A[2, 1]`.
A dense version of this desired `A` can be:

```julia
julia> dense_version_A = Union{VariableRef, Bool}[x[1] false; x[2] false]
2×2 Matrix{Union{Bool, VariableRef}}:
 x[1]  false
 x[2]  false
```
But what about a sparse counterpart?
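A naive attempt at a sparse counterpart is to hand the variables directly to `sparse` (a sketch, not a solution; `sparse(I, J, V, m, n)` is the standard SparseArrays constructor):

```julia
using SparseArrays, JuMP

m = Model()
@variable(m, x[1:2])

# Store x[1] and x[2] as the only nonzeros, in column 1.
A = sparse([1, 2], [1, 1], [x[1], x[2]], 2, 2)

A[1, 1]  # the stored entry x[1]
A[1, 2]  # a structural zero: materializes as zero(VariableRef)
```

But indexing a structural zero here yields `zero(VariableRef)`, i.e., an `AffExpr` zero rather than a numeric one, which is exactly what I would like to avoid (see the additional question below).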
Additional question
Why do we need an `AffExpr` zero? Wouldn't a plain numeric zero be simpler? A numeric value is always easier to handle than a decision variable in optimization problems.
```julia
julia> z = zero(x[1])
0

julia> z::Int
ERROR: TypeError: in typeassert, expected Int64, got a value of type AffExpr
Stacktrace:
 [1] top-level scope
   @ REPL[14]:1
```
And an undesirable consequence is:

```julia
julia> zero(x[1]) * zero(x[2])
0

julia> typeof(ans)
QuadExpr (alias for GenericQuadExpr{Float64, GenericVariableRef{Float64}})

julia> @constraint(m, zero(x[1]) * zero(x[2]) >= x[1])
-x[1] >= 0

julia> typeof(ans)
ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarQuadraticFunction{Float64}, MathOptInterface.GreaterThan{Float64}}, ScalarShape}

julia> zero(x[1])^2 * zero(x[2])
(0) * (0)

julia> typeof(ans)
NonlinearExpr (alias for GenericNonlinearExpr{GenericVariableRef{Float64}})
```
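For contrast, if the structural zeros were plain numeric data, the same product would fold to a number before any expression is built, and the constraint would stay linear (a sketch, with `0.0` standing in for the desired numeric zero):

```julia
using JuMP

m = Model()
@variable(m, x[1:2])

# 0.0 * 0.0 is evaluated as plain arithmetic, so the constraint stays affine.
c = @constraint(m, 0.0 * 0.0 >= x[1])
```

That is the behavior I would want from the "undefined zeros" of the sparse `A`.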