Tackling the AbstractArray fallback trap - wrapped sparse matrices

When handling wrapped sparse matrices, for example Symmetric(sparse(...)), it easily happens that the performance is not as expected, because you get caught in the “abstract array fallback” trap: although you expect an algorithm that takes advantage of the sparse structure of your problem, a generic algorithm for argument type AbstractArray or AbstractMatrix takes over, even though the same algorithm is available specialized for SparseMatrixCSC. This is not a new issue - see for example 1 or 2.
In my opinion it is also a severe competitive drawback compared to MATLAB if we cannot handle really big sparse symmetric matrices in a better way.

I want to discuss a way out of the trouble.

  1. provide a unified way to transform any kind of wrapped sparse matrix to a SparseMatrixCSC
  2. provide specialized algorithms for selected wrapped types
  3. establish a “sparse matrix fallback”, which replaces the “abstract array fallback” for wrapped sparse types
  4. avoid deeply nested wrapper types by providing quasi-constructors, which simplify combinations of wrappers

Ad 1: I would like to have something like unwrap(A::AbstractArray), which returns a Matrix or a SparseMatrixCSC depending on the underlying storage of A. If A already has the target type, A itself is returned. Nested wrappers must also be supported.
Currently we have tri[ul] to convert triangular matrices, getindex to convert SubArrays, and methods to transform transposed and adjoint wrappers of Matrix and SparseMatrixCSC.
Missing are methods for symmetric and hermitian matrices, for Unit[Upper|Lower]Triangular, for Diagonal, Bidiagonal, etc., and for nested wrappers, e.g. Adjoint(view(sparse(...))).
And there is currently no single function that performs all of these conversions.
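
To make the idea concrete, here is a minimal sketch of what such a unified function could look like. The name unwrap and the helper issparsestorage are assumptions for illustration only, not the actual SparseWrappers API, and the generic sparse(A)/Matrix(A) calls may themselves hit slow fallbacks that the proposal wants to replace with specialized conversions.

using LinearAlgebra, SparseArrays

# Hypothetical helper: decide the target storage by peeling off wrappers.
issparsestorage(::SparseMatrixCSC) = true
issparsestorage(::AbstractArray) = false
issparsestorage(A::Union{Transpose,Adjoint,Symmetric,Hermitian,
                         UpperTriangular,LowerTriangular,SubArray}) =
    issparsestorage(parent(A))

# If A already has the target type, return it unchanged ...
unwrap(A::Union{Matrix,SparseMatrixCSC}) = A
# ... otherwise materialize it as Matrix or SparseMatrixCSC, nested wrappers included.
unwrap(A::AbstractMatrix) = issparsestorage(A) ? sparse(A) : Matrix(A)

With these definitions unwrap(Symmetric(sprand(4, 4, 0.5))) returns a SparseMatrixCSC, while unwrap(Symmetric(rand(4, 4))) returns a Matrix.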

Ad 2: There has already been some effort in this area for a few common cases. Nevertheless there are quite a few algorithms and quite a lot of useful (nested) wrappers; potentially the number of nested wrappers is unlimited. So this approach can improve the situation at the most important edges, but it can never be complete.

Ad 3: Some or all methods with argument type AbstractMatrix that implement an algorithm via the standard interface ([], getindex, setindex!) are affected. These methods obtain a new signature (a rename or an extra trait argument). At the same time a new method with the original signature is defined, which dispatches to the appropriate implementation method. The special cases of item 2 and the specialized implementations for SparseMatrixCSC have to be taken into account by this dispatching process.
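
The pattern could look roughly like the following sketch; the example function colsum and the trait storage_trait are invented here, standing in for whatever renaming or extra trait argument is eventually chosen.

using LinearAlgebra, SparseArrays

struct DenseStorage end
struct SparseStorage end

# Trait computed purely from type information, peeling off wrappers.
storage_trait(::AbstractMatrix) = DenseStorage()
storage_trait(::SparseMatrixCSC) = SparseStorage()
storage_trait(A::Union{Transpose,Adjoint,Symmetric,Hermitian,
                       UpperTriangular,LowerTriangular}) = storage_trait(parent(A))

# The method with the original signature only dispatches on the trait.
colsum(A::AbstractMatrix) = colsum(storage_trait(A), A)

# Former "abstract array fallback": generic getindex-based implementation.
colsum(::DenseStorage, A::AbstractMatrix) = [sum(A[:, j]) for j in 1:size(A, 2)]

# "Sparse matrix fallback": convert the wrapped matrix once, then use the CSC internals.
colsum(::SparseStorage, A::AbstractMatrix) = colsum(SparseStorage(), sparse(A))
colsum(::SparseStorage, A::SparseMatrixCSC) =
    [sum(view(nonzeros(A), nzrange(A, j))) for j in 1:size(A, 2)]

In this scheme the generic implementation keeps serving dense storage, while any wrapped sparse matrix is routed to the sparse branch; the specialized methods of item 2 would simply be additional methods of the trait-carrying signature.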

Ad 4: We already have transpose and adjoint for this purpose. Most constructors also avoid producing duplicates like Symmetric(Symmetric(A)), and Hermitian(Symmetric(A)) = Hermitian(A), but these constructors do a half-hearted job, and Symmetric(Hermitian(A)) = Symmetric(A) is not correct.
Combinations with triangular matrices and with the structured types (Diagonal, …) also allow simplifications, which can reduce the nesting depth. In fact, if we exclude SubArray and ReshapedArray, we can reduce the potentially infinite number of wrapper combinations to a finite set. So I propose new quasi-constructors like symmetric, upper_triangular, etc. besides transpose and adjoint.
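
As an illustration of what such a quasi-constructor could do, here is a made-up sketch of a symmetric function (not the SparseWrappers definition) that returns its argument unchanged whenever wrapping would be redundant:

using LinearAlgebra

# Generic case: wrap as usual.
symmetric(A::AbstractMatrix, uplo::Symbol=:U) = Symmetric(A, uplo)
# Already symmetric by construction: no extra wrapper needed.
symmetric(A::Symmetric, uplo::Symbol=:U) = A
symmetric(A::Union{Diagonal,SymTridiagonal}, uplo::Symbol=:U) = A
# A real Hermitian matrix is symmetric as well.
symmetric(A::Hermitian{<:Real}, uplo::Symbol=:U) = A

The example further below, lower_triangular(Diagonal([1, 2])), shows the same principle: the quasi-constructor notices that a Diagonal is already lower triangular and returns it without adding a wrapper.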

I implemented most of those ideas in SparseWrappers.
It was not possible to implement the “sparse matrix fallback” pattern as a real implementation, because that would require changes to the existing “abstract array fallback” implementations in the stdlib. The pattern is demonstrated in an example.
As a principle, all branches are evaluated at compile time, using only type information. That was not possible in some areas where structural information is stored only in a data field (Symmetric.uplo …).
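
A small REPL illustration of that limitation (nothing beyond stdlib assumed): the uplo flag is a runtime field, not a type parameter, so a branch on it cannot be resolved from type information alone.

julia> using LinearAlgebra

julia> S = Symmetric(rand(3, 3), :L);

julia> typeof(S)        # no trace of :L in the type
Symmetric{Float64,Array{Float64,2}}

julia> S.uplo == 'L'    # only available as a runtime value
true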

In order to make these improvements available to the public, I would like to move them piece by piece to stdlib/SparseArrays and stdlib/LinearAlgebra. There seems to be a review bottleneck though, e.g. #30018.


There are two concepts here that are not easily separable: storage (dense, sparse, block) and matrix properties (diagonal, unit lower triangular, symmetric, hermitian). Currently, wrappers like Symmetric accept all abstract matrices (that satisfy some properties, eg are square) and impose symmetry by using only the elements above/below the diagonal, but it is my impression that this works ideally only when the underlying storage is dense.

Moreover, a lot of LinearAlgebra routines use modifying versions (eg mul! as a fallback), which do not necessarily mesh well with anything but dense matrices (and some similar layouts).

I run into these things all the time, eg #28869, but I don’t know what a good solution would be. Perhaps a disjoint set of types instead of wrappers, and traits to query matrix properties (eg a matrix can be diagonal, lower triangular, upper triangular, and sparse at the same time), which would allow some (possibly not 100% optimal) fallbacks until all cases are implemented.
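
A rough sketch of that traits idea, with invented query names (the point being that several properties can hold simultaneously for one and the same matrix):

using LinearAlgebra, SparseArrays

# Hypothetical orthogonal property queries; in practice these could be cheap
# type-based traits, falling back to value checks like istril/isdiag.
has_sparse_storage(A::AbstractMatrix) = A isa SparseMatrixCSC
is_lower_triangular(A::AbstractMatrix) = istril(A)
is_diagonal(A::AbstractMatrix) = isdiag(A)

D = sparse(Diagonal([1.0, 2.0, 3.0]))
has_sparse_storage(D), is_diagonal(D), is_lower_triangular(D)   # (true, true, true)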


I do not agree. The mul!(::StridedArray, X, ::StridedArray) methods are a counterexample, where the sparsity of X can be used efficiently. Implementations for Transpose(sparse), UpperTriangular(sparse), Transpose(LowerTriangular(sparse)) etc. are already merged in master, while Symmetric(sparse) etc. exist as a reviewable PR.
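
For example (sizes picked arbitrarily for illustration), the three-argument mul! with a sparse middle argument already uses the sparse kernel even though the destination and the right operand are dense:

julia> using LinearAlgebra, SparseArrays

julia> X = sprand(1000, 1000, 0.01);

julia> B = rand(1000, 200); C = zeros(1000, 200);

julia> mul!(C, X, B);   # hits the SparseMatrixCSC method; work scales with nnz(X), not with 1000^2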

You get

julia> using SparseWrappers

julia> A = lower_triangular(Diagonal([1, 2]))
2×2 Diagonal{Int64,Array{Int64,1}}:
 1  ⋅
 ⋅  2

julia> A * A'
2×2 Diagonal{Int64,Array{Int64,1}}:
 1  ⋅
 ⋅  4

That is a point for item 4. in the original post.

That is true. My statement is not that it is easy, but that it is possible and has been done.

When the underlying storage is sparse, there is a gap in the current release of Julia. This proposal aims to close that gap.

There is now a new traits-based idea for this problem. See the README.
Comments are welcome.
