Symmetric matrices

According to the docs, we can create a Symmetric matrix from another matrix that is already stored in memory:

A = eye(3)
B = Symmetric(A)
sizeof(A) # 72 bytes
sizeof(B) # 16 bytes

How can one populate a Symmetric matrix directly, without having to store a full matrix first? In particular, it would be very useful if indexing a Symmetric matrix had the expected behavior that whenever B[i,j] is set to b, B[j,i] is automatically set to b as well.

I would like to save memory while working with big covariance matrices, and also to enforce symmetry in the type so that the appropriate LinAlg solvers are dispatched. Is this a feature that you consider relevant?

B[1,2] = 2.0 # ERROR: setindex! not supported

If you are using dense matrices (or sparse for that matter, although see #22200), you can’t. Most algorithms depend on matrix multiplication, diagonalisation, etc., which are BLAS or LAPACK routines that take in the full dense array, despite not needing all of it. Even if your application does not depend on those operations, the Symmetric type in Julia is just a wrapper around the full array. You would need to make your own type that stores only the entries you want; examples of similar types are Diagonal and SymTridiagonal, which store only the required entries.
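
For example, a minimal sketch of such a type might look like this (untested; the name PackedSymmetric and all details are mine, not anything in Base). It stores only the upper triangle in a flat vector and mirrors indexing, so setting B[i,j] automatically sets B[j,i] because both map to the same storage slot. Generic fallbacks would work on it, but without further methods it would not hit the fast BLAS/LAPACK paths:

struct PackedSymmetric{T} <: AbstractMatrix{T}
    data::Vector{T}   # upper triangle, packed column by column, length n(n+1)/2
    n::Int
end

PackedSymmetric{T}(n::Integer) where {T} = PackedSymmetric{T}(zeros(T, n*(n + 1) ÷ 2), n)

Base.size(S::PackedSymmetric) = (S.n, S.n)

# position of element (i,j) with i <= j in the packed vector
packedindex(i, j) = i + (j - 1)*j ÷ 2

Base.getindex(S::PackedSymmetric, i::Int, j::Int) =
    i <= j ? S.data[packedindex(i, j)] : S.data[packedindex(j, i)]

function Base.setindex!(S::PackedSymmetric, v, i::Int, j::Int)
    i <= j ? (S.data[packedindex(i, j)] = v) : (S.data[packedindex(j, i)] = v)
    return S
end

S = PackedSymmetric{Float64}(3)
S[1, 2] = 5.0
S[2, 1]   # 5.0, same storage slot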

@jebej, thank you for sharing the issue on GitHub. I tried to digest the information there, but I couldn’t understand why Symmetric cannot be made to store only half of the elements, like Diagonal and SymTridiagonal do.

Couldn’t a Symmetric matrix be converted to a dense representation at the last minute, when BLAS/LAPACK routines are called? We would still save memory, correct?

It could, but I assume the reason it’s not done is that the benefits do not outweigh the disadvantages (the conversion is fairly expensive). Plus, if you do end up materializing the matrix for a BLAS call, then you would need both the hypothetical SymmetricWhichSavesMemory matrix and the BLAS-required dense matrix in memory at once, so it would end up using more memory.
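
To put rough numbers on that (back-of-the-envelope arithmetic, assuming Float64 and n = 10_000):

n = 10_000
dense_bytes  = 8 * n^2               # ≈ 800 MB for the full dense matrix
packed_bytes = 8 * n * (n + 1) ÷ 2   # ≈ 400 MB for just one triangle
# Converting the packed copy to dense right before a BLAS call means both
# are alive at once: ≈ 1.2 GB peak, versus 800 MB for simply keeping it dense.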

Note that if you are only looking to save memory for long-term storage, you could always store only the upper triangle in a sparse matrix. To make it dense again you could then call Symmetric(full(A)).
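
Something along these lines, for example (a sketch; full(A) is the pre-0.7 spelling, Matrix(A) in current Julia):

using SparseArrays, LinearAlgebra

A = randn(4, 4); A = A + A'    # a symmetric matrix
U = sparse(triu(A))            # keep only the upper triangle for storage
S = Symmetric(Matrix(U))       # densify again when full routines are needed
S == A                         # true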

LAPACK supports a packed storage layout for symmetric matrices. That’s not what Symmetric uses, I think for performance reasons. But it would make sense to have a type supporting it (or maybe even to do that by default) to save memory.
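
Concretely, the 'U' packed layout stores the upper triangle column by column in a flat vector of length n(n+1)/2. A small illustration (my own sketch, not code from LAPACK or Base):

# pack the upper triangle of a symmetric matrix, column by column
function packU(A::AbstractMatrix)
    n = size(A, 1)
    AP = Vector{eltype(A)}(undef, n*(n + 1) ÷ 2)
    k = 0
    for j in 1:n, i in 1:j
        k += 1
        AP[k] = A[i, j]
    end
    return AP
end

A  = [2.0 1.0 0.5; 1.0 3.0 0.1; 0.5 0.1 4.0]
AP = packU(A)   # [2.0, 1.0, 3.0, 0.5, 0.1, 4.0], 6 entries instead of 9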

I would love to have these memory savings with big symmetric matrices; it sounds much smarter and more efficient. I don’t have a clear picture of why this is not feasible, so I will leave that question to the LAPACK experts.

It’s only a factor of two in memory, which is usually not the difference between feasible and infeasible, and the price you pay is that operations on the packed format are much less efficient (you probably lose much more than a factor of two in performance for most matrix operations).

For some operations (especially Cholesky and triangular solves), LAPACK has routines for the rectangular full packed (RFP) format (newer than the format @nalimilan and @stevengj discuss above), which

enables the use of half the full storage while maintaining efficiency by using Level 3 BLAS/LAPACK kernels.

(details here, about halfway down the page).

If these capabilities fit the bill, the price to be paid is in developer time.

@Ralph_Smith Sounds very interesting. Could you file an issue describing the new format a bit and (if possible) how it could be implemented in Julia?

Have there been any developments or further discussions on this? Symmetric matrix operations are ubiquitous in optimization, for which Julia has great potential.

cf. discussion here:
https://github.com/JuliaLang/julia/issues/22259

Note that Andreas renamed his package with wrappers for RFP routines to GenericLinearAlgebra.

I’ve just made SymmetricFormats.jl public, which provides the packed storage format for symmetric matrices that @nalimilan mentioned above. It should be in the General registry in a couple of days.

My use case is a simulation where a symmetric correlation matrix consumes the vast majority of memory. Saving 2x on memory permits me to train models 2x bigger. Moreover, I find that using packed arrays for this particular codebase is also almost 2x faster, contrary to @stevengj’s comment above, perhaps because I rely on only two BLAS routines (spmv and syr): syr computes A += alpha * x * x’, so the iteration over A is halved.
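
For anyone curious, the packed calls look roughly like this with the stock LinearAlgebra.BLAS wrappers (spmv! and, in sufficiently recent Julia versions, spr!, the packed analogue of syr!; SymmetricFormats.jl wraps calls of this kind, though its exact API may differ):

using LinearAlgebra: BLAS

A  = [2.0 1.0 0.5; 1.0 3.0 0.1; 0.5 0.1 4.0]
AP = [2.0, 1.0, 3.0, 0.5, 0.1, 4.0]   # upper triangle of A, packed column by column
x  = [1.0, 2.0, 3.0]
y  = zeros(3)

BLAS.spmv!('U', 1.0, AP, x, 0.0, y)   # y := A*x, reading only the packed triangle
y ≈ A * x                             # true

BLAS.spr!('U', 0.1, x, AP)            # AP := AP + 0.1*x*x', touching only n(n+1)/2 entries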

I’m particularly interested in feedback on an open PR to this package, which provides an optional way to set off-diagonal elements, a frequently requested feature.

I’ve also just made BatchedBLAS.jl public, which provides additional tooling for symmetric packed arrays on GPUs: specifically, batched_{syr,spr,symv,spmv} and, for comparison, batched_{ger,gemv,dot}. It was surprisingly easy to get within 5% of CUDA code with pure Julia kernels!
