# Is there a Symmetric Sparse Matrix Implementation in Julia?

Hi, I want a symmetric sparse matrix that behaves like this

```julia
using StaticArrays
using Accessors

A = zeros(SHermitianCompact{3})
A = @set A[1, 2] = 1
```
```
3×3 SHermitianCompact{3, Float64, 6} with indices SOneTo(3)×SOneTo(3):
 0.0  1.0  0.0
 1.0  0.0  0.0
 0.0  0.0  0.0
```

Basically, whenever I modify a value of the matrix, I want it to understand that it's symmetric. Also, note that

```julia
A[2, 1]
```
```
1.0
```

`SHermitianCompact` is great, it is exactly what I want, but it has a problem: I need a huge matrix, for example 10_000 × 10_000. At every step, I will run a for loop that looks like this:

```julia
for element in elements
    i, j = index(element)
    if condition(element)
        A = @set A[i, j] = 0.0
    else
        A = @set A[i, j] = value(element)
    end
end
```

I know for a fact that most of the matrix elements will be 0, so a sparse matrix would be ideal. However, looking through most of the sparse matrix implementations, `SparseArrays`, `SparseMatricesCSR`, `SuiteSparseGraphBLAS`, and `SparseArrayKit` don't have what I need. Actually, `SparseMatricesCSR` has a symmetric sparse type, but because of the lack of documentation I couldn't make it work.

Finally, most of these implementations store 0 as a new element:

```julia
using ExtendableSparse

A = ExtendableSparseMatrix(zeros(4, 4))
A[1, 2] = 1.0
A[1, 2] = 0.0
A
```
```
4×4 ExtendableSparseMatrix{Float64, Int64} with 1 stored entry:
  ⋅   0.0   ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅
  ⋅    ⋅    ⋅    ⋅
```

and to remove it one has to call `dropzeros!`. I would like this to happen automatically; I don't want to call that function every time I assign a zero.

Do you know of a package that implements this? Do you know a good way of achieving it? What do you recommend?

Thanks.

You can create the upper (or lower) half and then use the `Symmetric` wrapper from `LinearAlgebra`, or add the transpose.

Whether this method, or any other "symmetric" approach, is a good idea ultimately depends on what you will use the matrix for. For example, if you are solving a linear system, the solver may not have a special method for (or be faster in) the symmetric case: the default Cholesky factorization will exploit the symmetry if you use the method I mentioned above, but there is nothing similar for the default LU factorization. For iterative methods I think it is also faster to have the full matrix when doing the matrix-vector multiplications. Quite often it is much better to create the full matrix; the storage savings would only be a factor of 2 anyway.
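A minimal sketch of the wrapper approach, assuming only the standard-library `SparseArrays` and `LinearAlgebra` (the fill-in values here are illustrative):

```julia
using SparseArrays, LinearAlgebra

# Store only the upper triangle as a sparse matrix.
U = spzeros(4, 4)
U[1, 2] = 1.0        # fill only entries with i <= j

# Symmetric view: reads mirror the stored triangle, no extra storage.
S = Symmetric(U, :U)

S[2, 1]  # reads 1.0 through the symmetry, even though only U[1, 2] is stored
```

Note that `Symmetric` is a read-through view: you keep writing into the underlying triangle `U`, and solvers such as `cholesky(S)` can then dispatch on the symmetric type.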

Adding new structural elements like that (and removing them) is pretty expensive, so I would not insert elements one at a time in the first place.
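Concretely, instead of mutating a sparse matrix entry by entry, you can collect the nonzero entries in coordinate (I, J, V) form and build the matrix once with `sparse`; entries you never push are simply never stored, so no `dropzeros!` is needed. A sketch, reusing the hypothetical `elements`, `condition`, `index`, and `value` from the loop above:

```julia
using SparseArrays

Is = Int[]; Js = Int[]; Vs = Float64[]
for element in elements
    condition(element) && continue   # would-be zeros are never stored
    i, j = index(element)
    push!(Is, i); push!(Js, j); push!(Vs, value(element))
end
# Build the 10_000 × 10_000 matrix in one shot; duplicates are summed by default.
A = sparse(Is, Js, Vs, 10_000, 10_000)
```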


Thanks for your answer. Actually, I’m only using it to store and get data from the matrix. I’m not doing any linear algebra on it.
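If all you need is symmetric get/set with zeros never stored, a hash map keyed on the sorted index pair is one way to sketch it. This is not a package; the type name and helper are made up for illustration:

```julia
# Minimal symmetric sparse store: keeps only nonzeros,
# and treats (i, j) and (j, i) as the same entry.
struct SymSparse
    data::Dict{Tuple{Int,Int},Float64}
end
SymSparse() = SymSparse(Dict{Tuple{Int,Int},Float64}())

# Canonical key: always store with the smaller index first.
symkey(i, j) = i <= j ? (i, j) : (j, i)

Base.getindex(A::SymSparse, i::Int, j::Int) = get(A.data, symkey(i, j), 0.0)

function Base.setindex!(A::SymSparse, v, i::Int, j::Int)
    if iszero(v)
        delete!(A.data, symkey(i, j))   # assigning zero removes the entry
    else
        A.data[symkey(i, j)] = v
    end
    return v
end

A = SymSparse()
A[1, 2] = 1.0
A[2, 1]        # 1.0, read through the symmetry
A[1, 2] = 0.0  # entry deleted; nothing is stored
```

Lookups and updates are O(1) on average, which fits the store-and-retrieve use case, though it gives up the linear-algebra methods a real sparse matrix type would provide.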