Specific methods for symmetric matrices

I’m defining a function f(m) that operates on a big matrix m that may or may not be symmetric. If it is symmetric, the algorithm can be optimized by working only on the upper triangle.

I know that the symmetry of m can be checked with issymmetric(m), but for big matrices that check also has a cost, and in some cases I already know that m is symmetric, because it has been created by another function g that always produces symmetric matrices.

I guess there must be something I could do with types in order to apply the efficient algorithm for symmetric matrices when suitable (without using issymmetric), but I’m failing to find the correct solution.

I have tried defining a special method f(x::Symmetric) that implements the efficient algorithm, and calling m = Symmetric(m) in the last line of the function g that produces the symmetric matrix. But the conversion to Symmetric also takes time, and performance can end up even worse.

Now I was trying to create a subtype of AbstractMatrix, ad hoc for the output of g, which I could use in the signature of f instead of Symmetric. But I don’t know how to declare such a type.

https://docs.julialang.org/en/stable/manual/types/ explains how to define subtypes of scalars, composite types, etc., but not how to define a subtype of vectors, matrices, and so on. Doing it as for scalars, with abstract type, I get an “invalid subtyping” error.


It sounds like you actually have almost the right implementation already. As I understand it, what you’re doing is:

f(x::Symmetric) = <do the fast symmetric version>

f(x::AbstractMatrix) = <do the slow non-symmetric version>

and then when you know you have a symmetric input, you do:

f(Symmetric(m))
That sounds totally fine to me. You could certainly implement your own type which is a subtype of AbstractMatrix, but you might not need to.
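In case you do want to go the custom-type route, the trick is that the wrapper must be a concrete `struct` that subtypes `AbstractMatrix{T}` and implements the two required interface methods, `size` and `getindex`. A minimal sketch (the name `KnownSymmetric` is made up for illustration, not anything from your code):

```julia
import Base: size, getindex

# Hypothetical wrapper marking a matrix as known to be symmetric.
struct KnownSymmetric{T} <: AbstractMatrix{T}
    data::Matrix{T}
end

# `size` and `getindex` are enough for the read-only AbstractArray
# interface: printing, iteration, and generic algorithms all work.
size(s::KnownSymmetric) = size(s.data)
getindex(s::KnownSymmetric, i::Int, j::Int) = s.data[i, j]

# Dispatch then picks the fast path for the wrapper:
f(x::KnownSymmetric) = "fast symmetric version"
f(x::AbstractMatrix) = "slow generic version"
```

With that, `f(KnownSymmetric(m))` hits the fast method and `f(m)` the generic one, exactly like the `Symmetric` pattern above.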

However, you mentioned that the conversion to Symmetric(m) takes time and hurts your performance. That shouldn’t be the case: Symmetric is just an extremely cheap wrapper around the existing matrix. For example:

julia> using BenchmarkTools

julia> A = rand(1000, 1000);

julia> @btime Symmetric($A);
  13.772 ns (1 allocation: 32 bytes)

Constructing the Symmetric wrapper takes less than 14 nanoseconds, which is pretty fast.
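It’s that cheap because `Symmetric` does not copy the matrix; the wrapper just keeps a reference to the original array:

```julia
using LinearAlgebra

A = [1.0 2.0; 2.0 1.0]
S = Symmetric(A)

# No copy is made: the wrapper references the very same array.
@assert parent(S) === A

# Mutating A is therefore visible through the wrapper.
A[1, 1] = 99.0
@assert S[1, 1] == 99.0
```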

I suspect the performance problem with Symmetric is coming from some other aspect of your code. For example, the following code will be slower:

m = rand(10, 10)  # in this line, m is a Matrix{Float64}
m = Symmetric(m)  # now m has changed type to a `Symmetric{Float64, Matrix{Float64}}`

because the variable m changes type, which is the definition of type-unstable code. See: https://docs.julialang.org/en/stable/manual/performance-tips/#Avoid-changing-the-type-of-a-variable-1
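A simple way to avoid that is to give the wrapped matrix its own name (or wrap at the call site), so each variable keeps a single concrete type. A sketch, with a stand-in `g` that produces a symmetric matrix (your real `g` will differ):

```julia
using LinearAlgebra

# Stand-in for a function that always returns a symmetric matrix.
function g(n)
    a = rand(n, n)
    a + a'            # symmetric by construction
end

m  = g(10)            # m stays a Matrix{Float64} throughout
ms = Symmetric(m)     # the wrapper gets its own, concretely typed name
```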

If you share more about your actual code we can probably help you figure out why the Symmetric wrapper is slow for you.


You are right. The problem was in another place, not with Symmetric.

Just to explain what happened: I neglected to mention that I am actually working with symmetric sparse bool matrices (SparseMatrixCSC{Bool,Int64}), and my original code was like this:

function f(x)
    # do some stuff, and call another function h
end

# Different methods of h:
function h(x::SparseMatrixCSC{Bool})
    # Efficient method that goes through nonzero cells
end
function h(x::AbstractMatrix)
    # Slower method that traverses the full matrix
end

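(For reference, the “efficient method” pattern for sparse matrices uses `nzrange`, `rowvals` and `nonzeros` to visit only the stored entries, column by column. A small self-contained sketch, not the actual `h`:)

```julia
using SparseArrays

# Count the `true` entries of a sparse Bool matrix by visiting
# only the stored cells, column by column.
function count_true(x::SparseMatrixCSC{Bool})
    vals = nonzeros(x)
    n = 0
    for j in 1:size(x, 2)
        for k in nzrange(x, j)   # indices of column j's stored entries
            n += vals[k] ? 1 : 0
        end
    end
    n
end
```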
I had created a new function hsym with equivalent methods, but only going through the upper triangle, and then added:

function f(x::Symmetric)
    # call the symmetric version of h
end

Now, Symmetric <: AbstractMatrix, but a Symmetric wrapper around a sparse matrix is not a SparseMatrixCSC, so I was calling the wrong, slower method of hsym. I fixed it with:

function hsym(x::Symmetric{Bool,<:SparseMatrixCSC})
    # Efficient method that goes through the nonzero cells of x.data
end
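(To illustrate the dispatch issue: a Symmetric wrapper around a sparse matrix is an AbstractMatrix but not a SparseMatrixCSC, so a method constrained to SparseMatrixCSC is skipped unless you also dispatch on the wrapper’s parameters. The names here are a sketch, not the actual hsym:)

```julia
using LinearAlgebra, SparseArrays

# Which method does each kind of argument hit?
which_h(x::SparseMatrixCSC{Bool}) = "sparse method"
which_h(x::Symmetric{Bool,<:SparseMatrixCSC}) = "symmetric sparse method"
which_h(x::AbstractMatrix) = "generic method"

s = sparse([1, 2], [2, 1], [true, true], 2, 2)
```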

I have changed the title of the conversation, since I think the new one reflects better what we are talking about.

By the way, I found out only by chance that I can access the underlying data of x::Symmetric through x.data, since that field is not documented. So I wonder if this is the right way of solving my problem, or just a hack that might break without notice.
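(For the record, the exported `parent` function returns the same underlying array, which is probably a safer bet than reaching into the undocumented `.data` field:)

```julia
using LinearAlgebra

A = [1 2; 2 1]
S = Symmetric(A)

# `parent` is the public accessor for the wrapped array;
# it returns the same object the `.data` field holds.
@assert parent(S) === A
@assert parent(S) === S.data
```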

That’s not really what “type-unstable” means, in my understanding. The docs call it “an analogous ‘type-stability’ problem”. Type instability generally means that the output type of an operation is not predictable from the input types alone, but depends on run-time values or some other run-time information.

The code above is entirely type-predictable in principle, but the compiler is for some reason not currently clever enough to track type changes in a variable.