So I was testing how Julia (1.1.0-DEV.452) handles matrices with different kinds of properties (Symmetric, SPD, UpperTriangular, etc.) and how much faster certain operations (e.g. inv(A)) perform based on those properties. I noticed that for random, Symmetric, and SPD matrices (of size 1000x1000, Float64), performance (in terms of execution time, measured with the BenchmarkTools package) was pretty much the same. That seems odd, considering that, for example, computing the Cholesky factorization of an SPD matrix and then inverting from it is much faster.

Then I decided to use the Profile module to see exactly which functions run in each case. As you can see in the screenshot, in both cases an LU factorization is called, even though that is not optimal for an SPD matrix.
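For reference, a minimal sketch of how such a profiling run can be set up (the matrix here is just an illustrative symmetric example, not the one from the screenshot):

```julia
using LinearAlgebra, Profile

A = rand(1000, 1000)
S = Symmetric(A + A')   # symmetric, but generally not positive definite

inv(S)                  # warm-up call so compilation doesn't show up in the profile
Profile.clear()
@profile inv(S)
Profile.print()         # look for getrf! (LU) vs. potrf! (Cholesky) in the output
```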

Please post the code that you used to generate the matrices. In general, allowing people to reproduce what you are seeing is crucial if you want to get any real help.

using BenchmarkTools
using LinearAlgebra

A = rand(1000, 1000)                            # general (unstructured) matrix
B = rand(1000, 1000)
C = rand(1000, 1000)

B = B*B'                                        # symmetric positive semidefinite
B[1,1] = -1                                     # break positive definiteness
B = Symmetric(B)                                # symmetric but not SPD
println(typeof(B))
println(issymmetric(B))
println(isposdef(B))

C = C*C'
C = C + 1000*Array{Float64}(I, (1000, 1000))    # shift the spectrum so C is SPD
C = Symmetric(C)
println(typeof(C))
println(issymmetric(C))
println(isposdef(C))

println(@benchmark inv($A))
println(@benchmark inv($B))
println(@benchmark inv($C))

You're correct, but the problem is: how do you determine whether a symmetric matrix is positive definite? You need to try to compute the Cholesky factorization.
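A minimal sketch of that check, using cholesky with check=false so a failed factorization returns instead of throwing (the small random matrix is just a stand-in):

```julia
using LinearAlgebra

B = Symmetric(rand(3, 3))        # symmetric, not necessarily positive definite
F = cholesky(B; check=false)     # attempt the factorization without throwing
if issuccess(F)
    println("positive definite: Cholesky succeeded")
else
    println("not positive definite: need a different factorization")
end
```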

On your 1000x1000 matrices I get roughly

10ms to compute Cholesky, 20ms to invert

20ms to compute LU, 40ms to invert

17ms to compute Bunch-Kaufman, 55ms to invert

(it also takes ~1.4ms to check symmetry if you don't use a Symmetric wrapper)

So the current choice gives the best "worst-case" behaviour (i.e. if your matrix is not positive definite). Another option would be to try to compute the Cholesky factorization and fall back to LU if that fails. That would slightly increase the worst-case time (by ~10ms) but give a large decrease in the positive-definite case (by ~30ms), so it might be worth trying.
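A sketch of that fallback strategy (inv_symmetric is a hypothetical helper name; the non-positive-definite branch uses Bunch-Kaufman here, though a plain LU would work too):

```julia
using LinearAlgebra

# Try Cholesky first; if the matrix turns out not to be positive
# definite, fall back to a symmetric-indefinite factorization.
function inv_symmetric(A::Symmetric)
    F = cholesky(A; check=false)
    return issuccess(F) ? inv(F) : inv(bunchkaufman(A))
end
```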

Of course, if you know that your matrix is positive definite, you can just call inv(cholesky(C)), which will give you the optimal performance.

In real applications I've never used Cholesky unless I knew a priori that the matrix was SPD. Usually SPD matrices are SPD for a reason: there is something about your problem that ensures this property, and it is a good idea to know about it if you want to understand what is going on.

I would also never just use \ or inv with an SPD matrix (actually, probably not inv in general). But I think it is important to consider the use cases of \ and inv. They are not for seasoned users, who will just use the best factorization directly given the structure of the matrix. Rather, I think the main use cases for \ and inv are prototyping and language promotion; in the latter case, they can serve as a litmus test for the language.
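For example, a seasoned user solving against a known-SPD matrix would typically factor once and reuse the factorization with \ rather than ever forming the inverse (a sketch, with C made SPD by construction as in the earlier snippet):

```julia
using LinearAlgebra

C = rand(1000, 1000)
C = Symmetric(C*C' + 1000I)   # SPD by construction (shifted spectrum)
b = rand(1000)

F = cholesky(C)   # factor once, O(n^3)
x = F \ b         # each solve reuses the factors, O(n^2); no inv needed
```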