# Inverse of a symmetric matrix is not symmetric?

I am new to Julia (using Version 0.6.2), and I am wondering about the possible reason for the following.

```julia
mat = zeros(6, 6)
for i = 1:6
    for j = 1:6
        mat[i, j] = exp(-(i - j)^2)
    end
end
issymmetric(mat)
issymmetric(inv(mat))
```

And I get the following result,

```julia
Main> issymmetric(mat)
true

Main> issymmetric(inv(mat))
false
```

Is it some kind of bug or am I not using Julia properly?

All math performed with floating-point numbers is subject to some degree of round-off and approximation. The inverse of your matrix is not perfectly symmetric, but it is very close:

```julia
julia> minv = inv(mat)
6×6 Array{Float64,2}:
1.18147    -0.502645    0.1883     -0.0692716   0.0250252  -0.0079607
-0.502645    1.39526    -0.582587    0.217304   -0.0786496   0.0250252
0.1883     -0.582587    1.42474    -0.59216     0.217304   -0.0692716
-0.0692716   0.217304   -0.59216     1.42474    -0.582587    0.1883
0.0250252  -0.0786496   0.217304   -0.582587    1.39526    -0.502645
-0.0079607   0.0250252  -0.0692716   0.1883     -0.502645    1.18147

julia> minv - minv'
6×6 Array{Float64,2}:
0.0          -1.11022e-16   2.77556e-17   4.16334e-17  -1.04083e-17   3.46945e-18
1.11022e-16   0.0          -1.11022e-16  -5.55112e-17   1.38778e-17  -3.46945e-18
-2.77556e-17   1.11022e-16   0.0           2.22045e-16  -8.32667e-17   1.38778e-17
-4.16334e-17   5.55112e-17  -2.22045e-16   0.0           1.11022e-16  -5.55112e-17
1.04083e-17  -1.38778e-17   8.32667e-17  -1.11022e-16   0.0           0.0
-3.46945e-18   3.46945e-18  -1.38778e-17   5.55112e-17   0.0           0.0
```
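If downstream code needs exact symmetry, a common fix is to symmetrize the computed inverse explicitly. A minimal sketch (on Julia 0.7+ `issymmetric` lives in the `LinearAlgebra` stdlib; on 0.6 it is in `Base`):

```julia
using LinearAlgebra  # needed on 0.7+; harmless to omit on 0.6

# Rebuild the example matrix from the original post.
mat = [exp(-(i - j)^2) for i in 1:6, j in 1:6]

minv = inv(mat)

# Averaging with the transpose cancels the ~1e-16 asymmetry exactly,
# because floating-point addition is commutative: x + y == y + x.
minv_sym = (minv + minv') / 2

issymmetric(minv_sym)  # true
```

Alternatively, wrapping the result as `Symmetric(minv)` simply ignores the lower triangle instead of averaging.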

Julia has a Symmetric matrix type. What happens if you use that? (Away from a computer right now.)

It works!

```julia
julia> s = Symmetric(mat)
6×6 Symmetric{Float64,Array{Float64,2}}:
1.0          0.367879    0.0183156   0.00012341  1.12535e-7  1.38879e-11
0.367879     1.0         0.367879    0.0183156   0.00012341  1.12535e-7
0.0183156    0.367879    1.0         0.367879    0.0183156   0.00012341
0.00012341   0.0183156   0.367879    1.0         0.367879    0.0183156
1.12535e-7   0.00012341  0.0183156   0.367879    1.0         0.367879
1.38879e-11  1.12535e-7  0.00012341  0.0183156   0.367879    1.0

julia> inv(s)
6×6 Symmetric{Float64,Array{Float64,2}}:
1.18147    -0.502645    0.1883     -0.0692716   0.0250252  -0.0079607
-0.502645    1.39526    -0.582587    0.217304   -0.0786496   0.0250252
0.1883     -0.582587    1.42474    -0.59216     0.217304   -0.0692716
-0.0692716   0.217304   -0.59216     1.42474    -0.582587    0.1883
0.0250252  -0.0786496   0.217304   -0.582587    1.39526    -0.502645
-0.0079607   0.0250252  -0.0692716   0.1883     -0.502645    1.18147

julia> issymmetric(inv(s))
true
```

Hooray for special matrix types!


Operations only use the upper triangle (by default). `getindex(s, 6, 2)`, for example, reads from `(2, 6)` in the parent matrix. BLAS/LAPACK routines will likewise only read and write the upper half as appropriate (e.g. `symm` instead of `gemm` for multiplication), and some operations (like `inv`) should be much faster. Yay for dispatch.
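A quick sketch of that mirroring (the `Symmetric` constructor defaults to the upper triangle, `:U`):

```julia
using LinearAlgebra

A = [1.0  2.0;
     99.0 1.0]      # lower half deliberately disagrees with the upper half

S = Symmetric(A)    # only the upper triangle of A is ever read

S[2, 1]             # 2.0, mirrored from A[1, 2]; the 99.0 is never consulted
```

Passing `:L` as the second argument (`Symmetric(A, :L)`) makes it mirror the lower triangle instead.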

Not quite: for inversion of a 2k × 2k matrix, I see roughly 2× worse performance with the `Symmetric` wrapper than without. This is because the symmetric storage scheme forces BLAS to look only at the upper half of the matrix, causing non-optimal access patterns.

Could you please post the benchmarking code you’re using?

```julia
N = 2000
A = randn(N, N)
A = A'A
# A = Symmetric(A)
@time inv(A)
```

Usually you should not benchmark in global scope. Wrap everything in a function. (Though I’m not sure it will make a difference in this case.)

Well, `inv` is the function. A similar `Symmetric` performance issue is also discussed in the thread "Slow sparse matrix-vector product with symmetric matrices".

Ah, good point, thanks.

I cannot reproduce. For me there is no `Symmetric` penalty for `inv` (on a 20-day-old master):

```julia
using BenchmarkTools
N = 2000
A = randn(N, N)
A = A'A
B = Symmetric(A)
```
```julia
julia> @btime inv($A);
248.214 ms (6 allocations: 31.51 MiB)

julia> @btime inv($B);
245.801 ms (13 allocations: 31.51 MiB)
```

(although the timings are a bit noisy, for some reason)

```julia
julia> versioninfo()
Julia Version 0.7.0-DEV.4570
Platform Info:
  OS: macOS (x86_64-apple-darwin17.4.0)
  CPU: Intel(R) Core(TM) i7-5775R CPU @ 3.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
Environment:
```

EDIT: I do see the 2x penalty under 0.6.2, however.

```julia
julia> @btime inv($A);
250.712 ms (13 allocations: 62.04 MiB)

julia> @btime inv($B);
511.156 ms (18 allocations: 62.04 MiB)
```

(Gotta say, 0.7 is a beast!)


Hmm, on 0.7, MKL:

```julia
julia> using BenchmarkTools, LinearAlgebra

julia> s = randn(4000,2000) |> x -> x' * x;

julia> ss = Symmetric(s);

julia> @benchmark inv($s)
BenchmarkTools.Trial:
memory estimate:  33.95 MiB
allocs estimate:  6
--------------
minimum time:     119.113 ms (0.21% GC)
median time:      120.680 ms (0.19% GC)
mean time:        127.429 ms (2.23% GC)
maximum time:     188.418 ms (23.44% GC)
--------------
samples:          40
evals/sample:     1

julia> @benchmark inv($ss)
BenchmarkTools.Trial:
memory estimate:  33.95 MiB
allocs estimate:  13
--------------
minimum time:     148.684 ms (1.96% GC)
median time:      161.715 ms (0.18% GC)
mean time:        167.425 ms (2.81% GC)
maximum time:     221.199 ms (20.42% GC)
--------------
samples:          30
evals/sample:     1
```

OpenBLAS:

```julia
julia> using BenchmarkTools, LinearAlgebra

julia> s = randn(4000,2000) |> x -> x' * x;

julia> ss = Symmetric(s);

julia> @benchmark inv($s)
BenchmarkTools.Trial:
memory estimate:  31.51 MiB
allocs estimate:  6
--------------
minimum time:     192.289 ms (0.14% GC)
median time:      243.728 ms (0.11% GC)
mean time:        264.181 ms (2.64% GC)
maximum time:     648.272 ms (0.05% GC)
--------------
samples:          19
evals/sample:     1

julia> @benchmark inv($ss)
BenchmarkTools.Trial:
memory estimate:  31.51 MiB
allocs estimate:  13
--------------
minimum time:     222.558 ms (2.07% GC)
median time:      234.200 ms (1.01% GC)
mean time:        251.864 ms (3.07% GC)
maximum time:     372.182 ms (14.26% GC)
--------------
samples:          20
evals/sample:     1
```

EDIT:

```julia
julia> versioninfo()
Julia Version 0.7.0-DEV.4741
Commit 14b16b7 (2018-04-03 15:49 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-6.0.0 (ORCJIT, haswell)
Environment:
  JULIA_PKG3_PRECOMPILE = true

julia> versioninfo()
Julia Version 0.7.0-DEV.4741
Commit 24e2005* (2018-04-03 15:48 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
  WORD_SIZE: 64
  LIBM: libimf
  LLVM: libLLVM-6.0.0 (ORCJIT, haswell)
Environment:
  JULIA_PKG3_PRECOMPILE = true
```

(Wow, that’s a sizeable performance increase with MKL. I’m jealous. I tried a thousand ways to compile 0.7 against MKL under macOS and never managed to make it all the way through.)

Hmm, I didn’t run into any problems just following the (new) instructions on GitHub (i.e., not using Intel’s compilers).

Unfortunately (unless I’m mistaken), the free-of-charge MKL download doesn’t come with `compilervars.sh`. I got it with their compilers (free of charge for students, but also for qualified open-source contributors and educators).
[EDIT: `compilervars.sh` is a relatively short file, only 128 lines.]

I don’t know what your errors were, but it took me a while to figure things out the first time I did it.

Yes, but you are running Linux, not macOS, right?
(Sorry for the change of subject here. This could be split.)

I made a separate post sharing my build woes with MKL on the Mac (Issues building Julia 0.7 with Intel MKL on macOS) to avoid hijacking the `Symmetric` issue… although I’m not sure it’s an issue anymore?