Hi everyone,
I have a question regarding the performance of matrix multiplication involving symmetric matrices in Julia. Here’s my code and the results:
using LinearAlgebra, BenchmarkTools

let N = 1000
    λ, Γp = [randn(N, N) for _ in 1:2]
    Γp_symm = Symmetric(Γp)
    Γp = Matrix(Γp_symm)   # plain Matrix with the same (symmetrised) entries
    @assert λ' * Γp * λ ≈ λ' * Γp_symm * λ
    @btime $λ' * $Γp_symm * $λ
    @btime $λ' * $Γp * $λ
    @btime $λ' * ($Γp_symm * $λ)
    @btime $λ' * ($Γp * $λ)
end
The timing results are as follows:
- λ' * Γp_symm * λ: 651.877 ms (6 allocations: 15.26 MiB)
- λ' * Γp * λ: 13.033 ms (6 allocations: 15.26 MiB)
- λ' * (Γp_symm * λ): 12.925 ms (6 allocations: 15.26 MiB)
- λ' * (Γp * λ): 12.975 ms (6 allocations: 15.26 MiB)
My questions are:
- Why does the multiplication involving the Symmetric type become roughly 50× slower when written without parentheses, compared to the other three cases?
- Does matrix-matrix multiplication in Julia require parentheses to specify the evaluation order explicitly? Adding parentheses only slightly speeds up the plain-Matrix product, but it gives a huge speedup when a Symmetric matrix is involved (see the sketch below).
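In case it helps, here is a minimal sketch (same setup as above; I have not included its timings) of how one could dig further: it shows which method the unparenthesised triple product dispatches to, and times the explicitly left-associated form for comparison.

using LinearAlgebra, BenchmarkTools, InteractiveUtils

let N = 1000
    λ, Γp = [randn(N, N) for _ in 1:2]
    Γp_symm = Symmetric(Γp)
    # a * b * c parses as *(a, b, c); show which 3-argument method handles it
    display(@which λ' * Γp_symm * λ)
    # explicitly left-associated products, for comparison with the timings above
    @btime ($λ' * $Γp_symm) * $λ
    @btime ($λ' * $Γp) * $λ
end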
Here is my Julia version information:
Julia Version 1.11.0
Commit 501a4f25c2b (2024-10-07 11:40 UTC)
Build Info:
Official release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 256 × Intel(R) Xeon(R) Gold 6448H
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, sapphirerapids)
Threads: 4 default, 0 interactive, 2 GC (on 256 virtual cores)
Environment:
JULIA_PKG_SERVER = https://mirrors.nju.edu.cn/julia
JULIA_NUM_THREADS = 4
Any insights would be greatly appreciated!