To add an AbstractVector v to the diagonal of an AbstractMatrix M, one can do
using LinearAlgebra
M + Diagonal(v)
Is there a way to do the same thing without creating a new matrix (i.e., modify M in place)? I'm looking for a one-liner that is as fast as the out-of-place version and works for Matrix, Diagonal, Tridiagonal, etc.
I’ve tried using broadcasting but it is not as fast as it could be:
using LinearAlgebra, BenchmarkTools
N = 10_000
M = Tridiagonal(ones(N-1), ones(N), ones(N-1))
v = ones(N)
@btime M + Diagonal(v)
# 13.545 μs (8 allocations: 234.58 KiB)
@btime M .= M .+ Diagonal(v)
# 67.193 μs (3 allocations: 80 bytes)
Not too general (it relies on the implementation detail that Tridiagonal stores its main diagonal in the field d), but fast:
julia> @btime $M.d .+= $v;
2.449 μs (0 allocations: 0 bytes)
julia> @btime $M + Diagonal($v); # for comparison on my machine
19.084 μs (7 allocations: 234.56 KiB)
julia> @btime $M .= $M .+ Diagonal($v); # for comparison on my machine
104.632 μs (0 allocations: 0 bytes)
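If you want this trick without remembering which field each structured type uses, one option is a small helper with a method per type. This is only a sketch: the name adddiag! is made up here, and the fields diag (for Diagonal) and d (for Tridiagonal) are implementation details of LinearAlgebra that could change between versions.

using LinearAlgebra

# Hypothetical helper (not part of LinearAlgebra): add v to the diagonal
# in place by writing directly into the storage fields of the structured types.
adddiag!(M::Diagonal, v::AbstractVector) = (M.diag .+= v; M)
adddiag!(M::Tridiagonal, v::AbstractVector) = (M.d .+= v; M)

# Generic fallback for any square AbstractMatrix: write through a view
# of the diagonal indices.
function adddiag!(M::AbstractMatrix, v::AbstractVector)
    @views M[diagind(M)] .+= v
    return M
end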
The obvious and general (but slower) answer is @view(M[diagind(M)]) .+= v
julia> @btime @view($M[diagind($M)]) .+= $v;
86.377 μs (0 allocations: 0 bytes)
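Since diagind works for any square AbstractMatrix, the same one-liner is type-agnostic. A small correctness check with arbitrarily chosen values:

using LinearAlgebra
v = [1.0, 2.0, 3.0]
A = ones(3, 3)                              # dense Matrix
T = Tridiagonal(ones(2), ones(3), ones(2))  # structured matrix
refA, refT = A + Diagonal(v), T + Diagonal(v)
@view(A[diagind(A)]) .+= v
@view(T[diagind(T)]) .+= v
A == refA && T == refT                      # true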
And, just for the fun of it, the basic loop implementation:
julia> function f!(M, v)
           @assert size(M, 1) == size(M, 2)  # M must be square
           @assert length(v) == size(M, 1)
           for i in axes(M, 1)
               @inbounds M[i, i] += v[i]     # add v[i] to each diagonal entry
           end
           return M
       end
f! (generic function with 1 method)
julia> @btime f!($M,$v);
77.950 μs (0 allocations: 0 bytes)
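And a quick sanity check that the loop version (using the f! defined above) agrees with the out-of-place reference on small, arbitrary inputs:

using LinearAlgebra
M = Tridiagonal(ones(4), ones(5), ones(4))
v = collect(1.0:5.0)
reference = M + Diagonal(v)
f!(M, v)
M == reference  # true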