Apologies if this has been discussed somewhere and I missed it in my searches, but I’m a bit confused by the behavior implemented for block diagonal matrices.
For example, I note that if I do something like

```julia
using LinearAlgebra
blocks = [rand(2, 2) for _ in 1:5]
D = Diagonal(blocks)
@show det(D), tr(D)
```
I get 2×2 matrices back. I recognize that this is literally correct, in the sense that `det` and `tr` operate on the diagonal entries of a matrix whose elements are themselves 2×2 matrices, but I think another natural way one might expect this to behave is more like
```julia
_det(A) = prod(det.(A.diag))
_tr(A)  = sum(tr.(A.diag))
```

which treat the block-diagonal structure simply as a convenient, lightweight wrapper around the sparse matrix whose nonzero entries are the diagonal blocks.
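To make the proposal concrete, here is a minimal sketch (the `_det`/`_tr` names are just the placeholders from above, not existing API) checking that these block-aware definitions agree with `det` and `tr` applied to the assembled block-diagonal matrix:

```julia
using LinearAlgebra

# Hypothetical block-aware definitions, treating Diagonal-of-blocks
# as a wrapper around the block-diagonal matrix:
_det(A) = prod(det.(A.diag))
_tr(A)  = sum(tr.(A.diag))

blocks = [rand(2, 2) for _ in 1:5]
D = Diagonal(blocks)

# Assemble the equivalent dense 10×10 block-diagonal matrix for comparison
B = cat(blocks...; dims=(1, 2))

@assert _det(D) ≈ det(B)
@assert _tr(D) ≈ tr(B)
```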
I expect that people thought about this and decided on the current implementation (and I see that the functionality above that does exist is actually quite new), so I'm curious to hear why.
I think it would also be nice if, for example, there were functionality where a `Diagonal` of blocks serves as a thin wrapper that means I don't have to write a careful loop that does something like

```julia
blocks[j] \ x[some_indices(j)]
```

and so on. Maybe functions like `diagsolve` exist and I'm just not aware of them?
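For illustration, here is a rough sketch of what such a block-wise solve could look like (the name `diagsolve` is made up, as is the index bookkeeping that stands in for `some_indices(j)`):

```julia
using LinearAlgebra

# Hypothetical helper: solve a block-diagonal system by solving each
# block against its own slice of the right-hand side.
function diagsolve(blocks::Vector{<:AbstractMatrix}, x::AbstractVector)
    y = similar(x, float(eltype(x)))
    offset = 0
    for B in blocks
        n = size(B, 1)
        idx = offset+1:offset+n   # indices owned by this block
        y[idx] = B \ x[idx]
        offset += n
    end
    return y
end

blocks = [rand(2, 2) + 2I for _ in 1:5]   # shift to keep blocks well-conditioned
x = rand(10)
y = diagsolve(blocks, x)

# Agrees with solving against the assembled block-diagonal matrix
B = cat(blocks...; dims=(1, 2))
@assert B \ x ≈ y
```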
I would be interested to hear people's thoughts on this. I have looked at similar packages (like BlockArrays.jl), but unless I've missed something in the docs, they don't implement exactly the functionality I'm describing.
If this is something that people are interested in having but doesn't exist yet, I'd be happy to contribute something (although I don't consider my code to be of comparable quality to the existing software in JuliaMatrices).