ldiv!(Y, A, B) -> Y
Compute A \ B in-place, storing the result in Y and returning it.
The argument A should not be a matrix. Rather, it should be a factorization object (e.g. one produced by factorize or cholesky). The reason is that factorization itself is both
expensive and typically allocates memory (although it can also be done in-place via, e.g., lu!), and performance-critical situations requiring ldiv! usually also require fine-grained control over the
factorization of A.
Examples
≡≡≡≡≡≡≡≡≡≡
julia> A = [1 2.2 4; 3.1 0.2 3; 4 1 2];
julia> X = [1; 2.5; 3];
julia> Y = zero(X);
julia> ldiv!(Y, qr(A), X);
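To illustrate the point above about reusing a factorization, here is a minimal sketch (the variable names and the choice of lu! are my own, not from the docstring): the matrix is factored once, then ldiv! solves against several right-hand sides without re-factorizing.

```julia
using LinearAlgebra

# Factor once, then solve against many right-hand sides in-place.
A = [1 2.2 4; 3.1 0.2 3; 4 1 2]
F = lu!(copy(A))      # in-place LU factorization; copy keeps A intact
Y = similar(A, 3)     # preallocated solution vector
for X in ([1.0, 2.5, 3.0], [0.5, 0.0, 1.0])
    ldiv!(Y, F, X)    # reuses F: no re-factorization inside the loop
    @show Y
end
```

The loop body performs only the (cheap) triangular solves; the O(n^3) factorization cost is paid once, outside the loop.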
Thank you for your note. Is this 10% rule of thumb cited in the literature? Is it due to the more complex structure of a sparse matrix/vector compared with a dense one (making it slower when more than 10% of the elements are nonzero)?
I quickly searched for the 10% rule but couldn’t find a reference to it. In any case, the benefit of sparse matrix code also depends on the operations you want to perform. To make a sound decision, you should benchmark your code with both dense and sparse matrices.
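A rough benchmark along those lines can be sketched as follows (the sizes and densities are arbitrary choices for illustration; for serious measurements, prefer BenchmarkTools.jl, since a single @elapsed call includes compilation time):

```julia
using SparseArrays, LinearAlgebra

# Compare a matrix-vector product at two densities, sparse vs. dense storage.
n = 2_000
x = rand(n)
for density in (0.01, 0.3)
    S = sprand(n, n, density)   # random sparse matrix at the given density
    D = Matrix(S)               # dense copy with the same entries
    S * x; D * x                # warm-up to exclude compilation
    ts = @elapsed S * x
    td = @elapsed D * x
    println("density = $density: sparse $(ts)s vs dense $(td)s")
end
```

At low density the sparse product typically wins; as density grows, the dense product's simpler memory access pattern catches up, which is exactly why benchmarking your own sizes and operations matters.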
You are right, sparse matrix code is more complex and therefore takes more time if you have a lot of non-zero elements in the matrix.
Honestly, 10% is very low sparsity. I would consider 1% closer to the break-even point. Of course it depends on how big your matrices are, memory bandwidth vs. compute bandwidth, etc.