I have encountered a problem of the following kind.
C = A^-1 * B
where A is a very large n×n sparse banded symmetric matrix (of SparseMatrixCSC type) and B is an n×5 matrix.
I have read on this forum and elsewhere that inverting a large sparse matrix gives a dense matrix and is computationally expensive.
So I am thinking of taking one column of B at a time, solving for each column individually, and then assembling the solutions into a matrix C:
A * x1 = b1
A * x2 = b2
A * x3 = b3
A * x4 = b4
A * x5 = b5
C = [x1 x2 x3 x4 x5]
Is this the right approach?
Is there an efficient way, in Julia or in general, to get the matrix C?
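In code, the idea I have in mind is roughly the following (sketched with a small random A and B just to show the shape of the loop):

using SparseArrays

A = sparse(rand(4, 4))      # stand-in for the large banded symmetric matrix
B = rand(4, 5)
C = similar(B)
for j in 1:size(B, 2)
    C[:, j] = A \ B[:, j]   # solve A * x_j = b_j for each column of B
end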
Note: Since I am only checking whether the procedure works, I have used fully populated matrices stored in sparse form below, ignoring the computational cost.
julia> using SparseArrays
julia> A = sparse(rand(4,4))
4×4 SparseMatrixCSC{Float64, Int64} with 16 stored entries:
0.320401 0.966415 0.164792 0.222872
0.53768 0.54209 0.197565 0.0362941
0.897137 0.242278 0.3575 0.882516
0.250789 0.872864 0.955418 0.0661719
julia> A \ b
ERROR: MethodError: no method matching ldiv!(::SuiteSparse.UMFPACK.UmfpackLU{Float64, Int64}, ::SparseMatrixCSC{Float64, Int64})
Closest candidates are:
ldiv!(::LinearAlgebra.LU{T, LinearAlgebra.Tridiagonal{T, V}}, ::AbstractVecOrMat) where {T, V} at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\LinearAlgebra\src\lu.jl:595
ldiv!(::Number, ::AbstractArray) at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\LinearAlgebra\src\generic.jl:271
ldiv!(::SuiteSparse.UMFPACK.UmfpackLU{T}, ::StridedVecOrMat{T}) where T<:Union{Float64, ComplexF64} at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\SuiteSparse\src\umfpack.jl:751
I am getting a similar error with b \ A
julia> b \ A
ERROR: MethodError: no method matching ldiv!(::SuiteSparse.SPQR.QRSparse{Float64, Int64}, ::SparseMatrixCSC{Float64, Int64})
Closest candidates are:
ldiv!(::LinearAlgebra.LU{T, LinearAlgebra.Tridiagonal{T, V}}, ::AbstractVecOrMat) where {T, V} at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\LinearAlgebra\src\lu.jl:595
ldiv!(::Number, ::AbstractArray) at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\LinearAlgebra\src\generic.jl:271
ldiv!(::LinearAlgebra.LDLt{<:Any, <:LinearAlgebra.SymTridiagonal}, ::AbstractVecOrMat) at C:\Users\rajeshnakka\AppData\Local\Programs\Julia170\share\julia\stdlib\v1.7\LinearAlgebra\src\ldlt.jl:172
Coming from Matlab, I was expecting the same behavior too.
As far as I can tell, Julia’s \ accepts as right-hand side b only vectors like rand(4) or single-column matrices like rand(4,1). The result is of the same type as b.
The vector form seems to be preferred; not all linear solvers accept a single-column matrix.
using LinearAlgebra, SparseArrays

# Factorize A once, then reuse the factorization for every right-hand side.
function solve_multiRHS!(A, b)
    F = lu!(A)              # overwrites A with its LU factors
    for i in eachindex(b)
        b[i] = F \ b[i]     # solve with the stored factorization
    end
    return b
end

A = rand(4, 4)
b = Vector{Float64}[]
for i = 1:3
    push!(b, rand(4))
end
solve_multiRHS!(A, b)
When sparsity is needed, sparse is applied to A, while the right-hand sides stay as the individual dense vectors in b.
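A sketch of that same pattern with a sparse A might look like this (using the non-mutating lu for the sparse case, and small random matrices as stand-ins):

using LinearAlgebra, SparseArrays

As = sparse(rand(4, 4))             # sparse system matrix
F  = lu(As)                         # factorize once (UMFPACK behind the scenes)
bs = [rand(4) for _ in 1:3]         # dense right-hand-side vectors
xs = [F \ b for b in bs]            # reuse the factorization for every solve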
Nope. sparse(rand(4,4)) \ rand(4,3) (i.e. a matrix of right-hand sides) works just fine.
The key point here is that while the system matrix A can be sparse, the right-hand-side vector (or matrix!) must be dense — Julia doesn’t have any method to solve with a sparse right-hand-side.
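So if B happens to be stored as a sparse matrix (as in the MethodError above, where the right-hand side is a SparseMatrixCSC), converting it to a dense Matrix first is the simple workaround (sketch):

using SparseArrays

A  = sparse(rand(4, 4))
Bs = sparse(rand(4, 3))      # right-hand sides that happen to be stored sparsely
C  = A \ Matrix(Bs)          # densify the right-hand side, then solve in one call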
The basic reason for this is not specific to Julia: you can't generally gain anything from sparse right-hand sides in linear algebra, since once you perform the solve the solution is no longer sparse. In fact, this is exactly the same reason why you don't want to compute inv(A) or A^-1 for a sparse A: matrix inversion is equivalent to solving with a right-hand side equal to the identity matrix (which is very sparse!), but the resulting inverse matrix is almost always dense.
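For the original problem, that suggests something like the following (a sketch only; n, the density, and the diagonal shift are made-up stand-ins for the real banded symmetric A): factorize the sparse A once, then solve for all five right-hand sides in a single call.

using LinearAlgebra, SparseArrays

n = 1_000
A = sprandn(n, n, 5 / n) + 100I      # stand-in for the large sparse matrix
A = (A + A') / 2                     # make it symmetric, as in the question
B = randn(n, 5)                      # dense n×5 right-hand-side matrix

F = lu(A)        # or cholesky(A) / ldlt(A) if A is symmetric positive definite
C = F \ B        # dense n×5 solution; no explicit inverse is ever formed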