Solving sparse linear system returns a Matrix

Solving the sparse system returns a Matrix, while converting to dense and then solving returns a Vector as expected. Is this the expected behavior, or am I doing something incorrectly?

using SparseArrays

# Constructing sparse array 
function hess_sparse(x)
    return [-sin(x[1] + x[2]) + 1, -sin(x[1] + x[2]), -sin(x[1] + x[2]), -sin(x[1] + x[2]) + 1, 1.0, 1.0, 12*x[5]^2 + 1.0, 1.0]
end
rowval = [1, 1, 2, 2, 3, 4, 5, 6]
colval = [1, 2, 1, 2, 3, 4, 5, 6]

# Constructing sparse vec
function grad_sparse(x::Vector{T}) where T <: Number
    return [cos(x[1] + x[2]), cos(x[1] + x[2]), 2*x[3], 1/2, 4*x[5]^3, 1/2]
end
gradinds = [1, 2, 3, 4, 5, 6]


x0 = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
n = length(x0)
hess_mat = sparse(rowval, colval, hess_sparse(x0), n, n) 
grad_vec = sparsevec(gradinds, grad_sparse(x0), n)

# Dense linear solve returns vector 
@show x = Array(hess_mat) \ Array(grad_vec)

# Sparse linear solve returns a matrix
@show x = hess_mat \ grad_vec
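
The difference also shows up directly in the return types (a minimal check on the same hess_mat and grad_vec; I would have expected a vector from the sparse path too):

@show typeof(Array(hess_mat) \ Array(grad_vec))   # a plain Vector, as expected
@show typeof(hess_mat \ grad_vec)                 # a one-column matrix instead of a vector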

It seems like a bug in SparseArrays; could you file an issue with Julia?


I have created an issue.
Solving sparse linear system returns a Matrix · Issue #48095 · JuliaLang/julia (github.com)


I was able to work around it by explicitly converting the returned matrix to a vector.
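
(A minimal sketch of what I mean by that workaround, using vec to flatten the result:)

x = vec(hess_mat \ grad_vec)   # flatten the returned one-column matrix back to a vector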
I have another question: for a different value of x, the sparse solver throws an error suggesting pivoted LU, while the dense solver works. How do I resolve this? Thanks.

using SparseArrays

# Constructing sparse array 
function hess_sparse(x::Vector{T}) where T
    return [-sin(x[1] + x[2]) + 1, -sin(x[1] + x[2]), -sin(x[1] + x[2]), -sin(x[1] + x[2]) + 1.0, 1.0, 1.0, 12.0*x[5]^2 + 1.0, 1.0]
end
rowval = [1, 1, 2, 2, 3, 4, 5, 6]
colval = [1, 2, 1, 2, 3, 4, 5, 6]

# Constructing sparse vec
function grad_sparse(x::Vector{T}) where T <: Number
    return [cos(x[1] + x[2]), cos(x[1] + x[2]), 2*x[3], 1/2, 4*x[5]^3, 1/2]
end
gradinds = [1, 2, 3, 4, 5, 6]

x0 = [0.7853981648713337, 0.7853981693418342, 1.023999999999997e-7, -1.0, 0.33141395338218227, -1.0]
n = length(x0)
hess_mat = sparse(rowval, colval, hess_sparse(x0), n, n) 
grad_vec = sparsevec(gradinds, grad_sparse(x0), n)

# Dense linear solve is successful 
@show x = Array(hess_mat) \ Array(grad_vec)

# Sparse linear solve fails
@show x = hess_mat \ grad_vec


The problem here is that hess_mat is symmetric, so backslash tries an LDLt factorization, which fails for the reason pointed out in the GitHub issue. You can improve things a bit by requesting lu(hess_mat) \ grad_vec. However, that will still fail because the rhs is sparse and solves are only implemented for strided right-hand sides; a dense rhs will generally also perform much better, because solutions to linear systems, even very sparse ones, are typically dense. You can help here by explicitly promoting the sparse rhs to a dense vector and then calling ldiv!(lu(hess_mat), Vector(grad_vec)).

That raises the question of whether the all-sparse approach is actually the right one for your application. You may be better off working with a dense grad_vec to begin with, but keeping hess_mat sparse (for faster factorization and lower memory consumption).
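
A minimal sketch of that last approach, reusing hess_mat and grad_vec from your snippet:

using LinearAlgebra, SparseArrays

# Request a sparse LU factorization explicitly instead of the LDLt that backslash picks
F = lu(hess_mat)

# Promote the sparse rhs to a dense Vector, since the solve expects a strided rhs
b = Vector(grad_vec)

# Solve in place; the solution overwrites b
x = ldiv!(F, b)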

If you’re planning to work with very large linear systems with multiple very sparse right-hand sides, you may want to consider the MUMPS solver, which is able to exploit sparsity in the right-hand sides. However, unless you’re working on particularly large-scale problems, I’m not sure it’s worth the hassle of moving away from backslash.

Thanks for the detailed response. I tried all the things you mentioned but finally ended up using a dense vector for the rhs. This was also the issue with LinearSolve.jl (see here).

One thing to note here is that MATLAB is able to solve this with a sparse rhs and returns a sparse vector. I have no idea which algorithm it uses, but I guess it is a better default. The timings are the same as with promoting the rhs to dense in Julia, but that might be because in this toy problem the rhs is effectively dense and the problem size is very small. I am not sure what the benefit would be for an actually sparse case (since the returned x would mostly be dense anyway).

Does SparseArrays.jl use its own routines for solving linear systems, or only those from SuiteSparse?

Thanks for the suggestion. The final system that I want to solve will be sparse, but I guess the rhs will still be dense. Since I am just starting out with the problem, I will stick with backslash or LinearSolve.jl.

You are artificially sparsifying the rhs. Your function grad_sparse already returns the rhs as is. If you left the rhs dense, you would save the additional memory allocation for gradinds, as well as the allocation of the index vector for the result. And, if you don’t need grad_vec afterwards, you could even solve in place!
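
A sketch of that, reusing the definitions from your earlier snippets:

using LinearAlgebra, SparseArrays

# Keep the Hessian sparse, but keep the gradient as the dense Vector it already is
hess_mat = sparse(rowval, colval, hess_sparse(x0), n, n)
grad_vec = grad_sparse(x0)        # no sparsevec, no gradinds needed

# Solve in place: the solution overwrites grad_vec, so no extra result allocation
ldiv!(lu(hess_mat), grad_vec)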

One challenge with backslash is that it first determines an appropriate factorization for hess_mat: depending on whether the matrix is triangular, diagonal, or neither, it uses different “factorizations”. Unfortunately, this may yield either dense or sparse return values, as you noticed in the GitHub issue.
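
You can probe the structure yourself to see which path applies here (an illustrative check, not literally what backslash calls internally):

using LinearAlgebra

issymmetric(hess_mat)                 # true here, so a symmetric (LDLt) factorization is attempted
istril(hess_mat), istriu(hess_mat)    # a triangular matrix would get a cheap triangular solve
isdiag(hess_mat)                      # a diagonal matrix would be handled even more directly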


Yes, the sparsity in the rhs is artificial, and even for the large problem that I eventually want to solve, the rhs will be dense, so I will remove it. I am new to sparse linear algebra and thought that different forward/backward substitution algorithms would be used depending on whether the rhs is sparse or not.

I do think the interface should be uniform, i.e. if the rhs is sparse the returned x should be a sparse vector, and if the rhs is dense the returned x should be a dense vector. Or, if there is no advantage to sparsity in the rhs, then maybe a dense vector in both cases.

See also Sparse solve with sparse rhs
