Hi,
I get very wrong answers on solving Ax=b with LU where A is a sparse matrix.

I am a bit puzzled by this, since as far as I understand Julia calls the SuiteSparse library for sparse Linear-Algebra like Matlab does.

In my MWE Julia should ‘just’ make one call to UMFPACK for the LU factorization
and then two triangular solves for finding x.

If I do it in Matlab it works just fine, and it also works if I do LU with a dense A, i.e. lu(Array(A)).
Maybe I don’t understand how to use SparseArrays.jl, because this just seems too odd to me.

EDIT: In cutting my own code down to a minimum example I made A rectangular; A should of course be square…

Any comments on this are very welcome!

MWE:

using LinearAlgebra, SparseArrays
A = sprand(15, 15, 0.35) + 0.12 * I
cond(Array(A))
b = randn(15)
F = lu(A, check=true)
y = F.L \ b[F.p]
x = F.U \ y
A * x - b # very non-zero array for a not too ill-conditioned A
isapprox(A * x - b, zeros(15), atol=1e-4)

platform/environment:
Julia Version 1.8.5
Commit 17cfb8e65ea (2023-01-08 06:45 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin21.4.0)
CPU: 12 × Intel(R) Core™ i7-9750H CPU @ 2.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-13.0.1 (ORCJIT, skylake)
Threads: 1 on 12 virtual cores

(@v1.8) pkg> status
Status ~/.julia/environments/v1.8/Project.toml
[37e2e46d] LinearAlgebra
[2f01184e] SparseArrays

using LinearAlgebra, SparseArrays, LinearSolve
A = sprand(10, 10, 0.35) + 0.12 * I
cond(Array(A))
b = randn(10)
prob = LinearProblem(A, b)
sol = solve(prob)
x = sol.u
isapprox(A * x - b, zeros(10), atol=1e-10)

yeah but I think LinearSolve will choose the best way to factor A, which is probably not LU.
I am only interested in Julia’s own \ operator, which might not be as fast as LinearSolve.jl, but it should work.
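To be concrete, this is the kind of call I mean (a minimal sketch, with dimensions matching my edited MWE):

```julia
using LinearAlgebra, SparseArrays

# Plain backslash on a square sparse system; for an unsymmetric
# sparse A this dispatches to UMFPACK's LU under the hood.
A = sprand(15, 15, 0.35) + 0.12 * I
b = randn(15)
x = A \ b
norm(A * x - b)  # should be small relative to norm(A) * norm(x)
```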

There are a couple of things going on: I’m not sure what solution you expect from an LU factorization of a rectangular matrix. In fact, on my machine with v1.9rc2, I get an error from LU on this matrix.

The other thing is that the decomposition is a little more complicated. It’s more along the lines of P*D*A*Q = L*U, with a diagonal row scaling D = Diagonal(F.Rs) in addition to row and column permutations. This works for me on a square system:

using LinearAlgebra, SparseArrays
A = sprand(10, 10, 0.35) + 0.12 * I
cond(Array(A))
b = randn(10)
F = lu(A, check=true)
y = F.L \ (F.Rs .* b)[F.p]
z = F.U \ y
x = z[invperm(F.q)]
A * x - b
isapprox(A * x - b, zeros(10), atol=1e-4)

And of course F\b works just fine. So you probably don’t need to worry about breaking apart the decomposition like this.

I’m not sure as to the status of using lu on rectangular matrices. For example, lu(A) \ b throws an obscure error (that I’m assuming is related to this). But qr works fine, so perhaps that will work for you?
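For the qr route, here is a sketch with an overdetermined system (that is the least-squares case I am sure about; the dimensions are made up):

```julia
using LinearAlgebra, SparseArrays

# Sketch: least-squares solve of an overdetermined sparse system via
# the sparse QR factorization (SPQR under the hood).
A = sprand(20, 10, 0.4)
b = randn(20)
x = qr(A) \ b   # least-squares solution
# The least-squares residual is orthogonal to the column space of A,
# so the normal-equations residual should be tiny:
norm(A' * (A * x - b))
```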

Note that, of course, you can simply use x = F \ b in order to make use of factorization objects, rather than dealing with the low-level details like this.
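To illustrate, a sketch of factoring once and reusing the factorization object for several right-hand sides (the point being that F \ b applies the scaling and both permutations for you):

```julia
using LinearAlgebra, SparseArrays

# Sketch: factor once, then reuse F for several right-hand sides.
A = sprand(10, 10, 0.35) + 0.12 * I
F = lu(A)
residuals = map(1:3) do _
    b = randn(10)
    norm(A * (F \ b) - b)
end
residuals  # should all be very small
```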

yeah, I modified another example when I made this post to cut it down; a little embarrassing…
But I still think there is an issue here, especially when I compare to Matlab.

If I do the same example with a just slightly larger matrix, say 15x15 or 20x20, my version cannot solve the system, while Matlab gives a warning but solves it.

hmmm yeah, okay, maybe I should read more about how the sparse decomposition really works. Matlab is just giving me more correct solutions, where Julia’s are way off for anything larger than 10.

Reading more might help. I don’t have Matlab handy to check, but lu isn’t giving you the same decomposition as Matlab is apparently computing. It sounds like Matlab is giving the common LU decomposition with just row permutations: PA = LU. The decomposition for a nonsymmetric sparse matrix in Julia comes from UMFPACK, which does NOT compute that decomposition. It is documented in the Julia help:

The individual components of the factorization F can be accessed by indexing:

| Component | Description                     |
|-----------|---------------------------------|
| L         | L (lower triangular) part of LU |
| U         | U (upper triangular) part of LU |
| p         | right permutation Vector        |
| q         | left permutation Vector         |
| Rs        | Vector of scaling factors       |
| :         | (L, U, p, q, Rs) components     |

The relation between F and A is

F.L*F.U == (F.Rs .* A)[F.p, F.q]

The structure is also described in the UMFPACK documentation. Since it’s not the same decomposition, there’s no reason to expect what works in Matlab to work in Julia. You can ignore all this if you use F \ b. But if you want to break apart the decomposition, you’re going to need to work with the decomposition you have, and that is necessarily going to be different from what you do in Matlab.
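The documented relation can be checked numerically, e.g.:

```julia
using LinearAlgebra, SparseArrays

# Sketch: numerically verify the documented UMFPACK relation
#   F.L * F.U == (F.Rs .* A)[F.p, F.q]
A = sprand(10, 10, 0.35) + 0.12 * I
F = lu(A)
norm(F.L * F.U - (F.Rs .* A)[F.p, F.q])  # roundoff-level difference
```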

UMFPACK is incorporated as a built-in operator in MATLAB 6.5 as x = A\b when A is sparse and unsymmetric.

And Matlab can easily solve Ax=b where A is a 40x40 sparse matrix with density 0.35, while Julia just gives wrong answers for a 15x15 with density 0.35.

I still think there might be an issue here, since Julia’s implementation just seems to break when n goes above 10?

I don’t really know how to be totally sure. I guess I can benchmark Julia and Matlab against each other: if they have similar speed, but Matlab can solve much bigger problems than Julia, then we know there is an issue?
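For example, a residual check like this sketch is what I have in mind (using the full lu(A) \ b as the reference, since it handles the scaling and both permutations internally):

```julia
using LinearAlgebra, SparseArrays

# Sketch: sanity-check the complete sparse solve at a few sizes.
for n in (10, 15, 20, 40)
    A = sprand(n, n, 0.35) + 0.12 * I
    b = randn(n)
    x = lu(A) \ b
    println(n, " => ", norm(A * x - b))
end
```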

I just assumed that whatever Matlab did was right, but as you stated in your first answer it computes a different decomposition, given the factors it returns.

Your snippet works. I misunderstood your original answer, and I did not understand the decomposition before now.