Type stability of sparse solves?

Hi,

If you run the following code:

using SparseArrays
A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b = Float32[1, 2]
println(eltype(A \ b))

it prints Float64, but I thought it should have produced a vector of Float32. Is this expected/known/documented?

2 Likes

Note that this is not a type instability (the output type is always Float64); it is an unexpected promotion.

I never noticed this, but presumably it is because the underlying SuiteSparse library is double precision only? This issue seems relevant: \ does not work for sparse matrices with Float32 elements · Issue #111 · JuliaSparse/SparseArrays.jl · GitHub. Single-precision Cholesky was only added recently (Add Support for Single Precision by rayegun · Pull Request #487 · JuliaSparse/SparseArrays.jl · GitHub); I’m not sure about LU.
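If your SparseArrays is new enough to include that PR, you can probe this yourself; the entry points below are standard, but which eltype you get back is version-dependent, so the comments are my assumption based on the PR:

using SparseArrays, LinearAlgebra

S = sparse(Float32[4 1; 1 3])        # small symmetric positive definite test matrix
F = cholesky(Symmetric(S))           # CHOLMOD Cholesky factorization
println(eltype(F \ Float32[1, 2]))   # Float32 with single-precision CHOLMOD support, Float64 before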

4 Likes

Yup, the LU factorization is explicitly converting to Float64 on these lines of SparseArrays/src/solvers/umfpack.jl.
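A minimal way to see that promotion at the factorization level, before any solve (eltype of a Factorization reports its storage type):

using SparseArrays, LinearAlgebra

A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
F = lu(A)            # UMFPACK LU factorization
println(eltype(F))   # Float64: the Float32 input was converted on the way in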

In 2023, the SuiteSparse author wrote:

UMFPACK and SPQR are not yet revised to handle single precisions matrices

so it’s (still, apparently) a limitation of the underlying libraries.

2 Likes

You should be able to use Sparspak.jl without the implicit promotion.

It looks like Pardiso.jl and MUMPS.jl also support single precision.
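For example, with Sparspak.jl (its sparspaklu is generic over the element type, so the solve should stay in Float32; a sketch, not tested here):

using SparseArrays, Sparspak

A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b = Float32[1, 2]
x = sparspaklu(A) \ b   # pure-Julia LU, no promotion to Float64
println(eltype(x))      # should print Float32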

Well, I don’t know if it’s a bug, but the way it is now, it seems like a method for A\b with Float32 types was added almost accidentally… In my admittedly inexpert opinion, it would have been more logical not to have it at all (I think it used not to be there), or, if you’re going to silently cast to Float64, to cast the result back to Float32 when you return.

You can just convert the result at the end:

julia> Float32.(A \ b)  # this conversion rounds to the nearest Float32 (it doesn't truncate), which is what I wanted
2-element Vector{Float32}:
 0.0
 0.5

While I believe you want that cast added automatically by the operator, I’m not 100% sure it’s wanted there (if the library/operator ever returned Float32 directly, the cast would be a no-op). I.e., I suppose you get rounding to Float64, and then the cast adds rounding on top of rounding (and a smaller range; could that ever produce Infs, or can that not happen since the input was Float32?).

Double rounding is almost correct (I believe at most 1 ULP off), but maybe that’s not something we want a library/operator to do automatically. You might be perfectly OK with it, though.
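Here is a small demonstration of that 1-ULP effect, using a BigFloat constructed to sit just below the midpoint between two adjacent Float32 values, so that rounding through Float64 first lands exactly on the tie and then rounds the other way:

julia> x = big(1) + big(3)/2^24 - big(1)/2^60;  # just below the tie point between 1 + 2^-23 and 1 + 2^-22

julia> Float32(x)            # rounded once: down to 1 + 2^-23
1.0000001f0

julia> Float32(Float64(x))   # rounded twice: the Float64 step hits the tie exactly, which then rounds up (ties-to-even)
1.0000002f0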

[I’m no expert here; I was just reading about correctly rounded math libraries, and some can do this double rounding safely, though most (i.e. older ones) do not. What I was reading had nothing to do with sparse libraries directly, but it most likely translates to them. Julia could use such correctly rounded Float32 libraries, but I think it doesn’t, and doesn’t promise to. My understanding is that the table maker’s dilemma by now only applies to types larger than 32 bits, e.g. Float64, and is intractable to fix only there.]

This is an actual type instability, because if you do

using SparseArrays
A = sparse([1.0f0 2.0f0 3.0f0; 4.0f0 5.0f0 6.0f0])
b = Float32[1, 2]
println(eltype(A \ b))

you get Float32, i.e., you indeed get output of different types for inputs of the same type.

The reason is that for square A left division will use lu, which promotes to Float64, whereas for non-square A left division uses qr, which can handle Float32.
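You can check this dispatch directly by calling the two factorizations yourself (same matrices as above; the expected eltypes follow from the lu/qr behavior just described):

using SparseArrays, LinearAlgebra

b = Float32[1, 2]
Asq   = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])               # square
Arect = sparse([1.0f0 2.0f0 3.0f0; 4.0f0 5.0f0 6.0f0])   # non-square

println(eltype(lu(Asq) \ b))     # Float64: UMFPACK promotes
println(eltype(qr(Arect) \ b))   # Float32: the QR path preserves the input type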

Now, the promotion to Float64 was clearly done intentionally, and it cannot be removed because that would be breaking. However, casting the result back to Float32 can be done (and should be, in my opinion). I encourage you to open an issue in SparseArrays.
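In the meantime, a user-side workaround is a small wrapper (a hypothetical helper, not part of SparseArrays):

using SparseArrays

# Solve A \ b, then convert the result back to the promoted input eltype.
function solve_keep_eltype(A::AbstractSparseMatrix, b::AbstractVector)
    T = promote_type(eltype(A), eltype(b))
    return convert(AbstractVector{T}, A \ b)
end

This rounds the Float64 solution back to Float32 for square inputs, with the double-rounding caveat discussed above, and is a no-op when the solve already returns Float32.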

1 Like