Type stability of sparse solves?

Hi,

If you run the following code:

using SparseArrays
A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b = Float32[1, 2]
println(eltype(A \ b))

it prints Float64. I thought it should've produced a vector of Float32. Is this expected/known/documented?


Note that this is not a type instability (the output type is always Float64); it is an unexpected promotion.
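
For example (just a quick sanity check), both Float32 and Float64 inputs come back as Float64, so the result type is predictable; it's the promotion that's surprising:

using SparseArrays

A32 = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b32 = Float32[1, 2]
A64 = sparse([1.0 2.0; 3.0 4.0])
b64 = Float64[1, 2]

println(eltype(A32 \ b32))  # Float64 (promoted from the Float32 inputs)
println(eltype(A64 \ b64))  # Float64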

I never noticed this, but presumably it's because the underlying SuiteSparse library is double precision only? This issue seems relevant: \ does not work for sparse matrices with Float32 elements · Issue #111 · JuliaSparse/SparseArrays.jl · GitHub … single-precision Cholesky was only recently added (Add Support for Single Precision by rayegun · Pull Request #487 · JuliaSparse/SparseArrays.jl · GitHub); I'm not sure about LU.
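
If your SparseArrays version includes that PR, one way to check is to look at the element type of a sparse Cholesky factorization of a Float32 matrix (this runs either way; older versions just promote):

using SparseArrays, LinearAlgebra

S = sparse(Float32[4 1; 1 3])  # small symmetric positive definite test matrix
F = cholesky(S)
println(eltype(F))  # Float32 with single-precision CHOLMOD support, Float64 on older versions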


Yup, the LU factorization is explicitly converting to Float64 on these lines of SparseArrays/src/solvers/umfpack.jl.
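
You can see it from the factorization object itself:

using SparseArrays, LinearAlgebra

A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
F = lu(A)           # dispatches to UMFPACK
println(eltype(F))  # Float64: the factorization is stored in double precision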

In 2023, the SuiteSparse author wrote:

UMFPACK and SPQR are not yet revised to handle single precision matrices

so it’s (still, apparently) a limitation of the underlying libraries.
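
In the meantime, if you just need a Float32 result and can live with the double-precision factorization, converting back is a simple workaround:

using SparseArrays

A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b = Float32[1, 2]
x = Float32.(A \ b)  # solves in Float64 internally, then converts the result back
println(eltype(x))   # Float32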


You should be able to use Sparspak.jl without the implicit promotion.
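
Something like this should work (a sketch from memory; I'm assuming sparspaklu, Sparspak.jl's convenience LU constructor, so double-check the package docs for the exact API):

using SparseArrays, Sparspak

A = sparse([1.0f0 2.0f0; 3.0f0 4.0f0])
b = Float32[1, 2]
x = sparspaklu(A) \ b  # pure-Julia LU, so no promotion to Float64 (assumed)
println(eltype(x))     # expected: Float32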

It looks like Pardiso.jl and MUMPS.jl also support single precision.