How to use 96-bit floats?

I am solving some linear systems with ill-conditioned sparse matrices. With Float64, the solver sometimes returns "Singular matrix".
I would like to increase the number of bits used for the floating-point numbers in the sparse matrix.
How can I get 96-bit floats in Julia? Is there any other option?

Try the built-in BigFloat type, or perhaps DoubleFloats.jl.
You might need to load GenericLinearAlgebra.jl for the linear-algebra routines to accept these number types.
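
For example, a minimal sketch (the tiny matrix here is a made-up stand-in for your problem). Note that UMFPACK, which backs `\` for sparse Float64 matrices, does not handle other element types, so for a quick experiment you can widen to a dense matrix and use the generic LU that ships with LinearAlgebra:

```julia
using LinearAlgebra, SparseArrays

# 96-bit significand, as asked in the question
setprecision(BigFloat, 96)

# hypothetical tiny example; substitute your own A and b
A = sparse([1.0 1.0; 1.0 1.0 + 1e-12])
b = [2.0, 2.0]

# widen to a dense matrix before factorizing, since the sparse
# LU behind `\` only supports Float64/ComplexF64
Abig = BigFloat.(Matrix(A))
bbig = BigFloat.(b)
x = lu(Abig) \ bbig

# or, with DoubleFloats.jl:  Abig = Double64.(Matrix(A))
```

Densifying is obviously only viable for moderately sized matrices; for large sparse problems you would need an iterative solver or a generic sparse factorization package instead.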

As mentioned above, DoubleFloats.jl may be a good option, but it is still much slower (about an order of magnitude) than the built-in Float64.

DoubleFloats.jl should generally only be 3-10x slower. That said, there are a few cases where it is currently worse than that.
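
If you want to measure the gap on your own problem rather than rely on rules of thumb, something like this (a sketch, assuming a dense random test matrix and BenchmarkTools.jl) gives a direct comparison:

```julia
using BenchmarkTools, LinearAlgebra, DoubleFloats

n  = 200
A  = randn(n, n);   b  = randn(n)
Ad = Double64.(A);  bd = Double64.(b)

@btime lu($A) \ $b;    # Float64 baseline (BLAS/LAPACK)
@btime lu($Ad) \ $bd;  # Double64 via the generic LU fallback
```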


Note that increasing the precision is rarely a good solution for ill-conditioning. (If your matrices are ill-conditioned, then you typically have vastly increased sensitivity to any form of error in your problem, including modeling error. Even if you get an answer by increasing the precision, that doesn’t mean the answer is meaningful. Or if you have a catastrophic cancellation, you might simply need vastly increased precision to work around it.)

Often, you need to re-think how you are formulating the problem.
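
One quick diagnostic before reaching for more precision (a sketch, assuming your matrix A is small enough to densify, since `cond` needs a dense matrix) is to check the condition number:

```julia
using LinearAlgebra, SparseArrays

# Rule of thumb: you lose roughly log10(cond(A)) decimal digits of
# accuracy in the solution, out of the ~16 that Float64 carries.
κ = cond(Matrix(A))
println("cond(A) = ", κ, "  (≈ ", round(log10(κ), digits=1), " digits lost)")
```

If log10(cond(A)) is much larger than the digits your inputs are actually good to, more precision will give you a number but not necessarily a meaningful one.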
