Note that Xsum.jl is actually faster than Float64 compensated summation and is much more accurate: it is equivalent to summing in infinite precision and then rounding the exact sum to Float64 at the end. By the same token, there's no need to bother with BigFloat or DoubleFloat etcetera just to do sums accurately; Xsum will be better.
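As a concrete illustration, here is a minimal sketch (assuming the `xsum` function exported by Xsum.jl, and using an input vector chosen just to force catastrophic cancellation in Float64):

```julia
using Xsum

x = [1.0, 1e100, 1.0, -1e100]   # the huge terms swamp the small ones in Float64

sum(x)    # 0.0  — ordinary Float64 summation loses the two 1.0 terms entirely
xsum(x)   # 2.0  — the exact (infinite-precision) sum, rounded once to Float64
```

Note that even BigFloat at its default precision would not necessarily rescue a case like this, since 1e100 + 1 needs more significand bits than the default supplies, whereas xsum gives the correctly rounded answer regardless.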
But I continue to think that if you find yourself needing arbitrary precision in a practical computational setting, there is a high probability that you are making a mistake in how you formulated your algorithm. (Or you are simply misunderstanding accuracy, i.e., thinking that a tiny roundoff error is more important than it actually is.)