Julia equivalent of Python's "fsum" for floating point summation

My understanding is that Python’s fsum function doesn’t use Kahan summation; rather, it uses an algorithm by Shewchuk that guarantees an exactly rounded result (i.e. the result is as if you did the sum in infinite precision and then rounded it to the closest floating-point value).
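
To illustrate the idea, here is a rough Julia sketch of the Shewchuk-style "partials" approach, ported from the well-known Python msum recipe rather than from CPython's actual C implementation (the function name `fsum_sketch` is my own, and the final rounding step is simplified):

```julia
# Sketch of Shewchuk-style summation with a list of "partials".
# Invariant: the exact sum of `partials` equals the exact sum of the inputs
# processed so far, with the partials non-overlapping and increasing in magnitude.
function fsum_sketch(xs)
    partials = Float64[]
    for x in xs
        i = 1
        for y in partials
            if abs(x) < abs(y)
                x, y = y, x          # ensure |x| >= |y| for the two-sum below
            end
            hi = x + y               # two-sum: hi + lo == x + y exactly
            lo = y - (hi - x)
            if lo != 0.0
                partials[i] = lo     # keep the nonzero low-order part
                i += 1
            end
            x = hi
        end
        resize!(partials, i)         # drop partials that were absorbed into x
        partials[i] = x
    end
    return sum(partials)             # simplified; fsum's final rounding is more careful
end
```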

Kahan summation, in contrast, doesn’t guarantee an exactly rounded result, but its error (relative to the L₁ norm of the input) is bounded independent of the number of terms. The default sum function in Julia is also quite accurate because it uses pairwise summation, with an error bound that grows only as the log of the number of terms.
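
For reference, Kahan (compensated) summation itself is only a few lines; a minimal sketch in Julia (again, a name of my own, and not what `Base.sum` does, since Base uses pairwise summation):

```julia
# Minimal sketch of Kahan compensated summation. The compensation c recovers
# the low-order bits lost in each addition, which is why the error bound does
# not grow with the number of terms.
function kahan_sum(xs)
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in xs
        y = x - c
        t = s + y            # low-order bits of y are lost here...
        c = (t - s) - y      # ...and recovered into c for the next iteration
        s = t
    end
    return s
end
```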

(My understanding is that Shewchuk’s algorithm is especially useful if you want a small relative error even for ill-conditioned sums: sums of both positive and negative values where the result is nearly zero due to cancellations. I’ve never used the algorithm myself, though.)
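
As a contrived example of such an ill-conditioned sum (using the `fsum_sketch` and `kahan_sum` sketches above):

```julia
# The exact result is 2.0, but the large terms cancel, so naive Float64
# summation (and Kahan summation too) returns 0.0: an infinite relative error.
x = [1.0, 1e100, 1.0, -1e100]

sum(x)          # 0.0
kahan_sum(x)    # 0.0
fsum_sketch(x)  # 2.0, the exactly rounded result
```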
