Different Float64 sum on different architectures

I sometimes get different results when summing a vector of Float64 numbers on different Windows machines. The vectors are identical, so it seems sum can produce different results across hardware architectures (even though the OS is always Windows).

Is this expected?

Is there a way to guarantee the same results?

Would sum_kbn from KahanSummation, or xsum from Xsum, guarantee the same result?
Both of these alternative sums produce identical results across machines on my particular data.


Why do you need the exact same result?

Yes, the results will be hardware-dependent. Depending on the width of the CPU’s vector registers, a different order of adding the numbers will be optimal, and these different orders can slightly change the rounding.

You can use foldl(+, x) instead of sum(x) to guarantee a particular order.
Most Kahan summation implementations aren’t SIMD-vectorized (AccurateArithmetic’s is), so they will give exactly the same result across architectures; and Xsum promises exactly rounded results, so you shouldn’t see any rounding differences there either.
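For reference, the scalar (non-SIMD) compensated-summation loop looks roughly like this — a minimal sketch of the Kahan algorithm, not the actual code from any of those packages:

```julia
# Minimal Kahan (compensated) summation: a fixed left-to-right loop,
# plus a compensation term that recovers low-order bits lost in each add.
function kahan_sum(xs)
    s = zero(eltype(xs))  # running sum
    c = zero(eltype(xs))  # running compensation
    for x in xs
        y = x - c         # apply the compensation from the previous step
        t = s + y         # low-order bits of y may be lost here
        c = (t - s) - y   # algebraically zero; captures the lost bits
        s = t
    end
    return s
end

kahan_sum(fill(0.1, 10))  # 1.0, where a naive left-to-right sum gives 0.9999999999999999
```

Because the loop is a fixed scalar order, the result is the same on any IEEE-754 hardware.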


See here for a simple example:

Basically, floating-point summation is not associative, and there is no guarantee of the order in which the terms are added.
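For instance, just regrouping three terms changes the result:

```julia
(0.1 + 0.2) + 0.3  # 0.6000000000000001
0.1 + (0.2 + 0.3)  # 0.6
(0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)  # false
```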


Here is a fun example of the lack of associativity (thanks to Stefan Karpinski):

function sumsto(x::Float64)
    0 <= x < exp2(970) || throw(ArgumentError("sum must be in [0,2^970)"))
    n, p₀ = Base.decompose(x) # integers such that `n*exp2(p₀) == x`
    [floatmax(); [exp2(p) for p in -1074:969 if iseven(n >> (p-p₀))];
     -floatmax(); [exp2(p) for p in -1074:969 if isodd(n >> (p-p₀))]]
end

Example:

julia> x = sumsto(2.3);

julia> y = sumsto(1e18);

julia> sort(x) == sort(y)
true

julia> foldl(+,x)
2.3

julia> foldl(+,y)
1.0e18

Thanks for the responses.
Very interesting and enlightening!

It seems the same sum result can be guaranteed with:
foldl(+, x) # accumulates rounding error
xsum(x) # no rounding error

I’ll just have to add a method to allow a dims argument for arrays.
I’m surprised a dims kwarg isn’t defined for foldl (or reduce).

Is there a way to disable SIMD and/or whatever else is reordering the additions, so that the standard sum executes deterministically, independently of the processor?
A @nosimd macro, perhaps?

You can get iterators over rows and columns with eachrow and eachcol, respectively.
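So a deterministic per-column sum could be sketched like this (detsum_cols is a hypothetical name, not an existing function):

```julia
# Deterministic per-column sums: each column is reduced with foldl(+, ...),
# i.e. in a fixed left-to-right order. `detsum_cols` is a hypothetical name.
detsum_cols(A) = [foldl(+, col) for col in eachcol(A)]

A = [1.0 2.0;
     3.0 4.0]
detsum_cols(A)  # [4.0, 6.0]
```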

While that sounds appealing, it actually makes the summation both slower and less accurate. If you really want that just use foldl(+, a).


I think this is the key question for all these discussions (wrt float arithmetic, reproducibility of random sequences, etc).

If the results are close, this should not matter and things should be compared with some variant of isapprox.

If they are not, and appear to be very sensitive to random seeds or low-level questions of floating point arithmetic, they are not to be trusted anyway.
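For example, two orderings of the same data can differ by an ulp or so while still agreeing under isapprox:

```julia
a = foldl(+, fill(0.1, 10))  # left-to-right: 0.9999999999999999
b = 1.0                      # the correctly rounded value of the sum
a == b          # false
isapprox(a, b)  # true
```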


Yes, of course.

I think my use of standard sum did not convey my intention.
I did not mean for Base.sum to be re-implemented for everyone.

What I meant was: could I use @nosimd sum(a) to get a deterministic sum? The advantage over foldl(+, a) is the extra methods available for sum; in particular, sum(a, dims=x).
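One way to get that today is to wrap foldl with mapslices — a sketch (detsum is a hypothetical name), not a drop-in replacement for Base.sum:

```julia
# Deterministic sum along a dimension: mapslices applies the fixed-order
# foldl(+, ...) reduction to each 1-D slice. `detsum` is a hypothetical name.
detsum(a; dims) = mapslices(v -> foldl(+, v), a; dims=dims)

A = [1.0 2.0;
     3.0 4.0]
detsum(A, dims=1)  # 1×2 matrix: [4.0 6.0]
detsum(A, dims=2)  # 2×1 matrix with entries 3.0 and 7.0
```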

This is no biggie, though; I can definitely work with foldl(+, a).

Thanks again for all the responses.