(“Easiest” here may mean many things; these are just tricks to illustrate the idea of floating-point summation. If this were an actual problem, one would need to check exactly what the purpose of the calculation is in order to decide what to do.)
In general, this is not a language issue. Python is not “failing”, nor is Julia. You’ll get exactly the same thing if you just use + in any language using double-precision floating-point arithmetic (the default precision in most languages).
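For instance (a sketch, using the same 1e100 scale as the original example), plain + and - in Julia reproduce it:

```julia
julia> (1.0 + 1e100 + 1.0) - 1e100   # each operation is rounded to double precision, so both 1.0's are lost
0.0
```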
As another example, sum([1e-100, 1.0, 1e-100, -1.0]) also gives 0.0: it is exactly the same problem, just rescaled, and here the exactly rounded answer is 2e-100. Is this a big error or a small error?
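Concretely (a sketch; the BigFloat part is only there to get an exactly rounded reference value, and setprecision is needed because the default 256 bits is not enough here):

```julia
julia> sum([1e-100, 1.0, 1e-100, -1.0])    # plain double-precision sum
0.0

julia> setprecision(2000) do               # enough bits to hold the exact result
           Float64(sum(big.([1e-100, 1.0, 1e-100, -1.0])))
       end
2.0e-100
```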
The key question is: compared to what?
It is a big error (100%) compared to the correct sum (i.e. the relative forward error is large). That’s because we are computing an ill-conditioned function (a sum with catastrophic cancellation), so the relative error in the output is large even for tiny roundoff errors. You have to be careful when doing anything ill-conditioned in finite precision.
It is a small error compared to the magnitude of the inputs, i.e. the typical scale of the problem we are giving it (1e100 in your example, 1.0 in mine). A more precise way of saying this is that the “backward relative error” is small.
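One way to make this precise (standard numerical-analysis bookkeeping, nothing Julia-specific): the relative forward error of a computed sum is at most roughly the condition number of the sum times the unit roundoff, and for summation the condition number is sum(abs.(x)) / abs(exact sum). For the rescaled example that ratio is about 1e100, so with roundoff around 1e-16 there is simply no accuracy left to guarantee:

```julia
x = [1e-100, 1.0, 1e-100, -1.0]
exact_sum = 2e-100                                  # exactly rounded value of the sum, from above
condition_number = sum(abs.(x)) / abs(exact_sum)    # ≈ 2 / 2e-100 = 1e100
```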
Summation is the easiest case to analyze, and the easiest for which you can write exactly rounded algorithms like fsum or xsum that do the computation in effectively infinite precision, as well as nearly perfect algorithms like compensated summation. But this simplicity is a two-edged sword: it also means that summation perhaps gets disproportionate attention, when in fact ill-conditioning and similar problems are something you have to be aware of for anything done in finite-precision arithmetic.
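To make the compensated-summation part concrete, here is a minimal sketch of Kahan summation (the name kahan_sum is just illustrative, not a library function; in practice you would reach for a package such as KahanSummation.jl or Xsum.jl rather than writing this yourself):

```julia
# Minimal sketch of Kahan (compensated) summation; `kahan_sum` is an
# illustrative name, not a standard library function.
function kahan_sum(xs)
    s = 0.0   # running sum
    c = 0.0   # running compensation (estimate of the low-order bits lost so far)
    for x in xs
        y = x - c        # apply the correction from the previous iteration
        t = s + y        # low-order bits of y may be lost in this addition
        c = (t - s) - y  # algebraically zero; numerically, it recovers what was lost
        s = t
    end
    return s
end

kahan_sum([1e-100, 1.0, 1e-100, -1.0])  # still 0.0 here: "nearly perfect" is not "exactly rounded"
```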
Agreed. But this could be about language guarantees, like speed vs. accuracy (discussions we have had before, e.g. about checked integer arithmetic or signaling NaNs).
This is NOT a problem with the programming language.
This is a LIMITATION of the Float64 binary floating-point representation of REAL NUMBERS.
There is only so much precision in Float64. If you add a small number to a big number, you can only represent the result with so much precision, so you start to lose digits once you exceed the limited precision of Float64.
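For a feel of the scale involved (a sketch): near 1e100 the spacing between adjacent Float64 values is about 1.9e84, so adding 1.0 literally cannot change the stored value:

```julia
julia> 1e100 + 1.0 == 1e100   # 1.0 is far below eps(1e100) ≈ 1.9e84, so it is completely absorbed
true
```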
You will have exactly the same problem with BigFloat too if you exceed its precision, as shown below.
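For example (a sketch, at the default BigFloat precision), the rescaled sum from above collapses to zero in BigFloat as well, because 1e-100 is below one ulp of 1.0 even with 256 bits:

```julia
julia> precision(BigFloat)                       # default precision: 256 bits, about 77 decimal digits
256

julia> sum(BigFloat[1e-100, 1.0, 1e-100, -1.0])  # same cancellation problem, just at a higher threshold
0.0
```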
There is no practical way for any language to guarantee exactly rounded results from all possible sequences of floating-point computations. (And there’s a questionable benefit to doing this for only a tiny number of functions like sum.)
This is very different from the issue of checking for integer overflow, which might indeed be practical in some cases (or maybe someday in all cases) because it is a purely local check on an individual elementary operation like +. In an ill-conditioned computation, in contrast, every individual step can be correctly rounded but the final answer can be completely wrong.
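To illustrate the contrast (a sketch; Base.Checked.checked_add is the checked addition that already exists in Base):

```julia
julia> Base.Checked.checked_add(typemax(Int64), 1)   # overflow is a local property of a single operation
ERROR: OverflowError: ...

julia> (1e100 + 1.0) - 1e100   # every +/- here is correctly rounded, yet the true answer is 1.0, not 0.0
0.0
```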