Sum of float64 vector gives slightly incorrect answer

Unfortunately, floating-point arithmetic is complicated when you get down to brass tacks, and it is tricky to write something both short and correct.

If you google for “floating-point introduction” or similar, you’ll find lots of tutorials online. Many of them are written for people with computer-science backgrounds and are rather technical, e.g. this one is well known:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Here are a couple of gentler introductions, although both of them spend more time than I would like on fixed-point arithmetic:
Floating Point Demystified, Part 1
http://homerreid.dyndns.org/teaching/18.330/Notes/MachineArithmetic.pdf

Agreed, don’t start with the Goldberg article: it really isn’t that good, and a lot of the stuff isn’t that relevant (e.g. every computer nowadays uses guard bits).

I personally think the first few chapters of Nick Higham’s book give a really good introduction.

I was thinking it would be useful to explain

julia> 5/6
0.8333333333333334

julia> 0.4 - 0.3 - 0.1
2.7755575615628914e-17

in a paragraph going no deeper than the binary representation, 64 bits, 52 (53) for the mantissa, and how this is not a Juliaism but an IEEE 754 standard burned into hardware, with links to deeper explanations and mentions of alternatives like BigFloat. Seems like it’s needed twice a week lately.
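
A sketch of the kind of thing such a paragraph could show (Float64 is the IEEE 754 binary64 format: 1 sign bit, 11 exponent bits, 52 stored mantissa bits, so a decimal literal like 0.1 is stored as the nearest representable binary fraction):

julia> bitstring(0.1)   # sign | exponent | mantissa
"0011111110111001100110011001100110011001100110011001100110011010"

julia> big(0.1)   # the exact value those 64 bits represent
0.1000000000000000055511151231257827021181583404541015625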


This should definitely be added to the FAQ. R has FAQ 7.31 for this and it’s used every time somebody raises this issue.


If the data is not sensitive, would you create a gist with 7 or so rows and post the link? A simple, runnable sequence that does the transform that has been causing trouble would be helpful, too.

Great idea. On a single page, you can explain the problem, if not the solution, e.g. something along the lines of:

  1. how computers represent floating point numbers,
  2. a few examples, like sums that don’t behave like their decimal counterparts, or numbers that are simply unrepresentable, e.g. the 5/6 that came up in another thread,
  3. Julia’s built-in facilities for mitigating, but not eliminating, the problem (e.g. the fancy algorithms in sum, BigFloat; see the sketch below this list),
  4. further reading (just a few nice articles)

Very little would go a long way. Just as a substitute for explaining this every time.
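
To make item 3 above concrete, here is a rough sketch (illustrative only; the exact error sizes depend on the data): a naive left-to-right loop typically accumulates more error than Base’s sum, which uses pairwise summation, while a BigFloat sum serves as a higher-precision reference.

x = fill(0.1, 10_000)

naive     = foldl(+, x)             # plain left-to-right accumulation
pairwise  = sum(x)                  # Base's sum uses pairwise summation
reference = Float64(sum(big.(x)))   # 256-bit BigFloat sum, converted back at the end

abs(naive - reference), abs(pairwise - reference)   # the pairwise error is typically smaller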

I’ve been trying to use the DecFP package, but numbers of type Dec64 (and the other DecXX types too, I’m assuming) don’t work with cond() or other eigen-related functions. Any way around this?

If you are doing linear algebra, you should be OK with Float64 numbers. As you noticed, most linear algebra routines work only with the standard base-2 types (Float64 and Float32), because those are the fastest on current computer architectures.

In any case, what you are doing most likely does not require decimal numbers.

As explained above, decimal numbers will run into the same representation issue that you originally posted about. This is because most real numbers cannot be exactly represented in any base (be it 2, 10, or 36) with a finite amount of memory. DecFP was suggested because you were originally concerned with the addition of decimal numbers. More complex operations will certainly return numbers that are not representable in finite-precision base 10 either, at which point DecFP no longer helps.
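
For instance, a small sketch of that distinction (assuming the DecFP.jl package is installed; parse is used here just to construct exact decimal values):

using DecFP

# Short decimal fractions are exact in Dec64, so decimal addition behaves as expected:
parse(Dec64, "0.1") + parse(Dec64, "0.2") == parse(Dec64, "0.3")   # true

# But other operations produce values with no finite decimal representation either:
x = Dec64(1) / Dec64(3)   # 1/3 rounded to 16 significant decimal digits
x * 3 == 1                # false: the rounding error does not cancel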

If you tell us more about your application we might be able to suggest better solutions.


I’ve just tested using BigFloat and the LinearAlgebra package, which provides the missing functionality. Unfortunately the slowdown is considerable and the increased accuracy only showed up after the 10th decimal place or so, so, as you said, I will be fine with Float64.

My application is portfolio optimization and backtesting; the reason I started this thread was a weird round-off error which has since disappeared (weird in that it occurred rarely but had a pattern). The conversation above made me want to check how much of an impact arbitrary-precision decimals might have, and it’s not significant (for a financial application), even with a large dataset and many iterations.

I don’t think it does that. It just has a smaller error (the magnitude of which you have some control over) that you may not notice.

By “missing functionality” I mean the cond(Matrix) function, which didn’t work with Dec64 or BigFloat matrices. The LinearAlgebra package provides cond and the other required functions for BigFloat, but not for Dec64.

Does this clear it up?

Yes, I thought you meant that it gets rid of floating point inaccuracy.

The built-in cond function relies on the SVD computation provided by the external library LAPACK, which only works with Float32 and Float64. To do this for other number types, you can use the GenericSVD.jl package (https://github.com/JuliaLinearAlgebra/GenericSVD.jl), which implements the singular value decomposition for generic number types.
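
If the package is a hurdle, a hedged workaround (using the current LinearAlgebra API, and sidestepping LAPACK entirely) is to compute a condition number in the 1-norm, which only needs inv and opnorm and works for generic element types such as BigFloat; the 2-norm value from an SVD would differ somewhat, but is usually comparable:

using LinearAlgebra

A = big.([1.0 2.0; 3.0 4.0])   # a BigFloat matrix, which LAPACK cannot handle

# 1-norm condition number: norm of A times norm of inv(A).
# inv uses a generic LU factorization, so this works for BigFloat.
cond1 = opnorm(A, 1) * opnorm(inv(A), 1)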


What has happened to that cond function? Has it been moved out to the stdlib along with LinAlg?

How does the generic code in GenericSVD compare performance-wise when used with Float64?

It hasn’t yet been updated to the v0.6/v0.7 syntax though.

A related problem, which I find disconcerting: floating point arithmetic has no commutative property.

julia> 0.1 + 0.1 + 100.0 == 100.0 + 0.1 + 0.1
false

julia> a = [0.1, 0.1, 100.0];

julia> sum(a)
100.2

julia> sum(reverse(a))
100.19999999999999

Is there a reliable solution for this or it is just one of the many flavors in which floating point rounding inaccuracy unavoidably emerges?

This is not a loss of commutativity, but rather of associativity. Commutativity still holds: for any given binary operation, the order of the two operands is irrelevant.

julia> 100.0 + 0.1 == 0.1 + 100.0
true

However, for any sequence of operations (e.g. additions), the global order of operations matters (i.e. it is necessary to parenthesize all basic operations to uniquely specify the result):

julia> 100. + 0.1 + 0.1 == 0.1 + 0.1 + 100.
false

julia> 100. + 0.1 + 0.1 == 0.1 + (0.1 + 100.)
true

There are many reliable solutions to ensure that you always get a correctly rounded result from your sum (i.e. a result equal to the exact mathematical result rounded to the nearest representable floating-point value). But all these solutions come at a cost.

Using rational numbers based on big integers is a relatively simple example of such techniques. If you are specifically computing a sum, there exist dedicated accurate algorithms (for example Kulisch accumulators or, more recently, Demmel’s accurate summation algorithm).
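
For example, a minimal sketch of the rational-number idea (every Float64 value converts to a Rational{BigInt} exactly, so the intermediate sum is exact and rounding only happens when converting back):

a = [0.1, 0.1, 100.0]

exact   = sum(Rational{BigInt}.(a))   # exact rational sum, independent of order
rounded = Float64(exact)              # convert back to Float64 only at the very end

rounded == Float64(sum(Rational{BigInt}.(reverse(a))))   # true: order no longer matters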


Thanks for the explanation. I see, it’s a lack of associativity.
A fast solution I adopted to get consistent results (not exact results) is to sort the vectors before applying a function, e.g. summation:

julia> a = [0.1, 0.1, 100.0];

julia> b = [100, 0.1, 0.1];

julia> sum(sort(a)) == sum(sort(b))
true

This also works with other functions (I spotted the problem when calculating entropies of discrete distributions, where a different order of the same values resulted in different entropies).

I am curious why you need that. Also note that the exact results may still depend on the software and hardware environment, so it is only consistent in one particular setting.

It would probably be a bit better to separate into positive and negative values and sort each half by ascending absolute value. But as @Tamas_Papp said, this will only get you so far and may still not produce portable results.
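
A rough sketch of that suggestion (the helper name ordered_sum is made up here for illustration):

# Sum negatives and positives separately, each in order of increasing magnitude,
# so the result no longer depends on the original order of the input.
function ordered_sum(x)
    neg = sort(filter(xi -> xi < 0, x), by = abs)
    pos = sort(filter(xi -> xi >= 0, x), by = abs)
    return sum(neg) + sum(pos)
end

ordered_sum([0.1, 0.1, 100.0]) == ordered_sum([100.0, 0.1, 0.1])   # true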

I was comparing entropies (and other measures) of probability distributions and calculating with them (e.g. H1 - H2), and getting nonzero values when I expected zero.
What I would like is consistency within my setting and comparisons, so no need for portability (at the moment).

Thanks @StefanKarpinski for the suggestion.