I will never understand floating-point arithmetic

I will never understand floating-point arithmetic…

julia> a = 10000 * 7 / 100
700.0

julia> b = 10000 * 0.07
700.0000000000001

julia> c = 1000 * 0.69999999999999999
700.0

(but at least I am aware of the danger…)


As we say in France: “un homme averti en vaut 1.999999999999999”

(Literally, the saying goes: “a forewarned person is worth two”)


Also, I’m not sure you were really looking for an explanation, but maybe it will help to note that:

julia> 7/100
0.07

So this is merely an example of non-associativity of the floating-point operations:

julia> 10000 * (7 / 100)
700.0000000000001

julia> (10000 * 7) / 100
700.0

Look at this excellent presentation by Avik Sengupta


Part of the confusion is that, although this is printed as 0.07, it’s actually a slightly different number. (Binary) floating-point values are integers times powers of two, so 7/100 is not exactly representable and has to be rounded to:

julia> big(7/100)
0.070000000000000006661338147750939242541790008544921875

When you realize that a rounding operation has already occurred here, then the fact that multiplying this by 10000 results in something different from 700 is not so surprising. In contrast, when you compute 10000 * 7 / 100 it is parsed as (10000 * 7) / 100, and each of these operations can be performed exactly (the result of each operation is an integer value) so there is no rounding (even if you write 1e4 * 7 / 100 so that the first multiplication is done in floating point).
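A quick REPL sketch of that point (nothing here assumes any package):

```julia
# Each step of (10000 * 7) / 100 produces an exactly representable result,
# so no rounding ever occurs -- even when the arithmetic is done in Float64:
@show 1e4 * 7          # 70000.0 -- exact product
@show 1e4 * 7 / 100    # 700.0   -- exact quotient (the result is an integer)
# By contrast, 0.07 was already rounded before the multiplication happened:
@show 1e4 * 0.07       # 700.0000000000001
```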

The reason that 7/100 prints as 0.07 is tricky — printing floating-point values in decimal has a long history, and basically all of computer science has coalesced around the principle that you should print the shortest decimal number that rounds to the same value when converted back to floating point. (0.07 will print the same way in Python, C, …)
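You can see the shortest-round-trip rule in action with a small sketch:

```julia
x = 7 / 100
# "0.07" is the shortest decimal string that parses back to the same Float64,
# so that's what gets printed:
@show string(x)                        # "0.07"
@show parse(Float64, string(x)) == x   # true -- printing round-trips
# ...but the stored value is not exactly 7/100:
@show 10000 * x == 700.0               # false
```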


It’s not so hard to understand floating-point arithmetic, actually! Trefethen and Bau’s book “Numerical Linear Algebra” covers the essentials in a couple lectures (each 4 or 5 pages of text). It’ll change your reaction to the above from “Wha…?” to “Yep, that’s how it works, no mystery about it.” Well worth reading.


What is the rule here? For a / b, when both a and b (or at least b) are integers (or otherwise exact), and the exact result is representable, do you then get the exact result, i.e. an integer value?

About “it is parsed as `(10000 * 7) / 100`”: for constants, is the compiler allowed to evaluate it as `7 * (10000 / 100)` instead — and what if 7 had not been a constant?

If you want that calculation to be exact, I recommend using a rational-number representation:

julia> 10000*7//100
700//1
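Rationals carry an exact numerator and denominator through the whole computation, so nothing is rounded until you explicitly convert to a float. A brief sketch:

```julia
r = 10000 * 7 // 100
@show r                        # 700//1
@show float(r)                 # 700.0
# Rational arithmetic is exact, unlike the floating-point analogue:
@show 1//10 + 2//10 == 3//10   # true
@show 0.1 + 0.2 == 0.3         # false
```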

To complement a bit: IEEE 754 Float64 values are stored as M × 2^E, where M is the 53-bit integer mantissa and E the 11-bit exponent (some further details omitted).

700.0 becomes M=700 and E=0, as integer values between -2^52 and 2^52-1 are stored with E=0.

0.07 becomes M=5044031582654956 and E=-56, in terms of bits very different from an integer value.

BTW, is there a package for Julia that would reveal the mantissa and exponent of IEEE 754 floats?

Yes. The result of each elementary fp operation (+,-,*,/) is as if you had done that operation exactly and then rounded to the closest representable value. This means that if the exact result is representable, it will be returned exactly.
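A few examples of that correct-rounding rule (a sketch):

```julia
# Each elementary op computes the exact result, then rounds it to the
# nearest representable Float64:
@show 0.25 + 0.5   # 0.75 -- exact result representable, returned exactly
@show 3.0 / 2.0    # 1.5  -- likewise exact
@show 0.1 + 0.2    # 0.30000000000000004 -- exact sum not representable, rounded
```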


There are built-in functions: How to get the significand and the exponent of a floating point number?
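Concretely, the built-ins look like this (a sketch; the reconstruction of the integer-mantissa view at the end is just for illustration):

```julia
@show significand(700.0), exponent(700.0)  # (1.3671875, 9): 1.3671875 * 2^9 == 700
@show frexp(700.0)                         # (0.68359375, 10)
@show bitstring(700.0)                     # sign | 11 exponent bits | 52 fraction bits
# Recover the "integer mantissa times power of two" view used above:
m = Int(significand(700.0) * 2^52)
e = exponent(700.0) - 52
@show m, e                                 # (6157265115545600, -43)
@show m * 2.0^e                            # 700.0
```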



Slightly more specifically: no, not unless you use @fastmath.


If you dislike this, then try the very nice DecFP.jl package, which implements the IEEE 754-2008 decimal floating point standard (Dec32, Dec64, and Dec128 types).
There are some cases dealing with currencies where using decimal floating point may even be required (at least at one point, by EU directive).

As an example: when calculating a 5% sales tax on a $0.70 candy bar, binary floating-point arithmetic gives you 3 cents, but decimal arithmetic gives 4 cents (and the government will want that extra cent, to be sure, and it can add up!)
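The binary half of that example can be checked directly; the decimal half assumes the DecFP.jl package is installed, so it is left commented out:

```julia
tax = 0.05 * 0.70
@show tax                    # 0.034999999999999996 -- already below 0.035
@show round(tax, digits=2)   # 0.03 -- the 3-cent answer
# With DecFP.jl (assumed installed), the product is exactly 0.035,
# which rounds (ties to even) to 0.04:
# using DecFP
# round(parse(Dec64, "0.05") * parse(Dec64, "0.70"), digits=2)
```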


…fantastic example…

If you’re after visualisation: ColorBitstring.jl


More specifically, @fastmath doesn’t change how the expression is parsed, but it can allow the compiler to re-associate floating-point expressions.
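Re-association genuinely changes results; you can see why without @fastmath at all:

```julia
# Floating-point addition is not associative, so a compiler that regroups
# terms (as @fastmath permits) can change the rounded result:
@show (0.1 + 0.2) + 0.3   # 0.6000000000000001
@show 0.1 + (0.2 + 0.3)   # 0.6
@show (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)   # false
```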


Ha! Please look at my only message on this forum — three likes, by the way! But keep in mind that I don’t fully understand it either; I just hope that when it comes down to it, I’ll turn to this lesson and finally understand!


6 posts were split to a new topic: Avogadro’s number as a floating-point value