Incorrect summation of Float64

Can others reproduce this? I’m getting a strange result when adding 0.1 + 0.2:

               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.8.2 (2022-09-29)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> 0.1 + 0.1
0.2

julia> 0.1 + 0.2
0.30000000000000004

julia> 0.2 + 0.2
0.4

This is the expected behavior for floating-point arithmetic: https://0.30000000000000004.com/
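To see why, it helps to look at the exact values actually stored. Here is a minimal Python sketch (Float64 is the same IEEE 754 double in Julia and Python, so the arithmetic is identical):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) shows the exact decimal expansion of the stored double.
# Neither 0.1 nor 0.2 has a finite binary expansion, so both are stored
# slightly too large, and the accumulated error surfaces in the sum.
print(Decimal(0.1))   # slightly above 1/10
print(Fraction(0.1))  # the same stored value as an exact integer ratio

assert 0.1 + 0.2 != 0.3
assert 0.1 + 0.2 == 0.30000000000000004
```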


I suspect that almost everyone who does computer programming has asked this question in some forum at least once 😁 It comes up again and again.

Here’s one write-up:

Hah. Ninja’ed by the author!


Thank you … I had no idea… rather fascinating

For even more fun, check out how tricky rounding is:

julia> (round(-0.5), round(0.5))
(-0.0, 0.0)

Python 3.10:

>>> (round(-0.5), round(0.5))
(0, 0)

Python 2.7:

>>> (round(-0.5), round(0.5))
(-1.0, 1.0)

Javascript (in Edge):

> with(Math){ [round(-0.5), round(0.5)] }
< [-0, 1]
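The Python 3 result comes from round-half-to-even ("banker's rounding"), the IEEE 754 default: ties go to the nearest *even* integer, which is why both −0.5 and 0.5 land on 0. A quick sketch:

```python
# Python 3's built-in round() rounds halfway cases to the nearest even
# integer, so -0.5 and 0.5 both round to 0, while 1.5 and 2.5 both round to 2.
results = [round(x) for x in (-1.5, -0.5, 0.5, 1.5, 2.5)]
print(results)
assert results == [-2, 0, 0, 2, 2]
```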

In Julia, we can access the different rounding modes explicitly:

julia> (round(Int, -0.5), round(Int, 0.5))
(0, 0)

julia> M = RoundToZero
RoundingMode{:ToZero}()

julia> (round(-0.5, M), round(0.5, M))
(-0.0, 0.0)

julia> M = RoundFromZero
RoundingMode{:FromZero}()

julia> (round(-0.5, M), round(0.5, M))
(-1.0, 1.0)

julia> M = RoundUp
RoundingMode{:Up}()

julia> (round(-0.5, M), round(0.5, M))
(-0.0, 1.0)
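For comparison (not from the original post), Python's `decimal` module exposes named rounding modes much like Julia's `RoundingMode` values:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_DOWN

one = Decimal('1')

# Ties to even, the IEEE default: both halves land on 0.
assert Decimal('0.5').quantize(one, rounding=ROUND_HALF_EVEN) == 0
assert Decimal('-0.5').quantize(one, rounding=ROUND_HALF_EVEN) == 0

# Ties away from zero (Julia's RoundNearestTiesAway behaves this way):
assert Decimal('0.5').quantize(one, rounding=ROUND_HALF_UP) == 1
assert Decimal('-0.5').quantize(one, rounding=ROUND_HALF_UP) == -1

# Truncation toward zero, matching Julia's RoundToZero:
assert Decimal('0.5').quantize(one, rounding=ROUND_DOWN) == 0
assert Decimal('-0.5').quantize(one, rounding=ROUND_DOWN) == 0
```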

It’s completely correct (given its assumptions) for Float64/IEEE.

You can compute this exactly with rationals, which Julia has built in (in Raku, formerly Perl 6, rationals are even the default number type):

julia> 1//10 + 2//10  # Rational math is slower
3//10

julia> Float64(1//10 + 2//10)  # convert to Float64 at the end for user-friendly printing (though the nearest Float64 isn't always the decimal you expect)
0.3
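The same exact-rational computation in Python's `fractions`, for comparison (a sketch, not from the original post):

```python
from fractions import Fraction

s = Fraction(1, 10) + Fraction(2, 10)  # exact rational arithmetic, no rounding
assert s == Fraction(3, 10)

# Converting back to a double only at the end prints in a user-friendly way:
assert float(s) == 0.3
print(s, float(s))
```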

All decimal numbers are also exact with a decimal floating-point type (this is slower than Float64, since it doesn't have hardware support; some systems do have decimal hardware, though I think not usable via this package, but it's likely still faster than rationals).

Note: with it 0.3 is exact, but e.g. 1/3 isn't; in that case rationals are better.

julia> big"0.1" + big"0.2"  # BigFloat has the same problem, just closer to the truth than Float64 (and slower than both Float64 and rationals):
0.3000000000000000000000000000000000000000000000000000000000000000000000000000017
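Python's standard-library `decimal` type illustrates the same trade-off (a sketch for comparison): decimal literals built from strings are exact, but non-terminating fractions like 1/3 still get rounded:

```python
from decimal import Decimal

# Constructed from strings, these hold the exact decimal values, so the sum
# is exactly 0.3 with no binary rounding involved:
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# But 1/3 has no finite decimal expansion, so it is rounded (to 28
# significant digits by default) and the error is observable:
assert Decimal(1) / Decimal(3) * 3 != 1
```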

That’s not tricky because of floating-point numbers; the languages simply made different design choices, because rounding halfway cases isn’t unequivocally defined. It has nothing to do with floating-point arithmetic: half-integers are exactly representable as floating-point numbers.
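Indeed, a quick Python check that half-integers are stored exactly:

```python
from fractions import Fraction

# 0.5 is 2**-1, so it has an exact binary representation and no rounding
# happens when storing it:
assert Fraction(0.5) == Fraction(1, 2)
assert Fraction(-0.5) == Fraction(-1, 2)
assert (0.5).hex() == '0x1.0000000000000p-1'  # exactly 2**-1
```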
