Overflow issue?

Hello everyone,

Is this a bug? Why is the comparison false for some numbers but true for others?

julia> 3 * 1e-5 * 1e5  == 3.0
false

julia> 5 * 1e-5 * 1e5  == 5.0
true

julia> 6 * 1e-5 * 1e5  == 6.0
false

julia> 7 * 1e-5 * 1e5  == 7.0
false

julia> 8 * 1e-5 * 1e5  == 8.0
true

It’s merely a precision issue with floating-point numbers; no overflow occurs here.
The reason is that 6e-5 cannot be represented exactly in binary floating point.
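You can check that directly by comparing against an exact rational (this check is just an illustration, not part of the original question; Julia’s comparison between a float and a Rational is exact):

julia> 6e-5 == 6//100000  # false: 3/50000 is not a dyadic rational, so the Float64 stores a nearby value
false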

Also, unless exact equality is really intended, it’s better to compare floating-point numbers approximately:

julia> 6*1e-5
6.000000000000001e-5

julia> 6 * 1e-5 * 1e5 ≈ 6.0
true
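
The ≈ operator is just infix isapprox with default tolerances; if a particular tolerance matters, you can pass it explicitly (the 1e-12 below is only an illustrative choice):

julia> isapprox(6 * 1e-5 * 1e5, 6.0)  # same as ≈
true

julia> isapprox(6 * 1e-5 * 1e5, 6.0; atol = 1e-12)  # explicit absolute tolerance
true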

EDIT: The same happens in Python, or in any other language following the IEEE Standard for Floating-Point Arithmetic:

>>> 6 * 1e-5 * 1e5
6.000000000000001

Even ignoring the leading factors, there is no guarantee that 1/x * x == 1; 49e0 and 41f0 are such numbers. For example,

julia> 49e0 / 49e0 # float division by oneself is always 1 (except for 0 and !isfinite)
1.0

julia> 49e0 * (1/49e0) # invert-and-multiply is not the same because the 2 operations accumulate error
0.9999999999999999

julia> 49e0 / complex(49e0) # this issue leads to this annoying quirk for Complex
0.9999999999999999 - 0.0im

julia> using LinearAlgebra

julia> normalize([49e0]) # another invert-and-multiply example
1-element Vector{Float64}:
 0.9999999999999999
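
A quick brute-force scan (only an illustration, not from the examples above) confirms that 49.0 is among the small integers whose invert-and-multiply round trip misses 1.0 exactly:

julia> 49e0 in [x for x in 1.0:100.0 if x * (1 / x) != 1.0]  # values in 1..100 failing the round trip
true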

It’s one of my pet projects to change the complex-division example to equal 1, but it’s challenging to do without noticeably degrading performance. I kept getting close and then spilling registers. I’m hoping that better compiler optimization might eventually squeeze it in, but maybe it’s just barely out of reach for current architectures.


You should see if it’s fast on ARM; you get twice as many registers there.
