0.1 is a number that can be exactly represented by a Float64, a Float32, or a Float16 (at least according to how we print them). But not, it would seem (again according to printing), by a BigFloat:
```julia
julia> parse(Float64, "0.1")
0.1

julia> parse(Float32, "0.1")
0.1f0

julia> parse(Float16, "0.1")
Float16(0.1)

julia> parse(BigFloat, "0.1")
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
```
Is this true? Or is it printing wrong?
(I suspect the answer is going to be insightful about floating point representations and printing being round-trip-able with parsing)
As Jeffrey pointed out, this assumption is wrong. Another way to see it:
```julia
julia> using Printf

julia> @printf "%.20f" 0.1f0
0.10000000149011611938
julia> @printf "%.50f" 0.1
0.10000000000000000555111512312578270211815834045410
```
You can also show by hand that 1/10 is an (infinitely) repeating fraction in base 2, which is why it can’t be represented exactly by binary floating point at any finite precision: https://softwareengineering.stackexchange.com/a/237018
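The by-hand argument can be sketched in a few lines of code. This is a minimal illustration (the function name `binary_digits` is mine, not from the thread): repeatedly double the fraction and emit 1 whenever it reaches or exceeds 1, which is exactly long division in base 2.

```julia
# Emit the first n base-2 fractional digits of num/den by repeated doubling.
# For 1/10 the remainders cycle, so the digits repeat forever: 0.0(0011)₂
function binary_digits(num::Int, den::Int, n::Int)
    digits = Int[]
    for _ in 1:n
        num *= 2
        if num >= den
            push!(digits, 1)
            num -= den
        else
            push!(digits, 0)
        end
    end
    return digits
end

binary_digits(1, 10, 12)  # [0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```

Since the remainder returns to a value already seen, the block `0011` repeats without end, so no finite number of bits can store 1/10 exactly.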
How does the show method know that we want to see 0.1 and not the real value that’s stored?
It’s for the same reason that 0.1 + 0.2 != 0.3 (https://0.30000000000000004.com)
Julia prints the shortest string that, when parsed back, uniquely identifies the float in question. The algorithm is called “Ryu”; you can find the Julia implementation here.
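That round-trip property is easy to check empirically. A quick sketch (my own test loop, not from the thread): draw random bit patterns, reinterpret them as Float64, and confirm that printing and re-parsing recovers the identical value.

```julia
# Print-then-parse must return the exact same Float64 for every value
# (NaN is skipped since NaN != NaN and there are many NaN bit patterns).
for _ in 1:10_000
    x = reinterpret(Float64, rand(UInt64))
    isnan(x) && continue
    @assert parse(Float64, string(x)) === x
end
```

The `===` comparison also distinguishes `0.0` from `-0.0`, so even signed zeros round-trip.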
Is it that we don’t use Ryu for BigFloat but instead use some inferior algorithm?
BigFloat is really a wrapper around the MPFR library, which has its own printing algorithm.
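And the extra precision doesn’t change the underlying fact. A small sketch (assumptions: default MPFR semantics, nothing thread-specific): at any binary precision, the stored BigFloat still differs from the true rational 1/10, because comparison between a float and a `Rational` in Julia is exact.

```julia
# No amount of binary precision makes 1/10 exact; it only shrinks the gap.
for prec in (53, 256, 1000)
    setprecision(BigFloat, prec) do
        @assert parse(BigFloat, "0.1") != 1//10
    end
end
```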
And for fun
BigFloat is binary floating point, and it CANNOT exactly represent 0.1, because it is base-2 rather than base-10.
I think part of the confusion is that the Julia expression 0.1 is a Float64 that is close to, but not equal to, 1/10; Float64(0.1) is really the same thing as 0.1. I try always to be careful to distinguish the Julia literal 0.1 from the real number 1/10 = 0.1, which are different (Julia on the left, the common number on the right).
This also implies that 0.30000000000000004 is the correctly rounded answer to the question “what is 0.1 + 0.2?” (with the Float64 inputs), with rounding error 0.00000000000000002 (and not 0.00000000000000004, as one might think).
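One way to see this concretely: conversion of a float to `Rational{BigInt}` is exact in Julia, so we can compute the true sum of the two stored values and check that the Float64 addition rounds it correctly. (The helper name `exact` is mine, purely for illustration.)

```julia
# Exact rational value of a binary float (no rounding happens here).
exact(x) = Rational{BigInt}(x)

# 0.1 is really 3602879701896397 / 2^55.
exact(0.1)                    # 3602879701896397//36028797018963968

# The exact sum of the two stored values, then rounded to Float64,
# matches what IEEE addition returns: `+` is correctly rounded.
s = exact(0.1) + exact(0.2)
Float64(s) == 0.1 + 0.2       # true
```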