In fact, 1/26 = 0.0384615…, with 384615 as the repeating decimal cycle. So I suspect the values of BigFloat(1/26) and 1/26 given by Julia are incorrect. What happened?
What’s the problem/question here? Floating-point numbers have limited precision; they can’t losslessly represent all real numbers.
BigFloat(1/26) first computes 1/26, which becomes a Float64, and then converts that (already-rounded) value to a BigFloat. Use BigFloat(1//26) instead, which takes the exact rational 1//26 and converts it to BigFloat.
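A minimal sketch of the difference (variable names x and y are just for illustration; the exact BigFloat digits depend on the default 256-bit precision):

```julia
x = BigFloat(1/26)    # converts the already-rounded Float64 result of 1/26
y = BigFloat(1//26)   # rounds the exact rational 1//26 directly to BigFloat precision
x == y                # false: x carries the Float64 rounding error along
```

So the "strange" trailing digits in BigFloat(1/26) are the Float64 rounding error, faithfully reproduced at higher precision.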
Why is the output of 1/26 equal to 0.038461538461538464 and not 0.038461538461538462?
I’m going to assume there are no bugs in the Intel/AMD chips etc., so the number it gives must be the closest 64-bit binary floating-point number to the correct value. Let’s see:
julia> 1/26 + eps(1/26)
0.03846153846153847

julia> 1/26 - eps(1/26)
0.03846153846153846
It looks like it was rounded correctly, I guess.
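We can check the "closest float" claim directly, as a sketch: convert each candidate to an exact Rational{BigInt} and compare its distance to the true value 1//26 (the helper err is hypothetical, just for this check):

```julia
x  = 1/26
lo = prevfloat(x)                       # nearest Float64 below x
hi = nextfloat(x)                       # nearest Float64 above x
err(f) = abs(Rational{BigInt}(f) - 1//26)  # exact distance to the true value
err(x) < err(lo) && err(x) < err(hi)    # true: x really is the nearest Float64
```

The conversion to Rational{BigInt} is exact for any finite Float64, so these comparisons involve no rounding at all.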
You usually want to use nextfloat and prevfloat here rather than adding or subtracting eps.
eps(x) is the maximum of the distances to the two adjacent floating-point numbers, and those two distances are not always equal (otherwise it’d be the same for
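The asymmetry is easiest to see at a power of two, where the gap below is half the gap above (a sketch; the Julia docs state eps(x) == max(x - prevfloat(x), nextfloat(x) - x)):

```julia
x = 1.0
x - prevfloat(x)   # gap below: 2^-53, half an eps
nextfloat(x) - x   # gap above: 2^-52, equal to eps(1.0)
```

So x - eps(x) can land past the previous float, which is why prevfloat/nextfloat are the safer way to step to the neighbors.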