julia> nextfloat(round(BigFloat(2)^-76, sigdigits=3)) # scroll to the right to see "11", i.e. it's not just zeros:
1.320000000000000000000000000000000000000000000000000000000000000000000000000011e-23
is farther away than:
julia> round(BigFloat(2)^-76, sigdigits=3)
1.319999999999999999999999999999999999999999999999999999999999999999999999999999e-23
Note that 1.32e-23 is also farther from your true number, even though it may look like the number you want to get out.
I feel like I’ve seen a similar issue with plain Float64 too. You (or at least the round function) want the value closest to 3 significant digits (with infinitely many decimal zeros after) of your original number. Julia then just shows that number in full… If you also want to display it with 3 digits, that’s a different problem.
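A minimal sketch of the same effect in Float64 (the input value 1.3217e-23 is just an arbitrary example):

```julia
# Round an arbitrary Float64 to 3 significant decimal digits.
x = 1.3217e-23
y = round(x, sigdigits=3)

# The REPL shows the shortest decimal string that round-trips, so this
# *looks* like an exact 3-digit decimal even though the stored binary
# value is not exactly 1.32e-23:
println(y)              # 1.32e-23
println(y == 1.32e-23)  # true (same binary value)
```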
Note,
round(x, [r::RoundingMode]; sigdigits::Integer, base = 10)
You can round in base = 2; it’s just not the default. Then you will always have trailing zeros in that base, but in decimal you likely will not.
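For instance (a small sketch; 0.3 is an arbitrary input), rounding to 3 significant binary digits yields a value that is exact in binary, so its decimal printout terminates:

```julia
# Round to 3 significant *binary* digits instead of decimal ones.
x = 0.3
y = round(x, sigdigits=3, base=2)

# 1.01 (binary) times 2^-2 = 0.3125 is exactly representable,
# so it prints cleanly with no trailing 9s.
println(y)  # 0.3125
```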
Realize that BigFloat, like Float64, is a binary floating point type: what it stores is (integer) × 2^n, not (integer) × 10^n as in decimal. This means that the exact value 1.32e-23 does not exist (is “not representable”) as a BigFloat.
So, what round is doing is giving you the closest BigFloat value to 1.32e-23, which turns out to be that weird-looking value 1.31999999999…e-23.
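You can see the same thing without round, by parsing the decimal literal directly (a quick check, at the default 256-bit precision):

```julia
# Parsing the literal also lands on the nearest representable BigFloat,
# which is the same weird-looking 1.3199…9e-23 value.
x = big"1.32e-23"
println(x)  # starts with 1.3199…, not 1.3200…
```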
In Float64, the same thing is happening. However, when a Float64 is printed, it uses an interesting algorithm that prints the shortest decimal value that rounds to the same Float64 value, which ends up being 1.32e-23 in this case. So that gives you the illusion that the decimal rounding was performed exactly (which is, again, impossible for Float64 because 1.32e-23 is not representable). If you want to see what 1.32e-23 really is, you can use the Printf library to print more digits:
julia> using Printf
julia> @printf("%0.200e", 1.32e-23)
1.31999999999999993601885787144421313733909849419063708336383021089630029898387419962091371417045593261718750000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e-23
(None of this is specific to Julia. It’s inherent to how computer arithmetic works. Of course, you could use decimal floating point if you need to represent decimal values and operations exactly, but this is not the default in any common programming language because it’s not supported in typical CPU hardware. Also, focusing on the errors arising in conversion to/from decimal is a bit of a red herring — it’s the first thing users notice, but it’s not generally the dominant source of rounding errors in practice compared to rounding errors that occur during a computation, which happen in any base.)
Another issue is that Float64 uses minimal printing: it prints the smallest number of decimal digits necessary to give back the same value when that string is parsed as a Float64. This means that when you print the value 0.1 == 7205759403792794//2^56 (note this is not equal to 1//10), it prints as "0.1" because that’s the shortest string that parses to the Float64 value 0.1. Doing this is shockingly complicated and has been the subject of very recent and ongoing research—see the Grisu and Ryū algorithms, published in 2010 and 2018, respectively. BigFloat, on the other hand, does not do minimal printing, mainly because we delegate printing to MPFR and MPFR doesn’t do minimal printing. So you already have big"0.1" printing with all of its digits.
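A quick way to verify the exact-value claim for 0.1 (Rational conversion of a binary float is exact):

```julia
# Convert the Float64 0.1 to an exact Rational: numerator // 2^n.
r = Rational(0.1)
println(r == 7205759403792794 // 2^56)  # true: the exact stored value
println(r == 1 // 10)                   # false: not one tenth
```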
FYI: That is actually the minimal printing (for that number). I.e. taking one 9 off makes the number different in that precision, and 1.32 is also different.
However, that value given by nextfloat seems not to be the next float value:
julia> nextfloat(round(BigFloat(2)^-76, sigdigits=3)) # scroll to the right to see "11", i.e. it's not just zeros:
julia> big"1.32e-23" < big"1.320000000000000000000000000000000000000000000000000000000000000000000000000011e-23"
true
I’m not sure it’s a great idea for BigFloat because of the variable precision.
For example, it would mean that big"1.32" would always print as 1.32, even though it will be a different value depending on the current precision. At least the current printing reflects some change in the underlying value:
Ah, that’s a good point: the output format would have to capture the precision, or it would be ambiguous. Of course, it’s already an issue that you can change the precision and get a different value.
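A sketch of that precision dependence (64 and 256 bits are arbitrary choices):

```julia
# The same literal parses to a different nearest value at each precision.
a = setprecision(64) do
    big"1.32"
end
b = setprecision(256) do
    big"1.32"
end
println(a == b)  # false: 1.32 is not exactly representable at either precision
```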