Hi,
I am encountering a problem when promoting a Float64 to a BigFloat, specifically that I do not get zero padding beyond the Float64 precision.
For example I have the following:
julia> BigFloat(0.01)
0.01000000000000000020816681711721685132943093776702880859375
julia> BigFloat(1)/100
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995
Do you guys know of any fix, or is this the normal behaviour of BigFloat?
jling
August 8, 2021, 8:47pm
2
try:
julia> big"0.1"
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
oops, it's 0.01 not 0.1. Looks like big(1)/100 is the closest you can get.
Sure, that works:
julia> big"0.1"
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
julia> parse(BigFloat, string(0.01))
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995
julia> parse(BigFloat, string(0.1))
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002
but converting to a string and then back to BigFloat seems like a hassle and has overhead:
julia> @benchmark BigFloat(x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 958 evaluations.
 Range (min … max):  89.468 ns … 1.547 μs   ┊ GC (min … max): 0.00% … 93.54%
 Time  (median):     99.712 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   104.143 ns ± 60.618 ns ┊ GC (mean ± σ):  2.72% ± 4.43%

 [histogram omitted]
 89.5 ns          Histogram: frequency by time          147 ns <

 Memory estimate: 112 bytes, allocs estimate: 2.
julia> @benchmark BigFloat(string(x)) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 175 evaluations.
 Range (min … max):  633.166 ns … 7.969 μs   ┊ GC (min … max): 0.00% … 90.84%
 Time  (median):     697.649 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   721.763 ns ± 281.261 ns ┊ GC (mean ± σ):  1.71% ± 4.09%

 [histogram omitted]
 633 ns          Histogram: frequency by time          945 ns <

 Memory estimate: 557 bytes, allocs estimate: 5.
I thought of asking here in case anyone knew of a workaround for now. But I will also open an issue on GitHub to report the problem.
Anyway, thank you @jling
jling
August 8, 2021, 8:58pm
4
You don't need to go through strings; big(1)/100 gives you the same thing, no?
I don't think it's a problem: 0.01 is just not representable in binary floating point. What you get is close enough, and you can always increase the precision.
Also remember you have 1//100 if you need an exact rational.
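A quick sketch of those alternatives (the 512-bit precision below is an arbitrary choice for illustration):

r = 1 // 100          # exact rational, no rounding at all
x = big(1) / 100      # BigFloat at the default 256-bit precision

# widen the working precision for a whole block
setprecision(BigFloat, 512) do
    big(1) / 100      # now correct to ~154 decimal digits
end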
2 Likes
Yes, big(1)/100 is correct and there is no need for strings, but if one wants consistent zero padding for an arbitrary Float64, converting to a string and then to BigFloat seems to work consistently:
julia> a = rand()
0.6978961477555439
julia> BigFloat(a)
0.69789614775554387193778893561102449893951416015625
julia> BigFloat(string(a))
0.6978961477555438999999999999999999999999999999999999999999999999999999999999987
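If that round-trip is wanted in several places, it can be wrapped in a small helper (decimal_big is a hypothetical name; it still pays the string-conversion overhead benchmarked above):

# parse the shortest decimal representation of x back into a BigFloat,
# giving the BigFloat nearest to the printed decimal value
decimal_big(x::Float64) = parse(BigFloat, string(x))

decimal_big(0.6978961477555439)   # ≈ 0.6978961477555438999...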
Float64 is binary, not decimal. The values that are printed aren't the real stored value.
1 Like
Sukera
August 8, 2021, 11:09pm
7
1 Like
Palli
August 8, 2021, 11:24pm
8
There's also:
See here, though maybe you don't want to use it:
Hello everybody!
I'm busy now studying the speed of convergence of sequences of real numbers. I started with classical algorithms like Archimedes' polygonal approximation of pi and Newton's computation of a square root. I have already illustrated them with some simple Python programs and I want to translate them into Julia. Here's an example:
from decimal import Decimal, getcontext
def u(n):
    if n == 0:
        return Decimal(1)
    return (u(n - 1) + (2 - u(n - 1) * u(n - 1)) / (2 * u(n - 1)))
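A minimal Julia sketch of that same Newton recursion for sqrt(2), using BigFloat instead of Python's Decimal (the name u mirrors the Python; the iteration count below is arbitrary):

# Newton iteration for x^2 = 2, starting from u(0) = 1
function u(n)
    n == 0 && return big(1)
    t = u(n - 1)
    return t + (2 - t * t) / (2t)   # one Newton step
end

u(6)   # converges quadratically toward sqrt(big(2))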
1 Like
big() and BigFloat() convert a Float64 value to 256 bits of precision (the default). A Float64 is stored in IEEE 754 binary64 format.
julia> x=0.01 # Float64
0.01
julia> y=BigFloat(x)
0.01000000000000000020816681711721685132943093776702880859375
julia> bitstring(x)
"0011111110000100011110101110000101000111101011100001010001111011"
If we decode that bitstring according to the IEEE 754 format, we can see the exact value it's encoding:
| Part | Binary | Decimal |
| --- | --- | --- |
| Sign | 0 | +1 |
| Exponent | 01111111000 | 1016 |
| Fraction | 0100011110101110000101000111101011100001010001111011 | 0.2800000000000000266453525910037569701671600341796875 |
| Exact | | 0.01000000000000000020816681711721685132943093776702880859375 |
We see that the IEEE 754 binary representation is storing the exact value of 0.01000000000000000020816681711721685132943093776702880859375, which is what was converted to BigFloat.
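A sketch of that decoding in Julia (just string slicing on the bitstring output; the field widths follow the binary64 layout):

bits = bitstring(0.01)                       # 64-character bit pattern
sign = bits[1]                               # '0' → positive
expo = parse(Int, bits[2:12]; base = 2)      # raw exponent field: 1016
frac = parse(UInt64, bits[13:64]; base = 2)  # 52-bit fraction field

# exact encoded value: (1 + frac / 2^52) * 2^(expo - 1023)
exact = (1 + big(frac) / big(2)^52) * big(2.0)^(expo - 1023)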
Float64 has about 15.95 decimal digits of precision, and the IEEE 754 representation of 0.01 achieves that: it differs from 0.01 by only about 2.1e-19.
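That digit count comes straight from the width of the significand:

53 * log10(2)   # ≈ 15.95 decimal digits carried by Float64's 53 significand bits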
However, for your BigFloat values, you probably want the closest value to 0.01.
julia> zu=BigFloat("0.01",RoundUp)
0.01000000000000000000000000000000000000000000000000000000000000000000000000000013
julia> zd=BigFloat("0.01",RoundDown)
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995
julia> zn=BigFloat("0.01",RoundNearest)
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995
The rounded-up value has an error of -1.3e-79 from 0.01, whereas the rounded-down value has an error of 5e-81. So the rounded-down value is nearer to 0.01.
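Those error figures can be checked against a higher-precision reference for 1/100:

# reference value correct to ~308 decimal digits
ref = setprecision(BigFloat, 1024) do
    big(1) / 100
end

Float64(ref - zu)   # ≈ -1.3e-79 (rounded-up value overshoots 0.01)
Float64(ref - zd)   # ≈  5.0e-81 (rounded-down value undershoots)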
5 Likes