Promotion to BigFloat -- no zero padding

Hi,

I am running into a problem when promoting a Float64 to a BigFloat, specifically that I do not get zero padding beyond the Float64 precision.
For example, I have the following:

julia> BigFloat(0.01)
0.01000000000000000020816681711721685132943093776702880859375

julia> BigFloat(1)/100
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995

Do you guys know of any fix or is this the normal behaviour of BigFloat?

try:

julia> big"0.1"
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002

oops, it’s 0.01 not 0.1. Looks like big(1)/100 is the closest you can get.

Sure, that works:

julia> big"0.1"
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002

julia> parse(BigFloat, string(0.01))
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995

julia> parse(BigFloat, string(0.1))
0.1000000000000000000000000000000000000000000000000000000000000000000000000000002

but converting to a string and then back to BigFloat seems like a hassle and has overhead:

julia> @benchmark BigFloat(x) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 958 evaluations.
 Range (min … max):   89.468 ns …  1.547 μs  ┊ GC (min … max): 0.00% … 93.54%
 Time  (median):      99.712 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   104.143 ns ± 60.618 ns  ┊ GC (mean ± σ):  2.72% ±  4.43%

        β–β–ƒβ–…β–†β–‡β–ˆβ–‡β–…β–‚                                               
  β–…β–„β–†β–…β–†β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–…β–‚β–‚β–‚β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–β–‚β–‚β–‚β–β–‚β– β–ƒ
  89.5 ns         Histogram: frequency by time          147 ns <

 Memory estimate: 112 bytes, allocs estimate: 2.

julia> @benchmark BigFloat(string(x)) setup=(x=rand())
BenchmarkTools.Trial: 10000 samples with 175 evaluations.
 Range (min … max):  633.166 ns …   7.969 μs  ┊ GC (min … max): 0.00% … 90.84%
 Time  (median):     697.649 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   721.763 ns ± 281.261 ns  ┊ GC (mean ± σ):  1.71% ±  4.09%

            ▄██▆▃
  ▁▁▁▁▂▃▄▅▆███████▆▄▃▃▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▁▁▁▁▁ ▂
  633 ns           Histogram: frequency by time          945 ns <

 Memory estimate: 557 bytes, allocs estimate: 5.

I thought of asking here in case anyone knew of a workaround for now, but I will also open an issue on GitHub to report the problem.

Anyway, thank you @jling

You don’t need to go through a string; big(1)/100 gives you the same thing, no?

I don’t think it’s a problem: 0.01 is simply not representable in binary floating point. What you get is as close as the precision allows, and you can always increase the precision.

Also remember you have 1//100 if you need an exact representation.
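
For example (a quick sketch of both suggestions, not from the original posts; the 512-bit figure is arbitrary):

# 1//100 is exact; converting it to BigFloat rounds only once, at the
# current precision, giving the same result as big(1)/100.
x = BigFloat(1//100)

# The default BigFloat precision is 256 bits; raising it tightens the
# approximation further.
y = setprecision(BigFloat, 512) do
    BigFloat(1//100)
end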


Yes, big(1)/100 is correct and there is no need for strings, but if one wants this zero-padded behaviour for an arbitrary Float64, whose decimal value is not known in advance, then converting to a string and back to BigFloat seems to work consistently:

julia> a = rand()
0.6978961477555439

julia> BigFloat(a)
0.69789614775554387193778893561102449893951416015625

julia> BigFloat(string(a))
0.6978961477555438999999999999999999999999999999999999999999999999999999999999987
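
If you end up doing this a lot, the round trip is easy to wrap up (decimal_big is just a hypothetical helper, not an existing API):

# Hypothetical helper: go through the shortest decimal representation that
# string() prints, so the BigFloat tracks the printed value rather than the
# exact binary one. Note this really does change the value slightly.
decimal_big(x::Float64) = parse(BigFloat, string(x))

Keep in mind it inherits the overhead measured above, since it is exactly the string round trip.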

Float64 is binary, not decimal. The values that are printed are not the exact values that are stored.


See https://0.30000000000000004.com/



big() and BigFloat() convert a Float64 value to a BigFloat with 256 bits of precision (the default). A Float64 is stored in the IEEE 754 binary64 format.

julia> x=0.01  # Float64
0.01

julia> y=BigFloat(x)
0.01000000000000000020816681711721685132943093776702880859375

julia> bitstring(x)
"0011111110000100011110101110000101000111101011100001010001111011"

If we decode that bitstring according to the IEEE 754 format, we can see the exact value it encodes:

Part      Binary                                                Decimal
Sign      0                                                     +1
Exponent  01111111000                                           1016 (biased; 1016 - 1023 = -7)
Fraction  0100011110101110000101000111101011100001010001111011  0.2800000000000000266453525910037569701671600341796875
Exact                                                           0.01000000000000000020816681711721685132943093776702880859375

We see that the IEEE 754 bit pattern encodes exactly (+1) × (1 + 0.2800000000000000266453525910037569701671600341796875) × 2^(1016 - 1023) = 0.01000000000000000020816681711721685132943093776702880859375, and that exact value is what gets converted to BigFloat.
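
If you want to reproduce that decoding programmatically, here is a small sketch (decode_ieee754 is just an illustrative name; it handles only normal numbers, not subnormals, Inf or NaN):

# Decode a Float64's IEEE 754 fields and rebuild the exact stored value
# as a BigFloat (exact because 53 significand bits fit easily in 256).
function decode_ieee754(x::Float64)
    bits = bitstring(x)
    sign     = bits[1] == '0' ? +1 : -1
    exponent = parse(Int, bits[2:12]; base = 2)                    # biased exponent
    fraction = parse(BigInt, bits[13:64]; base = 2) / big(2)^52    # exact in 256 bits
    value    = sign * (1 + fraction) * big(2.0)^(exponent - 1023)  # normal numbers only
    return (; sign, exponent, fraction, value)
end

decode_ieee754(0.01).value should print the same exact value as BigFloat(0.01) above.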

Float64 has about 15.95 decimal digits of precision, and the IEEE 754 representation of 0.01 achieves that: the stored value above agrees with 0.01 for the first 17 significant digits.
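
That 15.95 figure comes straight from the 53 significand bits (a quick side check, not part of the original post):

julia> 53 * log10(2)   # 53 bits of significand, expressed in decimal digits
15.954589770191003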


However, for your BigFloat values, you probably want the closest value to 0.01.

julia> zu=BigFloat("0.01",RoundUp)
0.01000000000000000000000000000000000000000000000000000000000000000000000000000013

julia> zd=BigFloat("0.01",RoundDown)
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995

julia> zn=BigFloat("0.01",RoundNearest)
0.009999999999999999999999999999999999999999999999999999999999999999999999999999995

The rounded-up value overshoots 0.01 by about 1.3E-79, whereas the rounded-down value undershoots it by only about 5E-81. So the rounded-down value is nearest to 0.01, which is why RoundNearest returns it.
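
You can check those error figures exactly by going through rationals (a sketch of my own, assuming the default 256-bit precision):

# Rational{BigInt}(x) converts a BigFloat exactly, so these differences
# are the true rounding errors relative to the exact decimal 1/100.
zu = BigFloat("0.01", RoundUp)
zd = BigFloat("0.01", RoundDown)

err_up   = Rational{BigInt}(zu) - 1//100   # positive: zu overshoots
err_down = Rational{BigInt}(zd) - 1//100   # negative: zd undershoots

abs(err_down) < abs(err_up)                # true: RoundDown is nearest here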
