Say I want a number x with up to 10^6 decimal digits of precision using BigFloat. What is the value I need to specify in setprecision()?
How do I calculate it?
In general, $n_{10}$ decimal digits correspond to $n_2 = n_{10}\,\log_2(10)$ binary digits for the mantissa, so in your case you want to specify
julia> round(Int, 1e6 * log2(10))
3321928
Not sure whether you'll run into memory issues with such a large precision, though.
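If you want to double-check it, something along these lines should do (a quick sketch; the printed values are simply what I would expect, and a single BigFloat at that precision only takes a few hundred kilobytes):

julia> bits = round(Int, 1e6 * log2(10))
3321928

julia> setprecision(BigFloat, bits) do
           precision(big(1.0))   # mantissa bits of a freshly created BigFloat
       end
3321928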
Well, that would mean that all $n_2$ bits are used to represent the .xxxx… positions, but that is not the case.
Some fraction is reserved for the exponent.
3321928 is the precision you have to specify with setprecision, i.e. the size of the mantissa.
Ah… Maybe I completely misunderstand then… does setprecision(n) specify the precision of the mantissa only, as n bits?
How big is my full number then in bits (sign + exponent)?
When I type:
eps( big(1.0) )
I get
4.6566128731e-10
which is
2^-31
while I specified setprecision(32)
Where is the 1 bit going?
Isn't the mantissa only everything to the right of the dot?
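For reference, the pattern seems to generalise to eps(big(1.0)) == 2^(1 - precision); a quick sketch of the check (the output is just what I would expect):

julia> setprecision(BigFloat, 32) do
           eps(big(1.0)) == big(2.0)^-31
       end
true

julia> setprecision(BigFloat, 64) do
           eps(big(1.0)) == big(2.0)^-63
       end
true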
The default exponent range, which I think is compiled into MPFR and cannot easily be changed in Julia, is so large as to be effectively infinite for most purposes. (I think it is around 62 bits.)
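If you want to see the range on your own build, something like this should give a rough idea (a sketch; on a typical 64-bit MPFR build I would expect both values to come out at roughly 62, but treat the exact numbers as an assumption):

julia> log2(exponent(floatmax(BigFloat)))    # largest binary exponent, expect ≈ 62
julia> log2(-exponent(floatmin(BigFloat)))   # smallest (most negative) side, expect ≈ 62 as well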
Ok, so just to clarify: the sign is then incorporated in setprecision(n), meaning that the precision of the mantissa is n-1 because of the 1 bit for the sign?
Have you considered trying it?
Trying it like in post no. 5?
That's why I asked, to maybe get some confirmation. It could be possible that the 1 bit is also used for something else… you never know.
Also, the exponents are treated separately, which I just found out.
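To convince myself that the exponent really lives outside the precision, I tried something along these lines (a sketch; the output is what I would expect):

julia> setprecision(BigFloat, 32) do
           x = big(2.0)^1000    # huge magnitude, but still only 32 mantissa bits
           (precision(x), exponent(x), eps(x) == big(2.0)^(1000 - 31))
       end
(32, 1000, true)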
No, the extra bit is for the leading 1. The precision is defined as the number of bits in the significand. In the case of big(1.0), the first bit will be the initial "1", then there will be n-1 trailing zeros.
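You can see the leading bit being counted directly: at precision 32 there is room for the leading 1 plus 31 fraction bits, so 1 + 2^-31 is exactly representable while 1 + 2^-32 rounds back to 1. A quick sketch (output as I would expect it under the default round-to-nearest):

julia> setprecision(BigFloat, 32) do
           a = big(1.0) + big(2.0)^-31   # leading 1 plus 31 fraction bits = 32 bits, fits exactly
           b = big(1.0) + big(2.0)^-32   # would need a 33rd bit, so it rounds back to 1
           (a > 1, b == 1)
       end
(true, true)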
You can read about the structure that MPFR uses here:
Hm… When I look at
then I presume that no bit for the 1 is really needed? It is always there, so it is implicitly assumed, and only the exponent and the mantissa are modified? So why do we need a bit here?
Because that's how it is defined: we use a pretty standard definition of precision as the number of digits in the significand. This is standard throughout languages and numeric literature, and works for binary, decimal, and hexadecimal formats, in both normalised and unnormalised forms.
If you are only looking at binary, normalised numbers, then you are correct in that the leading bit will always be 1, in which case there is no need to represent it in the format. However, this is simply an implementation detail, and doesn't change the definition. Note that we are consistent in this, e.g.
julia> precision(Float32(1))
24
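The same counting holds for the other IEEE types: the stored fraction is always one bit shorter than the precision. A sketch (note that Base.significand_bits is an unexported helper, so its availability is an assumption):

julia> precision(Float16), precision(Float32), precision(Float64)
(11, 24, 53)

julia> Base.significand_bits(Float16), Base.significand_bits(Float32), Base.significand_bits(Float64)
(10, 23, 52)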
Consistent with the Wikipedia article? Well, there the 24th bit is the sign…
I meant consistent within Julia, but the article is also in agreement, in that it states:
"Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits"
It doesn't say anything about the 24th bit being the sign.
No, it doesn't say it explicitly, but apart from the exponent bits it's the 24th bit (actually in the picture it's the 31st, but that is just a position).
In Julia a Float32 has:
23 bits for the mantissa,
8 for the exponent, right?
1 for the leading 1 (as you said above).
That makes 32. So now we are missing the sign?
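For reference, the raw layout is easy to inspect with bitstring: 1 sign bit, then 8 exponent bits, then the 23 stored fraction bits, and the implicit leading 1 appears nowhere in the pattern. A quick sketch:

julia> s = bitstring(1.0f0)
"00111111100000000000000000000000"

julia> s[1], s[2:9], s[10:32]   # sign, exponent, stored fraction
('0', "01111111", "00000000000000000000000")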
Anyway… Maybe I can cope with it if it is just a definition (the word "precision", as you said, is the number of significand digits), where however the most significant bit is basically always 1.
No, there is still a bit for the sign. A better way of thinking about it is that the significand is 24 bits, but can be stored using only 23 bits (what the article calls the fraction) since the leading bit is always 1.
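As a sketch of that view, you can take the 23 stored fraction bits and prepend the implicit leading 1 to recover the full 24-bit significand (the masks below are just the standard Float32 field boundaries):

julia> bits = reinterpret(UInt32, 1.5f0);       # raw bit pattern of a Float32

julia> fraction = bits & 0x007f_ffff            # the 23 stored fraction bits
0x00400000

julia> full = fraction | 0x0080_0000            # prepend the implicit leading 1 -> 24-bit significand
0x00c00000

julia> full / 2^23                              # 1.fraction, the same value significand(1.5f0) gives
1.5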