Is `BigFloat` loss of precision intended?

When doing any operation on a `BigFloat`, the precision of the result always reverts to the default, instead of being taken from the operands.

Is this behavior intended, and if so, why?

```julia
julia> a = BigFloat("1.1", precision=512);

julia> a.prec
512

julia> (a+a).prec
256

julia> (0+a).prec
256
```

I don’t know whether it’s intended, or why it’s like this, but maybe

```julia
setprecision(512) do
    x = big"1.1"
    x + x
end
```

solves your problem?
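If it helps, here is a quick check (assuming the default outside the block is still 256 bits) that the result inside the block really does come back at 512 bits:

```julia
julia> setprecision(512) do
           x = big"1.1"
           precision(x + x)
       end
512
```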

I know about `setprecision`, but I was wondering what the point of the `precision` keyword is if it doesn’t propagate.

Hm. `base/mpfr.jl` defines these functions, and it’s an immediate `ccall`. I’m not sure there’s much one can do without changing it to `BigFloat{512}` etc. But I don’t know a lot about it.

No, it is not an immediate `ccall`. It first allocates the result using `BigFloat()`, which uses the global precision and not the precision of the input (which it could).

PS: So yes, this is intended. The precision of the result is always the global precision. This is the same behavior as MPFR itself.
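You can see the mechanism with the public API alone: freshly allocated result storage gets the global default precision, regardless of the operands, so the arithmetic result does too. A REPL sketch, assuming the stock 256-bit default:

```julia
julia> a = BigFloat("1.1", precision=512);

julia> precision(BigFloat())   # a fresh BigFloat uses the global default
256

julia> precision(a + a)        # the sum is written into such a fresh BigFloat
256
```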

You must use `setprecision(BigFloat, nbits)` for precision to work properly (at `nbits`). The `precision` keyword is there because, over time, we have been considering how to rework the whole interface.
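In the meantime, if you really want the operands’ precision to propagate, something along these lines should work (a sketch only; `with_operand_precision` is a made-up name here, not anything in Base):

```julia
# Run `op` on its BigFloat arguments at the widest operand precision,
# temporarily overriding the global default (restored when the block exits).
function with_operand_precision(op, args::BigFloat...)
    p = maximum(precision, args)
    setprecision(BigFloat, p) do
        op(args...)
    end
end

a = BigFloat("1.1", precision=512)
precision(with_operand_precision(+, a, a))  # 512
```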
