# A question regarding BigFloat

I was using BigFloats to compute the harmonic series (naïve implementation) with arbitrary precision, and a bug in my code was brought to my attention: in particular, the difference between `BigFloat(1/i)` and `1/BigFloat(i)`.

Indeed, when I use `BigFloat(1/10^10)` I get:
`1.0000000000000000364321973154977415791655470655996396089904010295867919921875e-10`
whereas using `1/BigFloat(10^10)` correctly yields:
`1.000000000000000000000000000000000000000000000000000000000000000000000000000003e-10`

I was then left wondering: why this behavior? And how can I learn more about properly using arbitrary-precision arithmetic?

You want to convert to “big” as early as possible—or more precisely, before any loss of precision. In this case, `10^10` is precise in `Int` but `1/10^10` produces a `Float64` which has already lost precision. When you do `big(1/10^10)` what you get is that already-low-precision `Float64` value at higher precision, which is not what you want. When you do `1/big(10^10)` you do `BigFloat` division instead of `Float64` division, which is what you want.
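
To make this concrete with the harmonic series from the question, here is a minimal sketch of the buggy and the fixed version (the function names are mine):

``````
# Naive harmonic sum H(n) = 1 + 1/2 + ... + 1/n

# Buggy: 1/i is computed in Float64, so each term has already lost
# precision before the conversion to BigFloat
harmonic_buggy(n) = sum(BigFloat(1/i) for i in 1:n)

# Fixed: convert i to BigFloat first, so the division itself is done
# at the current BigFloat precision
harmonic(n) = sum(1/BigFloat(i) for i in 1:n)

harmonic(10)  # H(10) = 7381/2520 ≈ 2.9289682539682539...
``````

Already at `n = 3` the two sums differ, because `1/3` is not exactly representable in `Float64`.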


I’m a bit surprised that Julia doesn’t have a `@big` macro that would turn `@big 1/10^10` into `big(1)/big(10)^big(10)`. That’d be an easy way of avoiding accidental loss of precision.


You really just need `1/big(10)^10`. This is also faster than `big(1)/big(10)^big(10)`.

Still, a macro like that looks pretty nice; ideally, though, it shouldn’t have worse performance.

Edit: And `1/big(10^10)` is even faster. A macro that handles this seems like quite a challenge.
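
For reference, all three spellings produce the same `BigFloat` value here; they differ only in how much of the work is done in `Int` versus `BigInt`/`BigFloat` arithmetic:

``````
a = big(1) / big(10)^big(10)
b = 1 / big(10)^10
c = 1 / big(10^10)  # safe only because 10^10 does not overflow Int64

a == b == c  # true
``````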

I think the best approach (in general) is to first write something that works, then make it fast. Just converting all leaf nodes to `big` is something that can be done in a few lines of code and should be relatively easy to make bug-free.

For the “make it fast” part, one could perhaps introduce a `LazyBig{T}` type which is simply a thin wrapper indicating that the result of computations should be `BigInt` or `BigFloat` unless it fits exactly in some smaller type. The return type of `*(::LazyBig{Int}, ::LazyBig{Int})` could be `Union{LazyBig{Int128}, LazyBig{Int}}`, which might make taking powers by squaring relatively fast in the `10^10` case.


This macro would be a nice addition and should be simple enough to write.


All of this is very interesting.
Also, consider:

``````
julia> big(typemax(Int64)^2)
1
``````

vs

``````
julia> big(typemax(Int64))^2
85070591730234615847396907784232501249
``````

According to DNF, the second option is slower, despite it being the correct one.
What should the macro do with this?

`@changeprecision BigFloat ...code...` from ChangePrecision.jl already does this and more.


As I understand it, `@changeprecision` only targets literals. I think it’d be useful to have a macro that wraps all leaf nodes, so that expressions like `@big x/y`, where `x` and `y` are variables, would also work.

(The wrapper function would of course have to pass non-numbers through unharmed.)

I don’t know how workable this would be; consider the `typemax` example. It would wrap `Int64` in a `big` call: if `big(Int64)` returned `BigInt`, then `typemax(BigInt)` would fail; if it returned `Int64`, the wrapper would have no effect. It seems like the best one could do is to also replace arithmetic operations and functions with versions that convert their arguments to higher precision first.


The point of an `@big` macro would be to ensure that a certain expression is evaluated at high precision, not to change the default precision for a block of code like `@changeprecision` does. Given your `typemax` example, I’d say the wrapper should be applied to every node, not just the leaves.

So `@big 1/typemax(Int64)` would become `w(w(1)/w(typemax(w(Int64))))` where `w` is the wrapper defined something like:

``````
w(x) = x              # non-numbers (types, functions, ...) pass through
w(x::Number) = big(x) # any number is promoted to BigInt/BigFloat
``````

After `w` is evaluated, we’d have `big(big(1)/big(typemax(Int64)))`, which I’d say is what we want.
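
A minimal sketch of such a macro, assuming the wrapper `w` is in scope at the call site (the names `@big` and `wrap` are mine, not an existing API):

``````
w(x) = x              # non-numbers (types, functions, ...) pass through
w(x::Number) = big(x)

# Recursively wrap every node of a call expression in w, leaving the
# called function itself (typemax, /, ...) unwrapped
function wrap(ex)
    if ex isa Expr && ex.head == :call
        inner = Expr(:call, ex.args[1], map(wrap, ex.args[2:end])...)
        return Expr(:call, :w, inner)
    else
        return :(w($ex))
    end
end

macro big(ex)
    esc(wrap(ex))
end

@big 1/typemax(Int64)  # expands to w(w(1)/w(typemax(w(Int64))))
``````

Because the whole expansion is escaped, variables in `@big x/y` refer to the caller’s scope, so that case works too.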

Here’s an example of a wrapper type that will automatically decide whether to convert to `BigInt` before multiplication (enabling power by squaring).

It’s significantly faster than `BigInt` for the specific case `10^10`. (This calculation takes 20 nanoseconds on my system, which is amazing given that I didn’t modify the `power_by_squaring` function at all, and the multiplication code is not even type stable.)

``````
struct LazyBigInt{T} <: Integer
    x::T
end

# Conservative bound on when to move from Int64 to higher precision
const cutoff64 = 0.5 * Float64(typemax(Int))

function Base.:*(a::LazyBigInt{Int}, b::LazyBigInt{Int})
    # Estimate the product in Float64: stay in Int if it is safely in
    # range, otherwise fall back to BigInt multiplication
    rf = Float64(a.x) * Float64(b.x)
    -cutoff64 <= rf <= cutoff64 ? LazyBigInt(a.x * b.x) : big(a.x) * b.x
end

# promote_rule is consulted in both argument orders, so one method each suffices
Base.promote_rule(::Type{LazyBigInt{T}}, ::Type{BigInt}) where {T} = BigInt
Base.promote_rule(::Type{LazyBigInt{T}}, ::Type{T}) where {T} = LazyBigInt{T}

Base.BigInt(x::LazyBigInt) = BigInt(x.x)

@assert LazyBigInt(10)^10 == 10000000000
@assert LazyBigInt(10)^40 == 10000000000000000000000000000000000000000

using BenchmarkTools
@benchmark $(BigInt(10))^10
@benchmark $(LazyBigInt(10))^10
``````

Of course, this is just an example. It would take some effort to design proper heuristics for when to switch to higher precision for each type of operation.
