# Why am I not getting slower performance when I set the precision of BigFloat higher?

I have experimented with summing the elements of a BigFloat array while varying the precision of BigFloat arithmetic to see how it affects performance.

```julia
julia> a = rand(BigFloat, 80)
80-element Array{BigFloat,1}:
0.6610535515305509
0.29748711506309622
0.85520996954460315
0.13862508711913524
0.5116194438973114
0.52517291079788997
0.10807964228877776
0.88327719637160662
0.4278771163033781
0.28838404756641989
⋮
0.13699963649831925
0.26263209403851295
0.33494621331565266
0.57042408286991897
0.73885717423584651
0.33249691669654879
0.9353108411195068
0.5492713189719689
0.75465162023478705

julia> function f()
           sum = big"0.0"
           for x in a
               sum = sum + x
           end
           println(sum)
       end
f (generic function with 1 method)

julia> f()
38.51423222479508

julia> setprecision(2)
2

julia> @time f()
4.0
0.000093 seconds (265 allocations: 13.336 KiB)

julia> setprecision(100)
100

julia> @time f()
38.51423222479507568749568235944
0.000100 seconds (271 allocations: 14.912 KiB)

julia> setprecision(250)
250

julia> @time f()
38.51423222479507568749568235944025218486785888671875
0.000104 seconds (271 allocations: 16.237 KiB)

julia> setprecision(350)
350

julia> @time f()
38.51423222479507568749568235944025218486785888671875
0.000098 seconds (271 allocations: 17.517 KiB)

julia> setprecision(500)
500

julia> @time f()
38.51423222479507568749568235944025218486785888671875
0.000103 seconds (271 allocations: 18.811 KiB)

julia> setprecision(1000)
1000

julia> @time f()
38.51423222479507568749568235944025218486785888671875
0.000100 seconds (272 allocations: 24.286 KiB)
```

What surprises me is that the time to execute `f()` with `setprecision(2)` is almost the same as the time to execute `f()` with `setprecision(1000)`, even though the memory allocated in the latter is higher.

`setprecision` sets the number of bits used for `BigFloat`. Realistically, `2` vs. `1000` bits is not going to make a drastic difference: at these sizes, the per-operation overhead (allocation and MPFR bookkeeping) dominates the actual arithmetic. Also note that the default precision for `BigFloat` is 256 bits. Try setting the precision to `100_000` and you may see a difference.
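To watch the cost actually appear, something along these lines should work (a rough sketch; the timings are machine-dependent, and re-rounding the inputs at each precision is my addition, so that the summands themselves carry the full width):

```julia
# Sketch: run the same reduction at a modest and at an extreme precision.
# Absolute timings vary by machine; only the relative growth matters.
a = rand(BigFloat, 80)

for p in (256, 100_000)
    setprecision(BigFloat, p) do
        b = BigFloat.(a)   # re-round the inputs at the current precision
        sum(b)             # warm-up call so compilation isn't timed
        t = @elapsed sum(b)
        println("$p bits: $t seconds")
    end
end
```

The do-block form of `setprecision` restores the previous precision afterwards, which keeps experiments like this from leaking state into the rest of the session.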

I’ve modified your function to take `a` as a parameter and replaced the `println` with a return of the value. This way it doesn’t clutter the output, and interpolating the array with `$` in BenchmarkTools.jl ensures that access to the global array is not hindering performance.

```julia
julia> using BenchmarkTools

julia> a = rand(BigFloat, 80); # the ; is to not clutter the console with the array

julia> function f(y)
           sum = big"0.0"
           for x in y
               sum = sum + x
           end
           sum
       end
f (generic function with 1 method)

julia> @benchmark f($a)
BenchmarkTools.Trial:
memory estimate:  988.75 KiB
allocs estimate:  160
--------------
minimum time:     114.700 μs (0.00% GC)
median time:      139.600 μs (0.00% GC)
mean time:        168.476 μs (9.61% GC)
maximum time:     2.665 ms (88.17% GC)
--------------
samples:          10000
evals/sample:     1

julia> setprecision(1000)
1000

julia> @benchmark f($a)
BenchmarkTools.Trial:
memory estimate:  16.25 KiB
allocs estimate:  160
--------------
minimum time:     5.067 μs (0.00% GC)
median time:      5.233 μs (0.00% GC)
mean time:        7.471 μs (6.82% GC)
maximum time:     533.467 μs (94.97% GC)
--------------
samples:          10000
evals/sample:     6

julia> setprecision(2)
2

julia> @benchmark f($a)
BenchmarkTools.Trial:
memory estimate:  6.25 KiB
allocs estimate:  160
--------------
minimum time:     4.571 μs (0.00% GC)
median time:      4.757 μs (0.00% GC)
mean time:        6.011 μs (5.06% GC)
maximum time:     379.314 μs (96.78% GC)
--------------
samples:          10000
evals/sample:     7
```
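As an aside, the effect of avoiding the untyped global can be seen even without BenchmarkTools. This sketch (array size and loop count are arbitrary choices of mine) contrasts a function that reads a global with one that takes the array as an argument:

```julia
# Sketch: an untyped global binding forces dynamic dispatch on each access,
# while an argument lets the compiler specialize on the concrete type.
a = rand(Float64, 10_000)    # untyped global

f_global() = sum(a)          # looks up the global `a` at run time
f_arg(y) = sum(y)            # specialized on typeof(y)

f_global(); f_arg(a)         # warm up (compile both)
t1 = @elapsed for _ in 1:1_000; f_global(); end
t2 = @elapsed for _ in 1:1_000; f_arg(a); end
println("global: $t1 s, argument: $t2 s")
```

`$`-interpolation in `@benchmark` serves the same purpose: the benchmarked expression sees a concrete value rather than a global binding.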

The algorithms used by BigFloat and BigInt are designed to scale to truly humongous numbers, on the order of multiple megabytes or gigabytes per number, i.e. 10^6 to 10^9 bits. That’s when they’ll really shine, and that’s when you’ll see the runtime grow with precision.
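The storage side of this is easy to check directly. A small sketch using `Base.summarysize` (exact byte counts will vary across Julia and MPFR versions, but the linear growth with precision should be visible):

```julia
# Sketch: BigFloat storage grows linearly with precision, which is
# where the eventual slowdown ultimately comes from.
for p in (256, 10_000, 1_000_000)
    x = setprecision(() -> BigFloat(pi), BigFloat, p)
    println("$p bits ≈ $(Base.summarysize(x)) bytes")
end
```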
