Sum over BigInt: better performance without a generator

I use the following script:

using BenchmarkTools

total_gen(k) = sum(BigInt(2)^(n-1) for n in 1:k)
total_arr(k) = sum([BigInt(2)^(n-1) for n in 1:k])

display(@benchmark total_gen(10_000))
display(@benchmark total_arr(10_000))

This results in (the first trial is total_gen, the second is total_arr):

BenchmarkTools.Trial: 
  memory estimate:  13.37 MiB
  allocs estimate:  69996
  --------------
  minimum time:     3.899 ms (0.00% GC)
  median time:      4.628 ms (0.00% GC)
  mean time:        9.792 ms (21.02% GC)
  maximum time:     141.717 ms (84.33% GC)
  --------------
  samples:          511
  evals/sample:     1
BenchmarkTools.Trial: 
  memory estimate:  7.07 MiB
  allocs estimate:  50159
  --------------
  minimum time:     3.242 ms (0.00% GC)
  median time:      3.894 ms (0.00% GC)
  mean time:        7.158 ms (19.16% GC)
  maximum time:     141.290 ms (83.02% GC)
  --------------
  samples:          698
  evals/sample:     1

I wonder why giving sum a generator instead of an array leads to more allocations.

For an Array of BigInts, sum dispatches to a specialized routine that does the additions in place.

julia> @which sum(BigInt[3,4,5])
sum(arr::AbstractArray{BigInt,N} where N) in Base.GMP at gmp.jl:556

It can’t do that for a generic iterator like a generator because it doesn’t know in advance that the elements are BigInt.
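Roughly, that specialized method accumulates into a single BigInt using GMP's mutating addition, so it doesn't create a fresh BigInt for every partial sum. A minimal sketch of the idea (sum_inplace is just an illustrative name; Base.GMP.MPZ.add! is an internal, unexported function, so the actual code in gmp.jl may differ):

function sum_inplace(arr::AbstractArray{BigInt})
    acc = BigInt(0)
    for x in arr
        # internal GMP wrapper: acc += x, mutating acc instead of allocating a new BigInt
        Base.GMP.MPZ.add!(acc, x)
    end
    return acc
end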


I see, thanks a lot. I also found methods(sum), which gives a nice overview of the available specializations.

This is probably faster still:

sum(n -> BigInt(2)^n, 0:k-1)

That doesn’t allocate as much, but it is still predictable to the compiler, IIRC.
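If you want to compare it against the other two variants, it can be wrapped and benchmarked the same way as in the original script (total_fun is just a made-up name for this example; it assumes BenchmarkTools is already loaded as above):

total_fun(k) = sum(n -> BigInt(2)^n, 0:k-1)
display(@benchmark total_fun(10_000))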