Performance / Memory differences between constructors of BigInts

Hi all,

I’m playing around with arbitrary precision integers in a Jupyter notebook (MacBook Air, Julia v1.10.0-beta1). I noticed a difference between two constructors of BigInts:

@btime BigInt(1234567890)
#   45.294 ns (2 allocations: 40 bytes)

@btime big"1234567890"
# 1.333 ns (0 allocations: 0 bytes)

I’m guessing this has something to do with the way Julia wraps the GMP package, or C code in general? Or maybe it’s something even more general, having to do with string literals versus constructors?

Anyway, just curious why these two would be different. Any pointers to the manual/resources where I could learn more would be greatly appreciated!

Thanks for the help.

It’s possibly slightly surprising, but big"1234567890" isn’t actually a constructor call. It just returns the same BigInt each time, while BigInt(1234567890) actually allocates a new BigInt on every call.


To expand on what @Oscar_Smith has shown, consider:

julia> f1() = BigInt(1234567890)
f1 (generic function with 1 method)

julia> f2() = big"1234567890"
f2 (generic function with 1 method)

julia> f1() === f1()
false

julia> f2() === f2()
true
The reason is that big"..." is evaluated only once, when the code is parsed and the macro is expanded, while BigInt(...) is evaluated at run time. This happens because BigInt is mutable, so the compiler cannot optimize out the BigInt(...) call in f1 (if it did, f1() === f1() would return true, and it should not).
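A quick way to check the mutability point above (a small sketch; `ismutabletype` is available from Julia 1.7 onward):

```julia
# BigInt is a mutable struct, so the compiler may not assume that two
# constructor calls can be folded into one shared object.
ismutabletype(BigInt)    # true

# === compares object identity for mutable values:
BigInt(1) === BigInt(1)  # false: each call allocates a distinct BigInt
```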

You have the same situation with regex - Regex(...) vs r"...".
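For the regex case the same experiment looks like this (a sketch along the lines of f1/f2 above):

```julia
g1() = Regex("abc")  # constructs (and compiles) a new Regex on every call
g2() = r"abc"        # the literal is built once, at macro-expansion time

g1() === g1()  # false: two distinct Regex objects
g2() === g2()  # true: the very same cached object
```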


Ah… okay, I think I understand. And @btime only measures runtime allocations, not compile-time ones?

Thanks @bkamins and @Oscar_Smith

Yes - @btime measures execution time only. Even with @time you would not catch these allocations, as they happen before the code is run:

julia> @time big"123456789"
  0.000001 seconds

But this behavior also makes big"..." risky to use:

julia> foo() = big"0"
foo (generic function with 1 method)

julia> x = foo()
0

julia> Base.GMP.MPZ.add!(x, big(1))
1

julia> foo()  # look out!
1
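If you do need a mutable BigInt starting from a literal value, one way to stay safe (a sketch, trading an allocation per call for independence) is to hand out a fresh object on each call:

```julia
# Fresh allocation on every call, so callers can mutate the result freely:
bar() = BigInt(0)

# Or keep the literal, but copy the cached constant before returning it:
baz() = deepcopy(big"0")

y = bar()
Base.GMP.MPZ.add!(y, big(1))  # mutate y in place; y is now 1
bar()                         # still 0: unaffected by the mutation
```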