Noticed the OP's post asked for an extremely high number of significant figures, so I pushed Nemo.jl out further to 4096 bits of precision, which is over 1000 significant figures in base 10.
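For context, the bits-to-digits conversion is just log10(2) decimal digits per bit; a quick sanity check (plain Python, not Nemo) confirms the ~1233-digit figure that shows up in the error bounds of the output below:

```python
import math

# Decimal digits carried by a 4096-bit significand:
# each bit contributes log10(2) ≈ 0.30103 decimal digits.
digits = 4096 * math.log10(2)
print(round(digits))  # → 1233
```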
julia> using Nemo
julia> RR = RealField(4096)  # Nemo.jl real field with user-definable precision and error bounds (ball arithmetic)
Real Field with 4096 bits of precision and error bounds
julia> Nemo.const_pi(RR)
[3.141592653589793238462 ... +/- 5.51e-1233]
julia> Nemo.const_pi(RR)^Nemo.const_pi(RR)^Nemo.const_pi(RR)
[1340164183006357435.297 ... +/- 4.12e-1212]
julia> using BenchmarkTools
julia> @benchmark Nemo.const_pi(RR)^Nemo.const_pi(RR)^Nemo.const_pi(RR)
BenchmarkTools.Trial:
memory estimate: 3.92 KiB
allocs estimate: 12
--------------
minimum time: 232.576 μs (0.00% GC)
median time: 238.957 μs (0.00% GC)
mean time: 245.541 μs (0.00% GC)
maximum time: 716.879 μs (0.00% GC)
--------------
samples: 10000
evals/sample: 1
julia> @time Nemo.const_pi(RR)^Nemo.const_pi(RR)^Nemo.const_pi(RR)
0.000317 seconds (12 allocations: 3.922 KiB)
What surprised me was that, even using 4096-bit reals, Nemo.jl showed very little (almost unnoticeable) CPU usage and only a slight increase in memory allocation when computing to more than 1000 significant figures. So I'm wondering
whether Nemo is exploiting a shortcut or some "magic" implicitly hidden in the Nemo features listed here >> Getting Started · Nemo.jl
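For what it's worth, my guess is constant caching plus binary splitting: Arb (the C library behind Nemo's RealField), as far as I can tell from its documentation, computes π once per precision and caches it, so the repeated const_pi(RR) calls in the benchmark are essentially free after the first one. mpmath in Python behaves similarly, which makes for a rough cross-check of the value (this is mpmath's API, not Nemo's, and only an analogy to what Arb might be doing):

```python
from mpmath import mp

# Match the Nemo example: 4096 bits of working precision (~1233 decimal digits).
mp.prec = 4096

pi = +mp.pi              # unary + rounds the cached pi constant to current precision
tower = pi ** pi ** pi   # right-associative, i.e. pi^(pi^pi), same as Julia's ^

# Integer part should agree with Nemo's [1340164183006357435.297 ...]
print(int(tower))        # → 1340164183006357435
```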