[ANN] DoubleFloats.jl



DoubleFloats.jl is now available for Julia v0.7.
It provides arithmetic and elementary functions. Expect about 85 bits of accuracy using Double64.

Double64 relative to BigFloat

op     speedup
+      11x
*      18x
\      7x
trig   3x-6x
  • results from benchmarking with BenchmarkTools.jl on one machine
  • BigFloat precision was set to 106 bits for a fair comparison (Double64 carries 2 × 53 = 106 significand bits)
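A minimal usage sketch of the comparison above, assuming DoubleFloats.jl is installed and exports the `Double64` type named in the announcement (the tolerance is an illustrative bound, not a documented guarantee):

```julia
using DoubleFloats  # assumed installed; provides the Double64 type

# Double64 is used like a drop-in floating-point type
x = Double64(1) / Double64(3)
y = sqrt(Double64(2))

# sanity check against BigFloat at the same 106-bit precision
setprecision(BigFloat, 106) do
    @assert abs(BigFloat(1) / BigFloat(3) - BigFloat(x)) < 1e-30
end
```

Timings like those in the table come from wrapping such operations in `@btime` from BenchmarkTools.jl.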


Is there a summary somewhere of how this compares to other options besides BigFloat?


HigherPrecision.jl supports only v0.6, and its author is now using DoubleFloats.jl. The v0.7 branch of ArbFloats.jl was experimental and has been discontinued. Wait for JuliaCon.


OK got it, thanks!