[ANN] DoubleFloats.jl

package
announcement

#1

DoubleFloats.jl is now available for Julia v0.7.
It provides arithmetic and elementary functions. Expect about 85 bits of accuracy using Double64.
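A minimal usage sketch (the `Double64` type and standard math functions come from the package; the printed residual is illustrative, not a guaranteed figure):

```julia
using DoubleFloats

# construct Double64 values and use ordinary arithmetic
x = Double64(0.1) + Double64(0.2)

# elementary functions work as with Float64, just with ~2x the precision
y = sqrt(Double64(2))
println(y * y - 2)   # residual far below Float64 eps (~1e-16)
```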

Double64 relative to BigFloat

| op   | speedup |
|------|---------|
| +    | 11x     |
| *    | 18x     |
| \    | 7x      |
| trig | 3x-6x   |
  • results from testing with BenchmarkTools on one machine
  • BigFloat precision was set to 106 bits, for fair comparison
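A rough sketch of the kind of comparison behind the table above, using BenchmarkTools; the specific operands are illustrative and the resulting ratios will vary by machine:

```julia
using BenchmarkTools, DoubleFloats

# match Double64's ~106 significand bits for a fair comparison
setprecision(BigFloat, 106)

a, b = Double64(pi), sqrt(Double64(2))
A, B = BigFloat(pi), sqrt(big(2))

t_dd  = @belapsed $a * $b
t_big = @belapsed $A * $B
println("multiply speedup: ", t_big / t_dd)
```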

#2

Is there a summary somewhere of how this compares to other options besides BigFloat?


#3

HigherPrecision.jl targets v0.6, and its author is now using DoubleFloats.jl. The v0.7 branch of ArbFloats.jl was experimental and has been discontinued. Wait for JuliaCon.


#4

OK got it, thanks!


#5

v1 is now ready; some buglets have been addressed and a few more features added:
d64"0.3" works like big"0.3", and maxintfloat, div, cld, fld, rem, and mod are available.
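A quick sketch of the features listed above (the printed values are illustrative; exact output depends on the display settings):

```julia
using DoubleFloats

# d64"..." parses the decimal literal at full Double64 precision,
# like big"..." does for BigFloat, avoiding Float64 rounding first
x = d64"0.3"

# largest consecutive integer exactly representable as a Double64
println(maxintfloat(Double64))

# the Euclidean family now works on Double64
println(div(d64"7", d64"2"), " ", rem(d64"7", d64"2"))
```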