DoubleFloats.jl is available for v0.7.
Arithmetic and elementary functions are provided. Expect ~85 bits of accuracy using Double64.
Double64 relative to BigFloat

| op   | speedup |
|------|---------|
| +    | 11x     |
| *    | 18x     |
| \    | 7x      |
| trig | 3x-6x   |
- results from testing with BenchmarkTools on one machine
- BigFloat precision was set to 106 bits for a fair comparison
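A minimal sketch of that comparison setup, assuming only the exported `Double64` type and the standard `setprecision` API (the post does not show its actual benchmark harness):

```julia
using DoubleFloats      # provides Double64
using BenchmarkTools    # @btime, as mentioned above

setprecision(BigFloat, 106)   # match Double64's ~106 significand bits

a, b = Double64(1) / Double64(3), Double64(2)
x, y = BigFloat(1) / BigFloat(3), BigFloat(2)

@btime $a * $b   # double-double multiply
@btime $x * $y   # 106-bit BigFloat multiply
```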
Is there a summary somewhere of how this compares to other options besides BigFloat?
HigherPrecision.jl targets v0.6, and its author is now using DoubleFloats.jl. The v0.7 branch of ArbFloats.jl was experimental and has been discontinued. Wait for JuliaCon.
Now v1-ready; some buglets addressed and a few more features added: `d64"0.3"` works like `big"0.3"`, and `maxintfloat`, `div`, `cld`, `fld`, `rem`, `mod` are available.
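A short sketch of those additions, assuming the `d64` string macro parses its literal at full Double64 precision (analogous to `big"…"`):

```julia
using DoubleFloats

x = d64"0.3"            # parsed directly to Double64, like big"0.3"
x == Double64(0.3)      # false: Float64 0.3 is already rounded before conversion

maxintfloat(Double64)   # bound below which all integers are exactly representable

div(d64"7", d64"2")     # truncated division
cld(d64"7", d64"2")     # ceiling division
fld(d64"-7", d64"2")    # floored division
rem(d64"7", d64"2")     # remainder of truncated division
mod(d64"-7", d64"2")    # remainder of floored division
```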
DoubleFloats v0.7 is here with some docs.
v0.7 exports `ComplexD64` to accompany `Double64` (and likewise for the 32- and 16-bit variants).
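A quick sketch of the complex type, written here as `Complex{Double64}` on the assumption that the announced `ComplexD64` is an alias for it:

```julia
using DoubleFloats

z = Complex{Double64}(3, 4)   # ComplexD64 in the v0.7 exports
abs(z)                        # 5.0, computed at Double64 precision
sqrt(Complex{Double64}(-1))   # ≈ im, via the generic complex sqrt
```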
Bikeshedding: do you plan to eventually rename the types to Float128, Complex128, etc.? Using “double” to mean “floating-point” has always seemed to me like a misnomer.
The name comes from the software technique of doubling the floating-point format: pairing two floats to form an extended-precision value. To your point, here “double” does not mean floating-point, it means two-of.
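The pairing idea can be sketched in a few lines of plain Julia. This is the classic error-free `two_sum` transformation that double-double arithmetic builds on; the names here are illustrative, not DoubleFloats internals:

```julia
# two_sum: returns (hi, lo) with hi + lo exactly equal to a + b,
# where hi = fl(a + b) and lo captures the rounding error.
function two_sum(a::Float64, b::Float64)
    hi = a + b
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)
    return hi, lo
end

hi, lo = two_sum(1.0, 2.0^-60)
# hi == 1.0; lo == 2.0^-60 retains the bits that rounding discarded
```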
These types are not IEEE 754 compliant, so naming them Float128, Complex128 is not an option (this has been suggested and discussed by the core Julia developers, and not used for that reason). Julia does expect to have a legit Float128 type sometime (it already has an Int128 type). One difference: standard Float128s have a larger exponent range than Double64s, whose exponent range is the same as Float64's.
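A quick way to see the exponent-range point, assuming `floatmax` is defined for `Double64` (a sketch, not from the post):

```julia
using DoubleFloats

floatmax(Float64)             # ≈ 1.7976931348623157e308
Float64(floatmax(Double64))   # same order of magnitude: Double64 cannot
                              # exceed Float64's exponent range, whereas a
                              # true IEEE binary128 reaches ≈ 1.19e4932
```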
Thanks for the explanation.
DoubleFloats.jl v0.9.0 is released.
The docs are here. Let me know if something in the docs needs more clarification.