I’m proud to announce HigherPrecision.jl, a package intended as a drop-in replacement for Float64 when you need higher precision but BigFloat is too heavyweight. @ChrisRackauckas already gave it a successful spin in DifferentialEquations.jl.
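To give a flavor, here is a minimal usage sketch (assuming the DoubleFloat64 type exported by the package; see the README for the exact API):

```julia
using HigherPrecision

# DoubleFloat64 is the double-double type (an unevaluated sum of two Float64)
x = DoubleFloat64(2.0)

# Arithmetic and elementary functions work as for Float64,
# but with roughly twice the significant digits (~32 instead of ~16)
y = sqrt(x)
```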
The way it works is that it emulates a 128-bit float (and, not yet implemented, a 256-bit float) as an unevaluated sum of 2 (resp. 4) Float64 values. This strategy is known as double-double (resp. quad-double) precision.
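To illustrate the core idea, here is Knuth's error-free two-sum transformation, the basic building block of double-double arithmetic (an illustrative sketch, not the package's actual internals):

```julia
# Error-free transformation: computes s = fl(a + b) and the
# rounding error e, such that a + b == s + e holds exactly.
function two_sum(a::Float64, b::Float64)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e
end

# 1.0 + 1e-17 rounds to 1.0 in Float64, but the pair (s, e)
# keeps the full sum: s == 1.0, e == 1e-17.
s, e = two_sum(1.0, 1e-17)
```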
The heavy lifting was done by the QD C++ library; I merely ported it to Julia (and implemented the additional Julia-specific functions).
Besides the implementation of quad-double precision (where the QD library provides a good blueprint), there are still a lot of tricky open problems (especially the performance of the transcendental functions). Needless to say, contributions are very welcome.
I hope the library is useful for the broader community.