I am hoping to design algorithms that can be used efficiently with fixed higher-precision libraries such as DoubleFloats.jl, Quadmath.jl, DecFP.jl, MultiFloats.jl, etc… This usually involves polynomial approximations to a designated precision (e.g., 128 or 256 bits). Is there a flexible way to store high-precision coefficients so they can be used with all of these libraries?
For example, ideally something like this would work…
function test(x::T) where T
    # the coefficient strings usually contain 30-50 digits
    P = (parse(T, "1.2"), parse(T, "1.3"), parse(T, "1.4"), parse(T, "1.5"))
    return evalpoly(x, P)
end
with benchmarks…
julia> @btime test(x) setup=(x=Float128(rand()))
330.556 ns (0 allocations: 0 bytes)
1.36276565872626782310144123826112398e+00
julia> @btime test(x) setup=(x=Dec128(rand()))
781.630 ns (0 allocations: 0 bytes)
3.119328959904789231759682793174765
# allocates for some reason
julia> @btime test(x) setup=(x=Double64(rand()))
1.258 μs (20 allocations: 848 bytes)
2.2168204194286036
But we are paying the cost of parsing the strings on every call. One way to avoid this is to hard-code the constants in a specific higher-precision type…
using Quadmath  # provides Float128

const P_f128 = (parse(Float128, "1.2"), parse(Float128, "1.3"), parse(Float128, "1.4"), parse(Float128, "1.5"))

function test2(x)
    return evalpoly(x, P_f128)
end
julia> @btime test2(x) setup=(x=Float128(rand()))
71.869 ns (0 allocations: 0 bytes)
2.02825985806247608356097004488423765e+00
This approach works well and is fast, but it is a little less flexible than I was hoping for, since it requires hard-coding the constants for a specific package (and taking it on as a dependency). You could also convert any AbstractFloat input to whatever internal precision you use (sketched below), but that is again slightly less flexible.
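For reference, a minimal sketch of that conversion idea, assuming Quadmath's Float128 is chosen as the internal working type and that conversions to and from the caller's type exist (test3 is just an illustrative name):

function test3(x::T) where {T<:AbstractFloat}
    # convert the input to the fixed internal precision, evaluate against
    # the hard-coded Float128 coefficients from above, then convert back to T
    y = evalpoly(Float128(x), P_f128)
    return T(y)
end

The drawback is that the accuracy is then capped at Float128 precision no matter which type the caller passes in.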
Is there perhaps a good approach that is flexible enough to handle all the different higher-precision packages in the current and future ecosystem, while also avoiding allocations and the parsing of string constants at runtime?