I seem to run into this problem frequently (and should probably learn more about the recent improved effect analysis) when writing higher-precision algorithms. It would be really convenient if a structure like
function foo(x::T) where T
    P = ("1.23", "1.24", "1.2345")
    return evalpoly(x, parse.(T, P))
end
# or storing big floats as constants
const P2 = (big"1.23", big"1.24", big"1.2345")
function foo2(x::T) where T
    return evalpoly(x, T.(P2))
end
could just work. A recent similar discussion was here, and I posted a related question here.
julia> using BenchmarkTools, DoubleFloats, Quadmath
julia> @btime foo($Double64(1.2))
1.046 μs (16 allocations: 673 bytes)
4.49568
julia> @btime foo($Float128(1.2))
295.539 ns (1 allocation: 32 bytes)
4.49567999999999981335818688421568617e+00
julia> @btime foo2($Double64(1.2))
373.980 ns (7 allocations: 344 bytes)
4.49568
julia> @btime foo2($Float128(1.2))
395.318 ns (13 allocations: 608 bytes)
4.49567999999999981335818688421568617e+00
julia> @btime foo($Float64(1.2))
133.277 ns (1 allocation: 16 bytes)
4.49568
julia> @btime foo2($Float64(1.2))
93.990 ns (1 allocation: 16 bytes)
4.49568
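One workaround that avoids the repeated runtime parsing (just a sketch; foo3 is a made-up name, and it assumes parse(T, ::String) is defined for the target type, which I believe DoubleFloats and Quadmath provide) is to move the parsing to compile time with a generated function:

@generated function foo3(x::T) where T
    # runs once per concrete T: parse the string coefficients and splice
    # the resulting tuple into the returned expression as a constant
    P = Tuple(parse(T, s) for s in ("1.23", "1.24", "1.2345"))
    return :(evalpoly(x, $P))
end

In principle the newer effect analysis (e.g. Base.@assume_effects on Julia 1.8+) might let a plain function version constant-fold as well, but whether that actually kicks in for these external types is exactly what I'm unsure about.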
Generally, with explicit coefficients like this the algorithm isn't arbitrary precision, so the precision of the BigFloats can be fixed. But ideally we would be able to write algorithms for fixed higher precisions (Float128, Float256, Double64) so that users could plug in a type from any external package and not run into all these allocations.
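For example (again just a sketch; P3 is an illustrative name): since the coefficients above only carry a handful of significant digits, the BigFloat constants could be created once at a smaller fixed precision rather than the global default:

const P3 = setprecision(BigFloat, 128) do
    # parse at a fixed 128-bit precision instead of whatever the global setting is
    parse.(BigFloat, ("1.23", "1.24", "1.2345"))
end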
That said, it looks like the external packages could also handle this conversion better. I must admit most of this issue is my general aversion to loading these packages specifically, or perhaps a general desire for a Julia-native Float128 or Int256 type.