We’re outside my area of expertise, so (a) I'm happy to have an excuse to play around and learn a bit and (b) would love to hear from someone on this forum who knows more! My understanding is that IEEE 754 requires sqrt to be correctly rounded (error at most 0.5 ULP), while for pow — as for the other transcendental functions — correct rounding is recommended but not mandated, so whether a given implementation achieves it is implementation-defined.
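As a single-point sanity check of that claim, here's a tiny helper (the name `ulp_error` is mine, not anything standard) that measures the signed error of one evaluation in ULPs of the answer, using BigFloat as the reference:

```julia
# Hypothetical one-point check: signed error of f(x) in ULPs of the answer,
# using a BigFloat evaluation as the reference value.
function ulp_error(f, x::Float64)
    exact = f(BigFloat(x))
    return Float64((f(x) - exact) / eps(Float64(exact)))
end

abs(ulp_error(sqrt, 2.0))  # ≤ 0.5 if sqrt is correctly rounded
```

If sqrt really is correctly rounded, this should never exceed 0.5 in magnitude at any input.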
I like the idea of using BigFloat as an oracle to easily measure how accurate different versions are. At a minimum, though, I think we should be measuring in ULPs relative to the answer. Something like:
using Statistics
function ulp_test(f, domain; n_samples=100_000)
    a, b = extrema(domain)
    x = a .+ rand(Float64, n_samples) .* (b - a)   # uniform samples on [a, b]
    vals = f.(x)                                   # Float64 results under test
    big_float_x = BigFloat.(x)
    exact_vals = f.(big_float_x)                   # "exact" reference values
    ulp_scale = eps.(Float64.(exact_vals))         # one ULP at the exact answer
    ulp_errors = (vals .- exact_vals) ./ ulp_scale
    return (
        std_ulp = Float64(std(ulp_errors)),
        max_abs_ulp = Float64(maximum(abs, ulp_errors)),
    )
end
Running a few tests:
julia> ulp_test(x->sqrt(x),(0.0,1.0))
(std_ulp = 0.28975401566751907, max_abs_ulp = 0.49999874016920975)
julia> ulp_test(x->x^pi,(0.0,1.0))
(std_ulp = 0.875775912554217, max_abs_ulp = 9.312060290475852)
julia> ulp_test(x->x^3.1415926,(0.0,1.0))
(std_ulp = 0.2892737541146482, max_abs_ulp = 0.5210410859100494)
So far, as expected: sqrt is correctly rounded (max error below 0.5 ULP), while exponentiation by pi exceeds that bound, at least according to this (probably not-quite-correct) test function. On the other hand, exponentiation by a nearby Float64 literal looks essentially correctly rounded as well.
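One caveat worth flagging about the `x^pi` numbers: the BigFloat oracle raises to the full-precision π, while the Float64 path (I believe — treat the promotion behavior as an assumption) first rounds π to Float64, so part of the measured error is the rounding of the exponent rather than pow itself. A small sketch to separate the two effects, with x = 0.01 chosen so that ln(x) amplifies the exponent rounding:

```julia
x = 0.01
p = Float64(pi)        # the exponent the Float64 method presumably uses
big = BigFloat(x)

# Oracle using the *same* rounded exponent: isolates pow's own error.
exact_same = big ^ BigFloat(p)
err_same = Float64((x^p - exact_same) / eps(Float64(exact_same)))

# Oracle using full-precision pi (as ulp_test does): also charges the
# rounding of the exponent against pow.
exact_full = big ^ pi
err_full = Float64((x^pi - exact_full) / eps(Float64(exact_full)))
```

If the promotion assumption holds, `err_full` should be noticeably larger in magnitude than `err_same`, which would mean the 9-ULP figure above overstates the error of the pow routine itself.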
Finally, we can look at the question at hand:
julia> ulp_test(x->x*x*x*sqrt(x),(0.0,100.0))
(std_ulp = 0.6003549004331392, max_abs_ulp = 2.5012293636064946)
julia> ulp_test(x->x^3.5,(0.0,100.0))
(std_ulp = 0.2887374737640995, max_abs_ulp = 0.5205982181157858)
Quite clear: in the first (faster) version, rounding errors accumulate across the internal operations used to build up the expression, as one would expect. In the second version, Julia is clearly using a slower exponentiation routine that maintains accuracy over the domain.
Whether this difference in accuracy is relevant is, of course, domain-specific.
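For context on the other side of that trade-off, here's a rough speed check using Base's `@elapsed` (BenchmarkTools.@btime would give cleaner numbers; exact timings are machine-dependent):

```julia
# Rough wall-clock comparison of the two formulations; warm up first so
# compilation time isn't counted.
x = rand(100_000) .* 100

f_fast(v) = @. v * v * v * sqrt(v)   # multiplies + sqrt
f_slow(v) = @. v ^ 3.5               # general pow routine

f_fast(x); f_slow(x)                 # warm up / compile
t_fast = @elapsed f_fast(x)
t_slow = @elapsed f_slow(x)
```

On my understanding, the multiply-and-sqrt version should come out several times faster, since each `x ^ 3.5` goes through a general pow routine rather than a handful of cheap operations.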