@F-YF posted an ill-conditioned sum whose terms (in exact arithmetic) should exactly cancel to give zero. Any roundoff error can give a nonzero answer, which is an infinite relative error compared to 0.0 (no digits are correct), no matter how much precision you have.
(From another perspective, compared to the magnitude of the summand terms $\sim 1/x^6 \approx 10^{58}$, an error of $10^{43}$ is quite small: $10^{43} / 10^{58} = 10^{-15}$.)
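For a concrete picture, here is a minimal sketch of the cancellation (my illustration, assuming the closed-form expression quoted later in this thread):

x = 2.1e-10
a = x * (x^2 + 15) * cosh(x)   # ≈ 15x ≈ 3.15e-9
b = 3 * (2x^2 + 5) * sinh(x)   # ≈ 15x; equal to a in all ~16 digits Float64 carries
(a - b) / x^6                  # exact value ≈ x/105 ≈ 2e-12, but a - b is pure
                               # roundoff here, amplified by 1/x^6 ≈ 1e58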
I tried Herbie for fun, and it suggested: $1.2025012025012024 \cdot 10^{-5} \cdot x^5 + \left(0.0005291005291005291 \cdot x^3 + x \cdot 0.009523809523809525\right)$
Not exact, but the error is small:
julia> function f(x)
           (1.2025012025012024e-5 * ^(x, 5.0)) + ((0.0005291005291005291 * ^(x, 3.0)) + (x * 0.009523809523809525));
       end
f (generic function with 1 method)
julia> f(2.1e-10)
2.0000000000000004e-12
julia> f(2.0e-10)
1.904761904761905e-12
It used a Taylor expansion to get this. However, its plot suggests it should be accurate for a wide range of inputs.
Humble question: using BigFloats, the error is close to zero (~2e-12), and a 5% perturbation of the input doesn’t catastrophically explode the output from near 0 to ~ -1e43. This gain in stability should be important too, right?
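For reference, a minimal sketch of that BigFloat experiment (my reconstruction, assuming the closed-form expression quoted later in the thread):

setprecision(BigFloat, 256) do
    x = big"2.1e-10"
    # the cancellation eats ~61 of the 77 significant digits, leaving ~16 correct ones:
    (x * (x^2 + 15) * cosh(x) - 3 * (2x^2 + 5) * sinh(x)) / x^6   # ≈ 2e-12 ≈ x/105
end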
Compared to what? Both 2e-12 and 1e43 are infinitely bigger than 0.0. Both are a catastrophic explosion of the forward error (infinite relative error, no correct significant digits).
“Stability” has a technical meaning in numerical analysis, and whether an algorithm is “stable” or “unstable” is independent of the precision. It’s a category error to talk about “gaining stability” by increasing the precision.
This algorithm for f(x) is numerically unstable (regardless of the precision), I think (though I haven’t performed a formal analysis). You can only make the computation stable by changing the algorithm (e.g. by refactoring as @StefanKarpinski suggests).
So, the formula does not evaluate to zero at x0 = 2.0e-12.
In this case, if we perturb the input by 5% from x0 to x0 + dx = 2.1e-12, we would not expect dy to vary by 1e48 (in absolute terms) when evaluating the function between x0 and x0 + dx: since the function behaves like x/105 near zero, dy should be on the order of dx/105 ≈ 1e-15.
And please do not take this as a challenge, as I am light-years away from your knowledge. It is just my limited understanding. Thank you.
You’re quite right: the exact value is not zero, so the forward relative error is finite, not infinite; I misread the formula.
The basic numerical instability, however, is that no matter what precision you use, for sufficiently small input x you will get a catastrophic cancellation in the original formula.
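A sketch of that precision dependence (the helper name g and the x values are mine, chosen for 256-bit precision): the same closed form that looks fine at x = 2.1e-10 collapses once x is so small that the two terms agree in all ~77 digits the precision carries:

g(x) = (x * (x^2 + 15) * cosh(x) - 3 * (2x^2 + 5) * sinh(x)) / x^6

setprecision(BigFloat, 256) do
    @show g(big"2.1e-10")   # accurate: cancellation costs ~61 digits, ~16 of 77 remain
    @show g(big"1.0e-13")   # garbage: the true difference lies below the 77-digit roundoff
end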
I don’t see a clever trick to implement this function accurately except by Taylor expanding around x=0 and switching between this series and the exact formula at some precision-dependent threshold:
function f(x::Union{Float64,ComplexF64})
    if abs(x) < 1.0 # empirical threshold for double precision
        return x * evalpoly(x^2, (1/105, 1/1890, 1/83160, 1/6486480, 1/778377600, 1/132324192000, 1/30169915776000))
    else
        return (x*(x^2 + 15)*cosh(x) - 3(2x^2 + 5)*sinh(x))/x^6
    end
end
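As a quick sanity check (my addition, an untested sketch), the series branch should reproduce the Herbie values above, and the two branches should agree where they meet:

f(2.1e-10)                   # series branch; should match the Herbie result above (≈ 2e-12)
f(prevfloat(1.0)) ≈ f(1.0)   # series vs. closed form at the switch; should return true
                             # (the branches agree to roughly 12-13 digits there)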