Numerical precision

Recently I had a small problem with Julia that confused me. The code is shown below:


julia> x=2.0e-10
2.0e-10

julia> -1.0*(-0.5*x^3 - 3.0*x^2 - 7.5*x - 7.5)*exp(-x)/x^6 + 1.0*(0.5*x^3 - 3.0*x^2 + 7.5*x - 7.5)*exp(x)/x^6
0.0

julia> x=2.1e-10
2.1e-10

julia> -1.0*(-0.5*x^3 - 3.0*x^2 - 7.5*x - 7.5)*exp(-x)/x^6 + 1.0*(0.5*x^3 - 3.0*x^2 + 7.5*x - 7.5)*exp(x)/x^6
-1.1150372599265312e43

I’m sorry if this is a very simple question, but I don’t understand what is going on.

https://0.30000000000000004.com/
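The classic example from that site, reproduced in Julia:

julia> 0.1 + 0.2
0.30000000000000004

Neither 0.1 nor 0.2 is exactly representable in binary floating point, so their rounding errors surface in the sum.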

5 Likes

In particular, you are seeing catastrophic cancellation: the two huge terms agree in almost all of their leading digits, so subtracting them wipes those digits out and leaves only rounding noise.
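A minimal sketch of the effect at the magnitudes in your expression (the individual terms are around 1e58 for x = 2e-10):

julia> (1e58 + 1.0) - 1e58   # exact answer is 1.0, but it is lost entirely
0.0

Adjacent Float64 values near 1e58 are about 1.4e42 apart, so any contribution below that granularity vanishes, and the rounding leftovers are themselves of order 1e42 to 1e43, the size of the garbage answer you got.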

4 Likes

The results will behave better if you use big floats.

Only kind of. The errors will be smaller, but still non-zero.

@F-YF posted an ill-conditioned sum whose terms (in exact arithmetic) should exactly cancel to give zero. Any roundoff error can give a nonzero answer, which is an infinite relative error compared to 0.0 (no digits are correct), no matter how much precision you have.

(From another perspective, compared to the magnitude of the summand terms \sim 1/x^6 \approx 10^{58}, an error of 10^{43} is quite small: 10^{43} / 10^{58} = 10^{-15}.)
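For reference, that 10^{-15} is essentially the rounding scale of double precision:

julia> eps(Float64)   # relative spacing of Float64 values
2.220446049250313e-16

so an absolute error of ~10^{43} on terms of ~10^{58} is roughly the best Float64 could do here.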

This is exactly the same as the issue discussed in another thread here: "sum([1.0, 1e100, 1.0, -1e100])" fails - #10 by stevengj
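The failure discussed in that thread is the same mechanism:

julia> sum([1.0, 1e100, 1.0, -1e100])   # exact answer is 2.0
0.0

The two 1.0s vanish next to 1e100, just as the small residual vanishes next to the huge terms here.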

2 Likes

The general guidance is to mathematically rewrite your expressions to avoid catastrophic cancellation. In this case Wolfram Alpha simplifies this to

(x(x^2 + 15)\cosh(x) - 3(2x^2 + 5)\sinh(x))/x^6

This doesn’t exhibit the same cancellation issue:

julia> f(x) = (x*(x^2 + 15)*cosh(x) - 3(2x^2 + 5)*sinh(x))/x^6
f (generic function with 1 method)

julia> f(2e-10)
0.0

julia> f(2.1e-10)
0.0

It could exhibit catastrophic cancellation elsewhere, however. But this kind of rewriting is the general way you need to deal with such problems.
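A more familiar instance of the same advice, for comparison: exp(x) - 1 suffers the identical cancellation for small x, which is exactly why the rewritten expm1 function exists in Base:

julia> exp(1e-20) - 1    # exp(1e-20) rounds to exactly 1.0
0.0

julia> expm1(1e-20)      # same function, rewritten to avoid the cancellation
1.0e-20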

9 Likes

I tried Herbie for fun, and it suggested:
1.2025012025012024 \cdot 10^{-5} \cdot {x}^{5} + \left(0.0005291005291005291 \cdot {x}^{3} + x \cdot 0.009523809523809525\right)

Not exact, but the error is small

julia> function f(x)
         (1.2025012025012024e-5 * ^(x, 5.0)) + ((0.0005291005291005291 * ^(x, 3.0)) + (x * 0.009523809523809525));
       end
f (generic function with 1 method)

julia> f(2.1e-10)
2.0000000000000004e-12

julia> f(2.0e-10)
1.904761904761905e-12

It used a Taylor expansion to get this. However, Herbie’s error plot suggests it should be good for a wide range of inputs.
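If I’m reading Herbie’s constants right, they are just the leading Taylor coefficients of the exact function (1/105, 1/1890, and 1/83160), matching the series used later in this thread:

julia> (0.009523809523809525, 0.0005291005291005291, 1.2025012025012024e-5) .≈ (1/105, 1/1890, 1/83160)
(true, true, true)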

5 Likes

Humble question: using big floats, the error is close to zero (~2e-12), and a 5% perturbation of the input doesn’t catastrophically explode the output from near 0 to ~ -1e43. That gain in stability should be important too, right?

right

2 Likes

Compared to what? Both 2e-12 and 1e43 are infinitely bigger than 0.0. Both are a catastrophic explosion of the forward error (infinite relative error, no correct significant digits).

“Stability” has a technical meaning in numerical analysis, and whether an algorithm is “stable” or “unstable” is independent of the precision. It’s a category error to talk about “gaining stability” by increasing the precision.

This algorithm for f(x) is numerically unstable (regardless of the precision), I think (though I haven’t performed a formal analysis). You can only make the computation stable by changing the algorithm (e.g. by refactoring as @StefanKarpinski suggests).
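As a back-of-the-envelope sketch of why precision can’t rescue it (my own rough estimate, not a formal analysis: the terms are of order 7.5/x^6 while the true value is ≈ x/105, so about log10(800/x^7) decimal digits cancel):

# hypothetical helper names, just for this estimate
digits_lost(x) = log10(800 / x^7)     # decimal digits destroyed by the cancellation
digits_avail(bits) = bits * log10(2)  # decimal digits at a given binary precision

digits_lost(2e-10)   # ≈ 71, within reach of 256-bit BigFloat
digits_avail(256)    # ≈ 77, Julia's default BigFloat precision
digits_lost(1e-12)   # ≈ 87, already beyond the default precision

Whatever fixed precision you pick, a small enough x defeats it.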

2 Likes

Using big floats and plotting the cosh()... equivalent expression provided above by @StefanKarpinski around 2e-12, we get:

[plot: the equivalent expression evaluated with big floats near x0]

So, the formula does not evaluate to zero at x0 = 2.0e-12.
In this case, if we perturb the input by 5% from x0 to x0 + dx = 2.1e-12, we would not expect dy to vary by 1e48 (in absolute terms) when evaluating the function between x0 and x0 + dx.

And please do not take this as a challenge, as I am light-years away from your knowledge. It is just my limited understanding. Thank you.

3 Likes

You’re quite right: the exact value is not zero, so the forward relative error is finite, not infinite; I misread the formula.

The basic numerical instability, however, is that no matter what precision you use, for sufficiently small input x you will get a catastrophic cancellation in the original formula.

3 Likes

Unfortunately, it’s still susceptible to spurious underflow (the numerator cancels to 0.0 while x^6 underflows to 0.0, so you get 0/0 = NaN):

julia> f(1e-100)
NaN

julia> setprecision(2^16)
65536

julia> Float64(f(big"1e-100"))
9.523809523809523e-103

and in fact it looks like there is still a catastrophic cancellation in the numerator for small x, where x * cosh(x) ≈ sinh(x).
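A rough illustration of that residual cancellation (the x^3/3 below comes from the series of cosh and sinh; no exact outputs asserted):

# for small x both terms are ≈ x, so the subtraction loses about
# 2*log10(1/x) digits; the true difference is x^3/3 + O(x^5)
x = 1e-8
x*cosh(x) - sinh(x)   # true value ≈ 3.3e-25, yet eps(x) ≈ 1.7e-24: pure rounding noise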

I don’t see a clever trick to implement this function accurately except by Taylor expanding around x=0 and switching between this series and the exact formula at some precision-dependent threshold:

function f(x::Union{Float64,ComplexF64})
    if abs(x) < 1.0 # empirical threshold for double precision
        # Taylor series around x = 0: f(x) = x/105 + x^3/1890 + x^5/83160 + ...
        return x * evalpoly(x^2, (1/105, 1/1890, 1/83160, 1/6486480, 1/778377600, 1/132324192000, 1/30169915776000))
    else
        # exact formula, safe once x is large enough that the cancellation is mild
        return (x*(x^2 + 15)*cosh(x) - 3(2x^2 + 5)*sinh(x))/x^6
    end
end
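A quick sanity check of this hybrid version against the BigFloat reference from above:

julia> f(1e-100) ≈ 9.523809523809523e-103   # no longer NaN, and matches to machine precision
true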
7 Likes