Difference between f(1.0e-7) & f(1.0e-8)

The limit of the following function at 0 is 1,

f(x) = 2(1 - x e^{-x} - e^{-x}) / (1 - e^{-x})^2

yet f(1.0e-8) returns 0 while f(1.0e-7) gives roughly the right answer. Why?

julia> f(x)=2*(1-x*exp(-x)-exp(-x))/(1-exp(-x))^2
f (generic function with 1 method)

julia> f(1.0e-7)
0.9992008230547272

julia> f(1.0e-8)
0.0

The problem is catastrophic cancellation, caused by the limited precision of double-precision floating-point arithmetic:

julia> 1 - (1e-7) * exp(-1e-7)
0.99999990000001

julia> exp(-1e-7)
0.999999900000005

julia> 1 - (1e-7) * exp(-1e-7) - exp(-1e-7)
4.9960036108132044e-15

julia> 1 - (1e-8) * exp(-1e-8)
0.9999999900000001

julia> exp(-1e-8)
0.9999999900000001

julia> 1 - (1e-8) * exp(-1e-8) - exp(-1e-8)
0.0
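
In other words, at x = 1.0e-8 the two quantities being subtracted round to the exact same Float64 (both print as 0.9999999900000001 above), so the subtraction cancels every significant digit and leaves exactly 0.0:

julia> (1 - (1e-8) * exp(-1e-8)) == exp(-1e-8)
true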

Note also that your f is probably quite wrong even for x = 1e-7:

julia> f(big"1e-7")
1.000000033333333333333322222222222222226190476190476189153439153575262308010033

Naively evaluating textbook formulas at very small arguments often goes wrong in numerical computing.
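
To quantify that (my own addition, reusing f and the BigFloat value above): the Float64 result 0.9992008230547272 already differs from the true value in the fourth significant digit, a relative error of about 8e-4.

# Relative error of the naive Float64 evaluation at x = 1e-7,
# using the BigFloat evaluation above as the reference (≈ 8e-4,
# i.e. only about three correct significant digits).
relerr = Float64(abs(f(1e-7) - f(big"1e-7")) / abs(f(big"1e-7")))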


The following implementation computes your g(\delta) accurately in double precision, by switching over to a Taylor series for small \delta and exploiting the expm1 function for larger \delta.

function g(δ::Union{Float64,ComplexF64})
    if abs(δ) < 0.2
        # Taylor series about δ = 0 (odd powers only, written in powers of δ^2);
        # the truncation error is O(δ^13), below machine epsilon for |δ| < 0.2.
        return 1 + δ*@evalpoly(δ^2, 1/3,-1/90,1/2520,-1/75600,1/2395008,-691/54486432000)
    else
        # expm1(-δ) computes exp(-δ) - 1 directly, avoiding the cancellation
        # that 1 - exp(-δ) suffers for small δ.
        y = expm1(-δ)
        return -2(y + δ*exp(-δ)) / y^2
    end
end
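
For reference, the series being evaluated here (its coefficients reappear in the TaylorSeries.jl output further down the thread) is

$$g(\delta) = 1 + \frac{\delta}{3} - \frac{\delta^3}{90} + \frac{\delta^5}{2520} - \frac{\delta^7}{75600} + \frac{\delta^9}{2395008} - \frac{691\,\delta^{11}}{54486432000} + O(\delta^{13}).$$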

(It's possible that there is some clever algebraic transformation that lets you avoid the explicit Taylor series, but I'm not seeing it at the moment.)
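
A quick sanity check of this implementation (my own sketch, not part of the original post): for tiny δ the result should track the leading term of the series, and at moderate δ it should match the naive formula, which is accurate there.

# Near zero, g(δ) should agree with 1 + δ/3 essentially to machine precision
# (the next series term, δ^3/90, is below 1e-10 relative for these δ).
for δ in (1e-8, 1e-7, 1e-3)
    @assert isapprox(g(δ), 1 + δ/3; rtol = 1e-10)
end

# At moderate arguments the naive f from the top of the thread is fine,
# so the two should agree to roughly Float64 precision.
@assert isapprox(g(4.0), f(4.0))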


You can also exploit the existence of expm1 by algebraically expanding the formula into separate terms:

f1(x) = 2*(1 - x*exp(-x) - exp(-x))/(1 - exp(-x))^2  # naive version, for comparison
function f2(x)
    denom = -expm1(-x)  # = 1 - exp(-x), computed without cancellation
    return 2/denom + 2*x/denom - 2*x/(denom*denom)
end

Testing for agreement at an ordinary argument:

julia> f1(4)
1.885271061051406

julia> f2(4)
1.8852710610514052

And near zero:

julia> f1(1e-7)
0.9992008230547272

julia> f2(1e-7)
1.0000000335276127

julia> f2(1e-8)
1.0

julia> f1(1e-8)
0.0

To be fair, f2 also fails eventually, but only around 1e-11.
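
That residual failure makes sense: for tiny x, the terms 2/denom and 2*x/(denom*denom) are each of order 2/x while the result is of order 1, so roughly log10(1/x) digits are still lost to cancellation between them. A quick scan (my own sketch, reusing f2 from above) shows the drift:

# Compare f2 against the leading Taylor behaviour 1 + x/3 as x shrinks;
# the error grows roughly like eps()/x.
for x in (1e-9, 1e-10, 1e-11, 1e-12)
    println("x = ", x, "  f2(x) = ", f2(x), "  expected ≈ ", 1 + x/3)
end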


Oftentimes in numerical computing, in the field I was in, we would determine limits for significance/guard digits, either rigorously or crudely. The rules of thumb were usually 1e-5 for single precision and 1e-10 for double precision (because we used FFTs and the like a lot).

The main takeaway is that there is error in essentially every calculation you do, and a finite resolution to what a digital object can store or handle. When you do lots of calculations, that error can propagate or do unexpected things.
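
A tiny illustration of that propagation (my own sketch): 0.1 has no exact binary representation, and every addition rounds, so a long running sum drifts measurably.

# Sum 0.1 ten million times; each addition rounds, and the
# accumulated result visibly misses the exact value 1.0e6.
function naivesum(n)
    s = 0.0
    for _ in 1:n
        s += 0.1
    end
    return s
end

naivesum(10^7) - 1.0e6  # nonzero: accumulated rounding error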


Does this mean that in any serious calculation where human lives are at stake, you need to do the calculation twice, once in Float64 and once in Float32, and then compare the results to make sure they come up with roughly the same value, in case there are any numerical instabilities?


Well, I'm not so sure about the answer to that. Another thing that can be done, instead, is to plot a range of solutions near the answer and to test against results for known, similar conditions.

I mostly did work like this. When something goes awry, you'll be able to tell most of the time: the real and imaginary parts of complex values become underdetermined, and you'll see steppy swaps. Really, though, you can either measure or bound the floating-point error for most calculations. Julia is actually amazing for this… I need to write a paper about it, but have negative time to do so. I have tons of results and code for it, though…

When someone has done a careful analysis, this is not necessary; in some cases one can prove that the result has a certain accuracy.


Thanks to all of you for such detailed explanations and diverse solutions.

Although it is not entirely foolproof, this method is likely to work well in practice. For a rather extensive discussion of such techniques, see for example this paper by Kahan:

The discussion of possible round-off error analysis schemes, and Kahan's opinion of them, starts at page 13.
The first of the five schemes listed in the paper is the one you mention: running the same computation several times with different FP precisions. The second scheme (Kahan's favorite) is very similar: it consists of running the same computation several times with the same FP precision, but with different rounding modes.
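
A minimal sketch of the first scheme in Julia (my own illustration; h is just the f from the top of the thread): write the code generically, then rerun it at several precisions and see whether the answers agree.

# Rerun the same generic code at different precisions; a large
# disagreement between the results flags numerical instability.
h(x) = 2*(1 - x*exp(-x) - exp(-x)) / (1 - exp(-x))^2

for T in (Float32, Float64, BigFloat)
    x = T(1) / 10^7
    println(T, ": ", h(x))
end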


Sometimes numerical work may inform decisions and influence actions relevant to another's health, wealth, or contentment. I have found it better to compute more accurately than is attainable in Float32 or Float64, and then convert the likely more accurate result to Float32 or Float64 for presentation. The choice of computational type and the choice of presentational type are made with an understanding of the specifics of the work and its context.

There are some publicly available Julia packages designed for this kind of work. For your purposes, I suggest becoming familiar with DoubleFloats.jl. Where algorithmic choices, data uncertainties, or natural inexactness suggest characterizing the results as spans, the types ArbReal and ArbComplex from ArbNumerics.jl do just that. Also see the packages from JuliaIntervals.
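
A minimal sketch of that compute-then-convert pattern (my own illustration, assuming DoubleFloats.jl is installed):

# Evaluate with ~32 significant digits (Double64), so the ~16 digits
# lost to cancellation still leave a full Float64's worth of accuracy,
# then convert to the presentational type.
using DoubleFloats

fnaive(x) = 2*(1 - x*exp(-x) - exp(-x)) / (1 - exp(-x))^2

x = Double64(1) / 10^8   # the troublesome 1e-8 from the original question
Float64(fnaive(x))       # ≈ 1.0000000033, the correct limiting behaviour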


I saw your talk on this!!!


You can also use TaylorSeries.jl to do Taylor series manipulations:

julia> using TaylorSeries

julia> n = 16
16

julia> t = Taylor1(Rational{Int}, n)
 1//1 t + 𝒪(t¹⁷)

julia> ee = sum( (-t)^i / factorial(i) for i in 0:n )
 1//1 - 1//1 t + 1//2 t² - 1//6 t³ + 1//24 t⁴ - 1//120 t⁵ + 1//720 t⁶ - 1//5040 t⁷ + 1//40320 t⁸ - 1//362880 t⁹ + 1//3628800 t¹⁰ - 1//39916800 t¹¹ + 1//479001600 t¹² - 1//6227020800 t¹³ + 1//87178291200 t¹⁴ - 1//1307674368000 t¹⁵ + 1//20922789888000 t¹⁶ + 𝒪(t¹⁷)

julia> 2 * (1 - t * ee - ee) / ((1 - ee)^2)
 1//1 + 1//3 t - 1//90 t³ + 1//2520 t⁵ - 1//75600 t⁷ + 1//2395008 t⁹ - 691//54486432000 t¹¹ + 1//2668723200 t¹³ - 59513//156920924160000 t¹⁵ - 42397//94152554496000 t¹⁶ + 𝒪(t¹⁷)

(Note that although exp(t) is defined for a Taylor series, it currently converts the arguments to floats.)

You can also bound the remainder using TaylorModels.jl.


That's a neat writeup of very relevant problems and methods from a giant in the field, but it is surprising how jaded and bitter the tone of the article is.
