The following implementation computes your g(δ) accurately in double precision, by switching over to a Taylor series for small δ and exploiting the expm1 function for larger δ.
function g(δ::Union{Float64,ComplexF64})
    if abs(δ) < 0.2
        # Taylor-series branch for small |δ|, where the closed form below
        # loses accuracy to cancellation.
        return 1 + δ*@evalpoly(δ^2, 1/3, -1/90, 1/2520, -1/75600, 1/2395008, -691/54486432000)
    else
        y = expm1(-δ)   # expm1(-δ) evaluates e^(-δ) - 1 accurately
        return -2(y + δ*exp(-δ)) / y^2
    end
end
(It's possible that there is some clever algebraic transformation that lets you avoid the explicit Taylor series, but I'm not seeing it at the moment.)
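For what it's worth, a quick way to sanity-check an implementation like this is to evaluate the same closed form in BigFloat and compare relative errors; something like the following sketch (the test points are chosen arbitrarily):

# Rough accuracy check: compare g against the same closed form evaluated
# in 256-bit BigFloat arithmetic.
setprecision(BigFloat, 256) do
    for δ in (1e-12, 1e-6, 0.1, 0.19999, 0.20001, 1.0)
        ref = Float64(-2*(expm1(-big(δ)) + big(δ)*exp(-big(δ))) / expm1(-big(δ))^2)
        println(δ, "  relative error ≈ ", abs(g(δ) - ref) / abs(ref))
    end
end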
Oftentimes in numerical computing, in the field I was in, we would determine limits for significance/guard digits, either rigorously or crudely. Rules of thumb were usually 1e-5 for floats and 1e-10 for doubles (because we used FFTs and the like a lot).
The main takeaway is that there is error in basically every calculation you do. There is also a finite resolution to what a digital object can store/handle. When you do lots of calculations, that error can propagate or do unexpected things.
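As a rough sketch of what I mean by propagation, compare a plain left-to-right sum in Float32 and Float64 against a BigFloat reference (the array here is arbitrary, and the tolerances are just the rule-of-thumb values above):

# Accumulated rounding error in a long naive sum; foldl(+, ...) mimics a
# plain left-to-right accumulation loop.
xs  = rand(10^6)
ref = sum(big.(xs))                                # high-precision reference
err32 = Float64(abs(foldl(+, Float32.(xs)) - ref) / ref)
err64 = Float64(abs(foldl(+, xs) - ref) / ref)
println("Float32 relative error ≈ ", err32, " (rule of thumb: 1e-5)")
println("Float64 relative error ≈ ", err64, " (rule of thumb: 1e-10)")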
Does this mean that in any serious calculation where human lives are at stake, you need to do the calculation twice, once in Float64 and once in Float32, and then compare the results to make sure they come up with roughly the same value, in case there are any numerical instabilities?
Well, I'm not so sure about the answer to that. Another thing that can be done instead is to plot a range of solutions near the answer and to test the results against known, similar conditions.
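For instance, something along these lines (reusing the g above; the grid width and point of interest are arbitrary) samples a fine grid around a value and checks how smoothly the results vary, instead of producing an actual plot:

# Sample g on a fine grid around a point of interest; smooth, tiny successive
# differences suggest a well-behaved implementation, visible stair-steps
# suggest cancellation or other floating-point trouble.
δs   = range(0.1 - 1e-8, 0.1 + 1e-8; length = 201)
vals = g.(δs)
println(extrema(diff(vals)))   # spread of successive differences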
I mostly did work like this. When something goes awry, you'll be able to tell most of the time. Complex real and imaginary values become underdetermined; you'll see steppy swaps. Really, though, you can either measure or find the floating-point error for most calculations. Julia is actually amazing for this… I need to write a paper about it, but have negative time to do so. I have tons of results and code for it, though…
Although it is not entirely foolproof, this method is likely to work well in practice. For a rather extensive discussion about such techniques, see for example this paper by Kahan:
The discussion about possible round-off error analysis schemes, and Kahan's opinion of them, starts on page 13.
The first of the five schemes listed in this paper is the one you mention: running the same computation several times with different FP precisions. The second scheme (Kahan's favorite) is very similar: it consists of running the same computation several times with the same FP precision, but with different rounding modes.
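A minimal sketch of the first scheme, assuming the computation can be written generically over the element type (naivesum here is just a made-up stand-in for the real computation):

# Scheme 1: run the same generic computation at two precisions and compare.
naivesum(T, xs) = foldl(+, T.(xs))

xs  = rand(10^6)
r32 = naivesum(Float32, xs)
r64 = naivesum(Float64, xs)
# A discrepancy much larger than eps(Float32), relative to the result, hints
# that the lower-precision run (at least) is losing accuracy.
println(abs(r64 - r32) / abs(r64), " vs eps(Float32) = ", eps(Float32))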
Sometimes numerical work may inform decisions and influence actions relevant to another's health, wealth, or contentment. I have found it better to compute more accurately than is attainable using Float32 or Float64, and then convert the likely more accurate result to Float32 or Float64. The choice of computational type and the choice of presentational type are made with an understanding of the specifics of the work and its context.
There are some publicly available Julia packages designed for this work. For your purposes, I suggest becoming familiar with DoubleFloats.jl. Where algorithmic choices, data uncertainties, or natural inexactness suggest characterizing the results as spans, the types ArbReal and ArbComplex from ArbNumerics.jl do just that. Also see the packages from JuliaIntervals.
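For example, a minimal sketch of the compute-accurately-then-convert approach with DoubleFloats.jl (assuming the package is installed; the particular series is just an illustration):

# Do the working arithmetic in Double64 (roughly 32 significant decimal
# digits), then present the result in Float64.
using DoubleFloats

x = sum(1 / Double64(k)^2 for k in 1:10^6)   # partial sum of a slowly converging series
println(Float64(x))                          # presentational type: Float64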
That's a neat writeup of very relevant problems and methods from a giant in the field, but it is surprising how jaded and bitter the tone of the article is.