Is there an implementation of Trefethen's contour integrals for evaluating [1 - exp(-x)] / x as x->0?

This kind of evaluation frequently occurs in numerical analysis, and I believe there should be an implementation in Julia. If so, I am happy to use it rather than re-invent the wheel:) Thanks!

Do you mean (1 - exp(-x))/x? Otherwise, this diverges near 0.


Yes. Sorry for the ambiguity in the original title. I have revised it accordingly. Thanks!

Currently, I use TaylorSeries.jl to evaluate the expression when x < 0.01. It works great!

using TaylorSeries

h(x) = (1 - exp(-x)) / x

t = Taylor1(6)          # degree-6 Taylor expansion around 0
taylorh = (1 - exp(-t)) / t

x = 0.001
taylorh(x)              # evaluate the truncated series at x

If you want something that will give good (but not great) accuracy over the entire domain, -expm1(-x)/x should do what you want. Combining that with TaylorSeries should get you <1.5 ULP over the whole domain (I think).


Out of curiosity, how did you get the 1.5 ULP?

ApproxFun.jl and friends might be able to do that.


I’ve done a bunch of testing of expm1, so I know that the current implementation is around 0.75 ULP. One extra rounding caused by the division adds a bit more than 0.5 ULP, since the error isn’t quite linear.




I’m not sure why you would need TaylorSeries at all here; expm1 is already accurate for small arguments.

f(x) = iszero(x) ? float(one(x)) : -expm1(-x)/x  # the limit is 1 at x = 0

x = exp10.(range(-300, log10(700), length=10^7)); x = [-x; x]  # sample both signs
err(x) = let exact = f(big(x)); Float64(abs(f(x) - exact) / abs(exact)); end
maximum(err, x) / eps(Float64)

gives 1.02 (≈1ulp).

That will give a much larger error: your h(x) = (1 - exp(-x)) / x has an error of ≈24 ULPs when x = 0.01.
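One way to see the difference is to compare both forms against a BigFloat reference (an illustrative check; the helper names naive/stable/ulps are mine, not from the thread):

```julia
# Compare the naive and expm1-based forms against a high-precision reference at x = 0.01.
naive(x)  = (1 - exp(-x)) / x   # cancellation in 1 - exp(-x) loses digits
stable(x) = -expm1(-x) / x      # expm1 avoids the cancellation

x = 0.01
exact = Float64((1 - exp(-big(x))) / big(x))   # BigFloat reference, rounded back
ulps(y) = abs(y - exact) / eps(exact)          # error in units of the last place

ulps(naive(x)), ulps(stable(x))  # the naive form's error is much larger
```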


Good catch! I was a little worried that the 0/0 in the limit would cause issues, but apparently not.

(You do need an explicit iszero(x) check as in my code above, as otherwise you will get NaN when x is exactly zero.)
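A minimal demonstration of why the guard is needed (self-contained sketch):

```julia
# Without the guard, x == 0 gives 0/0 = NaN; the iszero branch returns the x -> 0 limit.
f(x) = iszero(x) ? float(one(x)) : -expm1(-x) / x

isnan(-expm1(-0.0) / 0.0)   # the unguarded form breaks at exactly zero
f(0.0)                      # the guarded form returns the limit, 1.0
```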


@stevengj, sorry for asking a dumb question: since nextfloat(0.) equals 5.0e-324, why not start the range at -324? Thank you.

For this particular f(x) you actually get 1.0 as soon as x is less than about 1e-16, so there is not much point in worrying about the exact bottom end of the range.

In general calculations, however, if you work with subnormal values you may get larger relative errors because the precision is reduced. The smallest non-subnormal value is floatmin(Float64) ≈ 2.2e-308. I usually just remember 1e-300 as a “round” approximation for this, which is why my quick test code started there.
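A few quick queries illustrate where the normal range ends and subnormals begin (a sketch using Base functions):

```julia
# Subnormals live between nextfloat(0.0) and floatmin; their fixed absolute
# spacing means relative precision degrades as values shrink toward zero.
floatmin(Float64)        # smallest *normal* Float64, about 2.2e-308
nextfloat(0.0)           # smallest positive (subnormal) Float64, 5.0e-324
issubnormal(1.0e-310)    # below floatmin: fewer significant bits available
issubnormal(1.0e-300)    # still a normal number, with full 53-bit precision
```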


Thank you! I only need an error below 1e-8, so I am satisfied for now:) Thanks for all the in-depth discussion here. I use this to evaluate the Debye function and the h function, i.e.

g(x, f) = (2/x^2)(e^{-xf} + xf - 1) = (2/x)[f - h(x, f)]

h(x, f) = (1 - e^{-xf}) / x

This kind of expression is also very useful for computing the coefficients for the ETDRK4 solver.
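The two expressions for g above are algebraically equivalent, which a quick numeric check confirms at a moderate argument (helper names g_direct/g_via_h are mine; both forms lose accuracy for small xf, as discussed below):

```julia
# Check the identity g(x, f) = (2/x^2)(e^{-xf} + xf - 1) = (2/x)[f - h(x, f)]
# numerically, away from the small-xf cancellation regime.
h(x, f) = (1 - exp(-x*f)) / x
g_direct(x, f) = 2 * (exp(-x*f) + x*f - 1) / x^2
g_via_h(x, f)  = (2 / x) * (f - h(x, f))

g_direct(1.0, 1.0) ≈ g_via_h(1.0, 1.0)   # true for moderate xf
```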


To compute this function accurately you need more than just expm1, because the numerator cancels both the 0th- and 1st-order terms in xf (for small xf). I’m not sure if one of the tricks for expm1 can be adapted here or if you just have to use a Taylor series for small xf.

If you do use a Taylor series and care about performance, I would hard-code the coefficients and use evalpoly, e.g. this gives at least 14 digits:

function g(x, f)
    xf = x*f
    abs(xf) > 0.1 && return 2(exp(-xf) + xf - 1) / x^2
    return f^2 * evalpoly(xf, (1, -1/3, 1/12, -1/60, 1/360, -1/2520, 1/20160, -1/181440, 1/1814400))
end
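A quick sanity check of both branches against a BigFloat reference (a self-contained copy of the hybrid g, with hypothetical helper names g_exact/relerr):

```julia
# Hybrid g: direct formula for |xf| > 0.1, hard-coded Taylor series otherwise.
function g(x, f)
    xf = x * f
    abs(xf) > 0.1 && return 2 * (exp(-xf) + xf - 1) / x^2
    return f^2 * evalpoly(xf, (1, -1/3, 1/12, -1/60, 1/360,
                               -1/2520, 1/20160, -1/181440, 1/1814400))
end

# High-precision reference, rounded back to Float64.
g_exact(x, f) = Float64(2 * (exp(-big(x) * f) + big(x) * f - 1) / big(x)^2)

relerr(x, f) = abs(g(x, f) - g_exact(x, f)) / abs(g_exact(x, f))
relerr(0.01, 1.0), relerr(1.0, 1.0)   # series branch and direct branch
```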

Thanks! I will use your function. It is exactly what I want.