Richardson.jl: Richardson extrapolation in Julia

Richardson extrapolation is a technique for estimating the one-sided limit f(x_0) = \lim_{x \to x_0} f(x) of a given function f(x) without evaluating f(x_0) explicitly, to high-order accuracy, using adaptive polynomial extrapolation.

I couldn’t find any library in Julia for this, so I put together a little package: https://github.com/JuliaMath/Richardson.jl
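
A minimal sketch of the basic usage (extrapolating the classic limit \lim_{x \to 0} \sin(x)/x):

```julia
using Richardson

# estimate lim_{x→0} sin(x)/x, starting from h = 0.1;
# extrapolate returns (limit estimate, error estimate)
val, err = extrapolate(0.1, x0=0.0) do x
    sin(x) / x
end
# val ≈ 1.0, with err a small error estimate
```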

I plan to register it soon (perhaps under JuliaMath). (Please let me know if this duplicates functionality already implemented elsewhere!)

24 Likes

There are ODE-solver versions of Richardson extrapolation in OrdinaryDiffEq (https://docs.sciml.ai/latest/solvers/ode_solve/#Parallelized-Explicit-Extrapolation-Methods-1), but while it’s the same general method, that seems to be a fairly different use case from what you’re doing here.

I’ve coded simpler (and MUCH wronger) versions of this general code countless times…

Thank you Steven for the marvellous work! Register it ASAP; someone much wiser than me said: “its usefulness for practical computations can hardly be overestimated.” :smiley:

EDIT 1:
It can be useful for protecting tricky computations in automatic differentiation (AD):

```julia
using Richardson
using ForwardDiff: Dual

f(x) = cos(x + sqrt(x) * x)

# extrapolate toward x0 = Dual(0.0, 1.0) instead of differentiating
# through sqrt at zero directly, which would produce a NaN derivative
A, _ = extrapolate(Dual(0.1, 0.0), x0=Dual(0.0, 1.0)) do x
    @show x
    f(x)
end
#=
x = Dual{Nothing}(0.1,2.0)
x = Dual{Nothing}(0.0125,1.125)
x = Dual{Nothing}(0.0015625,1.015625)
x = Dual{Nothing}(0.0001953125,1.001953125)
x = Dual{Nothing}(2.44140625e-5,1.000244140625)
x = Dual{Nothing}(3.0517578125e-6,1.000030517578125)
=#
B = f(Dual(0.0, 1.0)) # direct AD evaluation, for comparison

@show A # Dual{Nothing}(0.9999999999995401,1.740011327412078e-8)
@show B # Dual{Nothing}(1.0,NaN)
```

The following is now fixed/supported! :smiley:

EDIT 2: Via a super-ugly, surely broken approach this seems to work for vector-valued functions. Is there a way to truly support vector functions?
```julia
using Richardson
using LinearAlgebra: norm

# ugly type piracy: give vectors an abs and an ordering via their norm,
# so that the convergence test inside extrapolate can compare them
Base.abs(x::AbstractVector) = norm(x)
Base.isless(y::Number, x::AbstractVector) = y < norm(x)

extrapolate(-0.1, x0=0) do x
    @show x
    [x/sqrt(-x), sin(x)/x]
end
# ([-4.76091483644921e-9, 1.0], 8.704985825570316e-9)
```

EDIT 3: Limits in \mathbb{R}^N can easily be supported, I think:

```julia
using Richardson

# reduce a limit along the ray x0 + t*h in R^N to a scalar
# extrapolation in t, forwarding any remaining keyword arguments
@inline function Richardson.extrapolate(f, h::AbstractArray{T}; x0::AbstractArray{T}, args...) where T
    size(h) == size(x0) || throw(DimensionMismatch("h and x0 must have the same size"))
    g(t) = x0 .+ t .* h
    extrapolate(T(1), x0=T(0); args...) do t
        f(g(t))
    end
end

extrapolate([0.1,-0.2], x0=[0.0,1.0]) do x
    @show x
    sin(x[1])/x[1] * log(x[2])/(x[2]-1)
end
#=
x = [0.1, 0.8]
x = [0.0125, 0.975]
x = [0.0015625, 0.996875]
x = [0.0001953125, 0.999609375]
x = [2.44140625e-5, 0.999951171875]
=#
# (1.000000000000058, 1.4038002982275088e-9)
```

EDIT 4: For some functions it can go into an endless loop (valid up to commit https://github.com/JuliaMath/Richardson.jl/commit/c828d05a0b07fde094092496eecdea0e1c4588c9):

```julia
using Richardson
extrapolate(0.2, x0=1.0) do x
    @show x # just to slow things down enough that the computation can be aborted easily
    log(x)/sqrt(x-1)
end
```

Thanks for the comments, but please file bug reports and feature requests as GitHub issues rather than posting here.

There are two issues here. First, if the limit could be zero then you need to specify an absolute tolerance (atol); we can’t have a default atol because it is dimensionful.
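
For example (a sketch), for a limit that is exactly zero you would write something like:

```julia
using Richardson

# lim_{x→0} sin(x)^2/x = 0, so a purely relative tolerance can never
# be satisfied; an absolute tolerance is needed instead
extrapolate(0.1, x0=0.0, atol=1e-12) do x
    sin(x)^2 / x
end
```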

Second, Richardson extrapolation with polynomials assumes an analytic function, i.e. a Taylor series, which does not hold for \log(x) / \sqrt{x-1}, so you will get rather slow convergence (and eventually NaN values). If you change variables to x = 1 + u^2, and hence use \log(1+u^2) / u or log1p(u^2) / u, then it is analytic in u and you recover rapid convergence.
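
Concretely, a sketch of that substitution (with an atol, since the limit in u is zero):

```julia
using Richardson

# substitute x = 1 + u^2: log(x)/sqrt(x-1) becomes log1p(u^2)/u,
# which is analytic in u near u = 0, so convergence is rapid again
extrapolate(0.1, x0=0.0, atol=1e-12) do u
    log1p(u^2) / u
end
```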

Technically, we could do Richardson extrapolation for an arbitrary Puiseux series if you specify the power law, but I haven’t implemented this yet.

3 Likes

Of course, the example was chosen on purpose: \log(1+\epsilon)/\sqrt{\epsilon} \propto \sqrt{\epsilon}, so slow convergence was expected; a parametrized cutoff on the number of iterations is enough, I think. Fixed! :smiley:

Thank you again for filling this gap in the ecosystem. I’ve already modified some of my code to use Richardson.jl (I was using a fixed-order extrapolation), and with your package the accuracy is much better :slight_smile:

Sorry for posting all of that here; the first post started off as a simple thank-you.

1 Like

That’s a reasonable feature; I’ll add something like that.
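
Something like this sketch (I’m assuming the cutoff keyword is called maxeval):

```julia
using Richardson

# cap the number of function evaluations so that slowly converging
# (or non-converging) extrapolations are guaranteed to terminate
extrapolate(0.2, x0=1.0, maxeval=20) do x
    log(x) / sqrt(x - 1)
end
```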

However, I’ve fixed it to terminate on your example (basically, it needs to terminate if it hits a NaN). I also fixed support for vector-valued functions (or any type that supports a norm).
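
So the vector-valued example from above should now work directly, without the abs/isless piracy (a sketch, assuming an updated Richardson.jl):

```julia
using Richardson

# a vector-valued function; the convergence test uses norm() internally
extrapolate(-0.1, x0=0) do x
    [x/sqrt(-x), sin(x)/x]
end
```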

4 Likes

Now implemented. Just pass power=0.5 and you get decent convergence for your example function (which has a Puiseux series in \sqrt{h}):

```julia
julia> extrapolate(0.2, x0=1.0, power=0.5) do x
           @show x
           log(x)/sqrt(x-1)
       end
x = 2.2
x = 1.15
x = 1.01875
x = 1.00234375
x = 1.00029296875
x = 1.00003662109375
x = 1.0000045776367188
x = 1.00000057220459
x = 1.0000000715255737
x = 1.0000000089406966
(-2.1650306436709847e-13, 2.2837517952902017e-13)
```

As an added bonus, if you have an even/symmetric function f(x_0 + h) = f(x_0 - h), which has only even powers of h in its Taylor series, then you can pass power=2 to accelerate convergence.
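
For instance, a sketch with \sin(h)/h, which is even in h:

```julia
using Richardson

# sin(h)/h has Taylor series 1 - h^2/6 + h^4/120 - ..., with only even
# powers of h, so power=2 extrapolates in h^2 and converges faster
extrapolate(0.1, x0=0.0, power=2) do h
    sin(h) / h
end
```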

7 Likes