Romberg.jl: Simple Romberg integration in Julia

Sorry to revive an old thread, but I have some updates and a question about the best place for future development.

Actually, contrary to the typical textbook presentation of Romberg integration, the method is not limited to power-of-two-plus-one sizes (n = 2^k + 1): it can exploit any factorization of n - 1. Moreover, if you use the latest version of the Richardson.jl package, it can handle the extrapolation for you, resulting in a small and elegant implementation.
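For instance, with n = 13 samples (n - 1 = 12 = 2²·3), the implementation below forms trapezoidal-rule estimates T(h) at the spacings h = 12Δx, 4Δx, 2Δx, and Δx, and then extrapolates T(h) to h → 0, exploiting the fact that the trapezoidal-rule error for smooth integrands is a series in h²: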

import Primes, Richardson

# Integrate uniformly spaced samples y (spacing Δx) by Romberg's method:
# form trapezoidal-rule estimates on every coarsened grid allowed by the
# prime factorization of length(y) - 1, then Richardson-extrapolate the
# spacing to zero.  Returns (integral estimate, error estimate).
@views function romberg(Δx::Real, y::AbstractVector; kws...)
    n = length(y)
    n > 1 || throw(ArgumentError("y array must have length > 1"))
    endsum = (y[begin]+y[end])/2
    n == 2 && return endsum * Δx   # a single trapezoid; nothing to extrapolate
    factors = Primes.factor(n-1)
    # finest-grid trapezoidal estimate, paired with its spacing Δx:
    vals = [(Δx * (sum(y[begin+1:end-1]) + endsum), Δx)]
    sizehint!(vals, sum(values(factors)))
    step = 1
    for (f,K) in factors
        for k = 1:K
            step *= f   # coarsen the grid by each prime factor in turn
            sΔx = step*Δx
            # prepend the coarser estimate so vals runs from the largest
            # spacing down to Δx:
            pushfirst!(vals, (sΔx * (sum(y[begin+step:step:end-step]) + endsum), sΔx))
        end
    end
    # trapezoidal-rule error is a series in Δx² for smooth integrands:
    return Richardson.extrapolate(vals, power=2; kws...)
end

romberg(x::AbstractRange, y::AbstractVector; kws...) = romberg(step(x), y; kws...)

Not only is the above code quite a bit shorter than the implementations in Romberg.jl and NumericalIntegration.jl, but it also has the following advantages:

  • It supports arbitrary array lengths n, although it works best when n-1 is highly composite.

  • It returns the estimated integral as well as an error estimate.

  • It supports integrands in arbitrary normed vector spaces (see the sketch just after this list).

  • Even for n = 2^k + 1 lengths, it seems to be more accurate than the implementation in NumericalIntegration.jl (see below for an example).

  • In general, Richardson.jl implements more refined extrapolation than most implementations of this algorithm, because it compares the estimated errors from every level of the Neville tableau and picks the estimate with the smallest error.
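
As an illustration of the vector-space support, here is a minimal sketch, assuming the romberg function above is in scope. (StaticArrays is my addition here, chosen because SVector supplies the +, *, zero, and norm operations used internally, including the zero needed for the empty interior sum at the coarsest level.)

using StaticArrays

x = range(0, 10, length=65)
y = [SVector(cos(t), sin(t)) for t in x]   # integrand values in ℝ²
val, err = romberg(x, y)   # expect val ≈ SVector(sin(10), 1 - cos(10))

Returning to scalars, here is the accuracy comparison promised above: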

julia> x = range(0, 10, length=65); y = cos.(x);

julia> val, err = romberg(x, y)
(-0.5440211107599316, 2.0984771309517924e-7)

julia> val - sin(10)    # exact error
1.2943812688348544e-10

julia> val2 = NumericalIntegration.integrate(x, y, NumericalIntegration.RombergEven())
┌ Warning: RombergEven :: final step reached, but accuracy not: 0.5440207498733947 > 1.0e-12
└ @ NumericalIntegration ~/.julia/packages/NumericalIntegration/3mVSe/src/NumericalIntegration.jl:175
-0.5440205148227985

julia> val2 - sin(10)   # worse error than above
5.960665713233837e-7

As mentioned above, it still works well when n - 1 is merely highly composite rather than a power of two; here n - 1 = 60 = 2²·3·5:

julia> x = range(0, 10, length=61); y = cos.(x);

julia> val, err = romberg(x, y)
(-0.544021113675439, 4.007245680170968e-6)

julia> val - sin(10)
-2.786069264182345e-9

The performance typically seems a bit worse than NumericalIntegration.jl, probably because it performs more dynamic allocations, but it could be sped up further, e.g. by replacing the sum calls over views with direct loops (sketched below) and by omitting the Primes.factor call in the power-of-two case.
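
For instance, here is a hypothetical strided_sum helper (an unbenchmarked sketch; the name and signature are mine) that could replace each sum(y[begin+step:step:end-step]) call with a direct loop:

# same result as sum(y[begin+step:step:end-step]), accumulated explicitly
function strided_sum(y::AbstractVector, step::Integer)
    s = zero(y[begin] + y[end])   # accumulator of the element type
    for i in firstindex(y)+step:step:lastindex(y)-step
        s += y[i]
    end
    return s
end

Inside the loop above, this would be called as strided_sum(y, step).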

Question: should I submit a PR? If so, to which package?
