Romberg.jl: Simple Romberg integration in Julia

Romberg integration combines trapezoidal integration with Richardson extrapolation for significantly more accurate estimates at slightly more cost than trapezoidal integration alone.

This package uses Trapz.jl for the trapezoidal integration and then performs a basic Richardson extrapolation on those results.

Like trapz, romberg is meant to be used for integrating discrete points. For integrating a function, there are much better packages available. Additionally, there are a few limitations to Romberg:

  1. The points must be equally spaced along x
  2. The number of points - 1 must be a power of 2
  3. 1-dimensional integration only

Practically, limitation 2 is the most significant, but it is probably possible to relax this in a future update. If these three conditions can be met, romberg is much more accurate than trapz.
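When those conditions hold, usage mirrors trapz: pass the sample points and values directly. A minimal sketch, assuming the `romberg(x, y)` call signature used later in this thread:

```julia
using Romberg

# 2^3 + 1 = 9 equally spaced samples of f(x) = x^2 on [0, 1]
x = range(0, 1, length=9)
y = x .^ 2

romberg(x, y)  # recovers ∫₀¹ x² dx = 1/3 to near machine precision
```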

I originally considered making a pull request to add this functionality to Trapz.jl, but decided that it makes more sense to keep it separate as it’s a different integration technique. I plan to register it soon.

14 Likes

very happy you used my package for building a higher order method! :smiley:

you can get around limitation 2 via:

using Romberg
using Trapz

function arb_romberg(x, y)
    L = length(x)
    # A single point contributes nothing; return a zero of the integral's type
    L <= 1 && return zero(eltype(x)) * zero(eltype(y))
    # Largest prefix length of the form 2^k + 1: pL = 2^floor(log2(L)) + 1
    pL = 2^(leading_zeros(oftype(L, 0)) - leading_zeros(L) - 1) + 1
    if pL <= L
        # Integrate the power-of-two prefix with romberg, recurse on the rest
        return romberg(x[1:pL], y[1:pL]) + arb_romberg(x[pL:end], y[pL:end])
    else
        return trapz(x, y)
    end
end
vx = 0:0.05:1
vy = @. vx * vx
arb_romberg(vx, vy) # 0.33333333333333326
trapz(vx, vy)       # 0.33375
romberg(vx, vy)     # currently a DomainError

(this code is not optimized)

Very interesting! Regarding this point:

If your integrand is in functional form, then other methods are probably a better choice, e.g. QuadGK.jl or HCubature.jl, among others.

is this a limitation of the idea of Romberg integration, or is it more of an implementation problem?

I wonder if GitHub - JuliaMath/Richardson.jl: Richardson extrapolation in Julia could be used to make a robust functional version of this.

How does this compare to GitHub - dextorious/NumericalIntegration.jl: Basic numerical integration routines for presampled data?

1 Like

@francesco.alemanno Thanks for writing the Trapz.jl package!

Your suggestion for arb_romberg is very clever. I had only thought about padding the vectors with zeros, like an FFT, but that wouldn’t work here because it would introduce a discontinuity in the integrand. I’ve coded up a modified version of arb_romberg and built it into the package. It works, although it is slower than when length(x) - 1 is a power of 2.

@Mason the method works by applying trapezoidal integration using just 2 samples, then 3 samples, then 5, 9, and so on, up to all 2^n + 1 samples. The extrapolation then combines the results of those integrations in such a way that the leading error terms cancel. The formula for this combination is quite specific, so there is no advantage to using the more general Richardson.jl for extrapolation, even if a functional integrand is available.
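That combination can be sketched from scratch as the classic Romberg tableau (illustrative code, not Romberg.jl’s internals): halve the step at each level, reuse the previous trapezoidal estimate, and cancel successive error terms column by column.

```julia
# Standalone sketch of the Romberg tableau described above
# (illustrative names; not the package's implementation).
function romberg_tableau(x::AbstractRange, y::AbstractVector)
    n = length(y) - 1             # number of intervals; must be a power of 2
    m = round(Int, log2(n))
    h = step(x) * n               # coarsest spacing spans the whole interval
    R = zeros(m + 1, m + 1)
    R[1, 1] = h * (y[1] + y[end]) / 2        # trapezoid from 2 samples
    for i in 1:m
        h /= 2
        s = n ÷ 2^i               # index stride at this refinement level
        # Trapezoid from 2^i + 1 samples, reusing the previous estimate:
        R[i+1, 1] = R[i, 1] / 2 + h * sum(y[1+s:2*s:end-s])
        # Richardson extrapolation: each column cancels the next error term
        for j in 1:i
            R[i+1, j+1] = (4^j * R[i+1, j] - R[i, j]) / (4^j - 1)
        end
    end
    return R[m+1, m+1]
end

x = range(0, 1, length=5)
romberg_tableau(x, x .^ 2)  # ≈ 1/3, exact up to roundoff for this integrand
```

Each successive column of the tableau gains roughly two orders of accuracy over plain trapezoidal integration, which is where the cancellation of errors shows up.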

@jtravs I was unaware of NumericalIntegration.jl. I needed Romberg integration for something I’m working on, but nothing came up when searching on discourse or Julia Packages.

NumericalIntegration.jl has a Romberg integration routine and it is significantly faster than Romberg.jl. In my effort to take advantage of Trapz.jl, I think I handicapped myself when it comes to benchmarking. Most of the time in Romberg.jl is spent on creating views of the data and running trapz on the view. NumericalIntegration.jl has coded up the entire procedure in a single function which is much more efficient.

I hate to say it after coding up Romberg.jl, but NumericalIntegration.jl should probably be preferred.

10 Likes

Julia 1.5 will significantly improve the performance of view code

2 Likes

you may also try to @inline the smaller functions. As long as a view does not cross a function barrier, its creation will be elided and, e.g., the correct indexing computed directly by the compiler
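As a sketch of that pattern (assuming Trapz.jl, as used above; the helper name is illustrative):

```julia
using Trapz

# If this helper is inlined into its caller, the views never need to be
# materialized; the compiler can compute the indexing directly.
@inline trapz_range(x, y, r) = trapz(view(x, r), view(y, r))

x = 0:0.1:1
trapz_range(x, x .^ 2, 1:6)  # integrate only the first 6 samples
```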

Sorry to revive an old thread, but I have some updates and a question about the best place for future development.

Actually, contrary to the typical presentation of Romberg integration, the method is not limited to power-of-two-plus-one sizes and can instead work with any factorization of the number of intervals. Moreover, if you use the latest version of the Richardson.jl package, it can handle this for you, resulting in a small and elegant implementation:

import Primes, Richardson

@views function romberg(Δx::Real, y::AbstractVector; kws...)
    n = length(y)
    n > 1 || throw(ArgumentError("y array must have length > 1"))
    endsum = (y[begin]+y[end])/2
    n == 2 && return endsum * Δx
    factors = Primes.factor(n-1)
    vals = [(Δx * (sum(y[begin+1:end-1]) + endsum), Δx)]
    sizehint!(vals, sum(values(factors)))
    step = 1
    for (f,K) in factors
        for k = 1:K
            step *= f
            sΔx = step*Δx
            pushfirst!(vals, (sΔx * (sum(y[begin+step:step:end-step]) + endsum), sΔx))
        end
    end
    return Richardson.extrapolate(vals, power=2; kws...)
end

romberg(x::AbstractRange, y::AbstractVector; kws...) = romberg(step(x), y; kws...)

Not only is the above code quite a bit shorter than the implementations in Romberg.jl and NumericalIntegration.jl, it also has the following advantages:

  • It supports arbitrary array lengths n, although it works best when n-1 is highly composite.

  • It returns the estimated integral as well as an error estimate.

  • It supports integrands in arbitrary normed vector spaces.

  • Even for n = 2^k + 1 lengths, it seems to be more accurate than the implementation in NumericalIntegration.jl (see below for an example).

  • In general, Richardson.jl implements more refined extrapolation than most implementations of this algorithm, because it compares the estimated errors from every level of the Neville tableau and picks the estimate with the smallest error.

julia> x = range(0, 10, length=65); y = cos.(x);

julia> val, err = romberg(x, y)
(-0.5440211107599316, 2.0984771309517924e-7)

julia> val - sin(10)    # exact error
1.2943812688348544e-10

julia> val2 = NumericalIntegration.integrate(x, y, NumericalIntegration.RombergEven())
┌ Warning: RombergEven :: final step reached, but accuracy not: 0.5440207498733947 > 1.0e-12
└ @ NumericalIntegration ~/.julia/packages/NumericalIntegration/3mVSe/src/NumericalIntegration.jl:175
-0.5440205148227985

julia> val2 - sin(10)   # worse error than above
5.960665713233837e-7

As mentioned above, it still works well when n-1 is any highly composite size:

julia> x = range(0, 10, length=61); y = cos.(x);

julia> val, err = romberg(x, y)
(-0.544021113675439, 4.007245680170968e-6)

julia> val - sin(10)
-2.786069264182345e-9

The performance typically seems a bit worse than NumericalIntegration, probably because it has more dynamic allocations — but could be sped up further, e.g. by replacing the sum calls over views with direct loops and omitting the call to Primes.factor in the power-of-two case.
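The direct-loop replacement for those strided sums might look like this (a sketch, not a benchmarked optimization; the function name is hypothetical):

```julia
# Accumulate y[i0], y[i0+istep], ... up to iend without allocating a view.
function strided_sum(y::AbstractVector, i0::Int, istep::Int, iend::Int)
    s = zero(eltype(y))
    @inbounds for i in i0:istep:iend
        s += y[i]
    end
    return s
end

strided_sum(collect(1.0:10.0), 2, 2, 10)  # 2 + 4 + 6 + 8 + 10 = 30.0
```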

Question: should I submit a PR? If so, to which package?

17 Likes

I’d be willing to accept the new algorithm into Romberg.jl — but I also don’t mind if others think it makes more sense to put it in NumericalIntegration.jl.

I opted not to register Romberg.jl after realizing it is outperformed by NumericalIntegration’s more classic and faster implementation, but I would register it if we decide the algorithm should go there.

1 Like

With further thought, I’d encourage you to submit the PR to Romberg.jl.

The existence of Trapz.jl already sets some precedent for having Romberg integration in its own package, as opposed to as another method in NumericalIntegration.jl. Additionally, the owner of NumericalIntegration.jl hasn’t had public GitHub activity for over 5 months, so it’s unknown when a PR there would be accepted.

I’ll tag a version of Romberg.jl as it is, but with this PR, much of the current package will change.

2 Likes

I’m happy to announce that Romberg.jl is now a registered package and can be installed with

] add Romberg

from the REPL. v0.1.0 is the old implementation and v0.2.0 is the new one. Huge thanks to @stevengj for providing the new faster and more accurate implementation.

4 Likes

As the owner in question, it’s true that I haven’t been very active on GitHub lately, but I do usually respond within two weeks or so. That said, NumericalIntegration.jl has been feature complete for my own purposes for a long time, so I mostly just accept PRs when people propose new things. I also don’t mind accepting other people as maintainers or transferring the package to an appropriate organization if the community would like more active maintenance.

As for this particular algorithm, it looks quite interesting and I’ll benchmark it when I get a chance. If it’s considerably better than what NumericalIntegration.jl has now, I might just add Romberg.jl as a dependency.

8 Likes