I’m requesting feedback on OscillatoryIntegralsODE.jl, which numerically evaluates integrals
I = \int_a^b f(x) S(rx) \, dx
where f(x) is smooth and non-oscillatory, but S(rx) is highly oscillatory for large frequencies r. Examples of oscillatory functions include the Bessel functions J_{\nu}(rx), the spherical Bessel functions j_{\nu}(rx), \cos(\omega x), and the Fourier kernel e^{i \omega x}. Gaussian quadrature is inefficient for these integrals, since the number of nodes required for accurate integration scales with the frequency.
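To make that scaling concrete, here is a quick illustrative experiment of my own (using QuadGK's adaptive Gauss–Kronrod as a stand-in for Gaussian quadrature) that counts integrand evaluations as r grows:

using QuadGK, SpecialFunctions

f(x) = exp(-x^2 / 16)
for r in (100, 1_000, 10_000)
    n = Ref(0)  # count integrand evaluations
    I, _ = quadgk(x -> (n[] += 1; f(x) * besselj(100.0, r * x)), 1.0, 5.0; rtol=1e-6)
    println("r = $r needed $(n[]) evaluations")
end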
Two popular methods for highly oscillatory quadrature are Filon- and Levin-type integration (see Huybrechs and Olver 2012), both of which rely on polynomial interpolation, in different ways. I believe @dlfivefifty has had some influential papers in this field, and wrote his thesis about such integrals. Within the Julia package ecosystem, Sheehan Olver and Mikael Slevinsky (the real experts) have worked on ApproxFun approaches for the Fourier case.
[Figure: the integrand f(x) J_\nu(rx) with \nu = r = 100, oscillating rapidly over (1, 5).] It only gets worse: my average integrand contains frequencies 50-100x higher than this.
I need to compute thousands of these integrals efficiently, where S is typically a spherical Bessel function and f contains an interpolation of an ODE solution. This package attempts to deal with this using a Levin method combined with OrdinaryDiffEq.jl. The basic idea (Levin 1994) is that you can transform something gross, like the integral of the function in the figure, into something easier, like a non-oscillatory linear ODE system (two components, in the Bessel case)
\mathbf{p}^\prime + \mathbf{A}^T \mathbf{p} = \mathbf{f}.
Here \mathbf{f} contains the function you want to integrate, and \mathbf{A}(x) is a matrix depending on the integration variable (the ODE's "time"). Provided the oscillatory functions satisfy a first-order system \mathbf{w}^\prime = \mathbf{A} \mathbf{w}, the integral is easily expressed as a dot product of the ODE solution \mathbf{p} with those oscillatory functions, evaluated at the endpoints of the integration interval.
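To spell out the step this relies on (just Levin's argument, in the notation above): take S(rx) to be the first component of \mathbf{w} and \mathbf{f} = (f, 0, \ldots)^T. Then for any solution \mathbf{p} of the ODE above,
\frac{d}{dx} \left( \mathbf{p} \cdot \mathbf{w} \right) = \mathbf{p}^\prime \cdot \mathbf{w} + \mathbf{p} \cdot \mathbf{A} \mathbf{w} = \left( \mathbf{p}^\prime + \mathbf{A}^T \mathbf{p} \right) \cdot \mathbf{w} = f(x) S(rx),
and integrating from a to b gives
I = \mathbf{p}(b) \cdot \mathbf{w}(b) - \mathbf{p}(a) \cdot \mathbf{w}(a).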
The literature typically approximates the solution of this ODE using collocation on a polynomial basis, which reduces the problem even further to one linear solve per subinterval. In my package, I just pass it to an ODE solver; I think this is equivalent to collocation, except I'm making @ChrisRackauckas et al do my matrix factorizations. Every collocation scheme corresponds to a Runge-Kutta method, so something like Vern9 should basically play the role of the 9-point collocation given in Levin's original paper. In this setting, the subinterval sizes are determined by the ODE timestep.
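To make the reduction concrete, here is what the whole pipeline looks like for \int_a^b f(x) J_\nu(rx) dx if you write it out by hand (a sketch of my understanding; the package internals may differ, and A(x) below is just the standard one from the Bessel recurrences with \mathbf{w} = (J_\nu(rx), J_{\nu+1}(rx))^T):

using OrdinaryDiffEq, SpecialFunctions, LinearAlgebra

function levin_besselj(f, ν, r, a, b; solver=Vern9(), kwargs...)
    A(x) = [ν/x -r; r -(ν+1)/x]              # w′ = A(x) w for w = (J_ν(rx), J_{ν+1}(rx))
    rhs(p, _, x) = [f(x), 0.0] - A(x)' * p   # Levin ODE: p′ + Aᵀ p = (f, 0)
    sol = solve(ODEProblem(rhs, zeros(2), (a, b)), solver; kwargs...)
    w(x) = [besselj(ν, r * x), besselj(ν + 1, r * x)]
    dot(sol(b), w(b)) - dot(sol(a), w(a))    # endpoint identity; second term is 0 here
end

levin_besselj(x -> exp(-x^2 / 16), 100, 100, 1.0, 5.0; abstol=1e-6, reltol=1e-6)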
Here’s an example with the package itself. It’s currently unregistered, so you’ll need to get it from GitHub.
using OscillatoryIntegralsODE
using OrdinaryDiffEq
f(x) = exp(-x^2 / 16) # we integrate the Bessel with this f
# set up problem with output type Float64, Bessel order nu=100, frequency r=100
bi = BesselJIntegral{Float64}(100., 100.) # nu, r
# now integrate over (a, b) = (1, 5). kwargs are passed to the ODE solve
levintegrate(bi, f, 1.0, 5.0, Vern9(); abstol=1e-6, reltol=1e-6)
This integrates the function in the figure over the interval (a, b) = (1, 5); for reference, Mathematica gets 0.006311630277650583. For well-behaved f, a nice property is that the local tolerances used in the ODE solve propagate linearly to the global error on the integral.
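For instance, a quick way to eyeball that claim against the Mathematica value (a hypothetical check, reusing bi and f from above):

ref = 0.006311630277650583
for tol in (1e-4, 1e-6, 1e-8)
    I = levintegrate(bi, f, 1.0, 5.0, Vern9(); abstol=tol, reltol=tol)
    println("tol = $tol, error = $(abs(I - ref))")
end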
using BenchmarkTools
@btime levintegrate($bi, $f, 1.0, 5.0, Vern9(); abstol=1e-6, reltol=1e-6)
120.600 μs (19 allocations: 8.22 KiB)
Anyway, I’m hoping for both numerical analysis and code commentary! Is there a better scheme out there for adaptive oscillatory quadrature?
Remark 1. Levin mentions in his paper that collocation at the Chebyshev points would be good for reducing the Runge phenomenon. I think this should correspond to something like the high-stability implicit RK methods? Do any of them work with StaticArrays?
Remark 2. 0 cannot be in the interval [a, b] for Bessel functions when using a Levin method, as A(x) diverges at x = 0. If you need to reach 0 like me, you can use Gaussian quadrature from 0 to the first peak of the Bessel function, where the integrand is not yet oscillatory. There are plenty of asymptotics for locating that peak; I have one in the docs.
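For what it’s worth, a sketch of that hybrid (the peak location below uses the standard large-\nu asymptotic for the first maximum of J_\nu, j^\prime_{\nu,1} \approx \nu + 0.8086165 \nu^{1/3}, which may or may not be the one in the docs; assumes b lies beyond the peak):

using QuadGK, SpecialFunctions  # plus OscillatoryIntegralsODE and OrdinaryDiffEq from above

function besselj_from_zero(f, ν, r, b; kwargs...)
    xpeak = (ν + 0.8086165 * ν^(1/3)) / r  # first max of J_ν(rx); non-oscillatory before this
    head, _ = quadgk(x -> f(x) * besselj(ν, r * x), 0.0, xpeak)  # ordinary quadrature for the head
    bi = BesselJIntegral{Float64}(ν, r)
    tail = levintegrate(bi, f, xpeak, b, Vern9(); kwargs...)      # Levin for the oscillatory tail
    head + tail
end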