Package to evaluate oscillating integrals with increasing frequency

I’m looking to numerically integrate a function of the form

\int_a^\infty g(x)\,\sin(r(x)\,x)\,dx

where g(x) is exponentially decaying and the frequency r(x) = A + B*sqrt(1+x^2) increases with x.
I saw OscillatoryIntegralsODE.jl (xzackli/OscillatoryIntegralsODE.jl on GitHub) and DoubleExponentialFormulas.jl (machakann/DoubleExponentialFormulas.jl on GitHub), but they seem to require a constant oscillation frequency.

Maybe a change of variable

(A+B\sqrt{1+x^2})x=Ay

could work? (It can be solved for x^2 to express dx and g(x) in terms of y.)
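A related option is to invert the phase numerically rather than algebraically. The sketch below (with placeholder values A = 1, B = 2, and a hypothetical helper `xofu`) uses Newton's method to invert u(x) = (A + B*sqrt(1+x^2))*x, which is strictly increasing for A, B > 0; in the new variable u the integrand oscillates as sin(u) with constant unit frequency, which is the form constant-frequency packages expect:

```julia
# Invert the phase u(x) = (A + B*sqrt(1 + x^2)) * x with Newton's method,
# so the integral becomes ∫ g(x(u)) / u'(x(u)) * sin(u) du with a
# *constant* unit frequency in the new variable u.
A, B = 1.0, 2.0                    # placeholder values for illustration
u(x)  = (A + B * sqrt(1 + x^2)) * x
du(x) = A + B * sqrt(1 + x^2) + B * x^2 / sqrt(1 + x^2)   # u'(x) > 0

# hypothetical helper: find x such that u(x) = utarget
function xofu(utarget; x0 = utarget / (A + B), tol = 1e-14)
    x = x0
    for _ in 1:50
        step = (u(x) - utarget) / du(x)
        x -= step
        abs(step) < tol * max(1.0, abs(x)) && break
    end
    return x
end

# transformed integrand in u: decaying envelope times a fixed-frequency sin
g(x) = exp(-x)
transformed(uu) = (x = xofu(uu); g(x) / du(x) * sin(uu))
```

Since u is convex and increasing on x > 0, Newton converges quickly from the crude initial guess; each evaluation of the transformed integrand costs one root-solve.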


I’ll see if that works.

If g decays quickly enough, you probably don't need to do anything special. Just integrate normally until g < tol.
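As a concrete sketch of this truncation idea, assuming an exponential envelope g(x) = exp(-x) like the examples elsewhere in this thread: cut the interval off at X = -log(tol), where the envelope drops below the tolerance, and integrate the finite piece with any standard rule (plain composite Simpson here, only to keep the sketch dependency-free; an adaptive routine would be more efficient):

```julia
# Truncate ∫_a^∞ g(x) sin(r(x) x) dx where the envelope g drops below
# a tolerance, then integrate the finite piece with composite Simpson.
g(x) = exp(-x)
f(x) = g(x) * sin((1 + 2 * sqrt(1 + x^2)) * x)

tol = 1e-10
a, X = 1.0, -log(tol)       # g(X) = tol, so the tail beyond X is negligible

n = 200_000                 # even number of subintervals, enough to
h = (X - a) / n             # resolve the oscillation on [a, X]
I = f(a) + f(X) +
    4 * sum(f(a + h * k) for k in 1:2:n-1) +
    2 * sum(f(a + h * k) for k in 2:2:n-2)
I *= h / 3                  # ≈ -0.05977818
```

The truncation error is bounded by the remaining envelope mass, roughly exp(-X) = tol.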


Do you already know that QuadGK.jl doesn’t work? Since it uses adaptive routines, I’d think it would be able to handle this unless the oscillation were very extreme. It may not be the most efficient method, though, since it will not take advantage of knowledge of the integrand.


I’m new to this, but my understanding was that a quadrature made specifically for oscillating functions was always much more efficient. I’ll experiment and report back.

Have you tried just brute-forcing it with adaptive integration? For example, QuadGK.jl works just fine with a few thousand evaluations for a simple example of the form you proposed:

julia> using QuadGK

julia> quadgk_count(x -> exp(-x) * sin((1 + 2hypot(1,x))*x), 1, Inf) # default 1e-8 tolerance
(-0.05977814307911682, 8.548339483678494e-10, 2115)

julia> quadgk_count(x -> exp(-x) * sin((1 + 2hypot(1,x))*x), 1, Inf, rtol=1e-13)
(-0.05977818481933722, 5.807105271695107e-15, 11175)

Of course, if you analyze your problem and are clever there are almost always ways to do integrals much more efficiently, but it’s not clear from your question whether this is actually necessary for you.


In general, you only really need a specialized oscillatory integrator when your decay term is slow. If your decay term is 1/sqrt(x), you need to do special things, because you have to integrate very far out to capture all the mass. With exponential decay, all the mass is close to 0 anyway, so you don’t need to care about the oscillations: after not very long, the function is essentially zero.

My program will need to evaluate this integral a few thousand times, so I would like it to be efficient. But I think I’ll start with the QuadGK version and only get more clever if and when required.

Note also that if your decay term is slow (relative to the oscillation period), then direct methods (not specialized for the oscillation term) correspond to very ill-conditioned sums that are subject to large cancellation errors — basically, you have a lot of positive and negative contributions that are supposed to exactly cancel, but don’t because of roundoff and other errors.

Even with exponential decay, if the rate is slow enough then you could run into trouble with non-specialized integration schemes.

A classic example of this is asymptotics of Fourier transforms — even if you have a nice smooth, rapidly decaying function f(x), if you want to compute high-frequency asymptotics of its Fourier transform F(k) (which goes to zero for large k, but maybe you want to know just how small it is), then you often need analytical tricks such as saddle-point methods, and/or specialized quadrature schemes.


Thanks for the information.

I’ve come back to this, as it turns out the decay rate can become very slow in my application. There is a region of stationary phase, and it dominates the integral, but it’s unclear to me how to take advantage of this numerically. Any pointers to papers or packages that deal with such functions would be greatly appreciated.

Other than the packages mentioned above, which seem intended for finite intervals (edit: the numerical support of an exponentially decaying function in floating-point arithmetic is finite), another general option for specialized quadrature schemes is generalized Chebyshev quadrature (a precursor to generalized Gaussian quadrature).

The basic idea is to choose a basis of functions that efficiently captures the behavior of the integrands you want, in the sense of being low-rank. Then, using the integrals of the basis functions (obtained either analytically or by brute force), sample the basis densely and find the most linearly independent points to use as a specialized quadrature scheme. There is overhead in constructing the quadrature, but it can use as many points as you need and can be reused for many integrals.

For badly behaved integrands, e.g. power-law singularities, I’ve found I can get 3-5 digits of accuracy from ~40 quadrature points, with the accuracy often limited by the aforementioned cancellation errors.
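To make the moment-matching step concrete, here is a minimal sketch with deliberately naive choices (a monomial basis on [0, 1] and equispaced nodes; a real generalized Chebyshev construction would use a problem-adapted basis and pick the nodes by a rank-revealing selection from dense samples):

```julia
# Fit quadrature weights so that a chosen basis is integrated exactly.
n = 8
nodes = [(k - 0.5) / n for k in 1:n]       # naive equispaced nodes
V = [x^j for j in 0:n-1, x in nodes]       # V[j+1, k] = nodes[k]^j
m = [1.0 / (j + 1) for j in 0:n-1]         # exact moments ∫₀¹ x^j dx
w = V \ m                                  # weights solving V * w = m

# the resulting rule integrates anything well approximated in the basis
f(x) = exp(x)
approx = sum(w[k] * f(nodes[k]) for k in 1:n)   # ≈ exp(1) - 1
```

The same recipe works with any basis whose integrals you can compute; the quality of the rule is set by how well the basis spans your family of integrands.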

Moreover, it looks like there are some unregistered packages for exactly these calculations: GeneralizedGauss.jl (daanhb/GeneralizedGauss.jl on GitHub), a package for the computation of generalized Gaussian quadrature rules.


Thanks for the information and the nod towards that package authored by @daanhb. In his paper related to that repo, he gives an example dealing with boundary element method quadratures, which is what I need. Looks like I’ll need to read up on numerical steepest descent methods to finish this up…

Checking in late here, but thought I’d mention there is a book on the topic of computing oscillatory integrals: https://epubs.siam.org/doi/book/10.1137/1.9781611975123

Lots of work in the area has happened for highly oscillatory integrals, closely related to the asymptotics mentioned above. The goal is to evaluate an oscillatory integral at a cost that is independent of the frequency parameter involved. Not a lot of software exists; one recent exception is the PathFinder toolbox (AndrewGibbs/PathFinder on GitHub; Matlab, I’m afraid), which automates the numerical method of steepest descent for polynomial phases, i.e. for integrands of the form f(x) exp(i*omega*g(x)) with polynomial g.