Insight on QuadGK instability on backwards integration

Does anyone have an insight into why

julia> quadgk(s->(1-s)^(-0.9),0,1,rtol=1e-6,atol=1e-8,maxevals=10^10)
(9.772220920299885, 0.011179284983380677)

julia> quadgk(s->s^(-0.9),0,1)
(9.999999279144092, 1.4665863410199572e-7)

The correct answer is 10, but when I do the integral backwards I get instabilities! I'm already asking the integrator to reduce the error, so how can I work around this? (For context: I'm doing convolutions and I want to integrate that kernel backwards.) Also, the returned error estimate is larger than both the relative and absolute tolerances I requested.

Since you are integrating a function with a power-law endpoint singularity, which quadgk cannot integrate efficiently because the integrand is not smooth at that point, I believe you are seeing how the adaptive grid constructed by quadgk interacts with the distribution of floating-point numbers, which are clustered logarithmically around the origin. If you reverse the interval, as you did, or just shift it like this

julia> quadgk(s->(s-1)^(-0.9),1,2)
(9.75587242705648, 0.011981660960530958)

the singularity is located at a point where the Float64 values are less dense, and quadgk gives up as soon as it has subdivided intervals so close to the singularity that the quadrature nodes it wants to use are rounded off to the endpoints. Since evaluating at the singular endpoint would give an infinite error, quadgk returns early, with an error estimate greater than your tolerance.
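
For example, once an interval has been bisected down to the last representable point below the singularity, even its midpoint rounds to the singular endpoint (a quick illustration of the rounding effect, not of quadgk's exact node placement):

julia> a = prevfloat(1.0); (a + 1.0)/2 == 1.0
true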

If you want to fix this, consider a change of variables to cancel the endpoint singularity, or a specialized quadrature scheme designed for kernels with endpoint singularities. Most likely these will be more accurate and more efficient, although it may take some work to find the best strategy for your kernel. One possibility for kernels with power-law singularities is to use Gauss rules for Jacobi weight functions, which you can compute with QuadGK.jl. I've also tried to automate h-adaptive Gauss-Kronrod rules for Jacobi weight functions here, with some examples here, but the Kronrod rules may or may not exist for all power-law divergences.
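
For instance, here is a minimal sketch of the change-of-variables approach for this particular power law (the kernel factor f is a hypothetical stand-in for whatever smooth function multiplies the singular factor in your convolution). Substituting 1 - s = u^10 turns (1-s)^{-0.9} ds into 10 du, so the transformed integrand has no singularity left:

using QuadGK

# The integral of f(s)*(1-s)^(-0.9) over [0,1], with the substitution 1 - s = u^10,
# becomes 10 * the integral of f(1 - u^10) over [0,1], which is smooth.
f(s) = exp(s)                                  # hypothetical smooth kernel factor
I, E = quadgk(u -> 10 * f(1 - u^10), 0, 1, rtol=1e-10)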


Yup, this is it: basically, quadgk can refine the interval much closer to 0.0 than it can to 1.0, since the closest you can get to 1.0 is prevfloat(1.0) \approx 1 - 10^{-16}, whereas the closest you can get to 0.0 is < 10^{-300}. Here, your 1/s^{0.9} singularity is so strong (nearly non-integrable) that quadgk needs to refine a lot to compute the integral accurately.
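
You can see the asymmetry directly:

julia> 1 - prevfloat(1.0)    # spacing of Float64 just below 1.0
1.1102230246251565e-16

julia> nextfloat(0.0)        # smallest positive Float64
5.0e-324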

You should definitely try to handle that singularity analytically if possible, as is also discussed in the QuadGK manual.
