(Insanely-) Higher Order Derivatives

In practice, it’s not so simple, especially at ordinary error tolerances (e.g. `rtol=1e-12`).

  • If your integrand is sharply peaked (e.g. you have a pole close to the contour), then h-adaptive integration can often reach a given tolerance much faster than a trapezoidal rule, because it can adaptively place more quadrature points close to the peak, whereas a uniform-spacing trapezoidal rule won’t start to converge exponentially until the resolution is high everywhere.

  • I often find that h-adaptive quadrature with a Gauss–Kronrod rule (of reasonable order) exhibits convergence that appears exponential for smooth functions until you go to very high precision.

For example, consider the function `f(z) = 1/(z - 0.979 - 0.198im)`, which has a pole just inside the unit circle, and suppose we want the contour integral of `f` around the unit circle: $\oint f(z)\,dz = \int_0^{2\pi} f(e^{i\phi})\, i e^{i\phi}\,d\phi$. By the Cauchy residue theorem, the exact integral is $2\pi i$. Let us compare `quadgk` with the trapezoidal rule (equivalent to a Riemann sum for periodic integrands), plotting the error vs. the number of function evaluations (limited via `maxevals=N` in `quadgk`). I find:
[plot: error vs. number of function evaluations for `quadgk` and the trapezoidal rule]
Note that there is not much noticeable difference in the rate of convergence once the two methods start converging: both errors drop almost exponentially (nearly straight lines if you switch to a semilog scale) once you have sufficiently many points, until they reach the limits imposed by machine precision. However, adaptive quadrature (`quadgk`) reaches this fast-converging regime much earlier than the trapezoidal/Riemann method, because the latter is stuck with uniform spatial resolution.
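Here is a minimal sketch of this comparison (assuming the QuadGK.jl package; the `trap` helper is my own illustration, not part of any library):

```julia
# Compare adaptive Gauss–Kronrod quadrature with the trapezoidal rule
# for a contour integral with a pole just inside the unit circle.
using QuadGK

f(z) = 1 / (z - 0.979 - 0.198im)    # pole at 0.979 + 0.198im, |pole| ≈ 0.9988
g(ϕ) = f(cis(ϕ)) * im * cis(ϕ)      # integrand for the parametrization z = e^{iϕ}
exact = 2π * im                     # Cauchy residue theorem

# Trapezoidal rule: for a periodic integrand this is just a uniform Riemann sum.
trap(h, N) = sum(h(2π * k / N) for k in 0:N-1) * 2π / N

# Adaptive quadrature, capped at N function evaluations via maxevals:
for N in (100, 1000, 10000)
    I, _ = quadgk(g, 0, 2π, maxevals=N)
    println(N, ": quadgk err = ", abs(I - exact),
               ", trapezoid err = ", abs(trap(g, N) - exact))
end
```

Plotting these two error columns against `N` reproduces the kind of comparison shown above.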

Now, if you go to much higher precision, e.g. `BigFloat` with `setprecision(512)`, then you will notice that asymptotically `quadgk` eventually converges more slowly than the trapezoidal rule: you only get power-law rather than exponential convergence. However, the QuadGK manual advises you to increase the order of the rule in rough proportion to the precision. The default order is 7, so with 512 bits of precision (about 8× the 64-bit default) you should use roughly 8× the order, say `order=55`. In this case, the convergence rate is again nearly indistinguishable from that of the trapezoidal rule until you hit the ≈1e-154 precision limit, and again the adaptive method reaches the “fast-converging regime” much sooner:
[plot: error vs. number of function evaluations at 512-bit precision, `quadgk` with `order=55` vs. the trapezoidal rule]
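A sketch of such a high-precision run (again assuming QuadGK.jl; `order=55` follows the manual’s rule of thumb of scaling the order with the precision):

```julia
using QuadGK

# 512-bit BigFloat arithmetic; order=55 ≈ 8× the default order of 7,
# matching the ~8× increase in precision over 53-bit Float64.
err = setprecision(512) do
    p = parse(BigFloat, "0.979") + parse(BigFloat, "0.198") * im  # pole location
    f(z) = 1 / (z - p)
    g(ϕ) = f(cis(ϕ)) * im * cis(ϕ)   # integrand on the unit circle z = e^{iϕ}
    I, _ = quadgk(g, big(0), 2 * big(pi); order=55, rtol=parse(BigFloat, "1e-100"))
    abs(I - 2 * big(pi) * im)        # error vs. the exact value 2πi
end
```

Note `parse(BigFloat, …)` is used so the constants are built at the active 512-bit precision.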
