Guide to numerical integration methods?

To pile on with a related question, as an EE without a particularly deep math background, @stevengj do you have any handy references for the different integration methods and their advantages/pitfalls?

I find myself writing a lot of integral-heavy EM code and occasionally stumble into what seem to be pathological cases. I typically default to adaptive quadrature packages like QuadGK.jl and HCubature.jl, but integrands with step-like behavior occasionally defeat them.

There are plenty of articles and books on quadrature methods, but many of them may be more technical than you want. The Davis & Rabinowitz book hits many of the basic ideas, but is a bit old. Perhaps other people can suggest more recent practical reviews?

Perhaps the most important rule to remember is that the convergence rate is typically limited by the “worst” singularity that you haven’t accounted for analytically. (For example, a discontinuity in the function or any of its derivatives, an integrable singularity, or, failing that, even just a pole close to the integration domain.) Singularities inside the domain (as opposed to at the endpoints) are especially hard for numerical methods to deal with.

(Boyd’s book calls a related idea “Darboux’s Principle”, but this moniker doesn’t seem to have caught on.)

So, if you possibly can, you should remove any singularity by a change of variables, build the singularity into the quadrature scheme, or break the integration domain into pieces to put singularities at endpoints. The QuadGK manual has some examples of this, but the general ideas are applicable to any numerical integration scheme.
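For instance, here is a minimal sketch of the domain-splitting trick with QuadGK.jl (the integrand and the kink location 1/3 are made up for illustration):

```julia
using QuadGK

# Toy integrand with a kink (discontinuous derivative) at x = 1/3;
# an interior kink degrades adaptive Gauss–Kronrod convergence.
f(x) = sqrt(abs(x - 1/3))

# Naive call: the kink sits in the interior of (0, 1).
I1, E1 = quadgk(f, 0, 1)

# Better: pass the kink location as an interior breakpoint, so each
# subinterval sees a singularity only at its endpoint.  Gauss–Kronrod
# never evaluates the endpoints, so endpoint singularities are
# comparatively benign.
I2, E2 = quadgk(f, 0, 1/3, 1)
```

Both calls converge to the requested tolerance here, but the split version needs far fewer subdivisions; for nastier singularities the difference can be the difference between converging and not.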


For my part I wonder if there is any sort of guide to 2d quadrature (cubature). I’m struggling to find much that is straightforward to implement except product rules. The rest seem to be articles that prove various properties of integration rules, not really algorithms.

There’s of course Cubature.jl etc, but adaptive methods are not applicable in my case.

I’m actually a bit curious why there exist mature packages for 1d adaptive and non-adaptive quadrature, but only adaptive for higher dimensions. And not just in Julia. I’ve found it difficult to discover software for non-adaptive 2d or N-d quadrature.

Tensor-product rules are by far the most common building blocks here, at least in ≲ 7 dimensions (followed closely by sparse grids, which are still built out of 1d rules). Is there a reason you want a non-product rule? Non-hypercube domain? Non-separable singularity or weight function? (There is a lot of research on cubature with corner singularities for integral-equation methods.)
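As a concrete illustration (a sketch, with arbitrary orders and a `product_rule` helper name of my own choosing), a tensor-product rule on a rectangle is just an outer product of 1d Gauss rules, e.g. using `gauss` from QuadGK.jl:

```julia
using QuadGK

# Tensor-product Gauss–Legendre rule on the rectangle [0,1] × [0,2],
# built from QuadGK's 1d `gauss(N, a, b)` nodes and weights.
nx, ny = 10, 10
x, wx = gauss(nx, 0, 1)
y, wy = gauss(ny, 0, 2)

# ∬ f(x,y) dx dy ≈ Σᵢⱼ wx[i] * wy[j] * f(x[i], y[j])
product_rule(f) = sum(wx[i] * wy[j] * f(x[i], y[j])
                      for i in 1:nx, j in 1:ny)
```

For a smooth integrand like `(x, y) -> exp(x + y)` this converges exponentially in the 1d orders, at the cost of the nx·ny point count that motivates non-product rules.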

Cools (1997) is a survey of cubature methods, and he also has a chapter in this 1991 collection of numerical-integration papers, and he published another survey in 2003.

As I understand it, the basic issue is that finding the optimal points and weights for a given set of basis functions is nonlinear. In 1d, this nonlinearity simplifies to finding roots of orthogonal polynomials, hence eigenproblems, the Golub–Welsch algorithm, and more recent developments. In general higher-dimensional domains, there is apparently no such “easy” algorithm; at best you get systems of quadratic equations.
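In case it’s useful, here is a minimal sketch of the Golub–Welsch idea for Gauss–Legendre (the function name and the choice of normalization are mine): the nodes are eigenvalues of the symmetric tridiagonal Jacobi matrix built from the three-term recurrence, and the weights come from the first components of the eigenvectors.

```julia
using LinearAlgebra

# Golub–Welsch for Gauss–Legendre on [-1, 1]:
# off-diagonal recurrence coefficients β_k = k / √(4k² - 1).
function gauss_legendre(n)
    β = [k / sqrt(4k^2 - 1) for k in 1:n-1]
    J = SymTridiagonal(zeros(n), β)   # Jacobi matrix
    λ, V = eigen(J)
    # nodes = eigenvalues; weights = μ₀ · (first eigenvector components)²,
    # where μ₀ = ∫₋₁¹ 1 dx = 2.
    return λ, 2 .* abs2.(V[1, :])
end
```

An n-point rule from this is exact for polynomials up to degree 2n − 1, and it is this reduction to a symmetric eigenproblem that has no general higher-dimensional analogue.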

(Of course, if you are in very high dimensions, it is an entirely different ballgame.)


Chapter 5 of the 2008 book Numerical Methods in Scientific Computing, Vol. 1 by Dahlquist and Björck discusses quadrature and cubature methods. The last paragraph of the chapter contains several references for multidimensional integration formulas and Monte Carlo methods. They cite Bungartz and Griebel, “Sparse Grids”, Acta Numerica 13:147–270, 2004, as a “survey of recent results on sparse grids and breaking the curse of dimensionality”.

The reason is mainly that product rules are a bit redundant, and performance is a high priority. The function is (probably) smooth, and the domains are either a rectangle or a disc. The redundancy looks particularly striking on the disc.

Ryu and Boyd have an article, “Extensions of Gauss Quadrature Via Linear Programming” (Foundations of Computational Mathematics), which gives more ‘pleasing’ results, but it’s not obvious to me how to implement it.

But perhaps I’m underestimating product rules, and they’re OK. I’m not familiar with sparse grids.

Thanks, I hadn’t seen this one.

There was a recent paper with new cubature formulations for disc domains. This will give you some state-of-the-art rules (downloadable in their supplementary information, albeit only to double precision), as well as giving an indication of how complicated it is to compute them.

They don’t give a comparison to tensor-product rules (where you would ideally use equally spaced points in φ tensored with Gauss–Jacobi points in the radial dimension, which can include the radial factor r analytically). I wonder whether the improvement of a specialized disc rule is better than a factor of 2 (in the number of points required for a given accuracy)? (The other nice thing about tensor products of 1d rules is that it is easy to have embedded rules to get both an integral and an error estimate, e.g. QuadGK can generate nested Kronrod Gauss–Jacobi rules for you, whereas people developing fancy cubature rules rarely seem to worry about this.)
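To make the tensor-product baseline concrete, here is a sketch of such a disc rule (assuming QuadGK’s `gauss(W, N, a, b)` method for custom weight functions; the orders n and m and the helper name are my own choices). The radial Gauss rule absorbs the Jacobian factor r as its weight function, and the equally spaced (trapezoidal) rule in angle is spectrally accurate for periodic integrands:

```julia
using QuadGK

# Radial Gauss rule on [0, 1] for the weight function w(r) = r,
# i.e. the disc's area element r dr is built into the weights.
n = 8
r, wr = gauss(rr -> rr, n, 0, 1)

# Equally spaced angles with uniform trapezoidal weights 2π/m.
m = 16
θ = 2π .* (0:m-1) ./ m
wθ = fill(2π / m, m)

# ∬_disc f dA ≈ Σᵢⱼ wr[i] * wθ[j] * f(r[i] cos θ[j], r[i] sin θ[j])
disc_rule(f) = sum(wr[i] * wθ[j] * f(r[i] * cos(θ[j]), r[i] * sin(θ[j]))
                   for i in 1:n, j in 1:m)
```

With f ≡ 1 this recovers the disc area π exactly; the redundancy I mentioned shows up in the n·m point count clustering near the boundary.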


Just wanted to follow up with a thanks for everyone’s inputs here!

Perhaps a good additional reference is the book “Approximate Calculation of Multiple Integrals” by Stroud. Many of the methods are implemented in the Python package quadpy and the C code STROUD. As far as I am aware, there are no equivalent ones in Julia.

I think that many of those formulas have been superseded by now?

Probably, but he was looking for implementations of multidimensional non-product rules.

EDIT: quadpy has many implementations of more recent rules as well.