AD-able cubature

I have functions defined on [0,1]^4 \times [0,\infty) that I want to integrate numerically, ideally in a way that I can use with an automatic differentiation framework. I know very little about these functions ex ante other than the fact that they are continuous and the integral is well-defined.

I can evaluate the functions everywhere in their domain, but they are not cheap. Speed is preferred even with slight loss of numerical accuracy.

Suggestions would be appreciated, including existing packages or methods that I have to implement myself.

GitHub - SciML/Integrals.jl: A common interface for quadrature and numerical integration for the SciML scientific machine learning organization?


Note that most things in Integrals.jl are differentiable, though right now we are missing a few derivatives with respect to boundaries in higher dimensions, and that is something we are planning on finishing pretty soon. But with the 4.0 release, if your boundaries are constant then everything AD-wise should work, and of course we’ll have the other derivatives in no time.


Thanks, it looks like this package is ideal for my purposes.

I am still a bit lost when it comes to selecting an algorithm though. Is there an introduction to cubature that you would recommend?

Also, is there a package that does monomial rules (Stroud 1971), or are they now dominated by other methods?

That’s probably a question for @stevengj. I at least don’t know of a package that implements monomial rules, though if one does we’d be happy to wrap it into Integrals.jl for AD support and do the same testing around it.

I’ve refrained from adding algorithm recommendations to Integrals.jl so far because we haven’t done enough benchmarking. But this package is slated to evolve like all of the others in SciML, where we set it up on SciMLBenchmarks with a bunch of different problems and generate recommendations in the docs at the top of the solver page based on that. But the general understanding that I have right now is:

  • QuadGKJL for one-dimensional problems
  • CubatureJLh or CubatureJLp if the dimension is, say, <6–8 (with the p variant only for smooth integrands?)
  • CubaVegas and CubaSUAVE above that dimension.

But again, this is before setting up the kind of formal benchmarks that we have done in the other domains, so this is a decent starting point but not something I would commit to as really solid yet. Also, we haven’t integrated MCIntegration.jl (GitHub - numericalEFT/MCIntegration.jl: Robust and fast Monte Carlo algorithm for high dimension integration) into the package yet (pun intended), so we have no idea how it performs vs. the Cuba methods; that will be interesting to benchmark.
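
For concreteness, a minimal sketch of what a call through the common interface looks like (the integrand here is a stand-in, and I’m assuming the v4-style positional IntegralProblem form; check the docs for your installed version):

using Integrals

f(u, p) = exp(p * prod(u))                         # hypothetical integrand
prob = IntegralProblem(f, zeros(4), ones(4), 1.0)  # [0,1]^4, parameter p = 1.0
sol = solve(prob, HCubatureJL(); reltol = 1e-3, abstol = 1e-3)
sol.u                                              # the estimated integral

HCubatureJL is the pure-Julia h-adaptive cubature bundled with Integrals.jl; per the docs, the CubatureJL* and Cuba* algorithms require loading Cubature.jl or Cuba.jl alongside.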


@Tamas_Papp I recently rewrote the interface of Integrals.jl and its AD, and if you run into any issues I would be happy to assist, since it would be great to “break in” the package on some scientific problems.

Regarding the choice of cubature rules, an important question is whether you need adaptive algorithms that provide error estimates, or whether a fixed-order quadrature will suffice. In the adaptive case, the AD implemented in Integrals.jl is most efficient using ForwardDiff, since it combines the primal and dual problems into a single library call. In the non-adaptive case it would be very straightforward to write your own code, since that path hasn’t been tested in Integrals.jl yet.
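
For reference, a hedged sketch of what the adaptive-plus-ForwardDiff path looks like from the user side (stand-in integrand; the exact overloads depend on your Integrals.jl version):

using Integrals, ForwardDiff

f(u, p) = exp(p[1] * prod(u))        # hypothetical integrand
function F(p)
    prob = IntegralProblem(f, zeros(3), ones(3), p)
    solve(prob, HCubatureJL(); reltol = 1e-6, abstol = 1e-6).u
end

ForwardDiff.gradient(F, [1.0])       # derivative of the integral w.r.t. p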

Otherwise, any additional information you can provide about the functions being integrated is very helpful: e.g. are they smooth, nearly singular, or do they have non-analytic behavior? Smooth functions can be well-approximated by monomial rules, and special things can be done in other cases. Also, algorithmically there may be an advantage to doing iterated integration instead of cubature, depending on the geometry of the integrand (e.g. if it is localized on a manifold).
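
For instance, iterated integration is just nested 1d calls; a hedged sketch with QuadGK (placeholder integrand and tolerances):

using QuadGK

# inner integral over x1 at fixed x2, with a deliberately loose tolerance
inner(x2) = quadgk(x1 -> exp(x1 * x2), 0, 1; rtol = 1e-4)[1]
# outer integral over x2
outer = quadgk(inner, 0, 1; rtol = 1e-4)[1]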

As for an introductory text on cubature, Stroud’s “Approximate Calculation of Multiple Integrals” is useful, but by no means complete. For example, generalized Chebyshev and Gaussian quadratures can be created to accurately integrate many families of functions (not just monomials).


Thanks. The integrand is “nice”: effectively f(g(x)), where f is analytic, increasing, and possibly even convex, and g is a linear combination of Chebyshev polynomials. No singularities, etc.

I am after a fast approximation that does not have to be great; a relative error of 0.001 or even 0.01 is fine. Ideally the error would change smoothly when I change g, which I guess rules out adaptive rules, but I can work around this.

So is there a Julia package which implements the kind of rules in the Stroud book?

Nothing is really smooth on the computer — it is just a question of whether you tolerate non-smoothness on the level of roundoff errors (which is unavoidable) or on the level of the truncation error. If the latter is unacceptable, then I agree that you need a non-adaptive rule.

If you just want to do a fixed-order/non-adaptive tensor-product rule, you can implement that yourself easily. E.g., for d-dimensional integration on [a,b]^d, given the points and weights (x, w) of a 1d rule on [a,b], you can just do:

function cubature(f, x::AbstractVector, w::AbstractVector{<:Real}, ::Val{d}) where {d}
    (ax = axes(x,1)) == axes(w,1) || throw(DimensionMismatch())
    integral = zero(eltype(w))
    # loop over the d-fold tensor product of the 1d rule's indices:
    for i in Iterators.product(ntuple(_ -> ax, Val{d}())...)
        point = getindex.((x,), i)            # d-tuple of 1d nodes
        weight = prod(getindex.((w,), i))     # product of the 1d weights
        integral += f(point...) * weight
    end
    return integral
end

For example, to integrate e^{x_1 x_2 x_3} in [0,1]^3:

julia> using QuadGK

julia> x, w = gauss(10, 0, 1); # 10-point Gauss rule on [0,1]

julia> cubature((x1,x2,x3) -> exp(x1*x2*x3), x, w, Val(3))
1.1464990725286444

which is correct to 13 significant digits.

This should be AD-able as-is (with respect to parameters of the integrand) using e.g. ForwardDiff, assuming that your integrand f is AD-able.
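
For example (a quick check, with a hypothetical parameter p inside the integrand):

using ForwardDiff, QuadGK

x, w = gauss(10, 0, 1)
G(p) = cubature((x1, x2, x3) -> exp(p * x1 * x2 * x3), x, w, Val(3))
ForwardDiff.derivative(G, 1.0)   # d/dp of the integral at p = 1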


PS. @btime says that this has 7000 allocations, which confuses me since all of the types seem to be inferred and small tuples shouldn’t allocate. I tried replacing the getindex broadcast with a lower-level ntuple construction:

@inbounds point = ntuple(k -> x[i[k]], Val{d}())
@inbounds weight = prod(ntuple(k -> w[i[k]], Val{d}()))

but it doesn’t help. I’m probably forgetting something simple?

If g is a sum of Chebyshev polynomials, then the change of variables x = cos(theta) allows you to use the periodic trapezoidal rule, i.e. integration with equispaced nodes and uniform weights, which converges rapidly. A product quadrature of this rule in the variables amenable to this transformation is probably optimal, unless f is analytic only in a strip of the complex plane near the real axis, in which case adaptive methods would converge faster.
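
To illustrate the convergence (a generic smooth periodic integrand, not your f and g):

# periodic trapezoidal rule: equispaced nodes, uniform weights, one period
trap(h, n) = (2pi / n) * sum(h(2pi * k / n) for k in 0:n-1)

for n in (4, 8, 16)
    println(n, "  ", trap(t -> exp(sin(t)), n))
end
# converges geometrically to 2pi * I_0(1) ≈ 7.95492652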

I don’t know of any Julia implementations of Stroud’s rules. If a product rule uses too many points, the closest thing would be the order-7 Genz–Malik rule implemented in HCubature.jl.
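
For reference, HCubature.jl can also be called directly; a hedged sketch (rtol and maxevals are illustrative knobs, not recommendations):

using HCubature

# hcubature returns both the estimate and an error bound;
# maxevals caps the cost of the adaptive refinement
val, err = hcubature(x -> exp(x[1] * x[2] * x[3]),
                     (0.0, 0.0, 0.0), (1.0, 1.0, 1.0);
                     rtol = 1e-3, maxevals = 10_000)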


Is the integral variable boxed? Maybe the first loop iteration would have to be unrolled to get the right type for f(x)*w.


Thank you, I was not aware of the periodic trapezoidal rule.

What I am most interested in is whether I have to use products of 1d rules, with the curse of dimensionality that entails, or whether it is possible to work around that. Hence my interest in monomial rules.

In this example the Float64 type is correctly inferred. I agree that in general you would need to either unroll or do one additional function evaluation to get the correct type, but that’s not the source of the allocations here.

In 4d I wouldn’t worry about the curse of dimensionality — tensor-product rules and variants thereof usually seem to be about as good as you can do for integrating over low-dimensional hypercubes.
