Integral of a function inside non-rectangular domain

I want to integrate some function f(x,y) inside a non-rectangular domain, say a disk defined by x² + y² ≤ 1. Is there a package for doing this?

I actually tried defining g(x,y) = f(x,y) * (x^2 + y^2 ≤ 1). Then g(x,y) equals f(x,y) inside the disk and zero outside, so integrating g(x,y) over a rectangular domain containing the disk should do the job. However, when I tried this with the HCubature package, I could not get a solution after several minutes and had to halt the calculation.

Here is the example I tried:

julia> VERSION

(@v1.6) pkg> st HCubature
      Status `~/.julia/environments/v1.6/Project.toml`
  [19dc6840] HCubature v1.5.0

julia> using LinearAlgebra, HCubature

julia> f(p) = sum(abs2, p);  # some arbitrary function defined on xy-plane; p = [x,y]

julia> g(p) = f(p) * (norm(p)≤1);

julia> @time hcubature(f, [-2,-2], [2,2])  # generates solution quickly
  2.026476 seconds (4.40 M allocations: 267.180 MiB, 3.88% gc time, 99.97% compilation time)
(42.66666666666667, 7.105427357601002e-15)

julia> @time hcubature(g, [-2,-2], [2,2])  # doesn't finish

Have you considered polar coordinates?

In general, you want to do some change of variables to map the domain to a hypercube (or a collection thereof). See also 2D integration over non-rectangular domain using cubature - #7 by stevengj

The problem with integrating this is that now you have a discontinuous integrand. Quadrature schemes are generally designed for smooth integrands. (If you have a singularity, you want to build it into the quadrature scheme if possible.)

Note that hcubature defaults to a relative tolerance of \sqrt{\varepsilon}, so it is trying to integrate your discontinuous integrand to about 8 significant digits by refining the domain, and it probably won’t converge until it has > 10^8 subdomains; it’s not surprising that it takes a ridiculously long time.


For example:

julia> using HCubature, StaticArrays

julia> f(p) = sum(abs2, p);  # some arbitrary function defined on xy-plane

julia> hcubature(r -> f(SVector(reverse(sincos(r[2]) .* r[1]))) * r[1], SVector(0,0), SVector(1,2π)) 
(1.5707963267948961, 0.0)

julia> count = 0; # count of function evaluations

julia> hcubature(r -> (global count += 1; f(SVector(reverse(sincos(r[2]) .* r[1]))) * r[1]), SVector(0,0), SVector(1,2π)); count
17

i.e. it gets the correct answer (\pi/2) to machine precision in 17 function evaluations (not surprising, since this integrand is a constant in φ and a degree-3 polynomial in r, so the Genz–Malik quadrature rule is actually exact).


In particular, we can estimate this by gradually tightening the tolerance and printing out the number of evaluations:

julia> count = 0; hcubature(x -> (global count += 1; g(x)), [-2,-2], [2,2], rtol=1e-3); count

julia> count = 0; hcubature(x -> (global count += 1; g(x)), [-2,-2], [2,2], rtol=1e-4); count

julia> count = 0; hcubature(x -> (global count += 1; g(x)), [-2,-2], [2,2], rtol=1e-5); count

Note that the number of required function evaluations is roughly linear in 1/rtol, corresponding to first-order convergence … not too unexpected for discontinuous integrands. If we extrapolate, it should take almost 10^{10} integrand evaluations to reach \sqrt{\varepsilon} tolerance (but it might run out of memory before that point!).
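To see how the discontinuity limits accuracy without any packages at all, here is a minimal sketch (disk_area is a hypothetical helper, not part of HCubature) that estimates the area of the unit disk with an N×N midpoint rule on [-2,2]²; the error comes from the cells cut by the circle rather than from the smoothness of f, so refinement pays off slowly:

```julia
# Estimate the area of the unit disk (exact value π) with an N×N midpoint
# rule on [-2,2]²; the integrand is the discontinuous indicator function.
function disk_area(N)
    h = 4 / N                                  # cell width
    mids = range(-2 + h/2; step=h, length=N)   # cell midpoints
    s = 0.0
    for x in mids, y in mids
        s += (x^2 + y^2 <= 1) * h^2            # count cells whose midpoint is inside
    end
    return s
end

# The error is governed by the boundary cells, so it shrinks slowly:
for N in (50, 100, 200)
    println("N = $N: error = ", abs(disk_area(N) - pi))
end
```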


Definitely agree with @stevengj that you should transform your integration bounds whenever possible (as in this case).

However, in cases when you can’t, the answer is often to just make your cutoff smooth rather than sharp. e.g.

using HCubature, BenchmarkTools, LinearAlgebra  # LinearAlgebra for norm

θ(x; k=1e3) = (1 + tanh(k*x))/2   # smooth approximation of the step function
f(p) = sum(abs2, p)
g(p; k=1e3) = f(p) * θ(1 - norm(p); k)

julia> @btime hcubature(f, (-2,-2), (2,2))
  822.137 ns (21 allocations: 1.11 KiB)
(42.66666666666667, 7.105427357601002e-15)

julia> @btime hcubature(g, (-2,-2), (2,2))
  225.598 ms (5944430 allocations: 212.92 MiB)
(1.570718116968963, 2.3405182989144795e-8)

Adjusting the k used will make the cutoff sharper or smoother. Infinite k gives the Heaviside step function. Obviously this is not only slower but also less accurate than what @stevengj suggests, but sometimes it’s helpful if you have very complicated integration domains and you don’t care too much about the time it takes to solve.
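One way to think about choosing k: the transition of θ from ≈0 to ≈1 happens over a window of width roughly 4/k around the boundary, so k trades smoothness of the integrand against how well g approximates the true indicator. A quick check (standalone redefinition of the θ above):

```julia
# Smooth step: θ rises from ≈0.02 to ≈0.98 over a half-width of 2/k around 0.
θ(x; k=1e3) = (1 + tanh(k*x))/2

println(θ(-2/1e3))   # ≈ 0.018
println(θ( 2/1e3))   # ≈ 0.982
```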

I’m not sure why this is better than simply increasing the error tolerance?

On my machine, using the discontinuous g(p) = f(p) * (norm(p) ≤ 1) and setting rtol = 1e-3 gives about the same accuracy as your smoothed step function with k=1e3 and is 30× faster.


Thanks for all the suggestions. My non-rectangular domain is not always a disk and is arbitrary in general, so a coordinate transformation is not always applicable. I think using a larger rtol, as @stevengj suggested, is a practical solution for me, because I don’t need a very accurate solution.

It is amazing how easy it can be to compose different packages in Julia: PolygonOps.jl can be used for more general polygons together with HCubature.jl by defining, for example:

    g(p) = f(p) * inpolygon(p, polygon)
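For a self-contained illustration of the same composition (without installing PolygonOps), here is a hand-rolled even-odd ray-crossing test standing in for inpolygon; the names inside and square are hypothetical, and the resulting g is of course still discontinuous, so a loose rtol is advisable:

```julia
# Even-odd (ray-crossing) point-in-polygon test; a stand-in for
# PolygonOps.inpolygon so this sketch runs without extra packages.
function inside(p, poly)
    x, y = p
    n = length(poly)
    c = false
    j = n
    for i in 1:n
        xi, yi = poly[i]
        xj, yj = poly[j]
        # does the horizontal ray from p cross edge (j, i)?
        if (yi > y) != (yj > y) && x < (xj - xi) * (y - yi) / (yj - yi) + xi
            c = !c
        end
        j = i
    end
    return c
end

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
g(p) = sum(abs2, p) * inside(p, square)   # f(p) * indicator, as above
```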

Given a function to decompose a polygon into a collection of triangles, you could implement a much more efficient and accurate cubature (e.g. via HCubature) without introducing artificial discontinuities.

One option is EarCut.jl, although it would be nicer to have a Julia-native library. (The C++ library was apparently ported from ~600 lines of JavaScript, so why not port the JavaScript to Julia?)

PS. In fact, HCubature could easily be modified to support “globally adaptive quadrature” over a collection of triangles (mapped into squares), so that it would look at all the subregions simultaneously when deciding where to refine; this is really the right way to attack such problems. It would be pretty easy to write a PolygonQuadrature.jl package on top of HCubature and EarCut (or similar).
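To sketch the “triangles mapped into squares” step concretely: a Duffy-type change of variables sends the unit square onto an arbitrary triangle (A, B, C) with Jacobian 2·area·u, after which any rule for square domains applies. This is a hedged, dependency-free illustration (tri_integral is a hypothetical name, and a plain midpoint rule stands in for hcubature):

```julia
# Map (u, v) ∈ [0,1]² onto triangle (A, B, C) via a Duffy-type transform:
#   (x, y) = A + u*(B - A) + u*v*(C - B),   |Jacobian| = 2*area*u,
# then integrate with a plain n×n midpoint rule.
function tri_integral(f, A, B, C; n = 200)
    twoarea = abs((B[1]-A[1])*(C[2]-A[2]) - (B[2]-A[2])*(C[1]-A[1]))
    h = 1 / n
    s = 0.0
    for i in 1:n, j in 1:n
        u = (i - 0.5) * h
        v = (j - 0.5) * h
        x = A[1] + u*(B[1]-A[1]) + u*v*(C[1]-B[1])
        y = A[2] + u*(B[2]-A[2]) + u*v*(C[2]-B[2])
        s += f(x, y) * twoarea * u * h^2
    end
    return s
end

# Sanity checks on the unit right triangle (area 1/2):
tri_integral((x, y) -> 1.0, (0, 0), (1, 0), (0, 1))   # ≈ 0.5
tri_integral((x, y) -> x,   (0, 0), (1, 0), (0, 1))   # ≈ 1/6
```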


@stevengj, the license for Earcut.js and Earcut.hpp is the ISC license. If one were to write a Julia port, would it also have to use the ISC license?
Also, Earcut.jl uses Earcut.hpp, which is ISC; wouldn’t Earcut.jl need to be ISC too?

Technically, you need to include the ISC license statement, which applies to the original library and must be reproduced in copies and derived works, and you can then apply any ISC-compatible license to your own code (e.g. Julia wrappers); since ISC is a permissive non-copyleft license similar to MIT or 2-clause BSD, that means pretty much any license. As a permissive non-copyleft free/open-source license, ISC is compatible with essentially any other code/licenses you might want to use it with.


Just an update on this issue. Nowadays we have Julia-native implementations of several triangulation algorithms in Meshes.jl:

Nice! (And I see that you have the same algorithm as EarCut.)

Is there any possibility of an algorithm that does quadrilateralization instead of triangulation?


It would be a nice addition, @stevengj. What we can currently do is triangulate and then apply QuadRefinement:

If you have a quadrilateralization algorithm in mind, we are happy to review PRs 🙂

That subdivides the triangles into quadrilaterals, right? I’d prefer something that goes in the other direction, merging as many triangles as possible into convex quadrilaterals.

This algorithm seems promising: [cs/9908016] Quadrilateral Meshing by Circle Packing

It would be nice to have a quadrilateralization algorithm in the suite of discretization algorithms.

If the domain of integration is a convex polygon, you can use this method.
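As a hedged, dependency-free sketch of that idea (convex_polygon_integral is a hypothetical name): a convex polygon can be fan-triangulated from its first vertex, and each triangle integrated via the square-to-triangle mapping mentioned above, here with a midpoint rule standing in for hcubature:

```julia
# Fan-triangulate a convex polygon from vertex 1 and sum per-triangle
# integrals, mapping the unit square onto each triangle (Duffy transform,
# |Jacobian| = 2*area*u) and applying an n×n midpoint rule.
function convex_polygon_integral(f, poly; n = 100)
    A = poly[1]
    s = 0.0
    for k in 2:length(poly)-1
        B, C = poly[k], poly[k+1]
        twoarea = abs((B[1]-A[1])*(C[2]-A[2]) - (B[2]-A[2])*(C[1]-A[1]))
        h = 1 / n
        for i in 1:n, j in 1:n
            u = (i - 0.5) * h
            v = (j - 0.5) * h
            x = A[1] + u*(B[1]-A[1]) + u*v*(C[1]-B[1])
            y = A[2] + u*(B[2]-A[2]) + u*v*(C[2]-B[2])
            s += f(x, y) * twoarea * u * h^2
        end
    end
    return s
end

# Area of the unit square treated as a convex polygon:
convex_polygon_integral((x, y) -> 1.0, [(0, 0), (1, 0), (1, 1), (0, 1)])   # ≈ 1
```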