I want to integrate some function f(x,y) inside a non-rectangular domain, say a disk defined by x² + y² ≤ 1. Is there a package for doing this?

I actually tried defining g(x,y) = f(x,y) * (x^2 + y^2 ≤ 1). Then g(x,y) equals f(x,y) inside the disk and zero outside, so integrating g(x,y) over a rectangular domain containing the disk should do the job. However, when I tried this with the HCubature package, the calculation did not finish after several minutes and I had to halt it.

Here is the example I tried:

julia> VERSION
v"1.6.5-pre.0"
(@v1.6) pkg> st HCubature
Status `~/.julia/environments/v1.6/Project.toml`
[19dc6840] HCubature v1.5.0
julia> using LinearAlgebra, HCubature
julia> f(p) = sum(abs2, p); # some arbitrary function defined on xy-plane; p = [x,y]
julia> g(p) = f(p) * (norm(p)≤1);
julia> @time hcubature(f, [-2,-2], [2,2]) # generates solution quickly
2.026476 seconds (4.40 M allocations: 267.180 MiB, 3.88% gc time, 99.97% compilation time)
(42.66666666666667, 7.105427357601002e-15)
julia> @time hcubature(g, [-2,-2], [2,2]) # doesn't finish

The problem with integrating this is that now you have a discontinuous integrand. Quadrature schemes are generally designed for smooth integrands. (If you have a singularity, you want to build it into the quadrature scheme if possible.)

Realize that hcubature defaults to a \sqrt{\varepsilon} relative tolerance, so it is trying to integrate your discontinuous integrand to about 8 significant digits by refining the domain, and it probably won’t converge until it has > 10^8 subdomains; it’s not surprising that it takes a ridiculously long time.

For a disk, a much better approach is to change to polar coordinates, in which the integrand is smooth:

julia> using HCubature, StaticArrays
julia> f(p) = sum(abs2, p); # some arbitrary function defined on xy-plane
julia> hcubature(r -> f(SVector(reverse(sincos(r[2]) .* r[1]))) * r[1], SVector(0,0), SVector(1,2π))
(1.5707963267948961, 0.0)
julia> count = 0; # count of function evaluations
julia> hcubature(r -> (global count += 1; f(SVector(reverse(sincos(r[2]) .* r[1]))) * r[1]), SVector(0,0), SVector(1,2π)) ; count
17

i.e. it gets the correct answer (\pi/2) to machine precision in 17 function evaluations (not surprising, since this integrand is constant in the angular variable and a degree-3 polynomial in r, so the Genz–Malik quadrature rule is actually exact for it).

Note that for the discontinuous integrand, the number of required function evaluations is roughly linear in 1/rtol, corresponding to first-order convergence, which is not too surprising for a discontinuous integrand. Extrapolating, it would take almost 10^{10} integrand evaluations to reach the default \sqrt{\varepsilon} tolerance (and it might run out of memory before that point!).
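For reference, loosening the tolerance is a one-line change; this sketch reuses the g from the original post:

```julia
using LinearAlgebra, HCubature

f(p) = sum(abs2, p)
g(p) = f(p) * (norm(p) ≤ 1)   # discontinuous indicator-masked integrand

# With a loose relative tolerance the discontinuous integrand becomes
# tractable; the exact answer is π/2 ≈ 1.5707963.
I, E = hcubature(g, [-2, -2], [2, 2]; rtol=1e-3)
```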

Adjusting k makes the cutoff sharper or smoother; infinite k gives the Heaviside step function. Obviously this is both slower and less accurate than what @stevengj suggests, but it can be helpful if you have very complicated integration domains and don’t care too much about how long the solution takes.
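For concreteness, the smoothed indicator could look like the following logistic sketch. The exact functional form and the name `smoothstep` are my own assumptions, not taken from the thread; the convention matches the text above, with k → ∞ recovering the Heaviside step:

```julia
using LinearAlgebra, HCubature

f(p) = sum(abs2, p)

# Logistic approximation to the indicator of {x : x ≤ 0};
# as k → ∞ this approaches the Heaviside step function.
smoothstep(x, k) = 1 / (1 + exp(k * x))

# Smoothly masked integrand: ≈ f inside the unit disk, ≈ 0 outside.
g(p, k) = f(p) * smoothstep(norm(p) - 1, k)

hcubature(p -> g(p, 100.0), [-2, -2], [2, 2]; rtol=1e-3)
```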

I’m not sure why this is better than simply increasing the error tolerance?

On my machine, using the discontinuous g(p) = f(p) * (norm(p) ≤ 1) and setting rtol = 1e-3 gives about the same accuracy as your smoothed step function with k=1e-3 and is 30× faster.

Thanks for all the suggestions. My non-rectangular domain is not always a disk and is arbitrary in general, so the coordinate transformation is not always applicable. I think using a larger rtol, as @stevengj suggested, is a practical solution for me, because I don’t need a very accurate solution.

It is amazing how easy it is to compose different packages in Julia: PolygonOps.jl can be used for more general polygons together with HCubature.jl, for example by masking the integrand with the `inpolygon` test.
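Here is a minimal sketch of that composition. I'm assuming PolygonOps' `inpolygon(p, poly)`, which with default keywords returns 1 for a point inside the (closed) polygon, -1 on the boundary, and 0 outside; note this reintroduces a discontinuous integrand, so a loose rtol is again advisable:

```julia
using HCubature, PolygonOps, StaticArrays

f(p) = sum(abs2, p)

# A closed polygon: the first vertex is repeated at the end.
# Here, the unit square with vertices (0,0), (1,0), (1,1), (0,1).
poly = SVector{2,Float64}.([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)])

# Mask the integrand by polygon membership (nonzero means inside or on
# the boundary), then integrate over a bounding box of the polygon.
g(p) = f(p) * (inpolygon(p, poly) != 0)

hcubature(g, [-0.5, -0.5], [1.5, 1.5]; rtol=1e-3)
```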

Given a function to decompose a polygon into a collection of triangles, you could implement a much more efficient and accurate cubature (e.g. via HCubature) without introducing artificial discontinuities.
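A sketch of the per-triangle step (my own code, and the name `tricubature` is hypothetical): map the unit square onto a triangle with vertices a, b, c and weight by the Jacobian, which keeps the integrand smooth on each piece.

```julia
using HCubature, StaticArrays

# 2D cross product (z-component), used for the triangle's Jacobian.
cross2(p, q) = p[1]*q[2] - p[2]*q[1]

# Integrate f over the triangle (a, b, c) by mapping the unit square
# (u, v) ∈ [0,1]² onto the triangle via
#     x(u, v) = a + u*(b - a) + v*(1 - u)*(c - a),
# whose Jacobian determinant is (1 - u) * |cross2(b - a, c - a)|.
function tricubature(f, a, b, c; kws...)
    J0 = abs(cross2(b - a, c - a))  # twice the triangle's area
    hcubature(SVector(0.0, 0.0), SVector(1.0, 1.0); kws...) do uv
        u, v = uv
        x = a + u*(b - a) + v*(1 - u)*(c - a)
        f(x) * (1 - u) * J0
    end
end

# Example: ∫ x dA over the triangle (0,0), (1,0), (0,1) equals 1/6.
tricubature(x -> x[1], SVector(0.0, 0.0), SVector(1.0, 0.0), SVector(0.0, 1.0))
```

Summing `tricubature` over the triangles produced by a polygon triangulation then gives the integral over the whole polygon, without any artificial discontinuity.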

One option is EarCut.jl, although it would be nicer to have a Julia-native library. (The C++ library was apparently ported from ~600 lines of JavaScript, so why not port the JavaScript to Julia?)

PS. In fact, HCubature could easily be modified to support “globally adaptive quadrature” over a collection of triangles (mapped into squares), so that it would look at all the subregions simultaneously when deciding where to refine; this is really the right way to attack such problems. It would be pretty easy to write a PolygonQuadrature.jl package on top of HCubature and EarCut (or similar).

@stevengj, the license for Earcut.js and Earcut.hpp is an ISC license. If one were to write a Julia port, would it also have to use the ISC license?
Also, Earcut.jl uses Earcut.hpp, which is ISC; wouldn’t Earcut.jl need to be ISC too?
Thanks.

Technically, you need to include the ISC license statement, which applies to the original library and must be reproduced in copies and derived works thereof, and then you can apply any ISC-compatible license (i.e. pretty much any license, since ISC is a permissive non-copyleft license similar to MIT or 2-clause BSD) to your own code (e.g. Julia wrappers). As a permissive non-copyleft free/open-source license, ISC is compatible with essentially any other code/licenses you might want to use it with.

That subdivides the triangles into quadrilaterals, right? I’d prefer something that goes in the other direction, merging as many triangles as possible into convex quadrilaterals.