Numerical integration over 3D domain of vector-valued function

Hi all,

I am interested in evaluating a volume integral of a vector-valued function. So the integral has the following form:

\int_{V} \frac{\mathbf{M} \times (\mathbf{r} - \mathbf{r}')}{\lVert \mathbf{r} - \mathbf{r}' \rVert^{3}} \, \text{d}V'

I have already tried nested quadgk and hcubature. Both methods give me correct results, but they also produce a high number of allocations.
Here is my code for using nested quadgk.

using StaticArrays, LinearAlgebra, QuadGK, BenchmarkTools

function f(r_accent_x, r_accent_y, r_accent_z, r::SVector, M, inside_cube)
    r_accent = @SVector[r_accent_x, r_accent_y, r_accent_z]

    teller = cross(M, r - r_accent)     # numerator: M × (r - r')
    noemer = norm(r - r_accent)^3       # denominator: |r - r'|^3

    # Add 0.00001 to the denominator to avoid the singularity when r lies inside the cube
    if inside_cube
        noemer += 0.00001
    end

    result = teller/noemer
    return @SVector[result[1], result[2], result[3]]
end

# Function to check if r is inside the cube (per-dimension bounds)
function is_inside_cube(r, cube_vertices)
    min_coords = [minimum(v[i] for v in cube_vertices) for i in 1:3]
    max_coords = [maximum(v[i] for v in cube_vertices) for i in 1:3]
    all(r .>= min_coords) && all(r .<= max_coords)
end

function evaluate_integral_nested_QuadGK(r::SVector, M, source_vertices)
    x_vals = [v[1] for v in source_vertices]
    y_vals = [v[2] for v in source_vertices]
    z_vals = [v[3] for v in source_vertices]

    min_x = minimum(x_vals)
    max_x = maximum(x_vals) 
    min_y = minimum(y_vals)
    max_y = maximum(y_vals)
    min_z = minimum(z_vals)
    max_z = maximum(z_vals)
    
    y_middle_edge = (source_vertices[1][2] + source_vertices[3][2])/2
    count_smaller = count(y -> y < y_middle_edge, y_vals)

    inside_cube = is_inside_cube(r, source_vertices)

    g(r_accent_x, r_accent_y, r, M) = quadgk(r_accent_z -> f(r_accent_x, r_accent_y, r_accent_z, r, M, inside_cube), min_z, max_z)[1]
    
    function h_varies(count_smaller, r_accent_x, r, M)
        if count_smaller >= 4
            return quadgk(u -> g(r_accent_x, u*(r_accent_x - min_x - min_y) + min_y , r, M) * (r_accent_x - min_x - min_y), 0, 1)[1]
        else
            return quadgk(u -> g(r_accent_x, -u*(max_y - (r_accent_x - min_x)) + max_y, r, M) * (max_y - (r_accent_x - min_x)), 0, 1)[1]
        end
    end

    
    h(r_accent_x, r, M) = h_varies(count_smaller, r_accent_x, r, M)
    l(r, M) = quadgk_count(r_accent_x -> h(r_accent_x, r, M) , min_x, max_x)[1]
    
    return l(r,M)
    
end

@btime evaluate_integral_nested_QuadGK(center_vertices, [1, 0, 0], source_vertices)

#  125.739 ms (2686730 allocations: 209.38 MiB)

Both center_vertices and source_vertices are StaticArrays.
Is there a method or an existing Julia package for solving this integral that is more efficient?

Thanks!

You have to show your code. Allocations may be due to using Arrays instead of StaticArrays.

1 Like

Yes, if the vector is small (< 100 elements), I would definitely recommend using an SVector from StaticArrays.jl for the integrand. If you return an ordinary array, there will be lots more allocations (every integrand evaluation will allocate an array, and a lot of the internal calculations will allocate arrays as well).

Otherwise, you can use the in-place quadgk! API to re-use pre-allocated buffers for the vectors. (For multidimensional integration, you might also try the IteratedIntegration.jl package, which does nested integrals more efficiently, and also has an in-place API.)
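For reference, a minimal sketch of what the in-place API looks like (the three-component integrand below is just a placeholder, not your actual one):

using QuadGK

# f!(y, x) writes the integrand value at x into the pre-allocated buffer y
f!(y, x) = y .= (sin(x), cos(x), x^2)

result = zeros(3)                  # allocated once, reused across calls
I, E = quadgk!(f!, result, 0, 1)   # the integral is accumulated into `result`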

1 Like

Thank you for your reply. I am using SVectors, but I still have a large number of allocations. Is the way I use it correct?

I don’t know, you didn’t show us the way you are using it.

Please read: make it easier to help you — in particular, see the advice about providing a minimal working example.

Sorry, I'm new to these kinds of forums.

Below I've posted a simple example.

using StaticArrays, LinearAlgebra, QuadGK, BenchmarkTools

function f(r_accent_x, r_accent_y, r_accent_z, r::SVector)

    r_accent = @SVector[r_accent_x, r_accent_y, r_accent_z]
    
    result = r/norm(r - r_accent)
    return @SVector[result[1], result[2], result[3]]
end

g(r_accent_x, r_accent_y, r) = quadgk(r_accent_z -> f(r_accent_x, r_accent_y, r_accent_z, r), 0, 1)[1]
h(r_accent_x, r) = quadgk(r_accent_y -> g(r_accent_x, r_accent_y, r), 0, 1)[1]
l(r) = quadgk(r_accent_x -> h(r_accent_x, r), 0, 1)[1]

@btime l(@SVector[1,1,1])
#  691.100 μs (1565 allocations: 316.72 KiB)

Unfortunately, IteratedIntegration.jl doesn't offer an in-place API for nested integration; however, I implemented one at a higher level in AutoBZCore.jl. For example:

julia> using AutoBZCore, StaticArrays

julia> f(y, x, p) = y .= sin.(x .* p)
f (generic function with 1 method)

julia> prob = AutoBZCore.IntegralProblem(AutoBZCore.InplaceIntegrand(f, zeros(3)), CubicLimits(zeros(SVector{3}), ones(SVector{3})), @SVector[1,2,3]);

julia> AutoBZCore.solve(prob, NestedQuad(QuadGKJL()))
AutoBZCore.IntegralSolution{Vector{Float64}, Float64}([0.4596976941318603, 0.7080734182735712, 0.6633308322001485], 2.9909410343949205e-13, true, -1)

AutoBZCore.jl is also similar to Integrals.jl, the main difference being that it offers nested integration, if that happens to be a more efficient algorithm for your problem.

1 Like

There are probably two reasons why nested quadgk allocates:

  1. The quadgk algorithm allocates a heap of segments for adaptive refinement; this buffer is exposed through the segbuf keyword. If you don't pre-allocate and reuse it, a new one is allocated for every inner integral (see the sketch after this list).
  2. Julia's type inference gives up on recursive code such as nested quadgk. In my experience, compiling a triply-nested quadgk can take up to a minute. For a related discussion see Make Julia complete the inference of some recursive code
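To illustrate the first point, here is a minimal sketch of pre-allocating a segment buffer and reusing it across quadgk calls (the scalar integrand is just a placeholder; for a vector-valued integrand you would pass the appropriate range type to alloc_segbuf):

using QuadGK

integrand(x) = sin(x)^2            # placeholder integrand

# Allocate the adaptive-refinement heap once (Float64 domain, range, and error by default)...
segbuf = QuadGK.alloc_segbuf()

# ...and reuse it on every call, e.g. for the inner integral of a nested scheme
I, E = quadgk(integrand, 0, 1; segbuf = segbuf)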

How does AutoBZCore.jl address these issues?

  1. AutoBZCore.jl will automatically pre-allocate the intermediate segbufs for you
  2. AutoBZCore.jl will wrap your inner integrals using FunctionWrappers.jl to stop inference from taking so long
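As a generic sketch of that second idea (this is just an illustration, not AutoBZCore.jl's actual internals):

using QuadGK
using FunctionWrappers: FunctionWrapper

# Inner integral as a function of the outer variable x
inner(x) = first(quadgk(y -> sin(x * y), 0, 1))

# Fix the signature of the inner integral so that inference of the outer
# quadgk call does not have to recurse through another quadgk
inner_fixed = FunctionWrapper{Float64, Tuple{Float64}}(inner)

I, E = quadgk(x -> inner_fixed(x), 0, 1)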

Maybe you can take a look at MeshIntegrals.jl (mikeingold/MeshIntegrals.jl on GitHub: numerical integration of functions over Meshes.jl geometries), if you are able to define your geometry within Meshes.jl. @mike.ingold, do you get a ton of allocations with your setup using QuadGK?

1 Like

For adaptive integration, your allocations won’t be zero because of the data structures needed to keep track of the dynamic subdivision of the domain, though pre-allocation can help.

Your integrand has a singularity at the corner; you really want to do a transformation to eliminate this singularity to make the integration more efficient. Algorithms to do this are widely known in the integral-equation community (indeed, your integral looks much like a panel integral for a surface integral equation).
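For intuition, here is a generic 1D illustration of the idea (not the specific transformation needed for this 3D panel integral): an integrable endpoint singularity such as 1/sqrt(x) can be removed by the substitution x = u^2, whose Jacobian 2u cancels the singular factor.

using QuadGK

smooth(x) = cos(x)                        # smooth part of a model integrand

singular(x)    = smooth(x) / sqrt(x)      # integrable singularity at x = 0
transformed(u) = 2 * smooth(u^2)          # same integral after x = u^2, now smooth

I_direct, _      = quadgk(singular, 0, 1)     # needs adaptive refinement near 0
I_transformed, _ = quadgk(transformed, 0, 1)  # converges with far fewer evaluations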

You might also want to loosen the error tolerance: by default it integrates to at least 8 significant digits, which might be more than you need.
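For example (a minimal sketch with a placeholder integrand), quadgk's rtol keyword controls this; the default is sqrt(eps()), roughly 8 digits:

using QuadGK

# Request roughly 4 significant digits instead of the default ~8
I, E = quadgk(x -> sin(x)^2, 0, 1; rtol = 1e-4)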

Thanks @kylebeggs for the mention.

Yes, MeshIntegrals.jl was designed for pretty much exactly this purpose. You define an integration region using a geometry from Meshes.jl and just specify which integration package you’d like to use with which options. Usage would look something like

using Meshes, MeshIntegrals

# Define an integration region: the interior of a sphere
center = Point(0, 0, 0)
radius = 5.0
ball = Ball(center, radius)

# Define a function to be integrated
# This function must have a method that takes a Meshes.Point
f(x, y, z) = exp(-x^2 - y^2 - z^2)
f(p::Point) = f(p.coords...)

# Integrate the function over the integration domain using HCubature.jl
integral(f, ball, HAdaptiveCubature())

My goal is to allow a simple integral(f, domain) call to adaptively select a reasonable backend, but the solver can be selected manually with the third argument. You can select QuadGK.jl as a solver by switching the third argument to GaussKronrod(). If you have a particular set of keyword arguments you’d like to pass along to QuadGK.jl you can load them directly into that third argument like GaussKronrod(order=11, maxevals=1_000_000, ...) and they’ll be forwarded in the internal quadgk call.
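For instance, reusing f and ball from the example above, and assuming GaussKronrod() supports this geometry, a call with forwarded options might look like this (the specific keyword values here are arbitrary):

# order and maxevals are forwarded to the internal quadgk call
integral(f, ball, GaussKronrod(order = 11, maxevals = 1_000_000))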

The primary goal of the package has simply been to make this kind of problem easy to solve from the user’s perspective. I wrote an electromagnetics solver package last year, JefimenkoModels.jl, that implemented a lot of this functionality ad hoc, and a package like this would’ve saved me a ton of time. In any event, now that MeshIntegrals.jl seems to work reasonably well, I’m trying to focus more on making it more robust and performant.

1 Like

I’ve not really benchmarked them against one another, but if allocations are particularly troublesome, you could also try using FastGaussQuadrature.jl. The main drawback here is that this isn’t an adaptive method, so you’ll need to characterize what number of nodes/weights is sufficient for the integral value to converge.

I implemented a method for that in MeshIntegrals.jl, and you can see the source for a 3D integration region here

This code calculates Gauss-Legendre nodes/weights using FastGaussQuadrature.jl, and uses a combination of iterators and sum to efficiently calculate the integral without needing to allocate the full 3D matrix of results.

This is probably a poor choice for a singular integrand like the one @Lisette_de_Bruin posted: unless you transform the integral to remove the singularity, or build the singularity into the quadrature scheme (which is tricky here because of the dimensionality of the integral), a Gaussian quadrature scheme will converge very slowly.

Singular integrands are still not ideal for adaptive quadrature, but at least it will concentrate the quadrature points near the location of the singularity.

Thank you for your response. For now, I want to ignore the fact that it is a singular integral and just add a small number to the denominator. Is there an example of how to use FastGaussQuadrature.jl with a vector-valued function?

That will still make Gaussian quadrature converge slowly until you have a large number of quadrature points, similar to the behavior described here with a near pole: (Insanely-) Higher Order Derivatives - #17 by stevengj

Thanks for your quick reply. Suppose I do a transformation (or something else so that the singularity is no longer a problem). How do I use FastGaussQuadrature.jl for a vector-valued function?

Once you have nodes and weights of a quadrature rule, e.g.

nodes, weights = gausslegendre(100);

for \int_{-1}^{1}, then you integrate a vector-valued function f(x) the same as any other function, e.g.

integral = sum(i -> f(nodes[i]) * weights[i], 1:length(nodes))

(This can be allocation-free if f(x) returns an SVector.)

1 Like

The FastGaussQuadrature.jl package can be used to efficiently calculate quadrature nodes and weights, but you still have to apply those yourself to calculate integral approximations. The code I linked to above from MeshIntegrals.jl uses it to calculate Gauss-Legendre nodes and weights. The function gausslegendre(n) returns a pair of N-length vectors containing nodes x_i and weights w_i that can be used to make the following approximation.

\int_{-1}^1f(x) \, \text{d}x \approx \sum_{i=1}^N w_i \, f(x_i)

You can extend this process to approximate multi-dimensional integrals, but the integration bounds will always natively be [-1,1].
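As a sketch of that extension (with a placeholder SVector-valued integrand over [0,1]^3, not the integrand from this thread): take the tensor product of a 1D rule and rescale each coordinate from [-1,1] to the target interval, picking up a factor of 1/2 per dimension from the Jacobian.

using FastGaussQuadrature, StaticArrays

# Placeholder vector-valued integrand on the unit cube
fvec(x, y, z) = @SVector[x*y, y*z, z*x]

n = 20
nodes, weights = gausslegendre(n)     # 1D rule on [-1, 1]

to01(t) = (t + 1) / 2                 # map [-1, 1] -> [0, 1]

I = sum(weights[i] * weights[j] * weights[k] *
        fvec(to01(nodes[i]), to01(nodes[j]), to01(nodes[k]))
        for i in 1:n, j in 1:n, k in 1:n) / 8   # (1/2)^3 from the change of interval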

Within MeshIntegrals.jl, I’m using an integration formula akin to this reference for surface integrals (eq 16.6.2). Meshes.jl defines parameterization functions for geometries that transform the domain [0,1] into a set of points that trace the entire geometry. This method requires something like a Jacobian matrix to be calculated to generate a weight that accounts for how much space is traced out by the parameterization function at any point in its domain.

A big benefit of using this kind of fixed quadrature rule is simply that it evaluates the integrated function f a fixed number of times, so the cost is practically constant. The big downside, however, is that you miss out on a lot of the benefits of adaptive integration methods. You have to figure out what N, the number of nodes and weights, should be for the type of problem you're solving. If you want to know what kind of error bars you should put on your approximation, that's on you to determine. And if the function changes more rapidly in one part of the domain than in others, you're pretty much stuck evaluating f on a fixed grid over the entire domain, whereas adaptive methods can focus more of those evaluations in the regions that need more resolution.

2 Likes

Sincere thanks! I wonder:

1/ Does Meshes.jl allow encapsulating meshes generated by GMSH.jl? Are there examples of such a use case?

2/ Can Meshes.jl be instructed to recognize regions in the mesh that GMSH.jl creates?

3/ Can MeshIntegrals.jl be extended to treat a singular part of the integral analytically?

4/ Can MeshIntegrals.jl be extended to treat integrals that depend on a parameter, allowing these integrals to be differentiated in a subsequent step?

You can use GeoIO.load("file.msh") to load a .msh file as a geotable over a mesh from Meshes.jl.

If these regions are stored as vertex/face properties, then GeoIO.jl will load them as columns, which you can use to select specific geometries.

Consider reading the GDSJL book to learn more: Geospatial Data Science with Julia

I will leave the other questions to @mike.ingold who is leading the numerical integration work.

2 Likes