Convolution quadrature

I am looking to perform convolution quadrature, but cannot seem to find any ready-made solution. HCubature.jl and QuadGK.jl seem very capable for ordinary quadrature, but I cannot find any resources for convolution.

To be more precise, this is what I am trying to calculate:

h(z) = \iint_\Omega \phi(x, z-y)\,G(x, y)\,dx\,dy

where G is smooth in parts but has several spiky regions, and \phi is a smoothing function. (\Omega is just a rectangular domain.) G is quite expensive to evaluate, so an adaptive scheme like the one in QuadGK would be very advantageous.

Does anyone know about anything like this? Even something in Matlab or Python?

Edit: both h and G are complex-valued, while x, y, z are real-valued, if that makes any difference.


I read your question as h(z) = \int dx \int dy\, \phi(x, z-y)G(x,y) ?

Unless you have tiny supports, you could fast Fourier transform \phi and G in the second variable (for each fixed x). Then you multiply pointwise, add up (integrate) over x, and fast Fourier transform back. (There is no need to store the Fourier transforms for more than a single x plus an accumulator.)
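The transform-multiply-accumulate loop described above can be sketched like this (a toy equispaced discretization; `phi` and `G` are illustrative stand-ins, FFTW.jl is assumed for `fft`/`ifft`, and note that the FFT computes a *circular* convolution, so the supports need to fit the periodic grid):

```julia
using FFTW

# Toy discretization: nx samples in x, ny equispaced samples in the second variable.
nx, ny = 16, 32
xs = range(0.0, 1.0; length=nx)
dx = step(xs)

ys = range(0.0, 1.0; length=ny + 1)[1:end-1]  # periodic grid for y (and z)
dy = step(ys)

# Illustrative stand-ins for the real kernels.
phi(x, u) = exp(-10 * (x^2 + u^2))
G(x, y)   = cos(2pi * y) / (1 + x)

# Accumulate fft(phi(x, .)) .* fft(G(x, .)) over x, storing only one transform
# pair at a time plus the accumulator.
acc = zeros(ComplexF64, ny)
for x in xs
    acc .+= fft(phi.(x, ys)) .* fft(G.(x, ys)) .* (dx * dy)
end
h = ifft(acc)  # h[k] approximates h(z) at z = ys[k] (circular convolution)
```

The memory footprint is one length-ny accumulator plus two length-ny transforms, regardless of nx.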

That’s basically where I started.

The problem is that G can get very spiky between regions that are super smooth, and it's almost impossible to sample densely enough to capture those spikes without blowing up the total number of samples.

It’s even hard to tell when the sampling is high enough, which is why I really like the adaptiveness of something like QuadGK.

Would it be possible to take the Fourier route with a non-equispaced FFT from FastTransforms.jl? That way, if you know where the spikes are, you can sample more densely around them.


I guess it’s possible. But it would involve implementing a lot of smart approximation machinery, adaptive refinement of the grid and criteria for when to stop refining, which I was hoping someone with more expertise and time/resources on their hands had already figured out.

Right now, I think I can do this by just stepping the z value and calling hcubature at each step. But that would be pretty wasteful, with a lot of recomputation of the same values.

Memoization could be useful for mapping the arguments (x, y) to G(x, y). Be careful with that, though, as it can soak up a lot of memory. Otherwise you could do something like the following, where you can ensure Gdata goes out of scope:

function memoise(f::T, data::Dict=Dict()) where {T<:Function}
    function fmemoised(i...)
        !haskey(data, i) && (data[i] = f(i...))
        return data[i]
    end
    return fmemoised, data
end

Gmemoised, Gdata = memoise(G)

Hmm. Yes, maybe memoization could help.

You might try HCubature with a vector-valued integrand to evaluate h(z) at a bunch of different z values simultaneously, so that G is computed only once per quadrature point. That is, if Z is a vector, do

H, err = hcubature(x -> ϕ.(x[1], Z .- x[2]) .* G(x[1], x[2]), a, b)

to get a vector H corresponding to h(z) at each z \in Z.
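A self-contained sketch of this trick, with illustrative stand-ins for ϕ and G (HCubature.jl assumed; the specific `phi`, `G`, `Z`, and integration limits here are toy choices, not the original problem):

```julia
using HCubature

# Toy stand-ins for the smoothing kernel and the expensive integrand.
phi(x, u) = exp(-(x^2 + u^2))                          # smooth kernel
G(x, y)   = 1 / (0.01 + (x - 0.5)^2 + (y - 0.5)^2)     # spiky but integrable

Z = range(-1.0, 1.0; length=11)   # the z values we want h(z) at

# Vector-valued integrand: one component per z in Z, so the expensive G(x, y)
# is evaluated only once per quadrature point and shared by every z.
integrand(v) = phi.(v[1], Z .- v[2]) .* G(v[1], v[2])

H, err = hcubature(integrand, (0.0, 0.0), (1.0, 1.0); rtol=1e-6)
```

The adaptive refinement is then driven by the norm of the vector error estimate, so all components of H converge together.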

To decide which z values to evaluate: if your convolved function h(z) is smooth, then a good choice is Chebyshev interpolation points, as implemented in ApproxFun.jl. That is, given the values of h(z) at appropriate Chebyshev points, you can construct an exponentially accurate interpolating polynomial.

Here, however, you can’t use the built-in ApproxFun constructors, because you need to evaluate h(z) at all the Chebyshev points at once. There used to be an ApproxFun tutorial on how to do this, but it was removed (presumably it bitrotted), unfortunately. There is a FAQ that explains how to do it.
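For concreteness, Chebyshev points on an interval [a, b] can also be generated directly in a few lines (second-kind points here; `chebpoints` is just an illustrative helper, and ApproxFun uses its own conventions for generating them):

```julia
# Chebyshev points of the second kind, mapped from [-1, 1] to [a, b],
# returned in ascending order (endpoints included).
chebpoints(n, a, b) = [(a + b) / 2 - (b - a) / 2 * cos(pi * k / (n - 1)) for k in 0:n-1]

zs = chebpoints(9, -1.0, 1.0)
# Evaluate h at every z in zs (e.g. with the vector-valued hcubature trick),
# then fit the interpolating polynomial through (zs, h.(zs)).
```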


Oh, I didn’t realize that this might actually work, i.e. that hcubature could return a vector. I can get away with evaluating at equispaced points z.

Would this approach work as a general approach to convolution quadrature? (I’m not actually 100% sure I am using the expression correctly, as I discovered it today.) It seems almost too simple. Is it efficient, or more of a workaround?

Chebyshev points will almost always be better for smooth functions.

Would this approach work as a general approach to convolution quadrature?

It is general, but it won’t always be the most efficient approach, depending on your functions. If you have N interpolation points and N quadrature points, the vector-integrand scheme still requires O(N^2) work, as opposed to the O(N \log N) work required by FFT-based methods for equally spaced points.


I see. In my case, though, the equispaced values are actually the end goal, for various reasons.

OK, thanks a lot.

One important caveat is periodic functions, for which equispaced points are better. If your problem involves periodicity, that’s something to consider.

If your function has smooth and spiky parts, and you have some information about where they are, you might try splitting it in two and using different methods for each (e.g. FFT for the smooth part and adaptive quadrature for the spiky part).
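By linearity of the integral, splitting G into a smooth part plus a spiky part splits the convolution into two independent pieces, each of which can use the method that suits it. A 1-D toy check of that identity (crude Riemann sum; all functions here are illustrative stand-ins):

```julia
Gsmooth(y) = sin(2pi * y)                 # smooth part (toy)
Gspike(y)  = 1 / (0.01 + (y - 0.5)^2)     # spiky part (toy)
Gfull(y)   = Gsmooth(y) + Gspike(y)

phi(u) = exp(-u^2)

ys = range(0.0, 1.0; length=401)
dy = step(ys)

# Crude Riemann-sum convolution at a single z; in practice each piece would
# use the method suited to it (FFT for the smooth part, adaptive quadrature
# for the spiky part), and the two results are simply added.
conv_at(z, g) = sum(phi(z - y) * g(y) for y in ys) * dy
```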