Hi Julia people 
I made this registered package for interpolation: ConvolutionInterpolations.jl, a Julia package for smooth high-accuracy N-dimensional interpolation using separable convolution kernels (https://github.com/NikoBiele/ConvolutionInterpolations.jl).
As visible from the plot below, it offers a wide range of accuracy options: kernels that yield nearest-neighbor, linear, cubic, quintic, etc. interpolation, all the way up to a highly smooth C11-continuous, 7th-order accurate, 13th-degree interpolation (I derived these high-accuracy kernels myself using SymPy).
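For reference, the best-known member of this kernel family is the classic cubic convolution kernel of Keys (1981); the package's higher-order kernels generalize the same construction (this particular kernel is a standard textbook example, not lifted from the package source):

```julia
# The classic Keys cubic convolution kernel (a = -1/2): a piecewise cubic
# with support (-2, 2) that reproduces the data exactly at the grid points
# and is C^1 continuous. Higher-order kernels extend this construction.
function keys_cubic(s::Float64)
    t = abs(s)
    t <= 1 && return 1.5t^3 - 2.5t^2 + 1
    t <  2 && return -0.5t^3 + 2.5t^2 - 4t + 2
    return 0.0
end
```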
Since convolution kernel-based interpolation depends on boundary conditions, I've made the package auto-detect the most suitable boundary condition (linear, quadratic, or periodic) from the data, but this behaviour can be customized, as explained in the readme.
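In code, basic usage looks something like this minimal sketch (the `convolution_interpolation` constructor and the call style are my assumptions here; the readme documents the exact API, including the keyword for overriding the boundary condition):

```julia
# Minimal usage sketch; the constructor name and call style are
# assumptions -- see the package README for the exact API.
using ConvolutionInterpolations

x = 0.0:0.1:1.0
y = sin.(2π .* x)

itp = convolution_interpolation(x, y)  # boundary condition auto-detected
itp(0.55)                              # evaluate between the samples
# The auto-detection can be overridden; the readme documents the keyword.
```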
The boundary handling is the main limitation of this package: when setting up the interpolator, each boundary is extrapolated to create the appropriate boundary condition, which becomes prohibitive in higher dimensions due to the many boundary elements. There is therefore a trade-off between accuracy and dimensionality: for high dimensions, nearest-neighbor or linear interpolation requires no boundary condition and should be efficient, while raising the degree of the polynomial kernel requires more boundary handling and increases the setup time of the interpolator. For high-dimensional data it is also recommended to use the quadratic boundary condition, since it is very efficient compared to the default auto-detect behaviour.
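To make the high-dimensional cost concrete, here is a back-of-the-envelope count (my own illustration; the pad width `p` depends on the kernel's support and is not a number quoted from the package):

```julia
# Why boundary extrapolation gets expensive in high dimensions: padding an
# n^d grid with p ghost points per side adds (n + 2p)^d - n^d elements.
pad_cost(n, p, d) = (n + 2p)^d - n^d

pad_cost(20, 3, 2)   # 276 extra points to extrapolate in 2D
pad_cost(20, 3, 5)   # 8_681_376 extra points in 5D
```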
This package has zero dependencies outside core Julia, so it should be stable over time and require minimal maintenance. I was very inspired by the API design of Interpolations.jl, since my package started out as a PR for that package.
I hope you will check out the package and find it useful. 
What do you use for a fast implementation of the convolution?
Hi RoyiAvital,
Thank you for your response.
Convolution interpolation is based on a weighted average of the kernel functions, where the weights are the uniformly spaced data points in the neighborhood of the desired interpolation point. Since the kernel functions are independent of the data, the fast implementation simply precalculates the kernel functions at a specified number of points and stores their values in a matrix. Interpolation then reduces to a single sorted lookup plus a dot product between the data points and the relevant row of the matrix. Since sorted lookups are very efficient, you can precalculate more points without a penalty.
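A minimal 1D sketch of that precompute-and-lookup idea, reusing the Keys cubic kernel from the earlier snippet (the table size, the rounding-based lookup, and the clamp-style boundary handling are all simplifications; the package's actual internals differ):

```julia
# Sketch of the precompute-and-lookup scheme: tabulate the kernel weights
# once, then each evaluation is one table lookup plus a short dot product.
function keys_cubic(s::Float64)           # kernel from the earlier sketch
    t = abs(s)
    t <= 1 && return 1.5t^3 - 2.5t^2 + 1
    t <  2 && return -0.5t^3 + 2.5t^2 - 4t + 2
    return 0.0
end

# Row i holds the 4 neighbor weights for fractional offset (i - 1) / N.
function build_table(N::Int)
    table = Matrix{Float64}(undef, N + 1, 4)
    for i in 1:N+1, (col, m) in enumerate(-1:2)
        table[i, col] = keys_cubic((i - 1) / N - m)
    end
    return table
end

# One lookup into the table, then a dot product with 4 neighboring samples
# (boundaries handled crudely with clamp; the package extrapolates instead).
function interp(data::Vector{Float64}, table::Matrix{Float64}, x::Float64)
    N = size(table, 1) - 1
    j = floor(Int, x)                     # sample just left of x
    row = round(Int, (x - j) * N) + 1     # nearest precomputed offset
    return sum(data[clamp(j + m, 1, end)] * table[row, m + 2] for m in -1:2)
end

data  = sin.(1.0:10.0)
table = build_table(1024)
interp(data, table, 4.6)                  # ≈ sin(4.6)
```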
I am not sure what you mean.
If the data is equispaced, then the interpolation, as the name suggests, becomes a convolution operation.
Since you did not use any other package, I was wondering how fast your convolution implementation is.
Have you compared your implementation of 1D convolution to other implementations in the ecosystem?
For instance:
- ImageFiltering.jl.
- DSP.jl.
- Optimizing Direct 1D Convolution Code.
Hi again RoyiAvital.
Here’s what I mean:
The way convolution interpolation works in general is to convolve the data points with a set of predefined polynomials (the kernel). So it is indeed a convolution, but not between two signals: it is a convolution between the data and a kernel designed to make the interpolated signal continuous. I speed this up by precomputing the polynomials at a specified number of discrete points and then using lookup and scaling. This exploits the fact that one of the 'signals' (the kernel) is known beforehand (and constant in a dimensionless sense). The approach would not be possible for a general convolution between two unknown signals. I have not compared my method to those you mention, but I have compared it to Interpolations.jl, and mine is slightly slower (with higher continuity). You are welcome to make this comparison yourself.
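For contrast, this is what each evaluation would look like without the precomputed table: the kernel polynomial is re-evaluated on the fly for every neighbor at every query point (same illustrative Keys kernel as before; the point is the work the lookup table avoids):

```julia
# Direct evaluation: the kernel polynomial is recomputed for every
# neighbor at every query -- exactly the work the lookup table replaces.
function keys_cubic(s::Float64)
    t = abs(s)
    t <= 1 && return 1.5t^3 - 2.5t^2 + 1
    t <  2 && return -0.5t^3 + 2.5t^2 - 4t + 2
    return 0.0
end

function interp_direct(data::Vector{Float64}, x::Float64)
    j = floor(Int, x)                     # sample just left of x
    return sum(data[clamp(j + m, 1, end)] * keys_cubic(x - j - m) for m in -1:2)
end
```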
Hi @NikoBiele
This looks very exciting. It is nice to see more N-dimensional interpolation packages.
One question from me: how well does it handle scattered/sparse data?
Hi marianoarnaiz,
Thank you for your message.
For my package to be applicable, the grid spacing must be uniform within each dimension (but can differ between dimensions), so it cannot handle truly scattered data. My experience from testing on sparsely sampled trigonometric functions (down to 3 samples per period) is that it works well for these (and often better than cubic splines). I've also answered this question in an issue on GitHub: Support for non-uniform grid? (https://github.com/NikoBiele/ConvolutionInterpolations.jl/issues/3).
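So a grid like the following should be fine, even though the two axes use different step sizes (again a sketch: the tuple-of-ranges constructor is an assumption on my part, so check the README for the exact API):

```julia
# Hypothetical 2D sketch: uniform spacing within each dimension, but the
# step sizes differ between dimensions. The tuple-of-ranges constructor
# is an assumption -- consult the package README for the real API.
using ConvolutionInterpolations

xs = 0.0:0.1:1.0            # step 0.1 along the first axis
ys = 0.0:0.25:2.0           # step 0.25 along the second axis
z  = [sin(x) * cos(y) for x in xs, y in ys]

itp = convolution_interpolation((xs, ys), z)
itp(0.33, 1.07)             # evaluate off-grid
```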