Following @Paul_Schrimpf 's example, I ended up making an implementation that just brute-forces through the simplices. This way I could compare performance against KernelInterpolation.jl. I made a gist with it here, and it allows me to do this:
# create test data
test_function = (x) -> sin(x[1]^2 + x[2]^2)
npoints = 1000
points = rand(2,npoints)
values = zeros(npoints)
for i in 1:npoints
    values[i] = test_function(points[:, i])
end
# create interpolant
si = SimplexInterpolant(points);
# evaluate interpolant in regular grid
xs = range(0, 1, length=100)
ys = range(0, 1, length=100)
zs = [0.0 for x in xs, y in ys]
for (i, x) in enumerate(xs)
    for (j, y) in enumerate(ys)
        info = interpolation_info([x, y], si)
        zs[i, j] = compute_simplex_interpolation(info, values)
    end
end
# plot result
using CairoMakie, GeometryBasics
fig, ax, hm = heatmap(xs, ys, zs)
tri = delaunay(points)
wireframe!(ax, GeometryBasics.Mesh(tri))
Colorbar(fig[:, end+1], hm)
save("plot.png", fig)
fig
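For reference, the brute-force search the interpolant relies on can be sketched as follows. This is a minimal sketch under my own assumptions: `find_simplex` and its argument layout are illustrative, not the actual API of the gist. Each triangle is tested by solving for the barycentric coordinates of the query point; the point is inside when all coordinates lie in [0, 1].

```julia
using LinearAlgebra

# Sketch of brute-force simplex location in 2D (names are hypothetical).
# `points` is a 2×npoints matrix of vertex coordinates,
# `simplices` an ntriangles×3 matrix of vertex indices.
function find_simplex(p, points, simplices)
    for s in eachrow(simplices)
        a, b, c = points[:, s[1]], points[:, s[2]], points[:, s[3]]
        T = [b - a  c - a]     # 2×2 edge matrix
        λ = T \ (p - a)        # barycentric coordinates w.r.t. b and c
        if λ[1] >= 0 && λ[2] >= 0 && λ[1] + λ[2] <= 1
            # point is inside; the interpolated value would be the
            # barycentric-weighted combination of the vertex values
            return s, (1 - λ[1] - λ[2], λ[1], λ[2])
        end
    end
    return nothing  # point outside the triangulation
end
```

The interpolated value is then simply `sum(λ[k] * values[s[k]] for k in 1:3)`, which is why the per-query cost is dominated by the search loop rather than the evaluation itself.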
So with this example I did some benchmarking against KernelInterpolation.jl. Using the same data initialized above, I have:
using KernelInterpolation
nodeset = NodeSet(transpose(points))
kernel = PolyharmonicSplineKernel{dim(nodeset)}(1)
ff = test_function.(nodeset)
itp = interpolate(nodeset, ff, kernel)
##
using BenchmarkTools
@benchmark itp(pt) setup=(pt=rand(2))
##
@benchmark begin
info = interpolation_info(pt, $si)
compute_simplex_interpolation(info, $values)
end setup=(pt=rand(2))
which gives me a median runtime of 41 microseconds for KernelInterpolation.jl and 13 microseconds for my interpolation. Both seem to scale more or less linearly with the number of points, which is pretty much what I expect for my simplex interpolation. But an enormous fraction of the time is spent just finding the appropriate simplex; I would not be surprised if I could get a ~2-orders-of-magnitude speedup just by implementing an efficient point-location algorithm.
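One cheap way to speed up point location, short of implementing a full walking algorithm, would be to index the simplex centroids in a KD-tree (e.g. via NearestNeighbors.jl) and only test the few nearest candidate simplices. The sketch below is my own illustration with hypothetical names, not code from the gist; a real implementation would also need a fallback to a full scan when no candidate contains the point.

```julia
using NearestNeighbors, Statistics, LinearAlgebra

# Hypothetical point location via a KD-tree over simplex centroids:
# query the k nearest centroids and test only those simplices,
# replacing the O(nsimplices) scan with roughly O(log n + k) work.
function build_centroid_tree(points, simplices)
    centroids = [vec(mean(points[:, s]; dims=2)) for s in eachrow(simplices)]
    return KDTree(reduce(hcat, centroids))
end

function locate(p, tree, points, simplices; k=8)
    idxs, _ = knn(tree, p, k)
    for i in idxs
        s = simplices[i, :]
        a, b, c = points[:, s[1]], points[:, s[2]], points[:, s[3]]
        λ = [b - a  c - a] \ (p - a)   # barycentric coordinates
        if λ[1] >= -eps() && λ[2] >= -eps() && λ[1] + λ[2] <= 1 + eps()
            return i
        end
    end
    return nothing  # miss: fall back to a full scan in practice
end
```

Since the containing simplex is usually among the handful with the nearest centroids, small values of `k` should suffice for reasonably uniform triangulations.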
