A Julian function interpolation library?

When you want to interpolate a function (match its values on some grid points and approximate it off the grid), you:

  • Generate a matrix of basis-function evaluations on the grid and find the linear combination of basis functions that fits the original function there,
  • Generate a matrix of basis-function evaluations at arbitrary points in the domain and multiply it by the coefficients of that linear combination (a sketch of both steps follows below).
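For concreteness, here is a minimal sketch of both steps with a Chebyshev basis in plain Julia (the names `chebvander`, `nodes`, and `f` are illustrative, not from any package):

```julia
# Chebyshev basis matrix: T_k(x) = cos(k * acos(x)) on [-1, 1]
chebvander(x, n) = [cos(k * acos(xi)) for xi in x, k in 0:n-1]

n     = 9                                         # number of basis functions
nodes = [cos((2i - 1) * pi / (2n)) for i in 1:n]  # Chebyshev nodes
f(x)  = exp(x) * sin(5x)                          # function to interpolate

B = chebvander(nodes, n)      # step 1: basis evaluated on the grid
c = B \ f.(nodes)             # coefficients of the linear combination

xs   = range(-1, 1; length = 201)
fhat = chebvander(xs, n) * c  # step 2: evaluate off the grid

maximum(abs, fhat .- f.(xs))  # interpolation error away from the nodes
```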

My question: is there a package that generates those matrices using a macro (and if not, why not)? Or, going one step further: a package that, after determining the coefficients of the linear combination, generates code with these coefficients hard-coded and then compiles it?
As an example, Horner’s method of evaluating polynomials coded up as a macro is extremely fast; the same could be done for Chebyshev polynomials, splines, etc.
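(In Julia, Base.@evalpoly already does this for plain polynomials.) A stripped-down sketch of the idea, with a hypothetical `@horner` macro that is not from any package:

```julia
# @horner(x, c0, c1, c2) expands at parse time to the fully unrolled
# nested form muladd(x, muladd(x, c2, c1), c0), i.e. c0 + x*(c1 + x*c2).
macro horner(x, coeffs...)
    ex = esc(coeffs[end])
    for i in length(coeffs)-1:-1:1
        ex = :(muladd(t, $ex, $(esc(coeffs[i]))))
    end
    :(let t = $(esc(x)); $ex; end)
end

@horner(2.0, 1, 2, 3)  # 1 + 2*2.0 + 3*2.0^2 == 17.0
```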
For the first variant, it is relevant how fast a compiled for-loop (e.g., see BasisMatrices) runs compared to a macro that has already “expanded” these two loops. For the second variant, it is relevant how fast a BLAS matrix-vector multiplication is compared to its “expanded” version (maybe this could also save some memory allocations?).
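On the loop-vs-expanded question, Base lets you compare the two directly: `evalpoly` runs Horner’s scheme as a runtime loop when the coefficients are in a `Vector`, but unrolls it completely when they are in a `Tuple`. A hypothetical micro-benchmark, assuming BenchmarkTools.jl is installed (`evalpoly` requires Julia ≥ 1.4):

```julia
using BenchmarkTools

cs = rand(10)
x  = 0.3

@btime evalpoly($x, $cs)           # Horner as a runtime loop
@btime evalpoly($x, $(Tuple(cs)))  # Horner unrolled by the compiler
```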

1 Like

Have a look at ApproxFun.jl

4 Likes

ApproxFun.jl is fantastic for global basis functions, but also look at Dierckx.jl and Interpolations.jl, though the latter in particular could use some improvements.

2 Likes

First, generally you should try to avoid generating those matrices in the first place: they are dense (for Chebyshev) and grow exponentially as the dimension increases and you tensor them together. There are tricks that help you avoid that (one is sketched below).
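To sketch one such trick in plain Julia (names illustrative): for a 2-D tensor-product basis you never need to form the Kronecker product, because `vec(Bx * C * By') == kron(By, Bx) * vec(C)`, so two small matrix products replace one huge one:

```julia
using LinearAlgebra

nx, ny = 8, 9
Bx, By = rand(20, nx), rand(30, ny)  # 1-D basis matrices
C = rand(nx, ny)                     # tensor-product coefficients

V = Bx * C * By'                     # cheap route, kron never materialized
vec(V) ≈ kron(By, Bx) * vec(C)       # same result: true
```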

Second, in Julia you usually don’t need macros as part of the API for efficiency (they can help with low-level details, like Base.@evalpoly, but the compiler is getting really good these days).

ApproxFun.jl is a very clever solution to this problem for (mostly) linear problems, but when I last looked at it, I found that the interface was not a good match for what economists usually do (evaluate residuals of heavily nonlinear functional equations).

RJDennis/ChebyshevApprox.jl looks promising, but I have not used it yet. For domain transformations, I wrote a library that might help: tpapp/ContinuousTransformations.jl.

Assuming you want this for econ models: when you have smooth problems, IMO the best approach is to

  1. get a first-order solution using perturbation methods,
  2. use that as a starting point for global methods on a coarse grid,
  3. and use the coarse-grid solution as a starting point on a fine grid.

This is fast and robust unless you have non-smooth problems, for which the discretized methods in the other topic you asked about recently are a better match.

Unfortunately, I am not aware of a toolkit that does everything; you have to assemble the pieces yourself. Currently I am working on an approach to (1) that you can automatically differentiate through; PM me if you are interested. PS: greetings from Vienna! :smile:

2 Likes

For your reference, ApproxFun has a macro implementing Clenshaw’s algorithm for summing Chebyshev series:
https://github.com/JuliaApproximation/ApproxFun.jl/blob/a4141f6562eec07ec85752af355e744bf1a3a5f7/src/LinearAlgebra/clenshaw.jl#L26
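For readers who have not seen it, Clenshaw’s recurrence itself fits in a few lines; here is a plain-function sketch (ApproxFun’s macro generates an optimized variant, so treat the code below as illustration only):

```julia
# Evaluate sum_{k>=0} c[k+1] * T_k(x) for Chebyshev polynomials T_k
# using Clenshaw's backward recurrence.
function clenshaw_cheb(c::AbstractVector, x)
    b1 = b2 = zero(x)
    for k in length(c):-1:2
        b1, b2 = muladd(2x, b1, c[k] - b2), b1
    end
    muladd(x, b1, c[1] - b2)
end

clenshaw_cheb([0.0, 0.0, 1.0], 0.5)  # T_2(0.5) == 2*0.5^2 - 1 == -0.5
```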

1 Like

I tagged SparseInterp, which I use for econ models. It’s fairly fast for medium-dimensional problems (5-12).

4 Likes