When you want to interpolate a function (match its values on some grid points and approximate it off the grid), you:
Generate a matrix of basis-function evaluations on the grid and find the linear combination of basis functions that fits the original function's values there
Generate a matrix of basis-function evaluations at arbitrary points in the domain and multiply it by the coefficients of that linear combination (see the sketch below)
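Concretely, a minimal sketch of those two steps with a Chebyshev basis (the function and degree are picked just for illustration):

```julia
# Minimal sketch of the two steps with a Chebyshev basis on [-1, 1]
n = 10
xgrid = [cos(π * (k + 0.5) / n) for k in 0:n-1]    # Chebyshev nodes
f(x)  = exp(x) * sin(5x)                           # function to interpolate

# Step 1: basis matrix on the grid, solve for the coefficients
Φ = [cos(j * acos(x)) for x in xgrid, j in 0:n-1]  # T_j(x_i)
c = Φ \ f.(xgrid)

# Step 2: basis matrix at arbitrary points, multiply by the coefficients
xnew    = range(-1, 1; length = 101)
Φnew    = [cos(j * acos(x)) for x in xnew, j in 0:n-1]
fapprox = Φnew * c                                 # ≈ f.(xnew)
```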
My question: is there a package that generates those matrices using a macro (and if not, why not)? Or, going one step further: a package that, after determining the coefficients of the linear combination, generates code with these coefficients hard-coded and then compiles it?
As an example, Horner's method for evaluating polynomials, coded up as a macro, is extremely fast; the same could be done for Chebyshev polynomials, splines, etc.
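For instance (a hypothetical sketch, not an existing package), a `@horner` macro could expand a literal coefficient list into nested `muladd` calls at parse time, much like `Base.@evalpoly` does:

```julia
# Hypothetical macro: expands p(x) = c0 + c1*x + c2*x^2 + ... into nested muladds
macro horner(x, coeffs...)
    ex = esc(coeffs[end])
    for c in reverse(coeffs[1:end-1])
        ex = :(muladd(t, $ex, $(esc(c))))
    end
    return :(let t = $(esc(x)); $ex; end)
end

@horner(0.5, 1.0, 2.0, 3.0)   # 1 + 2*0.5 + 3*0.25 == 2.75
```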
For the first variant, the relevant question is how fast a compiled for-loop (e.g. see BasisMatrices) runs compared to a macro that has already "expanded" these loops. For the second variant, it is how fast a BLAS matrix-vector multiplication is compared to its "expanded" version (which might also save some memory allocations).
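A rough way to measure the second comparison might look like the following (sizes and the BenchmarkTools setup are just for illustration):

```julia
using BenchmarkTools, LinearAlgebra

B = rand(10, 10)   # basis matrix: 10 points × 10 basis functions
c = rand(10)       # coefficients of the linear combination

blas_eval(B, c) = B * c   # dispatches to BLAS gemv (plus one allocation)

function loop_eval!(out, B, c)   # plain loops, no allocation
    fill!(out, 0)
    @inbounds for j in axes(B, 2), i in axes(B, 1)
        out[i] += B[i, j] * c[j]
    end
    return out
end

out = similar(c)
@btime blas_eval($B, $c)
@btime loop_eval!($out, $B, $c)
```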
ApproxFun.jl is fantastic for global basis functions, but also look at Dierckx.jl and Interpolations.jl, though especially the latter could use some improvements.
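For illustration, a minimal ApproxFun.jl sketch (the function is arbitrary; the number of coefficients is chosen adaptively):

```julia
using ApproxFun

# adaptive Chebyshev fit on the default domain [-1, 1]
f = Fun(x -> exp(x) * sin(5x), Chebyshev())

f(0.3)            # evaluate off the grid
coefficients(f)   # the Chebyshev coefficients the adaptive fit settled on
```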
First, generally you should try to avoid generating those matrices in the first place. They are dense (for Chebyshev) and grow exponentially as the dimension increases and you tensor them together. There are tricks that help you avoid that (one is sketched below).
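For instance, in 2D with a tensor-product basis (a sketch with arbitrary sizes): never form the Kronecker product explicitly, and instead use the identity `(Bx ⊗ By) * vec(C) == vec(By * C * Bx')`.

```julia
using LinearAlgebra

nx, ny = 20, 30
Bx = rand(nx, nx)    # 1D basis matrix in the x dimension
By = rand(ny, ny)    # 1D basis matrix in the y dimension
C  = rand(ny, nx)    # coefficient array

dense = kron(Bx, By) * vec(C)          # materializes an (nx*ny) × (nx*ny) matrix
smart = vec(By * C * transpose(Bx))    # only ever touches the small 1D matrices

dense ≈ smart   # true
```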
Second, in Julia you usually don’t need macros as part of the API for efficiency (they can help with the low-level details, like Base.@evalpoly — but the compiler is getting really good these days).
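For example (a small illustration), the coefficients can stay in a tuple and the compiler unrolls the evaluation without any user-facing macro:

```julia
coeffs = (1.0, 2.0, 3.0)        # p(x) = 1 + 2x + 3x^2

evalpoly(0.5, coeffs)           # function form; unrolled for tuples (Julia ≥ 1.4)
@evalpoly(0.5, 1.0, 2.0, 3.0)   # macro form; both return 2.75
```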
ApproxFun.jl is a very clever solution to this problem for (mostly) linear problems, but when I last looked at it I found that the interface was not a good match for what economists usually do (evaluate residuals of functional equations which are heavily non-linear).
Assuming you want this for econ models: when you have smooth problems, IMO the best approach is to
1. get a first-order solution using perturbation methods,
2. use that as a starting point for global methods on a coarse grid,
3. use that coarse-grid solution as a starting point on a fine grid (sketched below).
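A minimal sketch of step 3, carrying the coarse-grid solution over as an initial guess via Interpolations.jl (`v_coarse` and `solve_global!` are placeholder names, not from any particular package):

```julia
using Interpolations

grid_coarse = range(0.1, 10.0; length = 20)
grid_fine   = range(0.1, 10.0; length = 200)
v_coarse    = log.(grid_coarse)            # stand-in for the coarse-grid solution

itp     = LinearInterpolation(grid_coarse, v_coarse)
v_guess = itp.(grid_fine)                  # warm start on the fine grid
# solve_global!(v_guess, grid_fine)        # placeholder: refine from the warm start
```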
This is fast and robust, unless you have non-smooth problems, for which the discretized methods in the other topic you asked about recently are a better match.
Unfortunately, I am not aware of a toolkit that does everything; you have to assemble the pieces yourself. Currently I am working on an approach to (1) that you can automatically differentiate through; PM me if you are interested. PS: greetings from Vienna!