Pros/Cons of ApproxFun

I am not too familiar with the ApproxFun “style” of numerical analysis. From what I gather, it relies on Chebyshev or Fourier approximations, which sound really nice since they can have spectral convergence. However, there must be some cons to keep in mind if they aren’t the “standard go-to methods”: what are these cons?

It seems like one of them is an assumption of smoothness: they lose accuracy when a function is not smooth. However, both of these expansions can still approximate non-smooth functions, so are there good benchmarks or heuristics for knowing how well this will do for something like approximating derivatives versus, say, automatic differentiation or numerical differentiation?

When is it the right time to use these methods? And when is it not?

4 Likes

Have a look at the Chebfun docs, which are extensive; ApproxFun is very similar.

http://www.chebfun.org/

4 Likes

Is there a comparison or benchmarks to other methods somewhere in there? I’m not finding one.

2 Likes

Really hard to answer this question in a vacuum; pros/cons depend strongly on what precisely you are doing. Solving a PDE? Finding roots? Interpolating?

3 Likes

That’s why I’m looking for a good read: a paper/blog post/book chapter which goes into some detail.

I’m usually numerically solving ODEs/SDEs/(S)PDEs. But that means root finding and interpolating are common operations.

2 Likes

“Chebyshev and Fourier Spectral Methods” by Boyd is a pretty good textbook on the sort of thing that ApproxFun and Chebfun automate, and can be downloaded free of charge online. Trefethen has a couple of books on this sort of thing, too.

My short answer is that spectral methods of this sort are extremely effective when the functions are smooth, or all singularities can be accounted for analytically, and the domain can be mapped smoothly to a box. In that case the exponential convergence is awesome, and also the fact that it turns everything into a polynomial is great — e.g. things like root-finding are a snap.
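
To make that concrete, here is a minimal sketch of my own (ApproxFun-style syntax, which may differ a bit between versions) of the kind of manipulation that becomes trivial once the function is represented as a polynomial:

using ApproxFun

f = Fun(x -> sin(10x) + exp(-x))   # adaptively builds a Chebyshev approximation on the default domain [-1,1]
roots(f)                           # roots found directly from the polynomial approximation
f'                                 # the derivative is just another Fun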

For complicated domains or problems filled with oddly arranged discontinuities, especially in 3d, however, it can be really hard to get spectral (faster than polynomial) convergence, and once that happens schemes based on generic grids/meshes are often more competitive. (Sheehan and Alex have done a great job in extending the power of spectral methods, however, especially in 2d, and they can now handle a surprising range of problems.)

For example, I do a lot of work on 3d electromagnetic scattering problems off of materials arranged into complicated shapes (e.g. imagine computing the radar cross-section of an airplane). Because there are discontinuities in the electric field at every material interface, not to mention corner singularities all over the place, it is usually not practical to get spectral convergence in these problems.

Another drawback of spectral methods is that you can’t easily obtain the kind of nonuniform spatial resolution that you can get with adaptive mesh refinement in FEM. So, if you need a lot more spatial resolution in some regions than in others, they aren’t as attractive. (There are ways to do it, but they aren’t automated AFAIK.) An example that I’ve encountered is a scattering problem involving two nearly touching spheres: even though spheres are smooth high-symmetry shapes that admit a natural spectral basis in spherical harmonics (“Mie scattering”), when the spheres are almost touching the uniform angular resolution of spherical harmonics kills you.

12 Likes

Thanks! That’s just the kind of overview with references I was looking for.

Taking a quick look at Chapter 9 of the Boyd book: their relation to ODEs/PDEs is that they give a different way to discretize space, one which converges exponentially (in some problems), and then you can throw this discretization into standard ODE timestepping routines as an alternative to FEM/FDM, for certain special (but growing) classes of PDEs (with fairly “simple” boundaries) for which they are highly optimized?

1 Like

It’s more a question of basis functions than discretization, but yes, you can then use standard ODE integration schemes like Runge-Kutta. And the key question is not so much the PDE as the shape of the domain and the smoothness/regularity of the PDE coefficients, boundary conditions, and solutions.
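
For instance, here is a tiny self-contained sketch of that idea of my own, using plain FFTs rather than ApproxFun and assuming a periodic domain so a Fourier basis applies: discretize u_t = u_xx spectrally in space, then advance the coefficients with classical RK4.

using FFTW

N = 64
x = 2π*(0:N-1)/N
k = [0:N÷2; -N÷2+1:-1]               # Fourier wavenumbers
û = fft(exp.(-10 .* (x .- π).^2))    # initial condition, transformed to coefficient space
rhs(û) = -(k.^2) .* û                # u_t = u_xx is diagonal in Fourier space
h = 1e-3
for _ in 1:1000                      # classical RK4 time stepping of the coefficients
    k1 = rhs(û); k2 = rhs(û .+ h/2 .* k1)
    k3 = rhs(û .+ h/2 .* k2); k4 = rhs(û .+ h .* k3)
    û .+= h/6 .* (k1 .+ 2 .* k2 .+ 2 .* k3 .+ k4)
end
u = real(ifft(û))                    # back to physical space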

1 Like

Boyd is more collocation-focused, whereas ApproxFun poses equations in coefficient space to get structured (e.g. banded) operators and linear complexity.

Combining ApproxFun’s approach with traditional time-stepping code is possible, but hasn’t been done yet. There’s a placeholder package ApproxFun/SpectralTimeStepping.jl that is meant to be the home for that. It currently has some simple time steppers built on top of ApproxFun’s backslash.

As to why they are not “standard”: (1) until recently people focused on collocation, which is slow and unstable; (2) history; (3) effective extension to multivariate PDEs on complicated geometries is still a work in progress.
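
To illustrate the coefficient-space point, here is a small sketch (the exact syntax may vary between ApproxFun versions): a finite section of the derivative operator is banded rather than dense.

using ApproxFun

D = Derivative(Chebyshev())   # derivative operator acting on Chebyshev coefficients
D[1:10, 1:10]                 # a finite section: banded, since differentiation maps Chebyshev to an ultraspherical basis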

4 Likes

Looking at your SpectralTimeStepping.jl codes, the operators that ApproxFun makes end up working a lot like functions, right? I’m wondering what exactly would have to be done to support solving PDEs defined by ApproxFun in the DiffEq common solve interface, do you know? Maybe we should take this to an issue.

1 Like

I started the following issue:

1 Like

This is for constant-coefficient (or at least piecewise-constant) PDEs only, right? Presumably you still have to transform to coordinate space, requiring O(N log N) FFT-based operators and iterative solvers, for situations where the PDE coefficients vary continuously in space, e.g. for things like the Schrödinger operator -∇² + V(x) with a continuously varying potential V(x), or for nonlinear equations like reaction-diffusion, i.e. any situation where you need to multiply two “ApproxFuns” together.

1 Like

Variable coefficients are fine: the operators are still banded (ODEs) or block banded (PDEs) in coefficient space, where the bandwidth is dictated by the regularity of the coefficient. In ApproxFun v0.4, the following solves variable coefficient Helmholtz:

d = Interval()^2                   # the square [-1,1]²
B = neumann(d)                     # Neumann conditions on the four sides
Δ = Laplacian(d)
x, y = Fun(d)                      # the coordinate functions on d
V = cos.(x*y)                      # smooth variable coefficient

k = 10.0
@time u = linsolve([B; Δ+k^2*V], [ones(4); 0.0]; tolerance=1E-5)   # Neumann data 1 on the four sides, (Δ + k²V)u = 0 inside
ApproxFun.plot(u)

Or, here is advection-diffusion on a square solved with Backward-Euler:

d = Interval()^2
Dx = Derivative(d, [1,0])
Dy = Derivative(d, [0,1])
B = Dirichlet(d)                   # Dirichlet boundary conditions on the four sides

L = 0.01Δ + x*Dx + y*Dy            # advection-diffusion operator (Δ, x, y as in the previous example)

h = 0.001
QR = qrfact([B; I-h*L])            # factor the backward-Euler operator once and reuse it every step

u0 = Fun((x,y)->(1-x^2)*(1-y^2)*exp(-x^2-y^2), domainspace(QR))    # initial condition, zero on the boundary

u = Array(typeof(u0), 100)         # older Julia syntax; Vector{typeof(u0)}(undef, 100) in current Julia
u[1] = u0

@time for k=1:99
    u[k+1] = linsolve(QR, [zeros(∂(d)); u[k]]; tolerance=1E-10)    # solve (I-hL)uⁿ⁺¹ = uⁿ with zero boundary data
    u[k+1] = chop(u[k+1], 1E-10)   # make sure the number of coefficients doesn't grow out of control
end

ApproxFun.plot(u[100])

2 Likes

FYI, you’ll notice the syntax appears to be inconsistent between neumann and Dirichlet. One of the reasons why I haven’t put much effort into the documentation is that I don’t want to encourage users (who are not also contributors to ApproxFun) until the syntax has stabilized, which it should do over the next year.

1 Like

But if the variable coefficient has regularity similar to the solution (i.e. requires a similar-degree Chebyshev polynomial), then the matrix in coefficient space is nearly full, no? So that generically you will still need the O(N log N) approach of switching back and forth between coefficient and real space.

1 Like

Yes, for nonlinear ODEs you lose O(N) because the variable coefficient has the same length as the solution, in which case you are back to the same complexity as collocation. But for time stepping you can get around this using a splitting method: handle the nonlinear part explicitly (using transforms as you said) and the linear part implicitly.
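
A toy sketch of that splitting, with plain FFTs on a periodic domain rather than ApproxFun: for u_t = ε u_xx + u - u³, the stiff diffusion is diagonal in coefficient space and is treated implicitly, while the nonlinearity is evaluated pointwise in physical space via transforms.

using FFTW

N, ε, h = 128, 0.01, 1e-2
x = 2π*(0:N-1)/N
k = [0:N÷2; -N÷2+1:-1]                            # Fourier wavenumbers
û = fft(0.1 .* sin.(x))                           # initial condition, in coefficient space
for _ in 1:500                                    # IMEX (implicit-explicit) Euler steps
    u  = real(ifft(û))                            # transform to physical space: O(N log N)
    Nu = fft(u .- u.^3)                           # nonlinear reaction term evaluated pointwise, back to coefficients
    û .= (û .+ h .* Nu) ./ (1 .+ h*ε .* k.^2)     # stiff linear diffusion handled implicitly (diagonal in Fourier space)
end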

1 Like

That being said, that’s the complexity for direct solvers. But for “traditional” methods one usually resorts to problem-dependent iterative solvers, so there the complexity is dictated by the cost of applying the operator. Spectral methods always have O(n log n) application of operators, so it’s roughly the same complexity as standard methods, though one would need suitable preconditioners, something I don’t know anything about.

2 Likes

The nice thing about preconditioning a spectral method, in my experience, is that the matrix typically becomes diagonally dominant at high frequencies (in coefficient space), so a simple Jacobi (diagonal) or block-Jacobi preconditioner can sometimes be very effective. (e.g. this is typical for density functional theory, where one often uses a Fourier basis to solve the nonlinear eigenproblem on periodic domains.)
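
As a toy illustration of that diagonal dominance (my own sketch with a made-up smooth potential, building a dense matrix only for demonstration; a real code would apply the operator via FFTs and never form it):

using LinearAlgebra, FFTW

N = 64
x = 2π*(0:N-1)/N
V = 1 .+ 0.3 .* cos.(x)                             # made-up smooth, positive potential
k = [0:N÷2; -N÷2+1:-1]                              # Fourier wavenumbers
F = fft(Matrix{ComplexF64}(I, N, N), 1) ./ sqrt(N)  # unitary DFT matrix
A = Diagonal(float(k.^2)) + F * Diagonal(V) * F'    # -d²/dx² + V(x) in the Fourier basis
M = Diagonal(diag(A))                               # Jacobi preconditioner: the diagonal k² + mean(V) dominates at high k
cond(A), cond(M \ A)                                # the preconditioned system is far better conditioned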

1 Like