### TL;DR

Indirect (fixed-length) ApproxFun works, but is slightly wasteful because it grows the coefficient vector and then chops it back. Fully adaptive ApproxFun works in theory, but getting it to work well in practice is a research project that’s at least a year (probably more) off, because right now the spatial adaptivity is working against me and not with me (though it definitely has the potential to be “the best”). Specializing methods for spectral discretizations is easy in the DiffEq framework, and the methods that you show in that paper as counterexamples to the possibility of this are actually really straightforward applications of the refined ODE interface. Some examples already exist to show that it will indeed work well. But there’s still work to do.

If you want to test something right now, test the fixed-length version with, I don’t know, `Tsit5()`, `radau()`, or `CVODE_BDF()`. Not going to guarantee it’s good, but it should work.

### Setup

Well, I might as well share my experimental code and watch it get stomped. I’ll probably be releasing the alpha soon.

There are two setups.

### Indirect

One indirectly uses ApproxFun by saving out the coefficients each time. This setup is nice because it can be used with any of the common interface ODE solvers, so you can just plop it from OrdinaryDiffEq.jl into Sundials.jl or whatnot. It looks like this:

```
using DiffEqBase, OrdinaryDiffEq, Sundials, ApproxFun, DiffEqApproxFun
using ODEInterfaceDiffEq
using Base.Test
S=Fourier()
u0=Fun(θ->cos(cos(θ-0.1))-cos(cos(0-0.1)),S)
c=Fun(cos,S)
ode_prob = ODEProblem((t,u)->u''+(c+1)*u',u0,(0.,1.))
prob = ApproxFunProblem(ode_prob)
@time sol=solve(prob,Tsit5())
sol(0.5) # solution at t=0.5
sol(0.5,0.2) # solution at t=0.5, θ=0.2
@time sol=solve(prob,CVODE_BDF())
@time sol=solve(prob,radau())
```

What that’s doing is using the number of coefficients from the initial condition to set the length of the vector, and then it solves the ODE defined by the vector of that length by converting it into the appropriate ApproxFun each time `f` is called. That part is cheap, but it’s slightly wasteful because after the `f` step, it may have grown the ApproxFun (for accuracy), and then we just chop those extra coefficients right off. But still, this seems pretty fast for what it is, and it’s robust and useful… so it’s okay for the time being.
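To make the grow-then-chop idea concrete, here is a minimal sketch of what such a wrapper’s inner loop could look like. The names (`f_wrapped`, `N`) are illustrative assumptions, not the actual DiffEqApproxFun internals:

```julia
using ApproxFun

S = Fourier()
N = 36  # fixed coefficient length, taken from the initial condition

# Hypothetical wrapped RHS: vector in, vector out, with a Fun in the middle.
function f_wrapped(t, u_vec)
    u = Fun(S, u_vec)            # rebuild the Fun from the raw coefficients
    du = u'' + Fun(cos, S) * u'  # ApproxFun may grow the coefficient vector here
    c = coefficients(du)
    # Chop or pad back to the fixed length N: the adaptive growth is discarded.
    length(c) >= N ? c[1:N] : [c; zeros(N - length(c))]
end
```

The ODE solver only ever sees a length-`N` vector, which is what lets any common-interface solver (OrdinaryDiffEq.jl, Sundials.jl, …) consume it.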

It’s also able to handle boundary conditions which are not simply encoded in the space, via a projection callback. This does pretty well.
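A projection callback can be sketched like this, using the current DiffEq callback syntax (the problem and the projection here are toy assumptions, not the actual DiffEqApproxFun code; it just exploits that in a Chebyshev-like basis the boundary value is a linear functional of the coefficients, e.g. u(1) = Σₖ aₖ):

```julia
using OrdinaryDiffEq

condition(u, t, integrator) = true     # fire after every accepted step
function enforce_bc!(integrator)
    defect = sum(integrator.u)         # hypothetical boundary value u(1)
    integrator.u[end] -= defect        # absorb the defect: project onto u(1) = 0
end
cb = DiscreteCallback(condition, enforce_bc!)

# Toy coefficient ODE whose initial data satisfies the boundary condition.
prob = ODEProblem((du, u, p, t) -> (du .= -u), [1.0, -0.5, -0.5], (0.0, 1.0))
sol = solve(prob, Tsit5(), callback=cb)
```

After every step the state is pushed back onto the constraint set, so the boundary condition holds exactly instead of just approximately.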

It does have a bug, though: while you can in theory choose the length of the coefficient vector via `ApproxFunProblem(ode_prob,N)`, in practice that makes it instantly blow up due to some bug I haven’t worked out… so this mode is unusable right now.

### Direct

Now this is the mode that sounds interesting. This is directly using an ApproxFun `Fun` type inside of OrdinaryDiffEq.jl. Does it work? Yes! Does it work well? No. There seem to be some fundamental issues with the approach right now. `Fun` types are adaptively sized, which works great in many cases, but when left unchecked the coefficient vector just grows and grows here. It’s like interval arithmetic, where over time it blows up. So even on relatively simple problems:

```
using DiffEqBase, OrdinaryDiffEq, Sundials, ApproxFun, DiffEqApproxFun
using ODE, ODEInterfaceDiffEq
using Base.Test
S=Fourier()
u0=Fun(θ->cos(cos(θ-0.1))-cos(cos(0-0.1)),S)
c=Fun(cos,S)
prob = ODEProblem((t,u)->u''+(c+1)*u',u0,(0.,1.))
@time sol=solve(prob,Euler(),dt=1/1000)
```

you’ll end up with about 2000 coefficients for Euler by `t=0.4` and get CFL instability (for comparison: the indirect ApproxFun version uses a fixed vector of 36 coefficients and solves this just fine). Then this has some issues plugging into system tooling: autodifferentiation to get the Jacobian needs a separate path. With that issue worked around by hand, implicit methods technically work, but the coefficient blowup still makes them difficult, and the implicit solve can make the coefficients blow up even faster. Maybe this will do better when the linear solve can be swapped out in NLsolve, because then it would be easier to `chop` right after the linear solve.
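For reference, the `chop` operation in question is conceptually simple: drop trailing coefficients that have fallen below a tolerance. A self-contained sketch on a raw coefficient vector (illustrative; ApproxFun’s own `chop` is the real thing):

```julia
# Drop trailing coefficients below `tol` — this is what keeps a spectral
# representation from growing without bound between steps.
function chopcoeffs(c::Vector{Float64}, tol=1e-12)
    n = length(c)
    while n > 1 && abs(c[n]) < tol
        n -= 1
    end
    c[1:n]
end

chopcoeffs([1.0, 0.5, 1e-15, 2e-14])  # → [1.0, 0.5]
```

Doing this right after the linear solve (rather than only after the full step) is what would stop the implicit solve from amplifying spurious high-order coefficients.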

### Other “Research” In This

There’s also SpectralTimeStepping.jl: it directly defines some common methods for ApproxFun `Fun` types. To keep the coefficients from growing too fast, it aggressively `chop`s and uses multistep methods to reduce the number of function calls. The indirect approach seems to do better, though.

### Intermediate Conclusion

Naive usage of adaptive space, with and without adaptive time, doesn’t work well. I am sure we can refine this approach much more, in which case it could be really, really good. But that’s likely a full research project. The linear operator approaches from SpectralTimeStepping.jl can also be taken in OrdinaryDiffEq.jl (I’ll explain that in a sec), and that might do much better.

You can just directly write the timestepping method to match the spatial discretization. Here are a few examples that already have work going towards them:

http://docs.juliadiffeq.org/latest/types/refined_ode_types.html

Actually, the method in the paper that you linked is of the Linear-Nonlinear Form:

http://docs.juliadiffeq.org/latest/solvers/refined_ode_solve.html#Linear-Nonlinear-(LNL)-ODE-1

It’s a Crank–Nicolson linear + Adams–Bashforth nonlinear method (with a constant term added, but that’s simple to add) and can be directly integrated using that method when it’s complete. Then the second method they list is a linear Crank–Nicolson + nonlinear RK. Why not? Same form of equation, so it can get an implementation.
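For the split problem u′ = A·u + N(u), a single step of that Crank–Nicolson (linear) + 2nd-order Adams–Bashforth (nonlinear) combination can be sketched as follows (dense matrices for clarity; the function name is mine, not a DiffEq API):

```julia
using LinearAlgebra  # for the identity operator I

# One CNAB2 step for u' = A*u + N(u):
#   (I - dt/2*A) u_{n+1} = (I + dt/2*A) u_n + dt*(3/2*N(u_n) - 1/2*N(u_{n-1}))
function cnab2_step(A, N, u, uprev, dt)
    rhs = (I + dt/2*A)*u + dt*(3/2*N(u) - 1/2*N(uprev))
    (I - dt/2*A) \ rhs   # stiff linear part treated implicitly
end
```

The linear (stiff, spectral-operator) part is treated implicitly while the nonlinear part is cheap and explicit, which is exactly the structure the refined ODE interface exposes.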

If it’s put into DiffEq, then it takes about 2–3 lines to get event handling and adaptivity going. I don’t see a paper which does this kind of method with adaptive timestepping, so that sounds like an interesting thing to do, since it could lead to a quick publication.

Whether it’s good for a specific problem or not depends on the application and further analysis, but just because it hasn’t been abstracted like that before doesn’t mean it can’t be: these “special PDE methods” are written for the problem, but they apply in a much greater domain if one instead thinks of them as methods on split ODEs with some linear and some nonlinear terms. So thanks for giving me some examples to show exactly how it does fit into DiffEq.

As for how far along that stuff is, it’s already seeing results. The docs say that none of the exponential integrators exist yet, but they do: this stuff just isn’t released because the ecosystem needs an `expmv` for this to be efficient on large PDEs (for small PDEs, it works great already). The issue is that `expm` is dense even if `A` is sparse, so you need `expmv` to compute the action of the exponential, or a very good lazy exponentiation for specific discretizations.
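The point of `expmv` is to approximate `exp(dt*A)*v` without ever forming the dense `exp(dt*A)`. A minimal Krylov (Arnoldi) sketch of the idea, with no error control (illustrative only; a production implementation would adapt the subspace size and step):

```julia
using LinearAlgebra  # norm, dot, and the small dense matrix exponential

# Approximate exp(dt*A)*v via an m-dimensional Krylov subspace:
# only a tiny j×j matrix exponential is ever formed, so A can stay sparse/lazy.
function krylov_expmv(A, v, dt; m=30)
    n = length(v)
    m = min(m, n)
    β = norm(v)
    V = zeros(n, m + 1)   # orthonormal Krylov basis
    H = zeros(m + 1, m)   # Hessenberg projection of A
    V[:, 1] = v / β
    j = 0
    while j < m
        j += 1
        w = A * V[:, j]
        for i in 1:j                      # modified Gram–Schmidt
            H[i, j] = dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        end
        H[j + 1, j] = norm(w)
        H[j + 1, j] < 1e-12 && break      # happy breakdown: subspace is invariant
        V[:, j + 1] = w / H[j + 1, j]
    end
    # exp(dt*A)*v ≈ β * V_j * exp(dt*H_j) * e1
    β * (V[:, 1:j] * exp(dt * H[1:j, 1:j])[:, 1])
end
```

With a good `expmv` like this (plus error control), the exponential integrators become viable on large sparse or lazily-applied discretizations.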

### Conclusion

See the TL;DR. Getting ApproxFun’s full adaptivity working for us instead of against us is more difficult and will take time, but I have high hopes for the future. However, that statement is false: DiffEq has the language to implement those specialized PDE algorithms as “refined ODE problems,” as shown above. I just need to clone myself, or find a contributor who’s interested in helping with this part. Or just someone to help implement algorithms and finish up `expmv` and other lower-level ecosystem things. Whichever is easier.

That’s a pretty thorough overview of the current state.

#### Edit

Pinging @dlfivefifty on this update. Maybe when we meet in person we can brainstorm a plan for how to whip this into shape.