I am interested in simulating delayed reaction-diffusion equations of the form

$$u_t = \Delta u + f(u, u(t - \tau))$$

in a 2-D spatial domain of modest size (10,000 equations) with a non-stiff nonlinearity (for now). My current progress involves optimizing and benchmarking the ODE function and the algorithm/settings choices for the undelayed problem, for which splitting methods seem well-suited at certain tolerances and integration lengths. As I understand it, the extension to a `DDEProblem` more or less just involves rewriting your ODE function to incorporate the history function; `MethodOfSteps` then wraps some adaptive time-stepping integrator and makes sure that the integration resolves the discontinuities well. I therefore expect at least some of the optimization for the non-delayed case to transfer to the delayed case.
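Concretely, my mental model of that rewrite is just a signature change in the in-place RHS, along these lines (the 100×100 grid, homogeneous Dirichlet boundaries, and Hutchinson-style placeholder nonlinearity are illustrative assumptions, not my actual problem):

```julia
# Undelayed RHS signature: rhs!(du, u, p, t)
# The delayed RHS just gains the history function h:
function dde_rhs!(du, u, h, p, t)
    τ, N, Δx = p
    ulag = h(p, t - τ)                 # lagged state, supplied by the integrator
    U    = reshape(u, N, N)
    Ulag = reshape(ulag, N, N)
    dU   = reshape(du, N, N)
    @inbounds for j in 1:N, i in 1:N
        # 5-point Laplacian, homogeneous Dirichlet boundaries
        lap = -4U[i, j]
        lap += i > 1 ? U[i - 1, j] : 0.0
        lap += i < N ? U[i + 1, j] : 0.0
        lap += j > 1 ? U[i, j - 1] : 0.0
        lap += j < N ? U[i, j + 1] : 0.0
        # placeholder f(u, u(t - τ)) = u * (1 - u(t - τ))
        dU[i, j] = lap / Δx^2 + U[i, j] * (1 - Ulag[i, j])
    end
    return nothing
end
```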
Of course, I can probably get away with just using a fully implicit method in `MethodOfSteps`, but I'd at least like to know if it's possible to use a splitting method within `MethodOfSteps`. I will eventually be using a variant of this problem in some Bayesian inference, so trying to eke out as much performance as possible is preferable. As I currently understand it, to solve a DDE you create a `DDEProblem` from a function of the state `u` and the history function `h`, and then the solver algorithm is just `MethodOfSteps(ode_alg_of_choice())`.
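To check that I have this pattern right, here is a minimal self-contained sketch on a toy scalar delay equation (Hutchinson's delayed logistic; the equation, constant history, and parameters are placeholders):

```julia
using DifferentialEquations

# Stand-in delay equation: du/dt = u(t) * (1 - u(t - τ))
function f!(du, u, h, p, t)
    τ = p
    du[1] = u[1] * (1 - h(p, t - τ)[1])   # h queries the past solution
    return nothing
end

τ  = 1.0
u0 = [0.1]
h(p, t) = [0.1]                            # history for t <= 0

prob = DDEProblem(f!, u0, h, (0.0, 20.0), τ; constant_lags = [τ])

# MethodOfSteps wraps an ordinary ODE algorithm of choice, e.g. Tsit5
sol = solve(prob, MethodOfSteps(Tsit5()))
```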
For a `SplitODEProblem`, I have to give it two functions (or a linear operator plus a function) representing the stiff and non-stiff RHS terms, and then call a split algorithm in `solve`, e.g. as sketched below. It's not clear to me, from my own understanding or from browsing the many examples in the DE.jl documentation, how these two features can interface (if at all). I'm not at the stage yet to provide an MWE of the delayed problem, but I wanted to know if there is some general framework for stitching these two features together for future use.
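For reference, the kind of (undelayed) split formulation I have in mind looks something like this, with the discrete Laplacian wrapped as a `MatrixOperator` so a split algorithm can treat it as the stiff linear part (the sparse Dirichlet Laplacian assembly and the `KenCarp4` choice are just illustrative assumptions):

```julia
using OrdinaryDiffEq, SciMLOperators, SparseArrays, LinearAlgebra

N  = 100
Δx = 1.0 / (N + 1)

# Sparse 2-D Dirichlet Laplacian via a Kronecker sum of 1-D stencils
T = spdiagm(-1 => ones(N - 1), 0 => -2ones(N), 1 => ones(N - 1)) / Δx^2
A = kron(sparse(I, N, N), T) + kron(T, sparse(I, N, N))

# Stiff linear part as an operator, non-stiff nonlinearity as a plain function
L = MatrixOperator(A)
f_nl!(du, u, p, t) = (du .= u .* (1 .- u))

u0   = rand(N^2)
prob = SplitODEProblem(L, f_nl!, u0, (0.0, 1.0))

# An IMEX method: implicit on L, explicit on f_nl!
sol = solve(prob, KenCarp4())
```

So, concretely, the question is whether a `MethodOfSteps`-wrapped split algorithm can consume a split RHS whose pieces also take the history function `h`; I don't see where `h` would enter this construction.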
Understanding this potential code interface is important for future extensions of this problem, such as:
- Introduction of a stiff, but easily separable, component to the nonlinearity
- Galerkin projection onto a lower-dimensional subspace (the problem size decreases drastically, but the matrices/tensors become dense)
- Increasing tolerances to allow for faster short-time integration