I’m new to Julia, but looking to port a neural mass network model with both delays and (linear additive) stochastic forcing, which I’ve solved elsewhere by adding delays by hand to typical SDE methods. Looking through the DiffEq docs, I haven’t seen how the two would be combined, since one has to choose a problem type.
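For concreteness, what I’d hope for is something that combines the `DDEProblem` and `SDEProblem` interfaces, roughly along these lines (the `SDDEProblem` name, the `StochasticDelayDiffEq` package, and the exact signatures are my guess from reading the DDE and SDE docs side by side, so treat this as a sketch):

```julia
using StochasticDelayDiffEq  # assumed package providing SDDEProblem

τ = 1.0
# Drift: delayed linear feedback, reading the lagged state from the history h
f(u, h, p, t) = -h(p, t - τ)
# Diffusion: linear additive noise
g(u, h, p, t) = 0.1
# History function for t ≤ t0
h(p, t) = 1.0

prob = SDDEProblem(f, g, 1.0, h, (0.0, 10.0); constant_lags = [τ])
sol = solve(prob, EM(), dt = 0.01)
```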
Any update on this? I just looked through the docs and there doesn’t seem to be anything there. To be honest, I’m not looking for anything too fancy: we currently use a Heun step with a ring buffer to implement the delays. We do parameter sweeps and Bayesian inference with this, so I’m also looking at how to get GPU support and gradients for the ML packages. I really can’t tell from the docs whether I should try to shoehorn this into DiffEq or grow it myself; if you have a gut reaction to share, I’d be interested.
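For reference, the kind of scheme we currently use looks roughly like this (an illustrative single-variable, single-lag sketch, not our production code):

```julia
# Stochastic Heun step for du = f(u, u_delayed) dt + σ dW, with the delayed
# state read from a ring buffer of the last `ndelay` solution values.
function integrate(f, u0, σ, dt, nsteps, ndelay)
    buf = fill(u0, ndelay)   # ring buffer; initially filled with the history value
    head = 1                 # index of the oldest entry = state at t - ndelay*dt
    u = u0
    us = Vector{typeof(u0)}(undef, nsteps)
    for n in 1:nsteps
        ud = buf[head]                        # delayed state
        dW = sqrt(dt) * randn()
        k1 = f(u, ud)
        up = u + k1 * dt + σ * dW             # Euler–Maruyama predictor
        k2 = f(up, ud)
        u = u + 0.5 * (k1 + k2) * dt + σ * dW # Heun corrector (additive noise)
        buf[head] = u                         # overwrite oldest with newest
        head = head == ndelay ? 1 : head + 1
        us[n] = u
    end
    return us
end
```

The same `dW` is reused in predictor and corrector, which is the usual stochastic Heun for additive noise.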
Reading through the code, I noticed that support for multiple lags resembles other solvers, in that all delayed states (or a subset specified by indices) are returned for each lag. I have a particular problem type which is structured as a network, where each node is coupled with a constant lag to other nodes, and the lags are specific to each pair of nodes. In other words, for N nodes I would have N×N lags. The first column would be the lags required for the first node, the second column the lags for the second node, and so on.
In solvers like dde23 this requires N times more memory than a custom solution, but my reading of this code is that the delayed states are constructed on demand. Does this mean that in the functions defining the problem, instead of calling the history object with all lags, I could call it once per column with `idxs` set to the node index? Or should I try to implement support for this at a lower level, such as in the interpolant methods?
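Concretely, for the drift of node `i` I have in mind something like the following (assuming the history function accepts the `idxs` keyword as described in the DDE docs; `W` and `τ` are my own illustrative parameter names):

```julia
# Drift for an N-node network with pairwise constant lags:
# node i receives the state of node j delayed by τ[j, i].
function f!(du, u, h, p, t)
    W, τ = p                  # W: N×N coupling weights, τ: N×N lag matrix
    N = length(u)
    for i in 1:N
        acc = zero(eltype(u))
        for j in 1:N
            # pull only component j of the delayed state, one lag at a time
            acc += W[j, i] * h(p, t - τ[j, i]; idxs = j)
        end
        du[i] = -u[i] + acc
    end
end
```

That is, N scalar history calls per node rather than one N-vector history call per lag, which is what would avoid the N-fold memory blowup.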