Avoiding high plotting densities when plotting solutions to SDEs with small time steps

I am looking at a biochemical reaction network and how it behaves under noise. The code looks something like:

using DifferentialEquations, Plots
gr()
...
prob_sde = SDEProblem(model, u0, (0., 1000.))
sol = solve(prob_sde, ImplicitEM(), dt=1//2^(7), callback=cb)
plot(sol)

(the callback ensures that the solution does not go into negative values)
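
The callback itself is omitted above; one common pattern for this (an illustrative sketch, not necessarily my actual cb) is a DiscreteCallback that clamps any negative component back to zero after a step:

using DifferentialEquations
condition(u, t, integrator) = any(x -> x < 0, u)                  # fire whenever a component goes negative
affect!(integrator) = (integrator.u .= max.(integrator.u, 0.0))   # clamp negatives back to zero
cb = DiscreteCallback(condition, affect!, save_positions=(false, false))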

I have a term which I use to linearly scale the amount of noise. One problem I have had is that if I make the noise big enough, I get a warning:

WARNING: Newton steps could not converge and algorithm is not adaptive. Use a lower dt.
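
The noise scaling is along these lines (a simplified sketch; the real model is omitted, and the drift, diffusion, and scaling parameter η here are purely illustrative):

using DifferentialEquations
η = 2.0                                      # illustrative noise-scaling parameter
drift!(du, u, p, t) = (du .= -u)             # placeholder drift
diffusion!(du, u, p, t) = (du .= η .* u)     # diffusion scaled linearly by η
u0 = ones(5)
prob_sde = SDEProblem(drift!, diffusion!, u0, (0.0, 1000.0))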

However, when I decrease the dt to 1//2^(10) or below, things get very sluggish and Jupyter starts to crash. I have identified that this is due to the plotting (if I remove it, things work). With dt=1//2^(10) and a time interval (0.0, 1000.0), that means about a million time points (multiply that by 5, the number of variables of the system), so maybe I should not be surprised that I am having problems?

What is the correct approach to my issue? For the deterministic solution I can use the plotdensity flag, but not here. Maybe that is with good reason, as interpolations are not as obvious when I have a stochastic problem? I have a feeling that I might have to be more careful here than in the deterministic case… Does anyone have any input? Surely there should be some way to get away with small time steps and still have something useful to plot?

Why not? The plotdensity flag still works.
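
For instance, something like:

plot(sol, plotdensity=10_000)   # cap the number of interpolated plot points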

Yeah, adaptivity is fundamentally required for a good implicit algorithm. I address this in a new paper for the case of affine noise, so message me if your problem is like that. It’s about 6000x faster on problems where this occurs with the standard methods, but it (currently) only applies to affine noise. I still need a solution for non-diagonal noise, though. In theory it wouldn’t be too difficult to add an error estimate to these methods if you can think of one that would theoretically work well.

A good practical way to handle this is to use saveat, which will also reduce the saving load. You likely don’t need to save at every time step if it’s this dense, so you might as well do something like saveat=0.1 and get 10,000 points, which is still quite finely spaced.
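
In the code above, that would be something like (same solver and callback as before):

sol = solve(prob_sde, ImplicitEM(), dt=1//2^(10), saveat=0.1, callback=cb)
plot(sol)   # only the ~10,000 saved points are stored and plotted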


Thank you, saveat works as expected and solves the problem. Cheers.

A 6000x speedup sounds like it could be useful… I’d love to hear updates on this.

Finally, regarding plotdensity. I use

@time plot(sol, plotdensity=x)

and then I have used various values of x (1000, 100, etc.). While the seconds output from @time varies a little (though it is about the same, and varies in the same way, if I re-do it with the same plotdensity), the two numbers that follow (e.g. 999.65 k allocations: 57.115 MiB) are always the same, no matter what the plot density. The last value, the % gc time (e.g. 17.49%), also changes slightly. However, when I used your suggested saveat, these numbers went down as expected. Also, for very low values of x (10, 1) I see no actual visual difference in the plot, even though I would expect one, and do see one, for the deterministic solution.

If you have a huge number of points, then the interpolation can be costly because it needs to search for the correct interval in the large time arrays.

@Gaussia I wanted to make a note that the newest release of StochasticDiffEq.jl upgrades these implicit algorithms to have adaptive time stepping. This is described in the release notes:

http://juliadiffeq.org/2018/03/31/AdaptiveLowSDE.html

As I note in the post, there’s some tweaking that can be done to reduce the overall number of steps quite a bit, but it exists. One thing to note from my research (which, like the affine and diagonal methods, should be on arXiv soon, which will then lead to their release) is that these current implicit methods are not stable for high noise, so

make the noise big enough

ImplicitEM, ImplicitEulerHeun, and ImplicitRKMil will still have trouble on these problems since they are drift-implicit. There are truncated implicit EMs I can try, or doubly implicit algorithms, and now that I know of ways to make these adaptive, I think that would be a good thing to try next. So your problem should already be handled much better, but I think there’s an even better way to do this. Stay tuned.
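
In terms of the earlier code, trying the new adaptive stepping would look something like this (a sketch; dt now just sets the initial step, and adaptive=true is passed explicitly in case it is not the default in your version):

sol = solve(prob_sde, ImplicitEM(), dt=1//2^(7), adaptive=true, saveat=0.1, callback=cb)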

Thanks for notifying me about this. I will try things out when I get back home after the weekend, but if this is anything like what I understand it to be, then it is really good stuff. Many times I have wished for some kind of adaptive time stepping; I could definitely see a lot of improvements with this!

There’s still a lot of things to do, but if you want to help tune the methods comment over at https://github.com/JuliaDiffEq/StochasticDiffEq.jl/issues/62 . Then the fully-implicit methods should be coming ASAP to satisfy the large noise issue needs.

The new ISSEM method is this. It’s finished and should be released soon. It has similar adaptivity added onto it, so in theory it should be exactly what you were looking for, sans tweaks to the algorithm to make it more efficient.
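
Once it’s out, it should presumably be a drop-in replacement (a sketch, assuming the same solve interface as the other StochasticDiffEq.jl methods):

sol = solve(prob_sde, ISSEM(), saveat=0.1, callback=cb)   # drift- and diffusion-implicit, aimed at large noise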

Looking forward to trying this out.