I am running simulations of SDEs and plotting the results. I keep having the problem that Jupyter crashes, even though the simulations shouldn't be that intensive. I think I have tracked the problem down to the plots being too big. I have had this issue before and got help with it then. However, the use of `saveat` (as suggested there) only gets me so far.

I have been investigating, and it seems the plots are still much larger than `saveat` indicates. Here I have tried to make a minimal(ish) example:

```
using Plots
gr();
using DifferentialEquations

# Callback that resets any negative components to the previous step's values.
# (Defined for reference; the solve call below uses the built-in
# PositiveDomain() callback instead.)
function positive_domain()
    condition(u, t, integrator) = any(u .< 0)
    affect!(integrator) = integrator.u .= integrator.uprev
    return DiscreteCallback(condition, affect!)
end;

# A single degradation reaction: X decays at rate 0.01.
rn = @reaction_network rnType begin
    0.01, (X) → ∅
end

prob_sde = SDEProblem(rn, [1.], (0., 2000.))
sol = solve(prob_sde, ImplicitEM(), dt=0.001, callback=PositiveDomain(), saveat=1.);
length(sol)
```

Using `saveat=1.`, I would expect a solution of roughly length `2000`; in fact its length is `2002002`, which is quite a lot. I can also switch to `plotly()`, zoom in, and confirm that `plot(sol)` shows detail at a very small timescale. I have tried other `saveat` values, like `1` and `0.5`, but without success.

Am I using `saveat` wrong somehow? My stochastic simulations run over a time span of about 2000, and I am not really interested in anything on a timescale smaller than 1. Is there a good way to plot only about 2000 timepoints of my solution?
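To show the kind of output I would be happy with, here is an untested sketch of the workaround I have been considering: manually subsampling the saved points down to roughly 2000 before plotting (the `2000` target and the way I extract the first component are my own guesses, not something from the docs):

```
# Untested sketch: keep roughly one saved point per time unit by striding
# through the (unexpectedly large) solution before plotting.
step = max(1, length(sol) ÷ 2000)   # stride so that ~2000 points remain
idxs = 1:step:length(sol)
plot(sol.t[idxs], [u[1] for u in sol.u[idxs]])
```

This feels like it papers over the problem rather than fixing it, though, so I would still like to understand why `saveat` is not limiting the saved points in the first place.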