I am wondering how to make sure that an SDE solver has converged at the trajectory level, i.e., that I have chosen `abstol` and `reltol` small enough.
In the deterministic case, I would simply solve the differential equation twice with different tolerances. If the results lie on top of each other, I can be confident that the tolerances are small enough.
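For concreteness, this is the kind of check I mean in the ODE case (a minimal sketch using OrdinaryDiffEq.jl with a made-up exponential-growth example, not my actual model):

```julia
import OrdinaryDiffEq as ODE

# Hypothetical ODE, just to illustrate the tolerance check
f_det(u, p, t) = 1.01 * u
prob_det = ODE.ODEProblem(f_det, 0.5, (0.0, 1.0))

# Solve once with loose and once with tight tolerances
sol_loose = ODE.solve(prob_det, ODE.Tsit5(), abstol = 1e-6, reltol = 1e-6)
sol_tight = ODE.solve(prob_det, ODE.Tsit5(), abstol = 1e-9, reltol = 1e-9)

# If the two solutions agree pointwise, the looser tolerances were already fine
maximum(abs.(sol_tight.(sol_loose.t) .- sol_loose.u))
```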
In the stochastic case, however, this is not possible. I am new to these types of equations, but as far as I understand, SDE solvers only have strong convergence, which means that the expected difference between the numerical trajectory and the exact solution driven by the same noise vanishes like ~ dt^p as the step size dt goes to zero.
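For reference, this is the strong-order condition I have in mind (the standard textbook definition, with X the exact solution and X^{Δt} the numerical approximation at step size Δt):

$$
\mathbb{E}\left[\,\left| X(T) - X^{\Delta t}(T) \right|\,\right] \le C \, \Delta t^{\,p}
$$

As far as I can tell, this only controls the expected pathwise error, not how any single realized trajectory changes when I change the step size or the tolerances.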
I find this a bit unintuitive, as I don’t want my solution to depend on my choice of the time step (or the tolerances). To test this, I took an example from the tutorial and executed it with the same seed but different tolerances:
```julia
import StochasticDiffEq as SDE

# Geometric Brownian motion: du = α*u dt + β*u dW
α = 1
β = 1
u₀ = 1 / 2
f(u, p, t) = α * u  # drift
g(u, p, t) = β * u  # diffusion
tspan = (0.0, 1.0)
prob = SDE.SDEProblem(f, g, u₀, tspan)

# Same seed, two different tolerance settings
sol = SDE.solve(prob, SDE.SRIW1(), seed = 123, abstol = 1e-7, reltol = 1e-7, maxiters = Int(1e10), saveat = 0.01)
sol_lower_tol = SDE.solve(prob, SDE.SRIW1(), seed = 123, abstol = 1e-8, reltol = 1e-8, maxiters = Int(1e10), saveat = 0.01)
```
Plotting the two solutions shows that the trajectories do not lie on top of each other.
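To quantify the difference rather than just eyeballing the plot, something like this should work (a quick sketch; it assumes both solves were saved on the same `saveat` grid, as above):

```julia
# Maximum pointwise difference between the two runs on the shared saveat grid
maxdiff = maximum(abs.(sol.u .- sol_lower_tol.u))
```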
Therefore my questions:
- Is it better to play with `abstol` and `reltol` or with `dt` and `dtmax` to ensure convergence?
- How can I be sure that I have chosen the correct solver and that my solver has converged?
- Is it not a problem that my solution depends on the solver parameters? Or is it just that the realized noise differs (i.e., that different random numbers are drawn) when I modify the tolerances? (See the sketch below for how I imagine testing this.)
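For the last point, one way I could imagine testing it is to record the noise from the first solve and replay it in the second, so that only the tolerances change (a sketch, assuming `NoiseWrapper` from DiffEqNoiseProcess.jl can be used this way with an adaptive solver):

```julia
import DiffEqNoiseProcess as DNP

# First solve, with the Brownian path recorded
sol_ref = SDE.solve(prob, SDE.SRIW1(), seed = 123, abstol = 1e-7, reltol = 1e-7, save_noise = true)

# Re-solve with tighter tolerances but the exact same Brownian path
W = DNP.NoiseWrapper(sol_ref.W)
prob_fixed = SDE.SDEProblem(f, g, u₀, tspan, noise = W)
sol_fixed = SDE.solve(prob_fixed, SDE.SRIW1(), abstol = 1e-8, reltol = 1e-8)
```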
