Stochastic differential equations: how to improve the accuracy of the solution?

I’m trying to figure out how to integrate stochastic differential equations. I would like to see how an ODE that I have already solved behaves when subjected to a noisy signal. I wrote the code below. I don’t know if it is correct, but it is what I could make sense of from the SDE tutorial here: Stochastic Differential Equations · DifferentialEquations.jl. To see how SDEProblem works, I first ran it without noise, to check whether I recover the same solution as the ODE I had already solved with high precision and accuracy (black curve, obtained with Feagin14 and tested for convergence up to 100 significant digits at 1e-50 (!!) tolerance). As you can see, there is a substantial divergence from the SDE with zero noise (red curve). I tried all the stiff methods listed here: SDE Solvers · DifferentialEquations.jl. The only method that follows the “true” solution for roughly the first 10 seconds is STrapezoid. I then tried to improve it with lower tolerances and more max iterations (as far as my PC can handle), but that did not improve things much further (I need convergence out to 100 seconds).

So, the question is: Is there a way to improve the accuracy of the solution? Am I doing something wrong to obtain such a poor result? Or is this the best one can achieve?

using DifferentialEquations, Plots
setprecision(333); 

# Some weird ode
function pend(dy, y, p, t)
    (;gamma, w, w0, B) = p
    dy[1] = y[2]
    dy[2] = gamma*w0^2*cospi(w*t) - w0^2*sin(y[1]) - 2*B*y[2]
end 

# Noise function 
function noise(dy, y, p, t)
    dy[1] = 0.0 # No noise
    dy[2] = 0.0 # No noise
end

# Initial conditions and parameters
InPos=BigFloat("0.0"); InMom=BigFloat("0.0");
Tin=BigFloat("0.0"); Tfin=BigFloat("100.0");
parameters=(gamma=BigFloat("1.16"), B=3*big(pi)/4, w0=3*big(pi), w=BigFloat("2"));

# Solve
prob_sde = SDEProblem(pend, noise, [InPos,InMom], (Tin,Tfin), parameters);
sol = solve(prob_sde, STrapezoid(), abstol=1e-7, reltol=1e-7, maxiters=1e7);
t=sol.t; L=length(t);
u=[sol.u[i][1] for i in 1:L];

# Plot
dtPlot = max(1, round(Int, L/1920));  # downsample to roughly 1920 plotted points
plot(t[1:dtPlot:end],u[1:dtPlot:end],lc=:red, lw=.8, label="SDE")
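
For reference, the black curve I compare against is produced along these lines (a rough sketch only; my actual run used Feagin14 at much tighter tolerances, as mentioned above):

# Reference ODE solution with a high-order method (reuses pend and parameters from above)
prob_ode = ODEProblem(pend, [InPos,InMom], (Tin,Tfin), parameters);
sol_ref = solve(prob_ode, Feagin14(), abstol=BigFloat("1e-30"), reltol=BigFloat("1e-30"));
plot!(sol_ref.t, [u[1] for u in sol_ref.u], lc=:black, lw=.8, label="ODE (Feagin14)")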

In general, when simulating chaotic systems you should expect O(1) divergence after a few Lyapunov times.


Are we sure this is or isn’t chaotic? Can someone do a quick test of that first?

Yes, it is the driven damped pendulum in the chaotic regime.

Yeah, so then everything here applies: the numerical solution will just fall off the true trajectory, and the accuracy of the solver only determines the point at which it becomes O(1) different.
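
You can see this directly by perturbing the initial condition; a minimal sketch, assuming the pend function and parameter values from the post above (Vern9, the 1e-10 perturbation, and Float64 are arbitrary choices here):

# Sketch: two nearby initial conditions diverge to O(1) after a few Lyapunov times,
# no matter how accurate the solver is.
p64 = (gamma=1.16, B=3pi/4, w0=3pi, w=2.0)
ts = 0.0:0.05:100.0
solA = solve(ODEProblem(pend, [0.0, 0.0],   (0.0, 100.0), p64), Vern9(), abstol=1e-14, reltol=1e-14, saveat=ts)
solB = solve(ODEProblem(pend, [1e-10, 0.0], (0.0, 100.0), p64), Vern9(), abstol=1e-14, reltol=1e-14, saveat=ts)
plot(ts, [abs(a[1] - b[1]) for (a, b) in zip(solA.u, solB.u)], yscale=:log10, label="|Δθ| between runs")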

STrapezoid is 2nd order when there is no noise, and strong order 1/2 with noise. Compared against a 14th-order method, that is a huge difference in error: shrinking the step to dt/8 reduces the error by a factor of 8^2 = 64 for the 2nd-order method, but by a factor of 8^14 = 2^(3*14) ≈ 4e12 for the 14th-order one. The accuracy of the two is not even comparable.
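
In numbers, just evaluating the dt^p scaling factors in the REPL:

# Error-reduction factors when shrinking the step to dt/8, assuming error ~ dt^p
8^2    # = 64        2nd-order STrapezoid, no noise
8^14   # ≈ 4.4e12    14th-order Feagin
8^0.5  # ≈ 2.8       strong order 1/2 once noise is present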

In general, it is very difficult for SDE methods to achieve high order. And once there is noise, ODE-style solvers drop to at most strong order 1/2 convergence because of the regularity issues (some become non-convergent altogether), and this is a fairly fundamental limitation.
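
For what it’s worth, once the diffusion term is actually non-zero, you would reach for a dedicated strong method for diagonal noise. A minimal sketch, assuming Float64 and an arbitrary 0.02 noise amplitude (neither is from the original post):

# Sketch: non-zero diagonal noise on the momentum, solved with an adaptive strong method
function noise2(dy, y, p, t)
    dy[1] = 0.0
    dy[2] = 0.02   # assumed constant noise amplitude
end
p64 = (gamma=1.16, B=3pi/4, w0=3pi, w=2.0)
prob_noisy = SDEProblem(pend, noise2, [0.0, 0.0], (0.0, 100.0), p64)
sol_noisy = solve(prob_noisy, SOSRI(), abstol=1e-6, reltol=1e-6)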


Thank you, the article is insightful. It is in line with what I have experienced so far.

So, the first takeaway is that it is a waste of time to push the SDE solvers further… good to know.

The ensemble method is interesting. Can’t one use this for high-accuracy solutions with noise? However, it isn’t clear to me how the absolute size of the noise is determined in relation to the adaptive time steps and the order of the algorithm. I need to add a controlled, minuscule Gaussian random “kick” at regular time intervals (not at the adaptive time steps). Setting Tspan = T and a predetermined total error ET, an error E = ET/N would be added N times, once every time step T/N, with N → ∞. I don’t see how this could be done with the scaling factor in the callback, because that is an adaptive-time-step procedure… or is it feasible? Roughly what I have in mind is sketched below.
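
A sketch only (N, T, and kick_size below are placeholder values, it assumes the DiffEqCallbacks package, and I’m not sure how it interacts with adaptive stepping):

using DiffEqCallbacks
N = 10_000; T = 100.0                      # N kicks over Tspan = T (placeholder values)
kick_size = 1e-6                           # assumed per-kick amplitude, E = ET/N
kick! = PeriodicCallback(integrator -> (integrator.u[2] += kick_size*randn()), T/N)
prob_ode = ODEProblem(pend, [InPos,InMom], (Tin,Tfin), parameters)
sol_kicked = solve(prob_ode, Feagin14(), abstol=1e-12, reltol=1e-12, callback=kick!)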

For problems with noise there are two separate notions of convergence: strong and weak. If you only care about statistical properties, you can converge (in the weak sense) much faster than you can for the actual trajectories.
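
As a sketch of the weak/statistical side, assuming the hypothetical prob_noisy from the earlier sketch (the 1000 trajectories and saveat=0.1 are arbitrary choices):

# Sketch: ensemble of noisy trajectories; the statistics converge in the weak sense
ensemble = EnsembleProblem(prob_noisy)
sim = solve(ensemble, SOSRI(), EnsembleThreads(), trajectories=1000, saveat=0.1)
summ = EnsembleSummary(sim)   # mean and quantile bands over the trajectories
plot(summ)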