[Solved] Solution to differential equation changes with Julia version

Hey everyone!!

I am stuck on trying to replicate some previous results for my thesis right now.

For that, I want to generate the solution to a system of differential equations using the OrdinaryDiffEq package.
Previously, those results were generated using Julia 1.5.3. Now I am using Julia 1.10.
For some reason I am unable to replicate them even when using the exact same lines of code.

My system of differential equations is a Kuramoto model, solved using Tsit5 (a Runge-Kutta method). As far as I know (not a physicist though), this solver and model should be numerically stable and deterministic, i.e. yield the same results for the same set of initial conditions and parameters.

Here are the relevant lines of code:

"""
	kuramoto_model(ddu, du, u, kura_param, t)

The Kuramoto model of coupled oscillators.

# Arguments
- `ddu`: The second derivative of the state vector.
- `du`: The first derivative of the state vector.
- `u`: The state vector.
- `kura_param`: The parameters of the Kuramoto model.
- `t`: The current time.

# Notes
This function modifies `ddu` in-place.
"""
function kuramoto_model(ddu, du, u, kura_param, t)
	V = exp.(u .* im)
	flow = real.(V .* conj(kura_param.L * V))
	@. ddu = -kura_param.kuram_α .* du + kura_param.P + flow
end

kura_problem = SecondOrderODEProblem(kuramoto_model, du0, u0, tspan, kura_param_data) # initial ODE problem
sol = solve(kura_problem, Tsit5(); save_idxs = 1:(d.N), reltol = 1e-6, abstol = 1e-6, callback = cb, saveat = save_at_range)
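For context, here is a minimal self-contained sketch of the kind of setup behind the snippet above. Everything in it is a placeholder (network size, coupling matrix, parameters, initial conditions, time span), not my actual thesis data, and the callback is left out:

using OrdinaryDiffEq
using LinearAlgebra

# Placeholder problem data (not my actual thesis values)
N = 10                                   # number of oscillators
L = ones(N, N) ./ N - I(N)               # placeholder coupling matrix
P = 0.1 .* sin.(1:N)                     # placeholder power injections
kura_param_data = (L = L, P = P, kuram_α = 0.1)

u0 = 2π .* rand(N)                       # placeholder initial phases
du0 = zeros(N)                           # placeholder initial frequencies
tspan = (0.0, 100.0)                     # placeholder time span
save_at_range = range(tspan[1], tspan[2]; length = 1001)

kura_problem = SecondOrderODEProblem(kuramoto_model, du0, u0, tspan, kura_param_data)
sol = solve(kura_problem, Tsit5(); save_idxs = 1:N, reltol = 1e-6, abstol = 1e-6,
            saveat = save_at_range)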

Neither the callback nor save_at_range changes between the Julia versions.

When I evaluate this code using 1.5.3 I get a slightly different result than when I evaluate it using 1.10.

I don’t think these changes can be attributed to floating-point error, as the maximum of the state variable at the last timestep sometimes differs by more than 1.

Examples:
Julia 1.5.3 yields maximum(sol.u[end]) = 4.166,
while 1.10 yields maximum(sol.u[end]) = 3.3866.

The initial conditions and all Kuramoto parameters are the same for both versions; I triple-checked that.

Does anyone here know if something changed between those Julia versions that could explain this drastic difference? Am I overlooking something simple?

Thank you very much in advance; I’ve been stuck on this bug (?) or user error for the past two weeks and don’t know how to continue. If you need further information, let me know. I’ve created an entire PowerPoint trying to debug this, lol (not kidding).

Here’s an example plot of one of the solutions, where 1.5 yields 0.0822 and 1.10 yields 0.0031:

[plot of the solution]

What happens if you lower the tolerances? The solutions look pretty much alike from the plots.

I do remember that there was a time in the past when SecondOrderODEProblem returned incorrect results for oscillating systems. I do not know if that applies to your case, since I do not recall exactly what the problem was in those earlier versions, but your solutions do seem to be close.

That said, I do not know whether those sharp oscillations at longer times are physical or a byproduct of the tolerances not being sufficiently small.
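For example, something like this (re-solving your existing kura_problem with much tighter, somewhat arbitrarily chosen tolerances) would show whether the result is tolerance-limited:

# Re-solve with much tighter tolerances; if the final state moves a lot,
# the original solution was not converged with respect to the tolerances.
sol_tight = solve(kura_problem, Tsit5(); save_idxs = 1:(d.N),
                  reltol = 1e-10, abstol = 1e-10,
                  callback = cb, saveat = save_at_range)
maximum(abs.(sol_tight.u[end] .- sol.u[end]))  # size of the change in the final state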

Does your system converge to a steady state? From the plots you show it looks like it does, but I am not sure. If you think it does, integrate for twice as long and report whether you get the same results.
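For example (using remake to reuse your problem with a doubled time span; saveat is dropped here since your original range would not cover the longer span):

# Same problem, integrated for twice as long, to check for a steady state.
long_problem = remake(kura_problem; tspan = (tspan[1], 2 * tspan[2]))
sol_long = solve(long_problem, Tsit5(); save_idxs = 1:(d.N),
                 reltol = 1e-6, abstol = 1e-6, callback = cb)
maximum(sol_long.u[end])  # compare against the value from the original tspan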

If your system is chaotic, then it is normal that you don’t get exactly the same result when changing Julia versions. Even milder changes, like switching your operating system, would lead to differences.
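A quick way to convince yourself of this on your own setup: perturb the initial condition by something tiny and see whether the final state changes by O(1). A sketch, reusing the names from your posted code:

# Perturb the initial phases by ~1e-12; for a chaotic system integrated past a
# Lyapunov time, the final state will differ by O(1) even for this tiny change.
u0_perturbed = u0 .+ 1e-12 .* randn(length(u0))
perturbed_problem = SecondOrderODEProblem(kuramoto_model, du0, u0_perturbed, tspan, kura_param_data)
sol_perturbed = solve(perturbed_problem, Tsit5(); save_idxs = 1:(d.N), reltol = 1e-6,
                      abstol = 1e-6, callback = cb, saveat = save_at_range)
maximum(abs.(sol_perturbed.u[end] .- sol.u[end]))  # O(1) rather than O(1e-12) if chaotic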

A different question to ask, however, is whether the exact final state of this particular simulation matters. This is off topic from your question, I understand, but it is worth asking yourself, to decide whether you should be spending any time on this at all.

Is the change caused by a newer version of Julia (v1.5.3 vs. v1.10), or is it due to a different version of DifferentialEquations.jl? I would guess that it is due to a new version of DifferentialEquations.jl, but I may be wrong.
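You can check what each environment actually resolved with Pkg, and reproduce the old package versions from the old Project.toml/Manifest.toml if you still have them (the path below is a placeholder):

import Pkg
Pkg.status("OrdinaryDiffEq")  # shows the resolved version in the active environment
# To reproduce the old package versions exactly:
# Pkg.activate("path/to/old/project"); Pkg.instantiate()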


Sadly, I think it does, as the changes are bigger than 0.1 ;_;
But thanks for the reply!! It is certainly a good question.

I am unsure; that was one of the things I was trying to evaluate.

Copy-pasting @ChrisRackauckas’ answer here, as I believe it answers the question and so other people can find the solution in the future:

> The Kuramoto model of coupled oscillators.
>
> That’s a well-known and well-studied chaotic system. See for example [1802.05481] Chaos in Kuramoto oscillator networks. You didn’t share your tspan, but I presume you’re solving past a Lyapunov time, because your solution looks like it’s in the chaotic regime. Given that, a property of chaos is exponential sensitivity to initial conditions and numerical error. So any change, like how exp is calculated, could introduce a difference on the order of 1e-16, and by a Lyapunov time that would perturb the system’s solution by O(1). Because of that, long-time solutions of chaotic systems are better thought of as random samples from the chaotic attractor, since the only thing you can guarantee is the Shadowing Lemma (Shadowing lemma - Wikipedia): the exact solution to some epsilon-close initial condition was computed accurately, but due to exponential divergence from numerical error the simulator cannot guarantee it’s anything except a sample from the epsilon ball.
>
> In other words, chaotic systems are weird in a fun way, but it means directly reproducing the simulation of a chaotic system is almost impossible unless every single step of the computation is exactly the same. I used to run a game in some talks where I would tell people to solve the Lorenz equation with a specific set of initial conditions and parameters for (0,100), they would tell me the solution at the end, and I would tell them what operating system and solver they were using. The results are O(1) different, so you can tell from that information whether someone was using SciPy on Windows vs. Linux (since they use a different math library and thus a different implementation of floating-point power, which is used in the compiled Fortran code of the dopri5 solver that SciPy defaults to), while R uses a different RMath via deSolve, etc.
>
> If I had to guess, Julia v1.5.3 was long enough back that there was a change to how fastmath floating-point power was computed, and so this likely stems from the ~2 ulp or whatnot difference there. Chaos is fun.
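For anyone who wants to see this effect for themselves, here is a small sketch of the Lorenz "game" described above, using the textbook parameters σ = 10, ρ = 28, β = 8/3 and a tiny perturbation of the initial condition (these values are just the classic ones, not anything specific from the quoted talks):

using OrdinaryDiffEq

# Classic Lorenz system; past a few Lyapunov times, a 1e-14 change in the
# initial condition produces O(1) differences in the final state.
function lorenz!(du, u, p, t)
    σ, ρ, β = p
    du[1] = σ * (u[2] - u[1])
    du[2] = u[1] * (ρ - u[3]) - u[2]
    du[3] = u[1] * u[2] - β * u[3]
end

lorenz_p = (10.0, 28.0, 8 / 3)
lorenz_u0 = [1.0, 0.0, 0.0]
lorenz_tspan = (0.0, 100.0)

sol_a = solve(ODEProblem(lorenz!, lorenz_u0, lorenz_tspan, lorenz_p), Tsit5(); reltol = 1e-6, abstol = 1e-6)
sol_b = solve(ODEProblem(lorenz!, lorenz_u0 .+ 1e-14, lorenz_tspan, lorenz_p), Tsit5(); reltol = 1e-6, abstol = 1e-6)
sol_a.u[end] .- sol_b.u[end]  # differences are O(1), not O(1e-14)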
