Copy-pasting @ChrisRackauckas' answer here, as I think it answers the question and so other people can find the solution in the future:
> The Kuramoto model of coupled oscillators.
> That’s a well-known and well-studied chaotic system; see for example [1802.05481] Chaos in Kuramoto oscillator networks. You didn’t share your tspan, but I presume you’re solving past a Lyapunov time, because your solution looks like it’s in the chaotic regime. A defining property of chaos is exponential sensitivity to initial conditions and to numerical error, so any change, such as how exp is calculated, can introduce a perturbation on the order of 1e-16, and within a Lyapunov time that perturbs the system’s solution by O(1). Because of that, long-time solutions of chaotic systems are better thought of as random samples from the chaotic attractor. The only thing you can guarantee is the Shadowing Lemma (Shadowing lemma - Wikipedia): the exact solution to some epsilon-close initial condition was computed accurately, but due to exponential divergence from numerical error the simulator cannot guarantee it’s anything except a sample from the epsilon ball.
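To make the quoted point concrete, here is a small sketch (mine, not from the answer above) that integrates the Lorenz system, used as a stand-in chaotic system since a chaotic Kuramoto network takes more setup, from two initial conditions differing by 1e-12 and watches the gap blow up to the size of the attractor. The hand-rolled RK4 step is just to keep the snippet dependency-free:

```python
# Exponential sensitivity demo: two Lorenz trajectories whose initial
# conditions differ by 1e-12 end up O(1) apart after a few Lyapunov times.

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    # one classical fourth-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def integrate(s, t_end, dt=0.01):
    for _ in range(int(t_end / dt)):
        s = rk4_step(s, dt)
    return s

a = integrate((1.0, 0.0, 0.0), 50.0)
b = integrate((1.0 + 1e-12, 0.0, 0.0), 50.0)  # perturbed by 1e-12
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)  # the tiny perturbation has been amplified enormously
```

The same thing happens if, instead of perturbing the initial condition, you perturb any intermediate arithmetic by one rounding error, which is exactly the situation the answer describes.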
> In other words, chaotic systems are weird in a fun way, but it means directly reproducing the simulation of a chaotic system is almost impossible unless every single step of the computation is exactly the same. I used to run a game in some talks where I would tell people to solve the Lorenz equations with a specific set of initial conditions and parameters over (0, 100), they would tell me the solution at the end, and I would tell them what operating system and solver they were using. The results are O(1) different, so from that information alone you can tell whether someone was using SciPy on Windows vs. Linux (they link different math libraries and thus different implementations of floating-point power, which is used in the compiled Fortran code of the dopri5 solver that SciPy defaults to), while R uses a different RMath via deSolve, etc.
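You can run a miniature, single-machine version of that game yourself (a sketch of mine, not the solver setup from the talks): rewrite one term of the Lorenz right-hand side in a mathematically equivalent but differently-rounded way, and by t = 100 the two runs typically disagree at O(1), just like two operating systems with different libm power implementations would:

```python
# sigma*(y - x) and sigma*y - sigma*x are algebraically identical but can
# round differently by ~1 ulp per evaluation; chaos amplifies that to O(1).

def make_lorenz(distributed):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    def f(s):
        x, y, z = s
        dx = sigma * y - sigma * x if distributed else sigma * (y - x)
        return (dx, x * (rho - z) - y, x * y - beta * z)
    return f

def rk4(f, s, dt, n):
    for _ in range(n):
        k1 = f(s)
        k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s

s0 = (1.0, 0.0, 0.0)
u = rk4(make_lorenz(False), s0, 0.01, 10000)  # integrate to t = 100
v = rk4(make_lorenz(True), s0, 0.01, 10000)   # same, with the reordered term
gap = max(abs(p - q) for p, q in zip(u, v))
print(gap)
```

Both runs are "correct" in the shadowing sense; they are just samples from epsilon-balls around different nearby exact trajectories.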
> If I had to guess, Julia v1.5.3 is far enough back that there was a change to how fastmath floating-point power is computed, so this likely stems from the ~2 ulp difference there. Chaos is fun.
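The magnitude of such a change is easy to observe directly in Python: `math.pow` and the classic `exp(y*log(x))` expansion (one plausible way a math library might implement power, used here purely for illustration) agree only to within a few ulps, and that is the kind of difference a library upgrade can introduce:

```python
import math

def ulp_diff(a, b):
    # distance between a and b in units of the last place
    return abs(a - b) / math.ulp(max(abs(a), abs(b)))

# Compare libm pow against the exp/log expansion over a range of inputs.
diffs = []
for i in range(1, 200):
    x = 1.1 + 0.003 * i
    y = 0.37 * i
    p1 = math.pow(x, y)
    p2 = math.exp(y * math.log(x))
    diffs.append(ulp_diff(p1, p2))

print(max(diffs))  # nonzero: the two implementations disagree in the last bits
```

A few ulps is all it takes: feed that difference into the first example above and a Lyapunov time later the trajectories are O(1) apart.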