I am solving a set of differential equations, see eqs C3a-C3e from https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.5.013091
I separated the equations into real and imaginary parts and propagate them using different solvers. In the beginning, I set the algorithm to nothing and chose alg_hints = [:stiff] (which corresponds to choosing the solver automatically). Here is how I call the solver:
sol = solve(
    prob,
    alg,
    alg_hints = [:stiff],
    dtmax = 0.01,
    maxiters = Int(1e10),
    saveat = 0.01,
    reltol = reltol,
    abstol = abstol
)
Depending on how I choose abstol and reltol, the dynamics can have weird wiggles or even become unstable. For new configurations, I always need to check which tolerances work best.
For example, if I consider 3 particles (~19 real equations) that interact strongly (J_ij ~ 3000), then it is best to choose abstol = 1e-9 and reltol = 1e-8, see screenshot.
However, if I consider 5 particles (~105 real equations) that interact less strongly (J_ij ~ 375), then it is best to choose abstol = 1e-6 and reltol = 1e-5, see screenshot.
My questions are: Is it expected that one has to play with abstol and reltol for different configurations? Is there not an automatic way of determining them?
Can one understand why in one case it is good to have low tolerances and in the other it is not?
Just linking for reference an in-depth discussion about abstol and reltol from a few months ago: Setting `abstol` and `reltol` when solving the Schrödinger equation with OrdinaryDiffEq (note that the answer that's marked as correct isn't 100% correct, so don't stop reading there).
I have no idea why lowering the tolerance makes your solver unstable, though. Probably a good idea to experiment with different solvers, but it would also be good to know why the default choice screws up like this.
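For intuition, here is roughly how the two tolerances enter the stepper (a sketch of the standard Hairer-style error control, not the literal OrdinaryDiffEq internals):

# Sketch of the weighted error norm adaptive steppers use to accept or
# reject a step. Not the actual OrdinaryDiffEq code, just the standard idea.
function error_norm(err, uprev, unew, abstol, reltol)
    s = 0.0
    for i in eachindex(err)
        # each component is weighted by abstol + reltol * |u|
        w = abstol + reltol * max(abs(uprev[i]), abs(unew[i]))
        s += (err[i] / w)^2
    end
    return sqrt(s / length(err))  # step accepted (roughly) when this is <= 1
end

So whether abstol or reltol dominates for a given component depends on the magnitude of that component, which is one reason configurations with very different scales (J_ij ~ 3000 vs. ~375) can want different tolerance settings.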
You really need to manually choose the solver that works best for your problem. Might take a couple of days because there are ~150 solvers to choose from. And many of them also have extra options. We need to use for example:
solver = FBDF(nlsolve=OrdinaryDiffEqNonlinearSolve.NLNewton(relax=0.4, max_iter=1000))
integrator = OrdinaryDiffEqCore.init(s.prob, solver; dt, abstol=s.set.abs_tol, reltol=s.set.rel_tol, save_on=false, save_everystep=false)
for some problems.
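If you want to automate part of that search, a simple loop over candidate solvers can help. A sketch (the solver list and tolerances are only examples, and prob is assumed to be your existing ODEProblem):

# Sketch: try a few stiff-capable solvers and compare each against a
# tight-tolerance reference run. Solver list and tolerances are examples only.
using DifferentialEquations

reference = solve(prob, Rodas5P(); abstol = 1e-12, reltol = 1e-12, saveat = 0.01)
for alg in (Rodas5P(), FBDF(), KenCarp4(), TRBDF2())
    sol = solve(prob, alg; abstol = 1e-8, reltol = 1e-6, saveat = 0.01)
    # error at the final time (only meaningful if the solve reached it)
    err = maximum(abs.(sol.u[end] .- reference.u[end]))
    println(nameof(typeof(alg)), ": retcode = ", sol.retcode, ", final-time error = ", err)
end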
Trying different solvers is often necessary, but even an unsuitable solver usually shouldn't go unstable when adaptive time stepping is used. It should either be painfully slow or error out with dt < dt_min. It would be good to understand why stability is lost in this case, and especially why it's triggered by tightening the tolerances.
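One way to tell which of those is happening is to check the return code instead of eyeballing the trajectory. A sketch (assuming prob is the existing ODEProblem):

# Sketch: inspect how the integration terminated. ReturnCode.DtLessThanMin,
# ReturnCode.Unstable, and ReturnCode.MaxIters indicate different failure modes.
using DifferentialEquations, SciMLBase

sol = solve(prob; abstol = 1e-9, reltol = 1e-8)
if !SciMLBase.successful_retcode(sol)
    @warn "Integration did not finish cleanly" sol.retcode
end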
Are you modifying u in f? What is plotted doesn't look natural, it looks like it's violating something. And the instabilities around low tolerances are a hallmark that you're doing something that violates step rejection.
See
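To make "violates step rejection" concrete: during adaptive stepping, f can be called with trial values of u that are later rejected, so f must not leave any trace of those calls. A sketch of the kind of pattern that breaks this (the names here are made up, not taken from your code):

# BAD: keeps state between calls to f, so even a rejected trial step
# changes future evaluations (γ and last_u are hypothetical names).
const last_u = Ref(0.0)
function rhs_bad!(du, u, p, t)
    du[1] = -p.γ * (u[1] - last_u[])  # depends on a previous call
    last_u[] = u[1]                   # stores state across calls
    return nothing
end

# OK: du depends only on (u, p, t), and u itself is never written to.
function rhs_ok!(du, u, p, t)
    du[1] = -p.γ * u[1]
    return nothing
end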
Thanks for all the input!
You really need to manually choose the solver that works best for your problem.
I have actually tried a few different solvers. So far, I have not found any solver that worked for all configurations I have considered. Also, sometimes, high tolerances work better than low tolerances, see screenshot for a 3x3 lattice.
Are you modifying u in f? What is plotted doesn't look natural, it looks like it's violating something. And the instabilities around low tolerances are a hallmark that you're doing something that violates step rejection.
No, I am not modifying u, just computing du. It is entirely possible that I made a mistake somewhere, but I compared my results to some cases shown in the paper where I got the equations from, and my code matched their results.
It is important to mention that we expect there to be some instabilities in the equations (as also mentioned in the paper). I am working on removing those by performing a projection between propagation steps. In that case, I would really modify u between time steps - but I am not at this point yet.
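When I get to that point, my plan is to do the projection through a callback rather than inside the RHS, so the solver knows about the modification. Roughly something like this sketch, where project! is a placeholder for the actual projection:

# Sketch: apply a projection after every accepted step via a DiscreteCallback.
# project! is a placeholder for the actual constraint projection.
using DifferentialEquations

project!(u) = nothing  # placeholder: enforce the constraints on u in place

condition(u, t, integrator) = true           # fire after every accepted step
affect!(integrator) = project!(integrator.u)
cb = DiscreteCallback(condition, affect!; save_positions = (false, false))

sol = solve(prob, alg; callback = cb, abstol = abstol, reltol = reltol)

(DiffEqCallbacks.jl also has a ManifoldProjection callback if the constraint can be written as a residual function, which might be the cleaner option.)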
What troubles me is that I don't know when the instabilities or the unphysical wiggles come from my solver choice, and when the equations are in fact unstable.
That doesn't look like a possible solver instability. That looks like something you'd get if you violated one of the principles of the ODE solver's assumptions. Did you check the whole list?
Yes, as far as I can tell I did.
- I do not modify u
- There is no stochastic term in the equation
- I do not use values from "previous" time steps
- There is no unsmoothness like an if condition
- There are some constraints, e.g. that the population of the ground state ng and the population of the excited state ne need to sum up to 1. However, I completely remove ng by replacing it with ng = 1 - ne.
Here is my function:
using Base.Threads: @threads  # for the @threads macro used below

function eoms!(du, u, p, t)
    slv, sys, params = p
    # Extract the atomic operators from u
    σee, σeg_σge, σee_σee, σee_σeg_σge, σee_σee_σee = to_sigma(slv, sys, u)
    # Use multithreading to loop over all complex equations
    @threads for i = 1:slv.n_cmplx_eqs
        if i <= sys.n1_cmplx
            # Get indices pertaining to the ith equation
            # E.g., the first equation pertains to the population of the
            # first atom σ_ee^1, whose indices are (e, e, 1)
            inds = System.inds_from_equationind(sys, i)
            res = eom1(σee, σeg_σge, inds, params)
        elseif i <= (sys.n1_cmplx + sys.n2_cmplx)
            # Equations for second-order operators, e.g., σ_eg^1 σ_ge^2
            inds = System.inds_from_equationind(sys, i)
            res = eom2(
                σee,
                σeg_σge,
                σee_σee,
                σee_σeg_σge,
                inds,
                params
            )
        else
            # Equations for third-order operators
            inds = System.inds_from_equationind(sys, i)
            res = eom3(
                σee,
                σeg_σge,
                σee_σee,
                σee_σeg_σge,
                σee_σee_σee,
                inds,
                params
            )
        end
        # Get the indices of u that pertain to the ith equation
        uinds = slv.equationind_to_uind[i]
        # If the equation is real, it pertains to 1 index of u,
        # if it is complex then it pertains to 2
        if length(uinds) == 1
            du[uinds[1]] = real(res)
        else
            du[uinds[1]] = real(res)
            du[uinds[2]] = imag(res)
        end
    end
end
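A quick sanity check I can run against the step-rejection issue (a sketch; u0 and p here are the initial state and parameter tuple from my setup): call the RHS twice on the same state and verify that u is untouched and du comes out identical, which would also catch accidental index collisions between threads.

# Sketch: verify the RHS is a pure function of (u, p, t).
u   = copy(u0)
du1 = zero(u0)
du2 = zero(u0)
eoms!(du1, u, p, 0.0)
eoms!(du2, u, p, 0.0)
@assert u == u0     # the RHS must not modify u
@assert du1 == du2  # repeated calls on the same state must agree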
Is your function holomorphic?
I think to help more I'll need something I can run.