KenCarp47 gives the wrong result no matter how tight the tolerance is. Is there a way to resolve this?

I am trying to solve a set of coupled nonlinear equations. The equations of motion are

i\frac{d\rho(\cos\theta,t)}{dt}=[H,\rho(\cos\theta,t)]
i\frac{d\bar{\rho}(\cos\theta,t)}{dt}=[\bar{H},\bar{\rho}(\cos\theta,t)]

where H and \bar{H} are functions of \rho and \bar{\rho}, which makes the system highly nonlinear, and the system has a physical instability: the solution grows exponentially. I solve it by discretizing over \cos\theta, which leaves me with a few thousand ODEs. If I solve it using KenCarp47, the solution comes out stable. Are these algorithms mistaking a real instability for a numerical one and suppressing it somehow? I have pasted my Julia code below.
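For reference, reading the self-interaction terms off the two accumulation loops in the code below (with \mu = mup, and assuming I am reconstructing them correctly), the discretization implements

H(\cos\theta)=H_{\mathrm{vac}}+\mu\int_{-1}^{1}\left(1-\cos\theta\cos\theta'\right)\left[\rho(\cos\theta',t)-\bar{\rho}(\cos\theta',t)\right]d\cos\theta'

with \bar{H} given by the same expression with H_{\mathrm{vac}} replaced by -H_{\mathrm{vac}}.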

using DifferentialEquations
using OrdinaryDiffEq
using Plots
using Sundials
using LinearSolve
const θ = 1.0e-6
const Energy=50
const Δmsq = 2.5e-3*(1.0e6/197.3269631)
const Hvac=(Δmsq/(4.0*Energy))*[[-cos(2.0*θ)+0im, sin(2.0*θ)+0im] [sin(2.0*θ)+0im, cos(2.0*θ)+0im]]
const c = 3e5
const mup = 1.0e5
function eoms!(du,u,p,t)
    println(t)
    AngleBins = p[1]
    du .= 0.0e0
    Hself0 = zeros(ComplexF64, 2, 2)
    Hself1 = zeros(ComplexF64, 2, 2)
    dcosth = 2.0e0/AngleBins
    costharr = range(-1.0 + dcosth/2.0e0, 1.0 - dcosth/2.0e0, AngleBins)
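    # first pass: accumulate the angular moments of (rho - rhobar) over cos(theta)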
    for i = 0:AngleBins-1
        rho = [[u[i*8+1]+0im, u[i*8+3]-1im*u[i*8+4]] [u[i*8+3]+1im*u[i*8+4], u[i*8+2]+0im]]
        rhobar = [[u[i*8+5]+0im, u[i*8+7]-1im*u[i*8+8]] [u[i*8+7]+1im*u[i*8+8], u[i*8+6]+0im]]
        Hself0 = Hself0 + (rho - rhobar)*dcosth
        Hself1 = Hself1 + (rho - rhobar)*costharr[i+1]*dcosth
    end
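    # second pass: build H for each angle bin and evaluate the commutators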
    for i = 0:AngleBins-1
        rho = [[u[i*8+1]+0im, u[i*8+3]-1im*u[i*8+4]] [u[i*8+3]+1im*u[i*8+4], u[i*8+2]+0im]]
        rhobar = [[u[i*8+5]+0im, u[i*8+7]-1im*u[i*8+8]] [u[i*8+7]+1im*u[i*8+8], u[i*8+6]+0im]]
        Hself = mup*(Hself0 - Hself1*costharr[i+1])
        dudt = -c*1im*((Hvac + Hself)*rho - rho*(Hvac + Hself))
        dudtbar = -c*1im*((-Hvac + Hself)*rhobar - rhobar*(-Hvac + Hself))
        du[i*8+1] = real(dudt[1,1])
        du[i*8+2] = real(dudt[2,2])
        du[i*8+3] = real(dudt[1,2])
        du[i*8+4] = imag(dudt[1,2])
        du[i*8+5] = real(dudtbar[1,1])
        du[i*8+6] = real(dudtbar[2,2])
        du[i*8+7] = real(dudtbar[1,2])
        du[i*8+8] = imag(dudtbar[1,2])
    end
end
AngleBins=100
nvars = 8*AngleBins
dcosth = 2.0e0/AngleBins
costharr = range(-1.0 + dcosth/2.0e0, 1.0 - dcosth/2.0e0, AngleBins)
u0 = zeros(Float64, nvars)
for i=0:AngleBins-1
    u0[8*i+1]=0.5e0
    u0[8*i+5]=0.47e0+0.05*exp(-(costharr[i+1]-1.0e0)*(costharr[i+1]-1.0e0))
end
p=[AngleBins]

tspan = (0.0,1e-6)
prob = ODEProblem(eoms!,u0,tspan,p)
#reltolp=1e-10
#abstolp=1e-10
#@time sol1 = solve(prob,ARKODE(Sundials.Explicit(),etable = Sundials.DORMAND_PRINCE_7_4_5),reltol=reltolp,abstol=abstolp,saveat=1e-8)
#reltolp=1e-10
#abstolp=1e-10
#@time sol2 = solve(prob,QNDF(),reltol=reltolp,abstol=abstolp,saveat=1e-8)
reltolp=1e-12
abstolp=1e-12
@time sol1 = solve(prob,CVODE_Adams(linear_solver=:GMRES,krylov_dim=30),reltol=reltolp,abstol=abstolp,saveat=1e-8)



@time sol3 = solve(prob,KenCarp47(linsolve=KrylovJL_GMRES(),concrete_jac=true),reltol=reltolp,abstol=abstolp,saveat=1e-8)
#@time sol1 = solve(prob,TRBDF2(linsolve=KrylovJL_GMRES(),concrete_jac=true),reltol=reltolp,abstol=abstolp,saveat=1e-8)
plot(sol1, vars=(2), yscale=:log10, ylims=(1e-15,1e-3), label="CVODE_Adams")
plot!(sol3, vars=(2), yscale=:log10, ylims=(1e-15,1e-3), label="KenCarp47")

What about other stable methods like CVODE_BDF, TRBDF2, or ODEInterfaceDiffEq.radau?


CVODE_BDF, TRBDF2, and Radau all capture the instability, but they are as slow as the explicit Runge-Kutta methods. I am trying to find an algorithm that is fast. I haven't found anything faster than CVODE_Adams, but it has the issue of intermittently not capturing the instability, so to be sure I have to run it at a very tight tolerance.

As a fellow ESDIRK enjoyer, I just wanted to share a mostly irrelevant discussion about RK and BDF. This totally does not answer any of your questions (sorry!), but I find the story in the link oddly charming.

Can you share a picture of the comparison? Did you try VCABM?

At a fixed tolerance of 1e-10 (both abs and rel), KenCarp47 and TRBDF2 are not able to capture the instability. CVODE_BDF is slow and its error is much larger than the explicit method's. I had not tried VCABM before, but at a tolerance of 1e-10 it also does not give accurate results, as you can see in the attached figure; I have put the runtime in the legend after each method. I know that the blue and green lines (Dormand-Prince and CVODE_Adams) are the correct results because I have a well-tested C++ code that gives the same result. Thanks a lot for taking an interest in my problem. I just find it strange that some of the algorithms are not able to capture an exponential growth at a tolerance of 1e-10.


It looks like the difference is between methods which are L-stable and ones that are not L-stable.

Is that code L-stable?

Is that code L-stable?

No, it is not. It uses one of the explicit Runge-Kutta methods; depending on the input parameters I use RKF45 or the 8th-order Dormand-Prince method. I did not know L-stability could cause a problem at such tight tolerances.

Edit: Aren't both VCABM and CVODE_Adams the same algorithm (an adaptive-order, adaptive-step Adams-Moulton method)? How come one is L-stable and the other is not?

I don't get what the problem is: correctness or speed? How do you know the correct trace?

Is the problem stiff? If so, why do you use CVODE_Adams?

To tackle the correctness issue, I would remove the convergence issue of GMRES by using a direct solver. Once you know the correct solver to use, then aim for speed. Talking about speed, a lot is left on the table judging from the code of the vector field.

I know the correct results in two ways. I have a well-tested C++ code that gives the same results as CVODE_Adams. Also, the equations of motion can be linearized, and the solution is known to be exponentially increasing at early times, as seen in the CVODE_Adams solution. The plots I am presenting are only the initial part of the calculation I am attempting to perform. I want to continue the calculations up to t ~ 1e-5, which takes a long time. I know how to do that using the explicit Runge-Kutta code I have, but I am hoping to use Julia to come up with a code that is faster. Since the system is so nonlinear and unstable, I want a reliable approach that I can trust when I change the input parameters, without worrying whether the system is really stable or the stability is a result of an error in the algorithm. At the same time, I need a code that is faster than what I have. I do not need a tolerance of 1e-10; I am using it because that is the only way I can capture the exponential growth. I was hoping that with the Krylov-based solvers I could run the code at a tolerance of 1e-6 and gain speed while preserving the qualitative features of the solution.

Talking about speed, a lot is left on the table judging from the code of the vector field

I am sorry, but I am new to Julia and I do not know how one should write the vector field in an optimized manner. I was surprised that I could gain a big performance boost by adding +0im to some real numbers in the definition of rho; I don't have to do that in C++. Are there any other problems in the eoms! function that are obvious to you as being inefficient? I would very much appreciate your insight.
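A minimal sketch of what an optimized right-hand side could look like, using StaticArrays (my assumption; the package is not used anywhere above): the 2x2 density matrices become stack-allocated SMatrix values, so both loops run without heap allocation, which is typically the dominant cost in a vector field built from many small matrices.

using StaticArrays

function eoms_static!(du, u, p, t)
    AngleBins = p[1]
    dcosth = 2.0/AngleBins
    costharr = range(-1.0 + dcosth/2, 1.0 - dcosth/2, AngleBins)
    # unpack one angle bin into stack-allocated 2x2 Hermitian matrices
    getrho(i) = @SMatrix [u[i*8+1]+0im u[i*8+3]+1im*u[i*8+4];
                          u[i*8+3]-1im*u[i*8+4] u[i*8+2]+0im]
    getrhobar(i) = @SMatrix [u[i*8+5]+0im u[i*8+7]+1im*u[i*8+8];
                             u[i*8+7]-1im*u[i*8+8] u[i*8+6]+0im]
    Hvac_s = SMatrix{2,2}(Hvac)
    # first pass: angular moments of (rho - rhobar)
    Hself0 = @SMatrix zeros(ComplexF64, 2, 2)
    Hself1 = @SMatrix zeros(ComplexF64, 2, 2)
    @inbounds for i = 0:AngleBins-1
        d = getrho(i) - getrhobar(i)
        Hself0 += d*dcosth
        Hself1 += d*costharr[i+1]*dcosth
    end
    # second pass: commutators written back into du
    @inbounds for i = 0:AngleBins-1
        rho, rhobar = getrho(i), getrhobar(i)
        Hself = mup*(Hself0 - Hself1*costharr[i+1])
        H, Hbar = Hvac_s + Hself, -Hvac_s + Hself
        dudt = -c*1im*(H*rho - rho*H)
        dudtbar = -c*1im*(Hbar*rhobar - rhobar*Hbar)
        du[i*8+1], du[i*8+2] = real(dudt[1,1]), real(dudt[2,2])
        du[i*8+3], du[i*8+4] = real(dudt[1,2]), imag(dudt[1,2])
        du[i*8+5], du[i*8+6] = real(dudtbar[1,1]), real(dudtbar[2,2])
        du[i*8+7], du[i*8+8] = real(dudtbar[1,2]), imag(dudtbar[1,2])
    end
    return nothing
end

Hooking it up only needs prob = ODEProblem(eoms_static!, u0, tspan, p). Dropping the println(t) from timed runs also helps, since printing on every right-hand-side evaluation is costly.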


I am curious, what is this well-tested C++ code? If that code is not L-stable, what makes it a reliable source? I don't think I would trust anything more than ODEInterfaceDiffEq.radau() or Rosenbrock23(autodiff=true), so what do those say? Those two algorithms have some of the most robust stability properties, with radau being a lot more efficient in the low-tolerance regime, though. What does ODEInterfaceDiffEq.radau() give at abstol=1e-12, reltol=1e-12? A fast-converging, A-B-L-stable, highly stiffly accurate Fortran code should be a good independent reference.

My guess is that it would show that it's stable, and there's just an issue with using non-L-stable methods here. Note that if your problem has enough stiffness to require L-stability, then other methods like explicit Runge-Kutta methods are not even convergent. Even CVODE's BDF is only 2nd-order stiffly accurate (stiffly accurate is a technical term for a convergence criterion under high stiffness; a lot of methods for stiff equations will still fail to be more than order 1 in that scenario, though the Rodas Rosenbrock methods and FIRKs (Radau) are pretty good with stiff accuracy).
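For concreteness, the suggested reference run could look something like this (a sketch reusing the prob object defined above):

using ODEInterfaceDiffEq
# adaptive-order Radau from Hairer's Fortran codes, used here purely as an
# independent L-stable reference at tight tolerances
sol_radau = solve(prob, radau(), abstol=1e-12, reltol=1e-12, saveat=1e-8)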

Neither is L-stable. I was suggesting that because of performance.


I have a C++ code that uses the odeint library from Boost. I run the code at reltol=1e-16 and abstol=1e-14 as the default settings. The eoms function is different because I don't use complex numbers in that code. However, the most important thing is that on the time scales considered in the plot above, the system can be linearized and solved analytically, and the analytical solution is exponentially growing.

As you predicted, Radau (abstol=1e-12, reltol=1e-12) and Rosenbrock (abstol=1e-6, reltol=1e-6) are stable, as shown in the figure below. All runs except Rosenbrock were done with abstol=1e-12 and reltol=1e-12. Sorry for the legend; the third entry should have been CVODE_Adams, 6.37 secs. For reltol=1e-12 and abstol=1e-12, VCABM seems to be better than CVODE_Adams.

Just to be sure, I also ran Radau with reltol=1e-14 and abstol=1e-14. Then Radau gives the correct solution, as shown in the figure below:

So the solution is indeed unstable, but for some solvers, including Radau, the instability is captured only at very tight tolerances. This seems a bit odd to me. Are you aware of any other ODE problems where this happens? I am just curious. Once again, I want to thank you for taking an interest in my problem.

I am considering shifting from C++ to Julia, and right now I am just trying to figure out whether there is an advantage from a running-time point of view. I already see an advantage in the fact that my code is much shorter, even if it is not optimized.


Once we get this figured out, a cleaned-up version of this would be great to have as a benchmark in DiffEqBenchmarks.


See if you can recreate the result with ODEInterfaceDiffEq.radau5, then RadauIIA5. With that, I would try pushing the L-stable methods further by using higher-precision arithmetic (BigFloat) and then start trying to solve to 1e-20 accuracy or so. radau is an adaptive-order Radau method, radau5 is the fixed 5th-order one, and RadauIIA5 is a pure-Julia implementation of that. The Fortran codes cannot handle arbitrary precision, so this is a way to get us into the arbitrary-precision domain.

Also try running VCABM with BigFloat and really tight tolerances, and Vern9.
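A minimal sketch of that setup (assuming eoms! is first made generic over the element type of u, e.g. Hself0 = zeros(complex(eltype(u)), 2, 2), instead of hard-coding ComplexF64):

setprecision(BigFloat, 256)                  # working precision in bits
u0_big = BigFloat.(u0)                       # promote the state
tspan_big = (big"0.0", big"1.0e-6")          # and the time span
prob_big = ODEProblem(eoms!, u0_big, tspan_big, p)
sol_big = solve(prob_big, RadauIIA5(), abstol=1e-20, reltol=1e-20)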

This is interesting. Normally, L-stable integrators are the ones you would trust over anything else because of how they remove unphysical oscillations, with many results along the lines of "Stable time integration suppresses unphysical oscillations in the bidomain model" (Frontiers). However, this is done by "damping at infinity", which maybe could be damping a real divergence. Or it's a stiff-accuracy thing. Either way, the behavior of either the L-stable or non-L-stable codes should flip when the tolerance gets low enough, and that would tell us the truth.
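As an aside on why "damping at infinity" can swallow a genuine instability (a standard linear-stability argument, not specific to this problem): for the scalar test equation, a one-step method produces

y'=\lambda y, \qquad y_{n+1}=R(h\lambda)\,y_n

and L-stability forces the stability function R(z) to be a rational function whose numerator degree is strictly below its denominator degree, so R(z)\to 0 as |z|\to\infty in every direction of the complex plane, including along the positive real axis. Once the step-size controller grows h to where |h\lambda| is large for a physically unstable mode (\mathrm{Re}\,\lambda>0), the method damps that mode, |R(h\lambda)|<1, instead of tracking e^{h\lambda}; only a tight tolerance keeps h small enough that R(h\lambda)\approx e^{h\lambda}. That would be consistent with the flip-at-low-tolerance behavior described above.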


In the figure below you can see that ODEInterfaceDiffEq.radau5 and RadauIIA5 show the same behavior: they are wrong at tol=1e-12 but correct at tol=1e-14. But at about 500 seconds of runtime they are more than 20 times slower than VCABM. I am not familiar with using BigFloat; I will figure it out and try it to see whether there is an advantage. Is there a disadvantage when it comes to speed when using BigFloat? In Python I have seen a huge degradation in performance when an arbitrary-precision library like mpmath is used.

Edit: BigFloat at tol=1e-12 just slows the code down by a factor of 2, and at a looser tolerance it still gives wrong results. The plot below is for tol=1e-6.

Edit 2: Vern9 does not run; it aborts with "instability detected". Is there a way to keep Vern9 from aborting when instability is detected?
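(For reference, that abort comes from the unstable_check common solver option, which by default flags NaN states; a sketch that disables it, with the caveat that NaN or Inf values will then propagate into the saved solution:)

sol = solve(prob, Vern9(), abstol=1e-12, reltol=1e-12,
            unstable_check = (dt, u, p, t) -> false)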


Definitely. This is not for performance but to figure out what is “correct”.

I'd be very wary of saying that. BigFloat increases precision and accuracy. See what happens at tol=1e-20 with BigFloats. Make sure you convert everything to BigFloat, even Hself0, Hself1, etc. ChangePrecision.jl (github.com/JuliaMath/ChangePrecision.jl), a macro to change the default floating-point precision in Julia code, can be helpful with that.
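For instance, a sketch of the macro in use (it rewrites bare float literals inside the block, so the hard-coded 2.0e0, 1.0e0, etc. stop silently demoting intermediate results back to Float64):

using ChangePrecision

@changeprecision BigFloat begin
    # literals like 2.0 below now construct BigFloats instead of Float64s
    dcosth = 2.0/AngleBins
    costharr = range(-1.0 + dcosth/2, 1.0 - dcosth/2, AngleBins)
end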


I am not sure what the BigFloat equivalent of ComplexF64 is. As far as the "correct" solution is concerned, I know it is the exponentially growing one; that can be shown analytically by linear stability analysis.


Complex{BigFloat} should do it.
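So in eoms! the scratch matrices would become, for example (the generic second form is my suggestion, so one function can serve both the Float64 and BigFloat runs):

Hself0 = zeros(Complex{BigFloat}, 2, 2)      # fixed extended precision
Hself0 = zeros(complex(eltype(u)), 2, 2)     # or generic over the state's eltype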


Thanks, I changed Hself to Complex{BigFloat} and changed reltol and abstol to 1e-20. With VCABM I get the exponentially increasing solution, as you can see in the figure below. I think what you said earlier makes a lot of sense: L-stable methods seem to make the solution stable even when the instability is physical. The exponential growth in the equations of motion in my code is due to nonlinear feedback, and some solvers may find it difficult to capture that.


You need to change everything in the solve to BigFloat, not just one part. Everything.
