Accuracy of TRBDF2 vs MATLAB's ode23tb?

I noticed that TRBDF2 solved my system the fastest but was much less accurate than MATLAB’s ode23tb. Some derivatives that should be zero are somehow non-zero even hundreds of steps later.

By this I mean: du[m:n] = 0 from solution index i to i+100 when the RHS is evaluated on the input u, but u[m:n] still keeps getting updated.
I am using TRBDF2 without auto-differentiation.
The issue goes away if I terminate the solver and restart the calculation when I am sure the derivatives will be zero.

Is this a side effect of the ‘smoothed derivatives’ mentioned in its documentation? When the derivatives are non-zero, they are pretty huge. Solvers like Rodas5P or Rosenbrock23 are correct but much slower (sometimes slower than using ode15s in MATLAB).

What other solvers should I try? My system isn’t big right now (~50 variables) but could be later.

Do you set the tolerance for each solver?

Yes, each solver is supplied a reltol and abstol (of 1e-6).

Can you provide some example code for the folks here to check?

I’ll try to come up with an MWE, but it’ll take some time. I don’t know much about solvers, so I was wondering if someone has encountered this before and had an opinion until I can put an MWE together.
A naive pseudocode I have in mind could be something like

du[1:5] = rand(5)
du[6:8] = t in (10, 12) ? rand(3)*1e10 : 0
tspan = (0, 50)

But I’ll have to get back and run it.

This is not an ODE. Is your actual system sampling random numbers?

My actual system is not random numbers. I was just thinking of a system that’d recreate a regime with large derivatives and then zero derivatives later.

My system is more like two balloons floating in air and forces peak when they collide.


This needs an example. With rands in there, I suspect there are model code issues… hard to know what’s going on here without seeing code.

Okay, so I have an MWE here. The model may not make much physical sense, but I made it to recreate sharp gradients with small oscillations restricted to a certain domain. (My actual problem is much bigger and more complicated than this.)
Also, here the condition on u[3] is easily translatable to a condition in time as u[3] is just 3*t.

using OrdinaryDiffEq, LinearAlgebra  # LinearAlgebra for norm in the callback

function dynamics!(du,u,p,t)
    du[1] = 2.0*u[3]
    du[2] = 4.
    du[3] = 3.
    du[4] = ((u[3]>6) && (u[3]<20)) ? p*1e5*(u[2]-u[3])^2*u[1]*sin(t) : 0.0
    du[5] = ((u[3]>6) && (u[3]<20)) ? 1e8*u[1]*cos(t) : 0.0
end

function condition(u,t,integrator)
    u[3]>25.5 && norm(get_du(integrator)[4:5])!=0
end
function affect!(integrator)
    @show integrator.t, integrator.u
    terminate!(integrator)
end
cb = DiscreteCallback(condition,affect!)

prob = ODEProblem{true}(dynamics!, zeros(5), (0, 10), 20)
sol = solve(prob, Tsit5(); callback = cb, dtmax = 0.01, reltol = 1e-6, abstol = 1e-6, maxiters = 2*10^5)
sol2 = solve(prob, TRBDF2(); callback = cb, dtmax = 0.01, reltol = 1e-6, abstol = 1e-6, maxiters = 2*10^5)

The Tsit5() solution is not stopped by the callback and runs smoothly, while TRBDF2 is stopped. If TRBDF2 is run without the callback, u[4] and u[5] still keep getting updated at t=10 (which is at least 300 time steps after u[3]=20).
With the same tolerances, MATLAB’s ode23tb does not run into these issues.

My question is: if they are similar algorithms, why is there such a big difference?

This seems to be plenty fast and working just fine as long as you don’t expect the du to be exactly 0, but rather a number below abstol. The Rosenbrock methods get your du correct to floating-point precision, but TRBDF2 only computes the du to within the numerical tolerance.
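To make that "below abstol counts as zero" reading explicit, a small helper can be used when inspecting `get_du` output. This is a hypothetical sketch: `iseffectively_zero` is not a library function, and the 1e-6 default is just the abstol used in this thread.

```julia
# Hypothetical helper: treat a derivative vector as zero when every
# component is within the solver's absolute tolerance (1e-6 here,
# matching the abstol used in this thread).
iseffectively_zero(du, tol = 1e-6) = all(abs.(du) .<= tol)
```

For example, `iseffectively_zero(get_du(integrator)[4:5])` instead of comparing against exact 0.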

I’m also not seeing the timing differences between TRBDF2 and Rodas5P if you don’t set dtmax (which is an option that should basically never be used).

julia> @btime sol = solve(prob, Rodas5P(); reltol =1e-6, abstol =1e-6);
  432.221 μs (674 allocations: 66.91 KiB)

julia> @btime sol = solve(prob, TRBDF2(); reltol =1e-6, abstol =1e-6);
  381.361 μs (580 allocations: 53.39 KiB)

Is this solved? The issue was just that dtmax was set really low. I presume you didn’t do that on the MATLAB side? I presume that instead of dtmax you instead intended to use saveat.

I believe it’s solved. The one thing that is slightly interesting is that get_du for TRBDF2 is a bit inaccurate for the last two equations. I think that might be an inherent property of the Newton solver (i.e. it stops improving once it’s found a solution within tolerance), but it is somewhat surprising that when your equation is du[5] = 0, get_du doesn’t return 0 exactly.

I think it’s partially because this method does not have a great interpolation on it, and we should improve the interpolation choice here.


I’d say it’s solved, if non-zero gradients for the last two equations are expected behavior for TRBDF2. Thanks @Oscar_Smith for your inputs.
My main issue stemmed from expecting exactly zero derivatives (because I intend to use that as a callback condition later), so I’ve switched to non-stiff solvers just to be sure of the accuracy. I’ll then try TRBDF2 again and compare derivatives against a tolerance rather than 0, to see if the results match.
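For reference, a sketch of how the callback condition from the MWE might compare against a tolerance instead of exact zero. The `dutol` threshold and the `du_nonzero` helper are assumptions (chosen to track the abstol of 1e-6), not part of the original code or any library:

```julia
using OrdinaryDiffEq, LinearAlgebra  # norm comes from LinearAlgebra

# Hypothetical tolerance for deciding when derivatives are "non-zero";
# tied to the abstol = 1e-6 used elsewhere in this thread.
const dutol = 1e-6

# Helper: is the derivative sub-vector non-zero in the tolerance sense?
du_nonzero(du, tol) = norm(du) > tol

# Callback condition: fire once u[3] has passed 25.5 AND the derivatives
# of equations 4:5 still exceed the tolerance, instead of requiring an
# exact != 0 comparison.
condition_tol(u, t, integrator) = u[3] > 25.5 && du_nonzero(get_du(integrator)[4:5], dutol)
```

The `!= 0` check in the original `condition` is the part that is fragile with an implicit solver; only the comparison changes here.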

I presume you didn’t do that on the MATLAB side? I presume that instead of dtmax you instead intended to use saveat.

I use the same dtmax on both platforms. I want to set a dtmax to resolve the timescales associated with the problem. I believe that must have been the reason for having that feature in the first place?

dtmax should almost never be used (its only use is when debugging or working with solvers with buggy error control). Unless you have very strong reasons to distrust the accuracy of your solver, you should not set dtmax and should let the adaptivity figure out an appropriate dt on its own.
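A minimal sketch of that pattern on a toy exponential-decay problem (not the MWE above): let the error control pick the steps, and use `saveat` to get output on a fine grid.

```julia
using OrdinaryDiffEq

# Toy problem: du/dt = -u, u(0) = 1. Adaptive stepping chooses dt on its
# own; saveat = 0.01 requests output every 0.01 time units without
# clamping the internal step size the way dtmax would.
decay!(du, u, p, t) = (du[1] = -u[1]; nothing)
prob_demo = ODEProblem(decay!, [1.0], (0.0, 1.0))
sol_demo = solve(prob_demo, TRBDF2(); reltol = 1e-6, abstol = 1e-6,
                 saveat = 0.01)
```

`sol_demo.t` is the requested grid 0.0:0.01:1.0 even though the solver may have taken far fewer (or differently placed) internal steps.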

Rosenbrock methods should be able to enforce exact zeros better than an implicit method.
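A sketch of that distinction on a toy system where one component's RHS is identically zero (assuming the default AD-generated Jacobians; this is an illustration of the tendency, not a guarantee for every problem):

```julia
using OrdinaryDiffEq

# The second state has du[2] = 0 identically, so it should stay constant.
# A Rosenbrock method such as Rodas5P builds each step from linear solves
# with the (AD) Jacobian, so the zero is preserved essentially exactly,
# whereas a Newton-based implicit method like TRBDF2 only drives the
# residual below tolerance.
f!(du, u, p, t) = (du[1] = -u[1]; du[2] = 0.0; nothing)
prob2 = ODEProblem(f!, [1.0, 0.5], (0.0, 1.0))
sol_ros = solve(prob2, Rodas5P(); reltol = 1e-6, abstol = 1e-6)
```

Here `sol_ros` should hold u[2] at 0.5 to (near) machine precision across all saved points.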