# Awkward ODE problem

I have a differential equation of the kind

$$
\dot q_1 = f(q_1, q_2), \qquad A(q_1, q_2)\,\dot q_2 = B(q_1, q_2)\,\dot q_1
$$

where q_1, q_2 are vectors, A is a symmetric, positive-definite real matrix, and B is a real (non-square) matrix.
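
In code, the setup looks roughly like this (a sketch only — `f`, `A`, `B`, `n1`, and the initial data are placeholders for the actual problem; q_1 and q_2 are stored in one combined state vector):

```julia
using OrdinaryDiffEq

# Sketch of the combined-state formulation.
function rhs!(dq, q, p, t)
    q1,  q2  = @view(q[1:p.n1]),  @view(q[p.n1+1:end])
    dq1, dq2 = @view(dq[1:p.n1]), @view(dq[p.n1+1:end])
    dq1 .= p.f(q1, q2)                       # \dot q_1 = f(q_1, q_2)
    dq2 .= p.A(q1, q2) \ (p.B(q1, q2) * dq1) # solve A \dot q_2 = B \dot q_1
    return nothing
end

# Placeholder initial data and time span:
prob = ODEProblem(rhs!, vcat(q1_0, q2_0), (0.0, 1.0),
                  (n1 = length(q1_0), f = f, A = A, B = B))
sol  = solve(prob, Tsit5())
```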

When I use Tsit5 or RK4, the time step dt fluctuates a lot and the solver sometimes gets stuck in a loop, evaluating the r.h.s. over and over without advancing in time. The problem goes away when I set \dot q_2 = 0.

So I suspect that the solution of the second (linear) equation is not accurate enough, which makes the error estimate of the adaptive methods unreliable.

Any ideas what could be a better approach here? Am I using the wrong solvers?

Is this setup using mass matrices? That wouldn't work with explicit methods, so I'm a bit confused there.

How do the implicit or Rosenbrock methods do?

In practice I am inverting A (it should always be invertible). I am not using the mass-matrix solvers in OrdinaryDiffEq (yet).

EDIT: I should be more precise. I run the problem as

$$
\dot q_2 = A^{-1} B \,\dot q_1
$$

using `ldiv!` from LinearAlgebra.

The vectors q_1 and q_2 have 4000 components combined. I'm not sure implicit or Rosenbrock methods are feasible here; I have no idea how I would compute the Jacobian in a practical amount of time.
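
Concretely, the linear-solve step I'm doing looks like this (sketch with placeholder names `Amat`, `Bmat`, `dq1`, `dq2`; since A is symmetric positive definite, a Cholesky factorization with `ldiv!` avoids ever forming A^{-1} explicitly):

```julia
using LinearAlgebra

# Solve A * dq2 = B * dq1 in place; A is symmetric positive definite,
# so cholesky is cheaper and more stable than inv(A) * (B * dq1).
F = cholesky(Symmetric(Amat))
ldiv!(dq2, F, Bmat * dq1)
```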

Try FBDF. It's often really good.

Thanks, I tried ImplicitEuler with Krylov GMRES and autodiff=false, which works (albeit very slowly).

I’ll try FBDF next

ImplicitEuler is essentially always slow; that's kind of by definition, though. Try the efficient methods, FBDF, QNDF, etc., if you need efficiency.
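
For reference, a sketch of what that looks like (assuming `prob` is your `ODEProblem`; `KrylovJL_GMRES` comes from LinearSolve.jl). The point for a 4000-component state is that a Krylov linear solver never needs the full Jacobian, only Jacobian-vector products:

```julia
using OrdinaryDiffEq, LinearSolve

# Matrix-free stiff solve: GMRES only requires J*v products (computed via
# finite differences when autodiff = false), so the 4000×4000 Jacobian is
# never formed or stored.
sol = solve(prob, FBDF(linsolve = KrylovJL_GMRES(), autodiff = false))
```

QNDF can be swapped in the same way if FBDF struggles.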


So, just to close this thread out:

I changed the formulation of the problem so that the integrand fluctuates less and therefore appears less stiff to the solvers. It now runs nicely with RK4() (and probably Tsit5() as well, but each round of testing takes a long time, so I won't let perfect be the enemy of good and will leave it at that).