Imposing constraints on solutions by DifferentialEquations.jl

I have N linearly coupled ODEs that DifferentialEquations.jl is able to solve - but with a huge amount of error. I detect the error because the solutions are supposed to satisfy a particular constraint at all time steps, but they don’t - they violate this constraint strongly and very quickly.

Is it somehow possible to tell the solver that the solutions are supposed to obey a particular constraint (the absolute value squared of the solutions must sum to 1 at each time step)? If I write this constraint as another differential equation in the problem then it’s an over-determined system and I am not sure the solver can handle this.
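(Concretely, the check I do at each saved time step looks roughly like this sketch, where u stands for the state vector at that step:)

# rough sketch of the constraint check; u is the state vector at one saved time step
constraint_violation(u) = abs(sum(abs2, u) - 1)   # should stay ≈ 0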

Thanks!

This sounds like you want the energy (e.g., norm squared) of the system to be discretely conserved. If your system has the appropriate structure, symplectic timesteppers would preserve the energy.

1 Like

actually, I’m solving for probability amplitudes and want the probability to be conserved - would the symplectic timesteppers help with those too?

They don’t quite, but close enough :sweat_smile:. See

Give symplectic integrators a try.
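For reference, a minimal sketch of what a symplectic solve looks like in DifferentialEquations.jl - a harmonic oscillator stands in for your actual system here, since these integrators want the partitioned (v, u) form:

using DifferentialEquations

# Minimal sketch: symplectic integrators act on partitioned (dynamical) problems.
# The harmonic oscillator below is only a stand-in for a real physical system.
dv!(dv, v, u, p, t) = (dv .= -u)   # dv/dt = -u
du!(du, v, u, p, t) = (du .= v)    # du/dt = v

prob = DynamicalODEProblem(dv!, du!, [0.0], [1.0], (0.0, 100.0))
sol = solve(prob, VelocityVerlet(), dt=0.1)   # fixed-step symplectic method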

2 Likes

Oh right, they preserve a “nearby” energy…

1 Like

thanks, I tried using them, but I get an error saying that they are designed for partitioned ODEs, and my ODEs don’t appear to be split/partitioned. I have N functions whose first-order time derivatives depend on themselves as well as on the previous and next function (this is of course different for the 1st and Nth functions). But from what I read in the documentation, split ODEs require that the derivative of a function not depend on that function itself!

Try https://github.com/SciML/IRKGaussLegendre.jl - Implicit Runge-Kutta Gauss-Legendre 16th order (Julia)
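Usage would look roughly like this (just a sketch on a toy in-place problem, not your system):

using DifferentialEquations, IRKGaussLegendre

# rough usage sketch of the 16th-order Gauss-Legendre solver on a toy problem
f!(du, u, p, t) = (du .= -u)
prob = ODEProblem(f!, [1.0], (0.0, 1.0))
sol = solve(prob, IRKGL16())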

1 Like

thanks, I think it doesn’t work for complex values, because I get “MethodError: no method matching isless(::Complex{BigFloat}, ::Float64)” - it’s the same error I get with Rosenbrock23(autodiff=true), which goes away if I use Tsit5() or Rosenbrock23(autodiff=false)!

…or do you know of any other way to reduce the error given by any of the default solvers? I’ve already tried tweaking the tolerances, but they don’t have any appreciable effect on the error!

Did you double check that the Hamiltonian is truly Hermitian? I’m assuming you’re doing QM because you were speaking of probability amplitudes.
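(If you do have an explicit coefficient matrix H with dϕ/dt = -im*H*ϕ - just an assumption on my part - a quick sanity check would be something like:)

using LinearAlgebra

# toy stand-in for the coefficient matrix H in dϕ/dt = -im*H*ϕ;
# total probability is conserved exactly only when H is Hermitian
H = [0.0 1.0; 1.0 0.5]
ishermitian(H)   # true for this toy matrix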

1 Like

Actually, I’m not working with a Hamiltonian exactly, but with something very much related to it (and yes, I’m doing QM). My equations look like the following picture, and all the a_i and b_i are real, so I’m sure things are okay and it’s a solvable system. The constraint \sum_i |\phi_i(t)|^2 = 1 \;\forall\, t comes from physical requirements.
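(In case the picture doesn’t come through: schematically, the system has the shape sketched below, with nearest-neighbour coupling and with a and b standing in for the a_i and b_i above - this is only an illustration, not my actual equations.)

# Schematic only - not the actual equations from the picture.
# Each amplitude ϕ_n couples to itself and to its nearest neighbours.
function coupled_rhs!(dϕ, ϕ, (a, b), t)
    N = length(ϕ)
    for n in 1:N
        dϕ[n] = -im * a[n] * ϕ[n]
        n > 1 && (dϕ[n] -= im * b[n-1] * ϕ[n-1])
        n < N && (dϕ[n] -= im * b[n] * ϕ[n+1])
    end
end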

not sure if I managed to “reply” to you in the previous comment, so I’m making another comment (I’m very new to Discourse and Julia)

You did reply to me :slight_smile:

I’d think the eigenvalues of the matrix lie outside the region of stability of the explicit algorithms you’ve tried. Maybe try ImplicitMidpoint(autodiff=false)? (autodiff is set to false because it won’t work for complex-valued functions.)

I tried that with a two spin Heisenberg XY toy model and it remains stable for long times.

1 Like

Thanks - unfortunately, it doesn’t work for complex equations (I think). I used the following M(n-)WE and got MethodError: no method matching eps(::Type{ComplexF64}). I think the solver uses eps() somewhere to compute the distance between two numbers and it doesn’t account for complex numbers - it’s roughly the same issue I get with IRKGL16() as well. Both of these solvers work when I have real equations :frowning:

using DifferentialEquations

function simpleprob!(du::Vector{Complex{Float64}}, u::Vector{Complex{Float64}}, p, t)
    du[1] = u[2]
    du[2] = -im*u[1]
end

u0 = [1.0+0.0im, 0.0+0.0im]

prob1 = ODEProblem(simpleprob!, u0, (0.0, 1.0), nothing)
sol1 = solve(prob1, ImplicitMidpoint(autodiff=false), dt=0.1)  # MethodError: no method matching eps(::Type{ComplexF64})

Your example runs out-of-place using static vectors - which is what I tried too. Depending on the size of your system, in-place updating might not even be worth it.
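Roughly what I mean, as a sketch based on your first example (not tested against your real problem):

using DifferentialEquations, StaticArrays

# out-of-place right-hand side returning an SVector instead of mutating du
simpleprob(u, p, t) = SVector(u[2], -im*u[1])

u0 = SA[1.0+0.0im, 0.0+0.0im]
prob = ODEProblem(simpleprob, u0, (0.0, 1.0), nothing)
sol = solve(prob, ImplicitMidpoint(autodiff=false), dt=0.1)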

Maybe worth filing an issue over.

1 Like

But that aside, do you have an example of the error growth? I don’t see an MWE that shows a low-tolerance solution of what you’re looking for. Symplectic integrators only matter for long-time integration, and I don’t see an example of that here.
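Something along these lines (a sketch with a toy two-state system standing in for your real problem) is the kind of thing that would show it:

using DifferentialEquations

# sketch: integrate for a while and track the norm-constraint violation
f!(du, u, p, t) = (du[1] = -im*u[2]; du[2] = -im*u[1])
prob = ODEProblem(f!, [1.0+0.0im, 0.0+0.0im], (0.0, 100.0))
sol = solve(prob, Tsit5(), abstol=1e-10, reltol=1e-10, saveat=0.1)
drift = [abs(sum(abs2, u) - 1) for u in sol.u]   # constraint violation at each saved step
maximum(drift)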

1 Like

Yeah, I did file an issue at the GitHub of IRKGaussLegendre.jl asking for a solution that incorporates complex numbers. Can you also kindly try the following M(n-)WE, where I’ve put im on both du[1] and du[2]?

using DifferentialEquations

function simpleprob!(du::Vector{Complex{Float64}}, u::Vector{Complex{Float64}}, p, t)
    du[1] = -im*u[2]
    du[2] = -im*u[1]
end

u0 = [1.0+0.0im, 0.0+0.0im]

prob1 = ODEProblem(simpleprob!, u0, (0.0, 1.0), nothing)
sol1 = solve(prob1, ImplicitMidpoint(autodiff=false), dt=0.1)

As I was constructing an MWE to share here, I removed the a_n and b_n from it and, magically, the integration with Tsit5() works perfectly! I get really excellent results.

I’m sure that the a_n and b_n I have are correct. They are real, with values between -1 and 10, stored as BigFloat. Would you be able to tell me what’s going wrong? Maybe I should convert them to Float64 or Float32 before using them in the integrator, as I’m working with reltol=1e-16 and abstol=1e-16?

You cannot solve things in 32 bits or 64 bits to a relative tolerance of 1e-16. That’s just impossible: you cannot even represent numbers to 16 digits exactly.
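For reference, the machine epsilons of those formats:

julia> eps(Float64)
2.220446049250313e-16

julia> eps(Float32)
1.1920929f-7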

1 Like

right, I tried with a tolerance of 1e-6 and it still doesn’t work with these numbers for some reason!