DifferentialEquations.jl error when solving equations with complex variables

I tried to solve the problem u' = -im * E * u, whose solution is u = const * e^(-im * E * t). Since |e^(-im * E * t)| = 1, the modulus |u| should stay constant in time.

My code is:

using DifferentialEquations
using Plots; pyplot()

E = 200.0
function Eq(t::Float64, u::Array{Complex128,1}, du::Array{Complex128,1})
    du[1] = -im * E * u[1]
    du[2] = -im * E * u[2]
end

u = [1.0 + 0.0im, 1.5 + 0.0im]
T = 1000.0
prob = ODEProblem(Eq, u, (0.0, T))
sol = solve(prob; save_everystep=true, dense=false)

nt2 = length(sol.t)
c1 = zeros(nt2)
c2 = zeros(nt2)
for i in 1:nt2
    c1[i] = abs(sol.u[i][1])
    c2[i] = abs(sol.u[i][2])
end
plot()
plot!(sol.t, c1)
plot!(sol.t, c2)

I expected c1 and c2 to be constant, but the results computed with DifferentialEquations.jl are not. Why does this happen?


This is a crosspost: Problem occurs when solving ODE with complex number variables · Issue #227 · SciML/DifferentialEquations.jl on GitHub.

It’s just standard numerical integration error. Take a look at this notebook which explains the three things you can do in more detail:

http://nbviewer.jupyter.org/github/JuliaDiffEq/DiffEqTutorials.jl/blob/master/PhysicalModels/KeplerProblem.ipynb

  1. You can reduce the tolerances so that the drift is smaller. With tolerances low enough, the drift is effectively non-existent because the solution is more exact.
  2. You can set up your problem to use a symplectic integrator. These have good properties for long-time integration that limit this kind of numerical drift (they don't actually stop it, but they are designed to make it much smaller).
  3. You can use manifold projection to project back onto the solution manifold whenever you're sufficiently far away. I.e., you can define the manifold g(u) to be the set of u with constant energy, and the projection will then ensure the solution has constant energy. This method keeps the same order of accuracy; the proof is a simple triangle inequality left for you to check.

Tolerance control

Let’s take a quick look at these. Changing the tolerances is the easiest. The setup code is:

using DifferentialEquations
using Plots; plotly()

E = 200.0
function Eq(t::Float64, u::Array{Complex128,1}, du::Array{Complex128,1})
    du[1] = -im * E * u[1]
    du[2] = -im * E * u[2]
end

u = [1.0 + 0.0im, 1.5 + 0.0im]
T = 1000.0
prob = ODEProblem(Eq, u, (0.0, T))

Now the base case is:

sol = solve(prob; save_everystep=true, dense=false)
nt2 = length(sol.t)
c1 = zeros(nt2)
c2 = zeros(nt2)
for i in 1:nt2
    c1[i] = abs(sol.u[i][1])
    c2[i] = abs(sol.u[i][2])
end
plot(sol.t, [c1, c2])

[plot: c1 and c2 drift steadily away from their initial values over the integration]

Oof, that is rough: the absolute value should be constant, but there's a clear drift. What happens is that the error grows slowly until the solution gets away from the manifold it should be on, and off the correct solution manifold the solution actually diverges and gets worse!

But if we lower the tolerances:

sol = solve(prob; save_everystep=true, dense=false, abstol=1e-6, reltol=1e-6, maxiters=Int(1e6))
nt2 = length(sol.t)  # the step count changes with the tolerances, so recompute
c1 = zeros(nt2)
c2 = zeros(nt2)
for i in 1:nt2
    c1[i] = abs(sol.u[i][1])
    c2[i] = abs(sol.u[i][2])
end
plot(sol.t, [c1, c2])

[plot: c1 and c2 now stay essentially constant over the integration]

You can see it’s great: the total drift is less than 0.01 for each!

And of course you can keep lowering the tolerances, making the integration more exact at the cost of more computation.
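Since the exact solution is known here, you can also check the error directly rather than eyeballing the plots. A quick sketch (untested) that compares the numerical solution against u(t) = u0 * exp(-im * E * t):

exact(u0, t) = u0 * exp(-im * E * t)
err1 = maximum(abs(sol.u[i][1] - exact(1.0 + 0.0im, sol.t[i])) for i in 1:length(sol.t))
err2 = maximum(abs(sol.u[i][2] - exact(1.5 + 0.0im, sol.t[i])) for i in 1:length(sol.t))

If err1 and err2 shrink as you tighten abstol/reltol, the drift really is just truncation error.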

Symplectic and Nyström integration

For this case I won't create the DynamicalODEProblem because it would require separating the real and imaginary parts (a rough sketch of that split follows after the link), but this notebook shows how you'd do it for most problems where you'd want a symplectic integrator:

http://nbviewer.jupyter.org/github/JuliaDiffEq/DiffEqTutorials.jl/blob/master/PhysicalModels/ClassicalPhysics.ipynb
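Here's roughly what that split would look like for this equation. Writing u = x + iy, the system u' = -im * E * u becomes x' = E*y, y' = -E*x, i.e. x'' = -E^2 * x: a harmonic oscillator, which is exactly the second-order form the symplectic and Runge-Kutta-Nyström integrators want. A sketch (untested; double-check the SecondOrderODEProblem signature against the docs for your version):

using DifferentialEquations

E = 200.0
# u = x + i*y turns u' = -im*E*u into x'' = -E^2 * x, with y recoverable as x'/E
function ddx(ddu, du, u, p, t)
    ddu .= -(E^2) .* u
end
x0  = [1.0, 1.5]      # real parts of the initial condition
dx0 = [0.0, 0.0]      # x'(0) = E*y(0) = 0 since the initial imaginary parts are zero
prob2 = SecondOrderODEProblem(ddx, dx0, x0, (0.0, 1000.0))
sol2 = solve(prob2, DPRKN12())  # a high-order Runge-Kutta-Nyström method

The modulus is then recovered as sqrt(x^2 + (x'/E)^2).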

Manifold projection

Let me show off the manifold projection now. In this case we know the solution must live on the manifold where the modulus of each component is conserved (here |u[1]| = 1.0 and |u[2]| = 1.5). So let's define that manifold as the residual of a function g(u) and use the ManifoldProjection callback from the callback library.

http://docs.juliadiffeq.org/latest/features/callback_library.html#Manifold-Conservation-and-Projection-1

The code for this would be:

function g(u, resid)
  resid[1] = abs(u[1]) - 1.0   # |u1| should stay at its initial value 1.0
  resid[2] = abs(u[2]) - 1.5   # |u2| should stay at its initial value 1.5
end
cb = ManifoldProjection(g)
sol = solve(prob; save_everystep=true, dense=false, abstol=1e-6, reltol=1e-6, maxiters=Int(1e6), callback=cb)
nt2 = length(sol.t)
c1 = zeros(nt2)
c2 = zeros(nt2)
for i in 1:nt2
    c1[i] = abs(sol.u[i][1])
    c2[i] = abs(sol.u[i][2])
end
plot(sol.t, [c1, c2])

I say "would be" because this currently fails: the rootfinding package NLsolve.jl ( https://github.com/JuliaNLSolvers/NLsolve.jl ) cannot handle complex numbers. That in turn is because the differentiation packages it relies on, ForwardDiff.jl and Calculus.jl, don't support complex numbers, something we're actively working on. Once NLsolve.jl is compatible with complex numbers, this will work. So you can see that you simply define the zeros of g to be the solution manifold, and the callback will project onto it. Lovely.

But which do you use?

Yes, that's always the interesting question. Those are the three methods; which is best? Well, for now, if your problem has complex numbers then don't use the manifold projection :smile:.

But for these kinds of things, take a look at the DiffEqBenchmarks.jl repository.

For this kind of problem we have two tests which examine the different choices if you want a high accuracy integration.

http://nbviewer.jupyter.org/github/JuliaDiffEq/DiffEqBenchmarks.jl/blob/master/DynamicalODE/Quadrupole_boson_Hamiltonian_energy_conservation_benchmark.ipynb

http://nbviewer.jupyter.org/github/JuliaDiffEq/DiffEqBenchmarks.jl/blob/master/DynamicalODE/Henon-Heiles_energy_conservation_benchmark.ipynb

From the benchmarks, it seems the high-order Runge-Kutta-Nyström method DPRKN12 is a very efficient choice for this kind of problem, so you'll likely want to set up your problem as a partitioned ODE (as sketched above) and use that.

If you want to read more about integration error in long-time integrations, I suggest picking up "Hairer III", i.e. Geometric Numerical Integration by Hairer, Lubich, and Wanner. As for benchmarks, if you want more that are closer to a specific problem you have in mind, feel free to suggest a problem and we can help you make a notebook. Hopefully this explains it in enough detail; it's a very large subject to condense into a single post.


For this very simple constraint it's overkill to solve the projection with a Newton method; can't you just renormalize u?

PS: Optim has a library of manifolds with explicit projections and tangent-space calculations at https://github.com/JuliaNLSolvers/Optim.jl/blob/master/src/Manifolds.jl (small right now because it was spun out of my own usage, but with potential to grow). If you're interested, we could spin Manifolds off into a separate package and decide on an API that works for DiffEq as well as Optim.

Yes, for this simple case you can just make a DiscreteCallback that renormalizes u at the end of each step, and for this case that's what I would recommend (sketch below). But that doesn't generalize well, and I assume the OP is looking at this for something like energy preservation, in which case projecting onto the manifold via a rootfinder is necessary. Of course, always pick the right tool for the job.
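The renormalization callback would look something like this (a sketch, untested; it reuses u and prob from the earlier code and assumes the condition(u, t, integrator) callback signature):

target = abs.(u)                    # target moduli from the initial condition, here [1.0, 1.5]
condition(u, t, integrator) = true  # fire after every accepted step
function renormalize!(integrator)
    for i in eachindex(integrator.u)
        integrator.u[i] *= target[i] / abs(integrator.u[i])
    end
end
cb = DiscreteCallback(condition, renormalize!; save_positions=(false, false))
sol = solve(prob; abstol=1e-6, reltol=1e-6, maxiters=Int(1e6), callback=cb)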

That could be awesome! Much more efficient than general manifold projections. Could you spin that off into a separate package and give it a little documentation? That's probably all that's needed to build a callback for the callback library that utilizes it.

Thank you very much. I should have figured it out when I ran println(sol.t) and saw that there were only about 3000 time steps in the integration.