Only the explicit Euler method successfully solves my ODE

I realize this is not a very good question due to the nature of the problem, but I have to try since I’ve spent hours debugging this. I am interested in solving a set of ODEs that can be written as follows:

using Elliptic.Jacobi

# In-place RHS for the 2-component system; p = [λ, m]
function dn_ab!(du,u,p,t)
    du[1] = +im*p[1]*u[1] + im*u[2]*dn(t,real(p[2]))
    du[2] = -im*p[1]*u[2] + im*u[1]*dn(t,real(p[2]))
end
# Initial condition (χ, Ω, λ are scalar parameters here)
function ab_dn_t0(x)
    A = +χ + 0.5*Ω*λ*x - π/4
    B = -χ + 0.5*Ω*λ*x - π/4
    return [2*im*sin(A); 2*cos(B)]
end
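
In math notation, reading the parameters off the code as p = [\lambda, m], this is the linear system

u_1' = +i\lambda\, u_1 + i\,\mathrm{dn}(t, m)\, u_2
u_2' = -i\lambda\, u_2 + i\,\mathrm{dn}(t, m)\, u_1

where \mathrm{dn} is the Jacobi elliptic function from Elliptic.Jacobi.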

I asked about this before here: Solving same DiffEq repeatedly but with different initial conditions

The equation is solved multiple times, and the solutions are recursed over and combined nonlinearly to generate a final solution, \psi, which is the only quantity I have any physical intuition about. I will include a MWE showing how this \psi is generated (sorry about that; I did my best to simplify things, but I have no intuition at all for the raw ODE solutions, since they are just intermediate steps).

Using any algorithm except Euler() to solve the equation, I get a garbage solution, something like this (using Tsit5()):

[heatmap: garbage solution produced by Tsit5()]

And the correct solution, obtainable only with Euler():

[heatmap: correct solution produced by Euler()]

Everything else used to generate this final solution \psi is correct; I have spent countless hours checking, and the result is perfectly correct when I use Euler().

My questions are:

  1. Why does nothing work except Euler()?
  2. Is there a way to use a different algorithm? Is there some flag I missed in the documentation?

MWE (I did my best to make it as small as possible, I promise):
You only need to change the line near the end that defines algo to see the drastic effect on the solution.
I’ve tried almost all the explicit non-stiff algorithms in the documentation and they all give various types of garbage results.

using Elliptic.Jacobi
using DifferentialEquations
using Plots

function solve!(λ, m, χ, Ω, Nₓ, Nₜ, x, t, algo)
    N = length(λ)
    ψₜ = Array{Complex{Float64}}(undef, Nₜ, Nₓ, N+1)
    ψₜ[:,:,1] .= exp.(im*x'.*(1-m/2)).*dn.(t,m)
    calc_rs(N, 1, ψₜ, λ, χ, Ω, Nₓ, Nₜ, x, t, algo)
    return ψₜ[:,:,N+1]
end

function calc_rs(n, p, ψₜ, λ, χ, Ω, Nₓ, Nₜ, x, t, algo)
    @info "Calculating Lax pair generating functions rₙₚ(x,t) and sₙₚ(x,t) for (n,p) = ($n,$p)"
    # Base case
    if n == 1
        # Allocate empty arrays to hold lax pair generating functions
        rf = Array{Complex{Float64}}(undef, Nₜ, Nₓ)
        sf = Array{Complex{Float64}}(undef, Nₜ, Nₓ)
        # Transpose x
        x = x'

        # Set up a function for the ODE (same system as above)
        function dn_ab!(du,u,p,t)
            du[1] = +im*p[1]*u[1] + im*u[2]*dn(t,real(p[2]))
            du[2] = -im*p[1]*u[2] + im*u[1]*dn(t,real(p[2]))
        end
        # And the initial condition
        function ab_dn_t0(x)
            A = +χ[p] .+ 0.5.*Ω[p]*λ[p].*(x) .- π/4
            B = -χ[p] .+ 0.5.*Ω[p]*λ[p].*(x) .- π/4
            return [2*im*sin(A); 2*cos(B)]
        end

        # Prepare the ODE integrator
        tspan = (0.0,abs(minimum(t)))
        u0 = ab_dn_t0(x[1])
        prob = ODEProblem(dn_ab!, u0, tspan, [λ[p], m])
        dt = t[2]-t[1]
        integrator = init(prob, algo, saveat=abs.(t[Nₜ÷2+1:-1:1]), dt = dt/10)

        # Loop over x and solve the ODE for each initial condition
        # Should do this using the ensemble interface in the future
        for i in eachindex(x)
            # Set up the new initial condition
            u0 = ab_dn_t0(x[i])
            reinit!(integrator, u0)
            # Solve
            DiffEqBase.solve!(integrator)

            # Multiply by phase factor
            rf[Nₜ÷2+1:-1:1, i] .= integrator.sol[1, :].*exp.(+im*(x[i])/4*(2*m-1))
            sf[Nₜ÷2+1:-1:1, i] .= integrator.sol[2, :].*exp.(-im*(x[i])/4*(2*m-1))
            # Mirror t -> -t (only half the time domain is solved)
            rf[Nₜ÷2+2:end, i] .= rf[Nₜ÷2:-1:2, i]
            sf[Nₜ÷2+2:end, i] .= sf[Nₜ÷2:-1:2, i] 
        end
        # Compute ψ₁ when p = 1
        if  p == 1
            ψₜ[:,:,n+1] .= ψₜ[:,:,1] .+ (2*(λ[n]' - λ[n]).*sf.*conj.(rf)) ./
                              (abs2.(rf) + abs2.(sf))
        end
    else
        # Recursion when n != 1
        # Calculate r_{n-1, 1} and r_{n-1, p+1}, and similarly for s
        r1, s1 = calc_rs(n-1, 1, ψₜ, λ, χ, Ω, Nₓ, Nₜ, x, t, algo)
        r2, s2 = calc_rs(n-1, p+1, ψₜ, λ, χ, Ω, Nₓ, Nₜ, x, t, algo)
        # Apply the DT equations
        rf = ((λ[n-1]'  - λ[n-1] ) .* conj.(s1) .* r1 .* s2 +
              (λ[p+n-1] - λ[n-1] ) .* abs2.(r1) .* r2       +
              (λ[p+n-1] - λ[n-1]') .* abs2.(s1) .* r2        ) ./
              (abs2.(r1) + abs2.(s1))
        sf = ((λ[n-1]'  - λ[n-1] ) .*       s1  .* conj.(r1) .* r2 +
              (λ[p+n-1] - λ[n-1] ) .* abs2.(s1) .*       s2        +
              (λ[p+n-1] - λ[n-1]') .* abs2.(r1) .*       s2         ) ./
              (abs2.(r1) + abs2.(s1))
        if p == 1
            # Compute ψₙ when p = 1
            ψₜ[:,:,n+1] .= ψₜ[:,:,n] + (2*(λ[n]' - λ[n]).*sf.*conj.(rf)) ./
                                      (abs2.(rf) + abs2.(sf))
        end
    end

    return rf, sf
end

# Solution Parameters
λ = [0.9703563759131811im, 0.9278305490267467im, 0.8522446238645338im, 0.7334225464201684im]
m = 0.0625
χ = [0.7030210896262424, 0.618294940689623, 0.5281884841525464, 0.4276187698960945]
Ω = [0.32801962530412965, 0.6560392506082596, 0.9840588759123894, 1.3120785012165201] 

# Simulation Parameters
Nₓ = 1000
xᵣ = -10=>10
dx = (xᵣ.second - xᵣ.first)/Nₓ
x = collect(xᵣ.first:dx:xᵣ.second)[1:end-1]
Nₜ = 1024
N = 3
T = 3*28.7323599983721
dt = T/Nₜ
t = dt * collect((-Nₜ/2:Nₜ/2-1))
algo = Tsit5()   # <----- Change this line to see the effect on the solution

ψ = solve!(λ, m, χ, Ω, Nₓ, Nₜ, x, t, algo)
heatmap(t, x, abs.(ψ)')

Doesn't answer your original question, but this is a time-dependent Schrödinger equation, right? Have you tried doing explicit time propagation at fixed Hamiltonian (aka exponential splitting), i.e. u_{n+1} = \exp(-i H(t_n)\,\Delta t)\, u_n, all implemented by hand with StaticArrays? @ChrisRackauckas will know, but I suspect you can't really do better (does timestep control work for this kind of equation?). I'm very interested in the optimal method for solving this in any case.
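
For concreteness, a minimal sketch of that frozen-Hamiltonian exponential stepping; the 2×2 H(t) here is a hypothetical placeholder, not the Lax-pair system above:

using StaticArrays, LinearAlgebra

# Hypothetical time-dependent 2x2 Hamiltonian, purely for illustration
H(t) = @SMatrix [0.0 cos(t); cos(t) 0.0]

# Propagate u' = -i*H(t)*u by freezing H over each step:
# u_{n+1} = exp(-i*H(t_n)*dt) * u_n
function propagate(u0, dt, nsteps)
    u = u0
    for n in 0:nsteps-1
        u = exp(-im*dt*H(n*dt)) * u   # small matrix exponential, cheap for SMatrix
    end
    return u
end

u0 = normalize(@SVector ComplexF64[1, 0])
u  = propagate(u0, 0.01, 1000)

Each step is unitary for Hermitian H, so the norm of u is preserved to roundoff.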

Any method for solving differential equations that are first order in time should work for the TDSE; what you gain from explicitly considering the exponentiation of the Hamiltonian is preservation of unitarity and related properties, which are definitely nice (since you usually prefer to preserve the norm of the wavefunction).

Explicit Euler can be viewed as keeping the first term in the Taylor expansion of the exponential:

\exp(-i\tau H) = 1 - i\tau H + \mathcal{O}(\tau^2)
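
In code, for a constant H, the two sides of this expansion look as follows (a toy 2×2 Hermitian H, not the system from the question):

using LinearAlgebra

H = [1.0 0.5; 0.5 -1.0]            # toy Hermitian Hamiltonian
u = normalize(ComplexF64[1, 0])
τ = 0.01

u_euler = (I - im*τ*H) * u         # explicit Euler step: (1 - iτH) u
u_exact = exp(-im*τ*H) * u         # exact propagator: exp(-iτH) u
norm(u_euler - u_exact)            # local error, O(τ^2)

Note that 1 - i\tau H is not unitary, so explicit Euler also inflates the norm slightly at every step.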

A simple Runge–Kutta 4 will work excellently, with fourth-order convergence, as long as |\tau H| is within the stability region (see e.g. figure 10.4 here), outside which it will promptly explode.

I have had the most luck with Crank–Nicolson, a Padé approximant to the exponential:

\exp(-i \tau H) = \frac{1 - i \frac{\tau H}{2}}{1 + i\frac{\tau H}{2}} + \mathcal{O}(\tau^3),

which is second-order accurate and A-stable, preserves unitarity, blah blah, but of course requires you to efficiently solve a linear system every time step.
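
A minimal sketch of one such step, reusing the toy H from above:

using LinearAlgebra

H = [1.0 0.5; 0.5 -1.0]            # toy Hermitian Hamiltonian
u = normalize(ComplexF64[1, 0])
τ = 0.1

# One Crank–Nicolson step: solve (1 + iτH/2) u_new = (1 - iτH/2) u
u_new = (I + im*τ/2*H) \ ((I - im*τ/2*H) * u)
abs(norm(u_new) - 1)               # ≈ 0: the CN propagator is exactly unitary for Hermitian H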


I would imagine the main thing you gain with an exponential is that you decouple the timescales, e.g. in the adiabatic limit, where the variation of the solution is much faster than the variation of the Hamiltonian. Any non-exponential scheme will require a dt of the order of the variation of the solution, which you bypass with exponential schemes. So basically it depends on (1) the timescales and (2) the dimensionality of the problem (for 2 degrees of freedom it's fine to compute matrix exponentials).

That is true, but one point that I did not mention is that you not only need to compute a matrix exponential, you also need to perform a time integral of the Hamiltonian with a high enough order of convergence (since the Hamiltonian does not commute with itself at different times), so even with exponential schemes you may be forced to take smaller time steps to reach convergence.
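
Concretely, the exponent in u_{n+1} = \exp(\Omega)\, u_n is given by the Magnus expansion, whose first two terms are

\Omega = -i\int_{t_n}^{t_n+\tau} H(t_1)\,dt_1 \;-\; \frac{1}{2}\int_{t_n}^{t_n+\tau}\!dt_1\int_{t_n}^{t_1}\!dt_2\,\left[H(t_1), H(t_2)\right] \;+\; \cdots

Keeping only the first integral and evaluating H at the midpoint of the step already gives the second-order exponential midpoint rule; higher order requires more quadrature points and commutator terms.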

When you have really different time scales going on in your system, you may be forced to be very clever in dealing with those: Solving Schrödinger equation in semiclassical regime with highly oscillatory time-dependent potentials - ScienceDirect

The largest benefit for me of using exponential schemes (with splittings) has been alleviating issues with instabilities; my results may not be fully converged when using Crank–Nicolson, because my time step is too large, but I can get a rough idea, because CN will never explode. With RK4 I'm forced to take small time steps (actually much smaller in the atomic case, with a Coulomb singularity at r=0).

Did you change the tolerance? Or try Tsit5 with adaptive=false? Most likely what's happening is that adaptive time stepping is giving you the tolerance you requested, which is not what you were looking for.
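
In the MWE, that would be a change to the init call, e.g.:

# Fixed-step Tsit5, reusing the step size from the MWE:
integrator = init(prob, Tsit5(), adaptive=false, dt=dt/10,
                  saveat=abs.(t[Nₜ÷2+1:-1:1]))

# ...or keep adaptive stepping but tighten the tolerances
# (the defaults are reltol=1e-3, abstol=1e-6):
integrator = init(prob, Tsit5(), abstol=1e-10, reltol=1e-10,
                  saveat=abs.(t[Nₜ÷2+1:-1:1]))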

For time-dependent Schrodinger, Magnus methods are the method people usually point to.

https://diffeq.sciml.ai/stable/solvers/nonautonomous_linear_ode/#State-Independent-Solvers

Those will use the matrix exponential but mix it with an RK scheme that is optimized by ignoring terms related to state-dependent RHS.
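
As an untested sketch, adapting the example on that docs page to the 2×2 system from the question (u0, tspan, and the parameter values here are placeholders for the MWE's):

using DifferentialEquations, Elliptic.Jacobi

# Write the RHS as u' = A(t)*u, the state-independent linear form the
# Magnus solvers expect; p = [λ, m] as in the MWE.
function update_func!(A, u, p, t)
    A[1,1] = +im*p[1];              A[1,2] = im*dn(t, real(p[2]))
    A[2,1] = im*dn(t, real(p[2]));  A[2,2] = -im*p[1]
end

u0    = ComplexF64[1, 0]           # placeholder initial condition
tspan = (0.0, 10.0)                # placeholder time span
p     = [0.97im, 0.0625]           # placeholder [λ, m]

A = DiffEqArrayOperator(zeros(ComplexF64, 2, 2), update_func=update_func!)
prob = ODEProblem(A, u0, tspan, p)
sol = solve(prob, MagnusGL6(), dt=0.01)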

Notice this paper uses a Magnus method. So I guess we’re very clever.


Any plans to add automatic timestep control to these?

It would be nice. They aren’t adaptive in the literature, though deriving a nice error estimate can’t be too hard.

Thank you, using adaptive=false resolves this. I was under the misconception that specifying dt explicitly would turn off adaptivity. I will work on finding a proper tolerance for this problem.

Thank you to everyone else for the suggestions as well, this was a nice exposition on solving TDSEs. I am familiar with splitting methods but wanted to resolve the problem within DifferentialEquations.jl.

These equations actually come up in solving the cubic nonlinear Schrodinger equation on elliptic backgrounds. See the appendix here in case you’re curious https://arxiv.org/pdf/1810.02887.pdf