I apologize for posting about this PDE many times over the past few days, but I am pursuing several different approaches and each comes with its own issues.
The system is given by:

$$
\frac{\partial c_1}{\partial t} = D_1 \nabla^2 c_1 + \frac{1}{\epsilon}\left[ f\, c_2\, \frac{q - c_1}{q + c_1} + c_1\, \frac{1 - m c_2}{1 - m c_2 + \epsilon_1} - c_1^2 \right]
$$

$$
\frac{\partial c_2}{\partial t} = D_2 \nabla^2 c_2 + c_1\, \frac{1 - m c_2}{1 - m c_2 + \epsilon_1} - c_2
$$

where:

$$
D_2 = \frac{D_1}{d_0}\left(1 + \frac{K - 1}{1 + 3/\Theta}\right)^{-3/2}
$$

and we're using no-flux (Neumann 0) boundary conditions.
My best attempt so far is cobbled together from many tutorials, @ChrisRackauckas’s blog posts, and Slack advice, leading to the following pared-down example (not quite an MWE, sorry).
using DifferentialEquations, LinearAlgebra, SparseArrays, Symbolics
using ModelingToolkit, SparseDiffTools, Sundials
# Generate the constants
N = 60
dx = 0.1
# Second-difference matrices for the 2D Laplacian (Ay multiplies from the left, Ax from the right)
Ax = Array(Tridiagonal([1.0 for i in 1:N-1],[-2.0 for i in 1:N],[1.0 for i in 1:N-1]))
Ay = copy(Ax)
# Boundary entries modified to enforce the no-flux (Neumann 0) conditions
Ax[2,1] = 2.0
Ax[end-1,end] = 2.0
Ay[1,2] = 2.0
Ay[end,end-1] = 2.0
#Parameters
f = 1.2
m = 190
q = 0.001
ϵ₁ = 0.01
ϵ = 0.01
K = 1000
Θ = 0.044
d₀ = 0.01
D₁ = 1
p = [f, m, q, ϵ, ϵ₁, D₁, d₀, K, Θ, dx]
function basic_version!(du,u,p,t)
f, m, q, ϵ, ϵ₁, D₁, d₀, K, Θ, dx = p
D₂ = D₁/d₀*(1 + (K-1)/(1+3/Θ))^(-3/2)
c₁ = @view u[:,:,1]
c₂ = @view u[:,:,2]
dc₁ = @view du[:,:,1]
dc₂ = @view du[:,:,2]
Δ₁ = (1/dx^2)*D₁*(Ay*c₁ + c₁*Ax)
Δ₂ = (1/dx^2)*D₂*(Ay*c₂ + c₂*Ax)
@. dc₁ = Δ₁ + 1/ϵ*(f*c₂*(q - c₁)/(q + c₁) + c₁*(1 - m*c₂)/(1 - m*c₂ + ϵ₁) - c₁^2)
@. dc₂ = Δ₂ + c₁*(1 - m*c₂)/(1 - m*c₂ + ϵ₁) - c₂
end
tspan = (0.0,0.1)
u0 = rand(Float64, (N,N,2))
# Jacobian Approach
du0 = similar(u0)
sparsity_pattern = Symbolics.jacobian_sparsity(basic_version!,du0,u0,p,0.0)
jac_sparsity = Float64.(sparse(sparsity_pattern))
colorvec = matrix_colors(jac_sparsity)
@show maximum(colorvec)
# use a name other than f here, since f is already a model parameter
rhs = ODEFunction(basic_version!; jac_prototype=jac_sparsity, colorvec=colorvec)
prob = ODEProblem(rhs, u0, tspan, p)
## Solve!
println("Solving")
@time sol = solve(prob,Tsit5(), saveat=range(0, stop=tspan[2], length=101))
This code runs just fine. I haven’t been able to fully verify the solution yet since I don’t have an analytical expression, but it seems well-behaved and nothing blows up (although no Turing patterns show up, and I don’t know why yet). It is slow, however, and there are many optimizations left to do. It also scales very poorly with N, and I need to go up to N = 300 or so.
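The main optimization I know is still missing (from the Optimizing DiffEq Code tutorial) is making the RHS non-allocating by preallocating the Laplacian work arrays and using mul!. Roughly something like this sketch, which I haven’t benchmarked yet (fast_version! and the *buf cache names are just placeholders I made up):

```julia
# Sketch of a non-allocating RHS: the buffers are allocated once and reused
# on every call, so the hot loop does no heap allocation.
const Δ₁buf = zeros(N, N)
const Δ₂buf = zeros(N, N)
const tmpbuf = zeros(N, N)

function fast_version!(du, u, p, t)
    f, m, q, ϵ, ϵ₁, D₁, d₀, K, Θ, dx = p
    D₂ = D₁/d₀*(1 + (K-1)/(1+3/Θ))^(-3/2)
    c₁ = @view u[:,:,1];  c₂ = @view u[:,:,2]
    dc₁ = @view du[:,:,1]; dc₂ = @view du[:,:,2]
    # Δ₁buf = Ay*c₁ + c₁*Ax, computed in place
    mul!(Δ₁buf, Ay, c₁); mul!(tmpbuf, c₁, Ax); Δ₁buf .+= tmpbuf
    mul!(Δ₂buf, Ay, c₂); mul!(tmpbuf, c₂, Ax); Δ₂buf .+= tmpbuf
    @. dc₁ = D₁/dx^2*Δ₁buf + 1/ϵ*(f*c₂*(q - c₁)/(q + c₁) + c₁*(1 - m*c₂)/(1 - m*c₂ + ϵ₁) - c₁^2)
    @. dc₂ = D₂/dx^2*Δ₂buf + c₁*(1 - m*c₂)/(1 - m*c₂ + ϵ₁) - c₂
    nothing
end
```

(If this were used with an AD-based implicit solver, the caches would presumably need something like PreallocationTools to play nicely with dual numbers.)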
The biggest optimization is of course using a stiff solver. I tried quite a few of them, e.g., KenCarp47 (with and without linsolve=KrylovJL_GMRES()), CVODE_BDF, etc. They all run for a really long time and then crash with dt <= dtmin. Changing tolerances hasn’t fixed this. I also tried autodiff=false.
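For concreteness, the attempts looked roughly like this (exact tolerances varied; KrylovJL_GMRES comes from LinearSolve):

```julia
using LinearSolve  # for KrylovJL_GMRES

ts = range(0, stop=tspan[2], length=101)
# default linear solver
sol = solve(prob, KenCarp47(), saveat=ts)
# matrix-free GMRES for the linear solves
sol = solve(prob, KenCarp47(linsolve=KrylovJL_GMRES()), saveat=ts)
# Sundials BDF
sol = solve(prob, CVODE_BDF(), saveat=ts)
# finite-difference Jacobian instead of ForwardDiff
sol = solve(prob, KenCarp47(autodiff=false), saveat=ts)
```

All of these eventually abort with dt <= dtmin.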
I know this error indicates a possible model error, and I promise I will keep checking. I am simply curious why the stiff solvers fail while the explicit ones work fine.
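So far “checking” mostly means confirming that the explicit-solver output stays finite and non-negative at the saved time points, along these lines:

```julia
# Crude sanity check on the Tsit5 solution: concentrations going negative
# or blowing up would point at a model/discretization problem.
for (t, u) in zip(sol.t, sol.u)
    e₁ = extrema(@view u[:,:,1])
    e₂ = extrema(@view u[:,:,2])
    println("t = $t   c₁ ∈ $e₁   c₂ ∈ $e₂")
end
```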
tl;dr: all stiff solvers I tried don’t work (dt <= dtmin), while the non-stiff explicit ones work just fine. What’s going on?
Resources I’ve used (non-exhaustive):
- Solving Large Stiff Equations · DifferentialEquations.jl
- https://www.stochasticlifestyle.com/solving-systems-stochastic-pdes-using-gpus-julia/
- Optimizing DiffEq Code
- 2d Swift-Hohenberg equation: snaking, Finite Differences · Bifurcation Analysis in Julia
- Frequently Asked Questions · DifferentialEquations.jl