Let’s simplify. Take a simple differential-algebraic equation. I’ll just write this in ModelingToolkit.jl notation for simplicity:

```julia
D(x) ~ x + y
y ~ x + 1
```
Notice that PDE semi-discretizations give you differential-algebraic equations, where the interior points are differential equations and the boundary conditions are algebraic equations. There are a few different ways to solve this. One is to solve the DAE directly. This is shown in the documentation: Differential Algebraic Equations · DifferentialEquations.jl. For example, here you would treat it as a mass matrix DAE:

```
M*u' = f(u)
```

where `M` is singular, via:
```julia
using DifferentialEquations

function massmatrixdae!(du, u, p, t)
    x, y = u
    du[1] = x + y      # differential equation: x' = x + y
    du[2] = x + 1 - y  # algebraic constraint: 0 = x + 1 - y
    nothing
end

M = [1.0 0.0
     0.0 0.0]
f = ODEFunction(massmatrixdae!, mass_matrix = M)
prob_mm = ODEProblem(f, [1.0, 2.0], (0.0, 1.0))
sol = solve(prob_mm, Rodas5P(), reltol = 1e-8, abstol = 1e-8)
```
That directly encodes the algebraic conditions and boom, you’re off to the races. Or you could use `DAEProblem`, which is

```
f(u', u, p, t) = 0
```

and can be written as:
```julia
using Sundials

function f2(resid, du, u, p, t)
    dx, dy = du
    x, y = u
    resid[1] = x + y - dx
    resid[2] = x + 1 - y
    nothing
end

u₀ = [1.0, 2.0]
du₀ = [-0.04, 0.04] # guess for the initial derivatives, corrected during initialization
tspan = (0.0, 1.0)
differential_vars = [true, false]
prob = DAEProblem(f2, du₀, u₀, tspan, differential_vars = differential_vars)
sol = solve(prob, IDA())
```
Both of those will work, but the problem with the DAE approach is that it requires an implicit (or semi-implicit) method to handle the DAE constraint equations. For many systems, for example stiff equations, this is totally fine! If your equation is already stiff, you’re already using an implicit method anyway, so this is a simple change. If your equation is non-stiff, then this can add computational overhead.
But you can also just factor equations out. For example, if

```julia
D(x) ~ x + y
y ~ x + 1
```

then substituting the algebraic equation into the differential equation gives

```julia
D(x) ~ 2x + 1
```

so you can just solve 1 ODE, and if you need to compute y then you just take the solution and add 1. Note that this is the kind of thing that ModelingToolkit.jl will do automatically (so I would recommend MethodOfLines.jl for this kind of thing because it’ll do all of this automatically, but wait a month or so for a few more optimizations for large PDE cases that are scheduled to arrive by the end of summer).
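As a concrete sketch of that elimination (assuming OrdinaryDiffEq is available; the names `f_reduced` and `y_of_t` are just illustrative):

```julia
using OrdinaryDiffEq

# Reduced system: substitute y = x + 1 into x' = x + y to get x' = 2x + 1
f_reduced(x, p, t) = 2x + 1
prob = ODEProblem(f_reduced, 1.0, (0.0, 1.0))
sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)

# Post-process the eliminated algebraic variable from the solution
y_of_t(t) = sol(t) + 1
```

This solves one explicit ODE with a non-stiff method, and the algebraic variable is recovered for free afterwards.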
But you can do it by hand: that’s what I was recommending by “at all”: just completely remove u[i, 8] from the array (it’s not even in the equations), solve without it, and then post-process the solution to rebuild your extended set of equations.
This will be faster than the DAE approach if you want to use an explicit method. If you’re using an implicit method, it’s not worth the trouble.
There are also three more approaches for DAE handling. One is that you can assume D(y) ~ 0 during steps and then project onto the algebraic conditions after each step. This is done via a callback (Manifold Projection · DiffEqCallbacks.jl) and can be a reasonably easy solution. It actually keeps the same order for the solution (easy proof via the triangle inequality). However, it has a downside: the continuous (dense) interpolation is ruined by the projection of the callback, so you can only trust the step values, not the intermediate saveat values (so you’d need to add the save points as tstops). But this can be a reasonably quick way to handle some hard DAEs.
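Here is a minimal hand-rolled sketch of that idea. `ManifoldProjection` in DiffEqCallbacks.jl does this properly with a nonlinear solve; for this toy constraint y = x + 1 the projection can be applied in closed form after every step, so a plain `DiscreteCallback` suffices:

```julia
using OrdinaryDiffEq

# Integrate with D(y) ~ 0 assumed during the step...
function relaxed!(du, u, p, t)
    x, y = u
    du[1] = x + y
    du[2] = 0.0   # hold y constant within a step
    nothing
end

# ...then project back onto the constraint manifold y = x + 1 after each step
project!(integrator) = (integrator.u[2] = integrator.u[1] + 1)
cb = DiscreteCallback((u, t, integrator) -> true, project!)

prob = ODEProblem(relaxed!, [1.0, 2.0], (0.0, 1.0))
# Keep steps small: freezing y inside a step is only low-order accurate,
# so the step size, not the solver tolerance, controls the splitting error
sol = solve(prob, Tsit5(), callback = cb, dtmax = 0.01)
```

Note the step values satisfy the constraint exactly, but any dense interpolation between them would not.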
Another approach is to pseudo-remove the algebraic equations. Instead of giving the solver:

```julia
D(x) ~ x + y
y ~ x + 1
```

you would instead give the solver

```julia
D(x) ~ x + y
```

where inside of f you have a subroutine that solves for the algebraic equation y ~ x + 1. For simple equations this is equivalent to the elimination approach I described, but it scales well to other cases. It can be tedious to do by hand though, and can have issues with local minima and Newton guesses, so it generally isn’t something I’d do by hand.
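A sketch of what that nested solve looks like (for this linear constraint Newton converges in one iteration, and in practice you’d reach for NonlinearSolve.jl rather than a hand-rolled loop; `solve_algebraic` and `f_outer` are illustrative names):

```julia
using OrdinaryDiffEq

# Inner solve: find y with 0 = x + 1 - y via Newton's method.
# (Trivially y = x + 1 here, but the loop shows the general pattern.)
function solve_algebraic(x; y0 = 0.0, tol = 1e-12, maxiters = 10)
    y = y0
    for _ in 1:maxiters
        g  = x + 1 - y   # residual
        dg = -1.0        # dg/dy
        dy = -g / dg     # Newton update
        y += dy
        abs(dy) < tol && break
    end
    y
end

# The outer ODE only sees the differential variable
function f_outer(x, p, t)
    y = solve_algebraic(x)
    x + y
end

prob = ODEProblem(f_outer, 1.0, (0.0, 1.0))
sol = solve(prob, Tsit5(), reltol = 1e-8, abstol = 1e-8)
```

The Newton guess issue mentioned above shows up in `y0`: a bad initial guess for a genuinely nonlinear constraint can land in the wrong root or fail to converge.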
Finally, you can do a singular perturbation approach. Instead of:

```julia
D(x) ~ x + y
0 ~ x + 1 - y
```

you’d relax it to:

```julia
D(x) ~ x + y
eps*D(y) ~ x + 1 - y
```

where as eps -> 0 it limits to your DAE. You flip this with mu = 1/eps:

```julia
D(x) ~ x + y
D(y) ~ mu*(x + 1 - y)
```

where the sign of the residual ensures that the algebraic condition is a stable fixed point of the relaxed system, so any error pushes you back towards y = x + 1, and then you just make mu “big enough”. This approach introduces a numerical error but can be a good way to relax some algebraic constraints in a way that makes them simpler to solve. But if mu is too large you just get a stiff ODE, in which case you should just solve the DAE directly.
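A sketch of that relaxation, with mu = 1e3 picked arbitrarily for illustration (the residual then tracks the constraint to roughly O(1/mu)):

```julia
using OrdinaryDiffEq

mu = 1e3  # relaxation strength: larger tracks the constraint better, but stiffens the ODE

function singularly_perturbed!(du, u, p, t)
    x, y = u
    du[1] = x + y
    du[2] = p * (x + 1 - y)  # drives y back towards y = x + 1
    nothing
end

prob = ODEProblem(singularly_perturbed!, [1.0, 2.0], (0.0, 1.0), mu)
# A stiff solver is the safe choice here, since large mu makes this stiff anyway
sol = solve(prob, Rosenbrock23(), reltol = 1e-8, abstol = 1e-8)
```

Note the initial condition [1.0, 2.0] is already on the constraint manifold, so there is no fast initial transient to resolve.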
So those are the 5 ways to handle it. I’m just suggesting the elimination route as the fastest, but you can also just use the DAE solver to make it easier. Or use ModelingToolkit.jl as a tool that helps with some of the simplifications and other things. Or singular perturbation or manifold projection. Take your pick. But whatever it is, the answer doesn’t have u[i] = ... inside of f.
Hopefully that full description helps.