How many differential equations can I solve simultaneously? Julia is crashing for more than a few thousand ODEs

I was trying to solve a discretized partial differential equation when I realized that Julia crashes when I try to solve more than a few thousand differential equations. When I run the following code in Juno, I get a message saying “Julia has exited. Press Enter to start a new session.”, and if I run the same code in the terminal I simply get an output saying “Killed”. If I change the grid to 20 \times 20 the code runs, so there is no issue with the code itself. Is there a trick to solving more differential equations in Julia?


using DifferentialEquations
using Sundials

# Right-hand side for a 200 x 200 grid of uncoupled ODEs, du/dt = -u,
# i.e. 40,000 equations in total
function eoms!(du,u,p,t)
    println(t)  # print the current integrator time to watch progress
    for i in 1:200
        for j in 1:200
            du[i,j]=-1.0*u[i,j]
        end
    end
end

u0=ones(Float64,(200,200))
tspan=(0.0,1.0e-7)
p=[]
prob=ODEProblem(eoms!,u0,tspan,p)

tol=1e-2
reltolp=tol
abstolp=tol

@time sol=solve(prob,KenCarp4(),abstol=abstolp,reltol=reltolp,saveat=1e-7)

using Plots
plot(sol,vars=(0,1))

That probably depends on the RAM of your computer. With 64 GB on mine, the MWE is barely running.

But the same code works when I implement it in C++, even if I increase the dimensions to 2000 \times 2000. So if it is a RAM issue, there must be a way around it, because the problem itself does not require that much RAM.

Edit: for 100 \times 100, the allocation is 1.90 M. It can’t be running out of RAM at 200 \times 200 if it runs easily on a 100 \times 100 grid.

I don’t believe the low-level interfaces automatically compute the Jacobian sparsity. You’ll either need to declare it manually (easy in your case, since the equations are not coupled) or use one of the automatic sparsity-detection packages. Or just use one of the higher-level packages like MTK that does this automatically. For the MWE above, declaring it manually could look like the sketch below.
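A sketch against the MWE (a real coupled problem would need its actual sparsity pattern):

using DifferentialEquations, SparseArrays, LinearAlgebra

# The MWE's equations du[i,j] = -u[i,j] are uncoupled, so the Jacobian
# is diagonal; declaring that pattern keeps the solver from allocating
# (and factorizing) a dense 40_000 x 40_000 matrix.
N = 200
jac_sparsity = sparse(1.0I, N^2, N^2)    # diagonal sparsity pattern

f = ODEFunction(eoms!; jac_prototype = jac_sparsity)
prob_sparse = ODEProblem(f, u0, tspan, p)
sol = solve(prob_sparse, KenCarp4(), abstol=1e-2, reltol=1e-2, saveat=1e-7)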

I have posted the minimal example that reproduces the problem. The actual problem I am trying to solve is coupled, and the Jacobian is not very sparse: it is sparse in the first index but not the second.

Edit: you were right, though. If I change the linear solver to GMRES, the code runs without any problem for a 2000 \times 2000 system.
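For reference, the change is just the linsolve choice. A sketch assuming the current LinearSolve.jl interface (older releases spelled this LinSolveGMRES()):

using DifferentialEquations, LinearSolve

# GMRES is a Krylov method, so it only needs Jacobian-vector products:
# no dense Jacobian is ever materialized or factorized.
sol = solve(prob, KenCarp4(linsolve=KrylovJL_GMRES()),
            abstol=1e-2, reltol=1e-2, saveat=1e-7)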

1 Like

OK, but a dense Jacobian for a 2000 \times 2000 system is 128 TB ((4 \times 10^6)^2 entries at 8 bytes each), which, though within reach for an HPC system, is not something you’d run on a single node. So if that’s the size you care about, I’d take another look at the sparsity of your Jacobian.

4 Likes

A partial differential equation that doesn’t give a sparse Jacobian? I’ve never heard of that. The quintessential example of sparse matrices is PDEs.

Anyway, here’s a sparse PDE code-optimization example.

https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/

1 Like

Thanks for sharing the example. Actually, the problem I am trying to solve is an integro-differential equation, so the PDE part of it is sparse, but I have an integration (or a sum, in the discretized setup) over the second index.

Then you might need to develop a matrix-free preconditioner.

I don’t know how to do that. Can you point me towards an example on the internet where this is done for simple differential equations? If I can understand the syntax I can modify it for my case.

Can’t tell if this is satire or not ^^

It’s not satire.

A “matrix-free” method is one in which you apply a linear operator to a vector without explicitly constructing a matrix. For example, you can do a convolution in O(n \log n) time and O(n) memory via FFTs, whereas an explicit matrix would be O(n^2). For integral operators (relevant to the present case with an integro-differential equation) there are things like the fast multipole method and related tricks (e.g. hierarchical low-rank approximations) to multiply quickly and with much less memory than an explicit matrix.
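To make the convolution example concrete, here is a minimal standalone sketch (illustrative, not from the thread) using FFTW.jl:

using FFTW

# Circular convolution of a kernel k with a vector v in O(n log n)
# via the convolution theorem, without ever forming the n x n
# circulant matrix that represents the same linear operator.
matfree_conv(k, v) = real(ifft(fft(k) .* fft(v)))

k = randn(1024)           # convolution kernel
v = randn(1024)           # input vector
w = matfree_conv(k, v)    # same result as C*v for the dense circulant C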

If you are doing iterative solves on each step for a stiff system, then you probably additionally want a preconditioner, and again you may want to use a matrix-free method if your explicit matrices are not sparse.

3 Likes

Nope, and here is an example implementation:

https://diffeq.sciml.ai/stable/tutorials/advanced_ode_example/#Adding-a-Preconditioner

Thank you! I actually meant to respond to a different comment :see_no_evil:

So to start, there are two other methods I thought of.

One is to build the preconditioner based on an approximation. If you do sparsity detection without the integral, you’ll get a sparse matrix that approximates your Jacobian. You can then pass it to DifferentialEquations.jl as the jac_prototype, and it will generate an approximate Jacobian through color differentiation onto that sparse matrix. That approximate Jacobian can be used inside an ILU/AMG preconditioner. This is fine because preconditioners are only supposed to be approximations anyway: whether this works really comes down to whether it’s a “good enough” approximation, which depends on your equation and the parameter values. This only takes about five lines of code to try (copy-pasted out of the tutorial), so I would try this first; a sketch is below.
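Adapted from the linked tutorial, the pieces look like this (pde_rhs! is a placeholder for your right-hand side with the integral term dropped):

using DifferentialEquations, Symbolics, IncompleteLU, LinearSolve

# Detect the sparsity of the non-integral part only; the resulting
# pattern is an approximation of the true Jacobian, which is all a
# preconditioner needs.
du0 = copy(u0)
sparsity = Symbolics.jacobian_sparsity((du, u) -> pde_rhs!(du, u, p, 0.0), du0, u0)

f = ODEFunction(eoms!; jac_prototype = float.(sparsity))
prob_sparse = ODEProblem(f, u0, tspan, p)

# ILU of the colored approximate Jacobian, rebuilt whenever the
# solver updates the W matrix.
function incompletelu(W, du, u, p, t, newW, Plprev, Prprev, solverdata)
    if newW === nothing || newW
        Pl = ilu(convert(AbstractMatrix, W), τ = 50.0)
    else
        Pl = Plprev
    end
    Pl, nothing
end

sol = solve(prob_sparse,
            KenCarp47(linsolve = KrylovJL_GMRES(), precs = incompletelu,
                      concrete_jac = true),
            save_everystep = false)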

The second thing you can do is use an IMEX integrator:

https://diffeq.sciml.ai/stable/types/split_ode_types/
https://diffeq.sciml.ai/stable/solvers/split_ode_solve/#OrdinaryDiffEq.jl

I’d recommend KenCarp47, where you split the integral off into the explicit part. If the integral has well-behaved eigenvalues, this lets you treat it explicitly; your true Jacobian is then just the non-integral terms, and once again the sparse-matrix preconditioners from that tutorial apply.
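The split itself is only a few lines; a sketch with toy placeholder right-hand sides:

using DifferentialEquations

# The first function is the stiff part, treated implicitly, so the
# true Jacobian only involves the sparse non-integral terms; the
# second function is treated explicitly by the IMEX scheme.
function f_pde!(du, u, p, t)        # placeholder for the local/PDE terms
    du .= -u
end
function f_integral!(du, u, p, t)   # placeholder for the integral term
    du .= sum(u) / length(u)        # toy global coupling standing in for the integral
end

u0 = ones(200, 200)
prob_split = SplitODEProblem(f_pde!, f_integral!, u0, (0.0, 1.0))
sol = solve(prob_split, KenCarp47())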

One of those two “should” work. But let’s address the other thing just for full clarity:

Note that those preconditioners are not matrix-free. ILU needs the sparse matrix and a τ cutoff value in order to construct the preconditioner, and algebraic multigrid also needs to know the structure of the equations through the sparse matrix. That’s really the issue: a preconditioner needs to match details of the equation, since it has to be an approximate inverse. The easiest way to construct an algorithm that knows “some structure of the equations” is to take a sparse matrix as input, because that naturally carries much of the structure for many PDEs.

That said, as @stevengj discussed, there are preconditioners that do not require you to have the sparse matrix: fast multipole methods and geometric multigrid, for example. All of those naturally require that you write a function based solely on your PDE, so it’s hard to give someone software that does this automatically the way you can with AMG and ILU (well, there is a way to do it, but that’ll be a research project over the next year :wink:).

I think the easiest to write might be a geometric multigrid, since it just requires knowing how to write your PDE with a different dx, along with restricting and prolonging between grids. Gil has some good notes on it.
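As a bare-bones illustration of how little PDE-specific code is involved, here is a standalone 1-D geometric multigrid V-cycle for -u'' = f with zero Dirichlet boundaries (a sketch, not from the thread):

# Gauss-Seidel smoothing sweeps on (2u[i] - u[i-1] - u[i+1])/dx^2 = f[i]
function smooth!(u, f, dx, iters)
    for _ in 1:iters, i in 2:length(u)-1
        u[i] = (u[i-1] + u[i+1] + dx^2 * f[i]) / 2
    end
end

# Residual r = f - A*u, zero at the Dirichlet boundary nodes
function residual(u, f, dx)
    r = zero(u)
    for i in 2:length(u)-1
        r[i] = f[i] - (2u[i] - u[i-1] - u[i+1]) / dx^2
    end
    r
end

restrict(r) = r[1:2:end]        # fine -> coarse by injection

function prolong(e)             # coarse -> fine by linear interpolation
    ef = zeros(2length(e) - 1)
    ef[1:2:end] .= e
    ef[2:2:end] .= (e[1:end-1] .+ e[2:end]) ./ 2
    ef
end

# One V-cycle on a grid of 2^k + 1 points: smooth, recurse on the
# residual equation at spacing 2dx, correct, smooth again. The only
# PDE-specific pieces are smooth!/residual "with a different dx".
function vcycle!(u, f, dx)
    smooth!(u, f, dx, 3)
    if length(u) > 3
        e = zeros((length(u) + 1) ÷ 2)
        vcycle!(e, restrict(residual(u, f, dx)), 2dx)
        u .+= prolong(e)
    end
    smooth!(u, f, dx, 3)
    u
end

# Example: ten V-cycles on -u'' = sin(pi x), whose solution is sin(pi x)/pi^2
x = range(0, 1, length = 2^7 + 1)
u = zeros(length(x)); f = sin.(pi .* x)
for _ in 1:10
    vcycle!(u, f, step(x))
end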

But other than that, for now it’s kind of “may the force be with you” in terms of automation, which I hope can be solved better in the future.

1 Like

Awesome! Thanks for the help.

For the record, boundary element methods (BEM) produce PDEs with a dense Jacobian.

I would say that BEM solves a surface-integral equation (SIE), not a PDE.

(The SIE is generally equivalent to a PDE, so in that sense you are also solving a PDE. But it is the integral and not the differential equation that is the starting point for the numerics, and nontrivial computation is required to get the PDE solution at an arbitrary point from the SIE solution. Conversely, traditional FEM solves a PDE, which of course also gives you the corresponding SIE solution, but most people would not describe FEM as an “integral-equation solver”.)

Put another way, Galerkin BEM and FEM both assemble a matrix A_{ij} = \langle b_i, \hat{L} b_j \rangle where b_i and b_j are localized polynomial basis functions on a mesh (a surface mesh in BEM and a volume mesh in FEM) and \langle \cdot, \cdot \rangle is an inner product (integral), but the linear operator \hat{L} is a differential (PDE) operator for FEM that yields a sparse A, and \hat{L} is an integral (SIE) operator for BEM that yields a dense A (which you represent in a matrix-free manner for large-scale problems).
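A standalone illustration of the FEM half of that statement (not from the thread): assembling the 1-D Galerkin stiffness matrix for \hat{L} = -d^2/dx^2 with piecewise-linear hat basis functions. Each hat overlaps only its neighbors, which is exactly why a differential \hat{L} yields a sparse A.

using SparseArrays

# Element-by-element assembly of A_ij = ∫ b_i' b_j' dx on a uniform
# mesh of n nodes; the per-element stiffness is (1/dx)*[1 -1; -1 1].
function fem_stiffness(n, dx)
    Is, Js, Vs = Int[], Int[], Float64[]
    for e in 1:n-1                      # loop over elements [x_e, x_{e+1}]
        k = [1.0 -1.0; -1.0 1.0] ./ dx  # element stiffness matrix
        for a in 1:2, b in 1:2
            push!(Is, e + a - 1); push!(Js, e + b - 1); push!(Vs, k[a, b])
        end
    end
    sparse(Is, Js, Vs, n, n)
end

A = fem_stiffness(101, 0.01)   # tridiagonal; a BEM operator would fill in densely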

5 Likes