Comparisons between Julia (NeuralPDE.jl and DiffEqFlux.jl) and the DeepXDE Python package?

Hello everyone, I’ve recently come across the DeepXDE Python package (https://github.com/lululxvi/deepxde), which can be used to formulate and solve a variety of neural differential equation-type problems. Beyond the basics, it has support for:

  • Operator-learning for PDEs (DeepONet)
  • More complicated domain geometries: intervals, triangles, disks, etc.
  • Multi-physics neural networks

Does the Julia SciML ecosystem support these features at the moment? If not, are they on a planned feature roadmap? For features that NeuralPDE.jl and DeepXDE both currently support, are there any comparison benchmarks available?

Thanks!


Hey,
Note that NeuralPDE.jl and DiffEqFlux.jl are very different libraries. DiffEqFlux.jl will be a few orders of magnitude faster when you can phrase the problem as a universal differential equation, and should be preferred whenever possible. PINNs are a relatively slow method in comparison, with their main benefit being their ease of applicability to high-dimensional PDEs or non-local PDEs. They can be used on ODEs or PDEs with simple discretizations, like reaction-diffusion equations, but they are nowhere near as efficient there. And even with high-dimensional PDEs, you can phrase some of them as universal differential equations, in which case the UDE will again outperform PINNs by a mile.

So with that preface: PINNs are a computationally intensive last resort, used mainly for non-local PDEs (fractional derivatives, integro-differential equations, etc.) and high-dimensional PDEs where other tricks do not exist. For this reason, the SciML ecosystem focused first on universal differential equations, which are now relatively “complete”, with more of the focus now moving towards PINNs to cover the remaining cases.
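For concreteness, here’s a minimal sketch of what a UDE looks like with DiffEqFlux.jl (assuming the FastChain/sciml_train API; the linear terms and the data are illustrative placeholders, not a recipe from the paper):

```julia
# Minimal UDE sketch: known linear dynamics plus a neural network for the
# unknown interaction terms, trained against placeholder "data".
using DiffEqFlux, OrdinaryDiffEq, Flux

ann = FastChain(FastDense(2, 16, tanh), FastDense(16, 2))
p0  = initial_params(ann)

function ude!(du, u, p, t)
    nn = ann(u, p)               # learned part of the model
    du[1] =  1.3u[1] + nn[1]     # known growth + learned interaction
    du[2] = -1.8u[2] + nn[2]     # known decay  + learned interaction
end

u0, tspan = [1.0, 1.0], (0.0, 3.0)
prob = ODEProblem(ude!, u0, tspan, p0)

data = rand(2, 31)               # placeholder for real measurements
loss(p) = sum(abs2, Array(solve(prob, Tsit5(), p = p, saveat = 0.1)) .- data)
res = DiffEqFlux.sciml_train(loss, p0, ADAM(0.05), maxiters = 200)
```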

With that in mind, taking your bullets in turn:

Operator-learning for PDEs (DeepONet)

This is so much more efficient with universal differential equations that it’s not even a comparison. Just do a UDE, training by sampling u-space in the same way (using any of the classical basis layers to do what the paper shows with Chebyshev polynomials), and define the system by a neural network. Then train it over the sampled data.
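To illustrate the basis-layer idea without pinning down a specific DiffEqFlux layer name, here’s a hand-rolled Chebyshev featurization of u-space samples with a least-squares readout (the target function is a made-up stand-in):

```julia
# Hand-rolled Chebyshev "basis layer": maps x ∈ [-1, 1] to T_0(x), ..., T_{n-1}(x)
# via the identity T_k(x) = cos(k * acos(x)).
chebyshev(x, n) = [cos((k - 1) * acos(clamp(x, -1.0, 1.0))) for k in 1:n]

xs = range(-1, 1, length = 50)          # u-space sample points
Φ  = reduce(hcat, chebyshev.(xs, 8))'   # 50 × 8 design matrix
y  = sin.(3 .* xs)                      # made-up operator output to fit
coeffs = Φ \ y                          # least-squares basis coefficients
```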

You will be able to do this soon with NeuralPDE.jl, but given the efficiency difference on things like ODEs and reaction-diffusion (the examples in the paper), you’d still want to use DiffEqFlux.jl for it on most problems (again, switching only when you have something like a non-local PDE).

More complicated domain geometries: intervals, triangles, disks, etc.

That is coming this summer. What you’d do right now is throw a bounding box over your equation and put an indicator function on the PDE loss for whether a point is in the domain, with a loss of zero otherwise. While that works, it relies a bit too much on adapting the sampler, so we’re working to improve it. We’re going to allow any level set and give more explicit handling to any user-defined characteristic domain.
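Concretely, the trick looks something like this (a minimal sketch with hypothetical names; `residual` stands in for the PINN’s PDE residual, and the disk domain is just an example):

```julia
# Bounding-box trick for an irregular domain: sample from an enclosing box
# and mask the PDE residual loss with an indicator function so that points
# outside the true domain contribute zero loss.
in_domain(x, y) = x^2 + y^2 <= 1.0             # e.g. the unit disk

# `residual` is a hypothetical placeholder for the PINN's PDE residual.
function masked_loss(residual, points)
    s = 0.0
    for (x, y) in points
        s += in_domain(x, y) ? abs2(residual(x, y)) : 0.0
    end
    return s / length(points)
end

box_points = [(2rand() - 1, 2rand() - 1) for _ in 1:1000]   # box [-1, 1]²
l = masked_loss((x, y) -> 0.0, box_points)                  # dummy residual
```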

Multi-physics neural networks

You can already do this.
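As a hedged sketch of what “multi-physics” looks like in the UDE setting (the coupling terms and network sizes here are made up), you just couple multiple equations, each with its own learned closure:

```julia
# Two coupled "physics" fields, each with its own learned closure term.
using DiffEqFlux, OrdinaryDiffEq

nn_a = FastChain(FastDense(2, 8, tanh), FastDense(8, 1))
nn_b = FastChain(FastDense(2, 8, tanh), FastDense(8, 1))
pa, pb = initial_params(nn_a), initial_params(nn_b)
p0 = vcat(pa, pb)
na = length(pa)

function coupled!(du, u, p, t)
    a, b = u
    du[1] = -0.5a + nn_a([a, b], p[1:na])[1]        # physics A + learned coupling
    du[2] =  0.2b + nn_b([a, b], p[na+1:end])[1]    # physics B + learned coupling
end

prob = ODEProblem(coupled!, [1.0, 0.5], (0.0, 1.0), p0)
sol = solve(prob, Tsit5())
```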


@ChrisRackauckas Thanks, this is super helpful! A few follow-up questions:

DiffEqFlux.jl will be a few orders of magnitude faster when you can phrase the problem as a universal differential equation, and should be preferred whenever possible. PINNs are a relatively slow method in comparison, with their main benefit being their ease of applicability to high-dimensional PDEs or non-local PDEs.

Are you saying that certain problems (e.g., non-local PDEs) can be formulated as PINNs but not as UDEs? My understanding was that UDEs could be viewed as a generalization of PINNs, obtained by combining a partially known model with a neural DE learner, but apparently this is incorrect.

This is so much more efficient with universal differential equations that it’s not even a comparison. Just do a UDE, training by sampling u-space in the same way (using any of the classical basis layers to do what the paper shows with Chebyshev polynomials), and define the system by a neural network. Then train it over the sampled data.

Are there published examples of using UDEs for operator learning? In the experiments I’ve run, UDEs tend to struggle even to memorize a few training points unless you combine them with something like SINDy.

Yes, there are PINN problems that do not have good UDE formulations. PINNs are a fully continuous training process, while UDEs are trained through a discretization process.

In the paper we describe how it can be used to set up a problem and learn a differential operator in a semilinear PDE. You have to describe the problem in a discretized form (like a method-of-lines discretization) and learn the discretized operator, though. There are some other things you can do, but they aren’t published yet.
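A hedged sketch of that setup (the grid size, the u - u^3 nonlinearity, and the network shape are illustrative assumptions, not the paper’s exact configuration):

```julia
# Semilinear PDE u_t = N(u) + f(u) after method-of-lines discretization:
# the unknown spatial operator N acts on the grid values and is replaced
# by a neural network, while the known nonlinearity f(u) = u - u^3 stays.
using DiffEqFlux, OrdinaryDiffEq

n = 32                                    # spatial grid points
ann = FastChain(FastDense(n, 64, tanh), FastDense(64, n))
p0 = initial_params(ann)

function semilinear!(du, u, p, t)
    du .= ann(u, p) .+ u .- u.^3          # learned operator + known term
end

u0 = sin.(range(0, 2π, length = n))       # initial condition on the grid
prob = ODEProblem(semilinear!, u0, (0.0, 1.0), p0)
sol = solve(prob, Tsit5(), saveat = 0.1)
```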