I can tell you right now it won’t come close. PINNs are an extremely compute-wasteful methodology: they require many more parameters on low-dimensional problems than something more tuned like spectral elements, or even finite differencing. For reference, I’ve been discussing this with a researcher in the area, and we’ve now gotten a GPU-accelerated PINN solving the Lotka-Volterra equations to 3 decimal places in about 240 seconds, compared to a serial ODE solver hitting ~8 digits in a few microseconds. ODEs are the worst-case scenario for PINNs, but even the heat equation (with not too big of a domain) is a sub-second solve with finite difference methods versus a few GPU-minutes for a PINN. If there is good theory around the problem, PINNs aren’t good and won’t be good. They should be efficient when the PDE is high-dimensional (Schrödinger equations, Kolmogorov equations, … essentially things that evolve probability distributions over a large number of states), or when the PDE is non-local and good theoretical methods don’t exist (fractional PDEs, integro-differential equations, etc.)
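To make the classical-solver side of that comparison concrete, here’s a minimal sketch of the serial ODE solve in Python with SciPy (parameter values and tolerances are illustrative assumptions, not the exact setup from the benchmark mentioned above):

```python
# Classical ODE solve of Lotka-Volterra: du/dt = a*u - b*u*v, dv/dt = -c*v + d*u*v.
# Parameters here are illustrative placeholders.
import time
from scipy.integrate import solve_ivp

a, b, c, d = 1.5, 1.0, 3.0, 1.0

def lotka_volterra(t, y):
    u, v = y
    return [a * u - b * u * v, -c * v + d * u * v]

t0 = time.perf_counter()
sol = solve_ivp(lotka_volterra, (0.0, 10.0), [1.0, 1.0],
                rtol=1e-8, atol=1e-8)  # ~8-digit tolerances
elapsed = time.perf_counter() - t0
print(f"solved in {elapsed * 1e3:.2f} ms with {sol.t.size} steps")
```

Even in an interpreted language this finishes in milliseconds; a compiled serial solver on the same problem is in the microsecond range.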

The only reason I recommend it here is that it seems like the OP just wants one solution without much brainpower put into it. In that case, the use case is like Mathematica’s NDSolve, and PINNs are pretty good for building functionality like that (since it’s trivial to generate a loss function for “any” PDE), so the ModelingToolkit general PDE interface’s first generally usable method is a PINN. It’ll sap more electricity than it should, but it’ll let you describe the PDE symbolically and spit out something that can make plots. @stevengj brings up a really good point, though: you should manually check the result, because PINNs do not have good theoretical bounds on the solution error. In fact, the current results essentially just say that they converge, without good error bounds or a high convergence rate. These results also don’t tell you whether it has converged to the right solution, say a viscosity solution, so if you have a PDE with potential issues of that sort you will need to take more care (but this one is only semilinear, so if I’m not mistaken you can prove uniqueness under mild assumptions, like L2 of the added function, using a similar approach to what’s done on semilinear heat equations).
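To illustrate why generating a loss function for “any” PDE is trivial, here’s a bare-bones sketch (not NeuralPDE’s actual API; the hand-rolled network and finite-difference derivatives are just to keep it self-contained): the loss is nothing but the squared PDE residual evaluated at random collocation points, so retargeting a different PDE means swapping one line.

```python
# PINN-style loss sketch for the heat equation u_t = u_xx.
# The "network" is a tiny untrained tanh MLP; derivatives of the network
# output are taken by finite differences for simplicity (a real PINN
# would use automatic differentiation).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def u(x, t):
    # the neural surrogate for the PDE solution
    h = np.tanh(np.stack([x, t], axis=-1) @ W1 + b1)
    return (h @ W2 + b2)[..., 0]

def pde_residual(x, t, h=1e-4):
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - u_xx  # swap this line to target a different PDE

# loss = mean squared residual at random collocation points
# (plus boundary/initial-condition terms in a real setup)
x = rng.uniform(0.0, 1.0, 256); t = rng.uniform(0.0, 1.0, 256)
loss = np.mean(pde_residual(x, t) ** 2)
print(f"collocation loss: {loss:.4f}")
```

Training would then just minimize this loss over the network weights, which is why the approach automates so well from a symbolic PDE description.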

So there are a lot of caveats, but I think we can at least get error estimators on the result. This is still quite an active project, though.

You can’t really take a finer mesh with a PINN because it’s a mesh-free method. This is where the suggestion of random sampling in other ways comes from: indeed, you should quantify the error at points other than the ones the network was trained on. It turns out to be quite similar to adaptive Monte Carlo quadrature methods, which do have error estimators but also have hard-to-predict failure conditions (for example, if a thin region holds all of the mass and none of the initial points hit it, an adaptive MC quadrature method will calculate that the integral is zero with zero variance and exit…). That’s not necessarily the kind of behavior you will see in PDEs, but if a PDE has something like a kink in the domain, you might expect that kink to be where the maximal error is, while purely stochastic methods may never sample the loss there (or around there), so you do have to be careful. This is something we’re looking at a bit more closely.
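The thin-mass failure mode is easy to demonstrate (the spike width and sample count here are made up for illustration): integrate a spike of width 1e-9 centered at x = 0.5, whose true integral over [0, 1] is 1, with plain uniform Monte Carlo sampling.

```python
# Monte Carlo missing a thin mass: no sample lands in the spike, so both
# the estimate and the sample variance come out exactly zero, and an
# adaptive scheme watching the variance would declare convergence and exit.
import numpy as np

def spike(x, center=0.5, width=1e-9):
    # integrates to 1 over [0, 1], but is nonzero only on a width-1e-9 band
    return np.where(np.abs(x - center) < width / 2, 1.0 / width, 0.0)

rng = np.random.default_rng(42)
samples = spike(rng.uniform(0.0, 1.0, size=100))
estimate, variance = samples.mean(), samples.var()
print(f"estimate = {estimate}, variance = {variance}  (true integral = 1)")
```

The analogue for a PINN would be a residual that is huge in a tiny neighborhood (a kink, a boundary layer) that the collocation sampler never visits, so the reported loss looks great while the solution is wrong there.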

But even then, it’s not going to be efficient (on “standard PDEs”), just hopefully very automatic.