# Parallel computing and GPU support in the NeuralPDE.jl package

I’m going through the documentation that you shared, but I’m confused about how to convert the Lotka–Volterra example to suit my application, Chris. Could you please elaborate on this and guide me?

Chris, can you at least hint at how I can add the PDE system and the data to the code given in the documentation for UDEs (the documentation in the link that you shared)? Please…

```julia
# Define the hybrid model
function ude_dynamics!(du, u, p, t, p_true)
    û = U(u, p, _st)[1]  # Network prediction
    du[1] = p_true[1] * u[1] + û[1]
    du[2] = -p_true[4] * u[2] + û[2]
end

# Closure with the known parameters
nn_dynamics!(du, u, p, t) = ude_dynamics!(du, u, p, t, p_)

# Define the problem
prob_nn = ODEProblem(nn_dynamics!, Xₙ[:, 1], tspan, p)
```

How would I translate this idea to my PDE system, which consists of one continuity equation, two momentum equations, and an energy equation (a 2D system), along with data, Chris?

Hi Chris, can you explain how I can do the UDE formulation of the RANS equations with energy on a PDE discretization, please?
I have attached a screenshot of the equations I’m solving along with the BCs.

Can you at least point me to an example in the documentation that is more closely aligned with my problem specification, i.e. solving a PDE using a UDE formulation? I’m confused about the implementation. Thanks…

Just semi-discretize the UDE, like is done in the papers. See things like:
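To make "semi-discretize the UDE" concrete, here is a minimal sketch (not from the thread) of turning a 1D PDE into an ODE system via the method of lines, with the unknown source term replaced by a parameterized closure. In a real UDE that closure would be a neural network (e.g. a Lux chain); here a simple quadratic stand-in keeps the example package-free, and all names (`ude_pde_rhs!`, `nx`, `closure`, the coefficient values) are illustrative:

```julia
nx = 32               # number of interior grid points (illustrative)
dx = 1.0 / (nx + 1)   # grid spacing on (0, 1)
D  = 0.1              # known diffusion coefficient

# Stand-in for the neural network: a parameterized pointwise function.
closure(u, p) = p[1] * u + p[2] * u^2

function ude_pde_rhs!(du, u, p, t)
    for i in 1:nx
        uL = i == 1  ? 0.0 : u[i - 1]   # Dirichlet BC u = 0 at both ends
        uR = i == nx ? 0.0 : u[i + 1]
        # Known physics: second-order central difference for D * ∂²u/∂x²
        diffusion = D * (uL - 2u[i] + uR) / dx^2
        # Unknown physics: the learned closure term
        du[i] = diffusion + closure(u[i], p)
    end
    return nothing
end

# The semi-discretized system du/dt = f(u, p, t) is now an ordinary ODE
# system, so it can be wrapped and trained exactly like the Lotka–Volterra
# UDE above:
#   prob_nn = ODEProblem(ude_pde_rhs!, u0, tspan, p)
```

The same pattern extends to a 2D system of continuity, momentum, and energy equations: stack the discretized fields into one state vector and keep the known terms as stencil operations while the NN supplies only the unknown pieces.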

@ChrisRackauckas Can you just give me an example of how to do a UDE formulation on a set of PDEs and how to incorporate the neural networks for extrapolating accurately beyond the experimental data that I have?

See those links above. Or see:

@ChrisRackauckas I was going through the documentation, Chris, but I find it too overwhelming to grasp the program structure and the logic. Could you please point me to a simpler example, or any tutorial that explains step by step how to use scientific machine learning for extrapolation into the region where no data exists? Please?

Automatically Discover Missing Physics by Embedding Machine Learning into Differential Equations · Overview of Julia's SciML is a step-by-step tutorial, or follow the chromatography repo, which does exactly that on a PDE system.

@ChrisRackauckas So the paper in the chromatography repo uses a finite element PDE discretization scheme, right? Can I just use second-order accurate differences for all the terms in my equations?

You can use whatever discretization you want.
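Since the question above is specifically about second-order differences, a quick sanity check of the standard second-order central stencils may help; this is a generic illustration (helper names `d1_central`/`d2_central` are made up, not from any package):

```julia
# Second-order accurate central-difference stencils on a uniform grid:
# first derivative:  (u[i+1] - u[i-1]) / (2Δx)
# second derivative: (u[i+1] - 2u[i] + u[i-1]) / Δx²
d1_central(u, i, dx) = (u[i + 1] - u[i - 1]) / (2dx)
d2_central(u, i, dx) = (u[i + 1] - 2u[i] + u[i - 1]) / dx^2

# Sanity check against sin(x): d/dx sin = cos, d²/dx² sin = -sin.
dx = 1e-3
x  = 0.0:dx:1.0
u  = sin.(x)
i  = 501                     # interior index, x[i] == 0.5
@show d1_central(u, i, dx)   # ≈ cos(0.5)
@show d2_central(u, i, dx)   # ≈ -sin(0.5)
```

One caveat worth keeping in mind: for advection-dominated terms, pure central differences can oscillate, so an upwind-biased stencil is often substituted for those terms even when everything else is central.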

@ChrisRackauckas Why are only selected terms in the PDE replaced with neural networks as universal approximators, and what is the rationale behind choosing which term in the PDE to replace with a neural network?

Those are the pieces that you don’t know.

Because you know things.

@ChrisRackauckas Is there any way to parallelize the training of multiple networks in the NeuralPDE framework? If so, how can we do it to achieve maximum performance?

@ChrisRackauckas For GPU training, is it right that only the initial parameters and the experimental data (if available) need to be put on the GPU? We need not move the neural network chain onto the GPU?

@ChrisRackauckas If I have an additional loss term for the data, how do I program it so that it is computed on the GPU in the NeuralPDE package?

In the NeuralPDE documentation, GPU usage is explained only for the physics loss. It is not entirely clear how to include an additional data loss term in the training. I think if I put only the initial parameters onto the GPU, it throws an error: “LoadError: GPU compilation of MethodInstance for…failed!”.
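For what it’s worth, the pattern in the NeuralPDE docs is to pass an `additional_loss` function into `PhysicsInformedNN`, and the usual cause of the GPU compilation error described above is a CPU array captured inside that closure. A hedged sketch, assuming a Lux/CUDA setup (the data names `ts` and `u_data` are placeholders, and the exact `additional_loss` signature should be checked against the NeuralPDE version you have installed):

```julia
using NeuralPDE, Lux, LuxCUDA, ComponentArrays

# Move the experimental data to the GPU once, outside the loss closure,
# so that every loss evaluation stays on-device.
gdev   = gpu_device()
ts     = gdev(reshape(collect(Float32, 0.0:0.01:1.0), 1, :))  # sample points (placeholder)
u_data = gdev(rand(Float32, 1, size(ts, 2)))                  # measurements (placeholder)

# Data-mismatch loss evaluated entirely on the GPU: phi(ts, θ) runs the
# network on GPU arrays, so the broadcasted difference stays on-device.
additional_loss(phi, θ) = sum(abs2, phi(ts, θ) .- u_data) / size(ts, 2)

# Passed alongside the physics loss when building the discretization:
# discretization = PhysicsInformedNN(chain, strategy; additional_loss)
```

The key point is that both the parameters *and* every array the data loss touches must live on the GPU before training starts; mixing a GPU `θ` with CPU data arrays is what triggers the `GPU compilation ... failed` error.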

@ChrisRackauckas I would like to convey that the use case outlined in the ude_chromatography repo doesn’t use a system of equations (it involves only one PDE plus a boundary equation), and the way the data is fed to the neural network is a little confusing to translate to my use case, Chris. Can you please help? I’m not able to solve my problem efficiently using NeuralPDE and currently have to rely on alternative techniques. Also, I was looking at using MethodOfLines.jl to solve the problem, but how do I incorporate the experimental data into that? And does it scale well to more than two coupled PDEs with boundary conditions? The documentation doesn’t explain applying MethodOfLines to systems of PDEs. Is it possible to shed some light on this one, please?

```
┌ Warning: Using `gpu` inside performance critical code will cause massive slowdowns due to type inference failure. Please update your code to use `gpu_device` API.
└ @ Lux C:\Users\Sunda\.julia\packages\Lux\hlo4t\src\deprecated.jl:32
┌ Warning: No functional GPU backend found! Defaulting to CPU.
│ 1. If no GPU is available, nothing needs to be done.
│ 2. If GPU is available, load the corresponding trigger package.
└ @ LuxDeviceUtils C:\Users\Sunda\.julia\packages\LuxDeviceUtils\rMeCf\src\LuxDeviceUtils.jl:158
```
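The two warnings point at their own fixes: "No functional GPU backend found" means the trigger package (LuxCUDA for an NVIDIA GPU) was not loaded, so everything silently ran on the CPU, and the deprecation warning asks for `gpu_device` instead of `gpu` in hot code. A sketch of the setup, assuming Lux's device API (the example chain is illustrative; the `|> ComponentArray |> gdev` pattern follows the NeuralPDE GPU tutorial):

```julia
using Lux, LuxCUDA, ComponentArrays, Random  # LuxCUDA is the NVIDIA trigger package

# Create the device object once, outside any performance-critical code.
gdev = gpu_device()

chain  = Chain(Dense(2 => 16, tanh), Dense(16 => 1))  # illustrative network
ps, st = Lux.setup(Random.default_rng(), chain)

# Move the parameters (as a ComponentArray, for the optimizers) to the GPU
# once; the chain itself is just structure and needs no transfer.
ps_gpu = ps |> ComponentArray |> gdev .|> Float64
```

If `gpu_device()` still reports a CPU fallback after loading LuxCUDA, the CUDA installation itself is the problem (`CUDA.functional()` should return `true`).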

This is the issue I get when I run the exact code from the documentation on my NVIDIA GPU, and it’s massively slow, slower than the CPU, for the example problem outlined in the documentation for “Training NeuralPDE on the GPU”.

I kindly request you to look into this issue and suggest a solution, Chris…