Parallel computing and GPU support in the NeuralPDE.jl package

Hi @ChrisRackauckas, sorry for posting a somewhat off-topic question here. I was having the exact same problem as outlined in NeuralPDE features and GPU compatibility. Now that I have shifted to Flux, that problem seems to be solved.

However, I ran a matmul smoke test with `x = rand(100000, 100000); y = rand(100000, 100000)` and got 11.2 s of computation time on the CPU and an out-of-memory error on the GPU (an Nvidia RTX 3090 with 24 GB VRAM; each 100000×100000 Float64 matrix is already 80 GB, so the OOM is expected). With the matrix size reduced by an order of magnitude, the CPU was still faster than the GPU. Does that mean the GPU doesn't perform well here?

My governing equations have 9 variables, and I currently use 9 separate neural networks to predict them, but I'm unable to improve the performance of NeuralPDE for my specific problem. Is it possible to have a single neural network with 9 output neurons, instead of creating separate networks for each of the variables, within the NeuralPDE framework?

As advised by you, I visited the Chromatography repo, but their problem formulation is entirely different from mine, and I don't know how to adapt it to my specific case. What should I do? Thanks a lot in advance, Chris.
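To make the question concrete, here is a rough Flux sketch of what I have in mind: one chain whose output layer has 9 neurons, one per dependent variable, instead of 9 separate single-output chains. The input size of 3 and the layer widths are just placeholders for my actual problem, not the real setup:

```julia
using Flux

# Hypothetical sketch: a single network mapping the PDE's independent
# variables (here assumed to be 3 of them, e.g. t, x, y) to all 9
# dependent variables at once.
chain = Chain(
    Dense(3, 32, tanh),   # 3 inputs: the independent variables
    Dense(32, 32, tanh),  # hidden layer
    Dense(32, 9),         # 9 outputs: one per dependent variable
)

# Sanity check on a random batch of 10 collocation points:
# columns are points, rows are variables.
out = chain(rand(Float32, 3, 10))
@assert size(out) == (9, 10)
```

Would NeuralPDE's discretization accept something like this single chain in place of a vector of 9 chains, or does each dependent variable need its own network?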