Suppose you have two or more functions represented as neural nets, say f and g, and they are coupled by a differential equation. Suppose further that we already have an idea of how f should look, say a mock function f_m for f.
How would I:
1. Train f to f_m.
2. Solve/optimize the PDE for g, without updating f.
3. Then solve/optimize the PDE for f, without updating g.
Right now it always trains both at the same time, so this would require changes to the package (which would be interesting to explore).
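A minimal numerical sketch of those three steps, under loud assumptions: the coupling is taken to be the toy equation g'(x) = f(x) on D = [0, 1] with g(0) = 0, the mock is f_m(x) = 2x, and the two "networks" are linear-in-parameters models so each frozen-network solve collapses to a least-squares problem:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=200)  # sample points in D = [0, 1]

# Linear-in-parameters stand-ins for the two networks (assumption, for brevity):
#   f(x; a) = a0 + a1*x      g(x; b) = b0 + b1*x + b2*x^2, so g'(x; b) = b1 + 2*b2*x
f_feats  = lambda x: np.stack([np.ones_like(x), x], axis=1)
g_feats  = lambda x: np.stack([np.ones_like(x), x, x**2], axis=1)
dg_feats = lambda x: np.stack([np.zeros_like(x), np.ones_like(x), 2 * x], axis=1)

f_m = lambda x: 2.0 * x  # the mock function for f

# Step 1: train f to f_m (plain regression on sampled points).
a, *_ = np.linalg.lstsq(f_feats(D), f_m(D), rcond=None)

# Step 2: freeze f, solve the residual g'(x) - f(x) = 0 for g,
# with the boundary condition g(0) = 0 appended as one extra row.
A = np.vstack([dg_feats(D), g_feats(np.zeros(1))])
y = np.concatenate([f_feats(D) @ a, [0.0]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 3: freeze g, refit f against the frozen g' (here it just recovers f_m).
a, *_ = np.linalg.lstsq(f_feats(D), dg_feats(D) @ b, rcond=None)

print(np.round(b, 4))  # g(x) ≈ x**2, i.e. b ≈ [0, 0, 1]
```

With actual neural nets, steps 2 and 3 become gradient-based solves in which the frozen network's parameters are simply excluded from the optimiser.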
For step 1: if I want to train f to f_m over a domain D, is there a function that lets me do that easily? Naturally, it can be done by evaluating f_m on a discretization of D and then training on that data. I was just wondering whether there was an easier way, since discretizations suffer from the curse of dimensionality.
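On avoiding the grid: one standard workaround (not a package feature, just a sketch) is to regress f onto f_m at randomly sampled points of D, so the sample count is decoupled from the dimension, unlike a tensor-product grid with k**dim nodes. The mock function and the random-feature stand-in for the f network below are both illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 2                                 # the same code runs unchanged for larger dim
f_m = lambda x: x.sum(axis=1) ** 2      # assumed mock function on D = [0, 1]**dim

# Random collocation points instead of a grid: a grid with k nodes per axis
# costs k**dim evaluations; Monte Carlo sampling keeps it at n points for any dim.
n = 1000
X = rng.uniform(0.0, 1.0, size=(n, dim))

# Stand-in for the f network: one frozen random tanh layer,
# least squares on the output weights.
W = rng.normal(size=(dim, 64))
c0 = rng.normal(size=64)
H = np.tanh(X @ W + c0)
w, *_ = np.linalg.lstsq(H, f_m(X), rcond=None)

# Generalisation check on fresh samples from D
X_test = rng.uniform(0.0, 1.0, size=(200, dim))
err = np.abs(np.tanh(X_test @ W + c0) @ w - f_m(X_test)).max()
print(err)
```

The same sampling idea carries over directly when f is a real network trained by gradient descent: draw a fresh batch of points from D at each step instead of fixing a mesh.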
I would try splitting your system in two, with a registered interpolation providing the value of f to the first problem, which solves for g; then wrap that solution in a registered function and use it as the definition of g in the second.
Finding a good initialisation for f will be important here. Perhaps you can run this scheme recursively starting from a random initialisation, but a different low-order approximation would be better.
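That recursion can be sketched numerically. Every specific below is an illustrative assumption: the coupled system is taken to be g' = f, f' = -g on [0, 1] with f(0) = 1, g(0) = 0 (so the fixed point is f = cos, g = sin), each sub-solve is a polynomial least-squares problem with the other function frozen, and the initialisation is the low-order guess f ≈ 1:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 400)          # sample points in the domain
deg = 6                                 # polynomial stand-ins for both "networks"
P  = np.stack([x**k for k in range(deg + 1)], axis=1)
dP = np.stack([k * x**(k - 1) if k else np.zeros_like(x)
               for k in range(deg + 1)], axis=1)
P0 = np.array([[1.0] + [0.0] * deg])    # features evaluated at x = 0

cf = np.zeros(deg + 1)
cf[0] = 1.0                             # low-order initial guess f ≈ 1

for _ in range(12):
    # solve g' = f with g(0) = 0, f frozen (boundary condition as an extra row)
    cg, *_ = np.linalg.lstsq(np.vstack([dP, P0]),
                             np.concatenate([P @ cf, [0.0]]), rcond=None)
    # solve f' = -g with f(0) = 1, g frozen
    cf, *_ = np.linalg.lstsq(np.vstack([dP, P0]),
                             np.concatenate([-(P @ cg), [1.0]]), rcond=None)

err = max(np.abs(P @ cf - np.cos(x)).max(), np.abs(P @ cg - np.sin(x)).max())
print(err)
```

With neural nets in place of the polynomials, each lstsq call becomes a sub-problem in which the other function is registered as a fixed interpolation, as described above; starting from a low-order approximation rather than random coefficients is what makes the first g-solve meaningful.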