Difference-Differential system in ModelingToolkit.jl

Dear All,

I would like to ask for advice regarding ModelingToolkit.jl. I am working on a problem (a macroeconomic model) that reduces to the solution of a highly nonlinear dynamic system. Because of the high nonlinearity and the lack of theoretical results about the properties of the solution to that system, I would like to solve it using a neural network, as @ChrisRackauckas has recommended here many times for “weird” problems. I looked at some examples/tutorials for ModelingToolkit.jl, and I really like its syntax and its ability to autoparallelize on GPUs.

However, my system is a mix of nonlinear difference/recurrence equations and differential equations, some parts contain integrals (expectations) with respect to random variables, and these dynamic equations are coupled through a few algebraic/static equations.
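For concreteness, such a mixed system might take the following schematic form (the symbols here are just placeholders, not my actual model):

```latex
\begin{aligned}
x_{t+1} &= f\!\left(x_t,\; y(t),\; \mathbb{E}_{\varepsilon}\!\left[\,g(x_t,\varepsilon)\,\right]\right)
  && \text{(difference/recurrence part with an expectation)} \\
\dot{y}(t) &= h\!\left(x_t,\; y(t)\right)
  && \text{(differential part)} \\
0 &= c\!\left(x_t,\; y(t)\right)
  && \text{(algebraic/static coupling)}
\end{aligned}
```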

So, my question is whether ModelingToolkit.jl can handle this type of system, or whether I would have to write something on my own. If ModelingToolkit.jl can’t handle it, I would be grateful for any suggestions for packages/frameworks that could substitute for it, ideally as foolproof and parallel as possible.

As a second question, I would like to ask whether there is an easy way to parallelize this type of code on TPUs. I have seen some examples of Flux.jl running on TPUs, but I am not sure how robust/easy to use that support is.

Thanks in advance!


Hi. Any guidance on difference equations in ModelingToolkit? Is it able to handle difference equations or mixed difference-differential systems?

It’s not implemented yet but I’m hoping to try and get there ASAP. Differential-difference equations would likely not be very good on TPUs since BFloat16 numbers won’t be sufficiently accurate for most control systems.
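To illustrate the precision concern (a minimal sketch, not tied to any particular TPU runtime): BFloat16 keeps only 8 significand bits, so even modest integers stop being exactly representable. Keeping just the top 16 bits of a float32’s bit pattern approximates BFloat16 (real hardware rounds to nearest even rather than truncating, so this slightly overstates the error):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Approximate bfloat16 by keeping only the top 16 bits of the
    float32 representation (truncation; hardware rounds to nearest even)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (out,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return out

# 257 = 2^8 + 2^0 needs 9 significand bits; bfloat16 has only 8.
print(to_bfloat16(257.0))        # -> 256.0
# A perturbation of 1/256 on top of 1.0 is lost entirely:
print(to_bfloat16(1.0 + 1/256))  # -> 1.0
```

A relative error on the order of 1/256 per operation is far coarser than Float32’s ~1e-7, which is why feedback/control computations can go badly wrong in BFloat16.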


@ChrisRackauckas Thank you very much for your reply! I will try to fiddle with my own code in the meantime. So TPU training of the network approximating the solution function of my system isn’t a good idea? What about GPUs: do you think there is potential for GPU acceleration of the training, or would I have to do it on CPUs?

Regarding my system, it is a time-independent problem (typical in macro) whose dynamics are captured entirely by state variables, and those variables live on a compact subset of R^n. However, the system is quite nonlinear and includes a differential component.


Doing it via a physics-informed neural network could work if the stiffness is sufficiently low. GPU is fine though: Float32 is a lot easier to handle than BFloat16.

That does seem like something to try a PINN on, and it’s something we’re looking into.


@ChrisRackauckas Thank you very much for your guidance!