Acausal component based modelling including neural network components

Hello all,

A high level question.

I want to do acausal, component-based modelling as on the ModelingToolkit.jl docs.

In particular, I want some of the components to be neural networks, i.e. I want to build a ‘universal differential equation’, while other components are ordinary physical ones.

Is it a reasonable idea to build a symbolic, modelingtoolkit representation of a neural network? If so, what would be a good way of doing it? I haven’t seen an example of this anywhere in the documentation.

I was thinking of e.g. using SimpleChains to make a numerical representation of a neural network, and then converting it to a symbolic representation by passing symbolic variables through the network. Would this be the right approach?
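For concreteness, here is a minimal sketch of what I mean by tracing, using Symbolics.jl on a hand-written two-layer network (the weights below are arbitrary placeholders I made up for illustration, not a trained SimpleChains model):

```julia
using Symbolics

# Arbitrary placeholder weights for a tiny 2 -> 2 -> 1 MLP (not trained)
W1 = [0.5 -0.3; 0.1 0.8]
b1 = [0.0, 0.1]
W2 = [1.0 -1.0]   # 1x2 output layer
b2 = 0.2

@variables x[1:2]                  # symbolic inputs
h = tanh.(W1 * collect(x) .+ b1)   # hidden layer, traced symbolically
y = (W2 * h)[1] + b2               # symbolic scalar output expression
```

The result `y` is then an ordinary symbolic expression that could, in principle, be used in equations like any other.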

Thanks

You could, subject to a few things to keep in mind:

  • Most neural networks are constructed so that they are not invertible, i.e. the causality y = f(x) is fundamental to the function the network implements. There are invertible formulations of neural networks that could be used instead.
  • Symbolics scales somewhat poorly with the size and complexity of the functions you trace through, so a large neural network will quickly become a problem here. One solution is to @register_symbolic the function implemented by the network to prevent Symbolics from tracing through it, but then you are not really building a symbolic representation of the network.
  • It’s quite possible that what you need is not actually an acausal model, and that you are just looking for the component aspect, in the sense of blocks that can be assembled in a block diagram. The distinction is that blocks in a block diagram can only be used in one causal direction, whereas acausal components are more flexible. Depending on the goal you actually have in mind, the restriction to causal models may or may not be limiting.
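To illustrate the registration route from the second point above (the names here are placeholders; `nn_forward` stands in for whatever scalar function your trained network computes):

```julia
using Symbolics

# Placeholder for a trained network's forward pass (scalar in, scalar out)
nn_forward(x) = tanh(0.5x + 0.1)

# Tell Symbolics to keep calls to nn_forward opaque rather than tracing into them
@register_symbolic nn_forward(x)

@variables u
ex = nn_forward(u)   # stays a symbolic call, not an expanded tanh expression
```

After registration the expression tree contains a single `nn_forward(u)` node, so the symbolic layer never sees the network's internals.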

What are you trying to achieve?

Thanks!

Yes indeed, I wasn’t clear. I’m not looking for an invertible neural network, just a simple feedforward y = f(x), e.g. a multilayer perceptron. By the acausal part I meant ‘as a component in a block diagram incorporating feedback’.

I’d be very happy with a non-symbolic neural network component, as long as

  1. I can add the neural network as a component in a larger (ODE) system of ModelingToolkit components.
  2. I can optimise the parameters in the neural network.
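For concreteness, a toy version of the kind of component I have in mind, with the ‘network’ shrunk to a single neuron whose weight and bias are exposed as tunable ModelingToolkit parameters (everything here is my own illustration, not an established API):

```julia
using ModelingToolkit

@variables t x(t)
@parameters w b          # the "network" weights, exposed as tunable parameters
D = Differential(t)

# UDE-style ODE: known linear decay plus a one-neuron neural correction term
eqs = [D(x) ~ -x + tanh(w * x + b)]
@named sys = ODESystem(eqs, t)
```

Requirement (2) would then amount to optimising `w` and `b` against data, like any other system parameters.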

Indeed there are two interpretations. One is implementing a neural network in an acausal way. I’m not entirely sure that’s possible, as @baggepinnen describes.

The other interpretation is using a neural network as a component. In current ModelingToolkit it won’t scale all that well. But we do have a fix for this coming… it’s really soon and I just need to clean some stuff up for release. So stay tuned. It’s the same as the PDE scaling improvement.

Thanks! Yes the second interpretation is the relevant one for me.

If I implement it now (I only need very small NNs), would your fix sort out the scaling issues downstream later on? And if so, should I just push symbolic variables through a neural network? Or will there be a completely new set of functions for doing this, in which case I’ll have to wait.

Cheers!

Hi! Just following up on this: is there a fix, and if so, is there a recommended way to build ModelingToolkit components that are, or include, neural networks in their equations?
Thanks!

The core thing that was missing has been added, but we haven’t gotten it to the point of a first demo or tutorial on this yet. That’s what we are aiming for in the next month or so though.

Great, thanks. Look forward to it! I have a student on a rotation hoping to use the capability.