I am looking around for online algorithms to fit probabilistic models to time-series data. Neural Stochastic Differential Equations (a subclass of Universal Differential Equations, if I understood correctly) seem to be an especially interesting class of models.
I am new to the field, so my question may sound rather naive. Is there an efficient way to learn a Neural SDE incrementally, i.e. data point by data point? Ideally the Neural SDE representation should have constant memory, i.e. it should not need to store the data points themselves inside the SDE.
If SDEs are too complex, I would be glad for advice on online learning with other Neural DiffEq models.
You can effectively do online learning through data-shooting methods, where you take random data points as initial conditions and then solve to the next data point, or a few points into the future. Other forms of multiple shooting work as well. In general, though, for it to be completely correct you do need to fit on the whole time series, or on some randomly sampled subset of it.
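A toy sketch of this data-shooting idea, in Python with NumPy rather than a real Neural SDE library: the "model" is a single parameter `theta` in dx/dt = -theta·x instead of a neural network, the solver is fixed-step Euler instead of an SDE solver, and all names (`euler_solve`, `shooting_loss`) are made up for illustration — a sketch of the training scheme, not an actual implementation.

```python
# Toy illustration of data shooting: randomly chosen observed points serve
# as initial conditions, and we penalize the mismatch one observation ahead.
# A real Neural SDE would replace the drift with a neural network and use a
# proper (S)DE solver; all function names here are hypothetical.
import numpy as np

def euler_solve(x0, theta, dt, n_steps):
    """Integrate dx/dt = -theta * x from x0 with explicit Euler."""
    x = x0
    for _ in range(n_steps):
        x = x + dt * (-theta * x)
    return x

def shooting_loss(theta, ts, xs, idx, n_sub=10):
    """MSE over shooting segments: each segment starts at observed point
    idx[k] and is solved forward to the next observation."""
    loss = 0.0
    for i in idx:
        dt = (ts[i + 1] - ts[i]) / n_sub
        pred = euler_solve(xs[i], theta, dt, n_sub)
        loss += (pred - xs[i + 1]) ** 2
    return loss / len(idx)

# Synthetic data from the true model with theta = 1.5.
ts = np.linspace(0.0, 2.0, 21)
xs = np.exp(-1.5 * ts)

# Stochastic gradient descent on randomly drawn shooting segments, with
# finite-difference gradients to keep the sketch dependency-free.
rng = np.random.default_rng(0)
theta, lr, eps = 0.1, 5.0, 1e-5
for _ in range(2000):
    idx = rng.integers(0, len(ts) - 1, size=4)
    grad = (shooting_loss(theta + eps, ts, xs, idx)
            - shooting_loss(theta - eps, ts, xs, idx)) / (2 * eps)
    theta -= lr * grad
```

Each minibatch of segments only touches a handful of data points, which is what makes the scheme amenable to streaming; the fitted `theta` ends up close to the true 1.5.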
Thank you. As I understand it, for online learning I would still need a good loss function, e.g. mean squared error. If I learned using multiple shooting with MSE on top, I would somehow need a heuristic for when to stop learning, because otherwise gradient descent would keep updating until all my historic learning is lost and the parameters just fit the newest points.
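To make that forgetting concern concrete, here is a hypothetical streaming variant of the same kind of toy fit (again a single parameter `theta` in dx/dt = -theta·x with an Euler step, not a real Neural SDE): each arriving observation gets at most `inner_steps` gradient updates. On this stationary toy data every transition agrees on the same `theta`, so the cap is harmless; on real, drifting data, letting the inner loop run until the newest point is fit exactly is precisely what would erase the earlier fit, which is why some cap or stopping heuristic is needed.

```python
# Hypothetical streaming sketch: one capped burst of gradient steps per
# arriving observation, on the toy model dx/dt = -theta * x. The cap
# `inner_steps` stands in for the "when to stop learning" heuristic:
# without it, each new point could be fit exactly at the expense of
# everything learned from older points.
import numpy as np

def predict_next(x, theta, span, n_sub=10):
    """Euler-integrate dx/dt = -theta * x across one observation gap."""
    dt = span / n_sub
    for _ in range(n_sub):
        x = x + dt * (-theta * x)
    return x

def online_fit(ts, xs, inner_steps=50, lr=5.0, eps=1e-5):
    theta = 0.1
    for i in range(len(ts) - 1):          # data arrives point by point
        span = ts[i + 1] - ts[i]
        for _ in range(inner_steps):      # capped updates per new point
            lp = (predict_next(xs[i], theta + eps, span) - xs[i + 1]) ** 2
            lm = (predict_next(xs[i], theta - eps, span) - xs[i + 1]) ** 2
            theta -= lr * (lp - lm) / (2 * eps)
    return theta

ts = np.linspace(0.0, 2.0, 21)
xs = np.exp(-1.5 * ts)       # synthetic data, true theta = 1.5
theta = online_fit(ts, xs)
```

Note that only the current transition (xs[i], xs[i+1]) is ever held in memory, matching the constant-memory requirement from the original question.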
In a Bayesian setting I would have a well-behaved loss function such as the free energy. Maybe it is best to combine all three: Bayesian inference, neural networks, and SDEs. However, I was hesitant, since that would combine two different models of stochasticity.