I recall the previous DiffEqFlux.jl docs having an example of how to train a neural network as a controller for an ODE, essentially substituting the control term with a neural network call.
https://docs.juliahub.com/DiffEqFlux/BdO4p/1.10.1/examples/NeuralOptimalControl/
I can't see the same example in the newer docs. Is this no longer supported? Are there any good tutorials you can point to on how to set up such a system and learn a control given some objective function?
Apologies if this has already been addressed; I couldn't find a similar topic in a search of Discourse or the GitHub issues.
You could probably just look at the example here, which is about using a NN for model discovery, and instead of letting the NN model the missing dynamics, let it represent a control signal.
Instead of defining a loss based on the difference between predicted states and recorded states, you define a loss based on the difference between some desired trajectory and the predicted trajectory (or any other loss you want).
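For concreteness, here is a minimal sketch of that pattern with the current SciML stack. Everything here is illustrative rather than taken from the original tutorial: I'm assuming Lux.jl for the network, OrdinaryDiffEq.jl/SciMLSensitivity.jl for solving and differentiating, and Optimization.jl for training, and the double-integrator plant and quadratic tracking cost are just placeholder choices.

```julia
using Lux, OrdinaryDiffEq, SciMLSensitivity
using Optimization, OptimizationOptimisers, ComponentArrays
using Random

rng = Random.default_rng()

# Hypothetical controller: maps the 2-dimensional state to a scalar control u(x)
controller = Lux.Chain(Lux.Dense(2 => 16, tanh), Lux.Dense(16 => 1))
ps, st = Lux.setup(rng, controller)
p0 = ComponentArray(ps)

# Illustrative plant: a double integrator x'' = u, written out-of-place
# so the adjoint can use Zygote vector-Jacobian products
function controlled(x, p, t)
    u = first(controller(x, p, st)[1])
    return [x[2], u]
end

x0 = [1.0, 0.0]
tspan = (0.0, 5.0)
tsteps = range(tspan[1], tspan[2]; length = 51)
prob = ODEProblem(controlled, x0, tspan, p0)

# Objective: drive the state to the origin (desired trajectory = 0),
# i.e. penalize the difference between predicted and desired states
function loss(p)
    sol = solve(prob, Tsit5(); p = p, saveat = tsteps,
                sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()))
    return sum(abs2, Array(sol))
end

optf = OptimizationFunction((p, _) -> loss(p), Optimization.AutoZygote())
optprob = OptimizationProblem(optf, p0)
res = Optimization.solve(optprob, OptimizationOptimisers.Adam(0.01); maxiters = 200)
```

In the real tutorials the controller often also takes time as an input and the loss usually includes a control-effort term, but the structure is the same: the NN sits inside the ODE right-hand side, and you differentiate the solve with respect to its parameters.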
There are control examples in the SciMLSensitivity docs:
DiffEqFlux was restructured to be only about the pre-defined neural architectures, while everything that just uses sensitivity analysis on ODEs (including ODEs with neural networks inside them) lives in SciMLSensitivity.jl. The front page of the docs says this:
DiffEqFlux.jl is only for pre-built architectures and utility functions for deep implicit learning, mixing differential equations with machine learning. For details on automatic differentiation of equation solvers and adjoint techniques, and using these methods for doing things like calibrating models to data, nonlinear optimal control, and PDE-constrained optimization, see SciMLSensitivity.jl.
If that's unclear and should be updated to be clearer, please let me know.
Thanks, this showcase is super helpful!