# Neural ODEs and ODE parameter estimation

I’m trying to estimate the parameters of a set of coupled (stiff) differential equations from some limited data, and found out about Julia’s ODE suite.

Looking through some of the examples, such as Neural Ordinary Differential Equations · DiffEqFlux.jl, I have a rough sense of what is going on, but one thing I don’t understand is:

In neural ODEs you have parameters for the neural network, say theta, but how do you also optimize the parameters of the ODE itself, say phi?

Any example or link would be greatly appreciated.
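For reference, one common pattern (a sketch under assumptions, not an answer from this thread) is to pack the neural-network weights theta and the mechanistic ODE parameters phi into a single flat parameter vector, split it inside the right-hand-side function, and let the sensitivity machinery differentiate through the solve. All model details, sizes, and data below are made up for illustration:

```julia
# Sketch: jointly fit NN weights θ and physical parameters φ by optimizing
# one flat vector p = [θ; φ]. Assumes OrdinaryDiffEq, SciMLSensitivity,
# Zygote, and Optimisers are installed. Model and data are illustrative.
using OrdinaryDiffEq, SciMLSensitivity, Zygote, Optimisers

# tiny hand-rolled 2-8-2 MLP: 16 + 8 + 16 + 2 = 42 weights
function nn(u, θ)
    W1 = reshape(@view(θ[1:16]), 8, 2);  b1 = @view θ[17:24]
    W2 = reshape(@view(θ[25:40]), 2, 8); b2 = @view θ[41:42]
    W2 * tanh.(W1 * u .+ b1) .+ b2
end

function rhs!(du, u, p, t)
    θ = @view p[1:42]            # neural-network weights
    φ = @view p[43:44]           # mechanistic parameters, e.g. decay rates
    du .= nn(u, θ) .- φ .* u     # hybrid model: NN term plus known linear decay
end

u0 = [1.0, 0.5]
tsteps = 0.0:0.1:1.0
p = vcat(0.1 .* randn(42), [0.5, 0.5])   # initial guess for [θ; φ]
prob = ODEProblem(rhs!, u0, (0.0, 1.0), p)

data = rand(2, length(tsteps))           # placeholder "measurements"

loss(p) = sum(abs2, Array(solve(prob, Tsit5(); p = p, saveat = tsteps)) .- data)

opt = Optimisers.setup(Optimisers.Adam(1e-2), p)
for i in 1:100
    g = Zygote.gradient(loss, p)[1]      # adjoint handled by SciMLSensitivity
    opt, p = Optimisers.update(opt, p, g)
end
```

After optimization, `p[43:44]` holds the fitted physical parameters alongside the trained network weights; the gradient treats both identically.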

Additionally, if you have incomplete state data, how would you use a neural ODE? Say I have the system of ODEs:

```
dx1/dt = f1(x1, x2, x3, x4, t)
dx2/dt = f2(x1, x2, x3, x4, t)
dx3/dt = f3(x1, x2, x3, x4, t)
dx4/dt = f4(x1, x2, x3, x4, t)
```

and in my dataset I only have measurements of x1 and x2, how do I use the neural-ODE paradigm?
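For context, the usual trick (a sketch under assumptions, not from this thread) is to solve for the full state but compute the loss only on the observed components, here the first two rows of the solution array. Names, sizes, and data are illustrative:

```julia
# Sketch: neural ODE with partial observations — only x1 and x2 are measured,
# so the loss compares just the first two rows of the solution to the data.
# Assumes OrdinaryDiffEq, SciMLSensitivity, and Zygote are installed.
using OrdinaryDiffEq, SciMLSensitivity, Zygote

function node_rhs!(du, u, p, t)
    # stand-in for a neural network acting on the full 4-dimensional state
    W = reshape(p, 4, 4)
    du .= tanh.(W * u)
end

u0 = [1.0, 0.5, 0.2, 0.0]         # guesses for unobserved x3, x4 initial values
tsteps = 0.0:0.1:1.0
p0 = 0.1 .* randn(16)
prob = ODEProblem(node_rhs!, u0, (0.0, 1.0), p0)

data12 = rand(2, length(tsteps))  # placeholder measurements of x1 and x2 only

function loss(p)
    sol = Array(solve(prob, Tsit5(); p = p, saveat = tsteps))
    sum(abs2, sol[1:2, :] .- data12)  # penalize only the observed states
end

g = Zygote.gradient(loss, p0)[1]      # gradient w.r.t. p, observed states only
```

If the initial values of the unobserved states are also unknown, they can be appended to the parameter vector and optimized the same way.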

Thanks.

If you’re just looking for parameter estimation, then use the SciMLSensitivity docs, which cover sensitivity analysis and differentiation of ODE solves.
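As an illustration of that workflow (a sketch under assumptions, not taken from the docs verbatim), classic parameter estimation with SciMLSensitivity pairs a differentiable `solve` with Optimization.jl. The Lotka–Volterra model and synthetic data are placeholders:

```julia
# Sketch: plain ODE parameter estimation via SciMLSensitivity + Optimization.jl.
# Assumes OrdinaryDiffEq, SciMLSensitivity, Zygote, Optimization, and
# OptimizationOptimJL are installed. Model and data are illustrative.
using OrdinaryDiffEq, SciMLSensitivity, Zygote
using Optimization, OptimizationOptimJL

function lotka!(du, u, p, t)
    α, β, γ, δ = p
    du[1] =  α * u[1] - β * u[1] * u[2]
    du[2] = -γ * u[2] + δ * u[1] * u[2]
end

u0 = [1.0, 1.0]
tsteps = 0.0:0.5:10.0
p_true = [1.5, 1.0, 3.0, 1.0]
prob = ODEProblem(lotka!, u0, (0.0, 10.0), p_true)
data = Array(solve(prob, Tsit5(); saveat = tsteps))  # synthetic "measurements"

function loss(p, _)
    sol = solve(prob, Tsit5(); p = p, saveat = tsteps)
    sum(abs2, Array(sol) .- data)
end

optf = OptimizationFunction(loss, Optimization.AutoZygote())
optprob = OptimizationProblem(optf, [1.0, 1.0, 2.0, 1.0])  # initial guess
res = solve(optprob, BFGS())     # res.u should approach p_true
```

The same loss-plus-`AutoZygote` pattern extends to stiff problems by swapping `Tsit5()` for a stiff solver such as `Rodas5()`.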

I understand, but I am looking for something more along the lines of DiffEqFlux.jl – A Julia Library for Neural Differential Equations,

and more specifically:

```julia
Chain(
    Conv((2, 2), 1 => 16, relu),
    x -> maxpool(x, (2, 2)),
    Conv((2, 2), 16 => 8, relu),
    x -> maxpool(x, (2, 2)),
    x -> reshape(x, :, size(x, 4)),
    x -> solve(prob, Tsit5(), u0 = x, saveat = 0.1)[1, :],
    Dense(288, 10),
    softmax) |> gpu
```

In what situations would you say this method is more relevant than the example you linked to? Or is it a case of iterating over different methods, and these two are just examples of different approaches?

I don’t understand the question. DiffEqFlux is just a neural network architecture library. It uses SciMLSensitivity to train those architectures. If you only care about the training process, i.e. parameter estimation, then look at SciMLSensitivity.