Is there any way in Lux.jl or Flux.jl to create a function for a UDE and add a parameter, for instance the parameter `c` in the function `rhs!(...)` below, so that `c` is estimated together with the neural network weights?
```julia
function rhs!(du, u, p, t)
    û = dudt2(u, p, st)[1]
    du[1] = û[1] + log.(c ./ u[1])
    du[2] = û[2]
    du[3] = û[3]
end
```
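One way to sketch this (my wording, not from the thread) is to bundle the network parameters and `c` into a single `ComponentArray`, so that anything differentiating through `p` treats both as trainable. Here the network call is replaced by the identity and `nn_params` is a hypothetical stand-in for the output of `Lux.setup`:

```julia
using ComponentArrays

# Hypothetical stand-in for the Lux parameters of dudt2
nn_params = (weight = randn(3, 3),)
p = ComponentArray(nn = nn_params, c = [2.0])

function rhs!(du, u, p, t)
    û = u                              # stand-in for dudt2(u, p.nn, st)[1]
    du[1] = û[1] + log(p.c[1] / u[1])  # c now lives inside p
    du[2] = û[2]
    du[3] = û[3]
    return nothing
end

du = zeros(3); u = [1.0, 2.0, 3.0]
rhs!(du, u, p, 0.0)
# du[1] == 1.0 + log(2.0), du[2] == 2.0, du[3] == 3.0
```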
```julia
# `c` will have to be an array for it to be trainable
my_custom_model = @compact(; model, c = [2.0]) do x, ps
    # Note that state handling is automatic with `@compact`
    function rhs!(du, u, p, t)
        û = model(u, p.model)  # parameters accessible via p.<fieldname>
        du[1] = û[1] + log.(p.c[1] ./ u[1])  # p.c[1] since we set it up as an array
        du[2] = û[2]
        du[3] = û[3]
    end
    prob = ....
    return solve(prob, ....)
end

ps, st = Lux.setup(rng, my_custom_model)
# ps.c and ps.model are automatically populated
my_custom_model(x_input, ps, st)
```
The custom model code I shared already encapsulates the `ODEProblem`, so you don't need to define the predict function (you effectively created a neural ODE inside a neural ODE, if I understand correctly).

The null-parameters error is because you did not pass the parameters `p` to the `ODEProblem` constructor.
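For reference, a minimal sketch of that fix with a toy right-hand side (the names here are illustrative, not from the thread): the fourth positional argument of `ODEProblem` is what the solver forwards to `rhs!` as `p`, and omitting it leaves `p` as `NullParameters`.

```julia
using OrdinaryDiffEq

# Toy right-hand side; `p` is whatever is handed to ODEProblem below.
function decay!(du, u, p, t)
    du[1] = -p.c[1] * u[1]
end

u0 = [1.0]
p  = (; c = [2.0])  # a NamedTuple standing in for the Lux parameter struct
prob = ODEProblem(decay!, u0, (0.0, 1.0), p)  # omit `p` and decay! sees NullParameters
sol = solve(prob, Tsit5())
```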
It seems that when I give structure to the RHS, the performance becomes significantly worse than with a pure neural ODE. The model has a lot of difficulty estimating the parameter `c` and is highly sensitive to the initialization. Is there a way to train the network and the estimate of the parameter `c` separately?
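One pragmatic option (a sketch of my own, not from the thread) is alternating optimization: keep the weights and `c` in one parameter structure and, on each step, zero out the gradient block you don't want to update. The `loss` below is a hypothetical stand-in for the real UDE loss, which would solve the ODE and compare to data:

```julia
using Zygote, Optimisers

# Hypothetical stand-in loss: "network" term plus a term pulling c toward 3.0
loss(ps) = sum(abs2, ps.model) + abs2(ps.c[1] - 3.0)

function train(ps, opt_state; epochs = 400)
    for epoch in 1:epochs
        g = Zygote.gradient(loss, ps)[1]  # NamedTuple with fields model and c
        # Odd epochs update only the network weights, even epochs only c
        g = isodd(epoch) ? (model = g.model, c = zero(ps.c)) :
                           (model = zero(ps.model), c = g.c)
        opt_state, ps = Optimisers.update(opt_state, ps, g)
    end
    return ps
end

ps = (model = randn(4), c = [0.5])
ps = train(ps, Optimisers.setup(Adam(0.05), ps))
# ps.c[1] should approach 3.0 and ps.model should shrink toward zero
```

A coarser variant of the same idea is two-stage training: freeze `c`, fit the network, then freeze the network and fit `c`, which can help when `c` is poorly identified early in training.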