Hi everyone. I am trying to implement Deep Bayesian Model Discovery on the Lotka-Volterra model discussed in the Automated Discovery of Missing Physics example. The problem I am facing is that I cannot figure out a way to pass the parameters of the neural network embedded in the ODE of the Lotka-Volterra model to the Hamiltonian as done here. The main issue is that the Hamiltonian is fed a vector of parameters, which are updated naturally as the optimization is carried out. I am having trouble achieving the same with the missing physics example.

Any pointers as to how this can be achieved or existing code will be very helpful. Thanks.

Recently @Astitva_Aggarwal faced the same issue while working on Bayesian PINNs and worked around it with this function.

```julia
using Lux
using Functors: fmap

# Rebuild the nested NamedTuple of parameters `ps` from a flat vector `ps_new`.
function vector_to_parameters(ps_new::AbstractVector, ps::NamedTuple)
    @assert length(ps_new) == Lux.parameterlength(ps)
    i = 1
    function get_ps(x)
        z = reshape(view(ps_new, i:(i + length(x) - 1)), size(x))
        i += length(x)
        return z
    end
    return fmap(get_ps, ps)
end
```
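For context, here is a minimal round-trip sketch of how a helper like this might be used. The chain and input here are purely illustrative, and it assumes the flattening order of `ComponentArray` matches the leaf order `fmap` visits (which it should, since both follow the NamedTuple field order):

```julia
using Lux, ComponentArrays, Random

rng = Random.default_rng()
chain = Lux.Chain(Lux.Dense(2, 4, tanh), Lux.Dense(4, 2))
ps, st = Lux.setup(rng, chain)

# Flatten the NamedTuple of parameters into the plain vector the sampler sees...
flat = collect(Float64, ComponentArray(ps))

# ...and rebuild the NamedTuple the Lux chain expects from that vector.
ps_rebuilt = vector_to_parameters(flat, ps)
y, _ = chain([1.0, 2.0], ps_rebuilt, st)
```

Inside an HMC log-density, `vector_to_parameters` would be called on the sampler's current parameter vector before every forward pass through the chain.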

Basically, you want to be able to recreate the ComponentArray (or NamedTuple) from the vector when passing it to Lux chains. Since this will probably come up frequently when working with optimizer libraries and PPLs, I wonder if Lux should handle it automatically; until then it might be nice to put it somewhere in the docs. @Astitva_Aggarwal, if you can do that it will be pretty helpful to people!

Yeah will do!

Also @gsh19, if you are using Lux chains, then Lux.setup extracts the NN structure information as st (states) and p (a NamedTuple of initial parameter values), something like this:

```julia
dudt2 = Lux.Chain(x -> x.^3,
                  Lux.Dense(2, 50, tanh),
                  Lux.Dense(50, 2))

p, st = Lux.setup(rng, dudt2)

initialparams = collect(Float64, vcat(ComponentArrays.ComponentArray(p)))
```

Now `initialparams` is a vector of the initial parameters of the Lux chain used to create the NeuralODE. It can be used directly for the sampling in AdvancedHMC (pass it into `find_good_stepsize()`).
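To make the last step concrete, here is a sketch of feeding `initialparams` into AdvancedHMC. The log density `ℓπ` below is a stand-in (a standard normal); in the real problem it would wrap the ODE solve, rebuilding the NamedTuple from the sampler's vector via `vector_to_parameters` before each forward pass:

```julia
using AdvancedHMC, LinearAlgebra

# Stand-in log density and its gradient; AdvancedHMC expects the gradient
# function to return the (value, gradient) pair.
ℓπ(θ) = -0.5 * dot(θ, θ)
∂ℓπ∂θ(θ) = (ℓπ(θ), -θ)

initialparams = randn(10)  # placeholder for the flattened NN parameters

n = length(initialparams)
metric = DiagEuclideanMetric(n)
hamiltonian = Hamiltonian(metric, ℓπ, ∂ℓπ∂θ)

# Heuristic search for a good initial leapfrog step size.
ϵ0 = find_good_stepsize(hamiltonian, initialparams)
```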

In case you are using a Flux chain, then after the chain creation simply add:

```julia
θ, re = Flux.destructure(chain)
```

This returns θ (a vector of the initial parameters) and re (a reconstruction function that recreates the NN from a different set of parameters θi). θ can then be passed directly into find_good_stepsize() as before.
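A quick sketch of that destructure round trip (the chain here is illustrative, matching the Lux example above):

```julia
using Flux

chain = Flux.Chain(Flux.Dense(2, 50, tanh), Flux.Dense(50, 2))

# θ is a flat vector of all weights and biases; re(θ′) rebuilds an
# equivalent chain from any vector of the same length.
θ, re = Flux.destructure(chain)

chain2 = re(θ .+ 0.01f0)   # a chain with perturbed parameters
y = chain2([1.0f0, 2.0f0])
```

Inside the HMC log density, `re` plays the same role as `vector_to_parameters` does for Lux: it turns the sampler's flat vector back into a callable network.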

Thanks @Astitva_Aggarwal and @Vaibhavdixit02 for your inputs. Will implement this soon and will let you know how it goes. Really appreciate the help.

Hi @Astitva_Aggarwal @Vaibhavdixit02. I was using a Flux chain in my code, and `θ, re = Flux.destructure(chain)` did the trick for me. @Vaibhavdixit02, to your point, I think these subtle differences, and the fact that handling this is not so straightforward in Lux, do need to be covered in the tutorials.