I am currently working on a Reinforcement Learning problem which requires me to form and store matrices of gradients and then update the parameters of a Neural Network manually.

Basically, what I have done is take the gradient of a loss function with respect to the parameters of a Neural Network and then reshape this gradient into a column vector (which I need for my purposes). I repeat this a couple of times and end up with a vector w that I want to use in an update, i.e. my update rule is

```
θ = θ + α*w
```

where θ denotes the parameters of the network. Due to the way Flux works, the parameters are stored in a special structure, so I do not necessarily have access to the parameters as a single vector.
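In other words, the update I would like to perform on Flux's parameter structure is roughly the following. Here `pieces` is hypothetical: it stands for w split back into one array per parameter, which is exactly the step I cannot get right:

```
ps = Flux.params(model)            # parameters as Flux stores them
α = 0.01                           # step size
# `pieces` would hold w cut into blocks, one per parameter, each reshaped
# to that parameter's size -- producing it in the right order is my problem
for (p, wp) in zip(ps, pieces)
    p .+= α .* wp                  # blockwise version of θ = θ + α*w
end
```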

The way I obtained the gradients as a vector is:

```
function gradcollector(gs)
    gradients = []
    # gs.grads is an IdDict, so this iteration order is arbitrary
    for (p, g) in gs.grads
        push!(gradients, g)
    end
    # flatten all gradient arrays into one long vector
    return collect(Iterators.flatten(gradients))
end
```

where gs is obtained by doing something like

```
ps = Flux.params(model)
gs = Flux.gradient(() -> loss(input), ps)
```
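Put together, a minimal runnable version looks like this (the `Dense` model and the loss are just stand-ins for my actual ones):

```
using Flux

model = Dense(3, 2)                             # stand-in for my network
loss(x) = sum(model(x))                         # stand-in loss
ps = Flux.params(model)
gs = Flux.gradient(() -> loss(rand(Float32, 3)), ps)

w = gradcollector(gs)   # one flat vector containing all gradient entries
```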

I tried to manually reshape w back into the form that the parameters have, by iterating through the shapes of the gradients and reshaping each slice into the corresponding matrix, but I cannot even do that, since the order in which the parameter matrices are stored in `Flux.params` is different from the order in which the gradients come out.
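Concretely, my attempt looked roughly like this, cutting w into consecutive blocks and reshaping each one to a parameter's size:

```
offset = 0
pieces = []
for p in ps                        # but ps does not match the order of gs.grads!
    n = length(p)
    push!(pieces, reshape(w[offset+1:offset+n], size(p)))
    offset += n
end
```

Since the block boundaries come from iterating `ps` while w was built by iterating `gs.grads`, the slices end up assigned to the wrong parameters.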

Any help on how to fix this problem would be greatly appreciated.