Hi,
I'm trying to feed the output of the output layer of a fully feedforward neural network back to the input layer of the network. I would like to create a structure similar to the one in the illustration.
Detailed explanation: Imagine you have a technical system with input u and output y, and you would like to train a nonlinear, dynamic, data-based model that approximates the system behaviour. You have already excited the system and gathered the measurements in two time series, one for the input u and one for the output y. The q^{-1} blocks in the illustration are unit delays, i.e. they delay a time series by one discrete time step. Now I would like to feed a fully feedforward neural network (e.g. with one dense hidden layer) with the delayed time series of the input u and the delayed time series of the model output \hat{y} (the output of the neural network). In other words, I would like to feed the delayed output of the neural network back to the input layer of the neural network.
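Written out for the concrete case I implement below (one delayed input and three fed-back model outputs), the predictor I have in mind is roughly \hat{y}(k) = f(u(k-1), \hat{y}(k-1), \hat{y}(k-2), \hat{y}(k-3)), where f is the feedforward network and the delays are numbered as in the illustration.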
I started to implement such a structure and I think it is correct (here a single-input single-output system with 3 states for the model output and one delay of the input signal), but if I now call ps = Flux.params(model), the ps variable is empty. Does anybody know why? Furthermore, I would like to know whether there is a more elegant way to stack the layers, also for custom layers where the layers aren't directly connected or the feedback is contained in a cell.
using Flux

# Output cell that stores the last `states` model outputs in `h`,
# so they can be fed back to the input layer on the next call.
mutable struct RNNCellNOE
    W
    b
    h
    σ
end

RNNCellNOE(in::Integer, out::Integer, states::Integer, σ = tanh) =
    RNNCellNOE(randn(out, in), randn(out), zeros(states), σ)

function (m::RNNCellNOE)(x)
    σ, W, b, h = m.σ, m.W, m.b, m.h
    # shift the stored outputs back by one time step ...
    for i in 2:length(h)
        h[i-1] = h[i]
    end
    # ... and store the new scalar model output
    h[end] = σ.(W*x .+ b)[1]
    return h[end], h
end

# the hidden layer sees the current input plus the 3 fed-back outputs (4 values in total)
layer1 = Dense(4, 10, σ)
rnn_noe = RNNCellNOE(10, 1, 3)

# feedback: concatenate the input with the stored model outputs
model(x) = rnn_noe(layer1(vcat(x, rnn_noe.h)))

ps = Flux.params(model)  # <- this comes back empty

# input time series: a zero followed by 99 random samples (1×100 row vector)
u = vcat([0], rand(99, 1))'
y = model.(u)
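As a quick check to narrow the problem down, calling params on the Dense layer alone does return its weight and bias, so the issue seems to be related to model being a plain function and to my custom struct:

# params of the Dense layer alone are found (weight and bias),
# while params of the plain function `model` and of `rnn_noe` are not
ps_layer = Flux.params(layer1)
ps_cell  = Flux.params(rnn_noe)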