My network is as follows:

- LSTM Encoder (takes a sequence of *n* lags and outputs a hidden vector)
- Fully Connected Dense Layer (to create a context vector)
- Decoder network (LSTM layer that processes the context vector for *m* lags)
I am having trouble with the decoder part. Basically, I need to repeat my context vector and then apply the LSTM recursively over it. Any suggestions on how to do that would be helpful. Is there a better method for multi-step output?
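To make the setup concrete, here is a minimal sketch of what I am trying, assuming Keras (tf.keras); the layer sizes and the values for `n_lags`, `m_steps`, and `n_features` are placeholders, not my actual settings. Is repeating the context vector with `RepeatVector` like this the right way to drive the decoder, or is there a better approach?

```python
# Minimal sketch of the encoder / context / decoder idea, assuming Keras
# (tf.keras) and input shaped (samples, n_lags, n_features). All sizes
# below are placeholders.
from tensorflow.keras.layers import Input, LSTM, Dense, RepeatVector, TimeDistributed
from tensorflow.keras.models import Model

n_lags = 24      # length of the input sequence (assumption)
m_steps = 12     # number of output steps to forecast (assumption)
n_features = 1   # features per time step (assumption)

# Encoder: consume n_lags steps and keep only the final hidden vector.
inputs = Input(shape=(n_lags, n_features))
encoded = LSTM(64)(inputs)

# Dense layer producing the context vector.
context = Dense(32, activation="relu")(encoded)

# Repeat the context vector m_steps times so the decoder LSTM
# receives it at every output time step.
repeated = RepeatVector(m_steps)(context)

# Decoder: LSTM returning one output per step, mapped to the target size.
decoded = LSTM(64, return_sequences=True)(repeated)
outputs = TimeDistributed(Dense(n_features))(decoded)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()
```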