Hi, I have some trouble coding a multilayer perceptron using Flux. My data is given in the form

$D_t = \{(x_i, y_i)\}_{i=1}^n$, where $x_i$ is a scalar, $y_i$ is a $3$-dimensional vector, and $t = 1, \ldots, T$ indexes the time steps. One should notice that $(x_1, y_1)$ in $D_1$ and $(x_1, y_1)$ in $D_2$ correspond to each other, i.e. the latter is the same data point at the next time step. For a single time step ($T = 1$), I constructed the data for my neural network as follows:

```julia
using Flux

nsamples = 10; in_features = 1; out_features = 3;
X = randn(Float32, in_features, nsamples);   # inputs:  one scalar feature per sample
Y = randn(Float32, out_features, nsamples);  # targets: one 3-dim vector per sample
data = Flux.DataLoader((X, Y), batchsize=4); # automatic batching convenience
X1, Y1 = first(data);
@show size(X1) size(Y1)                      # (1, 4) and (3, 4)
```

Now I want to extend the network to T > 1. My first idea was to reshape the data, but I could not find a way to extract the per-timestep results later on. My other idea was to increase the number of input and output features, so I used
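To make the reshaping idea concrete, here is a minimal sketch of what I mean (pure Julia, no Flux needed; the names `Ys`, `Y3`, `Yflat` are just my own placeholders): stack the T target matrices along a third axis, flatten time into the feature axis for the network, and then try to recover the per-timestep slices afterwards.

```julia
# Sketch of the reshaping idea: stack T datasets, flatten, then undo it.
T = 5
out_features = 3
nsamples = 10

# One (out_features, nsamples) target matrix per time step
Ys = [randn(Float32, out_features, nsamples) for t in 1:T]

# Stack into a 3-D array and flatten time into the feature axis
Y3 = cat(Ys...; dims=3)                      # (out_features, nsamples, T)
Yflat = reshape(permutedims(Y3, (1, 3, 2)),  # (out_features*T, nsamples)
                out_features * T, nsamples)

# Undo the flattening to extract the result for a single time step t
Yback = permutedims(reshape(Yflat, out_features, T, nsamples), (1, 3, 2))
@assert Yback[:, :, 2] == Ys[2]              # time step t = 2 recovered
```

This round-trips in this toy case, but I found it error-prone to keep the `permutedims`/`reshape` bookkeeping straight once the network is in the middle.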

`in_features = T`

But for the output features I could not find an analogous trick, since in my case the output dimension already encodes the 3 components of $y_i$. So I would actually need something like

`out_features = (T, 3)`

But I am pretty sure that this does not work. Can someone suggest a better strategy for this kind of problem?
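For context, the workaround I am currently experimenting with (not sure it is idiomatic) is to give the network `3 * T` output features and reshape its `(3T, batch)` output into `(3, T, batch)` afterwards. A sketch, with a plain matrix standing in for the `Dense` layer:

```julia
# Workaround sketch: flatten the (3, T) output into 3T features, reshape back.
T = 5; out_features = 3; in_features = 1; batch = 4
W = randn(Float32, out_features * T, in_features)   # stand-in for Dense(1, 3T)
x = randn(Float32, in_features, batch)
yhat = W * x                                        # (3T, batch)
Y = reshape(yhat, out_features, T, batch)           # (3, T, batch)

# Column-major reshape puts the 3 components of time step t in rows 3t-2:3t
@assert Y[:, 2, :] == yhat[out_features+1:2out_features, :]
```

Is this the intended way to handle multi-dimensional outputs, or is there a cleaner approach?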