I'm starting to work with Flux and I have a neural network with 63 inputs, 2 hidden layers, and 1 output. Each input and output is a vector of 113344 elements, of which 78801 belong to the training set and the remainder to the test set.

I trained the network for 5000 epochs, and it took around 8 hours to complete. To reduce this time, I thought about using early stopping and training on mini-batches, but I'm not sure how to do either in Flux. At the moment I'm using the following code to train the network:

using Flux

function loss_training(x_train::Array{Float64,2}, y_train::Array{Float64,2})
    # 63 inputs -> two hidden layers of 63 units -> 1 output
    model = Chain(Dense(63, 63, sigmoid), Dense(63, 63, sigmoid), Dense(63, 1, sigmoid))
    loss(x, y) = Flux.mae(model(x), y)
    ps = Flux.params(model)
    # Flux expects features along the first dimension, hence the transposes
    dataset = [(x_train', y_train')]
    opt = ADAGrad()
    cb = () -> println(loss(x_train', y_train'))
    Flux.@epochs 5000 Flux.train!(loss, ps, dataset, opt, cb = cb)
    y_hat = model(x_train')'
    return y_hat, model
end

y_hat, model = @time loss_training(x_train, y_train)

y_test_hat = @time model(x_test')'
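Something like the sketch below is what I had in mind, combining `Flux.Data.DataLoader` for mini-batches with a patience-based stop, but I'm not sure it's the idiomatic way. The `batchsize` and `patience` values are just guesses, and it assumes a Flux version where `Flux.Data.DataLoader` is available:

```julia
using Flux

# Sketch only: mini-batch training with manual early stopping.
# batchsize and patience are arbitrary placeholder values.
function train_batched(x_train, y_train; epochs = 5000, batchsize = 1024, patience = 10)
    model = Chain(Dense(63, 63, sigmoid), Dense(63, 63, sigmoid), Dense(63, 1, sigmoid))
    loss(x, y) = Flux.mae(model(x), y)
    ps = Flux.params(model)
    opt = ADAGrad()
    # features along the first dimension, hence the transposes;
    # shuffle the mini-batches each epoch
    data = Flux.Data.DataLoader((x_train', y_train'), batchsize = batchsize, shuffle = true)
    best = Inf
    bad_epochs = 0
    for epoch in 1:epochs
        Flux.train!(loss, ps, data, opt)   # one pass over all mini-batches
        l = loss(x_train', y_train')
        println("epoch $epoch: loss = $l")
        if l < best
            best = l
            bad_epochs = 0
        else
            bad_epochs += 1
            # stop after `patience` consecutive epochs without improvement
            bad_epochs >= patience && break
        end
    end
    return model(x_train')', model
end
```

I monitor the training loss here only because that's what my current callback prints; presumably early stopping should really watch a held-out validation loss instead, so the stop criterion reflects generalization rather than fitting.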