My own Feedforward Neural Network library :-)

Even better, with the new scaling/rescaling of y (in the master branch):

using BetaML.Nn, Plots, Random
Random.seed!(123)

# Generate training and test data for a 1-D regression problem
xtrain = pi*rand(1000)
ytrain = sin.(xtrain) + 0.5 * cos.(xtrain)
xtest  = pi*rand(200)
ytest  = sin.(xtest) + 0.5 * cos.(xtest)

# Two dense layers: 1 → 3 (tanh) → 1 (identity)
all_layers = [DenseLayer(1,3,f=tanh,df=dtanh),
              DenseLayer(3,1,f=identity,df=didentity)]
myfnn = buildNetwork(all_layers,squaredCost,dcf=dSquaredCost);

# Compute the scale factors once on the training data, so the same
# transformation can be applied consistently to the test data
xScaleFactors = getScaleFactors(xtrain)
yScaleFactors = getScaleFactors(ytrain)

# Train on the scaled inputs and outputs
train!(myfnn,scale(xtrain,xScaleFactors),scale(ytrain,yScaleFactors),epochs=100,batchSize=8)

# Predict on the scaled test inputs, then rescale the output back (rev=true)
y_pred = scale(predict(myfnn,scale(xtest,xScaleFactors)),yScaleFactors,rev=true)

# Sort by the true values for a readable plot
sortIdx     = sortperm(ytest)
sortedYtest = ytest[sortIdx]
sortedYpred = y_pred[sortIdx]
plot(1:size(ytest,1),[sortedYtest  sortedYpred],label=["ytest" "y_pred"])
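For readers curious what the scaling does conceptually: a common choice (and what I'd sketch here as an assumption — check the BetaML source for the exact definition) is standardisation to zero mean and unit variance, with the same stored factors used to reverse the transform on the predictions:

```julia
# Hypothetical sketch of a scale/rescale pair à la getScaleFactors/scale.
# The names getfactors/rescale are illustrative, not the BetaML API.
using Statistics

# Store the per-dataset factors (mean and standard deviation)
getfactors(x) = (mean(x), std(x))

# Forward: standardise; rev=true: map standardised values back
function rescale(x, factors; rev=false)
    (m, s) = factors
    return rev ? x .* s .+ m : (x .- m) ./ s
end

x       = [1.0, 2.0, 3.0, 4.0]
factors = getfactors(x)
xs      = rescale(x, factors)            # zero mean, unit variance
xback   = rescale(xs, factors, rev=true) # recovers the original x
```

The key point is that the factors are computed once on the training set and reused on the test set, so train and test data go through exactly the same transformation.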

:slight_smile: :slight_smile: :slight_smile: :slight_smile:

(PS: using two-sided scaling also improves the original 5-dimensional problem, giving an average relative error (l-1) of 8%)
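For reference, one common way to compute such an l-1 relative error is the sum of absolute errors over the sum of absolute true values (a sketch under that assumption — the exact metric I used may differ):

```julia
# Illustrative definition of a mean relative error in l-1 norm
# (an assumption — not necessarily the exact formula behind the 8% figure)
relativeMeanError(y, ŷ) = sum(abs.(ŷ .- y)) / sum(abs.(y))

y = [2.0, 4.0, 5.0]
ŷ = [2.1, 3.8, 5.2]
relativeMeanError(y, ŷ)  # 0.5/11 ≈ 0.045, i.e. ~4.5%
```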