Dense wants either a vector (for one sample) or a matrix (whose columns are many samples), but it's getting a vector of tuples. One way to convert is this:
julia> rawInputs
100-element Vector{Tuple{Float32, Float32}}:
 (-0.953125, -0.203125)
 (0.046875, 0.796875)
 (0.546875, -0.703125)
 ...

julia> stack(rawInputs)
2×100 Matrix{Float32}:
 -0.953125  0.046875   0.546875  -0.453125  …  -0.882812  0.117188   0.617188  -0.382812
 -0.203125  0.796875  -0.703125   0.296875     -0.867188  0.132812  -0.367188   0.632812

julia> size(ans, 1) == 2
true
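To confirm that a layer really does treat those columns as separate samples, here's a minimal check. The Dense(2 => 1) below is just a hypothetical first layer with the right input width; your actual model's sizes may differ:

julia> layer = Dense(2 => 1);  # hypothetical layer: 2 features in, 1 out

julia> size(layer(stack(rawInputs)))  # one output column per input sample
(1, 100)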
The way you have written training is the old "implicit" style; I'd recommend writing it like this instead (see the docs here for more):
julia> loss(m, x, y) = Flux.Losses.mse(m(x), y); # takes model as explicit argument
julia> opt = Flux.setup(Descent(lr), model);  # the optimiser state is really only necessary for other opt rules
julia> train_data = [(stack(rawInputs), stack(rawOutputs))];
julia> @time for ii in 1:epochs
           Flux.train!(loss, model, train_data, opt)
       end
0.483642 seconds (375.89 k allocations: 1.744 GiB, 23.82% gc time, 14.30% compilation time: 100% of which was recompilation)
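For reference, each train! call above does roughly the following, written out with explicit gradients (a sketch using the same loss, model, train_data and opt as above; the real train! also adds logging and some error checks):

julia> for (x, y) in train_data
           grads = Flux.gradient(m -> loss(m, x, y), model)  # gradient w.r.t. the model itself
           Flux.update!(opt, model, grads[1])                # mutate model & optimiser state
       end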