The following should work.
First, introduce some packages…
# Packages
using Flux             # model layers, optimizers, and training utilities
using Plots; pyplot()  # plotting with the PyPlot backend
using LaTeXStrings     # L"..." strings for LaTeX labels in plots
using Statistics       # provides mean(), used in the loss function
Next, the data that you wish to fit a model to:
X = rand(100)          # 100 uniformly distributed x values in [0, 1)
Y = 0.5X + rand(100)   # linear trend with slope 0.5 plus uniform noise
plot(X,Y,st=:scatter,label=L"y")
plot!(xlabel=L"x",ylabel=L"y",title="Data for linear model")
The data may look as follows:
Next, put your data in data arrays of a shape that Flux can use:
# Preparing data in correct data structure
Xd = reduce(hcat,X)    # 1×100 matrix: one feature per row, one observation per column
Yd = reduce(hcat,Y)    # 1×100 matrix of targets
data = [(Xd,Yd)]       # iterable of (input, output) tuples, as Flux.train! expects
Next, set up the Flux model:
# Set up Flux problem
#
# Model
mod = Dense(1,1)
# Initial mapping (Tracker.data strips the tracking; in Zygote-based Flux ≥ 0.10
# there is no Tracker, and mod(Xd) already returns a plain array)
Yd_0 = Tracker.data(mod(Xd))
# Setting up loss/cost function
loss(x, y) = mean((mod(x).-y).^2)
# Selecting parameter optimization method
opt = ADAM(0.01, (0.99, 0.999))
# Extracting parameters from model
par = params(mod);
Comments:
- Since you have a monovariable, linear (affine) mapping, a single layer (no hidden layers) with a linear activation function (the default) is sufficient. `Dense` is the Flux name for the standard feedforward neural net (FNN) block.
- `Yd_0` is the mapping from x to y with the initial (randomly generated) set of model parameters in model `mod`.
- In the last line, I name the parameters `par` so that I can refer to them in the next code block, where I train the "network" (the linear model). A sketch for inspecting these parameters directly follows this list.
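If you want to look at the coefficients of the affine map themselves, you can read them off the layer at any time (before training they are the random initial values, after training the fitted ones). A minimal sketch, assuming the Tracker-based Flux version used above, where a `Dense` layer stores its parameters in the fields `W` and `b` (newer, Zygote-based Flux releases call them `weight` and `bias` and need no `Tracker.data`):

# Inspect the model parameters of the affine map y = W*x + b
W_est = Tracker.data(mod.W)   # 1×1 weight matrix (slope)
b_est = Tracker.data(mod.b)   # length-1 bias vector (intercept)
println("slope: ", W_est[1], ", intercept: ", b_est[1])

After training, the slope should end up roughly at 0.5, and, because the uniform noise has mean 0.5, the intercept should also end up roughly at 0.5.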
Next, you need to train the model against the data; one pass through the training data is denoted an "epoch":
# Training over nE epochs
nE = 1_000
for i in 1:nE
    Flux.train!(loss,par,data,opt)   # one pass over data (a single batch here) per epoch
end
# Final mapping
Yd_nE = Tracker.data(mod(Xd));
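If you want to verify that the loss actually decreases, here is a minimal variant of the training loop above that logs the loss every 100 epochs. It reuses the `loss`, `Xd`, `Yd`, `par`, `data`, and `opt` objects defined earlier; `Tracker.data` only strips the tracking from the reported value in this Flux version:

# Training with simple loss logging (variant of the loop above)
for i in 1:nE
    Flux.train!(loss, par, data, opt)
    if i % 100 == 0
        println("epoch ", i, ": loss = ", Tracker.data(loss(Xd, Yd)))
    end
end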
Here, `Yd_nE` is the mapping from x to y with the model parameters as they are after `nE` epochs:
plot(X,Y,st=:scatter,label=L"y")
plot!(Xd',Yd_0',lc=:green,label=L"y_0")
plot!(Xd',Yd_nE',lc=:red,label=L"y_{n_\mathrm{E}}")
plot!(xlabel=L"x",ylabel=L"y",title="Data for linear model")
… and then the result:
Of course, in this case, it would be much simpler and faster to solve the model using linear algebra.
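For comparison, here is a sketch of the ordinary least-squares solution via Julia's backslash operator, reusing the `X` and `Y` vectors from above (the names `A` and `θ` are just illustrative):

# Ordinary least squares: solve for intercept and slope directly
A = [ones(length(X)) X]   # design matrix: a column of ones plus the x values
θ = A \ Y                 # least-squares solution: θ[1] ≈ intercept, θ[2] ≈ slope
println("OLS intercept: ", θ[1], ", OLS slope: ", θ[2])

Up to numerical details, this is the same line that the trained `Dense(1,1)` layer converges towards.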