using CSV, DataFrames
using Flux, Statistics, ProgressMeter
using Plots
using CUDA, AMDGPU
# Read the CSV file into a DataFrame
df = CSV.File("real_values_ex_00_fixed_time_step.csv") |> DataFrame
# Extract the temperature and time steps vectors
Temperatura = df[1:10:end, 2]
tsteps = df[1:10:end, 1]
# Normalize temperature between 0 and 1
Temperatura= (Temperatura .- minimum(Temperatura)) ./ (maximum(Temperatura) - minimum(Temperatura))
# Convert vectors into matrices
Temperatura = reshape(Temperatura, 1, :)
tsteps = reshape(tsteps, 1, :)
# Example: Using DataLoader
# Assuming `tsteps` are features and `Temperatura` are labels
loader = Flux.DataLoader((tsteps, Temperatura))
model = Flux.Chain(Flux.Dense(1 => 23, tanh), Flux.Dense(23 => 23, tanh), Flux.Dense(23 => 1, bias=false))
opt_state = Flux.setup(Flux.OptimiserChain(Flux.WeightDecay(0.42), Flux.Adam(0.01)), model)
loss(ŷ, y) = mean(abs2.(ŷ .- y))  # mean squared error on predictions
for epoch in 1:100
    Flux.train!(model, loader, opt_state) do m, x, y
        loss(m(x), y)  # apply the model once, then compare to the labels
    end
end
out2 = model(tsteps)
plot(tsteps[1,:], Temperatura[1,:], label="Real values")
plot!(tsteps[1,:], out2[1,:], label="Predicted values")
Hi Mak, it’s not clear what you are trying to ask. Maybe take a look here: “Please read: make it easier to help you”.
I have temperature and date data, and my goal is to train an MLP neural network in Flux for future temperature prediction.
And the question is…
I am new to Julia and would like to know how to improve performance; it seems to me that after a while my training reaches a minimum.
Is there a reason to expect more performance out of this simple network?
We don’t really know what data you have, or how well it performs. It might be easier to give feedback if you provide more details.
Is there a reason you don’t use a recurrent network for this? They are typically good at time-series modelling.
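As a rough sketch of what that could look like, assuming Flux’s v0.14-style recurrent API and a hypothetical hidden size of 16 (adjust to your data):

```julia
using Flux

# Recurrent alternative to the MLP: an LSTM followed by a linear readout.
rnn = Chain(LSTM(1 => 16), Dense(16 => 1))

Flux.reset!(rnn)                            # clear hidden state before each sequence
seq = [rand(Float32, 1, 1) for _ in 1:50]   # 50 timesteps, 1 feature, batch of 1
preds = [rnn(xt) for xt in seq]             # one prediction per timestep
```

The hidden state lets each prediction depend on the earlier timesteps, which a plain MLP fed with a single time value cannot do.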
That’s good! It means you’ve successfully converged on the optimal parameters.
For a simpler example, you can think of linear regression, where we have a function like

output = a * input + b

“Training” just means looking for the best values of `a` and `b` to predict the output. Once we’ve found the correct values, training longer doesn’t do anything, because `a` and `b` aren’t changing; any other values of `a` and `b` would be worse.
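To make that concrete, here is a minimal hand-rolled gradient descent on `a` and `b` in plain Julia (no Flux), with made-up data where the true values are `a = 2`, `b = 0.5`:

```julia
# Fit y = a*x + b by gradient descent on the mean squared error.
function fit_line(x, y; lr = 0.1f0, steps = 2000)
    a, b = 0.0f0, 0.0f0
    for _ in 1:steps
        ŷ  = a .* x .+ b
        ga = 2 * sum((ŷ .- y) .* x) / length(x)  # ∂MSE/∂a
        gb = 2 * sum(ŷ .- y) / length(x)         # ∂MSE/∂b
        a -= lr * ga
        b -= lr * gb
    end
    return a, b
end

x = collect(Float32, 0:0.1:1)
y = 2.0f0 .* x .+ 0.5f0        # ground truth: a = 2, b = 0.5
a, b = fit_line(x, y)
```

After convergence `a` and `b` sit at the optimum, and running more steps leaves them essentially unchanged, which is exactly the plateau you are seeing.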