Hello everyone, I'm new to Flux and I'm following the course on book.sciml.ai. Right now I'm in lesson three, specifically the part that discusses modelling Hooke's law and building a PINN, and I'm having trouble training the model.
I think this is because the book is from 2020 and Flux 0.13 introduced some breaking changes, but no matter how I change the code the gradient always comes back as nothing. I altered a few things from the book; this is the code I'm running:
using Flux, DifferentialEquations

k = 1.0
force(dx, x, k, t) = -k*x + 0.1sin(x) # True force
prob = SecondOrderODEProblem(force, 1.0, 0.0, (0.0, 10.0), k)
sol = solve(prob)
# Generate the dataset of specific points for the neural network
# Here we have a limited number of data points
t = 0:3.3:10
position_data = [state[2] for state in sol(t)]
force_data = [force(state[1],state[2],k,t) for state in sol(t)]
NNForce = Chain(
    x -> [x],           # Transform the scalar input into a 1-element array
    Dense(1, 32, tanh),
    Dense(32, 1),
    first               # Extract the scalar from the 1-element output
)
loss(nn) = sum(abs2, nn(position_data[i]) - force_data[i] for i in 1:length(position_data))
loss(NNForce) |> x -> println("Initial loss: ", x)
# Standard gradient descent
opt = Flux.setup(Descent(0.01), NNForce)
# Training loop
for i in 1:20
    ∂loss∂m = gradient(loss, NNForce)[1]
    Flux.update!(opt, NNForce, ∂loss∂m)  # pass the whole gradient NamedTuple to update!
    i % 10 == 0 && println("loss: ", loss(NNForce))
end
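For reference, this is the minimal explicit-gradient pattern I understand Flux ≥ 0.13 to expect, on a toy model with made-up placeholder data (the sizes and values here are just stand-ins, not the ODE solution):

```julia
using Flux

# Toy stand-in model with the same scalar-in/scalar-out shape as NNForce
model = Chain(x -> [x], Dense(1 => 4, tanh), Dense(4 => 1), first)

# Made-up placeholder data, only to exercise the API
xs = [0.5, 1.0, 1.5]
ys = [0.1, 0.2, 0.3]

# Loss over an explicit array (a comprehension, not a lazy generator)
loss(m) = sum(abs2, [m(xs[i]) - ys[i] for i in eachindex(xs)])

opt_state = Flux.setup(Descent(0.01), model)

grads = Flux.gradient(loss, model)[1]   # explicit (per-model) gradient
grads === nothing && error("gradient is nothing")
Flux.update!(opt_state, model, grads)
println("loss after one step: ", loss(model))
```

This runs for me without the gradient coming back as nothing, so I suspect the difference is somewhere in how my loss or training loop interacts with the new API.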
My versioninfo and package status are as follows:
julia> versioninfo()
Julia Version 1.11.3
Commit d63adeda50d (2025-01-21 19:42 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: macOS (arm64-apple-darwin24.0.0)
CPU: 10 × Apple M4
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, apple-m1)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
(intro-sciml) pkg> status
Status `~/Developer/intro-sciml/Project.toml`
[0c46a032] DifferentialEquations v7.15.0
[7da242da] Enzyme v0.13.28
[587475ba] Flux v0.16.2
[91a5bcdd] Plots v1.40.9
[10745b16] Statistics v1.11.1