Neural ODE: MethodError: no method matching copyto!

Hello,

I have been trying to adapt the neural ODE code example from the blog post to random 1-dimensional array data, but I am getting the error InexactError: Int64(-1.8308279662274551)

If I switch to abs instead of abs2, I get MethodError: no method matching copyto!

I haven’t been able to figure out what’s happening. The original code in the post works fine for me. If anyone can point out what is going wrong, it would be helpful.

v1.6

EDIT: The change suggested by Chris below removes the InexactError, but it reduces to the other error mentioned above.

Code:

using Flux, DiffEqFlux, DifferentialEquations, Plots

data = rand(10)

t = range(1.0,10.0,length=10)
tspan = (1.0,10.0)
NN = Chain(x-> [x], Dense(1,32,tanh), Dense(32,32,tanh), Dense(32,1), first)
p = Flux.params(NN)
n_ode = NeuralODE(NN,tspan,Tsit5(),saveat=t,reltol=1e-7,abstol=1e-9)
ps = Flux.params(n_ode)

function predict_n_ode()
  Array(n_ode(0.0))
end

loss_n_ode() = sum(abs,data .- predict_n_ode())

opt = ADAM(0.1)
Flux.train!(loss_n_ode,ps,Iterators.repeated((), 1000), opt)

Array(n_ode(0.0))

Hi!

Thank you very much for the reply. I did try that after posting, and it reduces to the other error:
MethodError: no method matching copyto!(::Float64, ::Base.Broadcast
I rechecked after your reply to make sure, but I still get the same error.

Any directions?

Btw, big fan of your SciML lectures. Thank you very much for posting them. I stumbled on them about 15 days ago, heard about Julia for the first time, and I have been a convert ever since!

The problem is that you were using scalars as your state rather than 1-dimensional arrays, and Flux.jl doesn’t handle that well. Here is working code:

using Flux, DiffEqFlux, DifferentialEquations, Plots

data = rand(10)
t = range(1.0,10.0,length=10)
tspan = (1.0,10.0)
NN = Chain(Dense(1,32,tanh), Dense(32,32,tanh), Dense(32,1))
p = Flux.params(NN)
n_ode = NeuralODE(NN,tspan,Tsit5(),saveat=t,reltol=1e-7,abstol=1e-9)

ps = Flux.params(n_ode)

function predict_n_ode()
  Array(n_ode([0.0]))
end

loss_n_ode() = sum(abs,data .- vec(predict_n_ode()))

opt = ADAM(0.1)
Flux.train!(loss_n_ode,ps,Iterators.repeated((), 1000), opt)

I would recommend using FastDense instead here.
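For reference, here is a sketch of what the FastDense variant might look like. This assumes a DiffEqFlux version from that era where FastChain/FastDense and DiffEqFlux.sciml_train are available; the exact keyword names may differ between releases, so treat this as an outline rather than a drop-in replacement:

```julia
using DiffEqFlux, DifferentialEquations, Flux

data = rand(10)
t = range(1.0, 10.0, length = 10)
tspan = (1.0, 10.0)

# FastChain/FastDense keep the weights in one flat parameter vector
# instead of inside the layers, which avoids Flux's implicit-parameter
# overhead and tends to be faster for small networks inside ODE solves.
NN = FastChain(FastDense(1, 32, tanh),
               FastDense(32, 32, tanh),
               FastDense(32, 1))
n_ode = NeuralODE(NN, tspan, Tsit5(), saveat = t, reltol = 1e-7, abstol = 1e-9)

# The state is still a 1-element vector, and the parameters are passed
# explicitly as `p` rather than collected via Flux.params.
predict(p) = vec(Array(n_ode([0.0], p)))
loss(p)    = sum(abs, data .- predict(p))

# sciml_train optimizes the explicit parameter vector directly.
result = DiffEqFlux.sciml_train(loss, n_ode.p, ADAM(0.1), maxiters = 1000)
```

The key design difference from the Chain version is that nothing is hidden in layer state: `n_ode.p` is the full parameter vector, so the optimizer works on plain arrays.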