This isn't exactly my latest code, but the interesting part was trying to get the last time out of the solution.
When I used it in the loss function, it changed how the solution was reported, because the Array call failed.
function loss_adjoint(θ)
    s = predict_adjoint(θ)
    # if isa(s, RecursiveArrayTools.DiffEqArray)  # bug fix
        temp = Array(s)
        x = temp[:, end]
        # x = s.u[end]
        t = size(temp, 2)  # Can't seem to get t out
    # else
    #     x = s[:, end]
    #     t = s.t[end]
    # end
    miss = tgt_miss_distance(x)
    miss = miss < maxMiss ? miss : maxMiss
    loss = miss + t*2
    return loss
end
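What I was aiming for is roughly this (just a sketch, assuming s still carries its time vector as s.t after solving, with predict_adjoint, tgt_miss_distance, and maxMiss as above):

function loss_adjoint(θ)
    s = predict_adjoint(θ)
    x = Array(s)[:, end]   # final state as a plain array
    t = s.t[end]           # final time, assuming the solution object keeps its time vector
    miss = min(tgt_miss_distance(x), maxMiss)
    return miss + 2t
end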
The solve command is:
res = DiffEqFlux.sciml_train(x->loss_adjoint(x,10), θ, ADAM(0.001), cb = cb_plot, maxiters = 50)
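cb_plot isn't shown here; it follows the usual sciml_train callback shape, something along these lines (a placeholder, not my actual plotting code):

cb_plot = function (θ, l)
    println("current loss: $l")   # called with the parameters and the loss value each iteration
    return false                  # returning false keeps training going
end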
Also, are differential equations with neural networks not able to train on GPUs? When I tried adding |> gpu to the chain and then to u0, it failed with a lot of red. I can send you the code if you can't see the problem from that snippet.
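For reference, this is roughly the pattern I was attempting (a simplified sketch, not my actual model; the layer sizes, solver, and saveat here are just placeholders):

using Flux, DiffEqFlux, OrdinaryDiffEq, CUDA

nn    = Chain(Dense(2, 16, tanh), Dense(16, 2)) |> gpu   # move the network weights to the GPU
u0    = Float32[1.0, 0.0] |> gpu                         # initial condition on the GPU as well
tspan = (0.0f0, 1.0f0)                                   # keep everything Float32 for the GPU

node = NeuralODE(nn, tspan, Tsit5(), saveat = 0.1f0)
sol  = node(u0)                                          # forward pass; states come back as CuArrays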