Hey @ChrisRackauckas, I have solved the problems I had on the 29th of June (I used the whole tspan = 0.1 s to 200 s, and for the loss I only considered specific indices of it, e.g. 0.1 s to 1.6 s, etc.):
```julia
function loss(p)
    sol = predict_neuralode_rk4(p)
    N = sum(idx)
    return sum(abs2.(y[idx] .- sol')) / N
end
```
Now I change the tspan itself instead (see the code below). In this case stiffness or instability were not a problem. However, I now get an error from the loss function:

```
ERROR: DimensionMismatch("arrays could not be broadcast to a common size; got a dimension with lengths 122 and 102")
```

I have already checked the dimensions of the prediction and of the data (both 122; see the small shape check after the code below). I don't know the reason for the error, or why it only occurs at iteration 13.
```julia
using DifferentialEquations, Flux, Optim, DiffEqFlux, DiffEqSensitivity, Plots, GalacticOptim
plotly()

# input signal (sampled every 0.1 s) and the indices that end each training window
ex = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.24, 0.24, 0.24, 0.24, 0.24, 0.62, 0.62, 0.62, 0.62, 0.62, 0.62, 0.0, 0.0, 0.0, 0.0, 0.0, 0.38, 0.38, 0.38, 0.38, 0.38, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.78, 0.44, 0.44, 0.44, 0.44, 0.44, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.18, 0.56, 0.56, 0.56, 0.56, 0.56, 0.42, 0.42, 0.42, 0.42, 0.42, 0.3, 0.3, 0.3, 0.3, 0.3, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.08, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88, 0.88]
idxs = [16, 21, 27, 32, 37, 52, 57, 72, 77, 82, 87, 102, 122]
tspan = (0.1f0, Float32(length(ex) * 0.1))

## process definition
f(x) = (atan(8.0 * x - 4.0) + atan(4.0)) / (2.0 * atan(4.0))

function hammerstein_system(u)
    y = zeros(size(u))
    for k in 2:length(u)
        y[k] = 0.2 * f(u[k-1]) + 0.8 * y[k-1]
    end
    return y
end

## simulation
y = Float32.(hammerstein_system(ex))

## model design
nn_model = FastChain(FastDense(2, 8, tanh), FastDense(8, 1))
p_model = initial_params(nn_model)

# neural ODE right-hand side: current state plus the exogenous input ex at time t
function dudt(u, p, t)
    nn_model(vcat(u[1], ex[Int(round(10.0 * t))]), p)
end

# fixed-step RK4 solve, saved at the 0.1 s sampling instants
function predict_neuralode_rk4(p)
    _prob = remake(prob, p = p)
    Array(solve(_prob, RK4(), dt = 0.01f0, saveat = [0.1f0:0.1f0:Float32(tspan[2]);]))
end

## loss function definition
loss(p) = sum(abs2.(y_sub .- vec(predict_neuralode_rk4(p))))

# train on the window [t0, t1]: ADAM warm-up followed by LBFGS refinement
function train(t0, t1, dudt, p, loss)
    tspan = (t0, t1)
    u0 = [Float32.(y[1])]
    global y_sub = y[1:Int(round(t1 * 10.0f0))]
    global prob = ODEProblem(dudt, u0, tspan, nothing)

    adtype = GalacticOptim.AutoZygote()
    optf = GalacticOptim.OptimizationFunction((x, p) -> loss(x), adtype)
    optfunc = GalacticOptim.instantiate_function(optf, p, adtype, nothing)
    optprob = GalacticOptim.OptimizationProblem(optfunc, p)

    res_tmp = GalacticOptim.solve(optprob, ADAM(0.05), maxiters = 50)

    optprob = remake(optprob, u0 = res_tmp.u)
    return GalacticOptim.solve(optprob, LBFGS(), allow_f_increases = false)
end

# growing-window training: iteration i fits the data up to idxs[i]*0.1 s
p_tmp = deepcopy(p_model)
sim_idx = 13
for i in 1:sim_idx
    println("Iteration $i - 0.1s - $(idxs[i]*0.1) s")
    res_tmp = train(0.1f0, Float32(idxs[i] * 0.1), dudt, p_tmp, loss)
    p_tmp = deepcopy(res_tmp.u)
end
```
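This is roughly how I checked the dimensions (a minimal sketch; it uses the globals `prob`, `y_sub`, and `p_tmp` left behind by the training loop):

```julia
# Sanity check of the shapes compared in `loss`; run after (or inside) the
# training loop, since it relies on the globals set by `train`.
pred = predict_neuralode_rk4(p_tmp)
@show size(pred)        # solver output as returned by Array(solve(...))
@show size(vec(pred))   # flattened prediction, as used in the loss
@show size(y_sub)       # data window the prediction is compared against
```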
Furthermore, I tried to train with multiple shooting, but there I get a `DomainError`, even though the specified group size is within the allowed limits (see the small check after the code below).
```julia
group_size = 12
datasize = 102
continuity_term = 200
tspan = (0.1f0, Float32(102 * 0.1))
tsteps = range(tspan[1], tspan[2], length = idxs[12])

function loss_function(data, pred)
    return sum(abs2, data - pred)
end

function loss_multiple_shooting(p)
    return multiple_shoot(p_model, y[1:idxs[12]], tsteps, prob, loss_function, RK4(),
                          group_size; continuity_term)
end

res_ms = DiffEqFlux.sciml_train(loss_multiple_shooting, p_model)
```
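For reference, this is what I understand the group size limit to be (my assumption: `group_size` has to lie between 2 and the number of data points handed to `multiple_shoot`); with my values that check passes:

```julia
# Assumed constraint: 2 <= group_size <= number of data points passed to
# multiple_shoot. With the values above this evaluates to true.
datasize = length(y[1:idxs[12]])    # 102 data points
@show 2 <= group_size <= datasize   # true for group_size = 12
```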
Note that the multiple shooting code depends on the code above.