# Continue solving ODE problem when using NeuralPDE.NNODE

I tried to continue solving by using the weights from a previous solve as the initial weights for a new one, but the loss at the first iteration of the new solve is higher than the final loss of the previous one.
My code is below:

```julia
using NeuralPDE, Lux, OptimizationOptimisers

# Damped, driven pendulum-style ODE system
SIMB_NN(u, p, t) = [u[2], 5 - 10 * sin(u[1]) - 1.7 * u[2]]
u0 = [0.0, 0.0]                        # example values; not shown in the original post
tspan = (0.0f0, 1.0f0)                 # example values; not shown in the original post
opt = OptimizationOptimisers.Adam(0.01)  # example optimizer; not shown in the original post
prob = ODEProblem(SIMB_NN, u0, tspan)
chain = Lux.Chain(Dense(1, 5, Lux.σ), Dense(5, 2))
sol = solve(prob, NeuralPDE.NNODE(chain, opt), verbose = true, dt = 0.01f0, abstol = 1e-10, maxiters = 200)
```

The loss at the end of the first solve:

```
Current loss is: 2295.899122938303, Iteration: 198
Current loss is: 2287.8828325994946, Iteration: 199
Current loss is: 2281.6355418052744, Iteration: 200
Current loss is: 2281.6355418052744, Iteration: 201
```

Then I continue solving as follows:

`sol = solve(prob, NeuralPDE.NNODE(chain, opt, init_params = sol.k), verbose = true, dt = 0.01f0, abstol = 1e-10, maxiters = 200)`

However, the loss of the new solve is:

```
Current loss is: 10131.155170881239, Iteration: 1
Current loss is: 8778.128648065542, Iteration: 2
Current loss is: 7562.72487697895, Iteration: 3
Current loss is: 6747.883320241421, Iteration: 4
Current loss is: 6407.502788917461, Iteration: 5
```

Is there any way to continue solving the ODE problem from where the previous solve left off?

As of "Tag the original solution to sol.original and simplify dependencies" (SciML/NeuralPDE.jl Pull Request #846 by ChrisRackauckas), `sol.original` returns the internal optimization solution, so `sol.original.u` gives the trained parameters of the optimization. You can use that to seed the next optimization with the same parameters the current one ended at.
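A minimal sketch of that warm-start pattern, reusing the `prob`, `chain`, and `opt` from the question and passing `sol.original.u` (rather than `sol.k`) as `init_params`:

```julia
# First solve: trains the network from its random initialization.
sol = solve(prob, NeuralPDE.NNODE(chain, opt),
            verbose = true, dt = 0.01f0, abstol = 1e-10, maxiters = 200)

# `sol.original` is the underlying Optimization.jl solution (PR #846),
# so `sol.original.u` holds the trained network parameters.
trained_params = sol.original.u

# Second solve: seed NNODE with the trained parameters so the loss
# continues from where the previous run ended instead of jumping up.
sol2 = solve(prob, NeuralPDE.NNODE(chain, opt, init_params = trained_params),
             verbose = true, dt = 0.01f0, abstol = 1e-10, maxiters = 200)
```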