Testing your DE model after training

Hi there, I have trained a model to approximate a system of differential equations using certain initial conditions and parameters. I would now like to see how my model performs with different starting conditions. Since the model is already trained, I want to test it, but I have no clue how to do it. Can somebody help me?

Here is my program:
using DiffEqFlux, OrdinaryDiffEq, Flux, Optim, Plots

u0 = Float32[2.; 3.] #initial conditions
datasize = 50 #timesteps
tspan = (0.0f0,2.5f0) #timespan
#System of differential equations
function trueODEfunc(du,u,p,t)
    x, y = u
    a, b, c, d = p
    du[1] = dx = a*x - b*x*y #prey
    du[2] = dy = -c*y + d*x*y #predator
end
p = [1.5,1.0,3.0,1.0] #initial parameters
t = range(tspan[1],tspan[2],length=datasize) #timespan + steps
prob = ODEProblem(trueODEfunc,u0,tspan,p) #makes the ODE problem
ode_data = Array(solve(prob,Tsit5(),saveat=t)) #generates the data

dudt2 = Chain(Dense(2,50,relu), Dense(50,2)) #define a multilayer perceptron with 1 hidden layer and a relu activation function
p,re = Flux.destructure(dudt2) # use this p as the initial condition!
dudt(u,p,t) = re(p)(u) # need to restructure for backprop!
prob2 = ODEProblem(dudt,u0,tspan)

#prediction function
function predict_n_ode()
    Array(solve(prob2,Tsit5(),u0=u0,p=p,saveat=t))
end
#loss function
function loss_n_ode()
    pred = predict_n_ode()
    loss = sum(abs2,ode_data .- pred)
end

loss_n_ode() # p stores the initial parameters of the neural ODE
#makes the graphs and displays the loss for every cycle through the dataset
cb = function (;doplot=false) #callback function to observe training
    pred = predict_n_ode()
    display(sum(abs2,ode_data .- pred))
    display(sum(abs2,ode_data[1,:] .- pred[1,:]))
    display(sum(abs2,ode_data[2,:] .- pred[2,:]))
    # plot current prediction against data
    pl = Plots.scatter(t,ode_data[1,:],label="data prey")
    Plots.scatter!(pl,t,ode_data[2,:],label="data predator")
    Plots.scatter!(pl,t,pred[1,:],label="prediction prey")
    Plots.scatter!(pl,t,pred[2,:],label="prediction predator")
    display(plot(pl))
    return false
end

# Display the ODE with the initial parameter values.
cb()

epochs = 300
learningrate = 0.01
data = Iterators.repeated((), epochs)

Flux.train!(loss_n_ode, Flux.params(u0,p), data, ADAM(learningrate), cb = cb)

You could wrap your prob2 in an EnsembleProblem:

function prob_func(prob,i,repeat)
    remake(prob, u0 = new_u0s[i]) # e.g. pick the i-th of your test starting conditions
end

which then allows you to define a different initial condition for each trajectory (new_u0s here is a hypothetical vector of starting conditions you want to test).
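Putting this together, here is a minimal sketch of how the testing could look. The new starting condition `u0_test`, the random rescaling in `rand_u0_prob_func`, and the trajectory count are illustrative assumptions; `prob`, `prob2`, `p`, `t`, and `u0` are taken from the script above:

```julia
# (a) Test a single new starting condition and compare against the true system.
u0_test = Float32[1.0; 2.0]  # illustrative new initial condition
pred  = Array(solve(remake(prob2, u0 = u0_test, p = p), Tsit5(), saveat = t))
truth = Array(solve(remake(prob,  u0 = u0_test),        Tsit5(), saveat = t))
test_loss = sum(abs2, truth .- pred)  # generalisation error on the new u0

# (b) Test many starting conditions at once via an EnsembleProblem.
# The prob_func remakes prob2 with a randomly rescaled u0 per trajectory
# (the name and the rescaling are illustrative).
rand_u0_prob_func(prob, i, repeat) = remake(prob, u0 = u0 .* (0.5f0 .+ rand(Float32, 2)))
ensemble_prob = EnsembleProblem(prob2, prob_func = rand_u0_prob_func)
sim = solve(ensemble_prob, Tsit5(), EnsembleThreads(),
            trajectories = 10, p = p, saveat = t)
```

`EnsembleThreads()` runs the trajectories in parallel on threads; `EnsembleSerial()` works just as well if you prefer a single thread.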