Lotka-Volterra Neural ODE

Hello all,

I just started learning about neural ODEs and was trying to solve the Lotka-Volterra equations with a neural ODE. Here is my code:


# OptimizationOptimisers provides Adam; SciMLSensitivity provides the adjoint sensealg
using JLD, Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, OptimizationOptimisers, SciMLSensitivity, Random, Plots
using ComponentArrays

#equations for Lotka-Volterra
#dx = (α * x) - (β * x * y)
#dy = (γ * x * y) - (δ * y)
N_days = 10
tspan = (0.0, float(N_days))

datasize = N_days
t = range(tspan[1], tspan[2], length=datasize)

p0 = Float64[
    1.5, # α
    1.0, # β
    3.0, # γ
    1.0, # δ
]
u0 = [1.0, 1.0]
function LotkaVolterra!(du, u, p, t)
    (α, β, γ, δ) = p
    pp = u[1] * u[2]
    du[1] = (α * u[1]) - (β *  pp)
    du[2] = (γ * pp) - (δ * u[2])
end

prob = ODEProblem(LotkaVolterra!, u0, tspan, p0)
sol = Array(solve(prob, Tsit5(), saveat=t))


# training data used in the loss below
prey_data = sol[1, :]
predator_data = sol[2, :]

#plot(t, prey_data, label = "Prey")
#plot!(t, predator_data, label = "Predator")


rng = Random.default_rng()

NN1 = Lux.Chain(Lux.Dense(3, 10, relu), Lux.Dense(10, 1))
p1, st1 = Lux.setup(rng, NN1)

NN2 = Lux.Chain(Lux.Dense(3, 10, relu), Lux.Dense(10, 1))
p2, st2 = Lux.setup(rng, NN2)

p0_vec = ComponentArray((layer_1 = p1, layer_2 = p2))



function NNLotkaVolterra!(du, u, p, t)
    (α, β, γ, δ) = p0    # keep the known rate constants fixed; p carries the NN weights

    # the two networks replace the interaction terms β*x*y and γ*x*y
    PreyNN = abs(NN1([α, u[1], u[2]], p.layer_1, st1)[1][1])
    PredNN = abs(NN2([β, u[1], u[2]], p.layer_2, st2)[1][1])

    du[1] = (α * u[1]) - PreyNN
    du[2] = PredNN - (δ * u[2])
end

prob_nn = ODEProblem(NNLotkaVolterra!, u0, tspan, p0_vec)

sol_nn = solve(prob_nn, Tsit5(), saveat=t)   # solution with the untrained networks, as a sanity check
#plot!(sol_nn)

function predict_LotkaVolterra!(params)
    pred_x = Array(solve(prob_nn, Tsit5(), p = params, saveat = t,
                         sensealg = InterpolatingAdjoint(autojacvec = ReverseDiffVJP(true))))
    return pred_x
end

function loss_LotkaVolterra(θ)
    pred_x = predict_LotkaVolterra!(θ)
    loss  = sum(abs2, prey_data .- pred_x[1, :])
    loss += sum(abs2, predator_data .- pred_x[2, :])
    return loss
end


function callback_LotkaVolterra(θ,l) 
  return false
end


adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((optVec, _) -> loss_LotkaVolterra(optVec), adtype)
optprob = Optimization.OptimizationProblem(optf, p0_vec)
res1 = Optimization.solve(optprob, OptimizationOptimisers.Adam(0.005), callback = callback_LotkaVolterra, maxiters = 5000)

data = predict_LotkaVolterra!(res1.u)
plot(t, prey_data, label = "Prey (data)")
plot!(t, predator_data, label = "Predator (data)")
plot!(t, data[1, :], label = "Prey (neural ODE)")
plot!(t, data[2, :], label = "Predator (neural ODE)")

Here I am trying to replace

(β * x * y) with the 1st neural network NN1
and
(γ * x * y) with the 2nd neural network NN2.
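
In other words, the hybrid structure I am aiming for is (same notation as the comments at the top of the code, with the same network inputs as in NNLotkaVolterra!):

#dx = (α * x) - NN1(α, x, y)    # NN1 should learn β * x * y
#dy = NN2(β, x, y) - (δ * y)    # NN2 should learn γ * x * y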

I just need to understand whether I am missing a step or have done something wrong in the code. The plot I get does not match the actual solution, and I am confused about the approach. Any pointers to understand this better are really appreciated.
Thanks in advance.


You would generally write this as just 1 neural network with 2 outputs. See
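
For instance, a rough sketch of that idea applied to your code (reusing your u0, tspan, p0, and rng; the layer sizes and the abs clamp are just carried over from your version, not a recommendation):

NN = Lux.Chain(Lux.Dense(2, 10, relu), Lux.Dense(10, 2))
p_nn, st_nn = Lux.setup(rng, NN)
p_nn = ComponentArray(p_nn)

function NNLotkaVolterra2!(du, u, p, t)
    # one network with two outputs replaces both interaction terms
    nn_out = NN(u, p, st_nn)[1]
    du[1] = p0[1] * u[1] - abs(nn_out[1])   # α*x - learned β*x*y
    du[2] = abs(nn_out[2]) - p0[4] * u[2]   # learned γ*x*y - δ*y
end

prob_nn2 = ODEProblem(NNLotkaVolterra2!, u0, tspan, p_nn)

You then optimize the single parameter vector p_nn exactly as you already do, which also avoids keeping two separate parameter/state pairs in sync.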


You may also want to see this thread. When using the PhysicsInformedNN interface, you could use a different NN to represent each solution. When using the NNODE interface, you can use a single NN with multiple outputs to represent each solution. I see that you are not using either interface and are rolling your own solution, so I am not sure my reply really helps.
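
If you want to try the NNODE route, a rough sketch could look like the following (untested here; the keyword arguments and whether an in-place ODE definition is accepted may differ across NeuralPDE versions, so please check the docs):

using NeuralPDE, OptimizationOptimisers

# a single chain mapping time t to (prey, predator)
chain = Lux.Chain(Lux.Dense(1, 16, tanh), Lux.Dense(16, 2))
alg = NNODE(chain, OptimizationOptimisers.Adam(0.01))
sol_nnode = solve(prob, alg, verbose = true, maxiters = 2000, saveat = t)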

Thank you @ChrisRackauckas, @affans.
I will go through the threads and docs.
Ideally I should be using one neural network; I will try that.