Lotka-Volterra Neural ODE

Hello all,

I just started learning about neural ODEs and am trying to solve the Lotka-Volterra equations using a neural ODE.
Here is my code:


using JLD, Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL,
      OptimizationOptimisers, Random, Plots
using ComponentArrays

#equations for Lotka-Volterra
#dx = (α * x) - (β * x * y)
#dy = (γ * x * y) - (δ * y)
N_days = 10
tspan = (0.0, float(N_days))

datasize = N_days
t = range(tspan[1], tspan[2], length=datasize)

p0 = Float64[
    1.5, # α
    1.0, # β
    3.0, # γ
    1.0, # δ
]
u0 = [1.0, 1.0]
function LotkaVolterra!(du, u, p, t)
    (α, β, γ, δ) = p
    pp = u[1] * u[2]
    du[1] = (α * u[1]) - (β *  pp)
    du[2] = (γ * pp) - (δ * u[2])
end

prob = ODEProblem(LotkaVolterra!, u0, tspan, p0)
sol = Array(solve(prob, Tsit5(), saveat=t))


prey_data = sol[1, :]
predator_data = sol[2, :]

#plot(sol)
#plot(sol, vars=(1,2))
#plot(sol[1, :], label = "Prey")
#plot!(sol[2, :], label = "Predator")


rng = Random.default_rng()

NN1 = Lux.Chain(Lux.Dense(3, 10, relu), Lux.Dense(10, 1))
p1, st1 = Lux.setup(rng, NN1)

NN2 = Lux.Chain(Lux.Dense(3, 10, relu), Lux.Dense(10, 1))
p2, st2 = Lux.setup(rng, NN2)

p0_vec = ComponentArray((layer_1 = p1, layer_2 = p2))



function NNLotkaVolterra!(du, u, p, t)
    α, β, γ, δ = p0  # fixed mechanistic parameters; p carries the NN weights

    PreyNN = abs(NN1([α, u[1], u[2]], p.layer_1, st1)[1][1])
    PredNN = abs(NN2([β, u[1], u[2]], p.layer_2, st2)[1][1])

    du[1] = (α * u[1]) - PreyNN
    du[2] = PredNN - (δ * u[2])
end

prob_nn = ODEProblem(NNLotkaVolterra!, u0, tspan, p0_vec)

sol_nn = solve(prob_nn) 
#plot!(sol_nn)

function predict_LotkaVolterra(params)
  pred_x = Array(solve(prob_nn, Tsit5(), p=params, saveat=t,
                  sensealg=InterpolatingAdjoint(autojacvec=ReverseDiffVJP(true))))
  return pred_x
end

function loss_LotkaVolterra(θ)
  pred_x = predict_LotkaVolterra(θ)
  loss = sum(abs2, prey_data .- pred_x[1, :])
  loss += sum(abs2, predator_data .- pred_x[2, :])
  return loss
end


function callback_LotkaVolterra(θ,l) 
  return false
end


adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((optVec, _) -> loss_LotkaVolterra(optVec), adtype)
optprob = Optimization.OptimizationProblem(optf, p0_vec)
res1 = Optimization.solve(optprob, OptimizationOptimisers.ADAM(0.005), callback = callback_LotkaVolterra, maxiters = 5000)

data = predict_LotkaVolterra(res1.u)
plot(data[1,:])
plot!(data[2,:])

Here I am trying to replace

(β * x * y) with the 1st neural network, NN1,
and
(γ * x * y) with the 2nd neural network, NN2.

I just need to understand whether I am missing a step or have done something wrong in the code.
The graph I plotted does not match the actual solution, and I am confused about the approach.
Any pointers to understand this better are really appreciated.
Thanks in advance.


You would generally write this as just one neural network with two outputs. See:
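For illustration, here is a minimal sketch of that structure. It reuses rng, u0, tspan, and p0 from the post above; the layer sizes and activation are placeholder choices, not a prescribed recipe.

# One network with two outputs: output 1 stands in for β*x*y and
# output 2 for γ*x*y. Layer sizes here are illustrative only.
NNboth = Lux.Chain(Lux.Dense(2, 10, tanh), Lux.Dense(10, 2))
pboth, stboth = Lux.setup(rng, NNboth)
pboth = ComponentArray(pboth)

function NNLotkaVolterraBoth!(du, u, p, t)
    α, δ = p0[1], p0[4]   # keep the known linear terms fixed
    nn_out = first(NNboth([u[1], u[2]], p, stboth))
    du[1] = α * u[1] - nn_out[1]
    du[2] = nn_out[2] - δ * u[2]
end

prob_both = ODEProblem(NNLotkaVolterraBoth!, u0, tspan, pboth)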


You may also want to see this thread. When using the PhysicsInformedNN interface, you can use a different NN to represent each solution; when using the NNODE interface, you can use a single NN with multiple outputs to represent all of them. I see that you are not using either interface and are rolling your own solution, so I am not sure my reply really helps.
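For reference, a rough sketch of the NNODE route from NeuralPDE.jl; treat the exact keywords and saveat value as assumptions and check the NeuralPDE docs for your package versions.

using NeuralPDE, OptimizationOptimisers

# One chain mapping time t (1 input) to both states (2 outputs).
chain = Lux.Chain(Lux.Dense(1, 16, tanh), Lux.Dense(16, 2))
alg = NNODE(chain, OptimizationOptimisers.Adam(0.01))

# prob is the plain ODEProblem from the original post.
sol_nnode = solve(prob, alg, maxiters = 2000, saveat = 0.1)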

Thank you @ChrisRackauckas, @affans.
I will go through the threads and docs.
Ideally I should be using one neural network; I will try this.

Hi,
I tried to solve it using a single NN:

using JLD, Lux, DiffEqFlux, DifferentialEquations, Optimization, OptimizationOptimJL, Random, Plots
using ComponentArrays, OptimizationOptimisers

#equations for Lotka-Volterra
#dx = (α * x) - (β * x * y)
#dy = (γ * x * y) - (δ * y)
N_days = 10
tspan = (0.0, float(N_days))

datasize = N_days
t = range(tspan[1], tspan[2], length=datasize)
rng = Random.default_rng()

p0 = Float64[
    1.5, # α
    1.0, # β
    3.0, # γ
    1.0 # δ
]
u0 = [1.0, 1.0]
function LotkaVolterra(du, u, p, t)
    (α, β, γ, δ) = p
    pp = u[1] * u[2]
    du[1] = (α * u[1]) - (β *  pp)
    du[2] = (γ * pp) - (δ * u[2])
end

prob = ODEProblem(LotkaVolterra, u0, tspan, p0)
sol = solve(prob, Tsit5(), saveat=t)

data = Array(sol)
prey_data = data[1, :]
predator_data = data[2, :]
#plot(sol)
#plot(sol, vars=(1,2))
plot(sol[1, :], label = "Prey")
plot!(sol[2, :], label = "Predator")

#rbf(x) = exp.(-(x .^ 2))
#
# Multilayer FeedForward
#const U = Lux.Chain(Lux.Dense(2, 5, rbf), Lux.Dense(5, 5, rbf), Lux.Dense(5, 5, rbf),
#    Lux.Dense(5, 2))
# Get the initial parameters and state variables of the model
#p, st = Lux.setup(rng, U)
#const _st = st


NN1 = Lux.Chain(Lux.Dense(2, 5, tanh), Lux.Dense(5, 5, tanh), Lux.Dense(5, 5, relu), Lux.Dense(5, 2))
p1, st1 = Lux.setup(rng, NN1)
p0_vec = ComponentArray(p1)

function NNLotkaVolterra(du, u, p, t)
    α, β, γ, δ = p0  # fixed mechanistic parameters; p carries the NN weights

    # The network approximates the interaction term x*y (first output used).
    NN = abs2(NN1([u[1], u[2]], p, st1)[1][1])
    #print(NN)
    du[1] = (α * u[1]) - (β * NN)
    du[2] = (γ * NN) - (δ * u[2])
end

prob_nn = ODEProblem(NNLotkaVolterra, u0, tspan, p0_vec)

#sol_nn = solve(prob_nn) 
#plot!(sol_nn)

function predict_LotkaVolterra(params)
  pred_x = Array(solve(prob_nn,Tsit5(), p=params, saveat=t))
  return pred_x
end

function loss_LotkaVolterra(θ)
  pred_x = predict_LotkaVolterra(θ)
  loss =  sum(abs2, (data .- pred_x))

  return loss
end

function callback_LotkaVolterra(θ,l) 
  return false  
end

adtype = Optimization.AutoZygote()
optf = Optimization.OptimizationFunction((optVec, _) -> loss_LotkaVolterra(optVec), adtype)
optprob = Optimization.OptimizationProblem(optf, p0_vec)
res1 = Optimization.solve(optprob, OptimizationOptimisers.ADAM(0.5, (0.05,0.005)), callback = callback_LotkaVolterra, maxiters = 2000)

dataNN = predict_LotkaVolterra(res1.u)

plot!(dataNN[1, :])
plot!(dataNN[2, :])

Just a few queries:

  1. From https://docs.sciml.ai/Overview/stable/showcase/missing_physics/ it looks like rbf is not available in Lux.jl.
  2. Should the loss be combined for both variables x and y?

Yes, it’s defined in the showcase code itself; rbf is a user-defined function there, not a Lux export.
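For reference, the definition from the showcase (it also appears commented out in the snippet above):

# Radial basis activation, defined in the showcase script itself.
rbf(x) = exp.(-(x .^ 2))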

That depends on the data you have.
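For example, if both series are observed, summing the per-variable errors (as in the posted loss) is common; if one series is noisier or missing, you might weight or drop its term. A sketch, with hypothetical weights w_prey and w_pred:

# Hypothetical weighted loss; reuses predict_LotkaVolterra, prey_data,
# and predator_data from the posts above. Weights are illustrative only.
function loss_weighted(θ)
    pred = predict_LotkaVolterra(θ)
    w_prey, w_pred = 1.0, 0.1
    return w_prey * sum(abs2, prey_data .- pred[1, :]) +
           w_pred * sum(abs2, predator_data .- pred[2, :])
end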