Hi, I want to use the NVIDIA GPU in my laptop to speed up Neural ODE computation. I started by testing the GPU on a plain ODE, but I am getting the error below during the “solve” step:
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations do not execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.
If you want to allow scalar iteration, use allowscalar or @allowscalar to enable scalar iteration globally or for the operations in question.
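As I read the message, the allowscalar / @allowscalar it mentions would only silence the check rather than fix the root cause, along the lines of the following (my understanding of the CUDA.jl API, not something I want to rely on):

CUDA.allowscalar(true)      # allow scalar indexing globally (slow)
CUDA.@allowscalar u0[1]     # or allow it for a single expression, e.g. reading one element of the u0 array defined below

What I actually want is for the solve itself to run properly on the GPU.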
The code is here:
using DiffEqFlux
using Lux
using Optimization
using OptimizationOptimJL
using OptimizationOptimisers
using OrdinaryDiffEq
using Plots
using Random
using CUDA
using LinearAlgebra
println("Use NN to solve SIR ODE model")
const cdev = cpu_device()
const gdev = gpu_device()
# u = [S(t), I(t), R(t)]
function trueSirModel!(du, u, p, t)
    beta, gamma, N = p
    du[1] = -(beta * u[1] * u[2]) / N
    du[2] = ((beta * u[1] * u[2]) / N) - (gamma * u[2])
    du[3] = gamma * u[2]
end
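# For reference, the system above is the standard SIR model with u = [S(t), I(t), R(t)]:
#   dS/dt = -beta*S*I/N
#   dI/dt =  beta*S*I/N - gamma*I
#   dR/dt =  gamma*I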
# Initial conditions
N = 1000
i0 = 1
r0 = 0
s0 = (N - i0 - r0)
u0 = cu(Float32[s0, i0, r0])
# constants
beta = 0.3
gamma = 0.1
p = cu(Float32[beta, gamma, N])
# time duration
tspan = (0.0, 160.0)
datasize = 160
tsteps = cu(range(tspan[1], tspan[2]; length=datasize))
# Solve the ODE to generate the true solution
trueOdeProblem = ODEProblem(trueSirModel!, u0, tspan, p)
sol = solve(trueOdeProblem, Tsit5(), saveat=tsteps)
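For comparison, here is a minimal CPU-only sketch of the same problem (the _cpu-suffixed names are just for this post; the only change is dropping the cu(...) wrappers):

u0_cpu = Float32[s0, i0, r0]
p_cpu = Float32[beta, gamma, N]
tsteps_cpu = range(tspan[1], tspan[2]; length=datasize)
prob_cpu = ODEProblem(trueSirModel!, u0_cpu, tspan, p_cpu)
sol_cpu = solve(prob_cpu, Tsit5(), saveat=tsteps_cpu)

My goal is the GPU version above, so I would like to understand which part of it triggers the scalar indexing.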
I have been reading the online help and tutorials but still could not find a solution. Any help is much appreciated.