According to the blog post http://www.stochasticlifestyle.com/solving-systems-stochastic-pdes-using-gpus-julia/, it is very easy to turn on GPU parallelism with DifferentialEquations.jl: the only thing one needs to do is replace ordinary arrays with CLArrays or CuArrays. However, the following code shows that the GPU version is orders of magnitude slower:
using DifferentialEquations
using CuArrays
# Trivial in-place right-hand side: du .= u, a single element-wise broadcast
function func(du, u, p, t)
    @inbounds @. du = u
    return nothing
end
tspan = (0., 10.)
alg = BS3()
# ******************************************************************************
println("CPU:")
u0 = fill(1., 500)
prob = ODEProblem(func, u0, tspan)
@time sol = solve(prob, alg, saveat=0.01)
@time sol = solve(prob, alg, saveat=0.01)
@time sol = solve(prob, alg, saveat=0.01)
# ******************************************************************************
println("GPU:")
# CuArrays.allowscalar(false)
# Convert the initial condition to Float32 and move it to the GPU
u0_gpu = CuArray(convert(Array{Float32, 1}, u0))
prob_gpu = ODEProblem(func, u0_gpu, tspan)
@time sol_gpu = solve(prob_gpu, alg, saveat=0.01)
@time sol_gpu = solve(prob_gpu, alg, saveat=0.01)
@time sol_gpu = solve(prob_gpu, alg, saveat=0.01)
The output on my machine with an Nvidia GTX 1070 is the following:
CPU:
3.627708 seconds (14.59 M allocations: 756.548 MiB, 5.28% gc time)
0.002929 seconds (1.11 k allocations: 4.082 MiB)
0.003089 seconds (1.11 k allocations: 4.082 MiB)
GPU:
32.020281 seconds (43.64 M allocations: 2.092 GiB, 1.71% gc time)
24.563494 seconds (20.42 M allocations: 951.305 MiB, 0.49% gc time)
27.890973 seconds (20.42 M allocations: 951.711 MiB, 0.48% gc time)
Did you run with that line uncommented? If that errors, there’s a scalar operation that isn’t supported by CuArrays.jl (yet) and will completely ruin performance.
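For reference, a minimal sketch of that check: enabling CuArrays.allowscalar(false) (the line that is commented out in the script above) makes any unsupported scalar indexing throw an error instead of silently ruining performance. The anonymous right-hand side here is just a stand-in for the original func:

using DifferentialEquations, CuArrays

# Disabling scalar indexing makes any element-wise CPU fallback on a CuArray
# throw an error instead of silently degrading performance.
CuArrays.allowscalar(false)

u0_gpu = CuArray(ones(Float32, 500))
prob_gpu = ODEProblem((du, u, p, t) -> (du .= u; nothing), u0_gpu, (0f0, 10f0))

# If this call errors, some operation in the solver path needs a
# GPU-compatible formulation.
sol_gpu = solve(prob_gpu, BS3(), saveat=0.01f0)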
Well, then I will try to reformulate the question. Are there any examples which show how DifferentialEquations can be sped up using the GPU? So far I have found only the above blog post, which discusses this possibility.
It will be difficult to speed up real problems on the GPU, even big ones, without writing a custom kernel somewhere.
Differential equation solvers are typically iterative in nature, so the speedup from the GPU has to come from the individual array operations themselves. This is hard to speed up on a GPU unless the operations involved are very large and very computationally intensive.
For your example, the GPU probably won't beat the CPU by much even if func is written as a custom kernel. You're pretty much just testing the L2 cache bandwidth of a single CPU core against the L2 cache of a GPU.
You're timing the worst-case scenario for GPUs, which is where all of your kernel calls are trivial. Make your ODE something with a matrix multiplication or something else with a lot of linear algebra and you'll see a different result; see the sketch below. BLAS1 computations ("element-wise" or broadcast) are usually not enough to warrant GPU usage.
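To make that concrete, here is a rough sketch (my own construction, not from the thread) of an ODE whose right-hand side is dominated by a dense matrix-vector product; the names matmul_rhs, A_cpu, A_gpu and the problem size N are illustrative assumptions. On this kind of workload, where each step calls into dense linear algebra, the GPU has a chance to pull ahead:

using DifferentialEquations, CuArrays, LinearAlgebra

N = 2048
A_cpu = randn(Float32, N, N) ./ Float32(sqrt(N))   # dense coupling matrix
u0_cpu = ones(Float32, N)
tspan = (0f0, 1f0)

# Right-hand side dominated by a dense matrix-vector product: du = A*u
matmul_rhs(du, u, A, t) = mul!(du, A, u)

# CPU version
prob_cpu = ODEProblem(matmul_rhs, u0_cpu, tspan, A_cpu)
@time solve(prob_cpu, BS3(), saveat=0.01f0)

# GPU version: the same problem with the matrix and state moved to the GPU
A_gpu = CuArray(A_cpu)
u0_gpu = CuArray(u0_cpu)
prob_gpu = ODEProblem(matmul_rhs, u0_gpu, tspan, A_gpu)
@time solve(prob_gpu, BS3(), saveat=0.01f0)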
We're working on this. We will have a GSoC project that optimizes the GPU code and gets it working with methods for stiff ODEs, focusing on solvers for PDEs.
I notice that you use OrdinaryDiffEq instead of DifferentialEquations. Does that make any difference? Does that mean DifferentialEquations can't be used this way?
DifferentialEquations is a much heavier package, since it includes code for other kinds of differential equations (including stochastic and delay differential equations) as well as automatic solver choices.
If you are just interested in solving ordinary differential equations, OrdinaryDiffEq is the way to go, since it loads faster. @ChrisRackauckas, as one of the major contributors to the differential equations ecosystem, probably just intuitively used the faster-loading package.
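As a sketch of that, the GPU part of the script above should work unchanged with the lighter package, since OrdinaryDiffEq also provides ODEProblem, solve, and BS3 (the same caveats about performance apply):

using OrdinaryDiffEq, CuArrays   # OrdinaryDiffEq is enough for ODEProblem, solve, and BS3

function func(du, u, p, t)
    @inbounds @. du = u
    return nothing
end

u0_gpu = CuArray(ones(Float32, 500))
prob_gpu = ODEProblem(func, u0_gpu, (0f0, 10f0))
sol_gpu = solve(prob_gpu, BS3(), saveat=0.01f0)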