Having issues with running NeuralPDE on GPU

I was trying to train a PINN for the NLSE using NeuralPDE. Everything runs fine on the CPU, but when I try to move the calculations to the GPU, I get the following error:

ERROR: CuArray only supports element types that are allocated inline.
Real is not allocated inline
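
As far as I can tell, this error comes from CUDA.jl whenever an array ends up with an abstract element type such as Real; a minimal reproducer of the same message (just an illustration, assuming CUDA.jl is installed, and not specific to NeuralPDE) is:

using CUDA
CuArray(Real[1.0, 2.0])   # eltype Real is abstract, so the elements cannot be stored inline on the GPU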

My code is below. The error is raised at the prob = discretize(...) line.

using NeuralPDE, Lux, LuxCUDA, Random, ComponentArrays
using Optimization
using OptimizationOptimisers
import ModelingToolkit: Interval

@parameters t x
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2
L = 10.0
w = 0.2
A = 1/w
ic(x) = A / cosh(x/w)^2

eq = Dt(u(t, x)) ~ 0.5 * Dxx(u(t, x)) + u(t, x) * abs(u(t, x))^2
bcs = [u(t, -L/2) ~ u(t, L/2), u(0, x) ~ ic(x)]   # periodic BC in x plus the initial pulse
domains = [t ∈ Interval(0.0, 10.0), x ∈ Interval(-L/2, L/2)]

chain = Lux.Chain(Lux.Dense(2, 16, Lux.σ), Lux.Dense(16, 16, Lux.σ), Lux.Dense(16, 1))

strategy = GridTraining(0.05)

ps_cpu = Lux.setup(Random.default_rng(), chain)[1]
# flatten the parameters, move them to the GPU, and promote them to Float64
ps = ps_cpu |> ComponentArray |> Lux.gpu_device() .|> Float64
discretization = PhysicsInformedNN(chain, strategy, init_params = ps)

@named pde_system = PDESystem(eq, bcs, domains, [t, x], [u(t,x)])

prob = discretize(pde_system, discretization)

callback = function (p, l)
    #println("Current loss is: $l")
    return false
end

res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 2500)

EDIT: Updated the code to my latest version. Also, please ignore the equation being wrong (the NLSE has the imaginary unit multiplying the time derivative; mine doesn't).

Make sure everything is either Float64 or Float32. I see some Float64 literals, e.g. the 0.5 and 0.05.
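
A sketch of what a Float32-everywhere setup could look like for the parameters and the sampling step (assuming the rest of the script stays as posted):

ps = ps_cpu |> ComponentArray |> Lux.gpu_device() .|> Float32   # keep the network parameters in Float32 on the GPU
strategy = GridTraining(0.05f0)                                 # Float32 grid step
eq = Dt(u(t, x)) ~ 0.5f0 * Dxx(u(t, x)) + u(t, x) * abs(u(t, x))^2   # Float32 coefficient in the PDE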

Already tried that; I'm getting the same error message.

Everywhere? Literals like those produce Float64s, which can cause issues in the sampling. What does your Float32-everywhere code look like?

I switched to Float64. If Lux doesn't sneak in any Float32s internally, everything should be Float64.
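
One quick sanity check (plain Julia, nothing NeuralPDE-specific) is to look at the element type of the flattened parameters before calling discretize:

eltype(ps)   # should report Float64 if nothing was converted along the way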

We just released an update, NeuralPDE.jl v5.10, which should fix this issue. GPU tests are passing.
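
For anyone landing here later, updating the package should pull in the new release (the exact output will differ):

using Pkg
Pkg.update("NeuralPDE")
Pkg.status("NeuralPDE")   # check that the resolved version is at least v5.10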