I’ve been trying to exploit GPU capabilities to solve a NeuralPDE with GalacticOptim.solve, but I’ve been running into a problem with scalar operations on GPU arrays.
On line 123 I move the data to the GPU with the `|> gpu` operator, and on line 179, with `CUDA.allowscalar(true)`, I get:

```
Warning: Performing scalar operations on GPU arrays: This is very slow, consider disallowing these operations with allowscalar(false)
```
Indeed, if I instead set `CUDA.allowscalar(false)`, I get an error:

```
scalar getindex is disallowed
```
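For context, here is a minimal snippet (not my actual code, just my understanding of what triggers the message) showing how scalar indexing on a `CuArray` produces exactly this warning/error, and how a broadcast/reduction avoids it:

```julia
using CUDA

x = CUDA.rand(100)      # array living on the GPU

CUDA.allowscalar(true)
s = x[1]                # scalar getindex: works, but triggers the slow-scalar warning

CUDA.allowscalar(false)
# s = x[1]              # with scalar indexing disallowed, this line would throw
                        # "scalar getindex is disallowed"

# Vectorized operations stay on the GPU and need no scalar indexing:
y  = x .^ 2             # broadcast runs as a GPU kernel
s2 = sum(y)             # reduction runs as a GPU kernel
```

My impression is that somewhere inside the loss/training pipeline something indexes a `CuArray` element-by-element like the commented line above, which is why `allowscalar(false)` errors out.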
Is there a way to get the GPU speedup with these settings?

Here you can find my code. Thanks!