I’m currently using NeuralPDE.jl to learn the solution to various PDEs. Everything is working well; however, I would like to train on GPUs to speed up some of my processing. For building the neural networks I am using Lux.jl. When I try to move the parameters of the network to the GPU for training using the following code:
```julia
# `chain` is the Lux network defined earlier (a placeholder sketch is shown below)
ps, st = Lux.setup(Random.default_rng(), chain)
ps = ps |> Lux.ComponentArray |> gpu .|> Float32
```
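For reference, the network itself is built with Lux.jl roughly like this (the architecture below is only a placeholder to keep the snippet self-contained, not my actual model):

```julia
using Lux, Random

# Placeholder 2-input, 1-output network; my real chain is larger.
chain = Lux.Chain(Lux.Dense(2, 16, tanh), Lux.Dense(16, 1))
```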
I get the output
```
┌ Info: The GPU function is being called but the GPU is not accessible.
│ Defaulting back to the CPU. (No action is required if you want
└ to run on the CPU).
```
Is there anything special I need to do for an M1 Mac, or is the GPU currently not accessible with Julia and NeuralPDE?
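My understanding (and I may be wrong here) is that this warning comes from the `gpu` transfer falling back to the CPU when CUDA reports no usable device, which can be checked directly:

```julia
using CUDA

# Presumably returns false on an M1 Mac, since there is no NVIDIA GPU for CUDA.jl to find.
CUDA.functional()
```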
Here are the packages and versions I am using:
```
  [052768ef] CUDA v3.12.0
  [aae7a2af] DiffEqFlux v1.52.0
  [0c46a032] DifferentialEquations v7.6.0
  [5b8099bc] DomainSets v0.5.14
  [033835bb] JLD2 v0.4.26
  [b2108857] Lux v0.4.33
  [961ee093] ModelingToolkit v8.33.0
  [315f7962] NeuralPDE v5.3.0
  [7f7a1694] Optimization v3.9.2
  OptimizationOptimJL v0.1.4
  [42dfb2eb] OptimizationOptimisers v0.1.0
  [1dea7af3] OrdinaryDiffEq v6.31.2
  [91a5bcdd] Plots v1.36.1
  [f2b01f46] Roots v2.0.8
  [1ed8b502] SciMLSensitivity v7.11.0
  [90137ffa] StaticArrays v1.5.9
  [9a3f8284] Random
  [10745b16] Statistics
```
Any help is appreciated!