Hi all,
I'm a GPU beginner, so I'm probably asking something obvious.
I defined a neural net with four Lux.jl chains:
input_ = 2
n = 20
chain1 = Lux.Chain(Dense(input_,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,1))
chain2 = Lux.Chain(Dense(input_,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,1))
chain3 = Lux.Chain(Dense(input_,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,1))
chain4 = Lux.Chain(Dense(input_,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,n,Lux.σ),Dense(n,1))
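(All four chains are identical, so the definitions above are equivalent to the sketch below; make_chain is just a helper name I'm using for this post.)
# Sketch: the same 2 -> 20 -> ... -> 20 -> 1 sigmoid MLP, built once and reused four times
make_chain() = Lux.Chain(Dense(input_, n, Lux.σ),
                         [Dense(n, n, Lux.σ) for _ in 1:5]...,
                         Dense(n, 1))
chain1, chain2, chain3, chain4 = (make_chain() for _ in 1:4)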
I would like to train it on a GPU, so I make sure the initial parameters are already on the GPU, as in the example here:
ps1 = Lux.setup(Random.default_rng(), chain1)[1]
ps1 = ps1 |> Lux.ComponentArray |> gpu .|> Float64
ps2 = Lux.setup(Random.default_rng(), chain2)[1]
ps2 = ps2 |> Lux.ComponentArray |> gpu .|> Float64
ps3 = Lux.setup(Random.default_rng(), chain3)[1]
ps3 = ps3 |> Lux.ComponentArray |> gpu .|> Float64
ps4 = Lux.setup(Random.default_rng(), chain4)[1]
ps4 = ps4 |> Lux.ComponentArray |> gpu .|> Float64
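(Again, the four blocks above are identical apart from the chain; condensed, this is what I'm doing:)
# Sketch: the same Lux.setup + GPU transfer as above, looped over the four chains
ps1, ps2, ps3, ps4 = map((chain1, chain2, chain3, chain4)) do c
    ps = Lux.setup(Random.default_rng(), c)[1]
    ps |> Lux.ComponentArray |> gpu .|> Float64
end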
Finally, I use the symbolic discretization provided by NeuralPDE:
discretization = NeuralPDE.PhysicsInformedNN([chain1, chain2, chain3, chain4], training_strategy, init_params = [ps1 ps2 ps3 ps4], param_estim = true, additional_loss = additional_loss)
When doing this, I get the following error: AssertionError: length(init_params) == length(depvars). I also tried replacing init_params = [ps1 ps2 ps3 ps4] with init_params = [ps1, ps2, ps3, ps4], but then I get the error CuArray only supports element types that are stored inline…
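For completeness, the second attempt looks like this (everything else unchanged):
# Second attempt: pass the parameter sets as a plain Vector instead of hcat-ing them into a matrix
discretization = NeuralPDE.PhysicsInformedNN([chain1, chain2, chain3, chain4], training_strategy, init_params = [ps1, ps2, ps3, ps4], param_estim = true, additional_loss = additional_loss)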
Any ideas?
Thanks in advance!