Hi,

I’m trying to implement a PINN (physics-informed neural network) to solve the heat equation in Julia. I come from PyTorch, so my code probably doesn’t look very Julia-like. I managed to get a version of my code running on the CPU, but now I want to run it on a GPU and I’m encountering a lot of issues. Here’s the code I have. First I initialise some constants (it’s not clear to me whether I should send them all to the GPU if I want the computation to happen there):

```
using LinearAlgebra
using Parameters, BenchmarkTools
using Flux
using PyPlot
using Sobol
σ = Float32(0.1) |> gpu
d = Int32(2) |> gpu
const Δx = Float32(1e-2) |>gpu
const Δt = Float32(1e-2) |>gpu
```

Then I define some functions that will be useful in the future:

```
# Gaussian initial condition
function fExpl0(x)
    sol = 1/(4*π*σ)^(d/2)*exp(-norm(x)^2/(4*σ))
    return Float32(sol)
end

# Forward finite difference in time
function ∂t(f,point)
    t = point[1:1]
    x = point[2:end]
    g(t,x) = f([t,x...])
    return ∂t(g,t,x)
end
function ∂t(f,t,x)
    return (f(t .+ Δt,x) .- f(t,x))./(Δt)
end

# Central finite-difference Laplacian in space
function Δ(f,point)
    t = point[1:1]
    x = point[2:end]
    g(t,x) = f([t,x...])
    return Δ(g,t,x)
end
function Δ(f,t,x)
    d = length(x)
    Δf = eltype(x)(0.0)
    for i = 1:d
        Δf += (f(t,x .+ Δx*I(d)[i,:]) .- 2*f(t,x) .+ f(t,x .+ (-Δx)*I(d)[i,:]))./Δx^2
    end
    return Δf
end
```
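
For what it’s worth, here is the quick CPU sanity check I use for these operators (restated in standalone form so it runs on its own, with a made-up test function `u` whose exact derivatives are ∂t u = 1 and Δu = 2d), calling the scalar-`t` methods directly:

```
using LinearAlgebra

Δx = 1f-2
Δt = 1f-2

∂t(f,t,x) = (f(t .+ Δt,x) .- f(t,x))./Δt
function Δ(f,t,x)
    d = length(x)
    Δf = eltype(x)(0.0)
    for i = 1:d
        Δf += (f(t,x .+ Δx*I(d)[i,:]) .- 2*f(t,x) .+ f(t,x .+ (-Δx)*I(d)[i,:]))./Δx^2
    end
    return Δf
end

u(t,x) = t .+ sum(abs2, x)     # exact: ∂t u = 1, Δu = 2d

x0 = Float32[0.3, -0.2]
∂t(u, 0.5f0, x0)   # ≈ 1
Δ(u, 0.5f0, x0)    # ≈ 4 for d = 2
```

With a scalar `t` and a scalar-valued `u` everything stays scalar, so this path works; the problems only show up with the vector-valued slices.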

I didn’t manage to use automatic differentiation in my loss function, and from here I understood that it could be easier to use finite differences (if you have a way to make autodiff work, I’d greatly appreciate it).

Then I define a custom neural network with Flux that matches the initial condition exactly at t = 0:

```
struct NNInitialCondition
    chain::Chain
    f0::Function
end

# At t = 0 the output reduces to f0(x); for t > 0 the chain contributes
function (m::NNInitialCondition)(point)
    t = point[1:1]
    x = point[2:end]
    return t .* m.chain(point)[1:1] .+ (1 .- t) .* m.f0(x)
end
Flux.@functor NNInitialCondition

NN = Chain(
    Dense(d+1 => 32, tanh),
    Dense(32 => 32, tanh),
    Dense(32 => 32, tanh),
    Dense(32 => 1)
) |> gpu
NNCustom = NNInitialCondition(NN,fExpl0)
```
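
One thing that is easy to verify about this construction (a CPU sketch with a hypothetical `f0` and a plain function standing in for the chain): at `t = 0` the chain’s contribution is multiplied by zero, so the output is exactly `f0(x)` for any weights:

```
f0(x) = sum(abs2, x)        # hypothetical initial condition
chain(p) = [sum(p)]         # stand-in for the Flux chain; any function works here

point = Float32[0.0, 0.3, -0.2]   # a point with t = 0
t = point[1:1]
x = point[2:end]
out = t .* chain(point)[1:1] .+ (1 .- t) .* f0(x)
out[1] == f0(x)             # true, independent of the chain
```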

Finally I define my loss function and the evaluation points:

```
# Residual of the heat equation ∂t u = σ Δu
function loss(f,point)
    lossVal = (∂t(f,point) .- σ*Δ(f,point)).^2
    return lossVal
end

NPtTrain = 1000
s = SobolSeq(d+1)
evalPointsTrain = [Float32.(next!(s)) for i in 1:NPtTrain] |> gpu
```

Then I call:

```
NNCustom(evalPointsTrain[1])
loss(NNCustom,evalPointsTrain[1])
```

I have three questions at the moment:

1 - If I do not wrap the output of `fExpl0(x)` in a `Float32()` conversion, then my custom net returns a Float64 value. Am I doing things correctly? Is this the best way to get a Float32 output from my custom model?
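
For reference, here is an all-`Float32` variant I considered (hypothetical names; `sum(abs2, x)` in place of `norm(x)^2`). Since `4*π` is a `Float64` product, the current version computes everything in `Float64` before the final conversion; keeping every literal in `Float32` avoids the promotion entirely, though I am not sure it is more idiomatic:

```
σ32 = 0.1f0
d32 = 2

# All-Float32 variant: no Float64 appears, so no final conversion is needed
function fExpl0_32(x)
    c = (4f0 * Float32(π) * σ32)^(d32 / 2f0)
    return exp(-sum(abs2, x) / (4f0 * σ32)) / c
end

fExpl0_32(Float32[0.0, 0.0]) isa Float32   # true
```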

2 - After getting some warnings from CUDA about scalar indexing, I started using things like `t = point[1:1]` rather than `t = point[1]`. Is this the correct way to proceed?

3 - When I call `loss(NNCustom,evalPointsTrain[1])` I get the error:

```
ERROR: MethodError: no method matching +(::CUDA.CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, ::Float32)
For element-wise addition, use broadcasting with dot syntax: array .+ scalar
```

I think it comes from the line `return (f(t .+ Δt,x) .- f(t,x))./(Δt)`, which already uses the dotted `.+`, so I do not understand why it’s complaining. I’m guessing it is this line because I get the same error if I call `∂t(NNCustom,evalPointsTrain[1])` directly.