Why this code shows the following error: ERROR: type Float64 has no field layer_1. In this code I want to optimize 2 unknown parameters g and k for a 2D PDE with 4 boundary conditions and 3 known data points

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, DifferentialEquations

@parameters x y g k
@variables u(…)
Dx = Differential(x)
Dy = Differential(y)

Define the PDE

eq = Dx(Dx(u(x, y))) + Dy(Dy(u(x, y))) + g/k ~ 0

Boundary conditions

bcs = [u(0, y) ~ 350.0, Dx(u(2, y)) ~ 0, Dy(u(x, 0)) ~ 0, Dy(u(x, 1)) ~ 1000/k*(298-u(x, 1))]

Domain definitions

domains = [x ∈ IntervalDomain(0.0, 2.0),
y ∈ IntervalDomain(0.0, 1.0)]

input_ = 2
n = 16
chain = Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1))

known_data = [(0.5375, 0.26364, 348.47), (1.2852, 0.85909, 345), (0.17614, 0.16591, 349.09)]

function additional_loss(phi, θ, p)
_loss(u, x, y) = eq.lhs(u(x,y), x, y) - eq.rhs
l = sum(_loss(phi[1], x, y)^2 for x in 0:0.1:2, y in 0:0.1:1)
for bc in bcs
l += sum(bc.lhs(phi[1], x, y)^2 for x in 0:0.1:2, y in 0:0.1:1)
end
# Adding data loss to the total loss
for (x_data, y_data, temp_data) in known_data
predicted_temp = phi[1](x_data, y_data)
data_loss = (predicted_temp - temp_data)^2
l += data_loss
end
return l
end

discretization = NeuralPDE.PhysicsInformedNN([chain], NeuralPDE.GridTraining(0.1),
param_estim=true, additional_loss=additional_loss)

@named pde_system = PDESystem(eq, bcs, domains, [x, y], [u(x,y)], [g, k],
defaults=Dict([g .=> 500.0, k .=> 200.0]))
prob = discretize(pde_system, discretization)

#Optimizer
opt = OptimizationOptimJL.BFGS()

callback = function (p, l)
println(“Current loss is: $l”)
return false
end

res = Optimization.solve(prob, opt, callback = callback, maxiters = 5000)
optimized_params = res.u[(end - 1):end] # optimized g and k

println("Optimized g and k values: ", optimized_params)

Specifically “res = Optimization.solve(prob, opt, callback = callback, maxiters = 5000)” -this syntax shows the mentioned error . I executed the individual syntax and I found this. Is there any solution for it?