I’m trying to use NeuralPDE.jl to solve a simple system of PDEs and it seems to completely ignore the boundary conditions.
Can someone suggest a modification of the following code that shows the correct behavior?
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL
import ModelingToolkit: Interval, infimum, supremum
import OptimizationOptimisers
using Plots
@parameters t, x
@variables u1(..), u2(..)
Dt = Differential(t)
Dx = Differential(x)
Dtt = Differential(t)^2
Dxx = Differential(x)^2
Dtx = Differential(t)*Differential(x)
# Six decoupled equations: all second derivatives of u1 and u2 vanish.
eqs = [Dxx(u1(t, x)) ~ 0,
       Dtt(u1(t, x)) ~ 0,
       Dtx(u1(t, x)) ~ 0,
       Dtx(u2(t, x)) ~ 0,
       Dxx(u2(t, x)) ~ 0,
       Dtt(u2(t, x)) ~ 0]
# Boundary data consistent with the linear solution u1 = (x - t)/2 and u2 = 0.
bcs = [u1(-200, x) ~ (x + 200)/2,
       u1(t, 0) ~ -t/2,
       Dt(u1(-200, x)) ~ -1/2,
       Dx(u1(t, 0)) ~ 1/2,
       u2(-200, x) ~ 0,
       u2(t, 0) ~ 0,
       Dt(u2(-200, x)) ~ 0,
       Dx(u2(t, 0)) ~ 0]
domains = [t ∈ Interval(-200, -100),
           x ∈ Interval(0, 50)]
input_ = length(domains)
n = 15
# One network per dependent variable (u1 and u2); note that `input_` is the
# number of independent variables, which here just happens to also equal 2.
chain = [Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1)) for _ in 1:input_]
strategy = QuadratureTraining()
discretization = PhysicsInformedNN(chain, strategy)
@named pdesystem = PDESystem(eqs,bcs,domains,[t,x],[u1(t, x), u2(t,x)])
prob = discretize(pdesystem,discretization)
sym_prob = symbolic_discretize(pdesystem,discretization)
pde_inner_loss_functions = sym_prob.loss_functions.pde_loss_functions
bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions
callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
    return false
end
res = Optimization.solve(prob,OptimizationOptimisers.Adam(0.5); callback = callback, maxiters=500)
phi = discretization.phi
ts,xs = [infimum(d.domain):1.0:supremum(d.domain) for d in domains]
minimizers_ = [res.u.depvar[sym_prob.depvars[i]] for i in 1:input_]
analytic_sol_func(t,x) = [(x-t)/2, 0]
u_real = [[analytic_sol_func(t,x)[i] for x in xs for t in ts] for i in 1:input_]
u_predict = [[phi[i]([t,x],minimizers_[i])[1] for x in xs for t in ts] for i in 1:input_]
diff_u = [abs.(u_real[i] .- u_predict[i] ) for i in 1:input_]
p = []
for i in 1:input_
    p1 = plot(xs, ts, reshape(u_real[i], length(ts), length(xs)), linetype = :contourf, title = "u$i, analytic")
    p2 = plot(xs, ts, reshape(u_predict[i], length(ts), length(xs)), linetype = :contourf, title = "predict")
    p3 = plot(xs, ts, reshape(diff_u[i], length(ts), length(xs)), linetype = :contourf, title = "error")
    push!(p, plot(p1, p2, p3))
    # savefig("sol_u$i")
end
plot(p[1])
plot(p[2])
Notice that the general solution of Dtx(u1(t,x)) ~ 0 is simply the sum of two generic functions, f(t) + g(x), and the boundary conditions will fix f and g. The extra equations like Dxx(u1(t,x)) ~ 0 just force f and g to be linear.
As long as the boundary-condition functions are linear (i.e., they themselves satisfy the equations), they will be automatically satisfied in the entire domain: these are more constraints than evolution equations.
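As a quick sanity check, here is a minimal sketch (reusing t, x, Dtt, Dxx, Dtx from the script above; expand_derivatives comes from Symbolics.jl, which ModelingToolkit re-exports) verifying that the expected solution u1 = (x - t)/2 is annihilated by all three operators, so boundary data drawn from it are exactly compatible with the equations:

u1_exact = (x - t)/2    # the linear profile implied by the boundary conditions
for D2 in (Dxx, Dtt, Dtx)
    # every second derivative of a linear function vanishes identically
    @assert isequal(expand_derivatives(D2(u1_exact)), 0)
end

(u2 = 0 satisfies its equations and boundary conditions trivially.)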
I set up this basic example so that, once it works, I can start adding non-zero right-hand sides to couple the equations and then consider more interesting boundary-condition functions.
If it provides extra motivation, the full system will be equivalent to the Einstein equations (general relativity) describing the collapse to a black hole in spherical symmetry in “horizon penetrating” coordinates, including Hawking evaporation. I think it could be a nice example to have in the “tutorials”, but one step at a time: let’s just figure out if I’m doing something dumb in the simplest case first.
PS: the code is runnable as-is and will produce the contour plots of the analytic solution, the prediction, and the error for u1 and u2.
To give some extra context:
I’m aware that one can get stuck in local minima during the optimization process.
I’m aware that there is functionality to re-weight the PDE/BC cost functions, but I could not really get anywhere with it, and I’m not sure I was doing the correct thing (a sketch of what I mean is below).
I’m aware that one can of course reformulate the problem by defining new functions u1' and u2' as, e.g., u1'(t,x) = u1(t,x) - bc(t) - bc(x), so that the boundary conditions for u1' and u2' are identically zero. But in my mind there is nothing special about a boundary condition being identically zero, unless the default network parameters tend to be closer to that configuration (and so closer to the minimum). One could get similar results by just pre-training the net to equal bc(t) + bc(x) and starting a new optimization from there, but it wasn’t obvious to me from the documentation how to achieve that practically (a sketch of what I have in mind is below).
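For concreteness, the re-weighting I mean would look something like this (a hedged sketch: NonAdaptiveLoss and the adaptive_loss keyword are what recent NeuralPDE releases document; the weight of 100 is an arbitrary guess, not a tuned value):

# Sketch: scale the BC losses up relative to the PDE residual losses.
weighted_loss = NeuralPDE.NonAdaptiveLoss(pde_loss_weights = 1.0, bc_loss_weights = 100.0)
discretization = PhysicsInformedNN(chain, strategy; adaptive_loss = weighted_loss)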
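And here is the kind of pre-training I have in mind, as a self-contained sketch: fit one plain Lux network to the target profile (x - t)/2 with Optimisers/Zygote before the PINN solve. The sample count, learning rate, and epoch count below are illustrative, not tuned:

using Lux, Optimisers, Zygote, Random

rng = Random.default_rng()
net = Lux.Chain(Dense(2, 15, Lux.σ), Dense(15, 15, Lux.σ), Dense(15, 1))
ps, st = Lux.setup(rng, net)

# Sample training points in the domain t ∈ [-200, -100], x ∈ [0, 50].
X = vcat(-200 .+ 100 .* rand(rng, 1, 1000), 50 .* rand(rng, 1, 1000))
Y = (X[2:2, :] .- X[1:1, :]) ./ 2    # target: u1 = (x - t)/2

opt_state = Optimisers.setup(Optimisers.Adam(0.01), ps)
for _ in 1:2000
    global ps, opt_state
    # mean-squared error between the network output and the target profile
    grads = Zygote.gradient(p -> sum(abs2, first(net(X, p, st)) .- Y) / length(Y), ps)[1]
    opt_state, ps = Optimisers.update(opt_state, ps, grads)
end

If I read the docs correctly, the pre-trained parameter sets (one per network; the names ps_u1 and ps_u2 here are hypothetical) could then be handed to the PINN via PhysicsInformedNN(chain, strategy; init_params = [ps_u1, ps_u2]), so that the optimization starts from networks that already match the boundary data.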
In practice I want to make sure that the setup above is correct (it’s a modification of one of the documented examples) and that there are no intrinsic limitations in the package for the type of system I described, before we start testing all kinds of different alternatives.
If someone expert in this specific package and/or the general modeling ecosystem could take a look, it would definitely be helpful for moving in the correct direction and avoiding tilting at windmills.