I have the following code where I try to solve a little toy system of PDEs:
```julia
# `Optimization` replaces the deprecated `GalacticOptim` and provides `Optimization.solve`
using ModelingToolkit, NeuralPDE, Flux, Optimization, OptimizationOptimisers, OptimizationOptimJL, Plots
import ModelingToolkit: Interval, infimum, supremum

@parameters t x
@variables ρ(..) u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

eqn1 = Dt(ρ(t, x)) * Dxx(ρ(t, x) * Dx(u(t, x))) ~ 0
eqn2 = sqrt(ρ(t, x)) ~ 0
eqns = [eqn1, eqn2]

L = 10.0
domains = [x ∈ Interval(-L / 2, L / 2),
           t ∈ Interval(0.0, 1.0)]

# Initial conditions
ρ_IC = ρ(0, x) ~ exp(-0.9 * (x - 1)^2)
u_IC = u(0, x) ~ 0.0
bcs = [ρ_IC, u_IC]

# One network per dependent variable; softplus on the last layer keeps the output positive
af = gelu
chain = [Chain(Dense(2, 16, af), Dense(16, 16, af), Dense(16, 1, softplus)),
         Chain(Dense(2, 16, af), Dense(16, 16, af), Dense(16, 1, softplus))]

discretization = PhysicsInformedNN(chain, QuadratureTraining())
@named pde_sys = PDESystem(eqns, bcs, domains, [t, x], [ρ(t, x), u(t, x)])
prob = NeuralPDE.discretize(pde_sys, discretization)

cb = function (p, l)
    println("Current loss is: $l")
    return false
end

res = Optimization.solve(prob, ADAM(0.1); callback = cb, maxiters = 100)
```
Two things bother me about this setup:

- \rho should always be positive, since eqn2 applies a square root to it. I fixed this by using softplus in the last layer instead of a linear activation function, but it feels a bit strange to do it like this. Is there a way to impose \rho > 0 as an actual constraint? (An alternative I've considered is sketched right after this list.)
- Similarly, I want \int_{-L/2}^{L/2} \rho \, dx = 1, i.e. the integral of \rho, a probability density, should be normalized to 1 and then kept that way at all times (thereby also automatically fulfilling the first point). How can I do that in SciML or something? (My attempt is sketched at the end of the post.)
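For reference, the only other way I've found to hard-wire positivity is to have the \rho-network learn log \rho and exponentiate its output, so positivity holds by construction. This is just a sketch of the chain definition (same layer sizes as above), not something I've trained:

```julia
# ρ-network that outputs exp(⋅): strictly positive by construction.
# Sketch only — same layer sizes as the softplus version above.
chain_ρ = Chain(Dense(2, 16, gelu), Dense(16, 16, gelu), Dense(16, 1), x -> exp.(x))
```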
I am new to both Julia and PINNs, so I hope my question makes sense.
Edit: I'm aware of this, which seems quite close to what I want, but the explanation of norm_loss_function there is not very clear to me, especially since I have a system of 2 coupled PDEs. My best attempt at adapting it is sketched below.
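For concreteness, here is my guess at adapting that norm_loss_function pattern to my two-network setup. The sep parameter-splitting follows the multi-network examples in the NeuralPDE docs; the quadrature grid xs, the time points ts, and the assumption that θ concatenates the two chains' parameters in order are my own and untested:

```julia
# Assumption: θ concatenates the parameters of chain[1] (the ρ-network)
# and chain[2] (the u-network) in order; `sep` recovers each slice.
init_params = [Flux.destructure(c)[1] for c in chain]
acum = [0; accumulate(+, length.(init_params))]
sep  = [(acum[i] + 1):acum[i + 1] for i in 1:(length(acum) - 1)]

xs = range(-L / 2, L / 2; length = 200)  # quadrature grid in x (my choice)
Δx = step(xs)
ts = 0.0:0.25:1.0                        # times at which ∫ρ dx = 1 is enforced

function norm_loss_function(phi, θ, p)
    θρ = θ[sep[1]]  # parameters of the ρ-network
    loss = zero(eltype(θ))
    for t in ts
        # Riemann-sum approximation of ∫ ρ(t, x) dx over [-L/2, L/2]
        integral = sum(first(phi[1]([t, x], θρ)) for x in xs) * Δx
        loss += abs2(integral - 1)
    end
    return loss
end

discretization = PhysicsInformedNN(chain, QuadratureTraining();
                                   additional_loss = norm_loss_function)
```

Is this roughly the right way to pass the extra loss, and does phi[1] really correspond to \rho here?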