NeuralPDE.jl: how to use random noise as the initial condition

I am trying to solve the same model discussed in this question using a PINN approach.

Frankensteining together some code based on the examples in the documentation, I ended up with the following:

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux, DifferentialEquations
using Quadrature, Cubature
import ModelingToolkit: Interval, infimum, supremum

@parameters t, x
@variables c₁(..), c₂(..)
Dt = Differential(t)
Dxx = Differential(x)^2

# Model parameters
f = 1.2
m = 190
q = 0.001
ϵ₁ = 0.01
ϵ = 0.01
K = 1000
Θ = 0.0025
d₀ = 0.01
D₁ = 1

eqs = [Dt(c₁(t,x)) ~ 1/ϵ*(f*c₂(t,x)*(q-c₁(t,x))/(q+c₁(t,x)) + c₁(t,x)*(1-m*c₂(t,x))/(1-m*c₂(t,x) + ϵ₁) - c₁(t,x)^2) + D₁*Dxx(c₁(t,x)),
       Dt(c₂(t,x)) ~ c₁(t,x)*(1-m*c₂(t,x)/(1-m*c₂(t,x) + ϵ₁)) - c₂(t,x) + D₁/d₀*(1 + (K-1)/(1 + 3/Θ))^(-3/2)*Dxx(c₂(t,x))]

# Initial and boundary conditions
bcs = [c₁(0,x) ~ rand(),
       c₂(0,x) ~ rand(),
       c₁(t,0) ~ 0.,
       c₂(t,0) ~ 0.,
       c₁(t,30) ~ 0.,
       c₂(t,30) ~ 0.]


# Space and time domains
domains = [t ∈ Interval(0.0,0.3),
           x ∈ Interval(0.0,30.0)]

# Neural network
input_ = length(domains)
n = 15
chain = [FastChain(FastDense(input_,n,Flux.σ),FastDense(n,n,Flux.σ),FastDense(n,1)) for _ in 1:2]  # one network per dependent variable (c₁, c₂)
initθ = map(c -> Float64.(c), DiffEqFlux.initial_params.(chain))

_strategy = QuadratureTraining()
discretization = PhysicsInformedNN(chain, _strategy, init_params = initθ)

@named pde_system = PDESystem(eqs,bcs,domains,[t,x],[c₁(t, x),c₂(t, x)])
prob = discretize(pde_system,discretization)
sym_prob = symbolic_discretize(pde_system,discretization)

# Individual PDE and BC loss functions, used by the callback below for monitoring
pde_inner_loss_functions = prob.f.f.loss_function.pde_loss_function.pde_loss_functions.contents
bcs_inner_loss_functions = prob.f.f.loss_function.bcs_loss_function.bc_loss_functions.contents

cb = function (p,l)
    println("loss: ", l )
    println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
    return false
end

res = GalacticOptim.solve(prob,BFGS(); cb = cb, maxiters=5000)

phi = discretization.phi

using Plots

ts,xs = [infimum(d.domain):0.01:supremum(d.domain) for d in domains]

acum =  [0;accumulate(+, length.(initθ))]
sep = [acum[i]+1 : acum[i+1] for i in 1:length(acum)-1]
minimizers_ = [res.minimizer[s] for s in sep]

u_predict  = [[phi[i]([t,x],minimizers_[i])[1] for t in ts  for x in xs] for i in 1:2]
for i in 1:2
    p2 = plot(ts, xs, u_predict[i],linetype=:contourf,title = "predict");
    plot(p2)
    savefig("sol_c$i")
end

It runs fine, but the result is wrong; I think the initial condition is incorrect. How do I use random noise as the initial condition?

Open an issue. What you have here is something that would fix a single value as the initial condition, not something that would change the initial value each time.
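As a minimal illustration of why (a sketch using the same ModelingToolkit symbols as above): rand() is evaluated once, by plain Julia, at the moment the ~ equation is constructed, so the boundary condition stores a single constant rather than a noise field:

using ModelingToolkit

@parameters x
@variables c₁(..)

# rand() runs here, once, when the equation is built; the result is a fixed number.
bc = c₁(0, x) ~ rand()
println(bc)  # prints something like c₁(0, x) ~ 0.7431… — the same constant for every x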

Thinking about this some more, the right thing might be to solve over all initial conditions by making that be a new dimension, and then sampling the solution over that dimension. But yeah, let’s work that out through an issue.
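For concreteness, a rough sketch of that idea (the extra coordinate r and its range are placeholders, not a tested setup): promote the initial value to an additional input dimension, train over it, and then sample the trained solution at random values of that coordinate:

using NeuralPDE, ModelingToolkit
import ModelingToolkit: Interval

@parameters t, x, r   # r is the new "initial condition" dimension
@variables c₁(..), c₂(..)

# The PDEs would be rewritten in terms of c₁(t,x,r) and c₂(t,x,r);
# the initial conditions now depend on r instead of a baked-in rand().
bcs = [c₁(0,x,r) ~ r,
       c₂(0,x,r) ~ r,
       c₁(t,0,r) ~ 0., c₁(t,30,r) ~ 0.,
       c₂(t,0,r) ~ 0., c₂(t,30,r) ~ 0.]

domains = [t ∈ Interval(0.0,0.3),
           x ∈ Interval(0.0,30.0),
           r ∈ Interval(0.0,1.0)]   # range the random initial value would be drawn from

# The networks then take 3 inputs instead of 2 (FastDense(3,n,Flux.σ), …), and after
# training you evaluate the solution at sampled initial conditions, e.g.
# phi[i]([t, x, rand()], minimizers_[i]).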


Done!

https://github.com/SciML/NeuralPDE.jl/issues/481