Tutorial on the portfolio liquidation problem with PDE deep learning - NeuralPDE.jl and MTK

I would like to contribute to NeuralPDE.jl and MTK a tutorial that combines the PDE-system and integro-differential demos: the portfolio liquidation problem from the Mean Field Control literature.

Here we consider the model proposed by Cardaliaguet and Lehalle ("Mean field game of controls and an application to trade crowding", Mathematics and Financial Economics), for which Python implementations are widely available. Following Cardaliaguet and Lehalle, the Nash equilibrium control is \alpha_t^{*}(q) = \frac{\partial_q v(t,q)}{2\kappa}, where (v, m) solve the following PDE system:

\begin{equation}
\left\{
\begin{aligned}
&-\gamma \bar\mu_t\, q = \partial_t v - \phi q^{2} + \frac{|\partial_q v(t,q)|^{2}}{4\kappa} \\
&\partial_t m + \partial_q\left( m\, \frac{\partial_q v(t,q)}{2\kappa} \right) = 0 \\
&\bar\mu_t = \int \frac{\partial_q v(t,q)}{2\kappa}\, m(t,dq) \\
&m(0,\cdot) = m_0, \qquad v(T,q) = -Aq^{2}
\end{aligned}
\right.
\end{equation}
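For readers new to the model: the feedback control quoted above, and the quadratic gradient term in the HJB equation, both come from maximizing the Hamiltonian pointwise in \alpha, consistently with the system above:

\begin{equation}
\sup_{\alpha}\left\{ \alpha\,\partial_q v(t,q) - \kappa\,\alpha^{2} \right\}
= \frac{|\partial_q v(t,q)|^{2}}{4\kappa},
\qquad
\alpha_t^{*}(q) = \frac{\partial_q v(t,q)}{2\kappa}.
\end{equation}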

Note that the mean field interaction enters through \bar \mu_t, a non-local (in space) term involving the derivative of the solution to the HJB equation. The initial and terminal conditions are imposed by penalization. For the non-local term \bar \mu_t, we use Monte Carlo samples to estimate the integral.
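As a standalone illustration of that Monte Carlo estimate (the name estimate_mu_bar and the uniform sampling over the q-domain are my own choices, not part of the final code):

using Statistics

# Hypothetical sketch: estimate μ̄_t = ∫ ∂q v(t,q)/(2κ) m(t,q) dq by Monte Carlo.
# dv_dq(t, q) and m(t, q) stand for the current network approximations of ∂q v and m.
function estimate_mu_bar(dv_dq, m, t; κ = 1.0, a = 0.0, b = 10.0, N = 1000)
    qs = a .+ (b - a) .* rand(N)                       # uniform draws on (a, b)
    vals = [dv_dq(t, q) / (2κ) * m(t, q) for q in qs]  # integrand at the samples
    return (b - a) * mean(vals)                        # (b − a) × sample mean
end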

We solve this PDE system using the Deep Galerkin Method (DGM), with the proposed architecture of 3 hidden layers of width 40. The domains are t \in (0, 1) and q \in (0, 10).
In this example, we use the parameters T = 1, A = 1, φ = 1, κ = 1, γ = 1, and a Gaussian initial distribution with mean 4 and variance 0.3. To ensure that the neural network for the distribution always outputs positive values, we use the exponential function as the activation on its last layer.
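A sketch of those networks (the names chain_v and chain_m are mine; three hidden tanh layers of width 40, with exp on the last layer of the distribution network so that m stays positive):

using Lux

chain_v = Chain(Dense(2, 40, tanh), Dense(40, 40, tanh), Dense(40, 40, tanh),
                Dense(40, 1))        # value function v(t, q)
chain_m = Chain(Dense(2, 40, tanh), Dense(40, 40, tanh), Dense(40, 40, tanh),
                Dense(40, 1, exp))   # distribution m(t, q), kept positive by exp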

For the initial Gaussian distribution, we consider two cases: mean 3 and variance 0.5, and mean −4 and variance 0.3. In both cases, the traders aim to bring their inventory close to zero (though perhaps not exactly to zero, because of the other terms in the cost and the price impact). In the first case this means selling stocks; in the second, buying stocks.

My initial code, following the NeuralPDE docs, is:

using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, LineSearches,
      OptimizationOptimisers
using DomainSets   # needed for DomainSets.ClosedInterval below
using ModelingToolkit: Interval, infimum, supremum

@parameters t, q
@variables v(..), m(..), μ̄(..)   # μ̄(t): mean trading rate, defined by the integral constraint below
Dt = Differential(t)
Dq = Differential(q)
Ii = Integral(q in DomainSets.ClosedInterval(0.0, 10.0))

A = 1.0
γ = 1.0
Φ = 1.0
κ = 1.0

eqs = [
    # HJB: ∂t v = Φ q² − (∂q v)² / (4κ) − γ μ̄(t) q
    Dt(v(t, q)) ~ Φ * q^2 - Dq(v(t, q))^2 / (4κ) - γ * μ̄(t) * q,
    # Transport equation for the distribution m
    Dt(m(t, q)) ~ -Dq(m(t, q) * Dq(v(t, q)) / (2κ)),
    # Non-local coupling, valid on the whole domain: μ̄(t) = ∫ ∂q v(t,q)/(2κ) m(t,q) dq
    μ̄(t) ~ Ii(Dq(v(t, q)) * m(t, q) / (2κ))
]

bcs = [
    # Initial distribution: Gaussian with mean 4 and variance 0.3
    m(0.0, q) ~ exp(-(q - 4)^2 / (2 * 0.3)) / sqrt(2π * 0.3),
    # Terminal condition for the value function
    v(1.0, q) ~ -A * q^2
]

# Space and time domains
domains = [t ∈ Interval(0.0, 1.0), q ∈ Interval(0.0, 10.0)]

Where do I go from here regarding the number of neural networks, and the construction of the loss including the bcs and the integral term estimated by sampling?

The process you describe is exactly what discretize(pdesys, discretization) does with the PDESystem, so you do not need to build the loss yourself. Did you try using the discretize function as the tutorials show?
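For reference, a minimal continuation along the lines of the PDE-system and integro-differential tutorials could look like the sketch below. The chain for μ̄ (input dimension 1, since μ̄ depends only on t) and the GridTraining step size are assumptions on my part; chain_v and chain_m are the networks sketched earlier in the thread:

using NeuralPDE, Optimization, OptimizationOptimisers

# μ̄(t) gets its own small network with a single input (time only) — an assumption.
chain_μ = Chain(Dense(1, 40, tanh), Dense(40, 40, tanh), Dense(40, 1))

# GridTraining is what the integro-differential tutorial uses; its grid also
# supplies the quadrature points for the Integral term.
discretization = PhysicsInformedNN([chain_v, chain_m, chain_μ], GridTraining(0.05))

@named pdesys = PDESystem(eqs, bcs, domains, [t, q], [v(t, q), m(t, q), μ̄(t)])
prob = discretize(pdesys, discretization)

# Train with Adam first; the tutorials often refine afterwards with BFGS.
res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); maxiters = 2000)

Whether the mixed-dimension variable μ̄(t) is handled out of the box is worth checking; if not, an alternative is to substitute the integral expression for μ̄ directly into the two PDEs.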