Black-Scholes PDE

Hello

I am new to Julia, coming from a Python / Mathematica background for PDE solving. As a simple test I tried a simplified version of the Black-Scholes equation: zero drift (rates and dividends are 0),

so it's just the two-term equation: the partial derivative with respect to t plus the second partial derivative with respect to s:

eq = Dt(v(s,t)) + sigma*sigma*0.50*s*s*Ds2(v(s,t)) ~ 0

##### Code is as follows

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux
using Plots
@parameters s, t
@variables v(..)
@derivatives Dt'~t
@derivatives Ds2''~s

sigma=0.20

eq = Dt(v(s,t)) + sigma*sigma*0.50*s*s*Ds2(v(s,t)) ~ 0

bcs = [v(s,1.0) ~ max(0.0,s-100.0),
       v(0,t) ~ 0.0, Ds2(v(200,t)) ~ 0.0]

# Space and time domains

domains = [s ∈ IntervalDomain(0.0,200.0), t ∈ IntervalDomain(0.0,1.0)]

# Discretization

ds = 1; dt = 0.05

# Neural network

chain = FastChain(FastDense(2,12,Flux.σ),FastDense(12,12,Flux.σ),FastDense(12,1))

discretization = PhysicsInformedNN(chain,
strategy = GridTraining(dx = [ds,dt]))
pde_system = PDESystem(eq,bcs,domains,[s,t],[v])
prob = discretize(pde_system,discretization)

opt = Optim.BFGS()

# Callback to monitor training (this definition was missing from the listing above)
cb = function (p,l)
    println("Current loss is: $l")
    return false
end

res = GalacticOptim.solve(prob,opt; cb = cb, maxiters=100)
phi = discretization.phi

#########

I think there is some issue with the boundary conditions, because the loss function is converging very slowly:

Current loss is: 1360.7984022447545
Current loss is: 918.0802461010368
Current loss is: 910.7336935735061
Current loss is: 880.8667819607933
Current loss is: 876.3691892716076
Current loss is: 832.1532920935738
Current loss is: 831.3017231355975


This is a very newbie question, but I couldn't find anything similar in the forums.

My boundary conditions are:

bcs = [v(s,1.0) ~ max(0.0,s-100.0),
v(0,t) ~ 0.0,Ds2(v(200,t))~0.0]

=> 1. At expiry (t = 1.0), the option price is max(0.0, s - 100.0).
=> 2. At zero spot price, the option price is 0, no matter how much time t remains.
=> 3. At the upper spot boundary (s = 200), the second derivative is 0 (the function has no gamma there).
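Since rates and dividends are zero, this boundary-value problem has a closed-form solution (the Black-Scholes call price with r = 0), which gives an independent sanity check for whatever the network learns. A minimal sketch in plain Julia, assuming the parameters above (K = 100, σ = 0.20, expiry T = 1) and using the Abramowitz & Stegun 26.2.17 rational approximation of the normal CDF to avoid extra dependencies:

```julia
# Normal CDF via the Abramowitz & Stegun 26.2.17 approximation
# (absolute error < 7.5e-8); avoids depending on SpecialFunctions.
function normcdf(x)
    x < 0 && return 1 - normcdf(-x)
    t = 1 / (1 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782 +
           t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))))
    return 1 - exp(-x^2 / 2) / sqrt(2π) * poly
end

# Exact solution of the PDE above with zero rates/dividends and
# terminal payoff max(0, s - K) at t = T.
function bs_call(s, t; K = 100.0, T = 1.0, sigma = 0.20)
    tau = T - t
    tau <= 0 && return max(0.0, s - K)   # at/after expiry: intrinsic value
    s <= 0 && return 0.0                 # zero spot stays at zero
    d1 = (log(s / K) + 0.5 * sigma^2 * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return s * normcdf(d1) - K * normcdf(d2)
end

bs_call(100.0, 0.0)   # at-the-money price at t = 0, ≈ 7.97
```

Comparing `phi([s, t], res.minimizer)` against `bs_call(s, t)` on a few points tells you whether slow loss convergence actually translates into a bad solution.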

What is the question?

Thanks for checking. Apologies for not being clear about the question.

The question is: why is it converging slowly? Is that expected? Most of the examples I tried with PhysicsInformedNN (in the documentation) converged very fast, for example starting around

Current loss is: 2.xxxx or something like that,
whereas in my case it starts at 1360.79.

My guess was that it is about the discretization: because the equation is similar to the heat equation, the ds and dt discretization can cause instability. I tried the neural-network approach because I plan to extend the same PDE to many dimensions (10 or more).

If a simple 2-D example converges slowly, I figured more than 10 dimensions would break down entirely. Thanks again for checking.

I ran:

using NeuralPDE, Flux, ModelingToolkit, GalacticOptim, Optim, DiffEqFlux
using Plots
@parameters s, t
@variables v(..)
@derivatives Dt'~t
@derivatives Ds2''~s

sigma=0.20

eq = Dt(v(s,t)) + sigma*sigma*0.50*s*s*Ds2(v(s,t))~ 0

bcs = [v(s,1.0) ~ max(0.0,s-100.0),
       v(0,t) ~ 0.0,Ds2(v(200,t))~0.0]

domains = [s ∈ IntervalDomain(0.0,200.0), t ∈ IntervalDomain(0.0,1.0)]

ds = 1; dt = 0.05

chain = FastChain(FastDense(2,12,Flux.σ),FastDense(12,12,Flux.σ),FastDense(12,1))

discretization = PhysicsInformedNN(chain,
                                   strategy = GridTraining(dx = [ds,dt]))
pde_system = PDESystem(eq,bcs,domains,[s,t],[v])
prob = discretize(pde_system,discretization)

opt = Optim.BFGS()

cb = function (p,l)
    println("Current loss is: $l")
    return false
end

res = GalacticOptim.solve(prob,opt; cb = cb, maxiters=100)

and got:

Current loss is: 941.8445844955165
Current loss is: 916.7242066053147
Current loss is: 915.4214107849981
Current loss is: 886.5197006900523
Current loss is: 886.2322060730274
Current loss is: 879.1350576817094
Current loss is: 864.8905052226901
Current loss is: 855.0722887154656
Current loss is: 843.3839734708475
Current loss is: 800.2121902390024
Current loss is: 795.8313725124218
Current loss is: 527.9225609289333
Current loss is: 512.6850515371584
Current loss is: 432.41823441243133
Current loss is: 359.6082798985172
Current loss is: 322.5316396167104
Current loss is: 271.83230487386834
Current loss is: 185.96841784375948
Current loss is: 166.15939433667526
Current loss is: 154.70276384792393
Current loss is: 132.6312979172506
Current loss is: 109.6023923322233
Current loss is: 84.8821732274032
Current loss is: 65.49968659888646
Current loss is: 56.66423583664522
Current loss is: 39.10612582214182
Current loss is: 26.626455866502553
Current loss is: 20.863288643197514
Current loss is: 16.831428378077106
Current loss is: 10.125284837331167
Current loss is: 7.013246220718452
Current loss is: 4.80664736244777
Current loss is: 3.462953384658374
Current loss is: 3.0189891781939338
Current loss is: 2.390926433341269
Current loss is: 1.8050857438010617
Current loss is: 1.533811182941108
Current loss is: 1.3376160803196362
Current loss is: 1.2691458129521058
Current loss is: 0.9972695983102826
Current loss is: 0.8684206828163386
Current loss is: 0.7607549584190314
Current loss is: 0.7319856198354339
Current loss is: 0.6912456272139449
Current loss is: 0.6535142244261445
Current loss is: 0.6381949728904993
Current loss is: 0.6144508825969348
Current loss is: 0.5584514776185985
Current loss is: 0.5374529541521371
Current loss is: 0.5323937744279705
Current loss is: 0.5253735598051936
Current loss is: 0.5119120168969999
Current loss is: 0.5019829949974935
Current loss is: 0.4991410313201091
Current loss is: 0.4968586738239611
Current loss is: 0.49296168265604023
Current loss is: 0.4871122534723805
Current loss is: 0.4792527801706842
Current loss is: 0.4761781559149002
Current loss is: 0.4737455851191371
Current loss is: 0.47147475462729255
Current loss is: 0.4699416844998312
Current loss is: 0.46908490856153817
Current loss is: 0.46817992976830564
Current loss is: 0.4667766278399259
Current loss is: 0.4617626169915016
Current loss is: 0.45865841826801507
Current loss is: 0.4566381664689234
Current loss is: 0.4531503083033169
Current loss is: 0.45250981388439276
Current loss is: 0.4523191696162234
Current loss is: 0.45199941367568597
Current loss is: 0.44895219599465935
Current loss is: 0.4452453664429279
Current loss is: 0.44250729938498806
Current loss is: 0.4418356911333423
Current loss is: 0.44148653056621545
Current loss is: 0.44107474429366605
Current loss is: 0.44021617740270647
Current loss is: 0.43912417911129414
Current loss is: 0.4376125377933615
Current loss is: 0.43512267992554565
Current loss is: 0.4334465044045509
Current loss is: 0.4328137623721233
Current loss is: 0.4323701191517514

In just a few minutes. That is expected, though you might be able to find better architectures that converge faster. That said, there are a few things to mention:

  • When dimensions increase and neural network sizes increase, you should start to use GPUs. CPU-only training is really only for small problems.
  • GridTraining isn't a strategy that scales to high-dimensional PDEs. For that case you'd want to start using stochastic training, or something like QuadratureTraining with VEGAS, which keeps the cost of going to higher dimensions polynomial in the dimension.
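To see why GridTraining hits a wall, count the collocation points: a grid with spacing dx in every dimension needs (span/dx + 1)^d points, while stochastic and quadrature strategies sample a fixed (or slowly growing) budget per iteration. A quick back-of-the-envelope in plain Julia (the `grid_points` helper is hypothetical, and the unit cube with dx = 0.05 is just an illustration):

```julia
# Collocation-point count for grid training over the unit cube [0,1]^d
# with spacing dx in every dimension: (1/dx + 1)^d, exponential in d.
grid_points(dx, d) = (round(Int, 1 / dx) + 1)^d

grid_points(0.05, 2)    # 441 points: easy in 2 dimensions
grid_points(0.05, 10)   # ≈ 1.7e13 points: hopeless on any hardware
```

A stochastic strategy that draws, say, a few thousand random points per loss evaluation avoids that exponential blow-up, which is exactly what makes the PINN approach viable in 10+ dimensions.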

Another point, on the discretization guess:

This is a fully implicit, mesh-free method, so there is no discretization at all, and there is no stability issue precisely because of the fully implicit handling.