Callback-based PDE BC mutation

I’ve been trying for a while to figure out how to pass time-dependent boundary conditions into MethodOfLines advection/diffusion/reaction problems so that I can mutate them during parameter optimization without having to rediscretize the problem, and I thought I had come up with a clever workaround using callback functions. Unfortunately, it doesn’t quite seem to be working as planned.

My idea was that instead of defining u(t,0) ~ f(t), I could define the inlet condition as u(t,0) ~ k for some constant parameter k (p1 in the code below) and then mutate that constant with a DiscreteCallback whose condition always returns true. This has the secondary advantage of really cleaning up the printed equations, which no longer have that gross interpolation object attached. (see toy code below)

Copy/pasteable callback code
using ModelingToolkit, DifferentialEquations, DataInterpolations, DiffEqCallbacks, Plots, BenchmarkTools, LinearAlgebra, MethodOfLines, DomainSets
# Parameters, variables, and derivatives
@parameters p1
@variables t x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)
Dxx = Differential(x)^2

params = [p1 => 50.0] # this parameter will be mutated by the callback
# 1D PDE and boundary conditions
eq  = Dt(u(t, x)) ~ -Dx(u(t,x)) + 0.001*Dxx(u(t, x))
bcs = [u(0, x) ~ p1,
        u(t, 0) ~ p1,
        Dx(u(t,0)) ~ 0,
        Dx(u(t, 10)) ~ 0]

# Space and time domains
domains = [t ∈ Interval(0.0, 100.0),
           x ∈ Interval(0.0, 10.0)]

# PDE system
@named pdesys = PDESystem(eq, bcs, domains, [t, x], [u(t, x)], params)

# Method of lines discretization
dx = 0.1
order = 2
discretization = MOLFiniteDifference([x => dx], t)

# Convert the PDE problem into an ODE problem
prob = discretize(pdesys, discretization)


# Timeseries data from a real experiment would go here; these toy values include both ramps and discontinuities
timepoints = [0, 10, 10, 20, 20, 60, 60, 100]
datapoints = [50, 50, 100, 100, 100, 200, 200, 200]

# The condition always returns true, so affect! runs after every accepted step
function condition(u, t, integrator)
    true
end
function affect!(integrator)
    # rebuild the interpolation and overwrite p1 with its value at the current time
    interp = LinearInterpolation(datapoints, timepoints)
    integrator.p = [interp(integrator.t)]
end

cb = DiscreteCallback(condition, affect!)

# Solve the ODE problem and plot the value at the outlet (x = 10) over time
sol = solve(prob, Tsit5(), saveat=0.2, callback=cb, abstol=1e-9, reltol=1e-9)
plot(sol.t, sol(sol.t, 10.0, dv = u(t, x)))

Unfortunately, this is incredibly slow and uses about 10x the memory of code where the interpolation is defined within the MOL system and used directly as the boundary condition. I expected some performance hit, but it gets far worse on my actual system: the estimated time per solve goes from ~0.25 seconds to ~8500 seconds, and it crashes about 10% of the way through the problem.
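For reference, this is roughly what I mean by the direct version, as a minimal sketch: the interpolation is registered as a symbolic function (using Symbolics’ @register_symbolic) and placed straight into the boundary condition. The names inlet_itp and inlet are just made up for this sketch.

# Direct approach: register the interpolation and use it in the BC itself
inlet_itp = LinearInterpolation(datapoints, timepoints)
inlet(tval) = inlet_itp(tval)       # plain wrapper so it can be registered
@register_symbolic inlet(t)

bcs_direct = [u(0, x) ~ inlet(0.0),
              u(t, 0) ~ inlet(t),
              Dx(u(t, 0)) ~ 0,
              Dx(u(t, 10)) ~ 0]

@named pdesys_direct = PDESystem(eq, bcs_direct, domains, [t, x], [u(t, x)])
prob_direct = discretize(pdesys_direct, discretization)
sol_direct = solve(prob_direct, Tsit5(), saveat=0.2, abstol=1e-9, reltol=1e-9)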

Is this naïve approach missing something obvious, like a different type of callback or some specific solver keywords, or is this approach as hacky and gross as it seems?
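(To make that concrete, one "obvious" tweak I can imagine, though I haven't tested it, is building the interpolation a single time instead of on every affect! call; names like inlet_itp_cb and affect2! below are just for this sketch.)

# Same callback idea, but the interpolation object is constructed once up front
const inlet_itp_cb = LinearInterpolation(datapoints, timepoints)

function affect2!(integrator)
    integrator.p = [inlet_itp_cb(integrator.t)]
end

cb2 = DiscreteCallback(condition, affect2!)
sol2 = solve(prob, Tsit5(), saveat=0.2, callback=cb2, abstol=1e-9, reltol=1e-9)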

Thanks!

edit: spelling

We will have the ability to pass functions as parameters soon; when that happens, simply set f as a parameter and remake with a newly registered function each time. Until then, you will unfortunately need to re-discretize.
