Shallow water equations with DifferentialEquations.jl

I am really interested in using the DifferentialEquations.jl framework to solve the shallow water equations over a 2D domain efficiently instead of coding my own solution.

How would you formulate the problem in DifferentialEquations.jl, ignoring all frictional, viscous, and Coriolis terms and considering only gravity? The corresponding system of equations for this set of assumptions, in conservative form, is in the Wikipedia link above.

In terms of the implementation: given a 2D map (or Julia array) of the landscape H and a map indicating the depth of water h_t on top of the landscape at time t, I would like to advance in time and obtain h_{t+1} iteratively until covering a time span [0,T].

I have a finite volume solution that is working for now, but I am not happy reinventing the wheel. It would be great to compare it with the solution obtained with DifferentialEquations.jl and use that instead.

@ChrisRackauckas can you give a hand? :slight_smile:

I am not familiar with this PDE, so could you fill me in on the algorithms usually used to solve it?

Normally they correspond directly to some spatial discretization + timestepper combination. The usual approach is to do the spatial discretization first (discretize every spatial derivative and the variables), which turns the PDE into a system of ODEs, and then send that ODE system to a DifferentialEquations.jl solver. For example, "Crank-Nicolson" is just a spatial discretization followed by the Trapezoid() method on the resulting ODEs.

However, you need to make sure that the spatial discretization of the PDE is sensible, which is why you should check the standard PDE methods first.
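
To make that recipe concrete, here is a minimal method-of-lines sketch (not the shallow water system; just the 1D heat equation with second-order central differences on a periodic grid, with an arbitrary grid size, diffusivity, and initial condition). Using Trapezoid() in time on the discretized system is exactly the Crank-Nicolson scheme:

using OrdinaryDiffEq

# Discretize u_t = ν u_xx in space on a periodic grid, then hand the ODE system to a time stepper.
N  = 64                      # number of grid points (arbitrary)
dx = 2π / N                  # grid spacing
ν  = 0.1                     # diffusivity (arbitrary)
xs = range(0, 2π - dx, length = N)
u0 = sin.(xs)                # initial condition

function heat!(du, u, p, t)
    ν, dx = p
    n = length(u)
    for i in 1:n
        ip = i == n ? 1 : i + 1   # periodic neighbours
        im = i == 1 ? n : i - 1
        du[i] = ν * (u[ip] - 2u[i] + u[im]) / dx^2   # second-order central difference
    end
end

prob = ODEProblem(heat!, u0, (0.0, 1.0), (ν, dx))
sol  = solve(prob, Trapezoid())   # Trapezoid() in time on the discretized system = Crank-Nicolson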

(Also, there are some people interested in PDE discretizations for GSoC. So share some references on this PDE, and I’ll add it to the list of possible PDEs to tackle :slight_smile: )

2 Likes

Since it is a hyperbolic problem, FVM should be well suited. Once you have the spatial discretisation, an explicit ODE time stepper should work best. The LeVeque book suggests at least second order and not a multistep method.

1 Like

What’s the full name of this? (I should probably become more familiar with fluid dynamics PDEs…)

1 Like

Here: Finite Volume Methods for Hyperbolic Problems by Randall J. LeVeque (Google Books). It’s a nice book, but I couldn’t say I’ve read it all.

It goes with the Clawpack software: http://www.clawpack.org/

4 Likes

Hi @ChrisRackauckas,

I would appreciate it if you could skim through this quick overview of FVM for these equations: http://caminos.udc.es/info/asignaturas/201/Finite%20Volumes%20IMWE.pdf

What I currently have for the spatial discretization is a backward difference that propagates the fluxes properly in the direction of the velocity field (the upwind scheme), provided the time step is small enough to satisfy the CFL condition described in the document.

I think my implementation is working fine, but I would like to be able to switch from one PDE to another quickly in the future and play with different solvers; in other words, enjoy the goodies of DifferentialEquations.jl :slight_smile:
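
For concreteness, here is a rough sketch of how an upwind discretization like that could plug into DifferentialEquations.jl (1D linear advection with constant speed c > 0 rather than the full 2D shallow water system; the grid, speed, and CFL number below are just placeholders). It also follows the earlier advice of an explicit, higher-order, non-multistep time stepper:

using OrdinaryDiffEq

# u_t + c u_x = 0 with a first-order backward (upwind, since c > 0) difference in space,
# stepped explicitly with dt chosen from the CFL condition.
N  = 200
dx = 1.0 / N
c  = 1.0
xs = range(dx/2, 1 - dx/2, length = N)
u0 = exp.(-100 .* (xs .- 0.3).^2)        # a smooth bump as the initial condition

function upwind!(du, u, p, t)
    c, dx = p
    n = length(u)
    for i in 1:n
        im = i == 1 ? n : i - 1          # periodic boundary
        du[i] = -c * (u[i] - u[im]) / dx # backward difference in the flow direction
    end
end

dt   = 0.5 * dx / c                       # CFL number of 0.5
prob = ODEProblem(upwind!, u0, (0.0, 0.5), (c, dx))
sol  = solve(prob, SSPRK33(), dt = dt)    # explicit SSP Runge-Kutta: third order, not multistep

Swapping the right-hand side for the 2D shallow water fluxes keeps the same structure: the spatial scheme lives in the ODE function, and the time stepper is chosen at solve time.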

2 Likes

Thanks @mauro3 for the reference, super valuable :slight_smile:

1 Like

@juliohm: Have a look at http://dedalus-project.org/. Dedalus is a Python-based system that takes a PDE in symbolic form, discretizes it into a time-stepping system, and then does the time-stepping with standard C libraries (FFTW, LAPACK, MPI). It’s not Julia, but it might do what you want with minimal effort on your part.

BTW, I am also quite interested in seeing how Julia works for PDE simulation, in my case Navier-Stokes in simple geometries (pipes, channels, cubes), with distributed-memory parallelization. I hope to implement a Fourier-Chebyshev channel-flow code or isotropic turbulence in a triply periodic cube. But I believe some fundamental work needs to happen to enable this, namely getting distributed-memory FFTW working in Julia and coordinated with DistributedArrays.

1 Like

See also ANUGA, which is an FVM code to model tsunami inundations (and more). It’s a Python program that relies on C for the heavy lifting.

That looks similar to some of what the ApproxFun + DifferentialEquations setup has going on (well, is working towards). I think the right way to get their kind of functionality is to make a distributed Fun type that uses what you’re talking about, and then pair that with the timestepping routines on Fun types. The timestepping part is coming along just fine. But maybe @dlfivefifty has something to say about possible parallelism of Funs?

1 Like

Dedalus is a great project (a colleague of mine, Geoff Vasil, is one of the people behind it) that discretises in a very similar way to ApproxFun: using Chebyshev polynomials and their derivatives. Hopefully in a few years ApproxFun + DifferentialEquations will be able to compete.

As for parallelism: the primary parallelism in Dedalus is to decouple the Fourier modes (assuming you are using the Fourier basis in one of the directions) to handle the linear, implicit part of the equation. This is possible since Fourier gives you diagonal differential operators. Then parallelised FFT and transposes allow you to handle the nonlinear, explicit part by transforming back to value space.
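
For intuition on why Fourier gives you diagonal differential operators, here is a tiny sketch with plain FFTW (not Dedalus or ApproxFun; the grid and test function are arbitrary): in coefficient space, d/dx is just elementwise multiplication by ik, so each mode can be handled independently.

using FFTW

# Differentiate u(x) = sin(3x) on a 2π-periodic grid via the FFT.
N  = 32
xs = range(0, 2π - 2π/N, length = N)
u  = sin.(3 .* xs)

k     = [0:N÷2-1; -N÷2:-1]      # wavenumbers in FFT ordering (same as fftfreq(N, N))
uhat  = fft(u)
duhat = im .* k .* uhat          # the derivative operator is diagonal: multiply mode k by ik
du    = real(ifft(duhat))        # ≈ 3 .* cos.(3 .* xs) to spectral accuracy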

This is potentially doable in ApproxFun. The main part would be to recognise that a partial differential operator is diagonal in one dimension and then auto-parallelise the solve. This would all happen in the initial factorisation of the linear part (so perhaps a flag qrfact(A;parallelize=true)).

1 Like

@John_Gibson thanks for sharing; when it comes to Python, there are plenty of packages. This is the first time I’ve heard of Dedalus, so I’m bookmarking it here. I think the most famous one in Python is the FEniCS project:

https://fenicsproject.org/

1 Like

Hey @juliohm. In the end, did you manage to use DiffEq.jl?
I am also looking at Dedalus or FEniCS to work with the shallow water system.

Did DiffEq + ApproxFun evolve nicely?

There hasn’t been much work on DiffEq + ApproxFun, though @ChrisRackauckas did demonstrate some examples at JuliaCon 2018; his talk is up on YouTube.

It’s a project just waiting for the right PhD student, really.

3 Likes

Well, the issue is still in broadcast, but now DiffEq only uses broadcast internally (yay http://juliadiffeq.org/2019/05/09/GPU.html), so the MWE is now pretty simple:

using ApproxFun
x = Fun(identity, -1..1)
f = exp(x)
g = f/sqrt(2 - x^2)

h = @. f + 0.1g          # broadcasted addition

length(f.coefficients)   # 18
length(g.coefficients)   # 51
length(h.coefficients)   # 39

# manually zero-pad f's coefficients to g's length and add in coefficient space
ll = deepcopy(f.coefficients)
append!(ll, zeros(51 - length(ll)))
_h = Fun(f.space, ll .+ 0.1 .* g.coefficients)
__h = f + 0.1g           # ordinary (non-broadcast) addition

using Plots
plot(_h)
plot!(__h)
length(__h.coefficients) # 51

While it was helpful in this case, it’s not in all cases:

f = exp(x)
g = f/sqrt(2-x^2)

h = @. f + 0.1g # boom

This behavior of refitting in each broadcast can really cause the coefficient sizes to grow, when in reality the result doesn’t need more coefficients than the maximum over the input Funs. This was the issue pre-v1.0, and now it seems to be more contained. I think we just need a way to say “prune the coefficients at some constant maximum number, or to a tolerance of epsilon (equal to the ODE solver tolerance)”.

2 Likes

A better demonstration is this integration example:

using ApproxFun, OrdinaryDiffEq
x = Fun(identity,-1..1)
f = exp(x)
g = f/sqrt(2-x^2)

h = @. f + 0.1g

function ff(u,p,t)
    2u - u^2
end
prob = ODEProblem(ff,h,(0.0,1.0))

# tell OrdinaryDiffEq how to find the scalar element type of a Fun
OrdinaryDiffEq.recursive_bottom_eltype(a::ApproxFun.Fun) = eltype(a.coefficients)
OrdinaryDiffEq.recursive_unitless_bottom_eltype(a::ApproxFun.Fun) = eltype(a.coefficients)
OrdinaryDiffEq.recursive_unitless_eltype(a::ApproxFun.Fun) = eltype(a.coefficients)
isnan(h) # fails without the overload below :(
Base.isnan(a::ApproxFun.Fun) = any(isnan,a.coefficients)  # needed for the solver's NaN checks

sol = DiffEqBase.__solve(prob,Tsit5(),adaptive=false,dt=0.1)
@show sol[end].coefficients
sol = DiffEqBase.__solve(prob,Tsit5())
@show sol[end].coefficients

So just a few reasonable overloads (which could be added via Requires) and that’s probably the simplest adaptive-time, adaptive-space semilinear PDE solver script you’ve ever seen :slight_smile:. The coefficient lists look like:

x = Fun(Chebyshev(-1..1),[1.37369, 1.23314, 0.312644, 0.057217, 0.0102633, 0.00210412, 0.000680313, 0.000220778, 9.27862e-5, 3.23932e-5, 1.40827e-5, 4.98013e-6, 2.19425e-6, 7.81442e-7, 3.47373e-7, 1.24314e-7, 5.56183e-8, 1.99758e-8, 8.98101e-9, 3.23453e-9, 1.45984e-9, 5.26919e-10, 2.38555e-10, 8.62588e-11, 3.91528e-11, 1.41782e-11, 6.44926e-12, 2.33834e-12, 1.06549e-12, 3.86616e-13, 1.76285e-13,
6.3722e-14, 2.87327e-14, 1.01941e-14, 4.5861e-15, 1.61385e-15, 6.88532e-16, 2.68315e-16, 1.32806e-16])
(sol[end]).coefficients = [1.74261, 0.397775, -0.0454502, -0.00270729, 0.00176207, -0.000219944, 6.33866e-5, -1.62488e-5, 1.19699e-5, -3.06074e-6, 1.83144e-6, -4.60346e-7, 2.83213e-7, -7.24161e-8, 4.46337e-8, -1.15028e-8, 7.09702e-9, -1.8429e-9, 1.13747e-9, -2.97332e-10, 1.83457e-10,
-4.82406e-11, 2.97425e-11, -7.863e-12, 4.84288e-12, -1.28656e-12, 7.91322e-13, -2.11464e-13, 1.29487e-13, -3.47097e-14, 2.12311e-14, -5.75179e-15, 3.54605e-15, -1.21433e-15, 9.9087e-16, -6.27891e-16, 2.29117e-16, -2.48411e-16, 1.45315e-16, 4.15729e-16, -7.54313e-18, -1.87555e-18, -5.55151e-19, -1.89897e-19, -7.16383e-20, -2.71441e-20, -1.05819e-20, -4.04815e-21, -1.58455e-21, -6.07215e-22, -2.3798e-22, -9.11932e-23, -3.5739e-23, -1.36772e-23, -5.35425e-24, -2.04399e-24, -7.98466e-25, -3.03684e-25, -1.18248e-25, -4.47412e-26, -1.73416e-26, -6.51539e-27, -2.5094e-27, -9.33774e-28, -3.56495e-28, -1.309e-28, -4.93575e-29, -1.77803e-29, -6.58306e-30, -2.30334e-30, -8.28578e-31, -2.77257e-31, -9.58546e-32, -2.97471e-32, -9.46569e-33, -2.61974e-33, -6.36441e-34, 1.57045e-35, 5.02227e-36, 1.70695e-36, 6.18296e-37, 2.30489e-37, 8.70625e-38, 3.28727e-38, 1.24436e-38, 4.69274e-39, 1.77251e-39, 6.6648e-40, 2.50898e-40, 9.39579e-41, 3.52145e-41, 1.31187e-41, 4.88924e-42, 1.80954e-42, 6.69697e-43, 2.45846e-43, 9.01968e-44, 3.27759e-44, 1.18944e-44, 4.26708e-45, 1.52726e-45, 5.3895e-46, 1.89486e-46, 6.54383e-47, 2.24694e-47, 7.53617e-48, 2.50516e-48, 8.0617e-49, 2.55813e-49, 7.74247e-50, 2.28975e-50, 6.23784e-51, 1.61996e-51, 3.54841e-52, 5.93231e-53]
(sol[end]).coefficients = [1.74262, 0.397778, -0.0454475, -0.00270537, 0.00176325, -0.000219333, 6.35796e-5, -1.62111e-5, 1.19832e-5, -3.05395e-6, 1.83367e-6, -4.59648e-7, 2.83495e-7, -7.23072e-8, 4.46756e-8, -1.14851e-8, 7.10432e-9, -1.84003e-9, 1.13863e-9, -2.96853e-10, 1.83652e-10, -4.81613e-11, 2.97752e-11, -7.85006e-12, 4.84817e-12, -1.28459e-12, 7.92376e-13, -2.11071e-13, 1.29953e-13, -3.44992e-14, 2.15547e-14, -5.61981e-15, 3.54937e-15, -1.37966e-15, 2.62586e-16, 3.8377e-17, 1.03977e-16, -3.55172e-17, -4.26953e-16, -7.84177e-17, 1.85717e-16, -1.83744e-16, -5.06391e-19, -1.732e-19, -6.53278e-20, -2.4752e-20, -9.64887e-21, -3.69133e-21, -1.44485e-21, -5.53705e-22, -2.17004e-22, -8.31592e-23, -3.25898e-23, -1.24727e-23, -4.8826e-24, -1.86403e-24, -7.28156e-25, -2.76957e-25, -1.07839e-25, -4.08055e-26, -1.58159e-26, -5.94256e-27, -2.28874e-27, -8.51733e-28, -3.2517e-28, -1.19409e-28, -4.50246e-29, -1.62213e-29, -6.00592e-30, -2.10174e-30, -7.56086e-31, -2.53061e-31, -8.74967e-32, -2.71649e-32, -8.64636e-33,
-2.39494e-33, -5.82687e-34, 1.3202e-35, 4.2246e-36, 1.43632e-36, 5.20266e-37, 1.93934e-37, 7.32492e-38, 2.76571e-38, 1.04692e-38, 3.94828e-39, 1.49132e-39, 5.60774e-40, 2.11105e-40, 7.90595e-41, 2.96308e-41, 1.1039e-41, 4.11417e-42, 1.52275e-42, 5.63557e-43, 2.06892e-43, 7.59046e-44,
2.75837e-44, 1.00101e-44, 3.59124e-45, 1.28535e-45, 4.53604e-46, 1.59476e-46, 5.5077e-47, 1.89109e-47, 6.34292e-48, 2.10837e-48, 6.78502e-49, 2.15277e-49, 6.51563e-50, 1.9265e-50, 5.24791e-51, 1.36221e-51, 2.98239e-52, 4.97929e-53]

and that’s what I mean by the cutoff issue: the ODE solver tolerance is just a relative tolerance of 1e-3 and an abstol of 1e-6, so it should really look like:

(sol[end]).coefficients = [1.74262, 0.397778, -0.0454475, -0.00270537, 0.00176325, -0.000219333, 6.35796e-5, -1.62111e-5, 1.19832e-5, -3.05395e-6, 1.83367e-6]

so if there were a constructor that could autochop (here emulated by chopping manually inside the RHS):

function ff(u,p,t)
    chop!(u,1e-6)
    2u - u^2
end
prob = ODEProblem(ff,h,(0.0,1.0))
sol = DiffEqBase.__solve(prob,Tsit5())
@show sol[end].coefficients

this would give an efficient solution:

(sol[end]).coefficients = [1.74262, 0.397778, -0.0454475, -0.00270537, 0.00176325, -0.000219333, 6.35794e-5, -1.62109e-5, 1.19852e-5, -3.06974e-6, 1.88027e-6, -4.7376e-7, 3.15883e-7]
2 Likes

Oh and BTW, implicit methods via iteration work:

sol = DiffEqBase.__solve(prob,KenCarp4(nlsolve=NLFunctional()))
@show sol[end].coefficients

sol = DiffEqBase.__solve(prob,KenCarp4(nlsolve=NLAnderson()))
sol = DiffEqBase.__solve(prob,KenCarp4(nlsolve=NLNewton()))

but using quasi-Newton will fail because ApproxFun doesn’t expose an array interface on the coefficients:

h[2] # h.coefficients[2]?
length(h) # length(h.coefficients)?

If that were exposed, then the nonlinear solver would automatically work like a pseudospectral method by solving the ODE update equation in coefficient space, assuming the Jacobian calculation plays nicely. So I just need to coordinate with @dlfivefifty to get these things acting enough like arrays, or enough like scalars, for that to work.
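
Purely as a mock-up of what exposing an array interface could mean (these overloads are not part of ApproxFun, the exact methods are an assumption on my part, and defining them outside the package would be type piracy), it might be as little as:

using ApproxFun

# Hypothetical sketch only: route indexing and length through the coefficient vector,
# so a Fun looks like its vector of coefficients to generic solver code.
Base.getindex(f::ApproxFun.Fun, i::Int) = f.coefficients[i]   # h[2] == h.coefficients[2]
Base.length(f::ApproxFun.Fun)           = length(f.coefficients)
Base.eachindex(f::ApproxFun.Fun)        = eachindex(f.coefficients)

With something along those lines, the quasi-Newton path could treat the Fun as its coefficient vector, which is the “solving the update equation in coefficient space” behaviour described above.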

The long-term goal is to make ApproxFun a thin wrapper around a continuum array: https://github.com/JuliaApproximation/ContinuumArrays.jl

Then all the Operator terminology becomes (possibly infinite via InfiniteArrays.jl) Array terminology, making it a lot easier to describe basic operations like restricting to a subspace, mass matrices, etc. In particular, it gives a nice language for generating structured and parallelised arrays.

We have the proof of concept implemented for linear spline FEM and (infinite) ultraspherical methods; see the tests. Once this is done, syncing with DiffEq should be trivial.

However, I’m busy working on papers at the moment, so I am actively trying to avoid working on ContinuumArrays.jl. But perhaps I’ll have a software relapse and show that it can be linked in with DiffEq.jl.

2 Likes

L-stable integrators will work with the right derivative definition:

OrdinaryDiffEq.jacobian(ff,x::ApproxFun.Fun,diff_type) = 0.0 # should be dff/du
OrdinaryDiffEq.derivative(ff,x::Number,integrator) = 0.0 # should be dff/dt
OrdinaryDiffEq._vec(x::ApproxFun.Fun) = x
OrdinaryDiffEq._reshape(x::ApproxFun.Fun,siz) = x

sol = DiffEqBase.__solve(prob,Rosenbrock23())

Yeah that sounds good, so I can wait :slight_smile:

1 Like