This is my first day learning Julia, and I’m not sure of an elegant way to compute the finite-difference Laplacian of an N×M `Float32` array.
$$\Delta f(x,y) \approx f(x-1,y) + f(x+1,y) + f(x,y-1) + f(x,y+1) - 4f(x,y)$$
`Base.diff` is implemented using `view(a, r1...) .- view(a, r0...)`, so I imagine this would look similar. However, after handling the boundaries, this approach seems messier than I’d expect.
The problem I’m testing is the 2D heat equation on a Cartesian grid using DifferentialEquations.jl.
No need to dig through the standard library in the hopes of finding a built-in “vectorized” routine for this. A loop (in a function) will be clearer and faster.
```julia
∇²f = zero(f)  # initialize to zero since we don't touch the boundaries
for y = 2:size(f,2)-1, x = 2:size(f,1)-1  # compute ∇²f in the interior of f
    ∇²f[x,y] = f[x-1,y] + f[x+1,y] + f[x,y-1] + f[x,y+1] - 4f[x,y]
end
```
Note that I only looped over the “interior” of the array `f`. You could write separate loops to handle the boundaries, but a much easier solution in this kind of code is to use “ghost cells”: simply define extra array elements around the boundaries and assign them to whatever boundary condition you want (e.g. Dirichlet, periodic, …) before computing the Laplacian.
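For instance, here is a minimal sketch of the ghost-cell approach for periodic boundaries (the function name and layout are illustrative, not a library API):

```julia
# Sketch: 5-point Laplacian with one layer of periodic ghost cells.
function laplacian_periodic(f::AbstractMatrix{T}) where {T}
    N, M = size(f)
    g = Matrix{T}(undef, N + 2, M + 2)  # padded array; corners stay undefined but are never read
    g[2:N+1, 2:M+1] .= f                # interior holds the physical data
    g[1,   2:M+1] .= f[N, :]            # wrap rows (periodic)
    g[N+2, 2:M+1] .= f[1, :]
    g[2:N+1, 1]   .= f[:, M]            # wrap columns
    g[2:N+1, M+2] .= f[:, 1]
    ∇²f = similar(f)
    for y = 2:M+1, x = 2:N+1            # pure interior loop, no boundary branches
        ∇²f[x-1, y-1] = g[x-1,y] + g[x+1,y] + g[x,y-1] + g[x,y+1] - 4g[x,y]
    end
    return ∇²f
end
```

The payoff is that the stencil loop itself never branches on the boundary; changing the boundary condition only changes the ghost-cell assignments.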
If you are performing additional calculations on the Laplacian (e.g. explicit timestepping of a heat equation), you can combine the above loop with your other calculations rather than allocating and returning a temporary `∇²f` array. There are also fancier tricks using `@nloops` (from `Base.Cartesian`) to write a Laplacian implementation that works for any number of dimensions.
On the other hand, if you are performing sparse solves, e.g. solving Poisson’s equations, then you typically want the Laplacian operator as an explicit sparse matrix. On a Cartesian grid, this can be accomplished neatly with Kronecker products, as shown in this notebook from one of my classes.
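As a rough sketch of the Kronecker-product construction (not the notebook’s exact code; `D2` and `laplacian2d` are names I made up for illustration), the 2D Laplacian with Dirichlet boundaries on an N×M grid can be assembled from 1D second-difference matrices:

```julia
using SparseArrays, LinearAlgebra

# 1D second-difference matrix (1, -2, 1) with Dirichlet boundaries
D2(n) = spdiagm(-1 => ones(n - 1), 0 => fill(-2.0, n), 1 => ones(n - 1))

# 2D Laplacian acting on vec(f) for an N×M grid:
#   A = I_M ⊗ D2(N) + D2(M) ⊗ I_N
laplacian2d(N, M) = kron(sparse(1.0I, M, M), D2(N)) + kron(D2(M), sparse(1.0I, N, N))
```

The first Kronecker term differences along the first (column-major) dimension, the second along the other; the resulting sparse matrix can be handed directly to `\` or an iterative solver.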
You might want to take a look at this tutorial for optimizing these kinds of calculations: http://juliadiffeq.org/DiffEqTutorials.jl/html/introduction/optimizing_diffeq_code.html . (And you can optimize it even more as well…)
Thanks! The sparse matrix method works great too, but I’m working with dense matrices for now.
For a 2-torus, would you recommend ghost cells in `u` or handling the indices in the explicit loop? Is there a way to create an array of size `(size(u,1)+2, size(u,2)+2)` and then create a “view” so that indices like `u_ghost[size(u,1)+1]` are valid?
Using Grassmann.jl you can use automatic differentiation to compute the fully generalized Hodge-de Rham Laplacian and Dirac operators in any dimension with any metric. It’s a very simple automatic differentiation formula with Grassmann.jl, but I won’t explain the details until my presentation at JuliaCon 2019, which will be supplemented by a paper with detailed proofs.
Sorry if this doesn’t answer your question about finite differences; I’m just a bit excited to share what I’m working on and hope it will be useful once published.
With this new method, you won’t have to worry about what kind of stencil is used, as the algebra is all automatically taken care of.
You could use an `OffsetArray` (from the OffsetArrays.jl package) for this.
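For example, here is a hedged sketch with OffsetArrays.jl: pad `u` with a ghost layer living at indices `0` and `size(u, k) + 1`, so the interior keeps its natural 1-based indices (variable names are illustrative):

```julia
using OffsetArrays

N, M = 4, 5
u = rand(Float32, N, M)                        # physical field
ug = OffsetArray{Float32}(undef, 0:N+1, 0:M+1) # ghost layer at indices 0 and N+1 / M+1
ug[1:N, 1:M] .= u                              # interior keeps u's natural indices
ug[0,   1:M] .= u[N, :]                        # periodic (2-torus) ghost cells
ug[N+1, 1:M] .= u[1, :]
ug[1:N, 0]   .= u[:, M]
ug[1:N, M+1] .= u[:, 1]
```

The Laplacian loop then runs over `1:N, 1:M` with no wraparound arithmetic, and `ug[size(u,1)+1, j]` is a valid index as you asked.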
Ghost cells. See also: Arrays with periodic boundaries
(You’ll need an explicit loop anyway to update the ghost cells on each iteration. But the whole approach is so much more flexible because it separates the boundary conditions from the stencil.)
Two libraries to do this:

EquivariantOperators.jl scales the stencil coefficients with respect to your finite-difference cell dimensions, which can in general be non-Cartesian. Set `border = :circular` in the constructor for periodic boundaries:

```julia
cell = dx * [1 0; 0 1]
▽2 = Laplacian(cell)
```
Images.jl applies the generic (1, -2, 1) second-difference stencil via `Kernel.Laplacian()`:

```julia
imgl = imfilter(img, Kernel.Laplacian())
```