Second-order 2D differentiation with DiffEqOperators.jl

Hello, I am trying to calculate the Laplacian of a 2D array for use in DifferentialEquations.jl, and hopefully expand to 3D later.
I am currently implementing it using DiffEqOperators.jl with CenteredDifference() and Neumann0BC().
However, I don't see an easy way to also apply the Neumann BCs in the y direction.
So how do I implement the BCs along both axes?
And in general I wonder if this is the best way to implement this, or if there is a more direct way.

dx = 0.01
axis = collect(-1:dx:1);
umesh = [(i, j) for i in axis, j in axis];

sines((x,y)) = sin(pi*x)*sin(pi*y)
u = sines.(umesh)

N = length(axis)  # grid size used by the derivative operators

temp_u = Dirichlet0BC(Float64)*u   # pads along one axis only -- how do I get the BC in y as well?
xdiff = CenteredDifference{1}(2, 2, dx, N)*temp_u
ydiff = CenteredDifference{2}(2, 2, dx, N)*u
laplacian_of_u = xdiff + ydiff

Hi,

What you want to do is:

dx = 0.01
axis = collect(-1:dx:1);
umesh = [(i, j) for i in axis, j in axis];

sines((x,y)) = sin(pi*x)*sin(pi*y)
u = sines.(umesh)

N = length(axis)

# one Q operator per dimension of u, from a single constructor call
Qx, Qy = Dirichlet0BC(Float64, size(u))

Dxx = CenteredDifference{1}(2, 2, dx, N)
Dyy = CenteredDifference{2}(2, 2, dx, N)

A = (Dxx + Dyy)*compose(Qx, Qy)

laplacian_of_u = A*u

Doing it this way improves performance: DiffEqOperators fuses the operators' stencils into a single loop and can leverage GPU convolutions from NNlib.

For the extension to 3D you do the obvious thing: the BC constructors return a variable number of Q operators depending on ndims(u), and compose takes any number of arguments.
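For example, something along these lines (an untested sketch; the 3D test function is just for illustration):

u3 = [sin(pi*x)*sin(pi*y)*sin(pi*z) for x in axis, y in axis, z in axis]

Qx, Qy, Qz = Dirichlet0BC(Float64, size(u3))  # three Q operators, since ndims(u3) == 3

Dxx = CenteredDifference{1}(2, 2, dx, N)
Dyy = CenteredDifference{2}(2, 2, dx, N)
Dzz = CenteredDifference{3}(2, 2, dx, N)

A3 = (Dxx + Dyy + Dzz)*compose(Qx, Qy, Qz)
laplacian_of_u3 = A3*u3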

I’m working on DiffEqOperators and am happy to help if you have further questions; let me know how you get on!


Thanks very much for the reply. I had already found the src for the multi_dim_bc_operators, but the info on compose is very handy.

I noticed, however, that the performance is not what I would like it to be; I will respond later with a concrete example.
I do get the following warning, though. Is it of any significance?

┌ Warning: #= C:\Users\siemd\.julia\packages\DiffEqOperators\z4eVP\src\derivative_operators\convolutions.jl:419 =#:
│ `LoopVectorization.check_args` on your inputs failed; running fallback `@inbounds @fastmath` loop instead.
│ Use `warn_check_args=false`, e.g. `@turbo warn_check_args=false ...`, to disable this warning.
└ @ DiffEqOperators C:\Users\siemd\.julia\packages\LoopVectorization\kVenK\src\condense_loopset.jl:825

It is indeed significant; it explains why this runs slower than expected :sweat_smile:

I figured.

I have done some concrete comparisons of the differentiation using DiffEqOperators and my own simple implementations.

For the 1D case I compared a simple finite-difference matrix multiplication against DiffEqOperators.
For this I found similar performance.
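For reference, this is the kind of thing I mean by a finite-difference matrix (a sketch with illustrative names rather than my exact benchmark code; zero boundary values assumed):

using SparseArrays

# Second-derivative stencil (1, -2, 1)/dx^2 as a sparse tridiagonal matrix
D2(n, dx) = spdiagm(-1 => ones(n-1), 0 => fill(-2.0, n), 1 => ones(n-1)) ./ dx^2

u1d = sin.(pi .* axis)
d2u1d = D2(length(u1d), dx) * u1d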

In the 2D case, however, I found DiffEqOperators.jl to be ~100x slower.
For my own implementation I used this method:
Kronecker sum Laplacian
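Roughly like this, reusing D2 from the sketch above (again illustrative rather than my exact code):

using LinearAlgebra, SparseArrays

Id(n) = sparse(I, n, n)

# Kronecker sum: D2 applied along each axis of the flattened 2D grid
lap2d(n, dx) = kron(Id(n), D2(n, dx)) + kron(D2(n, dx), Id(n))

laplacian_of_u = reshape(lap2d(length(axis), dx) * vec(u), size(u))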
And the following implementation for DiffEqOperators:

# u, axis, and dx as defined earlier; N = length(axis)
Qx, Qy = Dirichlet0BC(Float64, size(u));
Dxx = CenteredDifference{1}(2,2,dx,N);
Dyy = CenteredDifference{2}(2,2,dx,N);
A = (Dxx + Dyy)*compose(Qx, Qy);
A*u

Which gives me these warnings:

┌ Warning: #= C:\Users\siemd\.julia\packages\DiffEqOperators\Xddum\src\derivative_operators\convolutions.jl:79 =#:
│ `LoopVectorization.check_args` on your inputs failed; running fallback `@inbounds @fastmath` loop instead.
│ Use `warn_check_args=false`, e.g. `@turbo warn_check_args=false ...`, to disable this warning.
└ @ DiffEqOperators C:\Users\siemd\.julia\packages\LoopVectorization\wLMFa\src\condense_loopset.jl:825
┌ Warning: #= C:\Users\siemd\.julia\packages\DiffEqOperators\Xddum\src\derivative_operators\convolutions.jl:49 =#:
│ `LoopVectorization.check_args` on your inputs failed; running fallback `@inbounds @fastmath` loop instead.
│ Use `warn_check_args=false`, e.g. `@turbo warn_check_args=false ...`, to disable this warning.
└ @ DiffEqOperators C:\Users\siemd\.julia\packages\LoopVectorization\wLMFa\src\condense_loopset.jl:825
┌ Warning: #= C:\Users\siemd\.julia\packages\DiffEqOperators\Xddum\src\derivative_operators\convolutions.jl:98 =#:
│ `LoopVectorization.check_args` on your inputs failed; running fallback `@inbounds @fastmath` loop instead.
│ Use `warn_check_args=false`, e.g. `@turbo warn_check_args=false ...`, to disable this warning.
└ @ DiffEqOperators C:\Users\siemd\.julia\packages\LoopVectorization\wLMFa\src\condense_loopset.jl:825

I updated all my packages but still receive these warnings.
I am using Julia version 1.6.3.
What can I do to fix these warnings and improve the performance?

MethodOfLines.jl is just a better way to do it. DiffEqOperators.jl has been effectively deprecated for years.
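For reference, here is a minimal sketch of the MethodOfLines.jl workflow on a problem like the one above (a 2D heat equation with that initial condition; the step sizes, solver, and names follow the package tutorial and are illustrative):

using ModelingToolkit, MethodOfLines, OrdinaryDiffEq, DomainSets

@parameters t x y
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2
Dyy = Differential(y)^2

# The Laplacian is written symbolically; MethodOfLines generates
# the finite-difference discretization from it.
eq = Dt(u(t, x, y)) ~ Dxx(u(t, x, y)) + Dyy(u(t, x, y))

bcs = [u(0, x, y) ~ sin(pi*x)*sin(pi*y),
       u(t, -1, y) ~ 0.0, u(t, 1, y) ~ 0.0,
       u(t, x, -1) ~ 0.0, u(t, x, 1) ~ 0.0]

domains = [t ∈ Interval(0.0, 1.0),
           x ∈ Interval(-1.0, 1.0),
           y ∈ Interval(-1.0, 1.0)]

@named pdesys = PDESystem(eq, bcs, domains, [t, x, y], [u(t, x, y)])

discretization = MOLFiniteDifference([x => 0.1, y => 0.1], t; approx_order = 2)
prob = discretize(pdesys, discretization)
sol = solve(prob, Tsit5())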