# New package CentralDiff.jl released

I developed this package for my own research. However, I’ve decided to make it public as I believe it can be useful for others performing numerical calculations.

Key features:

• Very high-order differentiation and interpolation, up to 20th-order accuracy
• Generation of differentiation formulas using metaprogramming, achieving zero-cost abstraction
• Support for multidimensional arrays up to 3 dimensions
• Ability to choose between difference equations that prioritize consistency or computational efficiency, depending on the application

As this package has just been completed, I believe there are many areas for improvement and potential changes. I would greatly appreciate any code reviews.


Could you comment on how this differs from FiniteDiff.jl?

• I’m not very familiar with FiniteDiff.jl, but from looking at the source code, it seems to only implement up to second-order accuracy.
• Both FiniteDiff.jl and FiniteDifferences.jl aim to calculate derivatives of functions.
• In contrast, CentralDiff is designed to calculate derivatives of given stencils, so it doesn’t need to know the function’s form.
• CentralDiff provides low-level functions that don’t allocate any heap memory.
• This is intended for large-scale numerical computations.
• For the same reason, CentralDiff takes the reciprocal grid spacing `hi` (i.e. `1/h`) as a function argument instead of `h`, so inner loops multiply rather than divide.
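The reciprocal-spacing point can be illustrated with a minimal stand-alone sketch (the `d2f` stencil below is a hypothetical example, not CentralDiff.jl's API):

```julia
# Hypothetical 2nd-order second-derivative stencil. It takes the reciprocal
# spacing dxi = 1/dx, so the hot path multiplies instead of dividing,
# and it allocates no heap memory.
@inline d2f(F, i, dxi) = (F[i-1] - 2F[i] + F[i+1]) * dxi^2

F = collect(range(0.0, 1.0; length = 101)) .^ 2  # f(x) = x^2, so f'' == 2
dxi = 100.0                                      # 1/dx with dx = 0.01
d2 = d2f(F, 50, dxi)                             # ≈ 2.0
```

Since the division `1/dx` is hoisted out of the stencil call, it is paid once per time step instead of once per grid point.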

I will show you a simple example. Is it possible to achieve equivalent code with other libraries?

```julia
function diffusion!(order, U, V, W, dUdt, dVdt, dWdt, dx, dy, dz, RE)
    dxi, dyi, dzi = 1 / dx, 1 / dy, 1 / dz
    REi = 1 / RE

    for I in CartesianIndices(U)
        xdiff =
            (
                d2fdx(order, XAxis(), U, I, dxi) +
                d2fdx(order, YAxis(), U, I, dyi) +
                d2fdx(order, ZAxis(), U, I, dzi)
            ) * REi

        ydiff =
            (
                d2fdx(order, XAxis(), V, I, dxi) +
                d2fdx(order, YAxis(), V, I, dyi) +
                d2fdx(order, ZAxis(), V, I, dzi)
            ) * REi

        zdiff =
            (
                d2fdx(order, XAxis(), W, I, dxi) +
                d2fdx(order, YAxis(), W, I, dyi) +
                d2fdx(order, ZAxis(), W, I, dzi)
            ) * REi

        dUdt[I] += xdiff
        dVdt[I] += ydiff
        dWdt[I] += zdiff
    end
end
```

A fly-by comment: if you’re defining functions like `d2fdx`, you could for fun make this an alias for `d²fdx²`, or, for succinctness and to avoid the implied fraction, consider using `D²`.


Thank you, I will look into that.

FYI, IMO, zero-cost abstraction is usually achievable in Julia without resorting to metaprogramming. Metaprogramming is supposed to be a last resort, if-all-else-fails type of solution. Julia offers great practical expressive power, so resorting to metaprogramming is only rarely actually necessary.
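For what it’s worth, here is a minimal sketch of that point (none of this is CentralDiff.jl’s API): a stencil dot product written with `ntuple` and `Val` is fully unrolled by the compiler, so the abstraction costs nothing at run time, with no `@eval` or `@generated` involved.

```julia
# Compile-time-unrolled stencil dot product: for a tuple of N coefficients,
# ntuple(..., Val(N)) is unrolled during compilation, so no loop or
# intermediate array exists at run time.
@inline function stencil_dot(coeffs::NTuple{N,T}, F, i) where {N,T}
    half = (N + 1) ÷ 2  # index of the center coefficient
    sum(ntuple(k -> coeffs[k] * F[i + k - half], Val(N)))
end

coeffs = (-0.5, 0.0, 0.5)   # 2nd-order central-difference weights for h*f'
F = sin.(0:0.01:10.0)
dfdx = stencil_dot(coeffs, F, 500) / 0.01   # ≈ cos(4.99), since F[500] = sin(4.99)
```

Here the dispatch on `NTuple{N}` plays the role that the generated expressions play in the package: each `N` compiles to its own fully unrolled method.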


As you can see from the source code, the coefficients required for the calculation are not hard-coded. They are computed at compile time and used to generate functions. The term “zero-cost abstraction” may not have been appropriate.
By the way, would it have been a better implementation to hardcode all the coefficients?

Here is how it differs from ParallelStencil.jl:
CentralDiff.jl provides functions for very high-order differentiation and interpolation at a point.
ParallelStencil.jl is a package for MPI partitioning of arrays and implicit boundary exchange; the derivatives it provides act on an entire Array or CuArray, and they are only first-order accurate.
However, it is possible to use the two libraries in combination.

Finally, it is important to emphasize once again that the purpose of this library is to provide very-high-order derivative and interpolation functions. These are used, for example, in direct numerical simulations of turbulence.

If this is specific to stencils (no idea what those are) and very high order, can I suggest renaming the library? Something like `StencilCentralDiff` maybe? That way it won’t be confused with other standard autodiff tools like FiniteDiff.jl, ForwardDiff.jl or ReverseDiff.jl


My point is, you don’t need metaprogramming just for that.

No. I always advocate for less hardcoding and less code duplication. Here too.

It may indeed be a misleading package name.
However, `StencilCentralDiff` seems redundant, since central differences are by definition computed on stencils. I will try to think of a better name.

A stencil is the set of consecutive cells needed to compute the derivative at a point.
For example, you can compute the derivative as (df/dx)_i = (f_{i+1} - f_{i-1}) / (2 dx). The stencil is then {i-1, i, i+1}.
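That definition can be checked with a few lines of plain Julia (illustrative only, not the package’s API):

```julia
# Central difference on the stencil {i-1, i, i+1}:
# (df/dx)_i = (f[i+1] - f[i-1]) / (2*dx)
xs = collect(0.0:0.1:3.0)
f  = exp.(xs)
i, dx = 11, 0.1                    # xs[11] == 1.0
dfdx = (f[i+1] - f[i-1]) / (2dx)   # ≈ exp(1.0), with O(dx^2) error
```

Note that the derivative at point `i` only reads the neighboring samples; the function `exp` itself is never consulted, which is the difference from FiniteDiff.jl mentioned above.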

Sorry, I couldn’t think of a way to do this other than metaprogramming.
Could you tell me briefly what implementation would have been best here?

Maybe `HighOrderStencilDifferences`? Long names are almost never a bad thing if they’re more precise


So I had a quick look at your package’s code, and I think you are doing things just fine.
The only instance of metaprogramming is where you define your differentiation functions programmatically:

```julia
for N in (2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
    samples = Rational{BigInt}.((1-N)//2:(N-1)//2)
    A = TaylorMatrix(samples)
    Ai = inv(A)

    @eval begin
        @inline function fc(::Order{$N}, axis, F, I)
            $(Expr(:call, :+, _generate_terms(@view(Ai[1, :]))...))
        end

        @inline function dfdxc(::Order{$N}, axis, F, I, dxi)
            $(Expr(:call, :+, _generate_terms(@view(Ai[2, :]))...)) * dxi
        end

        @inline function dfdx(::Order{$N}, axis, F, I, dxi)
            $(Expr(:call, :+, _generate_terms(@views(_shift_and_add(Ai[1, :], Ai[2, :])))...)) * dxi
        end

        @inline function d2fdx(::Order{$N}, axis, F, I, dxi)
            $(Expr(:call, :+, _generate_terms(@views(_shift_and_add(Ai[2, :], Ai[2, :])))...)) * dxi^2
        end
    end
end
```

I personally think this is totally reasonable since your package relies on/promises the performance of these methods.
Some considerations:

• One can in principle try to avoid this explicit generation of terms, however I think in this case it would be quite hard to actually get the compiler to do it at all at compile time (need to construct and invert some matrix apparently…). And as said above it is crucial that this does not happen at every call to these methods. That more than justifies using metaprogramming for me in this case.
• There is another design decision here: One could have done this with `@generated function`s. This would look like this:
```julia
@generated function fc(::Order{N}, axis, F, I) where {N}
    samples = Rational{BigInt}.((1-N)//2:(N-1)//2)
    A = TaylorMatrix(samples)
    Ai = inv(A)
    return Expr(:call, :+, _generate_terms(@view(Ai[1, :]))...)
end
```

In principle this would allow arbitrarily high orders to be generated. However, I see two arguments in favour of your current design: 1) your approach is more efficient, since it reuses the same matrix for all four functions of the same order; 2) your algorithm for generating the coefficients likely becomes impractical at some order, so it can make sense to limit the maximum order (a bit like how `factorial` for `Int`s just uses a lookup table).
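To make the discussion above concrete, here is a self-contained sketch of the coefficient generation itself. `TaylorMatrix` belongs to CentralDiff.jl; the construction below is my assumption of what it does (a Taylor/Vandermonde-type matrix in exact rational arithmetic), shown for the simple collocated stencil `(-1, 0, 1)` rather than the package’s staggered samples:

```julia
using LinearAlgebra

# T maps the vector [f, h*f', h²*f'', …] at the center point to the sampled
# values [f(x+s₁h), f(x+s₂h), …]; row k, column j holds sₖ^(j-1) / (j-1)!.
samples = Rational{BigInt}.(-1:1)
T = [s^j // factorial(big(j)) for s in samples, j in 0:length(samples)-1]

# Inverting T in exact arithmetic recovers the finite-difference weights:
Ti = inv(T)
Ti[2, :]   # weights for h*f'   → [-1//2, 0, 1//2]
Ti[3, :]   # weights for h²*f'' → [1, -2, 1]
```

Because the element type is `Rational{BigInt}`, the inversion is exact, so the generated weights carry no floating-point error; they are only rounded once, when spliced into the compiled stencil expressions.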

Thank you for reviewing the code. I am relieved to know that the implementation is reasonable.