# Generating finite-difference stencils

Regarding the second suggested approach in the topic title: discrete derivatives (without interpolating data). Is there a Julia package that computes discrete derivatives (e.g. using higher-order stencils), including on non-uniform grids?

I am looking for a package that calculates the weights for any order of derivative (including the 0th derivative, corresponding to interpolation), approximated to any order of accuracy on an arbitrary grid in one dimension, as described in the paper “Generation of Finite Difference Formulas on Arbitrarily Spaced Grids” by Bengt Fornberg.

FWIW, here are some links to code based on Fornberg’s 1988 and 1998 articles: Julia code and Python code.

Here’s a maintained version of Fornberg’s method:

With prebuilt operators:

Of course, building operators is a slow way to implement this, so I wouldn’t recommend it, but the package exists.


Isn’t that package in the middle of being deprecated?

I was thinking about it and it seems like a vastly simpler algorithm exists. I think the following code should return the desired stencil weights w_k such that:

f^{(m)}(x_0) ≈ \sum_{k=1}^n w_k f(x_k) + O(\max_k |x_k - x_0|^{n-m})

given any arbitrary set of n > m points x_k, the desired derivative order m, and the point x_0 where the derivative is desired.

```julia
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m) # vector of weights w
end
```


I checked, and it returns the same results (up to roundoff error) as DiffEqOperators.calculate_weights for random inputs, so it seems fine?

Am I missing something? Probably it becomes badly conditioned for large n, but in practice it seems unlikely that anyone would need n > 10, and I’m not sure how well Fornberg does in that regime either…

PS. To derive the algorithm above, just Taylor expand \sum_{k=1}^n w_k f(x_k) around x_0 to order n-1, then set all of the f^{(\ell\ne m)} coefficients to zero while setting the f^{(m)} coefficient to 1. The result is an n\times n system of linear equations in the weights w_k.
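As a quick sanity check of this derivation, the code (repeated here so the snippet runs standalone) reproduces the classic three-point second-derivative stencil on a uniform grid:

```julia
# `stencil` from above: solve the n×n Taylor-coefficient system for the weights.
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)   # A[ℓ+1, k] = (x_k - x₀)^ℓ / ℓ!
    return A \ (ℓ .== m)                # right-hand side picks out the f⁽ᵐ⁾ coefficient
end

w = stencil([-1.0, 0.0, 1.0], 0.0, 2)
# w ≈ [1.0, -2.0, 1.0], i.e. f''(0) ≈ f(-1) - 2f(0) + f(1)
```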


In fact, simply by passing Integer and Rational arguments (so that / becomes exact), the same code can give exact rational results. For example:

```julia
julia> stencil(-4:4, 0//1, 4)
9-element Vector{Rational{Int64}}:
    7//240
   -2//5
  169//60
 -122//15
   91//8
 -122//15
  169//60
   -2//5
    7//240
```


matching the last line of Table 1 in the Fornberg paper.

(And I can reproduce the other tables in Fornberg as well.)
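For example, a sketch reusing the `stencil` function from above (repeated so the snippet is self-contained) recovers the standard fourth-order centered first-derivative weights exactly:

```julia
# `stencil` from earlier in the thread, repeated so this snippet runs standalone:
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

# Rational x₀ makes the whole solve exact:
w = stencil(-2:2, 0//1, 1)
# [1//12, -2//3, 0//1, 2//3, -1//12]
# i.e. f'(0) ≈ (f(-2) - 8f(-1) + 8f(1) - f(2)) / 12
```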

Gotta love generic programming!


This is the kind of example that should be used to “sell” Julia! The claims of raw speed rarely rise to the point where doubters can be persuaded, but this is quite another matter.

Well done!


Yeah, just like ODE.jl. It’ll stay alive for at least another decade as something I tell people not to do.

It’s just that for most applications (i.e. anything nonlinear), using a “stencil” calculation is simply worse than fusing the nonlinearity in and doing a straight flat loop. By a large margin. We’ve demonstrated this over and over again, so now we’re at the point where we basically say: we’ll keep the stencil thing because it’s what people learn in school, but please don’t use it under the vague notion that it’s somehow high-performance. Yes, you can get a “fast” implementation of a stencil, but it’s still worse than naively fusing, unless your equations are actually linear.
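To illustrate what “fusing” means here, a toy sketch (a hypothetical semilinear example, not code from any package): instead of applying a stencil operator `A` and then a second sweep for the nonlinearity (`du = A*u .+ u.^2`), the stencil and the nonlinearity are computed together in one flat loop over the grid:

```julia
# Sketch: right-hand side du = Δu + u² on a 1D grid with spacing h.
# Fused version: stencil and nonlinearity in a single pass over the array,
# instead of a matrix-vector product followed by a second broadcast sweep.
function rhs_fused!(du, u, h)
    n = length(u)
    @inbounds for i in 2:n-1
        du[i] = (u[i-1] - 2u[i] + u[i+1]) / h^2 + u[i]^2
    end
    du[1] = du[n] = 0   # e.g. Dirichlet boundary values held fixed
    return du
end
```

The fused loop touches each entry of `u` once and allocates nothing, which is the point being made above.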

Huh, cool. I would add that the last term should just impose the summation condition. There are some things out there that say it matters over a long integration, but if you’re seeing the same results up to round-off error, then it’s probably fine.


What a fun little code!

I’d be cautious about this version, though. When I try x = 1:5 and evaluate the first derivative at an endpoint, it loses 2 digits of accuracy relative to Chris’s code (and the exact rational results). It’s not terrible, but given how sensitive FD methods are to roundoff already, it’s not great, either. It also loses exact antisymmetry at the ends, which I suppose is just unaesthetic.

I’d argue that it’s safe only in the Rational case. @ChrisRackauckas’s code can turn out rational results, too, however.
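One way to see the endpoint roundoff, a sketch using the `stencil` function from above (repeated so the snippet is self-contained): compute the one-sided first-derivative weights for x = 1:5 in Float64 and compare against the exact Rational result:

```julia
# `stencil` from earlier in the thread, repeated so this snippet runs standalone:
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

w_float = stencil(1.0:5.0, 1.0, 1)   # endpoint weights in Float64
w_exact = stencil(1:5, 1//1, 1)      # same weights, computed exactly:
                                     # [-25//12, 4, -3, 4//3, -1//4]
maximum(abs.(w_float .- Float64.(w_exact)))  # roundoff error of the Float64 path
```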

Fornberg’s 1988 intro says that “the main anticipated application” of his method is to “dynamically changing grids.” I’m not sure how that ever played out…


@stevengj’s stencil code turned out to be an incredibly painless way to calculate wave speeds from wave arrival times at unevenly-spaced pressure transducers with measurement uncertainties:

```julia
function wavespeed(xt, N=1)
    map(axes(xt, 1)) do i
        window = max(1, i - N):min(size(xt, 1), i + N)
        mapreduce(*, +, stencil(xt.t[window], xt.t[i], 1), xt.x[window])
    end
end
```

```julia
julia> xt
10×3 DataFrame
 Row │ name     x            t
     │ String7  Measurem…    Measurem…
─────┼────────────────────────────────────────
   1 │ PT1       0.63±0.004  -0.002128±1.0e-5
   2 │ PT2       1.22±0.004  -0.001209±1.0e-5
   3 │ PT3      2.072±0.004        0.0±1.0e-5
   4 │ PT4      3.952±0.004   0.002638±1.0e-5
   5 │ PT5      4.606±0.004   0.003565±1.0e-5
   6 │ PT6      5.207±0.004   0.004423±1.0e-5
   7 │ PT7      5.536±0.004   0.004879±1.0e-5
   8 │ PT8      6.087±0.004    0.00567±1.0e-5
   9 │ PT9      6.607±0.004   0.006409±1.0e-5
  10 │ PT10     6.862±0.004   0.006772±1.0e-5
```

```julia
julia> wavespeed(xt)
10-element Vector{Measurement{Float64}}:
 642.0 ± 12.0
 669.1 ± 5.7
 707.2 ± 5.9
 707.4 ± 8.6
 702.9 ± 6.5
 714.0 ± 15.0
 712.0 ± 15.0
 700.2 ± 7.5
 703.0 ± 19.0
 702.0 ± 31.0
```


Thanks!


Cool! But as @tobydriscoll pointed out above, my code is susceptible to roundoff errors in floating-point calculations (probably because my matrix A is badly conditioned), especially as the number of points grows, whereas DiffEqOperators.calculate_weights is better-behaved and otherwise equivalent. So I would tend to recommend the latter in practice (though I don’t know whether it handles Measurement values?), especially for window sizes larger than a few points.

For measurements where the inherent uncertainty (we’re talking about an eyeball and a tape measure here) is many orders of magnitude larger than floating-point roundoff errors, I’ll take your simple, Measurement-compatible approach over adding a dependency with an active deprecation warning.


Is there Julia code that does this for nonuniform spacing, one-sided differences?

It seems to work:

```julia
using DiffEqOperators: calculate_weights

function wavespeed2(xt, N=1)
    map(axes(xt, 1)) do i
        window = max(1, i - N):min(size(xt, 1), i + N)
        mapreduce(*, +, calculate_weights(1, xt.t[i], xt.t[window]), xt.x[window])
    end
end
```

```julia
julia> V2 = wavespeed2(xt)
10-element Vector{Measurement{Float64}}:
 642.0 ± 12.0
 669.1 ± 5.7
 707.2 ± 5.9
 707.4 ± 8.6
 702.9 ± 6.5
 714.0 ± 15.0
 712.0 ± 15.0
 700.2 ± 7.5
 703.0 ± 19.0
 702.0 ± 31.0
```

```julia
julia> wavespeed2(xt) - wavespeed(xt)
10-element Vector{Measurement{Float64}}:
       0.0 ± 0.0
       0.0 ± 2.7e-15
 -1.137e-13 ± 1.4e-15
  4.547e-13 ± 9.5e-15
       0.0 ± 1.3e-14
       0.0 ± 3.8e-14
  -4.55e-13 ± 3.3e-14
   4.55e-13 ± 3.2e-14
   1.82e-12 ± 1.1e-13
       0.0 ± 0.0
```


Both my code and DiffEqOperators.calculate_weights handle one-sided differences as a special case.
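For example, a sketch with the `stencil` code from above (repeated so the snippet is self-contained): an endpoint grid directly yields the familiar second-order forward-difference weights, with no special-casing needed:

```julia
# `stencil` from earlier in the thread, repeated so this snippet runs standalone:
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

w = stencil(0:2, 0//1, 1)   # all points on one side of x₀ = 0
# [-3//2, 2//1, -1//2]  →  f'(0) ≈ (-3f(0) + 4f(1) - f(2)) / 2
```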

Right. Probably it should be fine in that context up to about 20 points, for which the condition number of my A is about 10^8 (for centered differences), so it should still be accurate to half of double precision. Beyond that, the factorial function will throw an integer-overflow error (at least, as the code is currently written), so at least it will fail loudly.
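Concretely (a sketch of the failure mode, not a claim about where accuracy actually gives out): the matrix A uses `factorial(ℓ)` with ℓ up to n−1, and Int64 factorials overflow starting at ℓ = 21, so stencils with 22 or more points error out:

```julia
# Int64 factorials overflow at ℓ = 21, so `stencil` with n ≥ 22 points fails loudly:
factorial(20)        # = 2432902008176640000, still fits in Int64
# factorial(21)      # throws OverflowError
factorial(big(21))   # exact BigInt; one possible workaround is big.(ℓ) inside `stencil`
```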

Can I ask what is meant here by “fusing the nonlinearity in and doing a straight flat loop”? I’m genuinely interested in this, but my knowledge of numerical integration is limited to the finite-difference stuff you learn in introductory uni courses.