Regarding the second suggested approach in the topic title: discrete derivatives (without interpolating data)
Is there a Julia package that computes discrete derivatives (e.g. using higher-order stencils), including on non-uniform grids?

I was thinking about it, and it seems like a vastly simpler algorithm exists. Given an arbitrary set of n > m points x_k, the desired derivative order m, and the point x_0 where the derivative is desired, I think the following code returns the stencil weights w_k such that f^{(m)}(x_0) \approx \sum_{k=1}^n w_k f(x_k):

```julia
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)  # matrix of Taylor coefficients
    return A \ (ℓ .== m) # vector of weights w
end
```
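As a quick sanity check against textbook values (the definition is repeated here so the snippet runs standalone; the expected weights are the classic central-difference ones, not something new):

```julia
# stencil as defined above, repeated so this runs on its own
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

w = stencil([0.0, 1.0, 2.0], 1.0, 2)   # second derivative at the midpoint
# w ≈ [1, -2, 1], the classic central second-difference weights for h = 1

w1 = stencil([0.0, 0.5, 2.0], 0.5, 1)  # first derivative on a non-uniform grid
# the ℓ = 0 equation forces derivative weights (m ≥ 1) to sum to zero:
sum(w1)  # ≈ 0
```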

I checked, and it returns the same results (up to roundoff error) as DiffEqOperators.calculate_weights for random inputs, so it seems fine?

Am I missing something? Probably it becomes badly conditioned for large n, but in practice it seems unlikely that anyone would need n > 10, and I’m not sure how well Fornberg does in that regime either…

PS. To derive the algorithm above, just Taylor expand \sum_{k=1}^n w_k f(x_k) around x_0 to order n-1, then set all of the f^{(\ell\ne m)} coefficients to zero while setting the f^{(m)} coefficient to 1. The result is an n\times n system of linear equations in the weights w_k.
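Written out, that derivation gives one linear equation per Taylor order \ell:

\sum_{k=1}^{n} w_k \frac{(x_k - x_0)^\ell}{\ell!} =
\begin{cases} 1, & \ell = m,\\ 0, & \ell \ne m, \end{cases}
\qquad \ell = 0, 1, \dots, n-1,

which is exactly the n\times n system A w = (\ell .== m) that the code solves: the rows of A are indexed by \ell and the columns by k.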

This is the kind of example that should be used to “sell” Julia! The claims of raw speed rarely rise to the point where doubters can be persuaded, but this is quite another matter.

Yeah, just like ODE.jl. It’ll stay alive for at least another decade as something I tell people not to do.

It’s just that for most applications (i.e. anything nonlinear), using a “stencil” calculation is simply worse than fusing the nonlinearity in and doing a straight flat loop. By a large margin. We’ve demonstrated this over and over again, so now we’re at the point where we basically say: we’ll keep the stencil thing because it’s what people learn in school, but please don’t use it under the vague notion that it’s somehow high-performance. Yes, you can get a “fast” implementation of a stencil, but it’s still worse than naive fusing, unless your equations are actually linear.
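A toy sketch of the distinction, assuming a semilinear equation u_t = u_xx + u³ with fixed endpoints (the names and the equation here are just illustrative, not from any package):

```julia
# Toy RHS of u_t = u_xx + u³ (Dirichlet ends held fixed), two ways.

# "Stencil operator" style: one pass builds the second difference,
# a second pass adds the nonlinearity; two sweeps over memory.
function rhs_twopass(u, h)
    n = length(u)
    Au = zeros(n)
    for i in 2:n-1
        Au[i] = (u[i-1] - 2u[i] + u[i+1]) / h^2
    end
    du = Au .+ u .^ 3
    du[1] = du[n] = 0.0  # boundaries held fixed
    return du
end

# Fused: the stencil and the nonlinearity share one flat loop,
# so each u[i] is loaded once and du is written once.
function rhs_fused!(du, u, h)
    n = length(u)
    du[1] = du[n] = 0.0
    @inbounds for i in 2:n-1
        du[i] = (u[i-1] - 2u[i] + u[i+1]) / h^2 + u[i]^3
    end
    return du
end

u = rand(32)
rhs_twopass(u, 0.1) ≈ rhs_fused!(similar(u), u, 0.1)  # true
```

Both compute the same values; the fused loop just never materializes the intermediate Au, and in real codes that saves an entire extra sweep per operator application.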

Huh, cool. I would add that the last term should just impose the summation condition. There are some things out there that say it matters over a long integration, but if you’re seeing the same results up to round-off error then it probably doesn’t matter here.

I’d be cautious about this version, though. When I try for x=1:5 to evaluate the first derivative at an endpoint, it’s lost 2 digits of accuracy relative to Chris’ code (and exact rational results). It’s not terrible, but given how sensitive FD methods are to roundoff already, it’s not great, either. It also loses exact antisymmetry at the ends, which I suppose is just unaesthetic.

I’d argue that it’s safe only in the Rational case. @ChrisRackauckas’s code can turn out rational results, too, however.
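For example, since the signature accepts any Real, feeding stencil Rational inputs keeps every step exact; no roundoff, no lost antisymmetry (definition repeated so this runs standalone; the expected weights are the standard one-sided values):

```julia
# stencil from the top of the thread, repeated so this runs on its own
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

# Exact rational arithmetic end to end:
w = stencil(Rational{Int}[1, 2, 3, 4, 5], 1//1, 1)
# w == [-25//12, 4//1, -3//1, 4//3, -1//4],
# the standard 5-point one-sided first-derivative weights
```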

Fornberg’s 1988 intro says that “the main anticipated application” of his method is to “dynamically changing grids.” I’m not sure how that ever played out…

@stevengj’s stencil code turned out to be an incredibly painless way to calculate wave speeds from wave arrival times at unevenly-spaced pressure transducers with measurement uncertainties:

```julia
function wavespeed(xt, N=1)
    map(axes(xt, 1)) do i
        # window of up to 2N+1 neighboring transducers around transducer i
        window = max(1, i - N):min(size(xt, 1), i + N)
        # dx/dt at t[i]: first-derivative weights in t, applied to positions x
        mapreduce(*, +, stencil(xt.t[window], xt.t[i], 1), xt.x[window])
    end
end
```
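For anyone who wants to try this without the transducer table, here is a hedged, self-contained variant on synthetic data (wavespeed_vec, the times, and the positions are mine, not from the experiment):

```julia
# stencil from the top of the thread, repeated so this runs on its own
function stencil(x::AbstractVector{<:Real}, x₀::Real, m::Integer)
    ℓ = 0:length(x)-1
    m in ℓ || throw(ArgumentError("order $m ∉ $ℓ"))
    A = @. (x' - x₀)^ℓ / factorial(ℓ)
    return A \ (ℓ .== m)
end

# Hypothetical vector-based variant of wavespeed: t = arrival times,
# x = transducer positions, N = half-width of the stencil window.
function wavespeed_vec(t, x, N=1)
    map(eachindex(t)) do i
        window = max(firstindex(t), i - N):min(lastindex(t), i + N)
        # dx/dt at t[i]: first-derivative weights in t, applied to x
        mapreduce(*, +, stencil(t[window], t[i], 1), x[window])
    end
end

t = [0.0, 0.3, 1.0, 1.4, 2.0]    # uneven arrival times
x = 3.0 .* t                     # positions for a wave moving at speed 3
speeds = wavespeed_vec(t, x)     # ≈ [3.0, 3.0, 3.0, 3.0, 3.0]
```

Since x is exactly linear in t, even the 2-point windows at the ends recover the speed to roundoff.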

Cool! But as @tobydriscoll pointed out above, my code is susceptible to roundoff errors in floating-point calculations (probably because my matrix A is badly conditioned), especially as the number of points grows. DiffEqOperators.calculate_weights is better-behaved and otherwise equivalent, so I would tend to recommend it in practice (though I don’t know whether it handles Measurement values?), especially for window sizes larger than a few points.

For measurements where the inherent uncertainty (we’re talking about an eyeball and a tape measure here) is many orders of magnitude larger than floating-point roundoff errors, I’ll take your simple, Measurement-compatible approach over adding a dependency with an active deprecation warning.

Right. Probably it should be fine in that context up to about 20 points, for which the condition number of my A is about 10^8 (for centered differences) and so it should still be accurate to half of double precision. Beyond that the factorial function will throw an integer-overflow error (at least, as the code is currently written), so at least it will fail loudly.
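That failure mode is easy to check: Int64 factorials overflow past 20!, and Julia throws rather than silently wrapping around:

```julia
factorial(20)  # == 2432902008176640000, the largest factorial fitting in Int64

overflowed = try
    factorial(21)  # does not fit in Int64...
    false
catch e
    e isa OverflowError  # ...so it throws instead of returning garbage
end
# (factorial(big(21)) would succeed exactly via BigInt, if one ever
# wanted stencils that large despite the conditioning)
```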

Can I ask what is meant here by “fusing the nonlinearity in and doing a straight flat loop”?
I’m genuinely interested in this but my knowledge of numerical integration is limited to the finite difference stuff you learn in introductory uni courses