How to implement Lyons’ Differentiator

Hello,
I am trying to implement a discrete-time differentiator, Lyons’ differentiator type two; see: Design of a Discrete-Time Differentiator | Wireless Pi

It has five coefficients. How would you implement that?

This is my implementation of the simplest one, the first-difference differentiator:

using Statistics  # for mean

# 1. create noise
# 2. calculate derivative
# 3. multiply them and calculate the mean

N = 1000000
offset = 0.7
Ω = randn(N) .+ offset
dΩ = zeros(N)
P = zeros(N)
for (i, ω) in pairs(Ω)
    if i > 1
        dΩ[i] = Ω[i] - Ω[i-1]
    end
    P[i] = ω * dΩ[i]
end
println(mean(P), ", ", mean(Ω), ", ", mean(dΩ))

Not an answer, but I managed to implement the central difference filter:

using Statistics  # for mean

# 1. create noise
# 2. calculate derivative
# 3. multiply them and calculate the mean

N = 1000000
offset = 0.7
Ω = randn(N) .+ offset
dΩ = zeros(N)
P = zeros(N)
for (i, ω) in pairs(Ω)
    if i > 2
        # central difference filter
        dΩ[i] = 0.5 * (Ω[i] - Ω[i-2])
    end
    if i > 1
        # delay omega by one time step to line up with the filter delay
        P[i] = Ω[i-1] * dΩ[i]
    end
end
println(mean(P), ", ", mean(Ω), ", ", mean(dΩ))
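As a quick side-by-side check (a sketch, not from the thread), the magnitude responses of both differentiators can be evaluated directly from their coefficient vectors; the ideal differentiator has |H(ω)| = ω:

```julia
# Evaluate |H(ω)| = |Σ_k b[k+1] e^{-jωk}| for an FIR coefficient vector b.
H(b, ω) = sum(b[k+1] * cis(-ω * k) for k in 0:length(b)-1)

ω = π / 2  # ideal response here would be π/2 ≈ 1.571
abs(H([1.0, -1.0], ω))       # first difference: 2|sin(ω/2)| ≈ 1.414
abs(H([0.5, 0.0, -0.5], ω))  # central difference: |sin(ω)| = 1.0
```

The central difference already droops noticeably at ω = π/2 and goes to zero at Nyquist, which is why longer designs like Lyons’ are interesting.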

Isn’t this a linear filter? Wouldn’t it then suffice to call DSP.jl’s `filt` with the corresponding coefficient vectors? The article provides the coefficients for you.
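For concreteness, a minimal sketch of what that could look like with DSP.jl (the coefficients are the ones quoted later in this thread from the linked article; `filt(b, a, x)` applies the filter with numerator `b` and denominator `a`, so `a = 1` gives a plain FIR filter):

```julia
using DSP, Statistics

# Lyons' length-5 type-two differentiator coefficients (from the article)
b = [-3/16, 31/32, 0.0, -31/32, 3/16]

Ω = randn(1_000_000) .+ 0.7
dΩ = filt(b, 1, Ω)  # FIR filtering of Ω with coefficients b
println(mean(dΩ))
```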


The following code works:

# compare different differentiators

function first_difference(Ω)
    dΩ = zeros(length(Ω))
    for i in eachindex(Ω)
        if i > 1
            # first difference filter
            dΩ[i] = Ω[i] - Ω[i-1]
        end
    end
    dΩ
end

function central_difference(Ω)
    dΩ = zeros(length(Ω))
    for i in eachindex(Ω)
        if i > 2
            # central difference filter
            dΩ[i] = 0.5 * (Ω[i] - Ω[i-2])
        end
    end
    dΩ
end

function lyons2(Ω)
    dΩ = zeros(length(Ω))
    for i in eachindex(Ω)
        if i > 4
            # Lyons differentiator of length five
            dΩ[i] = -3/16*Ω[i] + 31/32*Ω[i-1] - 31/32*Ω[i-3] + 3/16*Ω[i-4]
        end
    end
    dΩ
end

using PyPlot  # plot/grid(true) assume PyPlot here

N = 14
Ω = zeros(N)
Ω[N ÷ 2] = 1  # unit impulse
dΩ1 = first_difference(Ω)
dΩ2 = central_difference(Ω)
dΩ3 = lyons2(Ω)
plot((1:N) .- N ÷ 2, Ω)
# plot((1:N) .- N ÷ 2, dΩ1)
plot((1:N) .- (N ÷ 2 + 1), dΩ2)  # shift to compensate the one-sample delay
plot((1:N) .- (N ÷ 2 + 2), dΩ3)  # shift to compensate the two-sample delay
grid(true)

I mean, if the solution is this simple, I’d rather not pull in an extra package.
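If avoiding dependencies is the goal, one generic FIR convolution could also replace the three hand-rolled loops; a small sketch (my wording, coefficients as in the post above):

```julia
# Generic causal FIR filter, no packages needed:
# y[i] = Σ_k b[k] * x[i-k+1], truncated at the start of the signal.
fir(b, x) = [sum(b[k] * x[i-k+1] for k in 1:min(length(b), i)) for i in eachindex(x)]

b = [-3/16, 31/32, 0.0, -31/32, 3/16]  # Lyons' length-5 differentiator
fir(b, [1.0, 0, 0, 0, 0])  # impulse response recovers b
```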


Maybe off-topic, but I don’t see why these differentiator filters are so great. They are not minimax-optimal (unlike Parks–McClellan designs), I think? Lyons doesn’t really explain how he came up with them or by what criterion they are optimal.

For comparison, if you simply generate 5-point and 7-point finite-difference stencils using stencil(-2:2, 0//1) and stencil(-3:3, 0//1) from this post, respectively, then I get a frequency response that is pretty good compared to the ones Lyons proposes. (Though for broadband you should probably do a whole-band optimization like Parks–McClellan, whereas stencil is designed to be much more accurate as \omega \to 0.)
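For readers who don’t want to chase the link: such first-derivative stencils can be generated by solving a small Vandermonde system in exact rational arithmetic. A sketch of the idea (`fd_weights` is my own name for it, not the `stencil` function from that post):

```julia
# Weights c with Σ_j c[j] * p[j]^k = (k == 1 ? 1 : 0) for k = 0 .. n-1,
# i.e. the stencil is exact for the first derivative at 0 of every
# polynomial of degree < n.
function fd_weights(pts)
    n = length(pts)
    V = [Rational{Int}(p)^k for k in 0:n-1, p in pts]
    rhs = zeros(Rational{Int}, n)
    rhs[2] = 1  # first derivative
    V \ rhs
end

fd_weights(-2:2)  # → [1//12, -2//3, 0//1, 2//3, -1//12]
```

The 5-point result matches the classical central-difference weights; note the denominators (12, 3) are not powers of two.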

Am I missing something? What is special about the filter coefficients proposed by Lyons?

I guess they could be viewed as slightly better than the high-order stencil in a min-max (L_\infty) sense over certain bandwidths, e.g. if you look at the right plot and ask over what bandwidth the |error| is < 0.1 it is a bit wider for lyons5 than stencil5. But if you care about L_\infty error over such a wide bandwidth, why not use Parks–McClellan?

Lyons mentions that his filter “can be implemented, if need be, without performing numerical multiplications”, by which I guess he means that the denominators are powers of two so that they can be implemented by bit shifts if you are using fixed-point arithmetic. (This isn’t true of the high-order finite-difference stencils.) Is that the key advantage? If so, it doesn’t apply to you since you are using floating-point calculations.
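To illustrate that point (my sketch, assuming truncating fixed-point `Int` arithmetic): because 3 = 2 + 1 and 31 = 32 − 1, and the denominators 16 and 32 are powers of two, every multiply in Lyons’ filter reduces to shifts and adds:

```julia
# Multiplier-less versions of the coefficient products x*3/16 and x*31/32,
# using only shifts and adds (>> is an arithmetic shift, so this truncates).
mul_3_16(x::Int)  = ((x << 1) + x) >> 4  # x * 3 / 16,  since 3 = 2 + 1
mul_31_32(x::Int) = ((x << 5) - x) >> 5  # x * 31 / 32, since 31 = 32 - 1

# one output sample of the length-5 differentiator
lyons_step(x0, x1, x3, x4) = -mul_3_16(x0) + mul_31_32(x1) - mul_31_32(x3) + mul_3_16(x4)

mul_3_16(32), mul_31_32(64)  # → (6, 62)
```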


Yes. Many DSP practitioners are interested in computationally efficient algorithms, which often means avoiding multiplications and square roots (and anything more complex than that). The reason is that these algorithms are often implemented on simple embedded microcontrollers or FPGAs that don’t even have a floating-point unit, and even when they do, it is often slow.
