Best method for calculating the second derivative of a 1D array

Essentially, I have 1D arrays of eigenvalues, as shown in the plot above, and I want to find the 2nd derivative of each energy band with respect to ϵ.

My current method uses Dierckx.jl.

In my code, the eigenvalues are stored in a 3D array with the structure [energy band, value of ϵ, value of B]; at the moment that is a 5 × 520 × 520 array. Hence the loops in the code below:

using Dierckx

function QuantumCapacitance(E, ϵ)
    Q_C = Array{Float64, 3}(undef, 5, size(E, 2), size(E, 3)) # Output array
    for i = 1:size(E, 3)        # Magnetic field loop
        for j = 1:5             # Eigenvalue (band) loop
            itp = Spline1D(ϵ, E[j, :, i], k=2)   # Spline fit used for the 2nd derivative
            for k = 1:size(E, 2)                 # Detuning, ϵ, loop
                Q_C[j, k, i] = derivative(itp, ϵ[k], nu=2)   # 2nd derivative at ϵ[k]
            end
        end
    end
    return Q_C
end

This method, however, takes up a lot of time in my code. Is there a more efficient way to do this? Are there any finite-difference packages that would work better here? Would Interpolations.jl be quicker?

My reason for not using that package is that earlier in my code I had to perform a cubic spline fit, and Dierckx was much more accurate.

Thank you :slight_smile:

Interpolations.jl includes a gradient function, which may work (not all interpolants supplied by that package support gradient computation) and may be faster.
However, I notice your k loop over the values of ϵ: Dierckx.jl will take a vector of values at which to compute the derivative, so calling derivative(itp, ϵ, nu=2) may be faster, and you can then assign with Q_C[j, :, i] .= to avoid allocations (I think? since derivative will return an array).
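Something along these lines (a minimal sketch only, reusing the array layout from the original post, untested):

using Dierckx

function QuantumCapacitance_vec(E, ϵ)
    Q_C = Array{Float64, 3}(undef, 5, size(E, 2), size(E, 3))
    for i = 1:size(E, 3), j = 1:5
        itp = Spline1D(ϵ, E[j, :, i], k=2)
        Q_C[j, :, i] .= derivative(itp, ϵ, nu=2)   # all ϵ values in one call
    end
    return Q_C
end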


A cubic spline's 2nd derivative is only second-order accurate, so after differentiating twice not much accuracy is left…
Your functions are smooth, so you can use higher-order interpolants.
But raising the order usually increases the effort and often runs into numerical issues.
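For instance, Dierckx itself supports splines up to fifth order, so a quintic fit (k=5) is a cheap thing to try first. A minimal sketch with stand-in data (the variable names are illustrative):

using Dierckx

ϵ = collect(range(-1, 1, length=521))   # stand-in detuning grid
y = cos.(3 .* ϵ)                        # stand-in for one band E[j, :, i]
itp = Spline1D(ϵ, y, k=5)               # quintic spline; Dierckx allows k = 1 to 5
d2 = derivative(itp, ϵ, nu=2)           # 2nd derivative at every grid point in one call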

One method without such numerical issues is barycentric interpolation.
There are a couple of Julia packages available, though not all of them have working derivatives.

You shouldn’t use high-order polynomial interpolation (barycentric or not) from equally spaced points, because you can run into the Runge phenomenon.

(Equally spaced points are fine if you do a least-squares fit to a much lower-degree polynomial.)
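A minimal sketch of the least-squares idea, using Polynomials.jl with stand-in data and an illustrative degree:

using Polynomials

x = range(-1, 1, length=101)      # equally spaced points
y = @. exp(-5 * x^2)              # some smooth band-like test data
p = Polynomials.fit(x, y, 8)      # least-squares fit to a degree-8 polynomial
d2p = derivative(p, 2)            # exact 2nd derivative of the fitted polynomial
d2 = d2p.(x)                      # evaluated back on the grid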


True for the basic barycentric method of Berrut et al.
Two variants address these issues:

  • Barycentric interpolation via AAA
  • Rational barycentric interpolation,

both in macd/BaryRational.jl on GitHub (barycentric rational approximation and interpolation in one dimension).

NOTE: Currently, the derivatives are not working.

And right, a local fit should be tried too, YMMV.


What about simple finite differences?

# 2nd derivative on a uniform grid using 5-point finite-difference stencils:
# one-sided stencils at the boundaries, a central stencil in the interior.
function derivative2(x, y, i)
    Nx = length(x)
    @assert Nx >= 5
    dx = x[2] - x[1]   # assumes uniform spacing
    if i == 1
        dy = 35*y[i] - 104*y[i+1] + 114*y[i+2] - 56*y[i+3] + 11*y[i+4]
    elseif i == 2
        dy = 11*y[i-1] - 20*y[i] + 6*y[i+1] + 4*y[i+2] - y[i+3]
    elseif i == Nx-1
        dy = -y[i-3] + 4*y[i-2] + 6*y[i-1] - 20*y[i] + 11*y[i+1]
    elseif i == Nx
        dy = 11*y[i-4] - 56*y[i-3] + 114*y[i-2] - 104*y[i-1] + 35*y[i]
    else
        dy = -y[i-2] + 16*y[i-1] - 30*y[i] + 16*y[i+1] - y[i+2]
    end
    dy = dy / (12 * dx^2)
    return dy
end

x = range(-10, 10, length=101)
y = @. x^2

dy = [derivative2(x, y, i) for i=1:length(x)]

@assert isapprox(dy, 2*ones(length(x)))

Thank you for your reply. The code is accurate enough for my use case; what I am now looking to do is optimise for speed without sacrificing too much accuracy, since I go on to iterate over a large range of values. This currently takes hours to run, and the flexibility to change parameters without waiting so long for results would be great.

Thank you very much for your reply. I will try implementing this and see how it affects the speed and accuracy of my results :slight_smile:


To speed up the above code:

  • inline the finite-difference function in the loop,
  • move the invariant calculations out of the loop, and
  • peel off the first two and last two loop iterations to get rid of the if statements.

I am quite new to Julia and programming in general, so I was wondering if I have implemented your suggestions correctly. Here is my new code:

# inline finite difference function
@inline function new_derivative2!(y, i)
    # Removed if statements
    dy = -y[i-2] + 16*y[i-1] - 30*y[i] + 16*y[i+1] - y[i+2]
    dy = dy / (12 * 0.001^2)
    return dy
end

function new_QC!(E, ϵ)
    Q_C = Array{Float64, 3}(undef, 5, length(ϵ), length(ϵ)) # Define array
    dx = 0.001  # invariant calculation moved out of the loop
    for k = 1:5
        for j = 1:length(ϵ)
            Q_C[k,1,j] = (35*E[k,1,j] - 104*E[k,2,j] + 114*E[k,3,j] - 56*E[k,4,j] + 11*E[k,5,j]) / (12 * dx^2)
            Q_C[k,2,j] = (11*E[k,1,j] - 20*E[k,2,j] + 6*E[k,3,j] + 4*E[k,4,j] - E[k,5,j]) / (12 * dx^2)
            for i = 3:(length(ϵ)-2)
                Q_C[k,i,j] = new_derivative2!(E[k,:,j], i)
            end
            Q_C[k,520,j] = (-E[k,517,j] + 4*E[k,518,j] + 6*E[k,519,j] - 20*E[k,520,j] + 11*E[k,521,j]) / (12 * dx^2)
            Q_C[k,521,j] = (11*E[k,517,j] - 56*E[k,518,j] + 114*E[k,519,j] - 104*E[k,520,j] + 35*E[k,521,j]) / (12 * dx^2)
        end
    end
    return Q_C
end

Benchmarks of old and new:

Old

julia> @benchmark QC!(E, ϵ)
BenchmarkTools.Trial: 6 samples with 1 evaluation.
 Range (min … max):  925.521 ms …   1.122 s  ┊ GC (min … max): 25.53% … 24.48%
 Time  (median):     977.854 ms              ┊ GC (median):    25.28%
 Time  (mean ± σ):   995.556 ms ± 77.650 ms  ┊ GC (mean ± σ):  25.16% ±  0.41%

  █ █ █                      █      █                        █
  █▁█▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
  926 ms          Histogram: frequency by time          1.12 s <

 Memory estimate: 5.51 GiB, allocs estimate: 1357207.

New

julia> @benchmark new_QC!(E,ϵ)
BenchmarkTools.Trial: 6 samples with 1 evaluation.
 Range (min … max):  894.819 ms …   1.077 s  ┊ GC (min … max): 25.53% … 24.80%
 Time  (median):     951.854 ms              ┊ GC (median):    25.24%
 Time  (mean ± σ):   978.050 ms ± 72.613 ms  ┊ GC (mean ± σ):  25.19% ±  0.28%

  █            █  █    █                               █     █
  █▁▁▁▁▁▁▁▁▁▁▁▁█▁▁█▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁█ ▁
  895 ms          Histogram: frequency by time          1.08 s <

 Memory estimate: 5.47 GiB, allocs estimate: 1346787.

Thank you


One last thing: dy = dy / (12 * 0.001^2) will be a good bit slower than dy = dy * (1 / (12 * 0.001^2)), since the second version uses a multiplication by a constant while the first uses a division.
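For reference, here is one way the suggestions in this thread could be combined (a sketch only, not the poster's code; it assumes a uniformly spaced ϵ grid, takes all sizes from the arrays, and uses a view so the E[k, :, j] slice is not copied on every inner iteration, which is likely where most of the 5.47 GiB of allocations comes from):

# Illustrative sketch combining the suggestions above (not the poster's code)
function QC_sketch(E, ϵ)
    nb, nϵ, nB = size(E)               # bands, detuning points, field points
    dx = ϵ[2] - ϵ[1]                   # assumes a uniform ϵ grid
    c = 1 / (12 * dx^2)                # precomputed constant: multiply instead of divide
    Q_C = Array{Float64, 3}(undef, nb, nϵ, nB)
    @inbounds for j = 1:nB, k = 1:nb
        y = @view E[k, :, j]           # view, not a copy, of the slice
        Q_C[k, 1, j] = (35*y[1] - 104*y[2] + 114*y[3] - 56*y[4] + 11*y[5]) * c
        Q_C[k, 2, j] = (11*y[1] - 20*y[2] + 6*y[3] + 4*y[4] - y[5]) * c
        for i = 3:nϵ-2
            Q_C[k, i, j] = (-y[i-2] + 16*y[i-1] - 30*y[i] + 16*y[i+1] - y[i+2]) * c
        end
        Q_C[k, nϵ-1, j] = (-y[nϵ-4] + 4*y[nϵ-3] + 6*y[nϵ-2] - 20*y[nϵ-1] + 11*y[nϵ]) * c
        Q_C[k, nϵ, j] = (11*y[nϵ-4] - 56*y[nϵ-3] + 114*y[nϵ-2] - 104*y[nϵ-1] + 35*y[nϵ]) * c
    end
    return Q_C
end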
