Multi-threading and Dierckx.jl & Interpolations.jl and gradients

I noticed that if I use Dierckx.jl (kbarbary/Dierckx.jl on GitHub; 1-D splines, k=3, with derivatives) in a loop with the Threads.@threads macro, I get partially corrupted results.

If I remove the Threads.@threads macro results look fine.

Is this expected? I’m not sure how a wrapped library works when called from multiple threads.

Either way (bug or expected failure), would Interpolations.jl be expected to be more friendly in this respect (i.e. multi-threading)?
Assuming I can switch to Interpolations.jl, I don’t seem to be able to get gradients of a cubic interpolant at several locations by passing a 1-D vector of coordinates directly; I have to pass one scalar at a time. I’m not sure yet whether there is a penalty if I use a comprehension, [gradient(interpolant, x) for x in x_vec], to achieve the same functionality, or whether I’m just using the wrong syntax; the documentation is not explicit about this.
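For concreteness, a minimal sketch of the comprehension approach (the grid and sample data here are made up for illustration, not from my actual code):

```julia
using Interpolations

xs = 0.0:0.1:2.0
itp = CubicSplineInterpolation(xs, sin.(xs))

x_vec = [0.5, 1.0, 1.5]

# gradient at a single point returns a 1-element SVector; take its first entry
g1 = Interpolations.gradient(itp, 1.0)[1]

# comprehension over a vector of points, one scalar call per point
grads = [Interpolations.gradient(itp, x)[1] for x in x_vec]
```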

OK, the Interpolations.jl package seems fine with Threads.@threads.
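A minimal sketch of the pattern that now works for me (with made-up data standing in for my actual profiles):

```julia
using Interpolations

xs = 0.0:0.01:1.0
itp = CubicSplineInterpolation(xs, sin.(2π .* xs))

pts = rand(10_000) .* 0.9 .+ 0.05   # evaluation points inside the domain
out = zeros(length(pts))

Threads.@threads for i in eachindex(pts)
    # each iteration only reads the interpolant and writes its own slot,
    # so there is no shared mutable state between threads
    out[i] = itp(pts[i])
end
```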

Single-thread performance is comparable with Dierckx.jl, though I’m not sure it’s an apples-to-apples comparison.

Dierckx.jl: Spline1D(x_prof_samples, profiles[sense][1,:,tilt,azimuth], k=3)
Interpolations.jl: CubicSplineInterpolation(x_range, profiles[sense][1,:,tilt,azimuth])

But for the gradient I have to do something like:
jac_model_y(x) = [-Interpolations.gradient(interp_model_y, xx)[1] for xx in (xg .- x)]
(where xg is a constant vector and x is a scalar shift),
because Interpolations.gradient returns a 1-element SVector.

This seems very bizarre.
Is there a way to pass xg .- x directly and get back a vector of the same size as the input, with the gradient evaluated at each point of xg .- x?
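One way to avoid writing the comprehension by hand is to broadcast gradient over the shifted points (a sketch with hypothetical stand-ins for xg and interp_model_y, not my actual data):

```julia
using Interpolations

# hypothetical stand-ins: xg is a fixed grid of evaluation points,
# interp_model_y a cubic interpolant of some sampled model
xs = 0.0:0.1:3.0
interp_model_y = CubicSplineInterpolation(xs, sin.(xs))
xg = collect(0.5:0.1:2.0)

# Ref(...) keeps broadcasting from iterating over the interpolant itself;
# first.(...) unwraps each 1-element SVector into a plain Float64
jac_model_y(x) = -first.(Interpolations.gradient.(Ref(interp_model_y), xg .- x))

J = jac_model_y(-0.2)   # one entry per point in xg
```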

OK, next attempt, with specific code snippet and in-place modification of the pre-allocated output:

using Interpolations
xs = 0.5:0.1:5
A = [log(x) for x in xs]
interp_cubic = CubicSplineInterpolation(xs, A)
g = zeros(1)

you get

julia> Interpolations.gradient!(g,interp_cubic,1.0)
1-element Vector{Float64}:

julia> g
1-element Vector{Float64}:

so far so good.

Then I tried

g = zeros(2)

and I get

julia> Interpolations.gradient!(g,interp_cubic,[1.0,1.05])
ERROR: StackOverflowError:
 [1] gradient!(dest::Vector{Float64}, itp::Interpolations.Extrapolation{Float64, 1, ScaledInterpolation{Float64, 1, Interpolations.BSplineInterpolation{Float64, 1, OffsetArrays.OffsetVector{Float64, Vector{Float64}}, BSpline{Cubic{Line{OnGrid}}}, Tuple{Base.OneTo{Int64}}}, BSpline{Cubic{Line{OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}}}, BSpline{Cubic{Line{OnGrid}}}, Throw{Nothing}}, x::Vector{Float64}) (repeats 79984 times)
   @ Interpolations ~/.julia/packages/Interpolations/QLbbu/src/Interpolations.jl:423

Not sure if I’m getting closer… any suggestion would be appreciated.
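In the meantime, a workaround that keeps a pre-allocated output is to loop over the points with a 1-element gradient buffer (same setup as the snippet above):

```julia
using Interpolations

xs = 0.5:0.1:5
A = [log(x) for x in xs]
interp_cubic = CubicSplineInterpolation(xs, A)

pts = [1.0, 1.05]
out = zeros(length(pts))   # pre-allocated output, one slot per point
buf = zeros(1)             # in 1-D, gradient! expects a 1-element destination
for (i, p) in enumerate(pts)
    Interpolations.gradient!(buf, interp_cubic, p)
    out[i] = buf[1]
end
```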
@mkitti , any chance this is a bug?


Dierckx is not thread-safe, because of how temporary arrays are used. I suggested a PR but didn’t get a response. I have moved to Interpolations now.

A StackOverflowError definitely seems like a bug. Could you create a GitHub issue for this? It will be easier for the other collaborators and me to track it there.

Thank you for the confirmation @cortner .
I’m guessing you are referring to: Thread-safety · Issue #77 · kbarbary/Dierckx.jl · GitHub
It matches my experience: I started seeing corruption once I computed derivatives.

@mkitti Here it is: StackOverflowError in gradient computation · Issue #458 · JuliaMath/Interpolations.jl · GitHub
Feel free to update/reformat.