My questions are at the bottom of this post, but the background leading up to them is developed throughout, which is perhaps a bit pedantic.
I’m writing a p-method FEM code. Instead of explicitly coding high-order shape functions and Jacobian matrices, I’m writing basis function routines and utilizing auto-differentiation via ForwardDiff. Here’s my Lagrange basis function routine.
function lagrangeBasis(k, variate)
    node = range(-1, 1, length=k+1)
    L = Array{Float64,1}(undef, k+1) ## Question regards this line
    for j = 1:k+1
        L[j] = 1.0
        for m = 1:k+1
            if j != m
                L[j] *= (variate - node[m]) / (node[j] - node[m])
            end
        end
    end
    return L
end
Calling this function returns an array of Float64 values, one for each basis function; in this case, the 9 basis functions associated with the degree-8 Lagrange basis:
julia> lagrangeBasis(8,0.)
9-element Array{Float64,1}:
0.0
-0.0
0.0
-0.0
1.0
0.0
-0.0
0.0
-0.0
And benchmarking this code is giving me acceptable performance:
julia> BenchmarkTools.@btime(lagrangeBasis(8,$rand()));
417.225 ns (1 allocation: 160 bytes)
The problem arises when I try to evaluate derivatives (or Jacobians in the multivariate case) using the ForwardDiff package:
julia> ForwardDiff.derivative(x->lagrangeBasis(8,x),rand())
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##20#21")),Float64},Float64,1})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:194
Float64(::T<:Number) where T<:Number at boot.jl:741
Float64(::Int8) at float.jl:60
...
Stacktrace:
[1] convert(::Type{Float64}, ::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##20#21")),Float64},Float64,1}) at .\number.jl:7
[2] setindex!(::Array{Float64,1}, ::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##20#21")),Float64},Float64,1}, ::Int64) at .\array.jl:767
[3] lagrangeBasis(::Int64, ::ForwardDiff.Dual{ForwardDiff.Tag{getfield(Main, Symbol("##20#21")),Float64},Float64,1}) at .\REPL[23]:8
[4] #20 at .\REPL[29]:1 [inlined]
[5] derivative(::getfield(Main, Symbol("##20#21")), ::Float64) at C:\Users\gregj\.julia\packages\ForwardDiff\N0wMF\src\derivative.jl:14
[6] top-level scope at none:0
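As far as I can tell from the stack trace, ForwardDiff.derivative evaluates the function with a ForwardDiff.Dual argument rather than a Float64, and the failure happens when that Dual gets stored into the Vector{Float64} allocated on the line I flagged. Here's a minimal sketch (my own reduction, not part of the FEM code) that reproduces the same MethodError:
using ForwardDiff

d = ForwardDiff.Dual(0.5, 1.0)    # a Dual number like the one ForwardDiff passes to lagrangeBasis
v = Vector{Float64}(undef, 1)
v[1] = d                          # MethodError: no method matching Float64(::ForwardDiff.Dual{...})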
I’ve found a workaround, which is to replace the array initialization in the basis function routine:
function lagrangeBasis(k, variate)
    node = range(-1, 1, length=k+1)
    L = Array{Any,1}(undef, k+1) ## MODIFIED
    for j = 1:k+1
        L[j] = 1.0
        for m = 1:k+1
            if j != m
                L[j] *= (variate - node[m]) / (node[j] - node[m])
            end
        end
    end
    return L
end
Now evaluating this routine returns an array of Any values:
julia> lagrangeBasis(8,0.)
9-element Array{Any,1}:
0.0
-0.0
0.0
-0.0
1.0
0.0
-0.0
0.0
-0.0
And it takes considerably longer (~6x) to evaluate:
julia> BenchmarkTools.@btime(lagrangeBasis(8,$rand()));
2.517 μs (145 allocations: 2.41 KiB)
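I believe the slowdown is just the type instability introduced by the Vector{Any}: every element written into it gets boxed, and the multiply-accumulate in the inner loop can no longer be inferred. A quick way to see this (output omitted; this is just how I checked, not part of the benchmarks above):
using InteractiveUtils                     # for @code_warntype outside the REPL
@code_warntype lagrangeBasis(8, rand())    # L and the return value now show up as Vector{Any}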
But at least now I can compute the derivatives / Jacobians:
julia> ForwardDiff.derivative(x->lagrangeBasis(8,x), 0.)
9-element Array{Float64,1}:
0.014285714285714285
-0.15238095238095237
0.7999999999999999
-3.2
-1.1102230246251565e-15
3.2
-0.7999999999999998
0.15238095238095237
-0.014285714285714284
From these examples, I have two questions:
- Is there a simple & performant way to use the ForwardDiff package when the evaluated function returns an array of Float64, such as in the first lagrangeBasis function I provided?
- Maybe a quick solution to the first question is to use multiple dispatch on the lagrangeBasis function: write one version that gets called when simply evaluating the function, and another version that gets called when ForwardDiff is evaluating the function:
function lagrangeBasis(k, variate::Float64)
    node = range(-1, 1, length=k+1)
    L = Array{Float64,1}(undef, k+1)
    ...
end

function lagrangeBasis(k, variate::ForwardDiffType) # <- ??????
    node = range(-1, 1, length=k+1)
    L = Array{Any,1}(undef, k+1)
    ...
end
What type would I need to pass into lagrangeBasis to make this second case work?