I am not sure whether this is relevant in your case, but I recently ran into the following problem: when the input is an integer and somewhere down the line there is a conversion to float, the result will be a `Float16` instead of a `Float64` (reported in JuliaDiff/ForwardDiff.jl issue #362, "`ForwardDiff.derivative(float, 1)` returns `Float16(1.0)`").
Compare:
```
julia> using ForwardDiff

julia> ForwardDiff.derivative(float, 1)
Float16(1.0)

julia> ForwardDiff.derivative(float, 1.0)
1.0
```
I ran into this when trying to write my own convenience wrapper for the derivative of a complex function via the Jacobian, based on a suggestion on this Discourse, and then wondered why seemingly simpler test inputs like `1 + 1im` were giving much less accurate results than those involving e.g. `π`.
The simplest “real-world” case I found where this becomes a problem was when using the function `sincos` (notice the problem won’t appear when using the plain `sin` function):
```
julia> using ForwardDiff

julia> mysin(x) = sincos(x)[1]
mysin (generic function with 1 method)

julia> ForwardDiff.derivative(mysin, 1)
Float16(0.5405)

julia> ForwardDiff.derivative(mysin, 1.0)
0.5403023058681398
```
My workaround for the time being was to simply convert every input to `Float64` beforehand.
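Concretely, something along these lines (a sketch; the wrapper name `d64` is just for illustration):

```julia
using ForwardDiff

mysin(x) = sincos(x)[1]

# Sketch of the workaround: force the input to Float64 before differentiating,
# so an integer input can no longer end up promoted to Float16 downstream.
d64(f, x) = ForwardDiff.derivative(f, Float64(x))

d64(mysin, 1)   # ≈ 0.5403023058681398, matching ForwardDiff.derivative(mysin, 1.0)
```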