I am relatively new to Julia, but I have a lot of experience with modern Fortran. I am minimising an objective function, and I wish to use `ForwardDiff.jl` to calculate the derivatives. The issue is that there is one calculation that I need to perform with greater precision than I can obtain with `Float64`. Ignoring the issue of precision, this corresponds to the function `myfunction` below.
```julia
function myfunction(j, v)
    n = length(v)
    T = eltype(v)                        # generic, so ForwardDiff.Dual passes through
    expdv_1 = Matrix{T}(undef, n, n)
    dprod = Vector{T}(undef, n)
    expv = exp.(v)
    for ix = 1:n
        # Column ix holds exp(v[k] - v[ix]) - 1 for all k; the (zero)
        # diagonal entry is replaced by -1 before taking the product.
        expdv_1[:, ix] = expv ./ expv[ix] .- 1.0
        expdv_1[ix, ix] = -1.0
        dprod[ix] = 1.0 / prod(expdv_1[:, ix])
    end
    temp = (v .- v[j]) .* dprod ./ expdv_1[:, j]
    temp[j] = dprod[j]
    return abs(sum(temp))
end
```
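For context, this is roughly how I intend to call it (the values are purely illustrative; `myfunction` is generic in the element type, so `ForwardDiff.Dual`s pass through it):

```julia
using ForwardDiff

v = [0.3, 0.3 + 1e-6, 0.7]    # nearby entries are exactly where precision degrades
g = ForwardDiff.gradient(x -> myfunction(1, x), v)
```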
Note that numerical precision is an issue whenever elements of the vector `v` are close to each other. We can assume they are not identical. For the purpose of simply calculating the above function with greater accuracy (and not worrying about the derivatives), I could use `MultiFloats.jl`, for example. Here I would convert the input vector `v` to a `Float64x4` vector (say), perform the calculations, and then return a `Float64`. This is `myfunction2`.
```julia
using MultiFloats

function myfunction2(j, v)
    n = length(v)
    expdv_1 = Matrix{Float64x4}(undef, n, n)
    dprod = Vector{Float64x4}(undef, n)
    vT = Float64x4.(v)                   # promote the input to extended precision
    expv = exp.(vT)
    for ix = 1:n
        expdv_1[:, ix] = expv ./ expv[ix] .- 1.0
        expdv_1[ix, ix] = -1.0
        dprod[ix] = 1.0 / prod(expdv_1[:, ix])
    end
    temp = (vT .- vT[j]) .* dprod ./ expdv_1[:, j]
    temp[j] = dprod[j]
    return Float64(abs(sum(temp)))       # demote the result back to Float64
end
```
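As a quick sanity check (values purely illustrative), the two can be compared on a vector with nearby entries:

```julia
v = [0.3, 0.3 + 1e-6, 0.7]
myfunction(1, v)     # Float64 throughout: suffers from cancellation
myfunction2(1, v)    # Float64x4 internally: the more trustworthy value
```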
The problem is that I don't know how to do a similar exercise with `ForwardDiff.Dual`. My idea would be to convert the `value` and `partials` types in the `ForwardDiff.Dual` vector to `Float64x4` in order to perform the calculations, and then return a `ForwardDiff.Dual` whose `value` and `partials` types correspond to the respective input types. However, I don't know how to achieve this (or indeed, whether this is a sensible strategy).
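To make the question concrete, the following untested sketch is my best guess at what such a conversion might look like (the helper names `to_x4` and `from_x4` are my own invention, and I am not sure this is the intended way to construct a `Dual`):

```julia
using ForwardDiff, MultiFloats
using ForwardDiff: Dual, Partials, value, partials

# Hypothetical helpers (untested): promote a Dual's value and partials to
# Float64x4, and demote them back afterwards.
to_x4(d::Dual{Tag}) where {Tag} =
    Dual{Tag}(Float64x4(value(d)), Partials(Float64x4.(Tuple(partials(d)))))

from_x4(d::Dual{Tag}) where {Tag} =
    Dual{Tag}(Float64(value(d)), Partials(Float64.(Tuple(partials(d)))))

# Presumably one would then write something along these lines:
# myfunction_hi(j, v) = from_x4(myfunction(j, to_x4.(v)))
```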
Thanks in advance.