# Changing precision for a dual number calculation with ForwardDiff

I am relatively new to Julia, but have a lot of experience with modern Fortran. I am minimising an objective function, and I wish to use `ForwardDiff.jl` to calculate the derivatives. The issue is that there is one calculation that I need to perform with greater precision than `Float64` provides. Ignoring the precision issue, this calculation corresponds to the function `myfunction` below.

``````julia
function myfunction(j, v)
    n = length(v)
    T = eltype(v)

    expdv_1 = Matrix{T}(undef, n, n)
    dprod = Vector{T}(undef, n)
    expv = exp.(v)

    for ix = 1:n
        expdv_1[:, ix] = expv ./ expv[ix] .- 1.0
        expdv_1[ix, ix] = -1.0
        dprod[ix] = 1.0 / prod(expdv_1[:, ix])
    end

    temp = (v .- v[j]) .* dprod ./ expdv_1[:, j]
    temp[j] = dprod[j]
    return abs(sum(temp))
end
``````

Note that numerical precision is an issue whenever elements of the vector `v` are close to each other. We can assume they are not identical. If I only wanted to calculate the above function with greater accuracy (and not worry about the derivatives), I could use `MultiFloats.jl`, for example: convert the input vector `v` to a `Float64x4` vector (say), perform the calculations, and then return a `Float64`. This is `myfunction2`.

``````julia
function myfunction2(j, v)
    n = length(v)

    expdv_1 = Matrix{Float64x4}(undef, n, n)
    dprod = Vector{Float64x4}(undef, n)
    vT = Float64x4.(v)
    expv = exp.(vT)

    for ix = 1:n
        expdv_1[:, ix] = expv ./ expv[ix] .- 1.0
        expdv_1[ix, ix] = -1.0
        dprod[ix] = 1.0 / prod(expdv_1[:, ix])
    end

    temp = (vT .- vT[j]) .* dprod ./ expdv_1[:, j]
    temp[j] = dprod[j]
    return Float64(abs(sum(temp)))
end
``````

The problem is that I don’t know how to do a similar exercise using `ForwardDiff.Dual`. My idea is to convert the `value` and `partials` of each `ForwardDiff.Dual` in the input vector to `Float64x4`, perform the calculations at that precision, and then return `ForwardDiff.Dual` values whose `value` and `partials` types correspond to the respective input types. However, I don’t know how to achieve this (or indeed, whether it is a sensible strategy).
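For what it's worth, the widen/compute/narrow strategy described above can be sketched roughly as follows. This is only an illustration against a placeholder computation, not the real problem: `retype_dual` and `hiprec_kernel` are hypothetical helper names, and `BigFloat` stands in for `Float64x4` (MultiFloats.jl would be used analogously).

``````julia
using ForwardDiff

# Hypothetical helper: rebuild a Dual with its value and partials converted
# to a new scalar type W, keeping the same tag and number of partials.
retype_dual(d::ForwardDiff.Dual{Tag,V,N}, ::Type{W}) where {Tag,V,N,W} =
    ForwardDiff.Dual{Tag}(W(ForwardDiff.value(d)),
        ForwardDiff.Partials(ntuple(i -> W(ForwardDiff.partials(d, i)), N)))

# Stand-in for the precision-sensitive kernel: widen, compute, narrow back.
function hiprec_kernel(v::AbstractVector{<:ForwardDiff.Dual})
    V = ForwardDiff.valtype(eltype(v))   # original value type (e.g. Float64)
    w = retype_dual.(v, BigFloat)        # widen value and partials
    r = abs(sum(exp.(w)))                # placeholder for the sensitive computation
    return retype_dual(r, V)             # narrow back to the input value type
end
``````

Because `retype_dual` preserves the tag, the narrowed result can flow back into the surrounding `ForwardDiff.gradient` call unchanged.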

I don’t think you need your `myfunction2` at all. The type used by ForwardDiff is determined by the type of the point where you evaluate:

``````julia
julia> ForwardDiff.gradient(v -> myfunction(1, v), rand(Float32, 3))
3-element Vector{Float32}:
  0.16748047
 -0.07839966
 -0.08911133

julia> ForwardDiff.gradient(v -> myfunction(1, v), rand(Float64x4, 3))
3-element Vector{MultiFloat{Float64, 4}}:
  0.163543283215202351657594438896779077707314943497869746800370922035
 -0.08781795453575777274859370513376068613678922506666611033657165641427
 -0.07572532867944457890900073376301839157052571843120363646379951721556
``````

Just to avoid confusion, it is probably worth replacing the literal constants (`1.0`) in `myfunction` with `one(T)`. I suspect all these types define the correct promotions, but if not, that will save you a surprise.
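A minimal illustration of the idiom (a toy snippet, not the original function): a bare `1.0` literal is a `Float64`, so mixing it with a narrower type silently promotes the result, whereas `one(T)` stays in the working precision.

``````julia
# Literal 1.0 promotes Float32 arithmetic to Float64; one(T) does not.
T = Float32
x = T(2)
@assert (x - 1.0) isa Float64     # silent promotion to Float64
@assert (x - one(T)) isa Float32  # stays in the working precision
``````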

Thanks. To be clear, I only want to use the higher precision inside `myfunction`, not in the objective function that calls it (and other functions); this is the only place I need greater precision. Moreover, the optimisation library I am using requires that the optimisation variable vector is `Float64`.

Well, you do not need to write two functions to support different precisions; just call the one generic function with arguments of the desired type.
And if your solver supports only `Float64`, you cannot use a higher precision for the optimisation variables themselves.
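To sketch the point with a toy stand-in for `myfunction` (the function `f` below is illustrative only): the same generic code runs at whatever precision the argument's element type dictates, and the caller can convert the input up and the result back down.

``````julia
# One generic function, many precisions: the element type of v sets the
# working precision. BigFloat stands in for Float64x4 here.
f(v) = sum(abs2, v .- v[1])

v = [0.1, 0.100001, 0.3]
r_lo = f(v)            # computed entirely in Float64
r_hi = f(big.(v))      # identical code, computed in BigFloat internally
r = Float64(r_hi)      # demote the result for a Float64-only caller
``````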