This might be trivial, but I am currently fighting with `JuMP` to get it to accept my quality function for a minimisation procedure. I figured out that my custom type:

```julia
using StaticArrays

struct Position <: FieldVector{3, Float64}
    x::Float64
    y::Float64
    z::Float64
end
```

is causing this error:

```
MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{getfield(JuMP, Symbol("##84#86")){KM3NeT.MultiDUMinimiser},Float64},Float64,6})
Closest candidates are:
Float64(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:194
Float64(::T<:Number) where T<:Number at boot.jl:741
Float64(!Matched::Int8) at float.jl:60
```
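For context, I believe the same conversion failure can be reproduced outside of JuMP: `register(..., autodiff=true)` differentiates the function with `ForwardDiff`, which calls it with `Dual` arguments instead of `Float64`, so anything that forces a `Float64` conversion inside the function fails. A small sketch of what I mean (the function names here are just for illustration):

```julia
using ForwardDiff

# A function that is generic in its element type: Duals pass through fine.
f(v) = v[1]^2 + v[2]^2

# A function that forces conversion to Float64, as a Float64-only
# field type would: ForwardDiff cannot push Duals through this.
g(v) = Float64(v[1])^2 + v[2]^2

ForwardDiff.gradient(f, [1.0, 2.0])  # works
ForwardDiff.gradient(g, [1.0, 2.0])  # MethodError: no method matching Float64(::ForwardDiff.Dual{...})
```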

when I try to

```julia
register(model, :qfunc, 6, m, autodiff=true)
...
@NLobjective(model, Min, qfunc(x, y, z, θ, ϕ, t₀))
optimize!(model)  # this is the line which fails with the message above
```

Given the candidates, I guess I have to implement something related to rounding? Can someone point me in the right direction?
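Or, since the `Dual` apparently needs to flow through the struct's fields, maybe the type just needs to be parametric instead of hard-coded to `Float64`? Something like this (untested sketch):

```julia
using StaticArrays

# Parametric element type: T can be Float64 during optimisation
# and ForwardDiff.Dual while JuMP computes derivatives.
struct PositionGeneric{T<:Real} <: FieldVector{3, T}
    x::T
    y::T
    z::T
end
```

The idea being that `PositionGeneric(x, y, z)` would then infer `T` from its arguments rather than converting them, so `Dual` inputs would stay `Dual`.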

I found this discussion, but I'm still clueless: https://github.com/JuliaDiffEq/DiffEqBase.jl/issues/169