ForwardDiff: "No Method Matching" Error

I’m testing out the ForwardDiff package for the first time, but unfortunately I’m getting a “No Method Matching” error that I can’t figure out. I was successful in making the following test example by mimicking the one in the documentation (https://github.com/JuliaDiff/ForwardDiff.jl):

f(x::Vector) = 4*(x[1])^2 + 2*(x[2])^3
x = [2.0,3.0]
grad(y) = ForwardDiff.gradient(f, y)
println(grad(x))

which gave [16.0, 54.0], as expected. However, when I tried this on a more complicated objective function called objective, I got the following error:

MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{var"#objective#282"{typeof(One_Age_Model_1eta),typeof(f_ICs),typeof(norm1),DataFrame,Array{String,1},Array{String,1},Array{Float64,1},Array{Union{Missing, Float64},1},Array{Union{Missing, Float64},1},Array{Float64,1},Array{Float64,1},Array{Int64,1},Array{Int64,1},Array{Int64,1},Array{Int64,1},Int64,Int64,Int64},Float64},Float64,10})
Closest candidates are:
  Float64(::Real, !Matched::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:716
  Float64(!Matched::Irrational{:ℯ}) at irrationals.jl:189

The objective function is fairly long, so I’ll just give an outline of my code:

function objective(x::Vector)

    # The vector x contains a bunch of parameters which are used
    # to solve an ODE system. (There are also a bunch of other objects
    # needed to evaluate the function, which are defined just before
    # the function definition.)
    # The ODE solution is compared against some data, and a residual
    # vector is calculated by comparing the observed data with the
    # ODE solution at a set of discrete points.
    # The function returns the L^2 norm of the residual vector.

    return norm
end

#Defining a gradient function
grad(y) = ForwardDiff.gradient(objective,y)

#Calling the gradient function gives the error shown earlier:
println(grad(x0))

Here, x0 is the input vector I’m using to test. I’ve confirmed that the objective function is working, and objective(x0) gives the expected output. So where am I going wrong here? Any help would be greatly appreciated.

Hi, thanks for your reply. I did try removing x::Vector but I still got the same error. Also, I just updated my post to provide a bit more detail.

Sorry - misread your question. Let me take another look at the updated description.

Can you create a similar-looking function as a MWE?

The problem is probably that somewhere you are trying to convert something to Float64. For example, this gives the same error:

ForwardDiff.derivative(x -> Float64(x)^2, 2)

For ForwardDiff to work, your code needs to be written so that any generic Real can pass through it. This might be as easy as removing the type annotations / explicit type conversions, or it may require parameterizing your functions with T <: Real where needed.
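To make the failure mode concrete, here is a minimal sketch (with hypothetical function names f_strict and f_generic, not taken from the thread). ForwardDiff's Dual is a subtype of Real that has no conversion to Float64, so any explicit Float64(...) call on the code path throws the MethodError above. The stdlib Rational type stands in for Dual below: it shows how the strict version collapses the element type to Float64, which is exactly the step that errors for a Dual.

```julia
# f_strict forces the input through Float64 -- the pattern that throws
# a MethodError for ForwardDiff.Dual, which cannot be converted to Float64.
f_strict(x::Real) = Float64(x)^2

# f_generic lets any Real flow through unchanged, which is what
# ForwardDiff needs.
f_generic(x::Real) = x^2

f_generic(1//2)   # 1//4  -- Rational preserved, as a Dual would be
f_strict(1//2)    # 0.25  -- element type collapsed; a Dual would error here
```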


Thanks for the info and suggestion! Based on the error message that seems reasonable. I’ll try this out and report back tomorrow when I get a chance.

First I’m going to give @marius311’s suggestion a shot, and failing that I’ll try to come up with a MWE…

Thank you @marius311 for the suggestion! I just did a Find + Replace and changed all the Float64s in that cell to Real, and it seemed to work. I do have a question about this: does ForwardDiff require that all arithmetic operations involve only objects of type Real?

Pretty much, yes, or arrays of Reals; see the Limitations of ForwardDiff page in the ForwardDiff documentation.
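One common place this bites is allocating output arrays: zeros(n) produces a Vector{Float64}, and writing Duals into it raises the same MethodError. A minimal sketch of the generic allocation pattern (the residuals function and its body are hypothetical, just to illustrate; Rational again stands in for Dual):

```julia
# Allocating with zeros(n) fixes the element type to Float64; deriving
# the element type from the input keeps the function generic over Real.
function residuals(x::AbstractVector{<:Real})
    r = zeros(eltype(x), length(x))   # or: similar(x)
    for i in eachindex(x)
        r[i] = x[i]^2 - 1
    end
    return r
end

residuals([1//2, 1//3])   # element type stays Rational{Int}
```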

If you need gradients through code which is not set up like this, Zygote would be the other thing to look at (currently reverse-mode only, and with more overhead, but very powerful; it can differentiate pretty much any code).


Ok, thanks for the info. My intention is to use ForwardDiff to supply the gradient to an optimization algorithm, which will be used to minimize objective(). The optimization algorithms I’m planning to try are wrappers of C code, so I guess I’ll find out whether ForwardDiff will work with that. All of the code to evaluate objective() is pure Julia code, so hopefully ForwardDiff can handle this…

Why don’t you use optimization algorithms written in Julia instead, e.g. Optim.jl?
