Hi guys, I am estimating a model by maximum likelihood. So far, I’ve been using Nelder-Mead as the optimization method, since it doesn’t require me to provide a gradient or Hessian.
Convergence has been somewhat slow, and I usually need 10k+ iterations, so I’ve been trying to switch to a gradient-based method. As a bonus, I would also be able to obtain standard errors without needing the bootstrap.
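Concretely, for the standard errors I have something like this in mind (just a hypothetical sketch; theta_hat would be the ML estimate and wrapmle is the negative log-likelihood I minimize):

using ForwardDiff, LinearAlgebra

# Observed information = Hessian of the negative log-likelihood at the optimum
H    = ForwardDiff.hessian(wrapmle, theta_hat)
vcov = inv(H)                 # asymptotic covariance of the estimates
se   = sqrt.(diag(vcov))      # standard errors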
I’ve read through the examples in the Optim and NLSolversBase documentation and can replicate them in Julia. However, when I try to use, for example, automatic differentiation, I get the following MethodError:
julia> optimize(wrapmle, theta, BFGS(), Optim.Options(iterations = 100000); autodiff = :forward)
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(wrapmle),Float64},Float64,9})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:185
Float64(::T<:Number) where T<:Number at boot.jl:725
Float64(::Int8) at float.jl:60
If I instead set it up as described in the Optim documentation, so that I can also get the gradient and compute standard errors, I get the same error:
julia> func = OnceDifferentiable(wrapmle, theta; autodiff = :forward);
julia> optimize(func, theta)
ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(wrapmle),Float64},Float64,9})
Closest candidates are:
Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:185
Float64(::T<:Number) where T<:Number at boot.jl:725
Float64(::Int8) at float.jl:60
I don’t really understand what is going on here. I won’t post my MLE function, as it is long, verbose, and highly model-specific. It does a fair amount of numerical integration by quadrature, I use a lot of pdf.(Normal(),…) and cdf.(Normal(),…) expressions to evaluate standard normal densities and CDFs, and I have around 25 parameters.
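To give an idea of the structure without posting the whole thing, a stripped-down toy with the same ingredients (quadrature plus normal pdf/cdf calls) would look roughly like this (not my actual model, just the pattern, with made-up names):

using Distributions

# Toy quadrature grid (stand-in for my real nodes/weights)
const nodes   = collect(range(-4.0, 4.0, length = 21))
const weights = fill(8.0 / 20, 21)          # equal weights over [-4, 4]

# Toy negative log-likelihood: quadrature over pdf.(Normal(), ...) and cdf.(Normal(), ...)
function wrapmle_toy(theta)
    mu, sigma = theta[1], exp(theta[2])     # exp keeps sigma positive
    integrand = pdf.(Normal(), (nodes .- mu) ./ sigma) ./ sigma .*
                cdf.(Normal(), nodes)
    return -log(sum(weights .* integrand))  # Optim minimizes, so negate
end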
Is there a specific way I should be writing my objective function to enable automatic differentiation? What is causing this error? Thanks!
Edit: Just an observation, but when I use finite differencing instead of forward-mode automatic differentiation, I don’t get this error!
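For reference, that working setup looks roughly like this (from memory):

# Same objective, but with finite-difference gradients instead of ForwardDiff
func_fd = OnceDifferentiable(wrapmle, theta; autodiff = :finite)
optimize(func_fd, theta, BFGS(), Optim.Options(iterations = 100_000))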