ForwardDiff Dual Type problem with GLM


I am working with the ForwardDiff package to implement some optimizations.
The core problem is the interaction between the Dual type (introduced by ForwardDiff's autodiff) and the GLM package. I've seen other posts about issues with ForwardDiff / Dual types and thoroughly read the answers, but couldn't find the right one for the case where I need to call the GLM package inside an optimization.

Here is a minimal working example:

using Distributions
using CSV, DataFrames
using GLM
using Optim, ForwardDiff
using StatsBase, LinearAlgebra

# Generate random data
N = 5 
y = rand(Normal(0,1), N) .> 1 # dependent variable (one or zero) in the regression below 
X = rand(Normal(0,1), N,2) # independent variables
z = [y X] # Define dataset in matrix format

# Objective function
function g(z,θ)
    X̃ = z[:,2:end]*θ # X̃ becomes Dual-typed once ForwardDiff starts differentiating

    df = DataFrame([z[:,1] X̃], :auto) # :auto generates column names x1, x2, ...
    probit = glm(@formula(x1 ~ x2), df, Binomial(), ProbitLink()) # the error is thrown here
    pred_y = predict(probit)

    Z = [z[:,2] pred_y]
    return Z
end

θ₀ = [.5,.5]
W = inv(X'X)
f(θ) = (ḡ = vec(mean(g(z,θ), dims=1)); dot(ḡ, W, ḡ)) # GMM objective: average moments over observations
result = Optim.optimize(f, θ₀, LBFGS(); autodiff=:forward) # run optimization

If I don't use the GLM package inside the function g(z,θ), there is no problem. The error occurs only when I call GLM inside the function being optimized.

As in other posts asking about ForwardDiff and the Dual type (conflicting with Float64), the error message is the following:

ERROR: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 2})
Closest candidates are:
  (::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at C:\Users\yangk\AppData\Local\Programs\julia-1.7.3\share\julia\base\rounding.jl:200
  (::Type{T})(::T) where T<:Number at C:\Users\yangk\AppData\Local\Programs\julia-1.7.3\share\julia\base\boot.jl:770
  (::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at C:\Users\yangk\AppData\Local\Programs\julia-1.7.3\share\julia\base\char.jl:50
  [1] convert(#unused#::Type{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 2})
    @ Base .\number.jl:7
  [2] setindex!(A::Matrix{Float64}, x::ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 2}, i1::Int64)
    @ Base .\array.jl:903
  [3] _unsafe_copyto!(dest::Matrix{Float64}, doffs::Int64, src::Matrix{ForwardDiff.Dual{ForwardDiff.Tag{typeof(f), Float64}, Float64, 2}}, soffs::Int64, n::Int64)
    @ Base .\array.jl:253
  [4] unsafe_copyto!
    @ .\array.jl:307 [inlined]
  [5] _copyto_impl!
    @ .\array.jl:331 [inlined]
  [6] copyto!
    @ .\array.jl:317 [inlined]
nceDifferentiable{Float64, Vector{Float64}, Vector{Float64}}, initial_x::Vector{Float64})
    @ Optim C:\Users\yangk\.julia\packages\Optim\6Lpjy\src\multivariate\solvers\first_order\l_bfgs.jl:164
 [26] optimize
    @ C:\Users\yangk\.julia\packages\Optim\6Lpjy\src\multivariate\optimize\optimize.jl:36 [inlined]
 [27] #optimize#89
    @ C:\Users\yangk\.julia\packages\Optim\6Lpjy\src\multivariate\optimize\interface.jl:142 [inlined]
 [28] top-level scope
    @ c:\Users\yangk\Desktop\Question_JuliaLang.jl:28
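If it helps, I believe the core failure in the stacktrace (a convert followed by setindex! into a Matrix{Float64}) can be reproduced in isolation. This is a hypothetical toy function, not actual GLM code: any function that writes intermediate results into a preallocated Float64 array will break under ForwardDiff, because a Dual cannot be converted to Float64.

```julia
using ForwardDiff

# Toy function that stores its intermediate result in a hard-coded
# Float64 buffer, mimicking what happens inside GLM.
function buffered_square(x)
    buf = Vector{Float64}(undef, 1)
    buf[1] = x * x   # setindex! calls convert(Float64, x * x)
    return buf[1]
end

buffered_square(2.0)  # fine with a plain Float64 input

# Under ForwardDiff, x * x is a Dual, and convert(Float64, ::Dual)
# has no method, producing the same MethodError as in the stacktrace:
caught = try
    ForwardDiff.derivative(buffered_square, 2.0)
    false
catch err
    err isa MethodError
end
```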

I spent quite a long time trying to solve this issue, and now I'm here to ask for help. Thanks a lot for reading.

Edit: following the reply from ChrisRackauckas below, I edited the code and included the stacktrace.

PreallocationTools.jl doesn't make sense here: you're not preallocating or mutating anything.

But I think this just boils down to glm not supporting ForwardDiff automatic differentiation. Share the stacktrace: it would probably point to what in the GLM package doesn't support AD.
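One way around this kind of limitation, sketched here as an assumption rather than taken from GLM: write the probit log-likelihood yourself with generic element types, so no Float64 array is ever allocated and Dual numbers flow straight through. `probit_nll` below is a hypothetical helper, not a GLM function.

```julia
using Distributions, ForwardDiff

# Dual-friendly probit negative log-likelihood: the element type of β
# (Float64 or Dual) propagates through every intermediate value,
# instead of being forced into a Float64 container.
function probit_nll(β, y, X)
    η = X * β  # linear predictor; element type follows β
    return -sum(y .* logcdf.(Normal(), η) .+ (1 .- y) .* logccdf.(Normal(), η))
end

# Example data (made up for illustration): intercept + one regressor.
y = [1.0, 0.0, 1.0, 0.0]
X = [1.0 0.3; 1.0 -0.8; 1.0 1.2; 1.0 -0.1]

# ForwardDiff can now differentiate through the probit objective:
∇ = ForwardDiff.gradient(β -> probit_nll(β, y, X), [0.0, 0.0])
```

From there you could hand `probit_nll` to Optim directly instead of calling `glm`, keeping the whole pipeline AD-compatible.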

Thanks for your reply!

You are right. The problem is caused by the interaction between GLM and ForwardDiff AD. For those who run into similar issues, I modified the code and added the stacktrace to the original post.