Hi @miguelborrero!
The reason for this error is explained in the limitations of ForwardDiff: for it to work, you must make sure that your functions can accept numbers of generic type, not just `Float64`. Your code in its current state has several forced conversions to `Float64`: they do not improve performance, and they make autodiff in forward mode impossible.
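To see the failure mode in isolation, here is a minimal sketch (with made-up function names, not your actual code): a method restricted to `Float64` cannot accept the `Dual` numbers that ForwardDiff passes in, while the generic version can.

```julia
using ForwardDiff

f_strict(x::Vector{Float64}) = sum(abs2, x)  # only accepts Float64 inputs
f_generic(x) = sum(abs2, x)                  # accepts any number type, including Dual

ForwardDiff.gradient(f_generic, [1.0, 2.0])   # works, returns [2.0, 4.0]
# ForwardDiff.gradient(f_strict, [1.0, 2.0])  # MethodError: no method for Vector{Dual}
```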
Here is a version where I made a few changes to:

- increase generality by adapting to arbitrary types
- improve clarity and possibly performance by turning `Observation` into a `StaticVector` instead of having a loop over its field names (this only helps when `Observation` has a handful of fields)
```julia
using ForwardDiff
using LinearAlgebra
using StaticArrays

# adapt struct to arbitrary field type
struct Observation{T<:Real}
    y::T
    x1::T
    x2::T
end

# efficiently turn Observation into a vector of statically known size
to_vector(o::Observation) = SVector(o.y, o.x1, o.x2)
```
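A quick illustration of the genericity (toy values of my own): the same struct and conversion work for any `Real` element type.

```julia
to_vector(Observation(1.0, 2.0, 3.0))  # SVector{3, Float64}
to_vector(Observation(1, 2, 3))        # SVector{3, Int64}, same code with another element type
```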
```julia
# this function will generate the logit data for Q4
function simulate_logit_data(; n_obs=300000)
    # define the DGP
    X1 = 1.0:0.01:10.0
    X2 = 18.0:0.01:65.0
    α = -1.0
    θ_1 = 0.5
    θ_2 = 0.02
    # pre-allocate the container for the data, fully specifying its type
    data = Vector{Observation{Float64}}(undef, n_obs)
    for i in eachindex(data)
        x1 = rand(X1)
        x2 = rand(X2)
        u = rand()
        y = (α + θ_1 * x1 + θ_2 * x2 + log(u / (1 - u)) > 0) ? 1.0 : 0.0
        data[i] = Observation(y, x1, x2)
    end
    return data
end
```
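As a quick sanity check (a snippet of my own, not part of the exercise), the simulated sample should contain both outcome classes:

```julia
data_small = simulate_logit_data(; n_obs=1_000)
share_ones = sum(o.y for o in data_small) / length(data_small)
@assert 0 < share_ones < 1  # both y == 0 and y == 1 occur in the sample
```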
```julia
# this function will compute the log-likelihood given a parameter space point
function LL(θ, data)
    ll = zero(eltype(θ))  # give ll the element type of θ, which may be a Dual number
    for i in eachindex(data)
        obs = data[i]
        Xθ = dot(θ, to_vector(obs))  # precompute this quantity as a dot product
        ll += -log(1 + exp(Xθ)) + obs.y * Xθ
    end
    return ll
end
```
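For reference, each loop iteration adds the standard logit log-likelihood contribution, where $x_i$ stands for `to_vector(obs)`:

$$\ell_i(\theta) = y_i \, x_i^\top \theta - \log\left(1 + e^{x_i^\top \theta}\right)$$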
```julia
data = simulate_logit_data()
ForwardDiff.gradient(Base.Fix2(LL, data), @SVector(rand(3)))
```
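In case `Base.Fix2` looks unfamiliar: it just fixes the second argument of `LL`, so the call above is equivalent to this explicit closure:

```julia
# same gradient, written with an anonymous function instead of Base.Fix2
ForwardDiff.gradient(θ -> LL(θ, data), @SVector(rand(3)))
```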