I am using Turing a lot, but now I have a model that needs to keep some persistent memory to be computationally efficient. A minimal example would be:
using Turing, Optim, ForwardDiff

mutable struct Acomputer{T <: Number}
    a::Vector{T}  # <- in reality I would need more
    para::T
end

function set_para!(ac::Acomputer, p)
    ac.para = p
end

function get_a!(ac::Acomputer)
    fill!(ac.a, ac.para)  # <- in reality this is more involved
    return ac.a
end
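For reference, used outside the model the helpers behave as expected (the values here just come from the toy fill! above):

ac = Acomputer(zeros(2), 3.0)
set_para!(ac, 5.0)
get_a!(ac)  # returns [5.0, 5.0]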
The model is then defined like this:
@model function m(ac, x)
    para ~ Uniform(0, 10)
    set_para!(ac, para)
    a = get_a!(ac)
    x .~ Poisson.(a)
end
I can now optimise it without gradients:
ac = Acomputer(zeros(2), 3.0)
optimize(m(ac, [4,6]), MLE(), NelderMead())
giving the expected result (the MLE of a Poisson rate is the sample mean of [4, 6], i.e. 5):
ModeResult with maximized lp of -3.66
1-element Named Vector{Float64}
A     │
──────┼────────
:para │ 5.00009
However, if I want to make use of AD, i.e. ForwardDiff, this no longer works:
ac = Acomputer{ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1}}(zeros(2), 3.0)
optimize(m(ac, [4,6]), MLE())
I get:
MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1})
Closest candidates are:
(::Type{T})(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
(::Type{T})(::T) where T<:Number at boot.jl:772
(::Type{T})(::AbstractChar) where T<:Union{AbstractChar, Number} at char.jl:50
...
… and I don't know how to resolve this issue.
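The only workaround I can think of is a sketch like the following (untested; m2 is just a hypothetical name): allocate the buffer inside the model with an element type matching para, so that ForwardDiff.Dual values fit. But that allocates on every evaluation and defeats the whole point of the persistent memory:

@model function m2(x)
    para ~ Uniform(0, 10)
    # hypothetical: build the cache per evaluation so eltype(ac.a) == typeof(para),
    # which also covers ForwardDiff.Dual; this loses the persistence, though
    ac = Acomputer(fill(para, length(x)), para)
    a = get_a!(ac)
    x .~ Poisson.(a)
end

optimize(m2([4, 6]), MLE())

Ideally, though, I would keep a single pre-allocated Acomputer across evaluations. Can anyone help?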