DiffEqCallbacks' StepsizeLimiter and Dual numbers

I am trying to use GalacticOptim to run an optimization, with GalacticOptim.AutoForwardDiff() generating the gradients for the optimizer over my objective function. The objective function contains a model with differential equations that I am solving with DifferentialEquations.jl. All is fine until I add a StepsizeLimiter callback from DiffEqCallbacks.jl (meaning that the optimizer runs correctly on my objective function without the callback present).

With the callback present, I am getting the error:

 LoadError: TypeError: in setfield!, expected Float64, got a value of type ForwardDiff.Dual{Nothing, Float64, 2}

Reading the documentation (GitHub - SciML/DiffEqCallbacks.jl: A library of useful callbacks for hybrid scientific machine learning (SciML) with augmented differential equation solvers) tells me that this probably has to do with cached_dtcache, which I have to set to the right type because my time domain is no longer Float64… At least, that's what I think the problem boils down to! (So close to a solution!?) But… what is the specific type to use here?
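For reference, the constructor signature as shown in the README defaults the cache to a Float64 zero, which looks like exactly what setfield! then insists on:

StepsizeLimiter(dtFE; safety_factor = 9//10, max_step = false, cached_dtcache = 0.0)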

In other words: What should the “???” be in this line of code:
StepsizeLimiter(dtFE, cached_dtcache = ???)

Any help is greatly appreciated, and sorry for not posting an MWE. I thought this boils down to a simple question, but of course I can create an MWE if you think it's indispensable.

Just make it match the eltype of the parameters in there. It's hard to say without seeing your code, but it would be something like cached_dtcache = zero(eltype(p)), or zero of whatever type is Dual.
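Concretely (a sketch, with dtFE and p standing in for whatever you have):

cb = StepsizeLimiter(dtFE; cached_dtcache = zero(eltype(p)))

zero(eltype(p)) is a Float64 zero on a plain evaluation and a Dual zero when ForwardDiff is differentiating, so the cache field always matches.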

OK, does this imply that I would have to regenerate the callback from scratch whenever I swap over from Float64 parameters to Dual parameters?

I'll give it a try, and if performance is bad for this, I guess I'll be back :)

Yes, this callback does need to be regenerated so that its cache matches the new element type.
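In practice that just means constructing it inside the objective function, so the cache picks up whatever element type the parameters currently have. A minimal sketch (made-up dynamics and a made-up dtFE, hypothetical names, just to show the pattern):

using DifferentialEquations, DiffEqCallbacks

# Made-up step-size bound for illustration.
dtFE(u, p, t) = 0.1 * abs(u[1]) + 0.01

function loss(p, _)
    # Rebuild the callback on every call so the cache matches eltype(p):
    # Float64 on a plain evaluation, ForwardDiff.Dual under AutoForwardDiff().
    cb = StepsizeLimiter(dtFE; cached_dtcache = zero(eltype(p)))
    u0 = eltype(p)[1.0]  # promote the state to match p as well
    prob = ODEProblem((u, p, t) -> -p[1] .* u, u0, (0.0, 1.0), p)
    sol = solve(prob, Tsit5(); callback = cb)
    return sum(abs2, sol.u[end])
end

Rebuilding is just constructing a small callback object, so it should be cheap relative to the ODE solve itself.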