Changing the relstep of AutoFiniteDiff() in an OptimizationFunction

I want to define an OptimizationProblem using Optimization.jl and FiniteDiff:

    using Optimization, ADTypes  # AutoFiniteDiff is defined in ADTypes

    optf = OptimizationFunction(f_zero, AutoFiniteDiff())
    prob = OptimizationProblem(optf, [stretched_tether_length - 0.05, 0.0, 0.0, 0.0];
        lb = [stretched_tether_length - 0.1, -1.0, -1.0, -0.3],
        ub = [stretched_tether_length, 1.0, 1.0, 0.3])

But I suspect that the step size of AutoFiniteDiff is too small to give reliable gradients, because f_zero is noisy. How can I change the step size of FiniteDiff through this interface? I know that FiniteDiff.jl has relstep and absstep parameters in its API, but how can I access them?
https://docs.sciml.ai/FiniteDiff/stable/api/
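
For comparison, here is a minimal sketch of how the step size is set when calling FiniteDiff.jl directly, outside the Optimization.jl interface; f_noisy and the step values are illustrative stand-ins, not the actual problem:

    using FiniteDiff

    # Stand-in for a noisy objective (illustrative only).
    f_noisy(x) = sum(abs2, x) + 1e-6 * randn()

    x0 = [1.0, 2.0]

    # The default forward-difference relstep is ~sqrt(eps()), which noise can
    # swamp; a coarser step trades truncation error for robustness to noise.
    g = FiniteDiff.finite_difference_gradient(f_noisy, x0, Val(:forward);
        relstep = 1e-4, absstep = 1e-4)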

I’m not sure that’s exposed. Open an issue?


This would be an issue with DifferentiationInterface.jl, right?

With ADTypes.jl first, to add the option to the backend object, and then DifferentiationInterface.jl, so that it is taken into account during differentiation. You can open both issues simultaneously and link them to each other :wink:
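
To make that plan concrete, here is a hypothetical sketch of how a backend object carrying the new options could thread them into FiniteDiff; the field names relstep/absstep mirror FiniteDiff's keywords and are assumptions from this thread, not a released ADTypes API:

    using FiniteDiff

    # Hypothetical stand-in for an extended AutoFiniteDiff backend object;
    # the relstep/absstep fields are the proposed additions, not released API.
    Base.@kwdef struct MyFiniteDiffBackend
        fdtype = Val(:forward)
        relstep = nothing   # `nothing` means fall back to FiniteDiff's default
        absstep = nothing
    end

    # How a differentiation layer could forward the options to FiniteDiff:
    function gradient(f, b::MyFiniteDiffBackend, x)
        relstep = something(b.relstep, FiniteDiff.default_relstep(b.fdtype, eltype(x)))
        absstep = something(b.absstep, relstep)
        FiniteDiff.finite_difference_gradient(f, x, b.fdtype; relstep, absstep)
    end

    g = gradient(x -> sum(abs2, x), MyFiniteDiffBackend(relstep = 1e-4), [1.0, 2.0])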


Yup, that's probably the implementation plan. I won't get to it, but it's probably only an hour or so of work for whoever picks it up. Once that is all there, Optimization will "get it for free".
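
At that point the code from the question would presumably only change in the backend constructor; the relstep keyword on AutoFiniteDiff below is the assumed new option discussed above, not a released API at the time of this thread:

    # Hypothetical end state: same setup as the question, with the assumed
    # step-size option passed to the backend.
    optf = OptimizationFunction(f_zero, AutoFiniteDiff(; relstep = 1e-4))
    prob = OptimizationProblem(optf, [stretched_tether_length - 0.05, 0.0, 0.0, 0.0];
        lb = [stretched_tether_length - 0.1, -1.0, -1.0, -0.3],
        ub = [stretched_tether_length, 1.0, 1.0, 0.3])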

I can give it a try
