Warning about SecondOrder ADtype in Optimization.jl

Hi, I’m trying to replicate the example from Optim.jl · Optimization.jl, i.e. I’m running

using Optimization, OptimizationOptimJL
function RunTest()
    rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
    cons = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
    x0 = zeros(2)
    p = [1.0, 100.0]
    optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)
    prob = Optimization.OptimizationProblem(optf, x0, p, lcons = [-5.0], ucons = [10.0])
    sol = solve(prob, IPNewton())
    display(sol)
end
RunTest()

However, I get the following warning. How can I specify a SecondOrder with AutoForwardDiff()? (I've read the docs and know that I'll probably want to use AutoEnzyme instead, but that produces the same warning, so I've decided to keep the MWE close to the example on the website.)

┌ Warning: The selected optimization algorithm requires second order derivatives, but SecondOrder ADtype was not provided.
│ So a SecondOrder with AutoForwardDiff() for both inner and outer will be created, this can be suboptimal and not work in some cases so
│ an explicit SecondOrder ADtype is recommended.

The code that governs AD backend selection seems to be here:

By default, when you provide a first-order backend adtype, it will use soadtype = DifferentiationInterface.SecondOrder(adtype, adtype) to compute the Hessian. You can make this choice yourself by passing adtype = SecondOrder(adtype_outer, adtype_inner), in which case adtype_inner will be used for the gradient.
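
For illustration, here is a minimal sketch of both forms, assuming the rosenbrock and cons definitions from the MWE above (the variable names optf_default and optf_explicit are just illustrative):

    using Optimization, OptimizationOptimJL
    using DifferentiationInterface   # brings SecondOrder into scope

    # Default: a single first-order backend; Optimization.jl internally builds
    # SecondOrder(adtype, adtype) for the Hessian and emits the warning above.
    optf_default = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff(); cons = cons)

    # Explicit: pick the outer (Hessian) and inner (gradient) backends yourself,
    # which avoids the warning.
    optf_explicit = OptimizationFunction(rosenbrock,
        SecondOrder(Optimization.AutoForwardDiff(), Optimization.AutoForwardDiff()); cons = cons)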

Note that I deduced this from the Optimization.jl source, so if it is not documented it is subject to change. @Vaibhavdixit02 is the right person to ask.


Thanks! By importing DifferentiationInterface I could just use

    using DifferentiationInterface
    optprob = OptimizationFunction(rosenbrock, SecondOrder(AutoForwardDiff(), AutoForwardDiff()); cons = cons)

which solved the problem for me!


It’s a bit weird that the default example throws a warning, I agree, and the current implementation is slightly suboptimal too. I opened an issue to clarify and improve this part.
