Optim.jl - No default objective type for IPNewton

Please consider the following code snippet using Optim.jl:

using Optim

dof = 7

fun(x) = 0.0; x0 = fill(0.1, dof)
df = TwiceDifferentiable(fun, x0)

lx = fill(-1.2, dof); ux = fill(+1.2, dof)
dfc = TwiceDifferentiableConstraints(lx, ux)

# This call works:
# res = optimize(df, dfc, x0, IPNewton())

# This call errors:
res = optimize(df, dfc, x0, IPNewton(); autodiff=:forward)

If I call optimize() without the autodiff=:forward flag, the snippet above works. However, if I set autodiff=:forward, I get the following error:

No default objective type for IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}(Optim.backtrack_constrained_grad, :auto, false) and (TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}(fun, getfield(NLSolversBase, Symbol("#g!#44")){typeof(fun),DiffEqDiffTools.GradientCache{Nothing,Nothing,Nothing,Val{:central},Float64,Val{true}}}(fun, DiffEqDiffTools.GradientCache{Nothing,Nothing,Nothing,Val{:central},Float64,Val{true}}(nothing, nothing, nothing)), getfield(NLSolversBase, Symbol("#fg!#45")){typeof(fun)}(fun, Core.Box(getfield(NLSolversBase, Symbol("#g!#44")){typeof(fun),DiffEqDiffTools.GradientCache{Nothing,Nothing,Nothing,Val{:central},Float64,Val{true}}}(fun, DiffEqDiffTools.GradientCache{Nothing,Nothing,Nothing,Val{:central},Float64,Val{true}}(nothing, nothing, nothing)))), getfield(NLSolversBase, Symbol("#h!#46")){typeof(fun)}(fun), 0.0, [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0; 0.0 0.0 0.0 0.0 0.0 0.0 0.0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1], [1], [1], [1]), TwiceDifferentiableConstraints{getfield(NLSolversBase, Symbol("##88#91")),getfield(NLSolversBase, Symbol("##89#92")),getfield(NLSolversBase, Symbol("##90#93")),Float64}(getfield(NLSolversBase, Symbol("##88#91"))(), getfield(NLSolversBase, Symbol("##89#92"))(), getfield(NLSolversBase, Symbol("##90#93"))(), ConstraintBounds:
    x[1]≥-1.2, x[1]≤1.2, x[2]≥-1.2, x[2]≤1.2, x[3]≥-1.2, x[3]≤1.2, x[4]≥-1.2, x[4]≤1.2, x[5]≥-1.2, x[5]≤1.2, x[6]≥-1.2, x[6]≤1.2, x[7]≥-1.2, x[7]≤1.2
  Linear/nonlinear constraints:)).

Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] promote_objtype(::IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Array{Float64,1}, ::Symbol, ::Bool, ::TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}, ::TwiceDifferentiableConstraints{getfield(NLSolversBase, Symbol("##88#91")),getfield(NLSolversBase, Symbol("##89#92")),getfield(NLSolversBase, Symbol("##90#93")),Float64}) at /home/henrique/.julia/packages/Optim/Agd3B/src/multivariate/optimize/interface.jl:37
 [3] #optimize#88(::Bool, ::Symbol, ::Function, ::TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}, ::TwiceDifferentiableConstraints{getfield(NLSolversBase, Symbol("##88#91")),getfield(NLSolversBase, Symbol("##89#92")),getfield(NLSolversBase, Symbol("##90#93")),Float64}, ::Array{Float64,1}, ::IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}) at /home/henrique/.julia/packages/Optim/Agd3B/src/multivariate/optimize/interface.jl:121
 [4] (::getfield(Optim, Symbol("#kw##optimize")))(::NamedTuple{(:autodiff,),Tuple{Symbol}}, ::typeof(optimize), ::TwiceDifferentiable{Float64,Array{Float64,1},Array{Float64,2},Array{Float64,1}}, ::TwiceDifferentiableConstraints{getfield(NLSolversBase, Symbol("##88#91")),getfield(NLSolversBase, Symbol("##89#92")),getfield(NLSolversBase, Symbol("##90#93")),Float64}, ::Array{Float64,1}, ::IPNewton{typeof(Optim.backtrack_constrained_grad),Symbol}, ::Optim.Options{Float64,Nothing}) at ./none:0 (repeats 2 times)
 [5] top-level scope at In[51]:1

Can somebody help me understand why? Is it perhaps because AD is not supported for IPNewton with multiple constraints? Thanks!

It's probably an oversight. What version of Optim?

Thanks for your reply! (v1.1) pkg> st shows [429524aa] Optim v0.18.1. What do you mean by oversight?

I mean that I probably forgot to think of this edge case :) Could you open an issue at https://github.com/JuliaNLSolvers/Optim.jl/issues/new? I know you were already sent here from Slack.

Are you trying to specify that the TwiceDifferentiable objective should be differentiated using ForwardDiff? If so, that should be specified when you construct it, i.e. in the TwiceDifferentiable(fun, x0; autodiff=:forward) part of the code, or you could just pass fun directly to optimize (though that method may also not exist for the constrained case).
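For what it's worth, applying that suggestion to the original snippet would look something like the following. This is a sketch against Optim v0.18; I'm assuming here that the TwiceDifferentiable constructor accepts autodiff = :forward, as documented for NLSolversBase:

```julia
using Optim

dof = 7
fun(x) = 0.0
x0 = fill(0.1, dof)

# Request forward-mode AD when constructing the objective,
# instead of passing autodiff to optimize():
df = TwiceDifferentiable(fun, x0; autodiff = :forward)

lx = fill(-1.2, dof); ux = fill(+1.2, dof)
dfc = TwiceDifferentiableConstraints(lx, ux)

# No autodiff keyword needed here; df already carries AD-based
# gradient and Hessian:
res = optimize(df, dfc, x0, IPNewton())
```

The idea is that the autodiff choice belongs to the objective wrapper, not to the optimize call, so IPNewton never has to promote the objective type itself.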

Sure! I have opened an issue here: https://github.com/JuliaNLSolvers/Optim.jl/issues/711.

Actually, I’d like to use ForwardDiff for both df and dfc, since in my more complex application I want something like dfc = TwiceDifferentiableConstraints(con_c!, lx, ux, lc, uc; autodiff=:forward). Unfortunately, this does not currently work.

On a slightly different note, I am also a bit confused by the NDifferentiable instances:

The words in front of Differentiable in the type names ( Non , Once , Twice ) are not meant to indicate a specific classification of the function as such (a OnceDifferentiable might be constructed for an infinitely differentiable function), but signals to an algorithm if the correct functions have been constructed or if automatic differentiation should be used to further differentiate the function.

So, is there a difference between

df = TwiceDifferentiable(fun, x0; autodiff=:forwarddiff)

and

df = NonDifferentiable(fun, x0; autodiff=:forwarddiff)

or even

df = NonDifferentiable(fun, x0)


Shouldn’t the latter be the preferred syntax for a function that is differentiable but for which the user did not supply the Jacobian or Hessian manually? Maybe I should start a new topic for this question alone…

The quick answer is: no. *NDifferentiable is an implementation detail, and few users should need to touch it at all. I generally want people to use

optimize(f, g!, x, method, options)

The main exception is people who care about cache arrays because they’re running similar optimizations over and over again. The problem is that the constrained optimization interface isn’t as well developed, so there are probably quite a few cases where the interface that should work simply doesn’t.
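For unconstrained problems, that plain interface looks something like the sketch below. I'm assuming the standard Optim keyword autodiff = :forward on optimize; the Rosenbrock function is just an illustrative test objective, not from the thread above:

```julia
using Optim

# A classic smooth test function with minimum at (1, 1):
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x0 = zeros(2)

# No *NDifferentiable construction needed; Optim wraps the
# function internally, and the gradient comes from ForwardDiff
# via the autodiff keyword:
res = optimize(rosenbrock, x0, BFGS(); autodiff = :forward)
```

Here the wrapper type is chosen for you based on the method and the keywords, which is exactly why most users never need to construct an NDifferentiable by hand.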