Cannot register function because it "does not support differentiation", although it does

I want to perform some nonlinear optimization using Ipopt through JuMP.

I have a user-defined function p_norm_ln_Sum (see below) that I want to use throughout the optimization. I keep getting the error ERROR: LoadError: Unable to register the function :p_norm_ln_Sum because it does not support differentiation via ForwardDiff. I followed the debugging instructions from JuMP and tested my function with ForwardDiff.jl alone, and it works just as intended.

Here’s the code:

using JuMP, Ipopt
using ForwardDiff

# Some dummy parameters
dt = 42.0;
p = 2;
NumStageEvals = 1;
NumEigVals = 2;
RealEigVals = [42.0, 42.0];
ImagEigVals = [42.0, 42.0];

# To my understanding, `T` is the type ForwardDiff substitutes (a Dual number)
# to overload the computations and compute the derivative.
# This is why I use it only for the decision variables.
function p_norm_ln_Sum(x::Vector{T}, dt::Float64, p::Int, NumStageEvals::Int, 
                       NumEigVals::Int, RealEigVals::Vector{Float64}, 
                       ImagEigVals::Vector{Float64}) where {T<:Real}

  # For each eigenvalue λ_i = RealEigVals[i] + im*ImagEigVals[i], accumulate
  # log|1 - (x[2j-1] + im*x[2j]) * dt * λ_i|^2 over all stage evaluations j.
  ln_Sums = zeros(T, NumEigVals)

  for j in 1:NumStageEvals
    ln_Sums .+= log.((1 .- x[2*j-1]*dt*RealEigVals .+ x[2*j]*dt*ImagEigVals).^2 .+
                     (x[2*j-1]*dt*ImagEigVals .+ x[2*j]*dt*RealEigVals).^2)
  end

  return sum(ln_Sums[i]^p for i in 1:NumEigVals)^(1.0/p)
end

x = [42.0, 42.0]; # Some values
# Works perfectly
y = ForwardDiff.gradient(x->p_norm_ln_Sum(x, dt, p, NumStageEvals, NumEigVals, RealEigVals, ImagEigVals), x)
display(y); println() # Gives some meaningful output

model = Model(Ipopt.Optimizer)

# Does not work
register(model, :p_norm_ln_Sum, 7, p_norm_ln_Sum; autodiff=true) 

The full error message is:

Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] _validate_register_assumptions(f::typeof(p_norm_ln_Sum), name::Symbol, dimension::Int64)
   @ JuMP ~/.julia/packages/JuMP/Y4piv/src/nlp.jl:1979
 [3] register(m::Model, s::Symbol, dimension::Int64, f::Function; autodiff::Bool)
   @ JuMP ~/.julia/packages/JuMP/Y4piv/src/nlp.jl:2052
 [4] top-level scope
   @ ~/Desktop/git/MA/Optim_Julia/Opt_Roots.jl:31
in expression starting at /Optim_Julia/Opt_Roots.jl:31

caused by: MethodError: no method matching p_norm_ln_Sum(
::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7},
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7},
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7},
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7},
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}, 
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7},
 ::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7})

Stacktrace:
 [1] (::JuMP.var"#142#143"{typeof(p_norm_ln_Sum)})(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}})
   @ JuMP ~/.julia/packages/JuMP/Y4piv/src/nlp.jl:1975
 [2] vector_mode_dual_eval!(f::JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7, Vector{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}}}, x::Vector{Float64})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/wAaVJ/src/apiutils.jl:37
 [3] vector_mode_gradient(f::JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7, Vector{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}}})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/wAaVJ/src/gradient.jl:106
 [4] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7, Vector{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}}}, ::Val{true})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/wAaVJ/src/gradient.jl:19
 [5] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7, Vector{ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#142#143"{typeof(p_norm_ln_Sum)}, Float64}, Float64, 7}}}) (repeats 2 times)
   @ ForwardDiff ~/.julia/packages/ForwardDiff/wAaVJ/src/gradient.jl:17
 [6] _validate_register_assumptions(f::typeof(p_norm_ln_Sum), name::Symbol, dimension::Int64)
   @ JuMP ~/.julia/packages/JuMP/Y4piv/src/nlp.jl:1975
 [7] register(m::Model, s::Symbol, dimension::Int64, f::Function; autodiff::Bool)
   @ JuMP ~/.julia/packages/JuMP/Y4piv/src/nlp.jl:2052
 [8] top-level scope

While ForwardDiff only takes the gradient with respect to x, JuMP tries to differentiate with respect to all of the inputs. AFAIK it will work if you restructure the function so that it can be called the way JuMP calls it.
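You can see the mismatch without JuMP in the loop: register splats the 7 inputs into individual scalar arguments before differentiating, so the call below fails with the same missing method (plain Float64s here instead of the Duals in your stack trace, but dispatch fails identically):

# Sketch: reproduces the MethodError from the stack trace above. The only
# method of p_norm_ln_Sum takes (Vector, Float64, Int, Int, Int, Vector, Vector),
# so a call with 7 scalars has no matching method.
p_norm_ln_Sum(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)  # MethodError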

See the documentation: Nonlinear Modeling · JuMP

You cannot provide a vector as input.


Hm, do you know of any guidance/tutorial on how to use functions in JuMP that take vector-valued decision variables together with vector-valued parameters? To my understanding, I would now need to construct cascades of these splatted functions to reproduce my function as written above.

how to use functions in JuMP that take vector-valued decision variables together with vector-valued parameters?

You cannot. You can only use functions which take scalar variables as arguments.

In your case, you might create a new function like:

(x...) -> p_norm_ln_Sum(collect(x), dt, p, ...)
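For completeness, a minimal end-to-end sketch of that approach (untested, assuming NumStageEvals == 1 as in your example; p_norm_splat is just a name I made up for the wrapper). The fixed parameters are captured by the closure, so only the 2*NumStageEvals decision variables are registered:

using JuMP, Ipopt

# Splatting wrapper: JuMP passes the decision variables as individual scalars,
# and collect turns them back into the vector p_norm_ln_Sum expects.
p_norm_splat(x...) = p_norm_ln_Sum(collect(x), dt, p, NumStageEvals,
                                   NumEigVals, RealEigVals, ImagEigVals)

model = Model(Ipopt.Optimizer)
@variable(model, x[1:2*NumStageEvals])

# The registered dimension is the number of scalar arguments, i.e.
# 2*NumStageEvals, not 7: the parameters are no longer arguments.
register(model, :p_norm_splat, 2*NumStageEvals, p_norm_splat; autodiff = true)

# Splatting is also supported inside the @NL macros.
@NLobjective(model, Min, p_norm_splat(x...))
optimize!(model)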