BoundsError with NLopt in JuMP with a user-defined function and its gradient

Hello,

I am solving an NLP in JuMP whose constraints involve several user-defined nonlinear functions. I register each function together with its gradient as follows:

for j in 1:24
    # create_cons returns the constraint function and its gradient for index j
    f, ∇f = create_cons(j, inputs)
    register(m, Symbol("myfunc_$j"), length(x), f, ∇f)
    add_nonlinear_constraint(m, :($(Symbol("myfunc_$j"))($(x...)) >= $(p)))
end

The function create_cons(j, inputs) returns two values, f and ∇f in that order: the constraint function and its gradient, both of which depend on the loop index j and on the dictionary inputs holding some input parameters.
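For context, here is a minimal sketch of what create_cons does (the inputs["a"] lookup and the quadratic form are hypothetical placeholders, not my real constraints):

    function create_cons(j, inputs)
        a = inputs["a"][j]                   # hypothetical parameter lookup
        f(y...) = sum(a * yi^2 for yi in y)  # constraint function takes splatted arguments
        function ∇f(g, y...)                 # gradient fills g in place
            for i in 1:length(y)
                g[i] = 2 * a * y[i]
            end
            return
        end
        return f, ∇f
    end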

I can solve the problem with no errors when using Ipopt. However, when I try to solve the problem with NLopt using the algorithm :LD_SLSQP, I get the following error:

ERROR: BoundsError
Stacktrace:
  [1] _copyto_impl!(dest::Vector{Float64}, doffs::Int64, src::Vector{Float64}, soffs::Int64, n::Int64)
    @ Base ./array.jl:329
  [2] copyto!
    @ ./array.jl:322 [inlined]
  [3] copyto!
    @ ./array.jl:346 [inlined]
  [4] _reverse_mode(d::MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator, x::Vector{Float64})
    @ MathOptInterface.Nonlinear.ReverseAD ~/.julia/packages/MathOptInterface/goW8i/src/Nonlinear/ReverseAD/reverse_mode.jl:57
  [5] eval_constraint_jacobian(d::MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator, J::Vector{Float64}, x::Vector{Float64})
    @ MathOptInterface.Nonlinear.ReverseAD ~/.julia/packages/MathOptInterface/goW8i/src/Nonlinear/ReverseAD/mathoptinterface_api.jl:211
  [6] eval_constraint_jacobian(evaluator::MathOptInterface.Nonlinear.Evaluator{MathOptInterface.Nonlinear.ReverseAD.NLPEvaluator}, J::Vector{Float64}, x::Vector{Float64})
    @ MathOptInterface.Nonlinear ~/.julia/packages/MathOptInterface/goW8i/src/Nonlinear/evaluator.jl:143
  [7] (::NLopt.var"#g_eq#22"{NLopt.Optimizer, Vector{Float64}, Vector{Float64}, Vector{Tuple{Int64, Int64}}, Int64, Vector{Int64}, Vector{Int64}})(result::Vector{Float64}, x::Vector{Float64}, jac::Matrix{Float64})
    @ NLopt ~/.julia/packages/NLopt/OIUOZ/src/MOI_wrapper.jl:906
  [8] optimize!(model::NLopt.Optimizer)
    @ NLopt ~/.julia/packages/NLopt/OIUOZ/src/MOI_wrapper.jl:939
  [9] optimize!
    @ ~/.julia/packages/MathOptInterface/goW8i/src/Bridges/bridge_optimizer.jl:376 [inlined]
 [10] optimize!
    @ ~/.julia/packages/MathOptInterface/goW8i/src/MathOptInterface.jl:85 [inlined]
 [11] optimize!(m::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.Bridges.LazyBridgeOptimizer{NLopt.Optimizer}, MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}})
    @ MathOptInterface.Utilities ~/.julia/packages/MathOptInterface/goW8i/src/Utilities/cachingoptimizer.jl:316
 [12] optimize!(model::Model; ignore_optimize_hook::Bool, _differentiation_backend::MathOptInterface.Nonlinear.SparseReverseMode, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ JuMP ~/.julia/packages/JuMP/ptoff/src/optimizer_interface.jl:440
 [13] optimize!
    @ ~/.julia/packages/JuMP/ptoff/src/optimizer_interface.jl:410 [inlined]
 [14] top-level scope
    @ ./timing.jl:262 [inlined]
 [15] top-level scope
    @ ./Code/src/Main.jl:0

I don’t understand why ReverseAD is being called at all, given that I provide the gradients myself.
For info, I do provide initial values for the decision variables, which represent a feasible point of the problem.
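Roughly like this (x0 is a placeholder name for the known feasible point):

    set_start_value.(x, x0)  # use the feasible point as the starting values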

Any ideas why this occurs? I appreciate your help!

Can you provide a reproducible example? It’s hard to tell what’s going on here just from that snippet.
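Something self-contained along these lines would help, with placeholder functions standing in for your create_cons closures (the quadratic f and ∇f below are assumptions, not your actual constraints):

    using JuMP, NLopt

    model = Model(NLopt.Optimizer)
    set_optimizer_attribute(model, "algorithm", :LD_SLSQP)

    n = 2
    @variable(model, x[1:n] >= 0.1, start = 1.0)
    @objective(model, Min, sum(x))

    # Placeholder constraint function and in-place gradient
    f(y...) = sum(yi^2 for yi in y)
    function ∇f(g, y...)
        for i in 1:length(y)
            g[i] = 2 * y[i]
        end
        return
    end

    register(model, :myfunc_1, n, f, ∇f)
    add_nonlinear_constraint(model, :(myfunc_1($(x...)) >= 1.0))

    optimize!(model)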