# Get gradient of user-defined nonlinear operator in JuMP

Hi,

Is it possible to get the gradient from a user-defined nonlinear operator in JuMP? What I would like to do is pass a JuMP model around and be able to query the analytical gradient from the nonlinear operator.

An MWE would look something like this:

```julia
using JuMP

model = Model()

f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2

function ∇f(g::AbstractArray{T}, x::T...) where {T}
    g[1] = 2.0 * x[1] - x[2]
    g[2] = 2.0 * x[2] - x[1]
    return
end

@operator(model, op_f, 2, f, ∇f)

model[:op_f]  # <- Can I get the gradient from this?

# Or alternatively
@variable(model, x[1:2])
@objective(model, Min, op_f(x...))

objective_function(model)  # <- Can I get the gradient from this?
```

I’ve tried using NLPModels and NLPModelsJuMP, but I’m not able to convert the JuMP model to an NLPModels model because it includes a user-defined operator (and I would prefer to simply use JuMP if that’s possible).

---

Hi @ertam98, welcome to the forum!

Can you give an example of your intended use-case? If you don’t have constraints, I might suggest that you read the “Should you use JuMP?” page of the JuMP documentation.

What would you like the gradient of the objective function with respect to? All variables? Just the variables that appear in the objective? Evaluated at a given point? Or the optimal solution?

Note that the gradient of `op_f` is given by your `∇f` function, so perhaps I don’t fully understand what your question is.
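For example (a minimal sketch reusing the definitions from your MWE), you can call `∇f` directly at any point; it fills the gradient vector in place:

```julia
f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2

function ∇f(g::AbstractArray{T}, x::T...) where {T}
    g[1] = 2.0 * x[1] - x[2]
    g[2] = 2.0 * x[2] - x[1]
    return
end

# Evaluate the gradient of f at the point (1.5, 1.7).
g = zeros(2)
∇f(g, 1.5, 1.7)
g  # ≈ [1.3, 1.9]
```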

Dealing with the user-defined functions is a bit complicated, but this should point you in the right direction:

```julia
julia> using JuMP

julia> f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2
f (generic function with 1 method)

julia> function ∇f(g::AbstractArray{T}, x::T...) where {T}
           g[1] = 2.0 * x[1] - x[2]
           g[2] = 2.0 * x[2] - x[1]
           return
       end
∇f (generic function with 1 method)

julia> begin
           model = Model()
           @operator(model, op_f, 2, f, ∇f)
           @variable(model, x[1:2])
           @objective(model, Min, op_f(x...))
       end
op_f(x[1], x[2])

julia> function build_objective_function_gradient_evaluator(model)
           # Recover the (f, ∇f) tuples of all user-defined operators
           # stored in the model.
           ops = Dict(
               (attr.name, attr.arity) => get_attribute(model, attr)
               for attr in get_attribute(model, MOI.ListOfModelAttributesSet())
               if attr isa MOI.UserDefinedFunction
           )
           # Re-register them in a standalone MOI.Nonlinear model.
           nlp = MOI.Nonlinear.Model()
           for ((name, arity), args) in ops
               MOI.Nonlinear.register_operator(nlp, name, arity, args...)
           end
           MOI.Nonlinear.set_objective(nlp, objective_function(model))
           x = all_variables(model)
           backend = MOI.Nonlinear.SparseReverseMode()
           evaluator = MOI.Nonlinear.Evaluator(nlp, backend, index.(x))
           MOI.initialize(evaluator, [:Grad])
           return evaluator
       end
build_objective_function_gradient_evaluator (generic function with 1 method)

julia> function eval_gradient(evaluator, x::Vector{Float64})
           g = zeros(length(x))
           MOI.eval_objective_gradient(evaluator, g, x)
           return g
       end
eval_gradient (generic function with 1 method)

julia> evaluator = build_objective_function_gradient_evaluator(model);

julia> eval_gradient(evaluator, [1.5, 1.7])
2-element Vector{Float64}:
 1.3
 1.9
```

I did something similar to your example with automatic differentiation, but since the user-defined-gradient route seems to be just as complicated as AD, I figure keeping track of `∇f` myself is easier.
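For reference, the AD variant is nearly identical: if you register the operator with only `f` (no `∇f`), `MOI.Nonlinear` falls back to ForwardDiff to compute its derivatives, so the evaluator still supports `:Grad`. A minimal sketch, skipping the dictionary of operators and building the `MOI.Nonlinear` model by hand:

```julia
using JuMP

f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2

model = Model()
@variable(model, x[1:2])
xi = index.(x)

# Register op_f with only f: no hand-written gradient, so MOI.Nonlinear
# differentiates it automatically (via ForwardDiff).
nlp = MOI.Nonlinear.Model()
MOI.Nonlinear.register_operator(nlp, :op_f, 2, f)
MOI.Nonlinear.set_objective(nlp, :(op_f($(xi[1]), $(xi[2]))))

evaluator = MOI.Nonlinear.Evaluator(nlp, MOI.Nonlinear.SparseReverseMode(), xi)
MOI.initialize(evaluator, [:Grad])

g = zeros(2)
MOI.eval_objective_gradient(evaluator, g, [1.5, 1.7])
g  # ≈ [1.3, 1.9], matching ∇f
```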