Hi @ertam98, welcome to the forum

Can you give an example of your intended use-case? If you don’t have constraints, I might suggest that you read Should you use JuMP? · JuMP.

What would you like the gradient of the objective function with respect to? All variables? Just the variables that appear in the objective? Evaluated at a given point? Or the optimal solution?

Note that the gradient of `:op_f` is given by your `∇f` function, so perhaps I don’t fully understand what your question is.
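As a quick sketch (reusing the `f` and `∇f` definitions from the code below), you can evaluate that gradient directly at any point by passing in a buffer for `∇f` to fill:

```
# Evaluate the user-provided gradient of f at (1.5, 1.7) without JuMP.
f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2

function ∇f(g::AbstractArray{T}, x::T...) where {T}
    g[1] = 2.0 * x[1] - x[2]
    g[2] = 2.0 * x[2] - x[1]
    return
end

g = zeros(2)     # buffer that ∇f fills in-place
∇f(g, 1.5, 1.7)
# g now holds the gradient [2*1.5 - 1.7, 2*1.7 - 1.5] = [1.3, 1.9]
```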

Dealing with the user-defined functions is a bit complicated, but this should point you in the right direction:

```
julia> using JuMP

julia> f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2
f (generic function with 1 method)

julia> function ∇f(g::AbstractArray{T}, x::T...) where {T}
           g[1] = 2.0 * x[1] - x[2]
           g[2] = 2.0 * x[2] - x[1]
           return
       end
∇f (generic function with 1 method)

julia> begin
           model = Model()
           @operator(model, op_f, 2, f, ∇f)
           @variable(model, x[1:2])
           @objective(model, Min, op_f(x...))
       end
op_f(x[1], x[2])

julia> function build_objective_function_gradient_evaluator(model)
           # Collect the user-defined operators registered with the model.
           ops = Dict(
               (attr.name, attr.arity) => get_attribute(model, attr)
               for attr in get_attribute(model, MOI.ListOfModelAttributesSet())
               if attr isa MOI.UserDefinedFunction
           )
           # Re-register them with a standalone MOI.Nonlinear model.
           nlp = MOI.Nonlinear.Model()
           for ((name, arity), args) in ops
               MOI.Nonlinear.register_operator(nlp, name, arity, args...)
           end
           MOI.Nonlinear.set_objective(nlp, objective_function(model))
           x = all_variables(model)
           backend = MOI.Nonlinear.SparseReverseMode()
           evaluator = MOI.Nonlinear.Evaluator(nlp, backend, index.(x))
           MOI.initialize(evaluator, [:Grad])
           function eval_gradient(x_sol)
               grad = fill(NaN, length(x_sol))
               MOI.eval_objective_gradient(evaluator, grad, x_sol)
               return grad
           end
       end
build_objective_function_gradient_evaluator (generic function with 1 method)

julia> eval_gradient = build_objective_function_gradient_evaluator(model)
eval_gradient (generic function with 1 method)

julia> eval_gradient([1.5, 1.7])
2-element Vector{Float64}:
 1.3
 1.9
```
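If you want a sanity check that the evaluator agrees with the analytic gradient, a central finite-difference sketch (reusing `f` and the point `[1.5, 1.7]` from above; the reference values `[1.3, 1.9]` come from the REPL output) is:

```
# Central finite-difference approximation of the gradient of f at x0,
# as a cross-check against eval_gradient([1.5, 1.7]).
f(x...) = x[1]^2 - x[1]*x[2] + x[2]^2

x0 = [1.5, 1.7]
h = 1e-6
fd = map(1:2) do i
    e = zeros(2)
    e[i] = h
    (f((x0 .+ e)...) - f((x0 .- e)...)) / (2h)
end
# fd ≈ [1.3, 1.9], matching the evaluator's output above
```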

See also