How to define vector-valued @operator in JuMP

Hi,

I am trying to solve a nonlinear model using JuMP. Some of the constraints involve the gradient of a nonlinear function, which I compute using ForwardDiff. As such, I need to define an operator and then use it in a constraint (following the post "@constraint with ForwardDiff.gradient").

But I am aware that @operator only allows scalar-valued outputs. I saw in the documentation (User-defined operators with vector outputs · JuMP) that we need to define one operator per component of the vector, e.g.:

@operator(model, op_f_1, 2, f[1])
@operator(model, op_f_2, 2, f[2])
@operator(model, op_f_3, 2, f[3])

Is there a way to define these in a loop, or in some other way, so I do not need to write an explicit @operator line per component? I have a relatively large model, so I wonder if there is a better way than copy-pasting and changing the index.

Thanks!

You can add a user-defined operator without the macro: Nonlinear Modeling · JuMP
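For example, here is a minimal sketch (the vector-valued f and the dimensions are placeholders, not your model) that builds one scalar operator per component in a loop with add_nonlinear_operator:

using JuMP
model = Model()
@variable(model, x[1:2])
# Placeholder vector-valued function with 3 output components
f(x) = [x[1]^2, x[1] * x[2], x[2]^2]
ops = [
    add_nonlinear_operator(model, 2, (y...) -> f(collect(y))[i]; name = Symbol("op_f_$i"))
    for i in 1:3
]

Note that each component operator re-evaluates the full vector f; the documentation page linked above discusses memoization to avoid that repeated cost.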

But there’s probably a better way of doing what you’re trying to achieve.

Can you share a minimal working example of the constraint? Where does the gradient appear?

I assume you want to write something like:

@constraint(model, ∇f(x)' * d == 0)

Instead of creating an operator for ∇f, do something like:

my_operator(x...) = ∇f(collect(x))' * d
@operator(model, op_my_operator, n, my_operator)
@constraint(model, op_my_operator(x...) == 0)

Good idea! Here is a MWE of what I intend to do.

I want to solve

\max_{y_1, y_2} \ y_1 \quad \text{s.t.} \quad \nabla f(y_1, y_2) = (0.5\,y_1 y_2, \ 0.2\,y_2^2),

where f(y_1, y_2) = y_1^2 + 2y_1y_2 + y_2.
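For reference, the gradient works out to

\nabla f(y_1, y_2) = (2y_1 + 2y_2, \ 2y_1 + 1),

so the two constraints read 2y_1 + 2y_2 = 0.5\,y_1 y_2 and 2y_1 + 1 = 0.2\,y_2^2.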

I can code it up as follows:


using KNITRO, JuMP, ForwardDiff

# My multivariate function
f(x) = x[1]^2 + 2*x[1]*x[2] + x[2]

# Gradient via ForwardDiff
my_gradient(x) = ForwardDiff.gradient(f, x)

# JuMP optimization
model = Model(KNITRO.Optimizer)
@variable(model, y[1:2])
@operator(model, f1, 2, (x...) -> my_gradient(collect(x))[1])
@operator(model, f2, 2, (x...) -> my_gradient(collect(x))[2])

@constraint(model, f1(y...) == 0.5 * y[1] * y[2])
@constraint(model, f2(y...) == 0.2 * y[2]^2)

@objective(model, Max, y[1])
optimize!(model)
objective_value(model)

In terms of the above MWE, my question is that I would love not to have to write f1, f2, f3, etc., separately, since I want to set the gradient (a vector) equal to another vector-valued function.

Let me know if there is anything unclear above. Thanks!!

I think you actually want to do something like this:

using JuMP
import ForwardDiff
import Ipopt

f(x) = x[1]^2 + 2*x[1]*x[2] + x[2]

function partial(f::Function, i::Int)
    function (x::Real...)
        y = collect(Real, x)
        return ForwardDiff.derivative(z -> (y[i] = z; f(y)), x[i])
    end
end
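
# Sanity check: for this f, the partial derivatives are 2x[1] + 2x[2]
# and 2x[1] + 1, so at (1.0, 2.0) they evaluate to 6.0 and 3.0.
@assert partial(f, 1)(1.0, 2.0) ≈ 6.0
@assert partial(f, 2)(1.0, 2.0) ≈ 3.0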

model = Model(Ipopt.Optimizer)
@variable(model, y[1:2])
rhs = @expression(model, [0.5 * y[1] * y[2], 0.2 * y[2]^2])
for i in 1:2
    name_i = Symbol("op_my_gradient_$i")
    op = add_nonlinear_operator(model, 2, partial(f, i); name = name_i)
    @constraint(model, op(y...) == rhs[i])
end
@objective(model, Max, y[1])
optimize!(model)
@assert is_solved_and_feasible(model)
value.(y)

I see, this is really great, thanks!!
