The problem depends on a given parameter r \in [0,1], which governs the nonlinear constraint.

The objective function is bounded, n is on the order of a few dozen, I have a heuristic that provides a starting point, and I'm happy with a local optimum, so I expect this to be a tractable problem.

In the JuMP docs there is an entropy-maximization example, but it uses a linear constraint of the form A p \leq b. I haven't managed to adapt it to the nonlinear constraint with NLopt. Before I dig deeper, I wanted to ask: is this type of problem even possible in JuMP? Thank you!

Thanks for the answer. Can I use the exponential-cone transformation from the entropy-maximization example (mentioned above) with the nonlinear constraint, or does it only apply to conic programming?

NLopt doesn't support the exponential cone, so you'll have to write the entropy objective as a nonlinear function instead.
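For reference, the transformation in the entropy-maximization example works by introducing auxiliary variables t_{ij} and maximizing \sum_{i,j} t_{ij} subject to

t_{ij} \leq -p_{ij} \log p_{ij} \quad \Longleftrightarrow \quad (t_{ij}, p_{ij}, 1) \in K_{\exp}, \qquad K_{\exp} = \{(u, v, w) : v e^{u/v} \leq w,\; v > 0\},

which is the cone behind `MOI.ExponentialCone()`. That route requires a conic solver with exponential-cone support, so it doesn't help with NLopt; the nonlinear formulation below does.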

Just write the JuMP model exactly as you formulated above. Perhaps:

using JuMP, Ipopt

N = 3
r = 0.5  # example value; replace with your given parameter r in [0, 1]
model = Model(Ipopt.Optimizer)
# Symmetric matrix of probabilities, started at the uniform distribution
@variable(model, 0 <= p[1:N, 1:N] <= 1, Symmetric, start = 1 / N)
# Entropy objective; the interior start keeps log(p) away from the p = 0 boundary
@NLobjective(model, Max, -sum(p[i, j] * log(p[i, j]) for i in 1:N, j in 1:N))
@constraint(model, sum(p) == 1)
# Sum of squared row sums, reused on both sides of the nonlinear constraint
nl_expr = @NLexpression(model, sum(sum(p[i, j] for j in 1:N)^2 for i in 1:N))
@NLconstraint(model, sum(p[i, i] for i in 1:N) - nl_expr == r * (1 - nl_expr))
optimize!(model)
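After `optimize!`, it's worth confirming that Ipopt actually found a local optimum before using the result. A minimal check, assuming the `model` and `p` from the snippet above:

```julia
using JuMP

# LOCALLY_SOLVED is the expected status for Ipopt on a nonconvex problem
@assert termination_status(model) == LOCALLY_SOLVED
println("objective (entropy) = ", objective_value(model))
println("p = ", value.(p))  # the optimized symmetric matrix
```

Note that Ipopt only guarantees a local optimum here, so starting it from your heuristic's solution (via the `start` keyword or `set_start_value`) is a good idea.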