Entropy maximization with nonlinear constraint

I’m trying to solve the following entropy maximization problem:

\max_p -\sum_{ij} p_{ij} \log p_{ij}


subject to

p \in \mathbb{R}^{n \times n} \\
p_{ij} = p_{ji} \\
0 \leq p_{ij} \leq 1 \quad \forall (i,j) \\
\sum_{ij} p_{ij} = 1 \\
\frac{\sum_i p_{ii} - \sum_i \left( \sum_j p_{ij} \right)^2}{1 - \sum_i \left( \sum_j p_{ij} \right)^2} = r

The problem depends on the given parameter r \in [0,1] which governs the nonlinear constraint.
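For readability, write S = \sum_i \left( \sum_j p_{ij} \right)^2 for the sum of squared row sums. The ratio constraint can then be stated equivalently without division (assuming S \neq 1):

\sum_i p_{ii} - S = r \, (1 - S)

This cleared form is friendlier for a solver, since it avoids the singularity at S = 1.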

The objective function is bounded, n is on the order of a few dozen, I have a heuristic that provides a starting point, and I’m happy with a local optimum, so I expect the problem to be tractable.

In the JuMP docs there is an example of entropy maximization, but it uses a linear constraint of the form A p \leq b. I haven’t succeeded in adapting it to the nonlinear constraint with NLopt. Before I dig in deeper, I wanted to ask: is this type of problem even possible in JuMP? Thank you!

is this type of problem even possible in JuMP?

Yes, see Nonlinear Modeling · JuMP

Thanks for the answer. Can I still use the exponential cone transformation from the entropy maximization example mentioned above together with the nonlinear constraint, or does that transformation apply only to conic programming?

NLopt doesn’t support the entropic cone, so you’ll have to write that as a nonlinear constraint.
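For reference, the entropy objective alone does fit the conic route: t \leq -p \log p is equivalent to (t, p, 1) lying in the exponential cone, which JuMP exposes as MOI.ExponentialCone. A minimal sketch, assuming a conic solver such as SCS and leaving out the ratio constraint (which is exactly what the conic form cannot express):

```julia
using JuMP, SCS  # SCS is an assumption here; any exponential-cone solver works

N = 3
model = Model(SCS.Optimizer)
set_silent(model)
@variable(model, 0 <= p[1:N, 1:N] <= 1)
@variable(model, t[1:N, 1:N])  # t[i, j] will hold -p[i, j] * log(p[i, j])
@constraint(model, sum(p) == 1)
# t <= -p * log(p)  ⟺  (t, p, 1) ∈ {(x, y, z) : y * exp(x / y) <= z, y >= 0}
for i in 1:N, j in 1:N
    @constraint(model, [t[i, j], p[i, j], 1] in MOI.ExponentialCone())
end
@objective(model, Max, sum(t))
optimize!(model)
```

With no further constraints the maximizer is uniform, p ≈ 1 / N^2, with objective ≈ log(N^2). The ratio constraint from the original post is neither linear nor conic, so this route stops here; that is why a general nonlinear solver is needed for the full problem.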

Just write the JuMP model exactly as you formulated above. Perhaps:

using JuMP, Ipopt
N = 3
r = 0.5  # example value; r ∈ [0, 1] is the given parameter
model = Model(Ipopt.Optimizer)
@variable(model, 0 <= p[1:N, 1:N] <= 1, Symmetric, start = 1 / N)
@NLobjective(model, Max, -sum(p[i, j] * log(p[i, j]) for i in 1:N, j in 1:N))
@constraint(model, sum(p) == 1)
# S = Σ_i (Σ_j p_ij)^2, built once and reused on both sides of the constraint
nl_expr = @NLexpression(model, sum(sum(p[i, j] for j in 1:N)^2 for i in 1:N))
# the ratio constraint with the denominator cleared, avoiding division
@NLconstraint(model, sum(p[i, i] for i in 1:N) - nl_expr == r * (1 - nl_expr))
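The same model, made self-contained and extended with a solve and a consistency check; r = 0.3 here is purely illustrative, not from the original post:

```julia
using JuMP, Ipopt

N = 3
r = 0.3  # illustrative value of the given parameter

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, 0 <= p[1:N, 1:N] <= 1, Symmetric, start = 1 / N)
@NLobjective(model, Max, -sum(p[i, j] * log(p[i, j]) for i in 1:N, j in 1:N))
@constraint(model, sum(p) == 1)
nl_expr = @NLexpression(model, sum(sum(p[i, j] for j in 1:N)^2 for i in 1:N))
@NLconstraint(model, sum(p[i, i] for i in 1:N) - nl_expr == r * (1 - nl_expr))
optimize!(model)

# Recompute r from the solution using the original ratio definition
P = value.(p)
S = sum(sum(P[i, :])^2 for i in 1:N)
r_achieved = (sum(P[i, i] for i in 1:N) - S) / (1 - S)
```

If Ipopt reports a locally optimal solution, `r_achieved` should match `r` to within the solver's constraint tolerance.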

Oh my. Thank you, kind stranger :grinning:
