Help: Maximum entropy distributions with inequality constraints

Hi, newbie question: I have been using the basic Jaynes maximum entropy principle for choosing a probability distribution P subject to equality constraints

$$\sum_{i=1}^{n} P(x_i)\, f_k(x_i) = F_k \qquad k = 1, \ldots, m.$$
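
For the equality case I have been using the usual exponential-family solution (a sketch; here the $\lambda_k$ are the Lagrange multipliers and $Z$ is the normalization constant, up to a sign convention on the $\lambda_k$):

$$P(x_i) = \frac{1}{Z(\lambda)} \exp\!\left(\sum_{k=1}^{m} \lambda_k f_k(x_i)\right), \qquad Z(\lambda) = \sum_{i=1}^{n} \exp\!\left(\sum_{k=1}^{m} \lambda_k f_k(x_i)\right).$$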

Now I need to generalize to inequality constraints.

$$\sum_{i=1}^{n} P(x_i)\, f_k(x_i) \geq F_k \qquad k = 1, \ldots, m.$$

I found this on Wikipedia:

In the case of inequality constraints, the Lagrange multipliers are determined from the solution of a convex optimization program with linear constraints. In both cases, there is no closed form solution, and the computation of the Lagrange multipliers usually requires numerical methods.

My current knowledge of the subject isn't enough for me to find a source that explains clearly how to do this.
I have checked a few (including the references of the Wikipedia article), but everything is buried in the specific subject or application being solved.

It would be a great help if somebody could point me to some basic material that explains how to state the convex optimization program mentioned in the Wikipedia article, or to anything else along those lines.

Thanks in advance :slight_smile:


Here’s one way you could address this kind of problem in the discrete case:

using Convex, COSMO
m = 10 # number of constraints
n = 5 # size of the probability distribution
f = [ rand() for i = 1:n, k = 1:m ] # f[i,k] represents fₖ(xᵢ)
F = [ .01*rand() for k = 1:m ] # F[k] represents Fₖ
p = Variable(n) # the probability distribution
add_constraint!(p, p >= 0) # probability distributions must have non-negative entries
add_constraint!(p, sum(p) == 1) # total probability is 1
problem = maximize(entropy(p)) # the optimization problem
add_constraints!(problem, [ sum(p[i]*f[i,k] for i = 1:n) >= F[k] for k = 1:m ]) # the constraints

solve!(problem, COSMO.Optimizer) # solve the problem numerically
p_optimal = evaluate(p) # the distribution with maximum entropy subject to the constraints
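
For reference, the convex program that this code encodes is (a sketch, writing $p_i$ for $P(x_i)$):

$$\begin{aligned} \max_{p \in \mathbb{R}^n} \quad & -\sum_{i=1}^{n} p_i \log p_i \\ \text{subject to} \quad & p_i \geq 0, \quad i = 1, \ldots, n, \\ & \sum_{i=1}^{n} p_i = 1, \\ & \sum_{i=1}^{n} p_i f_k(x_i) \geq F_k, \qquad k = 1, \ldots, m. \end{aligned}$$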

This uses Convex.jl, which provides a modelling language to formulate the problem, and COSMO.jl, which is the optimizer that solves the optimization problem.
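
If you want to check that the solver actually found a solution before using p_optimal, you can inspect the problem's termination status and optimal value after the solve! call, roughly like this:

println(problem.status) # termination status reported by the solver, e.g. OPTIMAL
println(problem.optval) # the maximum entropy value subject to the constraints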

Note also that this problem can be written more compactly by writing the constraints as a matrix-vector product as is done in this example: Entropy Maximization · Convex.jl.
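
For example, reusing f, F, and n from the snippet above, the whole problem could be stated in one go, roughly like this (the constraint f' * p >= F is interpreted elementwise):

p = Variable(n) # the probability distribution
problem = maximize(entropy(p), p >= 0, sum(p) == 1, f' * p >= F) # f' * p is the vector of expectations of the fₖ
solve!(problem, COSMO.Optimizer)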

There are other optimizers you can try as well, e.g.

using Hypatia, ECOS
solve!(problem, ECOS.Optimizer)
solve!(problem, Hypatia.Optimizer)

Hi @ericphanson, thanks for the quick reply. Do you know of any source where I can learn how to solve the continuous case?

I saw that Wikipedia links to https://espace.library.uq.edu.au/data/UQ_200564/UQ200564_preprint.pdf, which has some formulations, but I don’t know of other sources, sorry.
