Turning a convex hull into a set of constraints

I would like to turn the output of a convex hull into a set of constraints. I’m sure I could figure this out in a reasonable amount of time, but I was wondering if anyone had done this already. Otherwise I will post my solution here eventually.

Take a look at Polyhedra.jl: what you need is a conversion from the V-representation (V for vertices) to the H-representation (H for hyperplanes).


Ah thanks, I thought it was probably something to do with that, but the word “hyperplane” threw me.

So from looking at the documentation I was able to get this MWE:

using LazySets, Polyhedra, CDDLib
using JuMP, Gurobi, Plots

# 100 random points in 2-D and their convex hull (a list of vertices)
v = [rand(2) for i in 1:100]
hull = convex_hull(v; backend=CDDLib.Library())

# build a polyhedron from the V-representation of the hull vertices
p = vrep(hull)
# p = removevredundancy(p, Gurobi.Optimizer)
p = polyhedron(p)

# minimise x₁ + x₂ over the hull
m = Model(Gurobi.Optimizer)
x = @variable(m, [1:fulldim(p)])
@constraint(m, x in p)
@objective(m, Min, sum(x))
optimize!(m)

Plots.scatter([Singleton(vi) for vi in v])
Plots.plot!(p, alpha=0.2, lab="Feasible set")
Plots.scatter!(Singleton(value.(x)), lab="Optimum", ms=10, mc=:red)

It also works in higher (>2) dimensions, but I can’t plot that. I’m a bit hesitant to use such an “easy” solution since I don’t quite understand how the constraints are generated, but hey… if it works, it works!


Beware that the conversion between the two representations has exponential worst-case complexity, so you shouldn’t use it in high dimensions.

Out of interest, where is the conversion taking place in my code? When I write my constraint?

If you have N points x^{(1)}, ..., x^{(N)} \in \mathbb{R}^{d} (the dimension is d), and define X = \{x^{(1)}, ..., x^{(N)}\}, then you have two ways of representing the convex hull conv(X).

[Note: if you already know what V/H representations are, skip the two items below]

  1. The \mathcal{V}-representation writes conv(X) as, quite literally, the set of convex combinations of the points in X.
    Mathematically, this reads

    conv(X) = \left\{ x \in \mathbb{R}^{d} \ \middle| \ x = \sum_{i=1}^{N} \lambda_{i} x^{(i)}, \ \lambda \geq 0, \ e^{T}\lambda = 1 \right\}.

    You can formulate this in an optimization problem using N additional non-negative variables (\lambda) and a so-called convexity constraint \sum_{i} \lambda_{i} = 1.
    Then, add the constraint x = \sum_{i} \lambda_{i} x^{(i)}. Note that the x^{(i)} are part of your input, i.e., they are not decision variables, so the former constraint is linear (see the JuMP sketch after this list).

    FYI, this formulation forms the basis of Dantzig-Wolfe decomposition.

  2. \mathcal{H}-representation defines conv(X) through a finite number of inequalities.
    Mathematically, you get

    conv(X) = \left\{ x \in \mathbb{R}^{d} \ \middle| \ A x \geq b \right\},

    where A, b have to be computed from the initial set of points x^{(1)}, ..., x^{(N)}.
    Computing these A, b is what the Polyhedra package does for you.
    Once you have them, you can write the above constraint in the problem.
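
To make option 1 concrete, here is a minimal JuMP sketch of the \mathcal{V}-representation formulation (the data X is random and the choice of Gurobi is just for illustration; any LP solver would do):

using JuMP, Gurobi

# illustrative data: the columns of X are the points x^{(1)}, ..., x^{(N)}
d, N = 2, 100
X = rand(d, N)

m = Model(Gurobi.Optimizer)
@variable(m, x[1:d])
@variable(m, λ[1:N] >= 0)        # one weight per point
@constraint(m, sum(λ) == 1)      # convexity constraint e'λ = 1
@constraint(m, x .== X * λ)      # x is a convex combination of the points
@objective(m, Min, sum(x))
optimize!(m)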


So, what is the most efficient route?
My general rule of thumb is the following:

  • If d << N, then use the \mathcal{H}-representation.
  • If d >> N, use the \mathcal{V}-representation.
  • If you don’t know, try the \mathcal{V}-representation (it’s pretty straightforward to write and requires no external libraries).

Factors to consider:

  • As mentioned by gdalle, in general, the size of A, b may be exponentially larger than N.
    For instance, take the \ell_{1} unit ball in dimension d: it has exactly 2d extreme points, namely +e_{j} and -e_{j} (the vectors of the canonical basis and their opposites), but an exponential number of inequalities. Indeed, you can verify that all constraints of the form \sum_{j} \epsilon_{j} x_{j} \leq 1 for \epsilon \in \{-1, +1\}^{d} define facets of the convex hull (see the small sketch after this list).
  • In addition, A may be dense (lots of non-zeros), which would likely cause memory issues, and may be numerically ill-conditioned (a mix of very small and very large entries), which solvers don’t like.
  • If you use the \mathcal{V}-representation and the x^{(i)} are dense vectors, you may also run into memory issues, but likely not as quickly as with the \mathcal{H}-approach.
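
To see the blow-up in a small case, here is a sketch (assuming CDDLib as the Polyhedra backend) that builds the \ell_{1} ball from its 2d vertices and counts the inequalities produced by the conversion:

using Polyhedra, CDDLib

d = 4
# the 2d vertices ±e_j of the ℓ1 unit ball (cross-polytope)
pts = [[i == j ? s : 0.0 for i in 1:d] for j in 1:d for s in (-1.0, 1.0)]

p = polyhedron(vrep(pts), CDDLib.Library())
npoints(p)       # 2d = 8 points in the V-representation
nhalfspaces(p)   # computing the H-representation yields 2^d = 16 facet inequalities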

I think this happens when you call

p = polyhedron(p)
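
If you want to pin down exactly when it happens, a quick check (just a sketch, assuming Polyhedra’s hrepiscomputed and hrep are available) could be:

using Polyhedra, CDDLib

p = polyhedron(vrep([rand(2) for _ in 1:100]), CDDLib.Library())
hrepiscomputed(p)   # has the H-representation been computed yet?
hrep(p)             # asking for it forces the (potentially expensive) conversion
hrepiscomputed(p)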

Wow, thanks for the very complete answer, especially regarding the computational issues. As it goes, my convex hull is made up of 1000 points in a 46-dimensional space, so I imagine the computational considerations will be important. I will see how I get on…

What is it exactly you want to do? There may be mathematical programming techniques that circumvent the need for an explicit set of constraints.

My specific use case is for re-dispatch constraints in a unit commitment (UC) model. Typically a stochastic unit commitment model would have scenarios which define imbalances on a node (bus).

However, I have a particularly quirky UC model for which the imbalances are defined at the level of the entire network. I would nevertheless like to include network constraints in my re-dispatch constraints, but that means I need to go from the aggregate imbalance (D) to the nodal imbalances d_n, with \sum_n d_n = D (there are N = 46 nodes n in my network).

The nodal imbalances are free variables in my model, so the constraints are quite weak - I’m basically asking my optimiser to make sure that there exists some set of imbalances that sum up to D. However, I have 1000 scenarios of these imbalances which I can use to further constrain d_n.

The simplest thing to do would be to see what the maximum imbalance is for those 1000 scenarios and constrain d_n accordingly, e.g. d_n^{min} \leq d_n \leq d_n^{max}. I’ve already done this.

However, I could go a step further and write constraints that say that the variable \vec{d} = [d_1, ..., d_N] must lie in the convex hull of the scenario imbalances. This exploits the fact that the imbalances can be strongly correlated with one another, e.g.:

[attached image: MicrosoftTeams-image]

I don’t have to do this; I just thought it would be cool to try it and see if it improves my solution.

I think you can write these constraints in JuMP by using a convex combination with weights \lambda^s for each scenario s, instead of explicitly computing the convex hull.
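
Something along these lines, maybe (just a sketch; here scenarios is a hypothetical N × S matrix whose column s holds the nodal imbalances of scenario s):

using JuMP, Gurobi

N, S = 46, 1000
scenarios = rand(N, S)                   # placeholder for your 1000 historical imbalance scenarios

model = Model(Gurobi.Optimizer)
@variable(model, d[1:N])                 # nodal imbalances
@variable(model, λ[1:S] >= 0)            # one weight per scenario
@constraint(model, sum(λ) == 1)          # convexity constraint
@constraint(model, d .== scenarios * λ)  # d lies in the convex hull of the scenarios
# ... the rest of the UC model (objective, other constraints) goes here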

IMO, it also depends on how often you want to do this. If you solve your UC (significantly) more often than you compute new scenarios, it might make sense to compute the convex hull once (as a scenario reduction technique) and then use only the points of the V-representation, as @gdalle explained.

But I still need to compute the convex hull in order to use the V-representation, no?

In any case, I’m able to write this constraint using code similar to what I posted above, so I’m happy (even though my problem is now infeasible, but that’s another story…).