Simplifying LP for repeated resolving

Hi all,
I’m working on a problem that requires repeated resolving of a large LP with different objectives. The methodology I’m working on involves solving a first problem to optimality, then imposing an epsilon constraint before repeatedly reoptimizing with varied objectives to explore the near-optimal feasible space. I thought one way to simplify the problem might be to remove the redundant constraints that do not intersect the new epsilon-constrained feasible region, but I haven’t found a good way to ask a solver to look for redundant constraints yet.

Do you all have any suggestions as to packages or functions that might be useful?

Not sure it helps, but the epsilon-constrained feasible region is itself a polytope P_\varepsilon = \{x \in P: c^\top x \leq c^\top x^\star + \varepsilon\}
And deciding whether a constraint a^\top x \leq b intersects P_\varepsilon is as hard as solving the linear program \max_{x \in P_\varepsilon} a^\top x (and comparing the optimal value with b).
Of course you can take advantage of the previous optimal solution x^\star as a starting point within P_\varepsilon, but eliminating redundant constraints manually like this may not be a piece of cake.
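
For what it’s worth, here is a minimal sketch of that redundancy check in JuMP. Everything here is illustrative (the data A, b, c, the optimal value zstar = c^\top x^\star, the helper name is_redundant, and the assumption that P = \{x \geq 0 : Ax \leq b\}): constraint k is redundant for P_\varepsilon exactly when maximizing a_k^\top x over P_\varepsilon without it gives a value no greater than b_k.

using JuMP, HiGHS

function is_redundant(A, b, c, zstar, ε, k)
    m, n = size(A)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:n] >= 0)
    # All original constraints except row k ...
    @constraint(model, [i in 1:m; i != k], A[i, :]' * x <= b[i])
    # ... plus the epsilon constraint that defines P_ε
    @constraint(model, c' * x <= zstar + ε)
    @objective(model, Max, A[k, :]' * x)
    optimize!(model)
    # If the relaxed problem is unbounded, constraint k is certainly
    # needed; otherwise compare the optimum against b[k].
    termination_status(model) == MOI.OPTIMAL || return false
    return objective_value(model) <= b[k]
end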

That being said, I think many solvers include constraint elimination subroutines, so maybe it is possible without getting your hands too dirty?


Thanks for the thoughts! Yes, I’m aware it’s an equally hard problem. I was hoping that passing a presolved/reduced model, with the redundant constraints eliminated, to the solver in the repeated solving stage might offer at least some returns on solution times, given that I’m resolving that problem somewhere in the hundreds to thousands of times. I’ve tried using CPLEX’s presolve, but since the presolve routines eliminate more than just redundant constraints, depending on the objective function in use, it doesn’t necessarily seem to be helpful. Warm-starting with x^\star is an idea I’ve toyed around with as well, but because the goal of the routine is to get maximally different solutions, warm-starting seems somewhat contradictory. Do you think it would still have some benefits despite that?

I guess you could always warm start and include a penalty for proximity to x^\star?

I think the warm-start will be more or less valuable depending on how the objective function is chosen in the problem at hand. I don’t think the penalty is necessary, as we can conceptually rule out certain approaches from warm-starting while endorsing others. Thanks so much for your help! Unfortunate that there doesn’t seem to be a better way to reduce the computational complexity of the problem, though.

The MOA.EpsilonConstraint algorithm in MOA preserves the presolved model in memory, enabling efficient warm-starts. You don’t need to do this manually.
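
A minimal bi-objective sketch (the toy data is illustrative; as far as I know, MOA.EpsilonConstraint supports exactly two objectives):

using JuMP, HiGHS
import MultiObjectiveAlgorithms as MOA
model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_attribute(model, MOA.Algorithm(), MOA.EpsilonConstraint())
set_silent(model)
@variable(model, 0 <= x[1:2] <= 1)
@constraint(model, x[1] + x[2] <= 1.5)
# Two competing objectives; MOA returns one result per Pareto point.
@objective(model, Max, [x[1], x[2] - x[1]])
optimize!(model)
for i in 1:result_count(model)
    println(value.(x; result = i))
end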

Thanks! Very cool that this is implemented. It’s not a perfect fit for my problem, as it is limited to a couple of objectives (as far as I can see), whereas I am looking at somewhere between several hundred and several thousand objectives. Do you have any tips on how to preserve the presolved model in memory so that I can do the same in my own code?

Do you have any tips on how to preserve the presolved model in memory so that I can do the same in my own code?

JuMP will automatically keep the model in memory if you use modifications. So this will work:

using JuMP, HiGHS
n, o = 20, 10      # number of variables and number of objectives
weights = rand(n)
C = rand(o, n)     # row i holds the coefficients of objective i
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n] >= 0)
@constraint(model, weights' * x <= n / 3)
@expression(model, c, C * x)
for i in 1:o
    # Optimize objective i, then constrain it to stay within 10% of
    # its optimum before moving on to the next objective.
    @objective(model, Max, c[i])
    optimize!(model)
    @constraint(model, c[i] >= 0.9 * objective_value(model))
end
optimize!(model)
value.(x)

This algorithm is already implemented in MOA:

import MultiObjectiveAlgorithms as MOA
# Re-uses n, o, weights, and C from the example above.
model = Model(() -> MOA.Optimizer(HiGHS.Optimizer))
set_attribute(model, MOA.Algorithm(), MOA.Hierarchical())
for i in 1:o
    # Solve the objectives in priority order, letting each degrade by
    # at most 10% as lower-priority objectives are optimized.
    set_attribute(model, MOA.ObjectiveRelativeTolerance(i), 0.1)
    set_attribute(model, MOA.ObjectivePriority(i), o - i)
end
set_silent(model)
@variable(model, x[1:n] >= 0)
@constraint(model, weights' * x <= n / 3)
@objective(model, Max, C * x)  # a vector-valued objective
optimize!(model)
value.(x)

Are you sure the problem makes sense to solve, and the solutions make sense to interpret? By the time you’ve solved the first few objectives, there is probably very little freedom in how the solution can adapt to the remaining hundreds of objectives.


Great, that’s what I’m already doing. Glad to hear it works as well as it can. Yes, it’s for an application involving very high-dimensional polytopes with a significant feasible space that I’m looking to explore as well as I can. Appreciate the help!


About the objective constraint

I highly recommend this section of the Gurobi docs, especially the last paragraph of the “Allowing Multiple-Objective Degradation” section.
In a nutshell: to prevent degradation of the objective value, instead of adding an objective constraint, it is typically more efficient to use reduced cost information to force non-basic variables (with non-zero reduced cost) to stay at zero.
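
A minimal sketch of that idea in JuMP, on an illustrative toy model (the data and names here are ours, not from the thread). Note that, unlike the 10% slack used elsewhere in this thread, this allows no degradation of the previous objective at all:

using JuMP, HiGHS
n = 20
weights = rand(n)
a = rand(n)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n] >= 0)
@constraint(model, weights' * x <= n / 3)
@objective(model, Max, a' * x)
optimize!(model)
for i in 1:n
    # A variable with a non-zero reduced cost is non-basic at its zero
    # lower bound and must remain at zero in any optimal solution, so
    # it can be fixed before re-solving with the next objective.
    if !iszero(reduced_cost(x[i]))
        fix(x[i], 0.0; force = true)
    end
end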

About presolve

LP solvers use two types of reductions in presolve: primal and dual reductions.

  • Primal reductions only use information from constraints, and keep the feasible set unchanged. For instance, if you have a constraint x = 1, you can remove variable x from the problem.
  • Dual reductions use information from both the constraints and the objective. While they may remove feasible solutions, they are guaranteed to retain at least one optimal solution.
    For instance, suppose you are minimizing, and you have a variable x \geq 0 that appears in no constraint and has an objective coefficient of 1. It is straightforward to show that any optimal solution will have x = 0, so the solver goes ahead and eliminates x. Of course, this reduction is not valid if, say, the objective coefficient had been -1.

In your setting, after you’ve added the objective constraint, you are repeatedly solving an LP with identical constraints but different objectives.

  • To avoid having the solver discard the presolved model at each iteration, you should disable dual reductions at presolve. Not all solvers support this, but Gurobi does (see Gurobi’s docs), and likely other mainstream solvers do too.
  • Make sure to use primal simplex by setting the Method parameter to 0. This ensures that you can re-use the previous solution as a warm-start. (A minimal sketch of both settings follows this list.)
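
For example, both settings from JuMP (parameter names as in Gurobi’s documentation; a sketch of the two parameters only, not of the full workflow):

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
# Disable dual reductions in presolve, so the presolved model stays
# valid when only the objective changes between solves.
set_attribute(model, "DualReductions", 0)
# Use primal simplex, so the previous basis is a valid warm-start.
set_attribute(model, "Method", 0)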

If you look at the log, you should see two things:

  • no presolve after the second solve
  • the number of iterations at the second solve is much smaller

This is complicated to do from JuMP, which is why I didn’t suggest it.

I doubt that there’s much of a problem with the constraint generation approach.

You could always do this, which may be faster for very large problems:

using JuMP, HiGHS
n, o = 20, 10
weights = rand(n)
C = rand(o, n)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x[1:n] >= 0)
@variable(model, y[1:o])
@constraint(model, weights' * x <= n / 3)
@constraint(model, y == C * x)
for i in 1:o
    @objective(model, Max, y[i])
    optimize!(model)
    # Tightening a variable bound avoids adding a new constraint row:
    set_lower_bound(y[i], 0.9 * value(y[i]))
end
optimize!(model)
value.(x)

Thanks for the reply! One difficulty with this, aside from the ones Oscar mentioned, is that the problems in question are actually too large for primal simplex; we typically use interior point methods instead. I’ll definitely keep the primal reductions note in mind, though!

Thanks again for your time. Are there actually significant speed advantages to using set_lower_bound() over simply adding a slack constraint?

You’d have to test that on your problem. The answer is probably “it depends”.