I have an INLP and a linear relaxation; I want to steal someone else's branch-and-bound algorithm

Suppose I have a binary optimization problem of the form

   maximize  f(x)                                         (1)
 subject to  Ax ≤ b
             x binary

Here f(x) is a disgusting, nonlinear, nonconcave black-box function. However, I happen to have a good linear relaxation for f(x), given in the form f̄ · x. The linear relaxation is designed for use in the context of a branch-and-bound routine in which certain x_i have been “fixed” to equal 0 or 1. Therefore, the entries of f̄ are actually a function of O and I, where O is the set of indices i for which x_i is fixed to 0 and I is the set for which x_i is fixed to 1.

It follows that the optimal objective value of the LP

   maximize  f̄(O, I) · x                                   (2)
 subject to  Ax ≤ b
             0 ≤ x ≤ 1
             x_i = 0, ∀i ∈ O
             x_i = 1, ∀i ∈ I

is an upper bound on the maximal value of f(x) on this branch, i.e. subject to the fixed-index constraints.

(Assume that O and I are disjoint, and that if O ∪ I is the entire index set, then f̄(O, I) · x = f(x).)
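To make the setup concrete, here is a toy instance (my own minimal example, not the real problem): f(x) = x₁x₂ with a recipe f̄(O, I) chosen by hand so that it satisfies the properties above.

```julia
# Toy illustration: f(x) = x1 * x2 on binary x, with a hand-made
# recipe f̄(O, I). Minimal example only, not the real problem.
f(x) = x[1] * x[2]

# If x2 is fixed to 0, f vanishes on the branch, so f̄ = [0, 0].
# Otherwise f̄ = [1, 0] works: x1 ≥ x1 * x2 whenever 0 ≤ x2 ≤ 1,
# and once x2 is fixed to 1, f̄ ⋅ x = x1 = f(x) exactly.
f̄(O, I) = (2 in O) ? [0.0, 0.0] : [1.0, 0.0]

# Sanity check: f̄(O, I) ⋅ x ≥ f(x) at every binary x in each branch
for O in ([], [2]), I in ([], [1])
    for x in ([0, 0], [0, 1], [1, 0], [1, 1])
        all(x[i] == 0 for i in O) || continue   # respect the fixings
        all(x[i] == 1 for i in I) || continue
        @assert f̄(O, I)' * x >= f(x)
    end
end
```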

Notational conventions vary quite a bit, but many nonlinear integer programming algorithms, such as various flavors of branch-and-bound, require nothing other than the objective function f(x), the relaxation “recipe” f̄(O, I), and the constraint data as inputs.

Once you have specified the problem and its linear relaxation, you simply pick a node, solve the relaxation, and generate child nodes by fixing a fractional x_i to 0 or to 1. As I understand it, a large part of what makes an INLP solver succeed is

  • picking a good relaxation function (i.e. one that provides a tight upper bound), and
  • using clever heuristics to choose the branch node.

In my case, I have the relaxation function, and I would like to borrow the heuristics from somewhere else. I am wondering if there is anything in the Julia ecosystem that lets me pass my own nonlinear function and linear relaxation to a sophisticated branch-and-bound solver.

The workflow I have in mind is something like this (I will use JuMP syntax, but this is not actually valid code):

using JuMP
using AmazingBranchBound     # e.g. SCIP

model = Model(AmazingBranchBound.Optimizer)

@variable(model, x[1:n], Bin)    # n binary decision variables
@constraint(model, A * x .≤ b)

# For reporting the true objective function at each node
@actual_NLobjective(model, Max, f(x))

# Function that the master solver can use to update the objective for a given node
function f̄_recipe!(O, I)
    f̄ = begin
        # Secret sauce that generates a vector with the
        # linear relaxation properties given above
    end
    
    # Note that sum(f̄ .* x) is an AffExpr
    @linearized_objective(model, Max, sum(f̄ .* x))
end

optimize!(model)

Does my question make sense? Is this a reasonable thing to expect? Is there a simpler way to accomplish my goal?

The closest thing I can think of is Alpine.jl, where I can specify an mip_solver such as Gurobi to solve the subproblems. But in my case, I don’t want to wrap an entire MOI interface around the solution to problem (2). I just want to say, “Here is a recipe for a good linear relaxation; solve it with GLPK and then use Alpine’s native branch-and-bound routines,” where of course GLPK and Alpine could be swapped for other alternatives.


So you want your objective function to change depending on x? No, that’s not possible, and I don’t know a good way of solving that.

If f is nonconcave and black-box, you can probably never hope to find a global optimum. But what you could do is something like:

  • Find a feasible solution
  • Create f̄(O, I) using the feasible solution
  • Solve (1) with the f̄ objective and some trust region constraint sum(x[i] for i in O) + sum(1 - x[i] for i in I) <= N to get a new solution
  • Repeat
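A self-contained toy version of that loop might look like the following. Everything here is a stand-in: f is an arbitrary nonconcave toy objective, the “relaxation” is a crude discrete gradient built around the incumbent, and the trust-region subproblem is brute-forced where a real implementation would call an LP/MIP solver.

```julia
# Sketch of the iterate-with-trust-region idea; all names are placeholders.
function improve(f, x0; N = 1, maxiter = 100)
    n, x = length(x0), copy(x0)
    for _ in 1:maxiter
        # Build f̄ from the incumbent: f̄[i] = f with x_i = 1 minus f with x_i = 0
        f̄ = [f(setindex!(copy(x), 1, i)) - f(setindex!(copy(x), 0, i)) for i in 1:n]
        # Maximize f̄ ⋅ y over binary y within Hamming distance N of x;
        # this is the trust-region constraint above, written with the
        # incumbent's O and I. Brute force stands in for the LP solve.
        best, bestval = x, f̄' * x
        for bits in 0:(2^n - 1)
            y = [(bits >> (i - 1)) & 1 for i in 1:n]
            sum(y .!= x) <= N || continue
            f̄' * y > bestval && ((best, bestval) = (y, f̄' * y))
        end
        f(best) > f(x) || return x    # no true improvement: stop
        x = best                      # accept the step and repeat
    end
    return x
end

f(x) = -(sum(x) - 2)^2 + prod(x .+ 1)    # toy nonconcave objective
improve(f, zeros(Int, 4))
```

With N = 1 this reduces to steepest-ascent single-flip local search; it terminates because f strictly increases at each accepted step over a finite domain.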

Technically yes, but only in a very specific sense. Given a branch-and-bound node (that is, a set of indices for which x is fixed to 0 or 1), I have a way to generate a tight LP relaxation that functions as an upper bound on all solutions in this branch.

This means that you can do all the usual tree operations used in branch-and-bound algorithms: fathom nodes whose UB is worse than the current best feasible solution, select a node to explore based on the value of its UB, etc.

I have already implemented a branch-and-bound algorithm using this f̄ recipe. It is written in native Julia and doesn’t use JuMP or any modeling language. It manually selects the branch node with the highest UB, solves the LP relaxation, branches on the (in this case unique) x_i that takes a fractional value, and repeats.
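Shorn of details, my loop looks roughly like this. Here solve_relaxation is a placeholder for my f̄ recipe plus the LP solve: given fixings (O, I), it returns the node’s upper bound and the relaxed solution. Bound caching and numerical tolerances are elided.

```julia
# Rough shape of the hand-rolled branch-and-bound described above.
function branch_and_bound(f, solve_relaxation, n)
    best_x, best_val = nothing, -Inf
    nodes = [(Int[], Int[])]                     # (O, I); root has no fixings
    while !isempty(nodes)
        # Node selection: explore the node with the highest upper bound
        k = argmax([solve_relaxation(O, I)[1] for (O, I) in nodes])
        O, I = popat!(nodes, k)
        ub, x = solve_relaxation(O, I)
        ub <= best_val && continue               # fathom by bound
        if all(xi == 0 || xi == 1 for xi in x)   # integral ⇒ feasible candidate
            val = f(round.(Int, x))
            val > best_val && ((best_x, best_val) = (round.(Int, x), val))
        end
        free = [i for i in 1:n if i ∉ O && i ∉ I]
        isempty(free) && continue                # leaf: nothing left to fix
        # Branch on a fractional index if there is one, else any free index
        j = free[something(findfirst(i -> 0 < x[i] < 1, free), 1)]
        push!(nodes, (vcat(O, j), I), (O, vcat(I, j)))
    end
    return best_x, best_val
end
```

Note that because f̄(O, I) · x only coincides with f(x) once everything is fixed, an integral relaxed solution updates the incumbent but does not close the node; only the bound test does.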

This works, but not as well as it would if I could steal the branch selection and/or tree management heuristics from a more mature IP library. But it seems like this kind of modularity isn’t built into anything that’s out there, huh?

Check out Bonobo.jl (GitHub: Wikunia/Bonobo.jl, “A general branch and bound framework”) and the example in the docs.


Thank you, this looks like exactly what I am looking for. If I am understanding correctly, I don’t need to use JuMP at all with this, do I? My linear relaxation has a special form that can be solved with a simple algorithm rather than LP.

You shouldn’t need to use JuMP, if I understand correctly. I didn’t write the package and haven’t used it myself, though.
