Suppose I have a binary optimization problem of the form
maximize f(x) (1)
subject to Ax ≤ b
x binary
Here f(x) is a disgusting, nonlinear, nonconcave black-box function. However, I happen to have a good linear relaxation for f(x) that is given in the form f̄ · x. The linear relaxation is designed for use in the context of a branch-and-bound routine in which certain x_i have been “fixed” to equal 0 or 1. Therefore, the entries of f̄ are actually a function of O and I, where O is the set of indices i for which x_i is fixed to 0 and I is the set for which x_i is fixed to 1.
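Purely to pin down the recipe’s interface, here is a degenerate but runnable stand-in: if f happened to be linear, say f(x) = c · x, the coefficients would not depend on O or I at all. (The names `fbar_recipe` and `c` are mine, and the real recipe’s “secret sauce” is exactly the part being elided.)

```julia
# Toy stand-in for the relaxation recipe. In the degenerate case where
# f(x) = c ⋅ x is already linear, f̄(O, I) = c for every (O, I), and the
# "relaxation" is exact. A real recipe would build f̄ from O and I.
const c = [1.0, 2.0, 3.0]  # assumed toy data
fbar_recipe(O::AbstractSet{Int}, I::AbstractSet{Int}) = c
```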
It follows that the optimal value of the LP
maximize f̄(O, I) · x (2)
subject to Ax ≤ b
0 ≤ x ≤ 1
x_i = 0, ∀i ∈ O
x_i = 1, ∀i ∈ I
is an upper bound on the maximal value of f(x) on this branch, i.e. subject to the fixed-index constraints.
(Assume that O and I are disjoint, and that if O ∪ I is the entire index set, then f̄(O, I) · x = f(x).)
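Just for reference, solving (2) at a single node is straightforward to write directly in JuMP; below is a minimal sketch, where the function name and argument layout are my own and GLPK stands in for any LP solver:

```julia
using JuMP, GLPK

# Solve the node LP (2): constraint data (A, b), relaxation coefficients
# fbar = f̄(O, I), and the fixed-index sets O and I.
function solve_node_relaxation(fbar, A, b, O, I)
    n = length(fbar)
    model = Model(GLPK.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:n] <= 1)
    @constraint(model, A * x .<= b)
    foreach(i -> fix(x[i], 0.0; force = true), O)    # x_i = 0 for i in O
    foreach(i -> fix(x[i], 1.0; force = true), I)    # x_i = 1 for i in I
    @objective(model, Max, fbar' * x)
    optimize!(model)
    termination_status(model) == MOI.OPTIMAL || return -Inf, Float64[]  # infeasible node
    return objective_value(model), value.(x)
end
```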
Notational conventions vary quite a bit, but many nonlinear integer programming algorithms, including various flavors of branch-and-bound, require nothing more than the objective function f(x), the relaxation “recipe” f̄(O, I), and the constraint data (A, b) as inputs.
Once you have specified the problem and its linear relaxation, you simply pick a node, solve the relaxation, and generate child nodes by branching on a fractional x_i, fixing it to 0 in one child and to 1 in the other. As I understand it, a large part of what makes a MINLP solver succeed is
- picking a good relaxation function (i.e. one that provides a tight upper bound), and
- using clever heuristics to choose the branching variable and the next node to explore (a bare-bones version of this loop is sketched just below).
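Here is a bare-bones, depth-first version of that loop, composed from the sketches above; everything in it, including the naive “most fractional” branching rule, is a stand-in of my own rather than any particular solver’s method:

```julia
# Bare-bones depth-first branch-and-bound over nodes (O, I), composing
# fbar_recipe and solve_node_relaxation from the sketches above.
function branch_and_bound(f, fbar_recipe, A, b, n)
    best_val, best_x = -Inf, Float64[]
    stack = [(Set{Int}(), Set{Int}())]               # root node: nothing fixed
    while !isempty(stack)
        O, I = pop!(stack)
        bound, xrel = solve_node_relaxation(fbar_recipe(O, I), A, b, O, I)
        bound <= best_val && continue                # prune: bound cannot beat incumbent
        free = setdiff(1:n, O, I)
        if isempty(free)
            # Leaf: every x_i is fixed, so (by the assumption above) the
            # relaxation is exact; evaluate the true black-box objective.
            val = f(xrel)
            if val > best_val
                best_val, best_x = val, xrel
            end
        else
            # Naive "most fractional" branching rule; exactly the kind of
            # heuristic I would rather borrow from a mature solver.
            i = argmax(j -> min(xrel[j], 1 - xrel[j]), free)
            push!(stack, (union(O, [i]), I))         # child with x_i fixed to 0
            push!(stack, (O, union(I, [i])))         # child with x_i fixed to 1
        end
    end
    return best_val, best_x
end
```

Even this toy version shows where the heuristics plug in: the branching rule and the node-selection order (here, plain LIFO) are the two knobs I would like to inherit rather than reinvent.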
In my case, I have the relaxation function, and I would like to borrow the heuristics from somewhere else. I am wondering if there is anything in the Julia ecosystem that lets me pass my own nonlinear function and linear relaxation to a sophisticated branch-and-bound solver.
The workflow I have in mind is something like this (I will use JuMP syntax, but this is not actually valid code):
```julia
using JuMP
using AmazingBranchBound  # hypothetical master solver, e.g. SCIP

model = Model(AmazingBranchBound.Optimizer)
@variable(model, x[1:n], Bin)
@constraint(model, A * x .<= b)

# For reporting the true objective function at each node
@actual_NLobjective(model, Max, f(x))

# Function that the master solver can use to update the objective for a given node
function f̄_recipe!(O, I)
    f̄ = begin
        # Secret sauce that generates a vector with the
        # linear relaxation properties given above
    end
    # Note that sum(f̄ .* x) is an AffExpr
    @linearized_objective(model, Max, sum(f̄ .* x))
end

optimize!(model)
```
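(For what it’s worth, the hand-rolled sketches above do compose end to end; with the toy linear f and recipe from earlier, the answer is checkable by hand.)

```julia
# Toy run of the hand-rolled pieces: maximize c ⋅ x subject to x₁ + x₂ + x₃ ≤ 2.
n = 3
A = [1.0 1.0 1.0]
b = [2.0]
f(x) = sum(c .* x)                          # toy black box (secretly linear)
branch_and_bound(f, fbar_recipe, A, b, n)   # returns (5.0, [0.0, 1.0, 1.0])
```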
Does my question make sense? Is this a reasonable thing to expect? Is there a simpler way to accomplish my goal?
The closest thing I can think of is Alpine.jl, where I can specify an `mip_solver` such as Gurobi to solve the subproblems. But in my case, I don’t want to wrap an entire MOI interface around the solution to problem (2). I just want to say, “Here is a recipe for a good linear relaxation; solve it with GLPK and then use Alpine’s native branch-and-bound routines,” where of course GLPK and Alpine could be swapped for other alternatives.