A Primal-Dual Global Optimization Approach to the Cutting Stock Problem

This doc reads a bit lengthy because it uses complex data structures (maybe for visualization purposes), and its economic interpretation is hard for me to follow.
Therefore, I explicitly write out the Lagrangian dual of the primal MIP, which makes the column generation subprocedure more intelligible.

Comments are welcome.

import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
import LinearAlgebra.I as Identity

w, d, W, I = let # 📘 Data for the Cutting Stock Problem
    w = [75.0, 53.8, 53.0, 51.0, 50.2, 32.2, 30.8, 29.8, 20.1, 16.2, 14.5, 11.0, 8.6, 8.2, 6.6, 5.1]; # length (in meters)
    d = [189, 33, 36, 41, 35, 37, 44, 49, 37, 36, 42, 33, 47, 35, 49, 42]; # demand (in quantities)
    W = 100; # width of a raw roll
    I = length(d); @assert length(w) == I
    w, d, W, I
end;

# 🟣 Part I
# The following is the primary, roll-wise, large-scale deterministic MIP formulation of the Cutting Stock Problem.
# The chief advantage ✅ of this formulation is that it yields a globally optimal solution,
# provided you allow enough computing time.
function deterministic_mip(J) # J is a prescribed (large) number,
    # if the resulting termination status is INFEASIBLE, enlarge J.
    det = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
    JuMP.@variable(det, b[1:J], Bin) # b[j] == 1 ⟺ the j-th roll is used
    JuMP.@variable(det, n[1:I, 1:J] ≥ 0, Int) # n[i, j] is the number of (length == w[i])-subrolls cut from the j-th roll
    JuMP.@constraint(det, sum(n; dims = 2) .≥ d) # meet the demand
    JuMP.@constraint(det, w'n .≤ b'W) # width limit of each raw roll
    JuMP.@objective(det, Min, sum(b)) # number of used raw rolls
    JuMP.optimize!(det); JuMP.assert_is_solved_and_feasible(det; allow_local = false)
    num = JuMP.objective_value(det)
    println("The global optimal value is $(num)")
end
function deterministic_mip(b0, n0) # This is merely a variant of the above formulation in which you supply an initial (warm-start) solution
    J = length(b0) # 🟠
    det = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV))
    JuMP.@variable(det, b[1:J], Bin)
    JuMP.@variable(det, n[1:I, 1:J] ≥ 0, Int)
    JuMP.@constraint(det, sum(n; dims = 2) .≥ d)
    JuMP.@constraint(det, w'n .≤ b'W)
    JuMP.@objective(det, Min, sum(b))
    JuMP.set_start_value.(b, b0) # 🟠
    JuMP.set_start_value.(n, n0) # 🟠
    JuMP.optimize!(det); JuMP.assert_is_solved_and_feasible(det; allow_local = false)
    num = JuMP.objective_value(det)
    println("The global optimal value is $(num)")
end
# Do a blind (the number 500 is just a guess for J) and brute-force (entirely by the MIP solver) solve
deterministic_mip(500); # 🖥️: The global optimal value is 334.0

# 🟣 Part II
# We introduce a pattern perspective of the Cutting Stock Problem and write an iterative algorithm: Column Generation.
# Each pattern is encoded as a column of the matrix `P` below.
# We justify the action of "recruiting a new column" through the lens of a dual cutting-plane method.
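# For reference, here are the pattern-based primal that `outer_mip` builds and the Lagrangian dual that
# `outerDual` builds, written out explicitly (the 🟡 tags in the code mark this correspondence).
# Dualizing the demand constraints of
#
#     min_{n ≥ 0, integer}  Σ_j n[j]    s.t.  Σ_j P[i, j] * n[j] ≥ d[i]  ∀ i      [π[i] ≥ 0]
#
# gives  d'π + Σ_j min_{n[j] ≥ 0} (1 - P[:, j]'π) * n[j],  which stays finite exactly when
# P[:, j]'π ≤ 1 for every column j. Hence the Lagrangian dual problem is
#
#     max_{π ≥ 0}  d'π    s.t.  P[:, j]'π ≤ 1  ∀ j                                [⇐ n[j] ≥ 0]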
P = Identity(I); # ⚠️ These initial columns should make the initial `outer_mip` feasible
COT = 1e-6; # cut-off tolerance---a hyper parameter
function outer_mip(P) # employed to find feasible/suboptimal solutions
    J = size(P, 2)
    outer = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(outer)
    JuMP.@variable(outer, n[1:J] ≥ 0, Int)
    JuMP.@constraint(outer, [i = 1:I], view(P, i, :)'n ≥ d[i]) # [π ≥ 0] 🟡
    JuMP.@objective(outer, Min, sum(n))
    JuMP.optimize!(outer); JuMP.assert_is_solved_and_feasible(outer; allow_local = false)
    return ub, n = JuMP.objective_value(outer), JuMP.value.(n)
end;
# establish in advance (part of) the pricing subproblem
subproblem = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(subproblem);
JuMP.@variable(subproblem, p[1:I] ≥ 0, Int); JuMP.@constraint(subproblem, w'p ≤ W); # a feasible pattern
# establish in advance (part of) the Lagrangian dual problem of the primal `outer_mip` above
outerDual = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(outerDual);
JuMP.@variable(outerDual, π[1:I] ≥ 0); JuMP.@objective(outerDual, Max, d'π);
J = size(P, 2);
JuMP.@constraint(outerDual, [j = 1:J], 1 ≥ view(P, :, j)'π); # [⇐ n ≥ 0] 🟡
while true # the aim of this main loop is to recruit as many advantageous columns as possible,
    # in order to lower the primal objective value of the `outer_mip`.
    # This goal is in turn pursued by attempting to lower the Lagrangian dual bound of `outer_mip`.
    # Hence, a Dual Cutting Plane method (i.e. the `@constraint` below) is employed
    Π, lb = let
        JuMP.optimize!(outerDual); JuMP.assert_is_solved_and_feasible(outerDual; allow_local = false)
        JuMP.value.(π), JuMP.objective_value(outerDual)
    end
    p_new, vio_level = let
        JuMP.@objective(subproblem, Max, Π'p)
        JuMP.optimize!(subproblem); JuMP.assert_is_solved_and_feasible(subproblem; allow_local = false)
        JuMP.value.(p), JuMP.objective_value(subproblem) - 1
    end
    if vio_level > COT # ✅ Note that this `if` is a hallmark of the Cutting Plane method
        @info "At J = $J, (invalid) lb = $lb"
        JuMP.@constraint(outerDual, 1 ≥ p_new'π) # add a new cut to lower the lb of the outer_mip
        global P = [P p_new] # recruit a new advantageous column
        global J = size(P, 2) # update J
    else
        @info "At J = $J, (invalid) lb = $lb\nno more advantageous columns can be found, thus break"
        break
    end
end # during the execution of this loop, we observe that `lb` is indeed iteratively lowered.
# Note that `size(P, 2)` has grown compared with its initial identity state.
# Next we generate a feasible solution (in the pattern formulation) to the primal problem.
# This feasible solution cannot be certified globally optimal, because we do not know
# whether an advantageous column remains unrecruited, and I'm unaware of
# an efficient way to settle that question without branching.
ub, n = outer_mip(P)

# 🟣 Part III
# The feasible solution found in Part II may be suboptimal, but we can hand it to the MIP solver
# as a starting solution, so that we end up with a globally optimal solution.
# To this end, we need to convert it to (b0, n0).
j_vec = findall(n -> n > .5, n) # used columns indices
n, P = n[j_vec], P[:, j_vec] # used part
n = round.(Int, n)
P = round.(Int, P)
n0 = hcat([repeat(view(P, :, j), 1, n[j]) for j in eachindex(n)]...)
b0 = trues(size(n0, 2))
@assert all(sum(n0; dims = 2) .≥ d) # do a demand feasibility check
# you can compare this "prepared" solve with that blind one in Part I
deterministic_mip(b0, n0) # 🖥️: The global optimal value is 334.0

If you think part of the documentation of a package is too lengthy or can be improved, why not open an issue or PR?


For two reasons: perhaps there are suggestions I can gather here first, and my study of this problem is not finished yet; I'll see if I can discover more insights.
The cutting-pattern formulation, although it looks appealing, is not by itself a complete deterministic formulation, so global optimality is not assured. In contrast, the first formulation, despite looking a bit “silly”, is indeed a deterministic MIP that is guaranteed to be solved to global optimality, given enough computing time.

The suboptimal solution derived from column generation can serve as a starting feasible solution fed to the primary roll-wise large-scale deterministic formulation (added in my “Part III”). Therefore we obviate the need for branch-and-price, yet still end up with a globally optimal solution. You can take a look when you have time @odow

Hey,

At each iteration of column generation, you look at the column that has the best reduced cost.

In the original formulation of the tutorial, all rolls are identical. You therefore just need to solve one knapsack at each iteration.

Make sure the master can pick any number of columns (patterns)

Focusing on the subproblem for roll j=1, given \pi_i the dual value of the demand-covering constraint of item i, to generate the column with the most negative reduced cost you optimize:

\min_{x,y} y_1 - \sum_i \pi_i x_{i1} s.t. knapsack constraint

(Penalize the demand-covering constraint violation by \pi in the original formulation and you'll end up with the same problem as above if you focus only on roll j=1.)

Assume y_1 = 1, because a master solution containing 0 patterns is feasible.

It yields: \min_{x} 1 - \sum_i \pi_i x_{i1} s.t. knapsack constraint
If the reduced cost is negative, you keep the pattern because it improves the restricted LP master.

and this is equivalent to

\max_{x} \sum_i \pi_i x_{i1} s.t. knapsack constraint.
If the optimal value is > 1 (i.e. the reduced cost is negative), keep the pattern.
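
A minimal Julia sketch of this check (the helper name `price` is mine, not from the tutorial; it assumes the item widths `w`, the roll width `W`, and a dual vector `Π` are given, and that Gurobi is available):

import JuMP, Gurobi
# Hypothetical helper: solve the knapsack pricing problem for duals Π and return
# an improving pattern, or `nothing` when no column has a negative reduced cost.
function price(Π, w, W)
    knap = JuMP.Model(Gurobi.Optimizer); JuMP.set_silent(knap)
    JuMP.@variable(knap, p[eachindex(w)] ≥ 0, Int)  # copies of each item in the pattern
    JuMP.@constraint(knap, w'p ≤ W)                 # knapsack (roll-width) constraint
    JuMP.@objective(knap, Max, Π'p)                 # maximize Σ_i π_i p_i
    JuMP.optimize!(knap)
    # the reduced cost of the candidate column is 1 - Π'p; keep it only if that is negative
    return JuMP.objective_value(knap) > 1 ? round.(Int, JuMP.value.(p)) : nothing
end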

Hope it helps


To be honest, I never fully got the names “reduced cost” and “shadow price”. I think it should be enough for people to understand what a “dual variable” is; I picture it as a “rate of change” (for a scalar constraint) or a “gradient” (for a vector constraint).
Another point in MIP: the Lagrangian dual bound is always at least as tight as the natural LP relaxation bound.
The JuMP doc takes the “solve the LP relaxation and read off the duals” route. In contrast, I think my explicit Lagrangian dual method is better and can be generalized to more complex problems.
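
For completeness, a standard statement of that relation, sketched in general notation for a minimization MIP \min\{c^\top x : Ax \ge b,\ x \in X\}, where X is the integer set kept in the subproblem after dualizing Ax \ge b:

z_{LP} \;\le\; z_{LD} = \max_{\pi \ge 0} \Big( \pi^\top b + \min_{x \in X} (c - A^\top \pi)^\top x \Big) \;\le\; z_{MIP}

with z_{LP} = z_{LD} whenever conv(X) is already described by its linear relaxation (the integrality property). The knapsack subproblem of the cutting stock problem does not have that property, which is why the Lagrangian (equivalently, pattern-LP) bound can be strictly tighter than the roll-wise LP relaxation.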

Why is a master solution containing 0 patterns feasible?
In my code above, the initial pattern matrix is simply an identity matrix (each demand type gets its own trivial pattern), which saves effort.

Anyway, if we finally reach the same formulation, then both points of view are fine.


In my opinion, there are several places where the JuMP doc can be improved. First, consider this excerpt of its data:

  i   w_i d_i
 ------------
  1  75.0  38
  2  75.0  44
  3  75.0  30
  4  75.0  41
  5  75.0  36
  6  53.8  33

The rows i = 1:5 all have the same width w_i = 75.0 and could be merged into a single item (with the demands summed); otherwise it looks weird.
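
For example, a minimal plain-Julia sketch of that merging, applied to the first six rows above (the `w_raw` / `d_raw` names are mine; note 38 + 44 + 30 + 41 + 36 = 189, which is the first demand in my Part I data):

w_raw = [75.0, 75.0, 75.0, 75.0, 75.0, 53.8]; d_raw = [38, 44, 30, 41, 36, 33]
w = unique(w_raw)                           # distinct widths: [75.0, 53.8]
d = [sum(d_raw[w_raw .== wi]) for wi in w]  # summed demands:  [189, 33]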

Second,

function solve_pricing(data::Data, π::Vector{Float64})
..............
    return nothing
end

The subproblem is enclosed in a function, meaning that at each iteration we rebuild this JuMP model. In my code this does not happen, so that overhead is saved; see the sketch below.
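
A minimal sketch of that reuse pattern, extracted from my code in post #1 (`Π` denotes the current dual vector):

# build the pricing model once, outside the loop
subproblem = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(subproblem)
JuMP.@variable(subproblem, p[1:I] ≥ 0, Int)
JuMP.@constraint(subproblem, w'p ≤ W)
# inside each column-generation iteration, only the objective is replaced:
JuMP.@objective(subproblem, Max, Π'p)
JuMP.optimize!(subproblem)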

Third, I completely replaced the subsection "Choosing the initial set of patterns" with an identity matrix, which makes the exposition shorter and smoother.

Here is code in which I adopt an alternative algorithm for the same problem.
Note that this post is independent of my initial post.
It can also find the global optimum.
Besides, I generated a larger-scale test instance.

import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
w, d, W, I = let # 📘 Data for the Cutting Stock Problem
    w = [59, 61, 62, 63, 65, 66, 67, 70, 72, 74, 77, 78, 79, 81, 83, 84, 86, 88, 89, 91, 92, 103, 104, 105, 106, 119, 120, 122, 125, 131, 133, 134, 136, 142, 144, 145, 148, 151, 154, 157, 159, 161, 166, 167, 168, 170, 174, 175, 176, 178, 180, 181]
    d = [77, 48, 62, 64, 66, 49, 25, 21, 56, 47, 53, 51, 23, 73, 71, 22, 23, 68, 71, 51, 66, 55, 58, 76, 71, 31, 53, 46, 57, 56, 63, 77, 53, 65, 28, 70, 72, 35, 46, 23, 34, 39, 36, 68, 31, 64, 40, 22, 36, 37, 39, 57]
    W = 223; # width of a raw roll
    I = length(d); @assert length(w) == I
    w, d, W, I
end;

J, LagIniBnd = 2000, 2100 # 📗 hyperparameters of the algorithm devised in this file

Mip = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV));
JuMP.@variable(Mip, 0 ≤ b[1:J] ≤ 1, Int);    # consider b[j] ∀ j
JuMP.@variable(Mip, 0 ≤ n[1:I, 1:J], Int);   # consider n[1:I, j] ∀ j
JuMP.@constraint(Mip, [j = 1:J], w'view(n, :, j) ≤ b[j]W); # ∀ j
JuMP.@objective(Mip, Min, sum(b[j] for j in 1:J)); # for each j
JuMP.@constraint(Mip, [i = 1:I], sum(n[i, j] for j in 1:J) ≥ d[i]); # [π ≥ 0] 🟡
# 💡: you can try a direct solve at this line, and see in what respect it
# is inferior to the final solve at the end of this code file

Min_j = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(Min_j); # the j-th block
JuMP.@variable(Min_j, 0 ≤ b_j ≤ 1, Int);
JuMP.@variable(Min_j, 0 ≤ n_j[1:I], Int);
JuMP.@constraint(Min_j, w'n_j ≤ (b_j)W);

Lag = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(Lag); # Lagrangian dual of `Mip`
JuMP.@variable(Lag, 0 ≤ π[1:I]);
JuMP.@variable(Lag, θ); # All J blocks are homogeneous, therefore only one subblock is needed
JuMP.@variable(Lag, Λ ≤ LagIniBnd); # ⚠️ add a large artificial bound
JuMP.@constraint(Lag, Λ ≤ d'π + (J)θ);
JuMP.@objective(Lag, Max, Λ);

function get_best_dual()
    for Lag_ite in 1:typemax(Int16)
        JuMP.optimize!(Lag); JuMP.assert_is_solved_and_feasible(Lag; allow_local = false)
        Π, ub = JuMP.value.(π), JuMP.objective_bound(Lag)
        JuMP.@objective(Min_j, Min, b_j - n_j'Π)
        JuMP.optimize!(Min_j); JuMP.assert_is_solved_and_feasible(Min_j; allow_local = false)
        bj, nj, Θ = JuMP.value(b_j), JuMP.value.(n_j), JuMP.objective_value(Min_j)
        lb = d'Π + (J)Θ
        @info "Lag_ite = $Lag_ite ▶ lb = $lb | $ub = ub"
        if lb + 1e-5 > ub
            BndDual = JuMP.dual(JuMP.UpperBoundRef(Λ))
            isapprox(BndDual, 0; atol=1e-7) || error("BndDual is $BndDual")
            return Π
        end
        JuMP.@constraint(Lag, θ ≤ bj - nj'π);
    end
end;

let # strengthen the original Mip
    Π, rhs = let
        Π = get_best_dual()
        JuMP.@objective(Min_j, Min, b_j - n_j'Π)
        JuMP.optimize!(Min_j); JuMP.assert_is_solved_and_feasible(Min_j; allow_local = false)
        rhs = JuMP.objective_value(Min_j)
        Π, rhs
    end
    JuMP.@constraint(Mip, [j = 1:J], b[j] - view(n, :, j)'Π ≥ rhs) # 🟣 add DWB cut
end

JuMP.optimize!(Mip); JuMP.solution_summary(Mip) # solve the DWB-equipped original Mip to global optimality

# 🗒️ Test results:
# 1. with the Lagrangian bound:    explored 1 node (282897 simplex iterations) in 108.98 seconds (104.16 work units), found the OPTIMAL solution
# 2. without the Lagrangian bound: after exploring 1245 nodes (1769432 simplex iterations) in 299.92 seconds (349.91 work units), ObjVal = 1.47000e+03, ObjBnd = 1.44800e+03

I think the general theory should not appear in a JuMP tutorial. My opinion is that the goal of JuMP documentation is to show what you can do with this package. In the case of column generation, the tutorial should focus on how to dynamically add variables to a formulation.

Take a look at Dantzig-Wolfe decomposition. You'll see that it's based on a Lagrangian relaxation of the original problem and that you need column generation to optimize the reformulation (e.g. the work of Vanderbeck & Wolsey: https://www.math.u-bordeaux.fr/~fvanderb/papers/refDecompIPworkingPaper.pdf, part 3).

If you reformulate the original problem of the tutorial with a Dantzig-Wolfe decomposition, get rid of the convex combination constraint in the master and solve only one subproblem, you’ll get exactly the same algorithm with the same master and subproblem as in the tutorial. Reduced costs, dual values and Lagrangian are linked.
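
For reference, a sketch of that reformulation in the notation above, where Q indexes all feasible cutting patterns p^q (the integer points of the knapsack set \{p \ge 0 \text{ integer} : w^\top p \le W\}) and the convexity constraint is dropped as described:

\min_{\lambda \ge 0} \sum_{q \in Q} \lambda_q \quad s.t. \quad \sum_{q \in Q} p^q_i \lambda_q \ge d_i \ \ \forall i \qquad [\pi_i \ge 0]

Its pricing problem is then exactly the knapsack \max_p \{ \pi^\top p : w^\top p \le W, \ p \ge 0 \text{ integer} \} discussed earlier.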

Good luck, it’s a nice topic :slight_smile:


Yes, I’ve noticed this point:

# Create a new column
push!(x, @variable(model, lower_bound = 0))
# Update the non-zeros in the coefficient matrix
set_normalized_coefficient(demand[i], x[end], count)

To be honest, I’ve been using JuMP for 2 years, yet I have never used these two methods (not even this time in this topic). But that also reflects that the cutting-plane approach is indeed simpler, purely from a coding perspective.

Thanks for sharing! I’ll have a read.

Combining my code from posts #1 and #7, I'm glad to report what is (presumably) the most efficient algorithm so far for solving the cutting stock problem. The main ideas are:

  1. We need a deterministic MIP formulation of the Cutting Stock Problem, so that we can seek a globally optimal solution with the help of Gurobi (or another MIP solver).
  2. On the primal side, we employ Column Generation to find a good heuristic solution.
  3. On the dual side, we enforce the Lagrangian dual bound.

My code is available here

Primal-Dual Algorithm for Cutting Stock
import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
import LinearAlgebra.I as Identity

w, d, W, I = let # 📘 Data for the Cutting Stock Problem
    w = [59, 61, 62, 63, 65, 66, 67, 70, 72, 74, 77, 78, 79, 81, 83, 84, 86, 88, 89, 91, 92, 103, 104, 105, 106, 119, 120, 122, 125, 131, 133, 134, 136, 142, 144, 145, 148, 151, 154, 157, 159, 161, 166, 167, 168, 170, 174, 175, 176, 178, 180, 181]
    d = [77, 48, 62, 64, 66, 49, 25, 21, 56, 47, 53, 51, 23, 73, 71, 22, 23, 68, 71, 51, 66, 55, 58, 76, 71, 31, 53, 46, 57, 56, 63, 77, 53, 65, 28, 70, 72, 35, 46, 23, 34, 39, 36, 68, 31, 64, 40, 22, 36, 37, 39, 57]
    W = 223; # width of a raw roll
    I = length(d); @assert length(w) == I
    w, d, W, I
end;

# 🟣 §I: Primal Heuristic Solution via Column Generation
P = Identity(I); # ⚠️ These initial columns should make the initial `outer_mip` feasible
COT = 1e-6; # cut-off tolerance---a hyper parameter
function outer_mip(P) # employed to find feasible/suboptimal solutions
    J = size(P, 2)
    outer = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(outer)
    JuMP.@variable(outer, n[1:J] ≥ 0, Int)
    JuMP.@constraint(outer, [i = 1:I], view(P, i, :)'n ≥ d[i]) # [π ≥ 0] 🟡
    JuMP.@objective(outer, Min, sum(n))
    JuMP.optimize!(outer); JuMP.assert_is_solved_and_feasible(outer; allow_local = false)
    return ub, n = JuMP.objective_value(outer), JuMP.value.(n)
end;
subproblem = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(subproblem);
JuMP.@variable(subproblem, p[1:I] ≥ 0, Int); JuMP.@constraint(subproblem, w'p ≤ W); # a feasible pattern
outerDual = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(outerDual);
JuMP.@variable(outerDual, π[1:I] ≥ 0); JuMP.@objective(outerDual, Max, d'π);
J = size(P, 2);
JuMP.@constraint(outerDual, [j = 1:J], 1 ≥ view(P, :, j)'π); # [⇐ n ≥ 0] 🟡
while true
    Π, lb = let
        JuMP.optimize!(outerDual); JuMP.assert_is_solved_and_feasible(outerDual; allow_local = false)
        JuMP.value.(π), JuMP.objective_value(outerDual)
    end
    p_new, vio_level = let
        JuMP.@objective(subproblem, Max, Π'p)
        JuMP.optimize!(subproblem); JuMP.assert_is_solved_and_feasible(subproblem; allow_local = false)
        JuMP.value.(p), JuMP.objective_value(subproblem) - 1
    end
    if vio_level > COT # ✅ Note that this `if` is a hallmark of the Cutting Plane method
        @info "At J = $J, (invalid) lb = $lb"
        JuMP.@constraint(outerDual, 1 ≥ p_new'π) # add a new cut to lower the lb of the outer_mip
        global P = [P p_new] # recruit a new advantageous column
        global J = size(P, 2) # update J
    else
        @info "At J = $J, (invalid) lb = $lb\nno more advantageous columns can be found, thus break"
        break
    end
end
ub, n = outer_mip(P)
j_vec = findall(n -> n > .5, n) # used columns indices
n, P = round.(Int, n[j_vec]), round.(Int, P[:, j_vec])
n0 = hcat([repeat(view(P, :, j), 1, n[j]) for j in eachindex(n)]...) # 🟠
b0 = trues(size(n0, 2)) # 🟠
@assert all(sum(n0; dims = 2) .≥ d) # do a demand feasibility check
J = length(b0)
LagIniBnd = 1.1J

# 🟣 §II: Lagrangian Dual Bound 
Mip = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV));
JuMP.@variable(Mip, 0 ≤ b[1:J] ≤ 1, Int);
JuMP.@variable(Mip, 0 ≤ n[1:I, 1:J], Int);
JuMP.set_start_value.(b, b0) # 🟠
JuMP.set_start_value.(n, n0) # 🟠
JuMP.@constraint(Mip, [j = 1:J], w'view(n, :, j) ≤ b[j]W);
JuMP.@objective(Mip, Min, sum(b[j] for j in 1:J));
JuMP.@constraint(Mip, [i = 1:I], sum(n[i, j] for j in 1:J) ≥ d[i]); # [π ≥ 0] 🟡
Min_j = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(Min_j); # the j-th block
JuMP.@variable(Min_j, 0 ≤ b_j ≤ 1, Int);
JuMP.@variable(Min_j, 0 ≤ n_j[1:I], Int);
JuMP.@constraint(Min_j, w'n_j ≤ (b_j)W);
Lag = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV)); JuMP.set_silent(Lag); # Lagrangian dual of `Mip`
JuMP.@variable(Lag, 0 ≤ π[1:I]);
JuMP.@variable(Lag, θ); # All J blocks are homogeneous, therefore only one subblock is needed
JuMP.@variable(Lag, Λ ≤ LagIniBnd); # ⚠️ add a large artificial bound
JuMP.@constraint(Lag, Λ ≤ d'π + (J)θ);
JuMP.@objective(Lag, Max, Λ);
function get_best_dual()
    for Lag_ite in 1:typemax(Int16)
        JuMP.optimize!(Lag); JuMP.assert_is_solved_and_feasible(Lag; allow_local = false)
        Π, ub = JuMP.value.(π), JuMP.objective_bound(Lag)
        JuMP.@objective(Min_j, Min, b_j - n_j'Π)
        JuMP.optimize!(Min_j); JuMP.assert_is_solved_and_feasible(Min_j; allow_local = false)
        bj, nj, Θ = JuMP.value(b_j), JuMP.value.(n_j), JuMP.objective_value(Min_j)
        lb = d'Π + (J)Θ
        @info "Lag_ite = $Lag_ite ▶ lb = $lb | $ub = ub"
        if lb + 1e-5 > ub
            BndDual = JuMP.dual(JuMP.UpperBoundRef(Λ))
            isapprox(BndDual, 0; atol=1e-7) || error("BndDual is $BndDual")
            return Π
        end
        JuMP.@constraint(Lag, θ ≤ bj - nj'π);
    end
end;
let # strengthen the original Mip
    Π, rhs = let
        Π = get_best_dual()
        JuMP.@objective(Min_j, Min, b_j - n_j'Π)
        JuMP.optimize!(Min_j); JuMP.assert_is_solved_and_feasible(Min_j; allow_local = false)
        rhs = JuMP.objective_value(Min_j)
        Π, rhs
    end
    JuMP.@constraint(Mip, [j = 1:J], b[j] - view(n, :, j)'Π ≥ rhs) # 🟣 add DWB cut
end

# 🟣 §III: Solve to Global Optimality with (Primal Solution @§I, Dual Bound @§II)
# This should be noticeably easier than a direct, cold-start solve
JuMP.optimize!(Mip); JuMP.solution_summary(Mip)

Let’s take a look at how much Gurobi’s job is eased:

Optimize a model with 2988 rows, 77804 columns and 231944 nonzeros
Model fingerprint: 0xd4ea8bd8
Variable types: 0 continuous, 77804 integer (0 binary)
Coefficient statistics:
  Matrix range     [3e-01, 2e+02]
  Objective range  [1e+00, 1e+00]
  Bounds range     [1e+00, 1e+00]
  RHS range        [2e+01, 8e+01]

Loaded user MIP start with objective 1468

Presolve time: 0.33s
Presolved: 2988 rows, 77804 columns, 231944 nonzeros
Variable types: 0 continuous, 77804 integer (41104 binary)
Deterministic concurrent LP optimizer: primal and dual simplex
Showing primal log only...

Concurrent spin time: 0.00s

Solved with primal simplex

Root relaxation: cutoff, 53526 iterations, 2.29 seconds (2.11 work units)

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0     cutoff    0      1468.00000 1468.00000  0.00%     -    2s

Explored 1 nodes (53526 simplex iterations) in 2.79 seconds (2.59 work units)
Thread count was 8 (of 8 available processors)

Solution count 1: 1468

Optimal solution found (tolerance 1.00e-04)
Best objective 1.468000000000e+03, best bound 1.468000000000e+03, gap 0.0000%

I would be more than glad to see anyone come up with a better algorithm for this problem that beats mine. :grinning_face_with_smiling_eyes:
