I have a refreshing discovery to share today, which came from revisiting my old acquaintance, the Column generation · JuMP tutorial; the only thing I changed was the solver (I used Gurobi instead of HiGHS).
This is the log from re-solving the master LP that I got when I set Method to 0 (primal simplex):
LP warm-start: use basis
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    3.6848477e+02   0.000000e+00   5.526316e-01      0s
       2    3.5932687e+02   0.000000e+00   0.000000e+00      0s
By comparison, this is the counterpart when Method = 1, the dual simplex (which matches the default setting, the same as in the doc):
LP warm-start: use basis
Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0   -3.3333333e+29   1.333333e+30   3.333333e-01      0s
       7    2.9900000e+02   0.000000e+00   0.000000e+00      0s
Okay, I believe everyone would find the former a lot more natural, at least in terms of numerics. That log comes from the primal simplex method, which you have to switch to manually; unfortunately, it is not the foolproof default setting taught by JuMP. The reason is intuitive: adding a column keeps the incumbent basis primal feasible (the new variable enters nonbasic at zero) but typically destroys dual feasibility, which is exactly the situation primal simplex warm-starts from, whereas dual simplex must first restore dual feasibility (hence the huge iteration-0 infeasibility above).
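Concretely, the switch is a single solver attribute. A minimal sketch (set_attribute is standard JuMP; Method is Gurobi's parameter name, with -1 = automatic being its stock default):

using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
set_attribute(model, "Method", 0)  # 0 = primal simplex, 1 = dual simplex, -1 = automatic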
So the conclusion is:
- JuMP’s doc adopts a default solver setting that does not attain the warm-start performance one would expect from it in practice.
PS: The motivation behind today's experiment was that I was wondering why the dual simplex method is the default. My guess: modern solvers are built around row generation (cutting planes), and the spirit of column generation is the opposite of that. Luckily enough, I discovered the outcome above, which affirms my surmise.
Code
using JuMP
import DataFrames
import Gurobi
import SparseArrays
const GRB_ENV = Gurobi.Env();
struct Piece
    w::Float64
    d::Int
end

struct Data
    pieces::Vector{Piece}
    W::Float64
end
function get_data()
    data = [
        75.0 38
        70.0 41
        68.4 34
        65.5 23
        59.6 18
        53.8 33
        53.0 36
        51.0 41
        50.2 35
        32.2 37
        30.8 44
        29.8 49
        20.1 37
        16.2 36
        14.5 42
        11.0 33
        8.6 47
        8.2 35
        6.6 49
        5.1 42
    ]
    return Data([Piece(data[i, 1], data[i, 2]) for i in axes(data, 1)], 100.0)
end
function solve_pricing(data::Data, π::Vector{Float64})
    I = length(π)
    model = Model(() -> Gurobi.Optimizer(GRB_ENV))
    set_silent(model)
    @variable(model, y[1:I] >= 0, Int)
    @constraint(model, sum(data.pieces[i].w * y[i] for i in 1:I) <= data.W)
    @objective(model, Max, sum(π[i] * y[i] for i in 1:I))
    optimize!(model)
    assert_is_solved_and_feasible(model)
    number_of_rolls_saved = objective_value(model)
    if number_of_rolls_saved > 1 + 1e-8
        # Benefit of pattern is more than the cost of a new roll plus some
        # tolerance
        return SparseArrays.sparse(round.(Int, value.(y)))
    end
    return nothing
end
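# Note: the pricing problem above is an integer knapsack driven by the duals π
# of the demand constraints. A candidate pattern y improves the master iff its
# reduced cost 1 - π'y is negative, i.e. iff the knapsack objective exceeds 1,
# hence the "> 1 + 1e-8" test with a small tolerance.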
data = get_data();
I = length(data.pieces)
J = 1_000 # Some large number
# Initialize the restricted master with one trivial pattern per piece type:
# cut as many copies of piece i as fit on a single roll.
patterns = map(1:I) do i
    n_pieces = floor(Int, data.W / data.pieces[i].w)
    return SparseArrays.sparsevec([i], [n_pieces], I)
end
model = Model(() -> Gurobi.Optimizer(GRB_ENV))
# JuMP.set_attribute(model, "Method", 1)  # 1 = dual simplex (matches the default behavior); set 0 for primal simplex
@variable(model, x[1:length(patterns)] >= 0)
@objective(model, Min, sum(x))
@constraint(model, demand[i in 1:I], patterns[i]' * x >= data.pieces[i].d)
optimize!(model)
assert_is_solved_and_feasible(model)
solution_summary(model)
while true
    # Solve the linear relaxation
    optimize!(model)
    assert_is_solved_and_feasible(model; dual = true)
    # Obtain a new dual vector
    π = dual.(demand)
    # Solve the pricing problem
    new_pattern = solve_pricing(data, π)
    # Stop iterating if there is no new pattern
    if new_pattern === nothing
        @info "No new patterns, terminating the algorithm."
        break
    end
    push!(patterns, new_pattern)
    # Create a new column
    push!(x, @variable(model, lower_bound = 0))
    # Update the objective coefficient of the new column
    set_objective_coefficient(model, x[end], 1.0)
    # Update the non-zeros in the coefficient matrix
    for (i, count) in zip(SparseArrays.findnz(new_pattern)...)
        set_normalized_coefficient(demand[i], x[end], count)
    end
    println("Found new pattern. Total patterns = $(length(patterns))")
end
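By the way, once the loop terminates you still need an integer solution. A minimal finishing sketch, reusing the model and x above (the tutorial ends with essentially this step, if I recall correctly): make the pattern variables integer and re-optimize the restricted master.

set_integer.(x)
optimize!(model)
assert_is_solved_and_feasible(model)
solution_summary(model)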
PS2: I ran a further test, and the results indicate that the default setting (dual simplex) is indeed the best go-to algorithm for a CTPLN (cutting-plane, i.e. row-generation) master LP, where I used Barrier+noCrossover as the baseline; a sketch of the baseline settings follows the tables.
# Barrier+noCrossover
 Row │ J      cg_time  cg_rgap     decen_ub   decen_rgap  decen_time  Kverm  Kvermu   KverM  msttimem  msttimemu  msttimeM
     │ Int64  Int64    Float64     Float64    Float64     Float64     Int64  Float64  Int64  Float64   Float64    Float64
─────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1 │ 16192      300  0.0528173   2.05448e5  0.0499749      317.78       1  2.193        3    0.1394    1.41158   3.92962
   2 │ 16192      900  0.0173547   2.01733e5  0.016572       961.554      1  3.21189      5    0.1394    2.48585   7.85246
   3 │ 16192     2700  0.00391167  2.00416e5  0.00391474    2817.17       1  4.58745      7    0.1394    4.71001  21.289
# Dual simplex (default)
 Row │ J      cg_time  cg_rgap     decen_ub   decen_rgap  decen_time  Kverm  Kvermu   KverM  msttimem    msttimemu  msttimeM
     │ Int64  Int64    Float64     Float64    Float64     Float64     Int64  Float64  Int64  Float64     Float64    Float64
─────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
   1 │ 16192      300  0.0350881   204397.0   0.0338806      332.166      1  2.48141      3  0.00010705    1.14491   7.80573
   2 │ 16192      900  0.0130845   2.01481e5  0.0126484      988.064      1  3.48413      5  0.00010705    2.26494  16.9965
   3 │ 16192     2700  0.00283829  2.00259e5  0.00277348    2802.26       1  4.87148      8  0.00010705    4.66875  81.4494
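For reference, the Barrier+noCrossover baseline corresponds to the following Gurobi parameters (a minimal sketch; both parameter names are Gurobi's own):

model = Model(() -> Gurobi.Optimizer(GRB_ENV))
set_attribute(model, "Method", 2)     # 2 = barrier
set_attribute(model, "Crossover", 0)  # 0 = skip crossover, so no simplex basis is produced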