Hi, I wonder why set_normalized_coefficient takes about as long as the actual optimization in my MWE:
using JuMP, HiGHS
function create_solver(solver, l, C, d, lb)
    model = direct_model(optimizer_with_attributes(solver))
    @variable(model, x[i = eachindex(lb)] >= lb[i])
    @objective(model, Min, sum(x[i] * l[i] for i in eachindex(lb)))
    @constraint(model, con, C * x .≥ d)
    set_silent(model)
    return model
end
function modify_coefficients!(model, variable, C)
    for j in axes(C, 2)  # Loop over the columns of C (one per variable)
        set_normalized_coefficient.(model[:con], variable[j], vec(C[:, j]))
    end
end
function test(n)
    l = rand(72)'
    C = rand(36, 72)
    d = rand(36)
    lb = zeros(72)
    solver = HiGHS.Optimizer
    model = create_solver(solver, l, C, d, lb)
    for _ in 1:n
        C = rand(36, 72)
        modify_coefficients!(model, model[:x], C)
        optimize!(model)  # min l * x s.t. C * x ≥ d and x ≥ lb
    end
end
@profview test(100)
My use case solves this problem for an n of about 40,000, so the overhead from setting the coefficients is kind of annoying. Am I doing this inefficiently?
I think part of the problem is that you’re trying to modify every constraint coefficient one-by-one, and that the constraints are dense. The “modify and re-solve” approach is really intended for small changes between solves.
I would just do:
using JuMP, HiGHS
function create_solver(solver, c::Vector, A::Matrix, b::Vector, l::Vector)
    n = length(c)
    @assert n == size(A, 2) == length(l)
    @assert size(A, 1) == length(b)
    model = Model(solver)
    set_silent(model)
    @variable(model, x[i = 1:n] >= l[i])
    @objective(model, Min, c' * x)
    @constraint(model, A * x .>= b)
    optimize!(model)
    @assert is_solved_and_feasible(model)
    return value.(x)
end
function test(n)
    c = rand(72)
    A = rand(36, 72)
    b = rand(36)
    l = zeros(72)
    for _ in 1:n
        A = rand(36, 72)
        x = create_solver(HiGHS.Optimizer, c, A, b, l)
    end
end
@time test(100)
Yeah, the one-by-one modification is clearly inefficient. However, my approach is about twice as fast as your implementation (0.7 s vs 1.5 s) on my machine – that’s why I switched to “modify and re-solve” in the first place.
What is the real problem you are trying to solve? Do you want to modify the entire dense constraint matrix? Is the performance a bottleneck? You might instead consider the Parallelism tutorial in the JuMP documentation.
Do you have a comparison showing why set_normalized_coefficient is “slow”? What would you expect to happen? When you replace every coefficient, we’re essentially throwing away the entire internal model and rebuilding it from scratch. We can save some memory, but all internal data structures (presolve, factorizations, etc.) need to be recomputed.
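If you stay with “modify and re-solve”, batching all updates into one call may cut some of the JuMP-side overhead. A minimal sketch, assuming a JuMP version recent enough to provide the vectorized set_normalized_coefficient method (modify_coefficients_batch! is a made-up name for illustration):
function modify_coefficients_batch!(model, x, C)
    # Sketch only: send all m * n (constraint, variable, value) triples in
    # a single vectorized set_normalized_coefficient call instead of m * n
    # scalar calls; model[:con] and x are as in the MWE above.
    con = model[:con]
    m, n = size(C)
    set_normalized_coefficient(
        repeat(con, n),        # constraint i for every pair (i, j)
        repeat(x, inner = m),  # variable j for every pair (i, j)
        vec(C),                # C[i, j] in matching column-major order
    )
    return
end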
My problem requires adjusting the matrix C = B - (1+r)A over a grid 0 \leq r_1 \leq \dots \leq r_n and solving the corresponding linear program (where A \in \mathbb{R}_{+}^{36 \times 72} and B = [I(36)\; I(36)]).
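For concreteness, a minimal sketch of that matrix update, with placeholder data standing in for the real A and r:
using LinearAlgebra  # for the identity I

A = rand(36, 72)                                 # placeholder for the real A ≥ 0
B = [Matrix(1.0I, 36, 36) Matrix(1.0I, 36, 36)]  # B = [I(36) I(36)]
r = 0.05                                         # one grid point
C = B - (1 + r) * A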
Since performance is the bottleneck, I will indeed use some threading.
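A minimal sketch of such a threaded sweep, following the pattern from the JuMP Parallelism tutorial (the names sweep and r_grid are made up; each iteration builds its own model, since JuMP models must not be shared across threads):
using JuMP, HiGHS, LinearAlgebra

function sweep(r_grid, A, d, l, lb)
    # Solve min l' x  s.t.  (B - (1 + r) A) x ≥ d, x ≥ lb  for each r.
    B = [Matrix(1.0I, 36, 36) Matrix(1.0I, 36, 36)]
    results = Vector{Vector{Float64}}(undef, length(r_grid))
    Threads.@threads for k in eachindex(r_grid)
        C = B - (1 + r_grid[k]) * A
        model = Model(HiGHS.Optimizer)
        set_silent(model)
        @variable(model, x[i = eachindex(lb)] >= lb[i])
        @objective(model, Min, sum(l[i] * x[i] for i in eachindex(lb)))
        @constraint(model, C * x .>= d)
        optimize!(model)
        results[k] = value.(x)
    end
    return results
end

sweep(range(0, 0.5, length = 100), rand(36, 72), rand(36), rand(72), zeros(72))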
I didn’t expect a speedup of more than ~30%, but I was still wondering why the setup took that long compared to the actual solve. I’m not familiar with the internals of JuMP.jl / HiGHS.jl, though. Thanks for your help!