Poor performance when modifying parameters of JuMP models with Gurobi

Hello,

I have a large LP model that I want to solve iteratively over changing parameters. Using set_normalized_rhs and set_normalized_coefficient works very well when HiGHS is the optimizer. However, when I switch to Gurobi, modifying the model takes worryingly long: more than 3 minutes versus a few seconds.

I tried to improve this by creating the model with direct_model() instead of Model(), but that leads to only a slight improvement.

Have I implemented something wrong?

I would like to use Gurobi because of its performance, and I would prefer not to rebuild the model from scratch every time (even though rebuilding is currently faster than modifying the existing model). Is there anything else I can try?

P.S. This is my first time on the forum. Thank you in advance for any suggestions.

The following is an MWE with a dummy LP problem.

using JuMP, Gurobi, HiGHS, BenchmarkTools

ENV["GUROBI_HOME"] = "C:\\gurobi1003\\win64"
Gurobi_solver = optimizer_with_attributes(Gurobi.Optimizer)
HiGHS_solver = optimizer_with_attributes(HiGHS.Optimizer)

# Functions to initialize and modify an LP model
function initialize_indirect(; nvar=1000, ncon=1000)
    m = Model()
    @variable(m, 0 ≤ x[1:nvar])
    @constraint(m, ec, rand(ncon, nvar) * x .== rand(ncon))
    @constraint(m, ic, rand(ncon, nvar) * x .≥ rand(ncon))
    @objective(m, Min, sum(randn(nvar) .* x))
    return m
end
function initialize_direct(solver; nvar=1000, ncon=1000)
    m = direct_model(solver)
    @variable(m, 0 ≤ x[1:nvar])
    @constraint(m, ec, rand(ncon, nvar) * x .== rand(ncon))
    @constraint(m, ic, rand(ncon, nvar) * x .≥ rand(ncon))
    @objective(m, Min, sum(randn(nvar) .* x))
    return m
end
function modify!(m, ncon=1000)
    set_normalized_rhs.(m[:ec], rand(ncon))
    set_normalized_coefficient.(m[:ec], m[:x][1], rand(ncon))
    set_normalized_coefficient.(m[:ec], m[:x][2], rand(ncon))
    set_normalized_coefficient.(m[:ic], m[:x][3], rand(ncon))
    set_normalized_coefficient.(m[:ic], m[:x][4], rand(ncon))
    return nothing
end

# Test with Gurobi
m = initialize_indirect()
set_optimizer(m, Gurobi_solver)
optimize!(m)
@btime modify!(m)
### 24.115 ms (15028 allocations: 337.36 KiB)

m = initialize_direct(Gurobi_solver)
optimize!(m)
@btime modify!(m)
### 15.098 ms (10028 allocations: 259.23 KiB)

# Test with HiGHS
m = initialize_indirect()
set_optimizer(m, HiGHS_solver)
optimize!(m)
@btime modify!(m)
### 8.422 ms (15028 allocations: 337.36 KiB)

m = initialize_direct(HiGHS_solver)
optimize!(m)
@btime modify!(m)
### 2.851 ms (10028 allocations: 259.23 KiB)

Hi @NataponW, welcome to the forum :smile:

Let me take a look at this. 3 min is certainly a bug that we should fix.

Does Gurobi print any warnings about excessive time in model update?

One thing to read is GitHub - jump-dev/Gurobi.jl: Julia interface for Gurobi Optimizer

Can you share the code that takes a long time? How different is it compared to your MWE?

I took a look at this. Nothing immediately obvious. A few follow-up points:

  • Your examples have dense constraint matrices. Is your real problem dense?
  • The timing of HiGHS and Gurobi isn't very meaningful on its own, because the solvers do different things when you modify coefficients. The better test is to see how fast they can re-solve the modified problem (see the sketch below), but that isn't meaningful with this example because the random problems are infeasible.
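
For concreteness, here is a minimal sketch of the kind of modify-and-resolve comparison I mean (illustrative names; it assumes m_gurobi and m_highs are feasible models built from your real data, not the random MWE above):

# Sketch: time full modify + re-solve cycles rather than the modification alone
for m in (m_gurobi, m_highs)
    t = @elapsed for _ in 1:10
        modify!(m)       # apply the parameter changes
        optimize!(m)     # re-solve from the modified state
    end
    println(solver_name(m), ": ", t, " seconds for 10 modify + re-solve cycles")
end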

I think to say more, we'll need a reproducible example of your actual problem with the >3 min modification time.

Thank you for the response.

I must apologize for one piece of misinformation: the model is a MILP. Below is the model report. It is similar to an energy system optimization problem. And no, it is not as dense as the provided example; the example was an attempt to reproduce the time difference with a "leaner" model.

Running HiGHS 1.6.0: Copyright (c) 2023 HiGHS under MIT licence terms
A JuMP Model
Maximization problem with:
Variables: 385491        
Objective function type: AffExpr
`VariableRef`-in-`MathOptInterface.LessThan{Float64}`: 17533 constraints
`AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 131412 constraints
`VariableRef`-in-`MathOptInterface.ZeroOne`: 11 constraints
`AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 192746 constraints
`VariableRef`-in-`MathOptInterface.EqualTo{Float64}`: 8761 constraints
`VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 376719 constraints
Model mode: DIRECT
Solver name: HiGHS
Names registered in the model: <...remove for brevity...>

I agree that one should compare the full execution cycle, not just the model update. Excluding the model update, Gurobi always performs better than HiGHS on our problems. However, the long update time ruins this edge; see the comparison below.

@time for _ ∈ 1:10
    # Update `demand` and `retailprices`
    f_modifymodel!(m, demand, retailprices)
    optimize!(m)
    @show objective_value(m)
end
#=
## With HiGHS
objective_value(m) = 988823.9131227302
...
objective_value(m) = 182299.81660214678
 37.036549 seconds (1.41 M allocations: 56.254 MiB, 0.02% gc time)

## With Gurobi
objective_value(m) = 988823.9131227313
...
objective_value(m) = 182299.8166021621
1714.400086 seconds (1.66 M allocations: 72.728 MiB, 0.00% gc time, 0.01% compilation time)
=#

Unfortunately, I cannot provide the actual code at the moment. I will try to create an example that better represents the actual problem, although it will take time.

For now, if it would be helpful, I can upload the profiling logs. I did notice one difference. With both solvers, f_modifymodel! is followed by a series of functions in broadcast.jl, then by set_normalized_coefficient in constraints.jl and modify in MOI_wrapper.jl.
In the case of HiGHS, this is followed by Highs_changeCoeff in libhighs.jl, and that's it. In the case of Gurobi, however, _update_if_necessary in MOI_wrapper.jl is also called, which sets off further calls into sort.jl and array.jl.
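
For reference, the profiles were collected with the standard Profile library, roughly like this (a sketch, not the exact script I used):

using Profile
Profile.clear()
@profile f_modifymodel!(m, demand, retailprices)
Profile.print(format = :tree)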

What is f_modifymodel!? Can you provide more details? We need to know exactly which operations are being called and the order in which they are called.

Could it be that Gurobi is reoptimizing with barrier, and HiGHS with simplex?

How do these Gurobi and HiGHS times compare with the original solution? Originally, Gurobi will probably have used barrier, and HiGHS will have used simplex.
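
If it helps to rule this out, the algorithm can be pinned explicitly; a sketch using Gurobi's Method parameter (0 = primal simplex, 1 = dual simplex, 2 = barrier), assuming m is the JuMP model:

set_attribute(m, "Method", 1)   # force dual simplex for the re-solves
optimize!(m)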

It still seems odd for Gurobi to be so much slower when it knows what modifications have been performed.

This is almost certainly due to Gurobi's lazy update mechanism, but it's impossible to tell without the code.

See GitHub - jump-dev/Gurobi.jl: Julia interface for Gurobi Optimizer
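
In rough terms, the mechanism looks like this (placeholder names con, b, N, model; a simplified sketch of the lazy-update behavior described in the README, not the exact call pattern in this thread):

# Gurobi queues modifications; they are only flushed to the solver by
# GRBupdatemodel. Interleaving modifications with attribute queries forces
# a flush on every iteration, which is slow for large models:
for i in 1:N
    set_normalized_rhs(con[i], b[i])
    normalized_rhs(con[i])   # querying pending data triggers GRBupdatemodel
end

# Batching all modifications and querying or re-solving afterwards needs
# only a single update:
for i in 1:N
    set_normalized_rhs(con[i], b[i])
end
optimize!(model)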

Interesting.

I'm not complaining about Gurobi being slow! :grin:

Hello,

I encountered the same problem: my code is slower with Gurobi than with HiGHS or CPLEX.

It is an implementation of column generation for the CVRP. If I understand the profiling results correctly, it seems that the calls to set_normalized_coefficient are responsible for the behavior.

I am at a loss as to how to proceed with the calls to GRBupdatemodel.

I can provide access to the code if somebody wants to look at it.

Best regards.


Hi @njozefow, welcome to the forum.

Can you start a new post with a reproducible example of the code?

Make sure you take a read of jump-dev/Gurobi.jl · JuMP

Hello,

Thank you for your answer.

The example is in this post: Problem with JuMP + Gurobi.jl

Best regards,


This should be fixed now with the new Gurobi.jl release: v1.2.3.
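
For anyone following along, the fix can be picked up by updating the package in your environment, for example:

import Pkg
Pkg.update("Gurobi")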

I am getting:

# Test with Gurobi
m = initialize_indirect()
set_optimizer(m, Gurobi_solver)
optimize!(m)
@btime modify!(m)
### v1.2.2: 23.962 ms (27023 allocations: 524.70 KiB)
### v1.2.3: 5.972 ms (27023 allocations: 524.70 KiB)

m = initialize_direct(Gurobi_solver)
optimize!(m)
@btime modify!(m)
### v1.2.2: 16.794 ms (22023 allocations: 446.58 KiB)
### v1.2.3: 492.208 μs (22023 allocations: 446.58 KiB)