Hi everyone,
I have a rather general question about JuMP and LP solvers. I am working on an algorithm in which many LPs have to be solved, and I want to solve them using the JuMP framework. There appears to be a lot of potential to improve performance by setting start values for some variables. However, as far as my coding attempts have shown, many solvers don't support setting start values via JuMP's set_start_value()
method and raise an error, while others seem to ignore the start values altogether. Some solvers accept start values only for (M)ILPs, but not for plain LPs. Does anyone know of solvers/optimizers for JuMP that make use of start values set via set_start_value()
? With one solver I could do it using the backend and got the expected performance improvement. However, interacting with the backend still seems quite risky to me, and I wonder whether there isn't another way.
Best regards and thank you in advance
BenniHs
Hi @BenniHs, welcome to the forum!
There are two parts to this:
- Do LP solvers support explicitly providing a warm-start primal solution?
- Can solvers in JuMP re-use information between solves?
The answer to 1 is a qualified “no.” In most cases, you cannot pass a primal solution with set_start_value
. The main reason is algorithmic: LP solvers cannot exploit this information, because interior-point methods are hard to warm-start and simplex methods require a basis, not just a point. (In some cases you can, using the backend, provide a basis, but you’re right that this is quite complicated to do.)
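As a side note, you can at least *query* the basis through the generic API after a solve. Here is a minimal sketch, assuming the open-source HiGHS solver is installed; feeding a basis back in still requires solver-specific raw attributes, which is the complicated part:

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x[1:2] <= 1)
@constraint(model, c, x[1] + x[2] <= 1)
@objective(model, Max, 2x[1] + x[2])
optimize!(model)

# Basis status of each variable after the simplex solve, e.g.
# MOI.BASIC, MOI.NONBASIC_AT_LOWER, or MOI.NONBASIC_AT_UPPER:
var_status = MOI.get.(model, MOI.VariableBasisStatus(), x)

# Basis status of a constraint:
con_status = MOI.get(model, MOI.ConstraintBasisStatus(), c)
```

Setting these back on a new model is not supported through `MOI.set`; they are result attributes, so a warm-start basis has to go through solver-specific attributes instead.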
The answer to 2 is a qualified “yes,” for solvers that maintain a copy of the model in memory between solves, like HiGHS and Gurobi. Others, like Clp and SCS, do not.
You can see this with:
julia> using JuMP, Gurobi
julia> N = 1_000
1000
julia> w, p = rand(N), rand(N);
julia> c = 100
100
julia> model = Model(Gurobi.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: Gurobi
julia> @variable(model, 0 <= x[1:N] <= 1);
julia> @constraint(model, [i in 1:div(N, 10)], sum(x[rand(1:N, 10)]) <= 1); # Arbitrary constraints to make the problem more difficult
julia> @constraint(model, c_balance, w' * x <= c);
julia> @objective(model, Max, p' * x);
julia> optimize!(model)
Gurobi Optimizer version 10.0.0 build v10.0.0rc2 (mac64[x86])
CPU model: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 101 rows, 1000 columns and 1991 nonzeros
Model fingerprint: 0xd04d4bb8
Coefficient statistics:
Matrix range [5e-03, 2e+00]
Objective range [3e-05, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+02]
Presolve removed 0 rows and 111 columns
Presolve time: 0.00s
Presolved: 101 rows, 889 columns, 1880 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 2.8783417e+02 2.855826e+02 0.000000e+00 0s
82 2.0333329e+02 0.000000e+00 0.000000e+00 0s
Solved in 82 iterations and 0.00 seconds (0.00 work units)
Optimal objective 2.033332917e+02
User-callback calls 124, time in user-callback 0.00 sec
julia> set_normalized_rhs(c_balance, c + 1)
julia> optimize!(model)
Gurobi Optimizer version 10.0.0 build v10.0.0rc2 (mac64[x86])
CPU model: Intel(R) Core(TM) i5-8259U CPU @ 2.30GHz
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 101 rows, 1000 columns and 1991 nonzeros
Coefficient statistics:
Matrix range [5e-03, 2e+00]
Objective range [3e-05, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+02]
Iteration Objective Primal Inf. Dual Inf. Time
0 2.0422335e+02 9.446646e-01 0.000000e+00 0s
1 2.0422091e+02 0.000000e+00 0.000000e+00 0s
Solved in 1 iterations and 0.00 seconds (0.00 work units)
Optimal objective 2.042209063e+02
User-callback calls 21, time in user-callback 0.00 sec
The first call takes 82 iterations, but after modifying an RHS term, the next solve takes only 1 iteration, because Gurobi re-uses the basis from the previous solve.
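The same pattern extends to solving a whole sequence of related LPs: build the model once, then modify coefficients in a loop instead of rebuilding from scratch. A minimal sketch, assuming the open-source HiGHS solver is installed:

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 10)
@constraint(model, c, 2x <= 4)
@objective(model, Max, x)

# Re-solve for a sweep of right-hand sides; the solver keeps the
# model (and its basis) in memory between calls to optimize!.
results = Float64[]
for rhs in (4, 6, 8)
    set_normalized_rhs(c, rhs)
    optimize!(model)
    push!(results, value(x))
end
# results is [2.0, 3.0, 4.0], since x = rhs / 2 at each optimum
```

Related modifiers like `set_normalized_coefficient` and `set_objective_coefficient` follow the same in-place pattern.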
Hey odow,
thank you very much for the overview of LP warm-starting and how solvers re-use information between solves. It has helped me a lot in navigating my project.
Best wishes,
BenniHs