I need to solve over a million small optimization problems that share the same structure but have different constraint and objective coefficients. I know I can create and solve many models concurrently, but I am not sure that is the most efficient way.
What kind of optimization problems?
For example, this code works but is inefficient, since it builds each model from scratch.
using JuMP, GLPK, Random

function generate_model(seed)
    Random.seed!(seed)
    # Note: a new model and optimizer instance are built for every seed.
    model = Model(GLPK.Optimizer)
    @variable(model, 0 <= x <= 2)
    @variable(model, 0 <= y <= 30)
    @constraint(model, x + y <= 20)
    @constraint(model, y >= 10)
    @objective(model, Max, (5 * rand()) * x + (3 * rand()) * y)
    return model
end

function solve_model(seed)
    model = generate_model(seed)
    optimize!(model)
    if termination_status(model) == MOI.OPTIMAL
        return value.(all_variables(model))
    else
        error("Problem could not be solved to optimality")
    end
end

seeds = 1:10  # define seeds
tasks = [Threads.@spawn solve_model(seed) for seed in seeds]
solutions = fetch.(tasks)
for (i, sol) in enumerate(solutions)
    println("Solution to model ", i, ": ", sol)
end
Create one model per task, and update the seed in a loop inside the task.
Isn’t that what I did in my previous example?
So the idea is to use an incremental interface and copy_to?
There is an interface in JuMP that allows updating a model without having to create a new one every time the parameters change. Using it, you can create one model per task, and within each task run a loop that solves the problem many times, updating the model locally. I’m not at my computer and don’t remember that interface by heart, but it exists.
(I mean one model per thread, if you want. The point is to spawn only a number of tasks much smaller than the number of optimizations.)
OK, I tried many approaches, including the incremental interface, which didn’t work with concurrency. It would be great if you could provide an example that does what you said.
Maybe you can find useful info here: How to optimize multiple times in a run - #2 by odow
More specifically: Power Systems · JuMP
Here’s an example:
using JuMP, HiGHS, Random

function solve_seeds(seeds::AbstractVector{Int})
    # Build the model structure once...
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x <= 2)
    @variable(model, 0 <= y <= 30)
    @constraint(model, x + y <= 20)
    @constraint(model, y >= 10)
    solutions = Vector{Float64}[]
    for seed in seeds
        # ...and replace only the objective on each iteration.
        Random.seed!(seed)
        @objective(model, Max, (5 * rand()) * x + (3 * rand()) * y)
        optimize!(model)
        if termination_status(model) == MOI.OPTIMAL
            push!(solutions, value.(all_variables(model)))
        else
            error("Problem could not be solved to optimality")
        end
    end
    return solutions
end

function partition_array(x::AbstractVector, N::Int)
    # Split x into N consecutive chunks; for example,
    # partition_array(1:10, 4) returns [1:3, 4:6, 7:9, 10:10].
    k = ceil(Int, length(x) / N)
    return [x[(k*(i-1)+1):min(length(x), k*i)] for i in 1:N]
end
seeds = 1:10  # define seeds
solution_dict = Dict{Int,Vector{Float64}}()
my_lock = ReentrantLock()
Threads.@threads for partition in partition_array(seeds, Threads.nthreads())
    solutions = solve_seeds(partition)
    # Hold the lock only while writing to the shared dictionary.
    lock(my_lock) do
        for (seed, solution) in zip(partition, solutions)
            solution_dict[seed] = solution
        end
    end
end
solution_dict
Thanks @odow
I believe a channel is more efficient than a lock, based on my experience writing Go, and I guess the same applies to Julia. You can put the solutions into a Channel and copy them over once all threads are done, instead of locking the shared collection.
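For instance, a rough sketch of what that could look like, reusing solve_seeds, partition_array, and seeds from the example above (the buffer is sized to hold every result, so put! never blocks):

results = Channel{Tuple{Int,Vector{Float64}}}(length(seeds))
Threads.@threads for partition in partition_array(seeds, Threads.nthreads())
    for (seed, solution) in zip(partition, solve_seeds(partition))
        put!(results, (seed, solution))  # put! on a Channel is thread-safe
    end
end
close(results)  # mark the channel finished so it can be drained
solution_dict = Dict(results)  # iterate the closed channel into a Dict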
You are setting a new objective instead of updating the existing one. That can work, but imagine you need to update all of the constraints and the objective; then building the model from scratch can be easier, right?
I was hoping for a method that creates a deep copy of a model and then updates the coefficients of the copy.
This is almost certainly not a bottleneck. Don’t worry about the performance; solving the LP is much more expensive.
There are methods you can use to modify individual coefficients in various places:
- set_objective_coefficient
- set_normalized_coefficient
- set_normalized_rhs
See also:
- Constraints · JuMP
- Objectives · JuMP
There’s a trade-off: if you are changing almost every coefficient, then sure, it can be more efficient to rebuild from scratch.
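For example, a minimal sketch of these methods on the toy model from above (the constraint name c1 is added here so it can be referenced):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 2)
@variable(model, 0 <= y <= 30)
@constraint(model, c1, x + y <= 20)
@objective(model, Max, 5x + 3y)
optimize!(model)

set_objective_coefficient(model, x, 4.0)  # objective is now 4x + 3y
set_normalized_coefficient(c1, y, 2.0)    # constraint is now x + 2y <= 20
set_normalized_rhs(c1, 15.0)              # right-hand side is now 15
optimize!(model)                          # re-solve without rebuilding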
I don’t know if I understand the suggestion. In general, I’d recommend against copying a JuMP model; there are lots of subtle things that can go wrong.