Hello everyone,
I'm trying to solve a nested optimization problem that is structured in the following way:
- Master problem (MP): heuristic solver (genetic algorithm with Evolutionary.jl) for a nonlinear black-box problem
- Slave problem (SP): deterministic solver (MILP with Gurobi.jl) for a linear problem with both continuous and discrete variables
The optimization variables of the MP are part of the inputs of the SP.
Here is a minimal working example:
```julia
using JuMP, Evolutionary, Gurobi

Input = Dict{Symbol, Any}()
# Master problem data
Input[:MP] = Dict{Symbol, Any}()
Input[:MP][:lb] = zeros(5)
Input[:MP][:ub] = ones(5)
Input[:MP][:lc] = zeros(2)
Input[:MP][:uc] = ones(2)
Input[:MP][:A]  = rand(2, 5)
# Slave problem data
Input[:SP] = Dict{Symbol, Any}()
Input[:SP][:lb] = zeros(10)
Input[:SP][:ub] = ones(10)
Input[:SP][:lc] = zeros(5)
Input[:SP][:uc] = ones(5)
Input[:SP][:A]  = rand(5, 10)

# Master problem objective function
function MP_OF_fun(x, Input)
    ## Slave problem definition
    model = direct_model(Gurobi.Optimizer())
    set_silent(model)
    # Slave problem variables
    @variable(model, Input[:SP][:lb][i] <= y[i = 1:10] <= Input[:SP][:ub][i])
    # Slave problem constraints
    @constraint(model, Input[:SP][:A] * y .<= Input[:SP][:uc] .+ x)
    @constraint(model, Input[:SP][:A] * y .>= Input[:SP][:lc] .- x)
    # Slave problem objective function
    @objective(model, Max, sum(y))
    # Slave problem execution
    JuMP.optimize!(model)
    # Result
    if JuMP.termination_status(model) != OPTIMAL
        MP_res = 1e4 # penalty
    else
        MP_res = JuMP.objective_value(model)
    end
    sleep(0.05)
    return MP_res
end
MP_OF(x) = MP_OF_fun(x, Input)

# Master problem constraints
MP_constr(x) = Input[:MP][:A] * (x .^ 2)

# Building the Master problem
# constraints
cstr = WorstFitnessConstraints(Input[:MP][:lb], Input[:MP][:ub], Input[:MP][:lc], Input[:MP][:uc], MP_constr)
# method
mthd = GA(populationSize = 50, ɛ = 0.05, crossoverRate = 0.8, mutationRate = 0.1,
          selection = rouletteinv, crossover = TPX, mutation = inversion)
# options
opts = Evolutionary.Options(iterations = 200, show_trace = true, store_trace = false)

# Master problem execution
MP_res = Evolutionary.optimize(MP_OF, cstr, mthd, opts)
```
In my real case study, the slave problem takes some time to define and execute because of a complex calculation for its constraints. For this reason I am trying to solve the slave problems in parallel.
I tried to use the `:thread` option (from here), but without success: a single SP actually takes much longer to execute, and the solver does not manage to complete the evaluation of the first population.
I am currently trying a distributed computation approach, mainly based on this topic. The parallelized version should look something like this:
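(For reference, that attempt only changed the options, relying on Evolutionary.jl's `parallelization` keyword and on starting Julia with multiple threads, e.g. `julia -t 10`; I did not give each thread its own Gurobi environment, which may be part of the problem.)

```julia
# Multithreaded attempt: ask Evolutionary.jl to evaluate the population
# objective across threads instead of serially.
opts = Evolutionary.Options(iterations = 200, show_trace = true,
                            store_trace = false, parallelization = :thread)
```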
```julia
using JuMP, Evolutionary, Distributed

addprocs(10; exeflags = "--project=/tmp/gur") # Replace with your project

@everywhere begin
    using JuMP, Gurobi
end

@everywhere begin
    Input = Dict{Symbol, Any}()
    # Master problem data
    Input[:MP] = Dict{Symbol, Any}()
    Input[:MP][:lb] = zeros(5)
    Input[:MP][:ub] = ones(5)
    Input[:MP][:lc] = zeros(2)
    Input[:MP][:uc] = ones(2)
    Input[:MP][:A]  = rand(2, 5)
    # Slave problem data
    Input[:SP] = Dict{Symbol, Any}()
    Input[:SP][:lb] = zeros(10)
    Input[:SP][:ub] = ones(10)
    Input[:SP][:lc] = zeros(5)
    Input[:SP][:uc] = ones(5)
    Input[:SP][:A]  = rand(5, 10)
    # Gurobi environment (one per process, reused across solves)
    if !(@isdefined env)
        const env = Gurobi.Env()
    end
end

# Master problem objective function
@everywhere begin
    function MP_OF_fun(x, Input)
        ## Slave problem definition
        model = direct_model(Gurobi.Optimizer(env))
        set_silent(model)
        # Slave problem variables
        @variable(model, Input[:SP][:lb][i] <= y[i = 1:10] <= Input[:SP][:ub][i])
        # Slave problem constraints
        @constraint(model, Input[:SP][:A] * y .<= Input[:SP][:uc] .+ x)
        @constraint(model, Input[:SP][:A] * y .>= Input[:SP][:lc] .- x)
        # Slave problem objective function
        @objective(model, Max, sum(y))
        # Slave problem execution
        JuMP.optimize!(model)
        # Result
        if JuMP.termination_status(model) != OPTIMAL
            MP_res = 1e4 # penalty
        else
            MP_res = JuMP.objective_value(model)
        end
        return MP_res
    end
    MP_OF(x) = MP_OF_fun(x, Input)
end

# Master problem constraints
MP_constr(x) = Input[:MP][:A] * (x .^ 2)

# Building the Master problem
# constraints
cstr = WorstFitnessConstraints(Input[:MP][:lb], Input[:MP][:ub], Input[:MP][:lc], Input[:MP][:uc], MP_constr)
# method
mthd = GA(populationSize = 50, ɛ = 0.05, crossoverRate = 0.8, mutationRate = 0.1,
          selection = rouletteinv, crossover = TPX, mutation = inversion)
# options
opts = Evolutionary.Options(iterations = 200, show_trace = true, store_trace = false)

# Master problem execution
MP_res = Evolutionary.optimize(MP_OF, cstr, mthd, opts) # ??
```
The MP is defined only in the main process, while the SP is defined on all the processes. The problem is that I cannot figure out how to provide and execute the parallelized objective function. I know that I should use `@distributed for ...`, but I do not know where or how. Here it is reported that an override of the `value` function is required, but that remains obscure to me.
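For what it's worth, the only batched evaluation I can picture is something like the sketch below, using `pmap` from `Distributed` to fan the candidates out to the workers; `eval_batch` is a name I made up, and I do not know how to plug it into Evolutionary.jl's internal loop.

```julia
using Distributed

# Sketch: evaluate a whole population of MP candidates in parallel.
# `MP_OF` is the per-candidate objective defined with @everywhere above;
# pmap sends each candidate x to a free worker, which builds and solves
# its own slave MILP and returns the objective value.
function eval_batch(population::Vector{Vector{Float64}})
    return pmap(MP_OF, population)
end

# Example: evaluate 50 random candidates across the workers
# fitness = eval_batch([rand(5) for _ in 1:50])
```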
Please let me know if anyone has any idea. Thanks!