Parallelization of a Genetic Algorithm (Evolutionary.jl) with Distributed.jl and Gurobi

Hello everyone,

I’m trying to solve a nested optimization problem that is structured as follows:

  • Master Problem (MP): a heuristic solver (genetic algorithm with Evolutionary.jl) for a nonlinear black-box problem
  • Slave Problem (SP): a deterministic solver (MILP with Gurobi.jl) for a linear problem with both continuous and discrete variables

The optimization variables of the MP are part of the inputs of the SP. Here is a minimal working example:
using JuMP, Evolutionary, Gurobi

Input = Dict{Symbol, Any}()
# Master problem data
Input[:MP] = Dict{Symbol, Any}()
Input[:MP][:lb] = zeros(5)
Input[:MP][:ub] = ones(5)
Input[:MP][:lc] = zeros(2)
Input[:MP][:uc] = ones(2)
Input[:MP][:A] = rand(2,5)
# Slave problem data
Input[:SP] = Dict{Symbol, Any}()
Input[:SP][:lb] = zeros(10)
Input[:SP][:ub] = ones(10)
Input[:SP][:lc] = zeros(5)
Input[:SP][:uc] = ones(5)
Input[:SP][:A] = rand(5, 10)

# Master problem objective function
function MP_OF_fun(x, Input)
    ## Slave problem definition
    model = direct_model(Gurobi.Optimizer())
    set_silent(model)
    # Slave problem variables
    @variable(model, Input[:SP][:lb][i] <= y[i=1:10] <= Input[:SP][:ub][i])
    # Slave problem constraints
    @constraint(model, Input[:SP][:A] * y .<= Input[:SP][:uc] .+ x)
    @constraint(model, Input[:SP][:A] * y .>= Input[:SP][:lc] .- x)
    # Slave problem objective function
    @objective(model, Max, sum(y))
    # Slave problem execution
    JuMP.optimize!(model)
    # Result
    if JuMP.termination_status(model) != OPTIMAL
        MP_res = 1e4 # penalty
    else
        MP_res = JuMP.objective_value(model)
    end
    # simulate the expensive SP construction/solve of the real case study
    sleep(0.05)
    return MP_res
end
MP_OF(x) = MP_OF_fun(x, Input)

# Master problem constraints
MP_constr(x) = Input[:MP][:A] * (x .^ 2)

# Building the Master problem
# constraints
cstr = WorstFitnessConstraints(Input[:MP][:lb], Input[:MP][:ub], Input[:MP][:lc], Input[:MP][:uc], MP_constr)
# method
mthd = GA(populationSize = 50, ɛ = 0.05, crossoverRate = 0.8, mutationRate = 0.1, selection = rouletteinv,
          crossover = TPX, mutation = inversion)
# options
opts = Evolutionary.Options(iterations = 200, show_trace = true, store_trace = false)
# Master problem execution
MP_res = Evolutionary.optimize(MP_OF, cstr, mthd, opts)
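For completeness, the best candidate and its fitness can then be read from the result (assuming Evolutionary.jl's Optim-style result accessors):

x_best = Evolutionary.minimizer(MP_res)  # best MP decision vector found
f_best = minimum(MP_res)                 # its fitness (the GA minimizes)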

In my real case study, the slave problem takes some time to be defined and executed because of a complex calculation for its constraints. For this reason, I am trying to solve the slave problems in parallel.
I tried to use the :thread option (from here), but without success: a single SP actually takes much longer to execute, and the solver does not manage to complete the evaluation of the first population.
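For reference, that attempt only changed the options, along the lines of this sketch (assuming the parallelization keyword of Evolutionary.Options and a Julia session started with several threads, e.g. julia -t 10):

# same GA setup as above, but asking Evolutionary.jl to evaluate the
# population fitness on multiple threads
opts = Evolutionary.Options(iterations = 200, show_trace = true, store_trace = false,
                            parallelization = :thread)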
I am currently trying to use a distributed computation approach, mainly based on this topic. The parallelized version should look something like this:

using JuMP, Evolutionary, Distributed
addprocs(10; exeflags = "--project=/tmp/gur")  # Replace with your project
@everywhere begin
    using JuMP, Gurobi
end

@everywhere begin
    Input = Dict{Symbol, Any}()
    # Master problem data
    Input[:MP] = Dict{Symbol, Any}()
    Input[:MP][:lb] = zeros(5)
    Input[:MP][:ub] = ones(5)
    Input[:MP][:lc] = zeros(2)
    Input[:MP][:uc] = ones(2)
    Input[:MP][:A] = rand(2,5)
    # Slave problem data
    Input[:SP] = Dict{Symbol, Any}()
    Input[:SP][:lb] = zeros(10)
    Input[:SP][:ub] = ones(10)
    Input[:SP][:lc] = zeros(5)
    Input[:SP][:uc] = ones(5)
    Input[:SP][:A] = rand(5, 10)
    # one Gurobi environment per worker process, created once and reused
    # across solves, to avoid checking out a license for every SP
    if !(@isdefined env)
        const env = Gurobi.Env()
    end
end

# Master problem objective function
@everywhere begin
    function MP_OF_fun(x, Input)
        ## Slave problem definition
        model = direct_model(Gurobi.Optimizer(env))
        set_silent(model)
        # Slave problem variables
        @variable(model, Input[:SP][:lb][i] <= y[i=1:10] <= Input[:SP][:ub][i])
        # Slave problem constraints
        @constraint(model, Input[:SP][:A] * y .<= Input[:SP][:uc] .+ x)
        @constraint(model, Input[:SP][:A] * y .>= Input[:SP][:lc] .- x)
        # Slave problem objective function
        @objective(model, Max, sum(y))
        # Slave problem execution
        JuMP.optimize!(model)
        # Result
        if JuMP.termination_status(model) != OPTIMAL
            MP_res = 1e4 # penalty
        else
            MP_res = JuMP.objective_value(model)
        end
        #
        return MP_res
        #
    end
    MP_OF(x) = MP_OF_fun(x, Input)
end
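
# Sanity check (my addition, not part of the setup above): a single remote
# evaluation of the objective can be tried by hand before running the GA.
remotecall_fetch(MP_OF, workers()[1], rand(5))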

# Master problem constraints
MP_constr(x) = Input[:MP][:A] * (x .^ 2)

# Building the Master problem
# constraints
cstr = WorstFitnessConstraints(Input[:MP][:lb], Input[:MP][:ub], Input[:MP][:lc], Input[:MP][:uc], MP_constr)
# method
mthd = GA(populationSize = 50, ɛ = 0.05, crossoverRate = 0.8, mutationRate = 0.1, selection = rouletteinv,
          crossover = TPX, mutation = inversion)
# options
opts = Evolutionary.Options(iterations = 200, show_trace = true, store_trace = false)
# Master problem execution
MP_res = Evolutionary.optimize(MP_OF, cstr, mthd, opts) # ??

The MP is defined only in the main process, while the SP is defined in all the processes. The problem is that I cannot figure out how to provide and execute the parallelized objective function. I know that I should use @distributed for ..., but I do not know where or how. Here it is reported that an override of the value! function is required, but it remains obscure to me.
Please let me know if someone has any idea. Thanks.

It seems like you need Evolutionary.optimize to support parallel subproblem solves. This is independent of JuMP or Gurobi.jl.

So if you’ve tried

For this reason, I am trying to solve the slave problems in parallel.
I tried to use the :thread option (from here), but without success: a single SP actually takes much longer to execute, and the solver does not manage to complete the evaluation of the first population.

Then I’m not really sure what else you could or should do on the JuMP side of things. It depends on how things are set up internally in Evolutionary.jl.

Hello @odow, thank you very much for your answer. What I would like to do is evaluate the objective function in parallel over the elements of the population.
To do that, I should put something like this

@distributed for i in 1:N_pop
    rr[i] = MP_OF(Pop[i])
end

in the value! function, and theoretically it should be possible according to this part of the documentation, but I am having a hard time trying to understand how the value! function works, what its inputs are, and how the override could be done, since my knowledge of Julia is still very limited.
Do you think this is something that can be done, or does the structure of the Evolutionary package not allow this kind of task?
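From what I have read, a bare @distributed for writing into a regular Array would not bring the results back to the master process, so I suppose I would need a SharedArray or pmap. A minimal sketch of what I mean, where Pop, N_pop and rr are placeholder names:

using Distributed, SharedArrays

# placeholder population: N_pop candidate vectors of length 5
N_pop = 50
Pop = [rand(5) for _ in 1:N_pop]

# option 1: pmap collects the results on the master process
rr = pmap(MP_OF, Pop)

# option 2: a SharedArray is writable from all workers
rr = SharedArray{Float64}(N_pop)
@sync @distributed for i in 1:N_pop
    rr[i] = MP_OF(Pop[i])
end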

in the value! function, and theoretically it should be possible according to this part of the documentation

That part of the documentation is for someone writing a new algorithm, not for someone trying to solve a problem, so it doesn’t apply in this case.

I don’t think parallelizing your objective is something that you can do in your code. It’s something internal to Evolutionary.jl.

I see; then I will look for alternatives that do not rely on Evolutionary.jl.
Thank you again!
