Parallel Solves in Gurobi.jl

I have a JuMP (mixed-integer) model where I’m trying to determine the upper and lower bounds on many variables. Each individual solve is very quick, but since I have many variables (~1,000), the overall solve time is slow (~5s).
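
For reference, the serial version is essentially the following (a sketch; variable_bounds is a made-up name, and m and x are the model and variables from the snippets below):

# Sketch of the serial approach: two solves (lower and upper bound) per variable.
function variable_bounds(m, x)
    lb, ub = Float64[], Float64[]
    for xi in x
        @objective(m, Min, xi)
        solve(m)
        push!(lb, getobjectivevalue(m))
        @objective(m, Max, xi)
        solve(m)
        push!(ub, getobjectivevalue(m))
    end
    return lb, ub
end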

One thing I observed is that Julia often seems to be using only one core at a time, and I was hoping that parallelization might reduce solve times significantly. However, I’m running into an issue that seems to stem from only a single Gurobi environment being created.

As recommended by @odow, I tried using a caching pool (https://github.com/JuliaOpt/Gurobi.jl/issues/119#issuecomment-375445961), but I can’t seem to get it working consistently:

julia> addprocs(2)

julia> @everywhere using JuMP, Gurobi

julia> @everywhere function upperbound_mip(x)
           @JuMP.objective(x.m, Max, x)  # x.m is the model that variable x belongs to
           JuMP.solve(x.m)
           return JuMP.getobjectivevalue(x.m)
       end

julia> m = JuMP.Model(solver = Gurobi.GurobiSolver(OutputFlag=0));

julia> @JuMP.variable(m, 0 <= x[i=1:100] <= i);

julia> upperbound_mip(x[1])  # added line, not in odow’s original code sample

julia> let x=x
       wp = CachingPool(workers())
       pmap(wp, (i)->upperbound_mip(x[i]), 1:100)
       end
ERROR: On worker 2:
AssertionError: env.ptr_env != C_NULL
get_error_msg at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:38
Type at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:50 [inlined]
get_intattr at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_attrs.jl:16
setvarLB! at /home/vtjeng/.julia/v0.6/Gurobi/src/GurobiSolverInterface.jl:187
#build#119 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:338
#build at ./<missing>:0
#solve#116 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:168
upperbound_mip at ./REPL[3]:3
#17 at ./REPL[8]:3
#exec_from_cache#193 at ./distributed/workerpool.jl:284
exec_from_cache at ./distributed/workerpool.jl:283
#106 at ./distributed/process_messages.jl:268 [inlined]
run_work_thunk at ./distributed/process_messages.jl:56
macro expansion at ./distributed/process_messages.jl:268 [inlined]
#105 at ./event.jl:73

As a note, the earlier thread “Error solve JuMP models in parallel” (post #7, by miles.lubin) mentions that the best approach is to generate the models locally on each process.

The issue with the added line is that it instantiates the Gurobi model on the master process and then copies that model to the other processes. You need to create a different Gurobi model on each process.
One option is to build a fresh model inside the function on each worker:

addprocs(2)

@everywhere using JuMP, Gurobi

@everywhere const env = Gurobi.Env()

@everywhere function upperbound_mip2(i)
    # Build a fresh model on whichever worker runs this function.
    m = Model(solver = GurobiSolver(env, OutputFlag = 0))
    @variable(m, 0 <= x[j = 1:100] <= j)  # index renamed to j to avoid shadowing the argument i
    @objective(m, Max, x[i])
    solve(m)
    return getobjectivevalue(m)
end

pmap(upperbound_mip2, 1:100)

Another option is to build the model once per process, up front, and reuse it:

addprocs(2)

@everywhere using JuMP, Gurobi

# One env and one model per process, built once and reused for every solve.
@everywhere const env = Gurobi.Env()
@everywhere const m = Model(solver = GurobiSolver(env, OutputFlag = 0))
@everywhere @variable(m, 0 <= x[i = 1:100] <= i)

@everywhere function upperbound_mip2(i)
    @objective(m, Max, x[i])
    solve(m)
    return getobjectivevalue(m)
end

pmap(upperbound_mip2, 1:100)

Just FYI, you should be able to do

@everywhere begin
   ...
   ...
end

as well.
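
For example, the second option above could be written as a single block (a sketch, assuming the same addprocs(2) setup):

@everywhere begin
    using JuMP, Gurobi
    const env = Gurobi.Env()
    const m = Model(solver = GurobiSolver(env, OutputFlag = 0))
    @variable(m, 0 <= x[i = 1:100] <= i)
    function upperbound_mip2(i)
        @objective(m, Max, x[i])
        solve(m)
        return getobjectivevalue(m)
    end
end

pmap(upperbound_mip2, 1:100)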


I’m trying to do something similar: I want to repeatedly solve the same model with a slightly different objective function each time. In my setting, creating the model is relatively costly because there are many constraints, so I’d prefer to build the model once and then update the objective function for each solve. A small example:

using Distributed  # needed for addprocs on Julia 1.x
addprocs(1)

@everywhere using Gurobi, JuMP
@everywhere const env = Gurobi.Env()

@everywhere function make_model()
    # Each process reuses its own const env when building models.
    m = Model(optimizer_with_attributes(
        () -> Gurobi.Optimizer(env), "OutputFlag" => 0))
    @variable(m, 0 <= x <= 1)
    return m
end

@everywhere function update_obj!(m::Model, a::Real)
    x = m[:x]
    @objective(m, Max, a*x)
    return m
end

@everywhere function update_and_optimize!(m::Model, a::Real)
    update_obj!(m, a)
    optimize!(m)
    return objective_value(m)
end

@everywhere function make_and_optimize(a::Real)
    m = make_model()
    return update_and_optimize!(m, a)
end

Ultimately, what I’d like to do is something like

A = 1:10
m0 = make_model()
@distributed (+) for a in A
    update_and_optimize!(m0, a)
end

When I create the model, update the objective, and solve it, all at once on the worker process, it works fine. The following call returns 10 as expected:

remotecall_fetch(() -> make_and_optimize(10), 2)

But when I make the model first (whether on the main or worker process) and then try to update and solve it, both calls below result in an AssertionError: is_valid(env):

m1 = make_model()
remotecall_fetch(() -> update_and_optimize!(m1, 10), 2)

m2 = remotecall_fetch(() -> make_model(), 2)
remotecall_fetch(() -> update_and_optimize!(m2, 10), 2)

Thanks in advance for your help!

I haven’t tried this, but I think you might need to get a separate env for each parallel worker you’re kicking off.

Thanks for your reply! Hmm, I thought that was what @everywhere env = ... was doing. The env pointers seem to be in two different places on the main and worker processes:

julia> @show env;
env = Gurobi.Env(Ptr{Nothing} @0x0000000014287d40)

julia> remotecall_fetch(() -> begin @show env end, 2);
From worker 2:    env = Gurobi.Env(Ptr{Nothing} @0x0000000007e3fe30)

Though strangely, the pointer displayed using @show seems to be different from the one actually returned by remotecall_fetch:

julia> remotecall_fetch(() -> begin @show env.ptr_env end, 2)
From worker 2:    env.ptr_env = Ptr{Nothing} @0x0000000007e3fe30
Ptr{Nothing} @0x0000000000000000

(I suspect this is because Julia’s serializer replaces raw pointers with C_NULL when a value is sent between processes, so what comes back to the master is a copy of the Env with a null pointer.)

This makes the model m0 once, on the master process, and then copies it to each worker. You need to construct the model with a different env on each worker:

using Distributed
addprocs(3; exeflags = "--project=/tmp/gur")  # Replace with your project
@everywhere begin
    using JuMP
    using Gurobi
    const GRB_ENV = Gurobi.Env()  # named GRB_ENV so it doesn't shadow Base.ENV
end
@everywhere begin
    model = Model(() -> Gurobi.Optimizer(GRB_ENV))
    set_silent(model)
    @variable(model, 0 <= x <= 1)
    function update_and_solve(a)
        println("Solving $(a) from $(myid())")
        @objective(model, Max, a * x)
        optimize!(model)
        sleep(1.0)  # artificial delay so the output shows the work spread across workers
        return objective_value(model)
    end
end

julia> @distributed (+) for a in 1:10
           update_and_solve(a)
       end
      From worker 3:	Solving 5 from 3
      From worker 2:	Solving 1 from 2
      From worker 4:	Solving 8 from 4
      From worker 3:	Solving 6 from 3
      From worker 2:	Solving 2 from 2
      From worker 4:	Solving 9 from 4
      From worker 4:	Solving 10 from 4
      From worker 3:	Solving 7 from 3
      From worker 2:	Solving 3 from 2
      From worker 2:	Solving 4 from 2
55.0

(Each solve returns a * 1 = a, so the (+) reduction gives sum(1:10) == 55.)

This worked great for me, thanks!