I have a JuMP (mixed-integer) model where I'm trying to determine upper and lower bounds on many of the variables. Each individual solve is very quick, but since there are many variables (~1,000), the overall solve time is slow (~5s).
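For reference, the serial version is roughly the following (a simplified sketch; the actual model has more structure, and the variable names here are illustrative):

```julia
using JuMP, Gurobi

# Build the model once; reuse it for every bound computation.
m = JuMP.Model(solver = Gurobi.GurobiSolver(OutputFlag=0))
@JuMP.variable(m, 0 <= x[i=1:1000] <= i)

# Upper bound on each variable: maximize it subject to the model constraints.
ub = Vector{Float64}(1000)
for i in 1:1000
    @JuMP.objective(m, Max, x[i])
    JuMP.solve(m)
    ub[i] = JuMP.getobjectivevalue(m)
end
```

Each iteration only swaps the objective, so the individual solves are fast; it's the 1,000 sequential solves that add up.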
One thing I observed is that Julia often seems to be using only one core at a time, and I was hoping that parallelization might reduce solve times significantly. However, I'm running into an issue that seems to stem from there being only a single Gurobi environment.
As recommended by @odow, I tried using a caching pool (https://github.com/JuliaOpt/Gurobi.jl/issues/119#issuecomment-375445961), but I can’t seem to get it working consistently:
```julia
julia> addprocs(2)

julia> @everywhere using JuMP, Gurobi

julia> @everywhere function upperbound_mip(x)
           @JuMP.objective(x.m, Max, x)
           JuMP.solve(x.m)
           return JuMP.getobjectivevalue(x.m)
       end

julia> m = JuMP.Model(solver = Gurobi.GurobiSolver(OutputFlag=0));

julia> @JuMP.variable(m, 0 <= x[i=1:100] <= i);

julia> upperbound_mip(x[1]) # added: new line not in code sample from odow

julia> let x = x
           wp = CachingPool(workers())
           pmap(wp, (i) -> upperbound_mip(x[i]), 1:100)
       end
```
```
ERROR: On worker 2: AssertionError: env.ptr_env != C_NULL
get_error_msg at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:38
Type at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_env.jl:50 [inlined]
get_intattr at /home/vtjeng/.julia/v0.6/Gurobi/src/grb_attrs.jl:16
setvarLB! at /home/vtjeng/.julia/v0.6/Gurobi/src/GurobiSolverInterface.jl:187
#build#119 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:338
#build at ./<missing>:0
#solve#116 at /home/vtjeng/.julia/v0.6/JuMP/src/solvers.jl:168
upperbound_mip at ./REPL:3
#17 at ./REPL:3
#exec_from_cache#193 at ./distributed/workerpool.jl:284
exec_from_cache at ./distributed/workerpool.jl:283
#106 at ./distributed/process_messages.jl:268 [inlined]
run_work_thunk at ./distributed/process_messages.jl:56
macro expansion at ./distributed/process_messages.jl:268 [inlined]
#105 at ./event.jl:73
```
As a note, the thread "Error solve JuMP models in parallel" mentions that the best approach is to build the models locally on each worker.
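Following that suggestion, a minimal sketch of building one model per worker (so each process gets its own Gurobi environment, and no Gurobi pointers are serialized across processes) might look like this. Names like `build_model` are mine, not from odow's sample, and I'm assuming the model is cheap enough to rebuild on each worker:

```julia
addprocs(2)
@everywhere using JuMP, Gurobi

# Construct the model on each worker, so each process owns its
# own Gurobi environment instead of sharing a serialized pointer.
@everywhere function build_model()
    m = JuMP.Model(solver = Gurobi.GurobiSolver(OutputFlag=0))
    @JuMP.variable(m, 0 <= x[i=1:100] <= i)
    return m, x
end

# One model + variable vector per process, built locally.
@everywhere const MODEL, VARS = build_model()

# Each call only swaps the objective on the worker-local model.
@everywhere function upperbound_mip(i)
    @JuMP.objective(MODEL, Max, VARS[i])
    JuMP.solve(MODEL)
    return JuMP.getobjectivevalue(MODEL)
end

pmap(upperbound_mip, 1:100)
```

The trade-off is that the model is built once per worker rather than once overall, which seems acceptable here since the build is cheap relative to the 1,000 solves.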