julia> function a_correct_way_to_build_with_multithreading()
           model = Model()
           @variable(model, x[1:10])
           my_lock = Threads.ReentrantLock()
           Threads.@threads for i in 1:10
               con = @build_constraint(x[i] <= i)
               Threads.lock(my_lock) do
                   add_constraint(model, con)
               end
           end
           return model
       end
The code above, from JuMP’s documentation, can be improved if I want access to a reference to the constraint that gets added.
Here is a feasible method:
con = JuMP.@build_constraint(x[1] <= 1)
cut = false # placeholder; I don't know how to construct a concrete JuMP.ConstraintRef up front, so I use `false` instead
Threads.lock(my_lock) do
    cut = JuMP.add_constraint(model, con) # retrieve the reference
end
# [important] Now I can do something with `cut` here
Is my “feasible” method above correct? I’m a bit unsure, since a `do` block introduces a hard local scope.
julia> function f(af) # note: the argument `af` is never called inside `f`
           a = 3
           println("inside function: a = $a")
       end
f (generic function with 1 method)
julia> a = 0;
julia> for i = 1:1
           a = 2
           f() do
               a = 7
           end
           println("inside for: a = $a")
       end
inside function: a = 3
inside for: a = 2
julia> a
2
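For contrast, here is a small sketch (the names `g` and `demo` are mine) showing that when the receiving function does call the closure, an assignment inside the `do` block does reach the captured local:

```julia
# Hypothetical counterpart of `f` that actually invokes the passed closure.
function g(af)
    af()
    return nothing
end

function demo()
    a = 2
    g() do
        a = 7 # assigns the local `a` captured from demo's scope
    end
    return a
end

demo() # returns 7: the closure mutated the captured local
```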
Given this behavior, how can my analogous code below be made correct?
julia> Threads.nthreads()
4
julia> using JuMP
julia> my_lock = Threads.ReentrantLock();
julia> model = Model();
julia> JuMP.@variable(model, x[1:3]);
julia> for k = 1:1
           Threads.@threads for i = 1:3
               # The following line is indispensable; if it is absent, an ERROR occurs
               cut = "the initial cut inside for i = $i"
               con = @build_constraint(x[i] <= i)
               Threads.lock(my_lock) do
                   cut = JuMP.add_constraint(model, con)
               end
               println("at the end of i = $i, cut = $cut")
           end
       end
at the end of i = 1, cut = x[1] <= 1
at the end of i = 3, cut = x[3] <= 3
at the end of i = 2, cut = x[2] <= 2
julia> print(model)
Feasibility
Subject to
x[3] <= 3
x[1] <= 1
x[2] <= 2
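A JuMP-free sketch of why that initialization line matters (the function names are mine); inside a function the same scoping logic applies:

```julia
# Without an assignment to `cut` in the enclosing scope, the assignment
# inside the closure creates a variable local to the closure only,
# and the later read falls back to an (undefined) global.
function no_init()
    map([1]) do x
        cut = x # local to this closure
    end
    return cut # UndefVarError
end

function with_init()
    cut = 0 # now `cut` is a local of `with_init`
    map([1]) do x
        cut = x # assigns the captured local
    end
    return cut # 1
end
```

Calling `no_init()` throws `UndefVarError`, while `with_init()` returns `1`.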
The question is written in the title.
The code within `do`-`end` is essentially the body of an anonymous function. Although my method appears to work, I don’t think it is clean. Can you think of a better alternative?
The following code explains why I consider my crude method fragile:
# open a new julia REPL
julia> for _ = 1:1
           cut = false
           map([1, 3]) do x
               cut = rand()
               x^2
           end
           println("cut = $cut")
       end
cut = 0.8839225864031239
julia> cut = false;
julia> for _ = 1:1
           cut = false
           map([1, 3]) do x
               cut = rand()
               x^2
           end
           println("cut = $cut")
       end
cut = false
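One way to sidestep these scope rules entirely, whether or not a global of the same name already exists, is to write through a `Ref` cell; a minimal sketch:

```julia
# The cell is only read inside the closure and mutated via setindex!;
# `cut_ref` itself is never assigned, so no new closure-local can appear.
cut_ref = Ref(0.0)
map([1, 3]) do x
    cut_ref[] = Float64(x) # mutates the cell's contents
    x^2
end
cut_ref[] # 3.0
```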
The difference: once a global `cut` exists, the `cut` inside the `for` loop refers to that global (REPL soft scope), and an assignment to a global from inside a closure creates a new closure-local variable instead of updating the global. I can provide a solution (use `lock`/`try`/`finally` instead of the tricky `do` syntax) as follows:
import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
my_lock = Threads.ReentrantLock();
std_val = sum(1:1000000);
model = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV));
JuMP.@variable(model, x[1:1000000]);
for _ = 1:1 # This loop imitates the main training loop of some algorithm (e.g. Benders decomposition)
    Threads.@threads for j = 1:1000000 # iterations are executed in parallel
        local con = JuMP.@build_constraint(x[j] ≤ j)
        local cut = false
        lock(my_lock)
        try
            cut = JuMP.add_constraint(model, con) # ::JuMP.ConstraintRef
        finally
            unlock(my_lock)
        end
        # ★ Important ★ You can do something with `cut` HERE, e.g.
        JuMP.owner_model(cut) == model || error()
    end
end
JuMP.@objective(model, Max, sum(x));
JuMP.optimize!(model); JuMP.assert_is_solved_and_feasible(model; allow_local = false)
Δ = std_val - JuMP.objective_value(model)
abs(Δ) < 1e-11 && @info "test success"
I’m afraid that method is unusable, and somewhat misleading for a new user. The reason is that, in my context, the counterpart of `run_do_block` is `Threads.lock`. You have no idea what the concrete implementation of `Threads.lock` is; it could, for example, return `nothing`:
julia> function run_do_block(f) # the concrete implementation determines what the caller sees
           f()
           return nothing
       end;

julia> cut = run_do_block() do # `cut` is bound to run_do_block's return value, not this block's
           cut = 1
           return cut
       end;
julia> isnothing(cut) # it depends on the concrete code implementation of `run_do_block`
true
meaning that the `lock` function happens to return whatever the anonymous function returns. The correctness depends on the concrete implementation of `lock`, which is invisible from the user’s side.
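To make that dependence concrete, here is a minimal check; it relies on the current, undocumented behavior of Base's `lock(f, l)`:

```julia
lck = ReentrantLock()
val = lock(lck) do
    42 # the closure's return value
end
# In current Julia versions val == 42, because Base's lock(f, l)
# happens to `return f()` internally; this is not a documented guarantee.
```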
By comparison, your Ref method is more agreeable.
import JuMP, Gurobi; GRB_ENV = Gurobi.Env();
my_lock = Threads.ReentrantLock();
std_val = sum(1:1000000);
model = JuMP.Model(() -> Gurobi.Optimizer(GRB_ENV));
JuMP.@variable(model, x[1:1000000]);
for _ = 1:1 # This loop imitates the main training loop of some algorithm (e.g. Benders decomposition)
    Threads.@threads for j = 1:1000000 # iterations are executed in parallel
        local con = JuMP.@build_constraint(x[j] ≤ j)
        local ref = Ref{JuMP.ConstraintRef}()
        Threads.lock(my_lock) do
            ref[] = JuMP.add_constraint(model, con)
            nothing # you can do some other tasks here
        end
        # ★ Important ★ You can do something with `ref[]` HERE, e.g.
        JuMP.owner_model(ref[]) == model || error()
    end
end
JuMP.@objective(model, Max, sum(x));
JuMP.optimize!(model); JuMP.assert_is_solved_and_feasible(model; allow_local = false)
Δ = std_val - JuMP.objective_value(model)
abs(Δ) < 1e-11 && @info "test success"
The doc for `lock(f::Function, lock)` does not explicitly state that it returns the value of `f`. I think it should have. I doubt this behavior will ever change: there is no reason to change it, and too much code relies on it. Anyway, if you think this is a problem, you can instead use
ret = @lock lock begin
    stuff
    more stuff
    v # return value
end
The doc states what it expands to:

lock(l)
try
    expr
finally
    unlock(l)
end

That is, it is documented that `@lock` evaluates to the value of `expr`.
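A quick sanity check of that expansion (using the fully qualified `Base.@lock`, since the macro is only exported in recent Julia versions):

```julia
lck = ReentrantLock()
ret = Base.@lock lck begin
    21 * 2
end
ret # 42: the value of the last expression in the block
```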
I like it. Now we have all three methods, with the macro version being
# ... omit the same code ...
Threads.@threads for j = 1:1000000 # iterations are executed in parallel
    local con = JuMP.@build_constraint(x[j] ≤ j)
    local cut = @lock my_lock JuMP.add_constraint(model, con)
    JuMP.owner_model(cut) == model || error()
end
# ... omit the same code ...