My question is inspired by an optimization problem, e.g. the famous Benders decomposition, but I'll introduce it here in an abstract, easily comprehensible fashion.
Setting of the problem
On one hand, there is a master problem keeping hold of a set of cuts (a.k.a. constraints, in optimization). The status of the master problem is completely determined by that set of cuts. At a certain status, the master problem can produce a corresponding `x`:
```julia
function solve_master(model) # a JuMP Model at some certain status
    x::Vector{Float64} = _some_computation(model)
    return x # a trial solution
end
```
`_some_computation` here solves an LP (linear program), so it is relatively easy, taking little time to finish.
On the other hand, there is a vector of subproblems (`j in 1:J`). Given a certain `j`, along with some `x` solved from `solve_master`, the `j`th subproblem can (always) produce a cut for the (master) `model`:
```julia
function solve_subproblem(j::Int, x::Vector{Float64})
    cut = _some_computing_work(j, x)
    return cut
end
```
Although a `cut` can surely be produced as such, it may fail to strictly enrich the (master) `model` (i.e. it may not be helpful). If it is not helpful, we would discard it outright. To decide whether a `cut` is helpful, we would query `x_new = solve_master(model)`, in which the argument `model` should ideally be at its latest status. Note that the `x` that we used to generate the `cut` could have already become outdated, since `solve_subproblem` takes a certain period of time (in practice this is a MIP solve, which takes relatively long to finish). Therefore the odds are that `model` has received some helpful cuts from some `j_other` which isn't `j`.
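For concreteness, here is one hedged sketch of what `_some_judge` could look like. It assumes (my own assumption, not part of the setting above) that a cut is an affine constraint `a'x + b <= 0` that every master solution must satisfy; the cut is then helpful exactly when the trial point violates it, i.e. when it would actually be cut off:

```julia
# Hypothetical cut representation: the affine constraint a'x + b <= 0.
struct Cut
    a::Vector{Float64}
    b::Float64
end

# A cut strictly enriches the model iff the trial point `x` violates it
# (beyond a small tolerance), i.e. the cut actually cuts `x` off.
function _some_judge(j::Int, cut::Cut, x::Vector{Float64}; tol::Float64 = 1e-6)
    violation = sum(cut.a .* x) + cut.b
    return violation > tol
end
```

With this representation, judging against an outdated `x` is still sound: a cut that cuts off the newest `x_new` is helpful, while one that only cuts off the old `x` may simply be discarded one round later.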
Now we assume the general case `J > 1`; say, `J = 2`.

Since the `J` subproblems can be solved independently, we can formulate the overall iterative algorithm as:
```julia
model = _some_initialization()
const J = 2
const my_lock = Threads.ReentrantLock()
while true
    x = solve_master(model) # `model` should be at its present status
    is_globally_success = Threads.Atomic{Bool}(false)
    Threads.@threads for j = 1:J
        local cut = solve_subproblem(j, x)
        # If possible, we had best judge against an up-to-date `x_new = solve_master(model)` (but how?).
        # Nonetheless, using the existing `x` is still valid.
        local is_success::Bool = _some_judge(j, cut, x)
        if is_success
            @lock my_lock add_cut!(model, cut) # strictly enrich the (master) `model`
            is_globally_success[] = true
        end
    end
    is_globally_success[] && continue
    break
end
```
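To make that loop self-contained and runnable, here is a toy version in which the problem-specific pieces (`solve_master`, `solve_subproblem`, `_some_judge`, `add_cut!`) are replaced by trivial stand-ins of my own; only the threading pattern is the point. The judge is folded under the lock here so the toy is free of data races on `model`:

```julia
using Base.Threads

function synchronous_rounds(J::Int)
    model = Float64[]                       # toy "model": the list of accepted cuts
    my_lock = ReentrantLock()
    while true
        x = Float64(length(model))          # stand-in for solve_master(model)
        is_globally_success = Atomic{Bool}(false)
        @threads for j in 1:J
            cut = x + j                     # stand-in for solve_subproblem(j, x)
            lock(my_lock) do
                if length(model) < 5        # stand-in for _some_judge(j, cut, x)
                    push!(model, cut)       # stand-in for add_cut!(model, cut)
                    is_globally_success[] = true
                end
            end
        end
        is_globally_success[] || break      # no helpful cut this round: quit
    end
    return model
end
```

Note the synchronization point: every round waits for the slowest subproblem before the master re-solves, which is exactly the inefficiency I complain about next.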
Comment: Although the code above embodies the idea of multithreading and is theoretically able to generate the correct result, it lacks an asynchronous spirit.
For simplicity we let `J = 2`. Then there are 3 entities: Master, subproblem1, subproblem2.
- The main work for Master is `x = _some_computation(model)`
- The main work for subproblem1 is `cut_1 = _some_computing_work(1, x)`
- The main work for subproblem2 is `cut_2 = _some_computing_work(2, x)`

(By "main work" I mean that it takes some CPU time to finish.)
I wonder if there is a way in Julia to achieve the following:
- From the standpoint of subproblem1: it restarts its main work (not immediately, but schedules one) whenever an updated `x` is available (same for subproblem2).
- From the standpoint of Master: it restarts its main work (not immediately, but schedules one) whenever there is any update (i.e. from any `j in 1:J`) to the `model`. If there is more than one update, it takes the latest `model` (i.e. with all updates accepted) as its argument and restarts its main work.
So the philosophy is "event-triggered", where events include:
- a new `x` is generated (and thus available)
- a successful update due to subproblem1 is available
- a successful update due to subproblem2 is available
The termination criterion is simple: at some point when a new `x` has been generated, neither subproblem1 nor subproblem2 can come up with a successful update; then we quit the overall algorithm.
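To illustrate the kind of coordination I have in mind, here is a hedged sketch built only on `Task`s and `Channel`s. The problem-specific functions are toy stand-ins of my own, and I attach a version number to each trial solution so stale replies can be recognized. In this design only the master task mutates `model`, so no lock is needed:

```julia
using Base.Threads

const J = 2

# Toy stand-ins (hypothetical): `model` is just the vector of accepted cuts.
_solve_master(model) = Float64(length(model))   # trial solution `x`
_solve_subproblem(j, x) = x + j                 # a "cut"
_is_helpful(model, cut) = length(model) < 5     # helpful until 5 cuts accepted

function run_event_triggered()
    model = Float64[]
    # one buffered mailbox per subproblem, carrying (version, x) pairs
    x_chan = [Channel{Tuple{Int,Float64}}(Inf) for _ in 1:J]
    cut_chan = Channel{Tuple{Int,Int,Float64}}(Inf)  # (j, version, cut)

    workers = map(1:J) do j
        @spawn for (v, x) in x_chan[j]          # event: an updated `x` arrived
            while isready(x_chan[j])            # skip stale x's, work on the latest
                (v, x) = take!(x_chan[j])
            end
            put!(cut_chan, (j, v, _solve_subproblem(j, x)))
        end
    end

    function publish!(v, xval)                  # broadcast a fresh trial solution
        foreach(c -> put!(c, (v, xval)), x_chan)
    end

    version = 1
    publish!(version, _solve_master(model))
    n_unhelpful = 0                             # unhelpful replies to current version
    while true
        (j, v, cut) = take!(cut_chan)           # event: subproblem `j` replied
        if _is_helpful(model, cut)              # a cut from an old `x` may still help
            push!(model, cut)                   # `model` is strictly enriched
            version += 1                        # Master restarts on the latest model
            publish!(version, _solve_master(model))
            n_unhelpful = 0
        elseif v == version
            n_unhelpful += 1
            n_unhelpful == J && break           # nobody can improve the latest x: quit
        end                                     # stale unhelpful replies are ignored
    end
    foreach(close, x_chan)
    foreach(wait, workers)
    return model
end
```

Each subproblem task drains its mailbox to the most recent `x` before working, and the master re-solves and rebroadcasts as soon as any helpful cut arrives; the loop quits once all `J` subproblems reply "unhelpful" to the latest trial solution. I am not sure this is the idiomatic way, hence the question.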