I am solving many independent assignment/matching problems using JuMP. Since the problems are independent, solving each one separately is mathematically equivalent to solving one big combined problem. However, I am wondering about the computational efficiency of each approach. Solving many small problems incurs repeated model-building and setup costs, but solve times for larger problems often grow disproportionately once the problem size passes some threshold.
Here is an example of one of these small problems:
using JuMP, HiGHS
# Owned lots: quantities and per-unit valuations
qtyowned = [400, 400, 100, 100]
val = [10, 10, 10, 10.50]
# Sold lots: quantities and per-unit sale prices
qtysold = [200, 200, 200, 200, 100, 100]
saleprc = [10, 10, 10, 10, 10, 10.50]
N = length(qtysold)   # number of sold lots
K = length(qtyowned)  # number of owned lots
model = Model(HiGHS.Optimizer)
# x[i, k] = 1 if sold lot i is matched to owned lot k
@variable(model, x[i = 1:N, k = 1:K], Bin)
# Each sold lot is matched to exactly one owned lot
@constraint(model, [i = 1:N], sum(x[i, :]) == 1)
# Quantities matched to each owned lot must add up exactly
@constraint(model, [k = 1:K], sum(x[j, k] * qtysold[j] for j in 1:N) == qtyowned[k])
# Sale proceeds matched to each owned lot must equal its value
@constraint(model, [k = 1:K], sum(x[j, k] * qtysold[j] * saleprc[j] for j in 1:N) == qtyowned[k] * val[k])
optimize!(model)
round.(Int, value.(x))
NB: sometimes I also add a quadratic objective to align timestamps, if that matters at all.
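Purely for illustration, the kind of quadratic term I mean could look something like the sketch below; the timestamp vectors are invented and this is not necessarily the exact form I use. (It may also require swapping HiGHS for a solver that handles quadratic objectives with binary variables.)

# Illustrative only: invented timestamp data, and just one possible way to
# write a timestamp-alignment penalty.
tssold  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # timestamps of the sold lots
tsowned = [1.5, 3.0, 5.0, 6.0]             # timestamps of the owned lots
# Squared gap between each owned lot's timestamp and the quantity-weighted
# average timestamp of the sold lots matched to it (quadratic in x).
@objective(model, Min, sum(
    (sum(x[j, k] * qtysold[j] * tssold[j] for j in 1:N) / qtyowned[k] - tsowned[k])^2
    for k in 1:K))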
I can stack a bunch of these problems together either by making x really big and adding another eligibility constraint, or by looping over this and creating lots of subproblems in one model. Usually the dimensions are small, but sometimes N and K can be as large as 50 or 100 for a single problem. Would it be faster to keep them separate or combine them?
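To make the comparison concrete, here is a rough sketch of the "keep them separate" approach I have in mind: loop over the instances and build a fresh small model for each one. The problems vector of named tuples is just a placeholder for however the data actually arrives.

using JuMP, HiGHS
# Placeholder data: one named tuple per independent subproblem
problems = [
    (qtyowned = [400, 400, 100, 100], val = [10, 10, 10, 10.50],
     qtysold = [200, 200, 200, 200, 100, 100], saleprc = [10, 10, 10, 10, 10, 10.50]),
    # ...more instances in practice...
]
solutions = map(problems) do p
    N, K = length(p.qtysold), length(p.qtyowned)
    model = Model(HiGHS.Optimizer)
    set_silent(model)  # suppress per-solve logging when looping
    @variable(model, x[1:N, 1:K], Bin)
    @constraint(model, [i = 1:N], sum(x[i, :]) == 1)
    @constraint(model, [k = 1:K], sum(x[j, k] * p.qtysold[j] for j in 1:N) == p.qtyowned[k])
    @constraint(model, [k = 1:K],
        sum(x[j, k] * p.qtysold[j] * p.saleprc[j] for j in 1:N) == p.qtyowned[k] * p.val[k])
    optimize!(model)
    round.(Int, value.(x))
end

The combined version would be the same thing with all of the x blocks living in a single model and a single optimize! call; since the subproblems share no constraints, both versions describe the same feasible set.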
Also, separately, the first constraint is equivalent to SOS1(), correct? But I haven't found a solver that supports that constraint. Would that be faster?
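For reference, this is the SOS1 reformulation I had in mind, sketched with Gurobi purely as an example of a solver whose JuMP wrapper I believe accepts SOS1 constraints; the rest of the data is as in the small example above.

using JuMP
import Gurobi  # placeholder: any solver that accepts SOS1 constraints
qtyowned = [400, 400, 100, 100]
val = [10, 10, 10, 10.50]
qtysold = [200, 200, 200, 200, 100, 100]
saleprc = [10, 10, 10, 10, 10, 10.50]
N, K = length(qtysold), length(qtyowned)
model = Model(Gurobi.Optimizer)
# Continuous in [0, 1] instead of Bin; SOS1 on each row says "at most one
# nonzero", and combined with the sum-to-one row that forces exactly one 1.
@variable(model, 0 <= x[1:N, 1:K] <= 1)
@constraint(model, [i = 1:N], sum(x[i, :]) == 1)
for i in 1:N
    @constraint(model, x[i, :] in SOS1())
end
@constraint(model, [k = 1:K], sum(x[j, k] * qtysold[j] for j in 1:N) == qtyowned[k])
@constraint(model, [k = 1:K], sum(x[j, k] * qtysold[j] * saleprc[j] for j in 1:N) == qtyowned[k] * val[k])
optimize!(model)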