I am trying to solve a very large (N = 100M variables) and very sparse (density ~ 1/N) QUBO problem using CPLEX interfaced via JuMP. To this end, I'm using the following function:
using JuMP, CPLEX, LinearAlgebra

function solve_with_CPLEX(Q)
    L = size(Q, 1)
    model = direct_model(CPLEX.Optimizer())
    set_time_limit_sec(model, 3600 * 3)
    set_optimizer_attribute(model, "CPXPARAM_Threads", 8)
    # collect(1:L) makes JuMP build a DenseAxisArray, hence x.data below
    @variable(model, x[i ∈ collect(1:L)], Bin)
    @objective(model, Min, x.data' * triu(Q) * x.data)
    tts = @elapsed optimize!(model)
    value.(x.data), tts
end
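For reference, a call on a small synthetic instance might look like this (the sprand setup is purely illustrative, not my real data; a working CPLEX installation is required):

using SparseArrays

N = 1_000                  # small test size; the real problem has N = 100M
Q = sprand(N, N, 1 / N)    # random sparse matrix with density ~ 1/N
xsol, tts = solve_with_CPLEX(Q)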
It works well for small problems, but the time needed just to build the model (i.e. everything except the optimize! call) grows very rapidly; see the plot below.
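To be concrete about what "build the model" means, I time only the construction, along these lines (a sketch, with Q and L already in scope):

build_time = @elapsed begin
    model = direct_model(CPLEX.Optimizer())
    @variable(model, x[i ∈ collect(1:L)], Bin)
    @objective(model, Min, x.data' * triu(Q) * x.data)
end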
Thanks for the reply.
Memory is not the problem, I have around 1 TB. But I think I have identified the culprit. I omitted one detail from my MWE: my QUBO problems come from Ising models, so the solve_with_CPLEX function actually looks like this:
function solve_with_CPLEX(J, h)
    Q = to_qubo(J, h)    # convert the Ising instance (J, h) to a QUBO matrix, see below
    L = size(Q, 1)
    model = direct_model(CPLEX.Optimizer())
    set_time_limit_sec(model, 3600 * 3)
    set_optimizer_attribute(model, "CPXPARAM_Threads", 8)
    @variable(model, x[i ∈ collect(1:L)], Bin)
    @objective(model, Min, x.data' * triu(Q) * x.data)
    tts = @elapsed optimize!(model)
    value.(x.data), tts
end
where
function to_qubo(J::AbstractMatrix{T}, h::AbstractVector{T}) where {T}
    b = dropdims(sum(J, dims = 2), dims = 2)    # row sums of J
    Q = T(4) .* J                               # keeps J's sparsity pattern
    Q[diagind(Q)] .+= T(2) .* (h .- b)          # shift the diagonal
    Q
end
And it turns out the line Q[diagind(Q)] .+= T(2) .* (h .- b) is super slow.
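That would make sense if J is a SparseMatrixCSC: T(4) .* J preserves J's sparsity pattern, so with a mostly structurally zero diagonal, Q[diagind(Q)] .+= ... inserts up to N new stored entries one at a time, and each insertion shifts the CSC buffers, roughly O(N · nnz) in total. A minimal sketch of a workaround (to_qubo_fast is just an illustrative name, and J is assumed sparse) that builds the diagonal correction as its own sparse matrix and adds it in a single sparse-sparse sum:

using LinearAlgebra, SparseArrays

function to_qubo_fast(J::AbstractMatrix{T}, h::AbstractVector{T}) where {T}
    b = dropdims(sum(J, dims = 2), dims = 2)
    # One sparse-sparse addition instead of N single-element insertions:
    # spdiagm materializes the diagonal correction as a sparse matrix up front.
    T(4) .* J + spdiagm(0 => T(2) .* (h .- b))
end

Because the diagonal is built up front, the + can merge the two column structures in one pass rather than reshuffling the stored entries once per diagonal element.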