I find that if I create and solve models in a for loop, the memory usage increases dramatically. I would expect the allocated memory to stay roughly constant across iterations.
Here is the test code:
using JuMP, SCIP #,Gurobi
function test(iter)
    for i = 1:iter
        m = Model()
        nvar = 1000
        @variable(m, 0 <= x[1:nvar] <= 1)
        @constraint(m, con[j in 1:(nvar-2)], x[j] + x[j+1] <= 1)
        @objective(m, Min, sum{(x[j] + x[j+2])^2, j in 1:(nvar-2)})
        m.solver = SCIPSolver("display/verblevel", 0)
        # m.solver = GurobiSolver(LogToConsole=0, OutputFlag=0)
        solve(m)
        m = 1   # drop the reference to the model
        gc()    # force garbage collection at the end of each iteration
    end
end
test(1)  # warm-up call to exclude compilation from the timings
@time test(1)
@time test(10)
When I use SCIP I got
1.076273 seconds (96.84 k allocations: 72.203 MB, 4.01% gc time)
11.053048 seconds (967.77 k allocations: 721.995 MB, 4.07% gc time)
When I use Gurobi I got
0.076081 seconds (90.31 k allocations: 49.162 MB, 68.42% gc time)
0.685412 seconds (884.67 k allocations: 490.791 MB, 65.15% gc time)
So the allocated memory grows linearly with the number of iterations, which is surprising given that the same problem is solved at each iteration and gc() is called explicitly at the end of each iteration.
When I monitor the process with the top command on Linux and run test(1000), I find that the memory usage with Gurobi stays at about 10 GB, but with SCIP it starts at 10 GB and then climbs to and stays at about 30 GB.