The model creation code is given above (`create_model(size)`) and then I am calling (here for `n = 1000`):

```julia
@time (model = create_model(1000))
@time (solve_model!(model))
```
I am doing this a few times to average timings and GC usage (obviously not running this 100 times for `n = 2000`, but timings and GC usage are pretty stable across runs, since I am running this on an Ubuntu test machine with no other load).
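In case it helps, this is roughly how I average the repeated runs - a minimal sketch, where `average_time` is just an illustrative helper (not part of the benchmark script below) and the run count is arbitrary:

```julia
# Hypothetical helper: average wall-clock time of `f` over `nruns` runs.
function average_time(f, nruns::Int)
    times = Float64[]
    for _ in 1:nruns
        GC.gc()                      # start each run from a clean GC state
        push!(times, @elapsed f())   # @elapsed returns seconds as Float64
    end
    return sum(times) / length(times)
end

# Usage (size and run count are illustrative):
# avg = average_time(() -> create_model(1000), 5)
```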
Notes:
- Yes, I omit the initial calls so that compilation time is not included in the timings I listed.
- Turning off the GC is handled inside `create_model` (or not, in the case of “auto gc”), as is turning it on again - exactly like the code given in my last post. I am not calling that from the REPL but by running `julia benchmark_model.jl`.
- Full source (`benchmark_model.jl`) is given below.
Is that the explicit statement you were looking for, or am I overlooking something that you need here?
This then outputs, for example (`size = 1000`):

```
65.561803 seconds (420.44 M allocations: 23.330 GiB, 24.45% gc time)
12.195936 seconds
```
`benchmark_model.jl`:

```julia
using JuMP
using GLPK

"""
Create a simple JuMP model with a specified size.
"""
function create_model(size)
    T = 1:10000
    T2 = @view T[2:end]
    GC.gc(true)
    GC.enable(false)
    model = JuMP.direct_model(GLPK.Optimizer())
    set_time_limit_sec(model, 0.001)
    for start in 1:100:size
        I = start:(start + 99)
        x = @variable(model, [I, T], lower_bound = 0, upper_bound = 100)
        @constraint(model, [i in I, t in T2], x[i, t] - x[i, t-1] <= 10)
        @constraint(model, [i in I, t in T2], x[i, t-1] - x[i, t] <= 10)
        @objective(model, Min, sum(x[i, t] for t in T for i in I))
        GC.enable(true)
        GC.gc(false)
        GC.enable(false)
    end
    GC.enable(true)
    return model
end

"""
Solve the model (this actually just passes it to GLPK due to the time limit).
"""
function solve_model!(model)
    JuMP.optimize!(model)
end

# ensure pre-compilation
model = create_model(100)
model = create_model(200)
solve_model!(model)

# timed run
i = 1000
println("Size: $i")
@time (model = create_model(i))
@time (solve_model!(model))
```
Note: I’d made a stupid mistake after splitting up the constraint into two constraints (pointed out over here) - basically duplicating the `T` array during slicing - and fixing it increased performance by a (really) small amount. That’s why there is now a `T2` in the model creation.
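For context on why the constraint is split in two: the pair in `create_model` is the standard linearization of an absolute-difference bound, which an LP cannot express directly:

```latex
|x_{i,t} - x_{i,t-1}| \le 10
\quad\Longleftrightarrow\quad
x_{i,t} - x_{i,t-1} \le 10
\;\text{ and }\;
x_{i,t-1} - x_{i,t} \le 10
```

Both inequalities range over the same index sets `i in I, t in T2`, which is why the split doubles the constraint count without changing the feasible region.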