Optimal model creation (and garbage collection)

Hi! I’m rather new to JuMP (and Julia) and I’m trying to compare the speed of creating a model using JuMP to an existing Python project (trying to decide whether it would be beneficial to switch to JuMP, now that 1.0 is “here”).

I’m using the following function to create a simple model with 10000 timesteps and a variable size:

using JuMP

function create_model(size)
    I = 1:size
    T = 1:10000

    model = JuMP.Model()
    # variable bounds implement 0 <= x <= 100
    @variable(model, x[I, T], lower_bound=0, upper_bound=100)
    # ramping limit |x[i,t] - x[i,t-1]| <= 10 between consecutive timesteps
    @constraint(model, [i in I, t in T[2:end]], -10 <= x[i, t] - x[i, t-1] <= 10)
    @objective(model, Min, sum(x[i, t] for t in T for i in I))
    return model
end

This is just:

\min \sum_{i \in I,\, t \in T} x_{i,t} \\
\text{s.t.} \quad 0 \leq x_{i,t} \leq 100 \qquad \forall\, i \in I,\ t \in T \\
\phantom{\text{s.t.}} \quad |x_{i,t} - x_{i,t-1}| \leq 10 \qquad \forall\, i \in I,\ t \in T \setminus \{1\}

Now I have two questions:

  1. I assume there is a more efficient way to construct this problem (constructing the same problem in Python is considerably faster, though the Python code is pretty optimized)? Any tips/tricks?
  2. Benchmarking the construction time, I see GC at around 40%. I tried disabling the garbage collector beforehand (using GC.enable(false)), but I feel that should not be the go-to solution, right? Is there any way to “configure” the garbage collector, …?
BenchmarkTools.Trial: 3 samples with 1 evaluation.
 Range (min … max):  1.438 s …    1.880 s  ┊ GC (min … max): 31.25% … 45.86%
 Time  (median):     1.817 s               ┊ GC (median):    41.84%
 Time  (mean ± σ):   1.712 s ± 239.067 ms  ┊ GC (mean ± σ):  40.34% ±  7.55%
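
For reference, the trial above comes from a plain BenchmarkTools run, roughly like this (the model size shown here is illustrative):

using BenchmarkTools

# each evaluation takes > 1s, so the default 5s time budget yields only a few samples
@benchmark create_model(100)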

Thanks for any hints,
Stefan

1 Like

Welcome to the community!

I am curious, because this is the second question in a few days concerning the time for model building. Does that time (< 2s) make a difference in your case?

In my experience with optimization the time for model building was always irrelevant relative to the time to solve it, so I’m intrigued by the type of application where that is an issue.

2 Likes

Building the JuMP model is only part of the time. What solver are you using? Did you time how long it took to solve the model and return a solution? Some of the Python packages can build a symbolic model without populating the data; when the model then has to be solved, they take a lot longer.

1 Like

Playing with my new favourite toy

using BenchmarkTools
using JuMP
using JET

function create_model(size)
    I = 1:size
    T = 1:10000

    model = JuMP.Model()
    @variable(model, x[I, T], lower_bound=0, upper_bound=100)
    @constraint(model, [i in I, t in T[2:end]], -10 <= x[i, t] - x[i, t-1] <= 10)
    @objective(model, Min, sum(x[i, t] for t in T for i in I))
    return model
end

# @btime create_model(100)
JET.@report_opt create_model(100)

yields

[ Info: Precompiling JET [c3a54625-cd67-489e-a8e7-0a5a0ff4e31b]
═════ 124 possible errors found ═════
[minor details omitted]

Maybe there is room for improvement.

I will add this example not because it solves anything (the solver I use does not accept general constraints, only bounds), but it may give a counterexample to compare against and help clarify what the OP is actually looking for:

julia> using SPGBox

julia> function solve(n)
           x = 500*rand(n); lower=zeros(n); upper = 100*ones(n)
           spgbox!(sum, (g,x) -> g .= 1, x, lower=lower, upper=upper)
       end
solve (generic function with 1 method)

julia> @time solve(10^4)
  0.135097 seconds (8.19 k allocations: 916.875 KiB, 99.60% compilation time)

 SPGBOX RESULT: 

 Convergence achieved. 

 Final objective function value = 0.0
 Best solution found = [ 0.0, 0.0, 0.0, ..., 0.0]
 Projected gradient norm = 0.0

 Number of iterations = 2
 Number of function evaluations = 3


julia> @time solve(10^4); # without compilation
  0.000554 seconds (19 allocations: 625.625 KiB)

Maybe this gives some ground for understanding what is actually needed here.

So what would be the preferable way of comparing the time here? I’ll start with some ideas:

  • using the same solver in both Python & Julia
  • measuring not only the time to build the model, but end-to-end, including the time to pass the model to the solver and solving time
  • look at the (self-reported?) time that the solver itself needs, and subtract it from the overall time.
  • try to exclude the start-up time of the processes, and compilation time?

I’m curious about getting this right, because I wanted to redo the python-mip benchmarks done with Python, PyPy and Julia.

We’ve done benchmarks with a time limit of 1ms. That way the solver has to load the problem into memory and start the solution process. Otherwise minor differences in the binaries can have major effects on the solution process. But yes, you want to time going from nothing until the solver starts useful work.
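
For illustration, a minimal sketch of that idea in JuMP (the solver choice and the toy model are arbitrary; any solver that supports a time limit works the same way):

using JuMP, GLPK

model = Model(GLPK.Optimizer)
set_time_limit_sec(model, 0.001)  # solver loads the problem, then stops almost immediately
@variable(model, x >= 0)
@objective(model, Min, x)
@time optimize!(model)  # measures "build + pass to solver + start useful work"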

5 Likes

Thanks for all your fast replies! :open_mouth:
I’ll try to get back to all the important points and update with what I’ve been able to dig into further:

Does that time (< 2s) make a difference in your case?

The two seconds do not matter here, but I am looking to construct an energy systems model that will be run iteratively with live data updates and has a pretty tight response window (new data every 15 minutes, so only a few minutes for constructing + solving). Since I can only partially influence how long Gurobi takes to solve the problem (I can construct the model in a way that makes the solve faster, but beyond that there is not much I can do), it really matters whether constructing the model takes 30s or 3 minutes. (See also my answer below regarding the Python code.)

Playing with my new favourite toy

Is there anything I could do with that JET report? Is there anything “wrong” with the way I construct the variables / constraints?

So what would be the preferable way of comparing the time here? I’ll start with some ideas:

  • using the same solver in both Python & Julia
  • measuring not only the time to build the model, but end-to-end, including the time to pass the model to the solver and solving time
  • look at the (self-reported?) time that the solver itself needs, and subtract it from the overall time.
  • try to exclude the start-up time of the processes, and compilation time?
  1. I’m using the same solver in Python & Julia (Gurobi or sometimes GLPK on test-machines where I don’t have access to a Gurobi licence).
  2. I’m always a bit reluctant to include solving time because that can vary a lot with one implementation randomly representing the specific model in a way that makes the solve faster (and therefore you can’t generalize onto larger/other models). But @odow’s idea (setting a time limit) seems to circumvent this mostly (as far as I understand) and I’ll be using that from now on!
  3. start-up, compilation, … times are always excluded, otherwise the comparison would not be fair (one wouldn’t include Cython compilation times, for example, so there is no reason to include precompile times either; see the warm-up sketch below)
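
To make point 3 concrete, the warm-up pattern I use looks like this (a sketch):

create_model(10)          # throwaway call, compilation happens here
@time create_model(10)    # only this second call is reported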

[…] Some of the Python packages can build a symbolic model without populating the data […]

I am using a customized version of Pyomo that strips out all the non-linear machinery (a lot of unnecessary time is lost to its generality) and optimizes a few things here and there for better performance after cythonizing the whole project. So I was using “create + write to file” as the comparison (since that gets recommended by some papers), but after reading your input and about the drawbacks write_to_file comes with, I’ve changed that to a fairer comparison (see below).

We’ve done benchmarks with a time limit of 1ms.

I’ve now changed my code to adopt that idea, just with a limit of 1s (because I can only pass an integer time limit to my GLPK test code using the Python implementation). I previously only looked at “model creation time” plus the time it takes to write the model to a file (since some papers suggest that as a fair comparison), but as you wrote in this other post, that can be potentially misleading. This change greatly improved the comparison in favor of JuMP.

But a few things still puzzle me, all of which could be related to garbage collection (?):

Timings for various sizes n are

     | time to construct |  time to "solve"  |
-----+---------+---------+---------+---------+
n    |  pyomo  |   jump  |  pyomo  |   jump  |
-----+---------+---------+---------+---------+
100  | 11s     | 4s      | 90s     | 9s      |
200  | 22s     | 7s      | 182s    | 18s     |
500  | 52s     | 22s     | 450s    | 97s     |
1000 | 104s    | 41s     | 813s    | 340s    |
2000 | 204s    | 117s    | 1654s   | 677s    |

While Pyomo seems to scale roughly linearly, JuMP shows a big jump in observed times. But with the pyomo:jump factor for total time now going from 10:1 down to less than 2:1, I am still unsure how it would scale with more complicated models. I again took a look at the average share of time spent on GC:

     | GC % of total time             |
-----+----------------+---------------+
n    | model creation | model "solve" |
-----+----------------+---------------+
100  | 40%            | 20%           |
200  | 30%            | 20%           |
500  | 40%            | 60%           |
1000 | 40%            | 80%           |
2000 | 55%            | 80%           |

It seems like garbage collection heavily influences this. It happens less (to some extent) in the Python implementation. Why? Because I am actively pausing the garbage collector there and only allowing it at very specific times (for a model consisting of n blocks, it is easy to approximate the memory size of each block without garbage collection and only “clean up” when necessary).

I’ve again tested with a disabled GC by doing:

GC.gc()
GC.enable(false)

@time (model = create_model(size))

GC.enable(true)
GC.gc()
GC.enable(false)

@time (solve_model!(model))

GC.enable(true)

This leads to the following (larger n omitted because I only have 30GB of RAM available during testing; Pyomo timings repeated for reference; time to “solve” refers to passing the model to GLPK with a time limit of 1s):

     | time to construct |  time to "solve"  |
-----+---------+---------+---------+---------+
n    |  pyomo  |   jump  |  pyomo  |   jump  |
-----+---------+---------+---------+---------+
100  | 11s     | 2s      | 90s     | 7s      |
200  | 22s     | 5s      | 182s    | 13s     |
500  | 52s     | 12s     | 450s    | 34s     |
1000 | 104s    | 22s     | 813s    | 70s     |

One can see that, without GC, JuMP now scales almost exactly linearly with problem size.
This leads back to the question: is there a way to configure the GC, or to influence how often/when sweeps (or however it works) are done?

5 Likes

I don’t know. JET reports show possible problems in application and library code; @report_opt even diagnoses runtime dispatch. I also had a short look at the profile of your MWE, and it looked to me as if the library code could possibly be improved (but I didn’t see dynamic dispatch in the profiler).
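
(For reference, a minimal sketch of such a profiling run; the print threshold is an arbitrary choice:)

using Profile

create_model(100)              # warm up first, to keep compilation out of the profile
@profile create_model(100)
Profile.print(mincount=100)    # only show frames hit at least 100 times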

The best way to tune the GC is to avoid the bulk of the allocations in the first place (again, this is probably more a question for the library developers than for you).

Using direct-mode should greatly decrease the GC pressure.

Use:

model = direct_model(Gurobi.Optimizer())

or

model = direct_model(GLPK.Optimizer())

You’ll need to rewrite

@constraint(model, [i in I, t in T[2:end]], -10 <= x[i, t] - x[i, t-1] <= 10)

as

@constraint(model, [i in I, t in T[2:end]], -10 <= x[i, t] - x[i, t-1])
@constraint(model, [i in I, t in T[2:end]], x[i, t] - x[i, t-1] <= 10)

1 Like

Thanks again for the helpful inputs! I’ve done new tests and am at least partially successful. Reporting back with the new information below :smiley: Short TLDR at the end of this post.

Using direct-mode should greatly decrease the GC pressure.

I had (naively, as it turns out) assumed that this could be a hint that the way JuMP works with “bridges” could be the culprit. So I tested again using model = JuMP.Model(GLPK.Optimizer; add_bridges=false), now constructing my constraints in a natively supported way:

@constraint(model, [i in I, t in T[2:end]], x[i, t] - x[i, t-1] <= 10)
@constraint(model, [i in I, t in T[2:end]], x[i, t-1] - x[i, t] <= 10)

This results in similar timings and GC usage, with model creation even taking a bit more time. So I discarded that thought and went on to test direct mode as proposed. This results in the following timings

     | time to construct |  time to "solve"  |
-----+---------+---------+---------+---------+
n    | normal  | direct  | normal  | direct  |
-----+---------+---------+---------+---------+
100  | 4s      | 7s      | 9s      | 1s      |
200  | 7s      | 15s     | 18s     | 2s      |
500  | 22s     | 49s     | 97s     | 6s      |
1000 | 41s     | 180s    | 340s    | 12s     |
2000 | 117s    | 660s    | 677s    | 29s     |

and GC usage (note that “solve” now does not suffer from any garbage collection)

     | GC % of total time (normal)    | GC % of total time (direct) |
-----+----------------+---------------+-----------------------------+
n    | model creation | model "solve" | model creation              |
-----+----------------+---------------+-----------------------------+
100  | 40%            | 20%           | 35%                         |
200  | 30%            | 20%           | 35%                         |
500  | 40%            | 60%           | 50%                         |
1000 | 40%            | 80%           | 70%                         |
2000 | 55%            | 80%           | 85%                         |

This shows that

  1. additional computational effort is now spent on “creating” the model (to be expected, since it is basically a single step compared to the separate steps before) and
  2. that using direct mode is a bit faster overall (total time of 689s vs. 793s for n=2000), but for bigger models garbage collection still heavily impacts performance.

But there is one really important difference: most of the computational time now happens during model creation, a step that is mostly “user controlled” (compared to the solve, where a large chunk of total time is effectively “hidden” inside JuMP). This can be used to manually control when the garbage collector is allowed to run. I have changed model construction to the following:

function create_model(size)
    T = 1:10000

    # disable GC and prepare direct mode model
    GC.enable(false)
    model = JuMP.direct_model(GLPK.Optimizer())
    set_time_limit_sec(model, 0.001)

    for start in 1:100:size
        I = start:(start+99)
        x = @variable(model, [I, T], lower_bound=0, upper_bound=100)
        @constraint(model, [i in I, t in T[2:end]], x[i, t] - x[i, t-1] <= 10)
        @constraint(model, [i in I, t in T[2:end]], x[i, t-1] - x[i, t] <= 10)
        @objective(model, Min, sum(x[i, t] for t in T for i in I))

        # after construction of 100 "blocks" do a single GC pass
        GC.enable(true)
        GC.gc(false)
        GC.enable(false)
    end

    # re-enable GC and return model
    GC.enable(true)
    return model
end

So what am I doing here? I’m splitting model creation into chunks of 100 “blocks” each. While a chunk is being built, GC is disabled; between chunks I do a single garbage collection pass. This can be adapted to specific needs (more often / less often). The new timings are (“auto gc” being the direct mode timings from before):

     | time to construct   | time to "solve"     |
-----+-----------+---------+-----------+---------+
n    | manual gc | auto gc | manual gc | auto gc |
-----+-----------+---------+-----------+---------+
100  | 6s        | 7s      | 1s        | 1s      |
200  | 12s       | 15s     | 3s        | 2s      |
500  | 28s       | 49s     | 7s        | 6s      |
1000 | 65s       | 180s    | 15s       | 12s     |
2000 | 122s      | 660s    | ?s        | 29s     |

which shows a massive improvement with larger model sizes scaling in the same way as smaller ones.

[Why is there a “?”? I could not successfully pass the model with n=2000 to the solver because I ran out of memory. Even doing GC.gc(true) before calling optimize!(model) resulted in an error. I am not too unsettled though, since I’m only using about 40GB of RAM currently, and any “real” machine this would run on would be much bigger in that regard.]

A look at the GC % confirms this (while it can still be seen that garbage collection gets more expensive with bigger models AND that it still makes up a large chunk of total computational time - around one fourth):

n    | GC % (manual gc) | GC % (auto gc) |
-----+------------------+----------------+
100  | 15%              | 35%            |
200  | 15%              | 35%            |
500  | 20%              | 50%            |
1000 | 25%              | 70%            |
2000 | 25%              | 85%            |

Current learnings:

  • Using GC.gc() requires the GC to be enabled (via GC.enable(true)) if it was turned off manually before - so no disabling and then just calling it (see the sketch after this list). Overall, it’s really difficult to find any kind of information regarding the Julia garbage collector.
  • Using direct mode reduces total time as long as garbage collection is not the predominant slowdown.
  • Shifting the main garbage collection pressure into a part of the code that can be (more easily) controlled (as in “not purely JuMP internal”) enables simply turning the GC off for some time.
  • Disabling bridges did not help at all. (maybe I did something wrong there?)
  • I am still unsure if this approach will scale onto much larger models since an increased GC % can still be observed.
  • I am unsure why the case with n=2000 actually runs out of memory, since changing the chunk size does not influence that. It feels like some portion of memory is never freed correctly when collecting manually, but …
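
To make the first learning concrete, the enable/collect dance boils down to this (a sketch based on the behavior described above):

GC.enable(false)    # suspend automatic collections
# ... allocate heavily ...
GC.enable(true)     # GC.gc() is a no-op while the GC is disabled, so re-enable first
GC.gc(false)        # force a single incremental (young-generation) collection
GC.enable(false)    # and switch it off again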

Any other tips from a JuMP perspective? I will try to dig further into how GC actually works in Julia (and consult the main Julia Discourse, since I assume that is a better fit for that topic), because I do not think “wasting” 25% of the total computational time on garbage collection is a good target to aim for.

1 Like

What code are you actually running for the benchmarking? Can you be explicit about how you are generating the timings?

The model creation code is given above (create_model(size)) and then I am calling (here for n=1000)

@time (model = create_model(1000))
@time (solve_model!(model))

I am doing this a few times to average timings and GC usage (obviously not running it 100 times for n=2000, but timings and GC usage are pretty stable across runs, since I am running this on an Ubuntu test machine with no other load).

Notes:

  • Yes, I omit initial calls so that compilation time is not counted in the timings I listed.
  • Turning off the GC is handled inside create_model (or not, in the case of “auto gc”), as is turning it back on - exactly like the code given in my last post
  • I am not calling that from the REPL but by running julia benchmark_model.jl
  • Full source (“benchmark_model.jl”) given below

Is that the kind of explicitness you are looking for, or am I overlooking something that you need here?

This then outputs for example (size=1000):

 65.561803 seconds (420.44 M allocations: 23.330 GiB, 24.45% gc time)
 12.195936 seconds

benchmark_model.jl

using JuMP
using GLPK

"""
	Create a simple JuMP model with a specified size
"""
function create_model(size)
    T = 1:10000
	T2 = @view T[2:end]
	
	GC.gc(true)
	GC.enable(false)
	model = JuMP.direct_model(GLPK.Optimizer())
	set_time_limit_sec(model, 0.001)

	for start in 1:100:size
		I = start:(start+99)
		x = @variable(model, [I, T], lower_bound=0, upper_bound=100)
		@constraint(model, [i in I, t in T2], x[i, t] - x[i, t-1] <= 10)
		@constraint(model, [i in I, t in T2], x[i, t-1] - x[i, t] <= 10)
		@objective(model, Min, sum(x[i, t] for t in T for i in I))
		
		GC.enable(true)
		GC.gc(false)
		GC.enable(false)
	end
	
	GC.enable(true)
    return model
end


"""
	Solve the model (this actually just passes it to GLPK due to the timelimit)
"""
function solve_model!(model)
	JuMP.optimize!(model)
end


# ensure pre-compilation
model = create_model(100)
model = create_model(200)
solve_model!(model)

# timed run
i = 1000
println("Size: $i")
@time (model = create_model(i))
@time (solve_model!(model))

Note: I had made a stupid mistake after splitting up the constraint into two constraints (pointed out over here) - basically duplicating the T array during slicing - and fixing it improved performance by a (really) small amount. That’s why there is now a T2 in the model creation.

1 Like

Deactivating the GC calls and running JET.@report_opt create_model(1000), I see

═════ 35 possible errors found ═════
┌ @ C:\Users\Win10\source\repos\julia\1.jl:13 GLPK.Optimizer()
│┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:124 $(QuoteNode(GLPK.var"#Optimizer#5#6"()))(false, GLPK.SIMPLEX, Base.pairs(Core.NamedTuple()), #self#)
││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:133 MathOptInterface.set(model, MathOptInterface.RawParameter("msg_lev"), GLPK.MSG_ERR)
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:312 set_intopt = GLPK.set_parameter(Base.getproperty(model, :intopt_param), key, value)
││││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:300 GLPK.convert(%39, value)
│││││ runtime dispatch detected: GLPK.convert(%39::DataType, value::Int32)
││││└────────────────────────────────────────────────────────────────────
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:310 GLPK.Symbol(%13)
││││ runtime dispatch detected: GLPK.Symbol(%13::Any)
│││└────────────────────────────────────────────────────────────────────
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:311 GLPK.set_parameter(%15, %14, value)
││││ runtime dispatch detected: GLPK.set_parameter(%15::InteriorParam, %14::Any, value::Int32)
│││└────────────────────────────────────────────────────────────────────
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:312 GLPK.set_parameter(%25, %14, value)
││││ runtime dispatch detected: GLPK.set_parameter(%25::IntoptParam, %14::Any, value::Int32)
│││└────────────────────────────────────────────────────────────────────
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:313 GLPK.set_parameter(%35, %14, value)
││││ runtime dispatch detected: GLPK.set_parameter(%35::SimplexParam, %14::Any, value::Int32)
│││└────────────────────────────────────────────────────────────────────
││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:141 Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, MathOptInterface.VariableIndex, GLPK.VariableInfo)()   
│││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:92 #self#(0)
││││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:92 Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, _, _)(MathOptInterface.Utilities.CleverDicts.key_to_index, MathOptInterface.Utilities.CleverDicts.index_to_key, n)
│││││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:80 Base.convert(Core.fieldtype(Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, _, _, _, _), 6), vec)
││││││┌ @ array.jl:614 _(a)
│││││││┌ @ array.jl:623 Base.copyto_axcheck!(Core.apply_type(Base.Array, _, _)(Base.undef, Base.size(x)), x)
││││││││┌ @ abstractarray.jl:1110 Base.copyto!(dest, src)
│││││││││┌ @ array.jl:343 Base.copyto!(dest, 1, src, 1, Base.length(src))
││││││││││┌ @ array.jl:317 Base._copyto_impl!(dest, doffs, src, soffs, n)
│││││││││││┌ @ array.jl:331 Base.unsafe_copyto!(dest, doffs, src, soffs, n)
││││││││││││┌ @ array.jl:307 Base._unsafe_copyto!(dest, doffs, src, soffs, n)
│││││││││││││┌ @ array.jl:253 Base.setindex!(dest, Base.getindex(src, Base.-(Base.+(soffs, i), 1)), Base.-(Base.+(doffs, i), 1))
││││││││││││││┌ @ array.jl:963 Base.convert(_, x)
│││││││││││││││ runtime dispatch detected: Base.convert(_::Type{GLPK.VariableInfo}, x::MathOptInterface.VariableIndex)
││││││││││││││└────────────────
││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:142 Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, GLPK.ConstraintKey, GLPK.ConstraintInfo)()
│││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:92 #self#(0)
││││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:92 Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, _, _)(MathOptInterface.Utilities.CleverDicts.key_to_index, MathOptInterface.Utilities.CleverDicts.index_to_key, n)
│││││┌ @ C:\Users\Win10\.julia\packages\MathOptInterface\YDdD3\src\Utilities\CleverDicts.jl:80 Base.convert(Core.fieldtype(Core.apply_type(MathOptInterface.Utilities.CleverDicts.CleverDict, _, _, _, _), 6), vec)
││││││┌ @ array.jl:614 _(a)
│││││││┌ @ array.jl:623 Base.copyto_axcheck!(Core.apply_type(Base.Array, _, _)(Base.undef, Base.size(x)), x)
││││││││┌ @ abstractarray.jl:1110 Base.copyto!(dest, src)
│││││││││┌ @ array.jl:343 Base.copyto!(dest, 1, src, 1, Base.length(src))
││││││││││┌ @ array.jl:317 Base._copyto_impl!(dest, doffs, src, soffs, n)
│││││││││││┌ @ array.jl:331 Base.unsafe_copyto!(dest, doffs, src, soffs, n)
││││││││││││┌ @ array.jl:307 Base._unsafe_copyto!(dest, doffs, src, soffs, n)
│││││││││││││┌ @ array.jl:253 Base.setindex!(dest, Base.getindex(src, Base.-(Base.+(soffs, i), 1)), Base.-(Base.+(doffs, i), 1))
││││││││││││││┌ @ array.jl:963 Base.convert(_, x)
│││││││││││││││ runtime dispatch detected: Base.convert(_::Type{GLPK.ConstraintInfo}, x::GLPK.ConstraintKey)
││││││││││││││└────────────────
││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:144 MathOptInterface.empty!(model)
│││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:222 GLPK.set_callback(model, #9)
││││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:188 c_callback = $(Expr(:cfunction, Base.CFunction, :(_6), Int32, svec(Ptr{Nothing}, Ptr{Nothing}), :(:ccall)))
│││││┌ @ C:\Users\Win10\.julia\packages\GLPK\oTTtu\src\MOI_wrapper.jl:184 Base.setproperty!(%57, :exception, %59)
││││││ runtime dispatch detected: Base.setproperty!(%57::GLPK.CallbackData, :exception::Symbol, %59::Any)
│││││└────────────────────────────────────────────────────────────────────
┌ @ C:\Users\Win10\source\repos\julia\1.jl:14 Main.set_time_limit_sec(model, 0.001)
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:986 MathOptInterface.set(model, MathOptInterface.TimeLimitSec(), limit)
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1251 %1(%2, attr, value)
│││ runtime dispatch detected: %1::typeof(MathOptInterface.set)(%2::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.TimeLimitSec, value::Float64)
││└──────────────────────────────────────────────────────────────
┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\macros.jl:1261 JuMP.set_objective(model, JuMP._throw_error_for_invalid_sense(_error, MIN_SENSE), ##291)
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:155 JuMP.set_objective_sense(model, sense)
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:78 MathOptInterface.set(model, MathOptInterface.ObjectiveSense(), sense)
│││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1254 %1(%2, attr, value)
││││ runtime dispatch detected: %1::typeof(MathOptInterface.set)(%2::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.ObjectiveSense, value::MathOptInterface.OptimizationSense)       
│││└──────────────────────────────────────────────────────────────
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:156 JuMP.set_objective_function(model, func)
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:127 JuMP.set_objective_function(model, 0.0)
│││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:117 JuMP.set_objective_function(model, MathOptInterface.ScalarAffineFunction(Base.getindex(Core.apply_type(MathOptInterface.ScalarAffineTerm, JuMP.Float64)), JuMP.Float64(func)))
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:102 MathOptInterface.set(model, attr, func)
│││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1254 %1(%2, attr, value)
││││││ runtime dispatch detected: %1::typeof(MathOptInterface.set)(%2::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarAffineFunction{Float64}}, value::MathOptInterface.ScalarAffineFunction{Float64})
│││││└──────────────────────────────────────────────────────────────
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:95 %2(%3, %1)
│││││ runtime dispatch detected: %2::typeof(MathOptInterface.supports)(%3::MathOptInterface.AbstractOptimizer, %1::MathOptInterface.ObjectiveFunction{MathOptInterface.ScalarAffineFunction{Float64}})
││││└─────────────────────────────────────────────────────────────────
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:95 JuMP.!(%4)
│││││ runtime dispatch detected: JuMP.!(%4::Any)
││││└─────────────────────────────────────────────────────────────────
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:106 Base.setproperty!(%26, :nlobj, JuMP.nothing)
│││││ runtime dispatch detected: Base.setproperty!(%26::Any, :nlobj::Symbol, JuMP.nothing)
││││└──────────────────────────────────────────────────────────────────
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:156 JuMP.set_objective_function(model, func)
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:113 JuMP.moi_function(func)
│││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\aff_expr.jl:550 MathOptInterface.ScalarAffineFunction(a)
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\aff_expr.jl:507 JuMP._assert_isfinite(a)
│││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\aff_expr.jl:473 Base.string("Invalid coefficient ", coef, " on variable ", var, ".")
││││││┌ @ strings/io.jl:185 Base.print_to_string(xs...)
│││││││┌ @ strings/io.jl:144 Base.print(s, x)
││││││││┌ @ strings/io.jl:35 Base.show(io, x)
│││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\print.jl:821 JuMP.function_string(JuMP.REPLMode, f)
││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\print.jl:625 var_name = JuMP.name(v)
│││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:316 MathOptInterface.get(JuMP.owner_model(v), MathOptInterface.VariableName(), v)
││││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1234 %10(%11, attr, %12)
│││││││││││││ runtime dispatch detected: %10::typeof(MathOptInterface.get)(%11::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.VariableName, %12::MathOptInterface.VariableIndex)    
││││││││││││└──────────────────────────────────────────────────────────────
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:156 JuMP.set_objective_function(model, func)
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:113 JuMP.set_objective_function(model, JuMP.moi_function(func))
│││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:102 MathOptInterface.set(model, attr, func)
││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1254 %1(%2, attr, value)
│││││ runtime dispatch detected: %1::typeof(MathOptInterface.set)(%2::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.ObjectiveFunction{MathOptInterface.SingleVariable}, value::MathOptInterface.SingleVariable)
││││└──────────────────────────────────────────────────────────────
│││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\objective.jl:95 %2(%3, %1)
││││ runtime dispatch detected: %2::typeof(MathOptInterface.supports)(%3::MathOptInterface.AbstractOptimizer, %1::MathOptInterface.ObjectiveFunction{MathOptInterface.SingleVariable})
│││└─────────────────────────────────────────────────────────────────
┌ @ C:\Users\Win10\source\repos\julia\1.jl:18 x = JuMP.Containers.container(#7, JuMP.Containers.vectorized_product(I, T))
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:66 JuMP.Containers.container(f, indices, JuMP.Containers.default_container(indices))
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:105 JuMP.Containers.map(#38, indices)
│││┌ @ abstractarray.jl:2887 Base.collect(Base.Generator(f, A))
││││┌ @ array.jl:784 y = Base.iterate(itr)
│││││┌ @ generator.jl:47 Base.getproperty(g, :f)(Base.getindex(y, 1))
││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:105 Core.getfield(#self#, :f)(I...)
│││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\macro.jl:194 JuMP.add_variable(Core.getfield(#self#, :model), JuMP.build_variable(_error, JuMP.VariableInfo(true, 0, true, 100, false, NaN, false, NaN, false, false)), "")
││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:977 JuMP._moi_add_variable(JuMP.backend(model), model, v, name)
│││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:983 JuMP._moi_constrain_variable(moi_backend, index, Base.getproperty(v, :info))
││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:994 JuMP.moi_add_constraint(moi_backend, MathOptInterface.SingleVariable(index), Core.apply_type(MathOptInterface.GreaterThan, JuMP.Float64)(Base.getproperty(info, :lower_bound)))
│││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\constraints.jl:537 Base.string("Constraints of type ", JuMP.typeof(f), "-in-", JuMP.typeof(s), " are not supported by the solver")  
││││││││││││┌ @ strings/io.jl:185 Base.print_to_string(xs...)
│││││││││││││┌ @ strings/io.jl:144 Base.print(s, x)
││││││││││││││┌ @ strings/io.jl:35 Base.show(io, x)
│││││││││││││││┌ @ show.jl:879 Base._show_type(io, Base.inferencebarrier(x))
││││││││││││││││┌ @ show.jl:882 Base.show_type_name(io, Base.getproperty(Base.unwrap_unionall(x), :name))
│││││││││││││││││┌ @ show.jl:969 Base.show(io, Base.getproperty(tn, :module))
││││││││││││││││││┌ @ show.jl:1101 Base.is_root_module(m)
│││││││││││││││││││┌ @ lock.jl:219 Base.lock(temp)
││││││││││││││││││││┌ @ lock.jl:101 slowlock(rl)
│││││││││││││││││││││┌ @ lock.jl:110 Base.wait(c)
││││││││││││││││││││││┌ @ condition.jl:126 Base.list_deletefirst!(%46, %40)
│││││││││││││││││││││││ runtime dispatch detected: Base.list_deletefirst!(%46::Any, %40::Task)
││││││││││││││││││││││└────────────────────
│││││││││││││││││││┌ @ lock.jl:223 Base.unlock(temp)
││││││││││││││││││││┌ @ lock.jl:131 _unlock(rl)
│││││││││││││││││││││┌ @ lock.jl:137 notifywaiters(rl)
││││││││││││││││││││││┌ @ lock.jl:141  = Base.notify(cond_wait)
│││││││││││││││││││││││┌ @ condition.jl:142 #self#(c, Base.nothing)
││││││││││││││││││││││││┌ @ condition.jl:142 Base.#notify#570(true, false, #self#, c, arg)
│││││││││││││││││││││││││┌ @ condition.jl:142 Base.notify(c, arg, all, error)
││││││││││││││││││││││││││┌ @ condition.jl:148 Core.kwfunc(Base.schedule)(Core.apply_type(Core.NamedTuple, (:error,))(Core.tuple(error)), Base.schedule, t, arg)
│││││││││││││││││││││││││││┌ @ task.jl:740 Base.#schedule#594(error, _3, t, arg)
││││││││││││││││││││││││││││┌ @ task.jl:742 %10(%11, t)
│││││││││││││││││││││││││││││ runtime dispatch detected: %10::typeof(Base.list_deletefirst!)(%11::Any, t::Task)
││││││││││││││││││││││││││││└───────────────
││││││││││││││││┌ @ show.jl:884 Base.show_typealias(io, x)
│││││││││││││││││┌ @ show.jl:724 Base.show_typealias(io, Base.getindex(alias, 1), x, Base.getindex(alias, 2), wheres)
││││││││││││││││││┌ @ show.jl:671 Base.show_typeparams(io, env, Core.svec(vars...), wheres)
│││││││││││││││││││┌ @ show.jl:630 Base.show(io, %266)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOContext{IOBuffer}, %266::Any)
│││││││││││││││││││└───────────────
│││││││││││││││││││┌ @ show.jl:633 Base.show(io, %324)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOContext{IOBuffer}, %324::Any)
│││││││││││││││││││└───────────────
│││││││││││││││││││┌ @ show.jl:638 Base.show(io, %207)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOContext{IOBuffer}, %207::Any)
│││││││││││││││││││└───────────────
││││││││││││││││┌ @ show.jl:887 Base.show_datatype(io, x)
│││││││││││││││││┌ @ show.jl:987 #self#(io, x, Base.getindex(Base.TypeVar))
││││││││││││││││││┌ @ show.jl:1009 Base.show_typeparams(io, parameters, Base.getproperty(Base.unwrap_unionall(Base.getproperty(Base.getproperty(x, :name), :wrapper)), :parameters), wheres)     
│││││││││││││││││││┌ @ show.jl:630 Base.show(io, %264)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOBuffer, %264::Any)
│││││││││││││││││││└───────────────
│││││││││││││││││││┌ @ show.jl:633 Base.show(io, %321)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOBuffer, %321::Any)
│││││││││││││││││││└───────────────
│││││││││││││││││││┌ @ show.jl:638 Base.show(io, %206)
││││││││││││││││││││ runtime dispatch detected: Base.show(io::IOBuffer, %206::Any)
│││││││││││││││││││└───────────────
││││││││││││││││┌ @ show.jl:890 Base.show_unionaliases(io, x)
│││││││││││││││││┌ @ show.jl:811 Base.make_typealiases(properx)
││││││││││││││││││┌ @ show.jl:732 mods = Base.modulesof!(Core.apply_type(Base.Set, Base.Module)(), x)
│││││││││││││││││││┌ @ show.jl:506 Base.modulesof!(s, Base.getproperty(x, :a))
││││││││││││││││││││┌ @ show.jl:500 Base.modulesof!(s, %1)
│││││││││││││││││││││ runtime dispatch detected: Base.modulesof!(s::Set{Module}, %1::Any)
││││││││││││││││││││└───────────────
││││││││││││││││││┌ @ show.jl:788 Core.kwfunc(Base.sort!)(Core.apply_type(Core.NamedTuple, (:by, :rev))(Core.tuple(#479, true)), Base.sort!, aliases)
│││││││││││││││││││┌ @ sort.jl:711 Base.Sort.#sort!#8(alg, lt, by, rev, order, _3, v)
││││││││││││││││││││┌ @ sort.jl:723 Base.Sort.sort!(v, alg, ordr)
│││││││││││││││││││││┌ @ sort.jl:662 Base.Sort.sort!(v, Base.Sort.first(inds), Base.Sort.last(inds), alg, order)
││││││││││││││││││││││┌ @ sort.jl:591 #self#(v, lo, hi, a, o, Base.Sort.similar(v, 0))
│││││││││││││││││││││││┌ @ sort.jl:592 Base.Sort.sort!(v, lo, hi, Base.Sort.SMALL_ALGORITHM, o)
││││││││││││││││││││││││┌ @ sort.jl:507 Base.Sort.lt(o, x, Base.getindex(v, Base.Sort.-(j, 1)))
│││││││││││││││││││││││││┌ @ ordering.jl:119 Base.Order.lt($(QuoteNode(Base.Order.ReverseOrdering{Base.Order.ForwardOrdering}(Base.Order.ForwardOrdering()))), %9, %20)
││││││││││││││││││││││││││ runtime dispatch detected: Base.Order.lt($(QuoteNode(Base.Order.ReverseOrdering{Base.Order.ForwardOrdering}(Base.Order.ForwardOrdering())))::Base.Order.ReverseOrdering{Base.Order.ForwardOrdering}, %9::Any, %20::Any)
│││││││││││││││││││││││││└───────────────────
││││││││││││││││┌ @ show.jl:894 Base.show_delim_array(io, Base.uniontypes(x), '{', ',', '}', false)
│││││││││││││││││┌ @ show.jl:1199 #self#(io, itr, op, delim, cl, delim_one, Base.first(Base.LinearIndices(itr)), Base.last(Base.LinearIndices(itr)))
││││││││││││││││││┌ @ show.jl:1210 Base.show(%6, %44)
│││││││││││││││││││ runtime dispatch detected: Base.show(%6::IOContext{IOBuffer}, %44::Any)
││││││││││││││││││└────────────────
││││││││││││││││┌ @ show.jl:921 Base.show_datatype(io, x, wheres)
│││││││││││││││││┌ @ show.jl:1008 Base.show_type_name(io, Base.getproperty(x, :name))
││││││││││││││││││┌ @ show.jl:968 Base.isvisible(%38, %86, %80)
│││││││││││││││││││ runtime dispatch detected: Base.isvisible(%38::Symbol, %86::Module, %80::Any)
││││││││││││││││││└───────────────
│││││││││││││┌ @ strings/io.jl:139 Base._str_sizehint(%4)
││││││││││││││ runtime dispatch detected: Base._str_sizehint(%4::Union{DataType, String})
│││││││││││││└─────────────────────
│││││││││││││┌ @ strings/io.jl:144 Base.print(%54, %58)
││││││││││││││ runtime dispatch detected: Base.print(%54::IOBuffer, %58::Union{DataType, String})
│││││││││││││└─────────────────────
│││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:985 JuMP.set_name(var_ref, name)
││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:324 MathOptInterface.set(JuMP.owner_model(v), MathOptInterface.VariableName(), v, s)
│││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1263 %9(%10, attr, %11, value)
││││││││││││ runtime dispatch detected: %9::typeof(MathOptInterface.set)(%10::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.VariableName, %11::MathOptInterface.VariableIndex, value::String)
│││││││││││└──────────────────────────────────────────────────────────────
││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\variables.jl:977 JuMP._moi_add_variable(%1, model, v, name)
│││││││││ runtime dispatch detected: JuMP._moi_add_variable(%1::MathOptInterface.AbstractOptimizer, model::Model, v::ScalarVariable{Int64, Int64, Float64, Float64}, name::String)
││││││││└──────────────────────────────────────────────────────────────────
┌ @ C:\Users\Win10\source\repos\julia\1.jl:19 JuMP.Containers.container(#8, JuMP.Containers.vectorized_product(I, T2))
│┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:66 JuMP.Containers.container(f, indices, JuMP.Containers.default_container(indices))
││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:105 JuMP.Containers.map(#38, indices)
│││┌ @ abstractarray.jl:2887 Base.collect(Base.Generator(f, A))
││││┌ @ array.jl:784 y = Base.iterate(itr)
│││││┌ @ generator.jl:47 Base.getproperty(g, :f)(Base.getindex(y, 1))
││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\Containers\container.jl:105 Core.getfield(#self#, :f)(I...)
│││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\macros.jl:677 JuMP.add_constraint(Core.getfield(#self#, :model), JuMP.build_constraint(_error, JuMP._functionize(##281), LessThan{Float64}(0.0)), "")
││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\constraints.jl:570 JuMP.set_name(con_ref, name)
│││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\constraints.jl:147 MathOptInterface.set(Base.getproperty(con_ref, :model), MathOptInterface.ConstraintName(), con_ref, s)
││││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\JuMP.jl:1272 %9(%10, attr, %11, value)
│││││││││││ runtime dispatch detected: %9::typeof(MathOptInterface.set)(%10::MathOptInterface.AbstractOptimizer, attr::MathOptInterface.ConstraintName, %11::MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, value::String)
││││││││││└──────────────────────────────────────────────────────────────
││││││││┌ @ C:\Users\Win10\.julia\packages\JuMP\klrjG\src\constraints.jl:559 JuMP.moi_add_constraint(%7, %5, %6)
│││││││││ runtime dispatch detected: JuMP.moi_add_constraint(%7::MathOptInterface.AbstractOptimizer, %5::MathOptInterface.ScalarAffineFunction{Float64}, %6::MathOptInterface.LessThan{Float64}) 
││││││││└────────────────────────────────────────────────────────────────────

So quite a bit of dynamic dispatch is going on; maybe there is room for improvement?

1 Like

I’ll take a look.

But why are you setting the objective repeatedly in the loop? It does not add to the objective; it replaces it.
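
A quick toy illustration of the replacement semantics:

using JuMP

model = Model()
@variable(model, x >= 0)
@objective(model, Min, x)
@objective(model, Min, 2x)   # the objective is now 2x, not x + 2x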

1 Like

For n=1000 you have a model with ~1e7 variables and ~2e7 constraints. There are little bits of JuMP that could be improved, but the little bits escalate into a big problem because they get hit 1e7 times.

If you use an algebraic modeling language you’re going to incur overhead. At that scale it might be better to pick a single solver and interact with its API directly.

I’ve opened an issue in JuMP to take a deeper look at this: https://github.com/jump-dev/JuMP.jl/issues/2817

3 Likes

Thanks a lot for all your help!

But why are you setting the objective repeatedly in the loop?

That’s just me being stupid… It happened when I introduced the loop to break garbage collection up into smaller chunks. Before that, it was just the single sum over both indices.

I don’t know what the most efficient way to do this is, either:

  • adding an “obj” expression and calling add_to_expression!(obj, sum(...)) once per chunk in the loop, then afterwards setting it via @objective(model, Min, obj) (see the sketch after this list)
  • constructing the variables in a non-anonymous fashion and using that to then do the single sum after the loop
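
A minimal sketch of the first option, assuming the chunked loop and the variables (model, T, size) from my earlier post:

obj = AffExpr(0.0)
for start in 1:100:size
    I = start:(start+99)
    x = @variable(model, [I, T], lower_bound=0, upper_bound=100)
    # ... constraints as before ...
    add_to_expression!(obj, sum(x[i, t] for t in T for i in I))
end
@objective(model, Min, obj)   # set the accumulated objective once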

Currently not at my PC anymore, so I can update with the revised code later.

EDIT:

If you use an algebraic modeling language you’re going to incur overhead.

Yes, I am aware of that and that is perfectly fine. There is always going to be an overhead - but that is more than compensated for by having access to better data processing / dynamic model building / multi-solver support. I am just trying not to lose any unnecessary time :smiley:

I’ve opened an issue in JuMP to take a deeper look at this

I am really amazed by all the inputs and support here! I’ll try to contribute more insights (if I can produce any).

1 Like

Since I am a complete Julia beginner: does dynamic dispatch here refer to multiple dispatch that only gets resolved during execution (and not during precompilation)? I’m assuming that is “slow”?

Since I am a complete Julia beginner

If you’re a Julia beginner, don’t worry about the details. They’re related to the JuMP internals and not something you will see or interact with. But… dynamic dispatch happens when Julia can’t infer the type of a variable prior to runtime. This happens quite a bit in JuMP because we can switch solvers – it’s only at runtime that we find out which solver the user is using and then we can use the appropriate method.
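
A tiny illustration of the concept (made-up types, nothing JuMP-specific):

struct A end
struct B end
f(::A) = 1
f(::B) = 2

xs = Any[A(), B()]   # element type is Any, so the compiler cannot infer the argument type
sum(f, xs)           # each call to f has to be dispatched at runtime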

2 Likes

I am posting this here first (instead of directly into the GitHub issue), since I assume there are a few things to fix and currently it’s just an interesting side benchmark. I’ve tried replicating the model directly using MathOptInterface. Using the same “manual GC hack”, this leads to timings that are roughly 33% faster, but with higher GC shares.

Comparing the absolute time spent in garbage collection (instead of just the % share) reveals that, for example for n=2000, the times are really similar at around 30 seconds each (~25% of the JuMP time, ~40% of the MOI time). Without profiling where the GC activity actually happens (and what causes it) - I have no idea of the internal workings and haven’t looked at the code - this could hint that it happens in MOI, and that JuMP does not add that much overhead (to the GC timings).
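
(For anyone reproducing this: the absolute GC time can be read off the stats returned by @timed, e.g.:)

stats = @timed create_model(2000)
println("total: ", stats.time, "s, GC: ", stats.gctime, "s")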

The current code using MOI is below. I’ve tried replicating the model in a similar way - for example, I could probably do without constructing everything in for-loops, but since in reality the bounds and constraints would vary between blocks / timesteps, I assumed this to be more realistic.


using GLPK
using MathOptInterface
const MOI = MathOptInterface


function create_model(I)
    T = 10000

    GC.enable(false)

    # new GLPK optimizer
    optimizer = GLPK.Optimizer()
    MOI.set(optimizer, MOI.TimeLimitSec(), 0.001)

    # prepare vector of variables
    x = Vector{Vector{MOI.VariableIndex}}(undef, I)

    for i in 1:I
        # add variables
        x[i] = MOI.add_variables(optimizer, T)

        # add bounds
        MOI.add_constraints(optimizer, x[i], MOI.LessThan(100.0))
        MOI.add_constraints(optimizer, x[i], MOI.GreaterThan(0.0))

        # add constraints
        for t in 2:T
            MOI.add_constraint(optimizer, 1.0 * x[i][t] - x[i][t-1], MOI.LessThan(10.0))
            MOI.add_constraint(optimizer, 1.0 * x[i][t-1] - x[i][t], MOI.LessThan(10.0))
        end

        # call GC every 100 blocks
        if (i % 100) == 0
            GC.enable(true)
            GC.gc(false)
            GC.enable(false)
        end
    end

    # add the objective
    MOI.set(
        optimizer,
        MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
        MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(ones(I*T), reduce(vcat, x)), 0.0),
    )
    MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)

    GC.enable(true)
    return optimizer
end

solve_model!(optimizer) = MOI.optimize!(optimizer)
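
For completeness, a sketch of how I time this MOI version, mirroring the JuMP harness above:

# warm-up to exclude compilation, then the timed runs
optimizer = create_model(100)
solve_model!(optimizer)

@time (optimizer = create_model(1000))
@time (solve_model!(optimizer))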