JuMP takes long time to load the problem/model

Hi, I am using JuMP with the CPLEX solver to solve a network design problem on a time-space network. I wrote the following code, together with the data, to build the model.

using JuMP, CPLEX, DataFrames, CSV, GLPK

node = [11,12,13,14,15,16,41,42,51]
time = [6,	12,	18,	24,	30,	36,	42,	48,	54,	60,	66,	72,	78,	84,	90,	96,	102,	108,	114,	120,	126,	132,	138,	144,	150,	156,	162,	168]
graph = DataFrames.DataFrame(
        [
            11	41	6	6	12	101.74	74.19	2380	1
11	41	6	12	18	101.74	74.19	2380	1
11	41	6	18	24	101.74	74.19	2380	1
11	41	6	24	30	101.74	74.19	2380	1
11	41	6	30	36	101.74	74.19	2380	1
11	41	6	36	42	101.74	74.19	2380	1
11	41	6	42	48	101.74	74.19	2380	1
11	41	6	48	54	101.74	74.19	2380	1
11	41	6	54	60	101.74	74.19	2380	1
11	41	6	60	66	101.74	74.19	2380	1

I skipped most of the data because of the row limit when creating this topic; there are 2269 rows in total.

51	51	6	144	150	1	0	100	Inf
51	51	6	150	156	1	0	100	Inf
51	51	6	156	162	1	0	100	Inf
51	51	6	162	168	1	0	100	Inf
51	51	6	168	6	1	0	100	Inf
        ],
    ["origin", "destination", "time", "start", "finish", "distance", "fc", "vc", "cap"],
)
com = DataFrames.DataFrame(
        [
            11	12	490	6	30
11	13	52	36	84
11	14	5	54	126
11	15	52	66	144
11	16	233	78	138
12	11	3195	36	66
12	13	4128	24	72
12	14	343	12	42
12	15	2321	84	156
12	16	2739	18	48
13	11	1186	78	120
13	12	5032	24	72
13	14	924	24	54
13	15	5334	90	144
13	16	1286	60	108
14	11	68	6	54
14	12	206	84	126
14	13	332	36	90
14	15	283	30	54
14	16	69	84	114
15	11	966	48	84
15	12	4030	96	144
15	13	5815	30	78
15	14	653	60	120
15	16	1083	48	126
16	11	1083	6	60
16	12	2339	96	138
16	13	475	84	150
16	14	47	72	132
16	15	368	84	114
        ],
    ["origin", "destination", "demand", "arrive", "due"],
)
numlink = length(graph.origin)
numnode = length(node)
numcom = length(com.origin)
numtime = length(time)
demandori = Tuple((Int64(com.origin[i]),Int64(com.arrive[i])) for i in 1:numcom)
demanddes = Tuple((Int64(com.destination[i]),Int64(com.due[i])) for i in 1:numcom)
links = Tuple(((Int64(graph.origin[i]),Int64(graph.start[i])), (Int64(graph.destination[i]),Int64(graph.finish[i]))) for i in 1:numlink)
translinks = Tuple(((Int64(graph.origin[i]),Int64(graph.start[i])), (Int64(graph.destination[i]),Int64(graph.finish[i]))) for i in 1:numlink if graph.origin[i] != graph.destination[i])
nodes = Tuple((node[i], time[t]) for i in 1:numnode, t in 1:numtime)
udict = Dict((links[i]=>graph.cap[i]) for i in 1:numlink)
fcdict = Dict((links[i]=>graph.fc[i]) for i in 1:numlink)
vcdict = Dict((links[i]=>graph.vc[i]) for i in 1:numlink)
disdict = Dict((links[i]=>graph.distance[i]) for i in 1:numlink)
model = direct_model(CPLEX.Optimizer())
@variables(model, begin
    y[translink in translinks], Int
    x[k in 1:numcom, link in links] >= 0
end)
@objective(model, Min, sum(fcdict[translink]*y[translink] for translink in translinks) + sum(vcdict[link]*disdict[link]*x[k,link] for k in 1:numcom, link in links))
@constraint(model, [k in 1:numcom, (i,t) in nodes; i==com.origin[k] && t==com.arrive[k]], sum(x[k,((ii,tt),(j,tp))] for ((ii,tt),(j,tp)) in links if ii==i && tt==t) - sum(x[k,((j,tp),(ii,tt))] for ((j,tp),(ii,tt)) in links if ii==i && tt==t) == com.demand[k])
@constraint(model, [k in 1:numcom, (i,t) in nodes; i==com.destination[k] && t==com.due[k]], sum(x[k,((ii,tt),(j,tp))] for ((ii,tt),(j,tp)) in links if ii==i && tt==t) - sum(x[k,((j,tp),(ii,tt))] for ((j,tp),(ii,tt)) in links if ii==i && tt==t) == -com.demand[k])
@constraint(model, [k in 1:numcom, (i,t) in nodes; (i,t)!=demandori[k] && (i,t)!=demanddes[k]], sum(x[k,((ii,tt),(j,tp))] for ((ii,tt),(j,tp)) in links if ii==i && tt==t) - sum(x[k,((j,tp),(ii,tt))] for ((j,tp),(ii,tt)) in links if ii==i && tt==t) == 0)
@constraint(model, [translink in translinks], sum(x[k,translink] for k in 1:numcom) <= udict[translink]*y[translink])
#println(model)
optimize!(model)
println(solution_summary(model))
println(objective_value(model))
for translink in translinks
    if value.(y[translink]) > 0
        println("y $translink = ", value.(y[translink]))
    end
end
for k in 1:numcom
    for link in links
        if value.(x[k,link]) > 0
            println("x $k $link = ", value.(x[k,link]))
        end
    end
end

It takes a very long time (around 30 minutes) for CPLEX to display anything (the log), i.e. for the model/problem to be loaded. But once the model/problem is loaded, it is solved in a short time.

Version identifier: 12.10.0.0 | 2019-11-26 | 843d4de2ae
Tried aggregator 1 time.
Reduced MIP has 9576 rows, 70056 columns, and 198576 nonzeros.
Reduced MIP has 0 binaries, 2016 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.11 sec. (41.62 ticks)
Tried aggregator 1 time.
Detecting symmetries...
Reduced MIP has 9576 rows, 70056 columns, and 198576 nonzeros.
Reduced MIP has 0 binaries, 2016 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.13 sec. (69.18 ticks)
MIP emphasis: balance optimality and feasibility.
MIP search method: dynamic search.
Parallel mode: deterministic, using up to 8 threads.
Root relaxation solution time = 0.20 sec. (105.24 ticks)

Root node processing (before b&c):
  Real time             =    0.73 sec. (366.76 ticks)
Parallel b&c, 8 threads:
  Real time             =    0.00 sec. (0.00 ticks)
  Sync time (average)   =    0.00 sec.
  Wait time (average)   =    0.00 sec.
                          ------------
Total (root+branch&cut) =    0.73 sec. (366.76 ticks)
  1.213043 seconds (1.16 M allocations: 58.671 MiB, 36.04% compilation time)
CPLEX Error  1217: No solution exists.
CPLEX Error  1217: No solution exists.
* Solver : CPLEX

* Status
  Result count       : 1
  Termination status : OPTIMAL
  Message from the solver:
  "integer optimal, tolerance"

* Candidate solution (result #1)
  Primal status      : FEASIBLE_POINT
  Dual status        : NO_SOLUTION
  Objective value    : 1.81111e+10
  Objective bound    : 1.81110e+10
  Relative gap       : 2.96865e-06
  Dual objective value : 1.81110e+10

* Work counters
  Solve time (sec)   : 7.52000e-01
  Simplex iterations : 0
  Barrier iterations : 0
  Node count         : 0

Apologies for the long data in the code; I just wanted to show how big the data I feed into JuMP and CPLEX is. I hope someone can help me with this problem. Thank you in advance.

Hi @fstwnn, welcome to the forum :smile:

Two logistical things for future questions:

  1. I’ve moved your question to the Optimization (Mathematical) section, which is where I try to keep all the JuMP-related questions.

  2. You can format your code by placing it in triple-backticks like this:

```julia
1 + 1
```

Let me take a look at your example and get back to you.

I haven’t run your code, but the problem is almost surely constraints like this:

@constraint(
    model,
    [k in 1:numcom, (i,t) in nodes; i==com.origin[k] && t==com.arrive[k]],
    sum(x[k,((ii,tt),(j,tp))] for ((ii,tt),(j,tp)) in links if ii==i && tt==t) - sum(x[k,((j,tp),(ii,tt))] for ((j,tp),(ii,tt)) in links if ii==i && tt==t) == com.demand[k]
)

JuMP will expand this into

for k in 1:numcom
    for (i, t) in nodes
        if i == com.origin[k] && t == com.arrive[k]
            lhs = AffExpr(0.0)
            for ((ii, tt), (j, tp)) in links
                if ii == i && tt == t
                    add_to_expression!(lhs, x[k, ((ii, tt), (j, tp))])
                end
            end
            for ((j, tp), (ii, tt)) in links
                if ii == i && tt == t
                    add_to_expression!(lhs, -1, x[k, ((j, tp), (ii, tt))])
                end
            end
            @constraint(model, lhs == com.demand[k])
        end
    end
end

The issue is that you loop over links twice to build each constraint. If you
have many elements in links, this can be surprisingly expensive!

You can make this faster by caching a map from nodes to their in- or
out-relations:

outgoing_nodes = Dict()
incoming_nodes = Dict()
for (a, b) in links
    if !haskey(outgoing_nodes, a)
        outgoing_nodes[a] = []
    end
    push!(outgoing_nodes[a], b)
    if !haskey(incoming_nodes, b)
        incoming_nodes[b] = []
    end
    push!(incoming_nodes[b], a)
end
@constraint(
    model,
    [k in 1:numcom, n in nodes; n == (com.origin[k], com.arrive[k])],
    sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) - sum(x[k, (n2, n)] for n2 in incoming_nodes[n]) == com.demand[k]
)
@constraint(
    model, 
    [k in 1:numcom, n in nodes; n == (com.destination[k], com.due[k])],
    sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) - sum(x[k, (n2, n)] for n2 in incoming_nodes[n]) == -com.demand[k]
)
@constraint(
    model,
    [k in 1:numcom, n in nodes; n != demandori[k] && n != demanddes[k]],
    sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) == sum(x[k, (n2, n)] for n2 in incoming_nodes[n])
)

With a bit of tweaking, you could simplify further to:

nodes = [(node[i], time[t]) for i in 1:numnode, t in 1:numtime]
outgoing_nodes = Dict(n => eltype(nodes)[] for n in nodes)
incoming_nodes = Dict(n => eltype(nodes)[] for n in nodes)
for (a, b) in links
    push!(outgoing_nodes[a], b)
    push!(incoming_nodes[b], a)
end
function get_demand(com, n, k)
    if n == (com.origin[k], com.arrive[k])
        return com.demand[k]
    elseif n == (com.destination[k], com.due[k])
        return -com.demand[k]
    else
        return 0.0
    end
end
@constraint(
    model,
    [k in 1:numcom, n in nodes],
    sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) - sum(x[k, (n2, n)] for n2 in incoming_nodes[n]) == get_demand(com, n, k)
)

(I didn’t test because I don’t have your data, but you get the idea.)

I’ll also point you to these tutorials, which implement related models:

See also this discussion: JuMP, GAMS, and the IJKLM model | JuMP

Yes, you are right that the problem was in that constraint. Now it works, thank you. However, my original data consists of 42168 links. Could you give me a suggestion on the data structures, or anything else I can do with my code, to speed up the process with 42168 links? Thanks in advance.

The problem is not the number of links. JuMP is an excellent tool to build large optimization models with. You just need to avoid algorithms that have poor scaling properties.

A very common cause is something that looks like:

for item_1 in list_1
    sum(x[item_2] for item_2 in list_2 if item_1 == item_2)
end

The general fix is to precompute the various subsets to avoid looping over list_2 for every element in list_1.

Note that this is a general computer science suggestion (to avoid O(N^2) type algorithms): it applies to all programs you write, not JuMP constraints specifically.
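Here is a minimal self-contained sketch of that pattern (list_1, list_2, and x are placeholder data, not parts of your model):

```julia
list_1 = collect(1:1_000)
list_2 = rand(1:1_000, 10_000)
x = Dict(j => rand() for j in list_2)

# Slow: every element of list_1 scans all of list_2, so the work is O(|list_1| * |list_2|).
slow = [sum((x[j] for j in list_2 if i == j); init = 0.0) for i in list_1]

# Fast: group list_2 by key once, then each sum touches only its own subset.
groups = Dict{Int,Vector{Int}}()
for j in list_2
    push!(get!(groups, j, Int[]), j)
end
fast = [sum((x[j] for j in get(groups, i, Int[])); init = 0.0) for i in list_1]

@assert slow ≈ fast
```

The grouping pass is linear, so the total work drops from O(|list_1| * |list_2|) to roughly O(|list_1| + |list_2|).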

I am currently running my original dataset, which consists of 42168 links and 1407 commodities. It has been running for three hours, and there hasn’t been any output yet. Based on your explanation, it seems that the code itself is not the issue (after implementing your correction), but rather that the size of the problem is causing CPLEX to take a long time to load the model/problem?

Can you share a reproducible example of your new code?

Here is my new code after implementing your corrections:

node = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62]
time = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110,112,114,116,118,120,122,124,126,128,130,132,134,136,138,140,142,144,146,148,150,152,154,156,158,160,162,164,166,168]
com = CSV.read("C:\\Users\\fstwn\\demandseatolldynamic.csv", DataFrame)
graph = CSV.read("C:\\Users\\fstwn\\arcdataseatolldynamic.csv", DataFrame)
numlink = length(graph.origin)
numnode = length(node)
numcom = length(com.origin)
numtime = length(time)
demandori = Tuple((Int64(com.origin[i]),Int64(com.arrive[i])) for i in 1:numcom)
demanddes = Tuple((Int64(com.destination[i]),Int64(com.due[i])) for i in 1:numcom)
links = Tuple(((Int64(graph.origin[i]),Int64(graph.start[i])), (Int64(graph.destination[i]),Int64(graph.finish[i]))) for i in 1:numlink)
translinks = Tuple(((Int64(graph.origin[i]),Int64(graph.start[i])), (Int64(graph.destination[i]),Int64(graph.finish[i]))) for i in 1:numlink if graph.origin[i] != graph.destination[i])
nodes = [(node[i], time[t]) for i in 1:numnode, t in 1:numtime]
outgoing_nodes = Dict(n => eltype(nodes)[] for n in nodes)
incoming_nodes = Dict(n => eltype(nodes)[] for n in nodes)
for (a, b) in links
    push!(outgoing_nodes[a], b)
    push!(incoming_nodes[b], a)
end
function get_demand(com, n, k)
    if n == (com.origin[k], com.arrive[k])
        return com.demand[k]
    elseif n == (com.destination[k], com.due[k])
        return -com.demand[k]
    else
        return 0.0
    end
end
udict = Dict((links[i]=>graph.cap[i]) for i in 1:numlink)
fcdict = Dict((links[i]=>graph.fc[i]) for i in 1:numlink)
vcdict = Dict((links[i]=>graph.vc[i]) for i in 1:numlink)
disdict = Dict((links[i]=>graph.distance[i]) for i in 1:numlink)
model = direct_model(CPLEX.Optimizer())
@variables(model, begin
    y[translink in translinks], Int
    x[k in 1:numcom, link in links] >= 0
end)
@objective(model, Min, sum(fcdict[translink]*y[translink] for translink in translinks) + sum(vcdict[link]*disdict[link]*x[k,link] for k in 1:numcom, link in links))
@constraint(model, [k in 1:numcom, n in nodes], sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) - sum(x[k, (n2, n)] for n2 in incoming_nodes[n]) == get_demand(com, n, k))
@constraint(model, [translink in translinks], sum(x[k,translink] for k in 1:numcom) <= udict[translink]*y[translink])
optimize!(model)
println(solution_summary(model))
println(objective_value(model))
for translink in translinks
    if value.(y[translink]) > 0
        println("y $translink = ", value.(y[translink]))
    end
end
for k in 1:numcom
    for link in links
        if value.(x[k,link]) > 0
            println("x $k $link = ", value.(x[k,link]))
        end
    end
end

To make it reproducible for you, how can I share the CSV files with the demand and link data? It is impossible to put all the data here, since it consists of 42168 links and 1407 commodities.

One issue is that you have nearly 60,000,000 variables (42168 * 1407 = 59_330_376). This is a very big integer program. What sort of computer are you trying to solve this on? How much RAM does it have?

Here are some more ideas for your code:

using JuMP
import CPLEX
import CSV
import DataFrames
node = 1:62
time = 2:2:168
com = CSV.read(
    "C:\\Users\\fstwn\\demandseatolldynamic.csv",
    DataFrames.DataFrame,
)
graph = CSV.read(
    "C:\\Users\\fstwn\\arcdataseatolldynamic.csv",
    DataFrames.DataFrame,
)
num_link = length(graph.origin)
num_com = length(com.origin)
links = tuple.(
    tuple.(graph.origin, graph.start),
    tuple.(graph.destination, graph.finish),
)
trans_links = filter(((a, b), (c, d)) -> a != c, links)
nodes = [(n, t) for n in node, t in time]
link_to_index = Dict(link => i for (i, link) in enumerate(links))
outgoing_nodes = Dict(n => eltype(nodes)[] for n in nodes)
incoming_nodes = Dict(n => eltype(nodes)[] for n in nodes)
for (a, b) in links
    push!(outgoing_nodes[a], b)
    push!(incoming_nodes[b], a)
end
function get_demand(com, n, k)
    if n == (com.origin[k], com.arrive[k])
        return com.demand[k]
    elseif n == (com.destination[k], com.due[k])
        return -com.demand[k]
    else
        return 0.0
    end
end
model = direct_model(CPLEX.Optimizer())
@variables(model, begin
    y[translink in trans_links] >= 0, Int
    x[k in 1:num_com, link in links] >= 0
end)
@objective(
    model,
    Min,
    sum(graph.fc[link_to_index(translink)] * y[translink] for translink in trans_links) +
    sum(graph.vc[link_to_index(link)] * graph.distance[link_to_index(link)] * sum(x[:,link]) for link in links)
)
@constraint(
    model,
    [k in 1:num_com, n in nodes],
    sum(x[k, (n, n2)] for n2 in outgoing_nodes[n]) - sum(x[k, (n2, n)] for n2 in incoming_nodes[n]) == get_demand(com, n, k)
)
@constraint(
    model,
    [translink in trans_links],
    sum(x[:, translink]) <= graph.cap[link_to_index(translink)] * y[translink]
)
optimize!(model)
println(solution_summary(model))
println(objective_value(model))
for translink in trans_links
    if value(y[translink]) > 0.5
        println("y $translink = ", value(y[translink]))
    end
end
for k in 1:num_com, link in links
    if value(x[k, link]) > 0
        println("x $k $link = ", value(x[k, link]))
    end
end

This post has finally motivated me to write a tutorial about this problem: [docs] add tutorial: Performance problems with sum-if formulations by odow · Pull Request #3810 · jump-dev/JuMP.jl · GitHub

I use an HP ZBook Firefly 15 G7 Mobile Workstation laptop with 16384 MB of RAM.

Thanks for your ideas for my code. However, there are some errors in your suggested code:

First:

trans_links = filter(((a, b), (c, d)) -> a != c, links)

I have changed it to

trans_links = filter(link -> link[1][1] != link[2][1], links)

It works, but there are some more errors in the objective function code, with this error message:
ERROR: MethodError: objects of type Dict{Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}, Int64} are not callable
Stacktrace:
[1] macro expansion
@ C:\Users\fstwn\.julia\packages\MutableArithmetics\iovKe\src\rewrite_generic.jl:268 [inlined]
[2] macro expansion
@ C:\Users\fstwn\.julia\packages\MutableArithmetics\iovKe\src\rewrite.jl:325 [inlined]
[3] macro expansion
@ C:\Users\fstwn\.julia\packages\JuMP\as6Ji\src\macros.jl:257 [inlined]
[4] macro expansion
@ C:\Users\fstwn\.julia\packages\JuMP\as6Ji\src\macros\@objective.jl:66 [inlined]

I do not know how to fix this error.

Oops, I meant link_to_index[link] instead of link_to_index(link). The perils of not testing the code!

Note that 16 GB is probably not enough RAM for this. You should look at your memory usage as the program is running. You’re probably swapping to disk, which is causing the slow-down.

Could you profile which part is now taking the most time? Is it building the model? Or is it stuck in optimize!(model)?
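For example, something like this (a toy stand-in rather than your actual model; it only shows how to time the two phases separately, assuming CPLEX is installed as in your script):

```julia
using JuMP
import CPLEX

# Time model building and solving separately to see which phase dominates.
build_time = @elapsed begin
    model = direct_model(CPLEX.Optimizer())
    @variable(model, x[1:1_000] >= 0)
    @constraint(model, sum(x) >= 1)
    @objective(model, Min, sum(x))
end
solve_time = @elapsed optimize!(model)
println("build = $(round(build_time; digits = 2))s, solve = $(round(solve_time; digits = 2))s")
```

If build_time dominates, the bottleneck is JuMP building the problem; if solve_time dominates, it is CPLEX.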

Thanks, now it works with link_to_index[link]. I am not building the model separately; I just run the model with optimize!(model) and the CPLEX solver. With my original data and your suggested code, it does not display anything for more than three hours (I terminated it after more than three hours of running). What I mean by “does not display anything” is that JuMP needs a long time to display the CPLEX log like the one below:

Version identifier: 12.10.0.0 | 2019-11-26 | 843d4de2ae
Tried aggregator 1 time.
Reduced MIP has 9576 rows, 70056 columns, and 198576 nonzeros.
Reduced MIP has 0 binaries, 2016 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.11 sec. (41.62 ticks)
Tried aggregator 1 time.
Detecting symmetries...
Reduced MIP has 9576 rows, 70056 columns, and 198576 nonzeros.
Reduced MIP has 0 binaries, 2016 generals, 0 SOSs, and 0 indicators.
Presolve time = 0.13 sec. (69.18 ticks)

If I use a small problem, the CPLEX log is displayed within a few seconds. But for my original data, it takes a long time to appear.

Perhaps the issue is the size of the problem combined with the computer that I use.

However you said “You should look at your memory usage as the program is running.” How could I do this?

Very nice new tutorial. Being interested in model building for these things, I typed up a quick example of using acsets (attributed C-sets, a data structure from applied category theory, but in practice something akin to an in-memory relational database, with the possibility of additional equations on paths) for generating this type of model. They are exported from the Catlab.jl package. The full script is here https://jump.dev/JuMP.jl/dev/tutorials/getting_started/sum_if/ with acsets (github.com)

The data structure which powers this is this acset, whose schema is in the screenshot below (produced by to_graphviz). The ovals are finite Sets (edges and vertices), and the arrows are Functions. The dots are “attributes” which map to Julia types, in this case Int for the demand and JuMP.VariableRef for the flows. The arrows between finite sets are indexed so that reverse lookup is fast.

In the code I used some convenience functions for acsets that are “graph-like”, such as inedges, but they are just wrappers around the generic way to do inverse lookup on arrows; see the source at: Catlab.jl/src/graphs/BasicGraphs.jl at main · AlgebraicJulia/Catlab.jl (github.com)

@present SchDemandGraph <: SchGraph begin
    Demand::AttrType
    Flows::AttrType
    d::Attr(V,Demand)
    x::Attr(E,Flows)
end

to_graphviz(SchDemandGraph)

@abstract_acset_type AbstractDemandGraph <: AbstractGraph

@acset_type DemandGraph(SchDemandGraph, index=[:src,:tgt]) <: AbstractDemandGraph

acs_graph = @acset DemandGraph{Int,JuMP.VariableRef} begin
    V=length(nodes)
    E=length(edges)
    src=first.(edges)
    tgt=last.(edges)
    d=demand.(nodes)
end

model = JuMP.Model()
acs_graph[:, :x] = @variable(model, flows[e in Catlab.edges(acs_graph)] >= 0)
@constraint(
    model,
    [n in Catlab.vertices(acs_graph)],
    sum(acs_graph[inedges(acs_graph, n), :x]) -
    sum(acs_graph[outedges(acs_graph, n), :x]) == acs_graph[n, :d]
)

Benchmarking results are basically the same as expected. I think that acsets open up interesting possibilities for very expressive model building and fusing data + the LP/MIP model, so it was fun to write out this small example.

It would be helpful if you could isolate which part of the code is taking a long time. How long does it take if you comment out optimize!(model) onwards?

"You should look at your memory usage as the program is running.” How could I do this?

I have tried commenting out optimize!(model) onwards, and after 3 hours of running nothing is displayed. Here is the memory usage after an hour of running:

Can you share the data set? I’ll send you a DM with my email.

So I took a look at the data you sent me. The problem is just too large to build on your laptop. You need more RAM.

You can see this just by measuring how long it takes to do:

model = direct_model(CPLEX.Optimizer())
@time @variable(model, x[1:60_000_000]);

I have run this:

model = direct_model(CPLEX.Optimizer())
@time @variable(model, x[1:60_000_000]);

and the result is
330.273526 seconds (961.03 M allocations: 36.297 GiB, 83.85% gc time, 0.21% compilation time)

Sure. So it takes JuMP 5 1/2 minutes just to create the variables. Now it has to do the constraints. It has to do all your data structure stuff. And then it has to actually optimize the problem!

You can probably improve things with:

model = direct_model(CPLEX.Optimizer())
set_string_names_on_creation(model, false)
@time @variable(model, x[1:60_000_000]);

But at some point you’re going to run out of your 16 GB of memory.
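
If you want to watch memory from within Julia while you experiment (a small sketch; Sys.free_memory and Sys.total_memory are Base functions that report system RAM in bytes), print something like this before and after creating the variables, or just keep the Windows Task Manager open while the script runs:

```julia
# Report how much system RAM is currently free.
free_gib = Sys.free_memory() / 2^30
total_gib = Sys.total_memory() / 2^30
println("RAM: $(round(free_gib; digits = 1)) GiB free of $(round(total_gib; digits = 1)) GiB")
```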