Specifying datatype in JuMP expressions and constraints to save memory

I am solving an ILP that, despite my best efforts to downsize it (e.g. using sparse matrices), is taking ~100 GB of RAM.

In my model, I have expressions that are sums of binary variables multiplied by integer coefficients. Perhaps mistakenly, my intuition tells me that if I specify a smaller integer type (e.g. Int16) I can save space, e.g.:

s_j_k = SparseMatrixCSC{UInt16, UInt32}(s_j_k)
@variable(model, r_i_j[1:I, 1:J], base_name="r_i_j", binary=true)

@expression(
        model,
        z_i_k[i=1:I, k=1:K],
        sum(r_i_j[i,j]*s_j_k[j,k] for j in 1:J),
    )

I’m wondering if, after the multiplication, JuMP automatically casts these expressions to Int64. Is it possible to specify an integer type (e.g. Int32, Int16) for JuMP expressions and constraints? If this is not possible, what are some ways to reduce the memory consumption of large ILPs?

Is it possible to specify an integer type

No.
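
JuMP’s AffExpr is hard-coded to GenericAffExpr{Float64, VariableRef}, so coefficients are stored as Float64 regardless of the integer type you multiply by. A quick way to see this (minimal sketch):

using JuMP
model = Model()
@variable(model, x, Bin)
expr = x * UInt16(3)
typeof(expr)  # GenericAffExpr{Float64, VariableRef}: the UInt16 coefficient is promoted to Float64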

taking ~100 GB of RAM.

Just the sparse array? Just z_i_k? How big is your sparse array? How big is this problem you’re trying to solve?

To reduce the memory overhead you could also try Model(Solver.Optimizer; bridge_constraints = false), replacing Solver as appropriate.


Another issue is that you say s_j_k is a sparse array, but then you’re looping over all the elements (including zeros)?
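
For reference, SparseArrays lets you visit only the stored entries of each column, something like:

using SparseArrays
rows, vals = rowvals(s_j_k), nonzeros(s_j_k)
for k in 1:size(s_j_k, 2)          # each column
    for idx in nzrange(s_j_k, k)   # only the stored entries of column k
        j, v = rows[idx], vals[idx]
        # here s_j_k[j, k] == v, and the zeros are never touched
    end
end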

It would help if you could provide a minimal working example of what you are trying to achieve.


Here is a minimal working example. The parameters are scaled down, but we wish to solve something on the scale of I=1, J=132718, K=4587, T1=400, T3=33, N=138, p=1000. At that scale (where we are having trouble), the sparse matrix has dimensions 153414 x 4587 with 3397677 stored entries.
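
For reference, a back-of-envelope calculation (assuming the standard SparseMatrixCSC layout of colptr, rowval, and nzval) suggests the sparse matrix itself is only ~20 MB, so the memory must be going elsewhere:

# size of SparseMatrixCSC{UInt16, UInt32}, 153414 x 4587 with 3397677 stored entries:
# colptr (UInt32, K+1) + rowval (UInt32, nnz) + nzval (UInt16, nnz)
(4587 + 1) * 4 + 3397677 * 4 + 3397677 * 2  # ≈ 20.4 MB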

  • NOTE: since .csv files cannot be attached, I’ve listed the values below (which may be cumbersome, but can be copy-pasted).
	### PARAMETERS ###
	I=1
	J=736 
	K=2
	T1=400
	T3=33
	N=1
	p=1000

    ### CREATE ADJACENCY MATRIX ###
    df = CSV.read("example_v1.csv", DataFrame)

    # create sparse matrix (+1 because the CSV indices are 0-based)
    s_j_k = sparse(df.p_idx .+ 1, df.w_idx .+ 1, ones(Int, nrow(df)))
    # change datatype to reduce memory
    s_j_k = SparseMatrixCSC{UInt16, UInt32}(s_j_k)

    ### CREATE MODEL ###
    model = Model(Gurobi.Optimizer)
    set_optimizer_attribute(model, "Threads", 1)

    ### VARIABLES ###
    @variable(model, r_i_j[1:I, 1:J], base_name="r_i_j", binary=true)
    @variable(model, y_i_k[1:I, 1:K], base_name="y_i_k", binary=true)
    
    @expression(
        model,
        z_i_k[i=1:I, k=1:K],
        sum(r_i_j[i,j]*s_j_k[j,k] for j in 1:J),
    )

    ### CONSTRAINTS ###
    @constraint(model, 
            [i in 1:I], 
            sum(y_i_k[i, :]) <= T3
    )
    @constraint(
        model,
            [k=1:ceil(Int, K/34)],
            sum(y_i_k[:, (1+(k-1)*34):min(k*34, K)]) <=1,
    )

    @constraint(
        model,
        [i=1:I, k=1:K-33],
        sum(y_i_k[i,k:(k+33)]) <=1
    )

    @constraint(
        model, 
        [i=1:I, k=1:K],
        sum(r_i_j[i, j] * y_i_k[i, k] * s_j_k[j, k] for j in 1:J) >= T1 * y_i_k[i, k],
    )
    
    @objective(
        model,
        Min,
        sum(r_i_j) + (N-sum(y_i_k))*p
    )
p_idx,w_idx
3,1
3,1
3,1
3,1
3,1
3,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
21,1
48,1
48,1
48,1
48,1
48,1
74,1
74,1
74,1
74,1
74,1
74,1
74,1
74,1
74,1
74,1
74,1
83,1
83,1
83,1
83,1
83,1
83,1
83,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
86,1
166,1
226,1
226,1
226,1
226,1
226,1
227,1
339,1
339,1
339,1
339,1
339,1
339,1
339,1
339,1
339,1
339,1
339,1
349,1
349,1
349,1
349,1
349,1
349,1
356,1
356,1
356,1
356,1
356,1
356,1
356,1
356,1
356,1
356,1
356,1
362,1
362,1
362,1
362,1
362,1
362,1
362,1
362,1
362,1
362,1
362,1
363,1
363,1
363,1
363,1
363,1
363,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
371,1
391,1
391,1
391,1
391,1
391,1
391,1
391,1
391,1
391,1
391,1
391,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
414,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
472,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
473,1
507,1
510,1
510,1
510,1
510,1
510,1
510,1
510,1
510,1
510,1
510,1
510,1
516,1
516,1
516,1
516,1
516,1
516,1
516,1
516,1
516,1
516,1
516,1
668,1
1,2
2,2
4,2
5,2
6,2
7,2
8,2
9,2
10,2
11,2
12,2
13,2
14,2
15,2
16,2
17,2
18,2
19,2
20,2
22,2
23,2
24,2
25,2
26,2
27,2
28,2
29,2
30,2
31,2
32,2
33,2
34,2
35,2
36,2
37,2
38,2
39,2
40,2
41,2
42,2
43,2
44,2
45,2
46,2
47,2
49,2
50,2
51,2
52,2
53,2
54,2
55,2
56,2
57,2
58,2
59,2
60,2
61,2
62,2
63,2
64,2
65,2
66,2
67,2
68,2
69,2
70,2
71,2
72,2
73,2
75,2
76,2
77,2
78,2
79,2
80,2
81,2
82,2
84,2
85,2
87,2
88,2
89,2
90,2
91,2
92,2
93,2
94,2
95,2
96,2
97,2
98,2
99,2
100,2
101,2
102,2
103,2
104,2
105,2
106,2
107,2
108,2
109,2
110,2
111,2
112,2
113,2
114,2
115,2
116,2
117,2
118,2
119,2
120,2
121,2
122,2
123,2
124,2
125,2
126,2
127,2
128,2
129,2
130,2
131,2
132,2
133,2
134,2
135,2
136,2
137,2
138,2
139,2
140,2
141,2
142,2
143,2
144,2
145,2
146,2
147,2
148,2
149,2
150,2
151,2
152,2
153,2
154,2
155,2
156,2
157,2
158,2
159,2
160,2
161,2
162,2
163,2
164,2
165,2
167,2
168,2
169,2
170,2
171,2
172,2
173,2
174,2
175,2
176,2
177,2
178,2
179,2
180,2
181,2
182,2
183,2
184,2
185,2
186,2
187,2
188,2
189,2
190,2
191,2
192,2
193,2
194,2
195,2
196,2
197,2
198,2
199,2
200,2
201,2
202,2
203,2
204,2
205,2
206,2
207,2
208,2
209,2
210,2
211,2
212,2
213,2
214,2
215,2
216,2
217,2
218,2
219,2
220,2
221,2
222,2
223,2
224,2
225,2
228,2
229,2
230,2
231,2
232,2
233,2
234,2
235,2
236,2
237,2
238,2
239,2
240,2
241,2
242,2
243,2
244,2
245,2
246,2
247,2
248,2
249,2
250,2
251,2
252,2
253,2
254,2
255,2
256,2
257,2
258,2
259,2
260,2
261,2
262,2
263,2
264,2
265,2
266,2
267,2
268,2
269,2
270,2
271,2
272,2
273,2
274,2
275,2
276,2
277,2
278,2
279,2
280,2
281,2
282,2
283,2
284,2
285,2
286,2
287,2
288,2
289,2
290,2
291,2
292,2
293,2
294,2
295,2
296,2
297,2
298,2
299,2
300,2
301,2
302,2
303,2
304,2
305,2
306,2
307,2
308,2
309,2
310,2
311,2
312,2
313,2
314,2
315,2
316,2
317,2
318,2
319,2
320,2
321,2
322,2
323,2
324,2
325,2
326,2
327,2
328,2
329,2
330,2
331,2
332,2
333,2
334,2
335,2
336,2
337,2
338,2
340,2
341,2
342,2
343,2
344,2
345,2
346,2
347,2
348,2
350,2
351,2
352,2
353,2
354,2
355,2
357,2
358,2
359,2
360,2
361,2
364,2
365,2
366,2
367,2
368,2
369,2
370,2
372,2
373,2
374,2
375,2
376,2
377,2
378,2
379,2
380,2
381,2
382,2
383,2
384,2
385,2
386,2
387,2
388,2
389,2
390,2
392,2
393,2
394,2
395,2
396,2
397,2
398,2
399,2
400,2
401,2
402,2
403,2
404,2
405,2
406,2
407,2
408,2
409,2
410,2
411,2
412,2
413,2
415,2
416,2
417,2
418,2
419,2
420,2
421,2
422,2
423,2
424,2
425,2
426,2
427,2
428,2
429,2
430,2
431,2
432,2
433,2
434,2
435,2
436,2
437,2
438,2
439,2
440,2
441,2
442,2
443,2
444,2
445,2
446,2
447,2
448,2
449,2
450,2
451,2
452,2
453,2
454,2
455,2
456,2
457,2
458,2
459,2
460,2
461,2
462,2
463,2
464,2
465,2
466,2
467,2
468,2
469,2
470,2
471,2
474,2
475,2
476,2
477,2
478,2
479,2
480,2
481,2
482,2
483,2
484,2
485,2
486,2
487,2
488,2
489,2
490,2
491,2
492,2
493,2
494,2
495,2
496,2
497,2
498,2
499,2
500,2
501,2
502,2
503,2
504,2
505,2
506,2
508,2
509,2
511,2
512,2
513,2
514,2
515,2
517,2
518,2
519,2
520,2
521,2
522,2
523,2
524,2
525,2
526,2
527,2
528,2
529,2
530,2
531,2
532,2
533,2
534,2
535,2
536,2
537,2
538,2
539,2
540,2
541,2
542,2
543,2
544,2
545,2
546,2
547,2
548,2
549,2
550,2
551,2
552,2
553,2
554,2
555,2
556,2
557,2
558,2
559,2
560,2
561,2
562,2
563,2
564,2
565,2
566,2
567,2
568,2
569,2
570,2
571,2
572,2
573,2
574,2
575,2
576,2
577,2
578,2
579,2
580,2
581,2
582,2
583,2
584,2
585,2
586,2
587,2
588,2
589,2
590,2
591,2
592,2
593,2
594,2
595,2
596,2
597,2
598,2
599,2
600,2
601,2
602,2
603,2
604,2
605,2
606,2
607,2
608,2
609,2
610,2
611,2
612,2
613,2
614,2
615,2
616,2
617,2
618,2
619,2
620,2
621,2
622,2
623,2
624,2
625,2
626,2
627,2
628,2
629,2
630,2
631,2
632,2
633,2
634,2
635,2
636,2
637,2
638,2
639,2
640,2
641,2
642,2
643,2
644,2
645,2
646,2
647,2
648,2
649,2
650,2
651,2
652,2
653,2
654,2
655,2
656,2
657,2
658,2
659,2
660,2
661,2
662,2
663,2
664,2
665,2
666,2
667,2
669,2
670,2
671,2
672,2
673,2
674,2
675,2
676,2
677,2
678,2
679,2
680,2
681,2
682,2
683,2
684,2
685,2
686,2
687,2
688,2
689,2
690,2
691,2
692,2
693,2
694,2
695,2
696,2
697,2
698,2
699,2
700,2
701,2
702,2
703,2
704,2
705,2
706,2
707,2
708,2
709,2
710,2
711,2
712,2
713,2
714,2
715,2
716,2
717,2
718,2
719,2
720,2
721,2
722,2
723,2
724,2
725,2
726,2
727,2
728,2
729,2
730,2
731,2
732,2
733,2
734,2
735,2
736,2

I didn’t run this code, so there might be typos, but hopefully it should point you in the right direction.

The biggest point is: why is it a sparse matrix!? Just read the indices into a dictionary and then loop through that in the constraint. The sparse matrix approach ends up looping over every element, even the zeros.

The other thing I changed is to add bridge_constraints = false and to change some of the sums to explicit loops.

Let me know if it helps.

function read_csv()
    data = Dict{Int,Vector{Int}}()
    open("example_v1.csv", "w") do io
        readline(io)  # skip header
        while !eof(io)
            line = readline(io)
            items = split(line, ',')  # the file is comma-separated
            (p, w) = parse.(Int, items)
            if haskey(data, w)
                push!(data[w], p+1)
            else
                data[w] = [p+1]
            end
        end
    end
    return data
end

data = read_csv()
model = Model(Gurobi.Optimizer; bridge_constraints = false)
@variable(model, r_i_j[1:I, 1:J], Bin)
@variable(model, y_i_k[1:I, 1:K], Bin)
@constraints(model, begin
    [i in 1:I], sum(y_i_k[i, k] for k in 1:K) <= T3
    [k = 1:ceil(Int, K/34)], sum(y_i_k[i, kk] for i in 1:I, kk in (1+(k-1)*34):min(k*34, K)) <= 1
    [i = 1:I, k = 1:K-33], sum(y_i_k[i, k+j] for j in 0:33) <= 1  # 34 terms, matching y_i_k[i, k:(k+33)]
    [i = 1:I, k = 1:K], sum(r_i_j[i, j] * y_i_k[i, k] for j in data[k]) >= T1 * y_i_k[i, k]
end)
@objective(model, Min, sum(r_i_j) + (N  - sum(y_i_k)) * p)

This is great feedback. I removed the 0’s (because they were indices used by a Python script; I should have added +1, since Julia is 1-indexed).

The reason I use a sparse matrix is that the entries above are really row (p_idx) and column (w_idx) indices. I.e. one instance of (3, 1) means we add +1 to row 3, col 1.

There are 6 instances of (3, 1), so in the sparse matrix, row 3, col 1 should have the value 6.


But that’s beside the point! Iterating only through the indices given by findall(!iszero, s_j_k) would significantly reduce the number of constraints. Did I understand this correctly?
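
Concretely, I was imagining something like this (untested):

# collect, for each column k, the rows j that have a stored nonzero
nz = findall(!iszero, s_j_k)          # Vector{CartesianIndex{2}}
js_for_k = Dict{Int,Vector{Int}}()
for idx in nz
    push!(get!(js_for_k, idx[2], Int[]), idx[1])
end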

would significantly reduce the number of constraints

There will be the same number of constraints. But you can do something more efficient than constructing the sparse matrix so you can loop over just the non-zero keys.

Perhaps something like

function read_csv()
    data = [Dict{Int,Int}() for k in 1:K]
    open("example_v1.csv", "w") do io
        readline(io)  # skip header
        while !eof(io)
            line = readline(io)
            items = split(line, ',')  # comma-separated
            (p, w) = parse.(Int, items)
            d = data[w]
            d[p + 1] = get(d, p + 1, 0) + 1
        end
    end
    return data
end

data = read_csv()
model = Model(Gurobi.Optimizer; bridge_constraints = false)
@variable(model, r_i_j[1:I, 1:J], Bin)
@variable(model, y_i_k[1:I, 1:K], Bin)
@constraints(model, begin
    [i in 1:I], sum(y_i_k[i, k] for k in 1:K) <= T3
    [k = 1:ceil(Int, K/34)], sum(y_i_k[i, kk] for i in 1:I, kk in (1+(k-1)*34):min(k*34, K)) <= 1
    [i = 1:I, k = 1:K-33], sum(y_i_k[i, k+j] for j in 0:33) <= 1
    [i = 1:I, k = 1:K], sum(r_i_j[i, j] * y_i_k[i, k] * v for (j, v) in data[k]) >= T1 * y_i_k[i, k]
end)
@objective(model, Min, sum(r_i_j) + (N  - sum(y_i_k)) * p)

Thank you for these performance tips! I’m still finding that, since the Gurobi backend is unchanged, my ILP still consumes 100-120 GB of memory. It did significantly improve the model build time, though; it is much faster.

Use model = direct_model(Gurobi.Optimizer()). That should help even more.
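
For example (untested sketch; the Threads attribute is carried over from your MWE):

using JuMP, Gurobi
# direct mode: JuMP writes straight into the Gurobi model instead of keeping
# its own cached copy, so the problem is stored only once
model = direct_model(Gurobi.Optimizer())
set_optimizer_attribute(model, "Threads", 1)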

How many variables and constraints in the full problem? Do you have the Gurobi log?

Duly noted RE: direct_model. In total there are millions of variables (46399930) and hundreds of thousands of quadratic constraints (278667). Maybe this isn’t a good fit for solvers like Gurobi.

Academic license - for non-commercial use only - expires 2022-08-12
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (linux64)
Thread count: 32 physical cores, 64 logical processors, using up to 1 threads
Optimize a model with 4690 rows, 137305 columns and 164010 nonzeros
Model fingerprint: 0x5b431467
Model has 278667 quadratic constraints
Variable types: 0 continuous, 137305 integer (137305 binary)
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  QMatrix range    [1e+00, 1e+02]
  QLMatrix range   [4e+02, 4e+02]
  Objective range  [1e+00, 1e+03]
  Bounds range     [0e+00, 0e+00]
  RHS range        [1e+00, 3e+01]
  QRHS range       [2e+01, 2e+01]
Found heuristic solution: objective 138000.00000
Presolve removed 0 rows and 0 columns (presolve time = 44s) ...
Presolve removed 2542 rows and 21136 columns (presolve time = 127s) ...
Presolve removed 5234 rows and 21744 columns (presolve time = 144s) ...
Presolve removed 6224 rows and 21744 columns (presolve time = 145s) ...
Presolve removed 6224 rows and 21744 columns (presolve time = 150s) ...
Presolve removed 6224 rows and 21744 columns (presolve time = 155s) ...
Presolve removed 6518 rows and 21744 columns (presolve time = 234s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 250s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 255s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 260s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 339s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 355s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 360s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 365s) ...
Presolve removed 6524 rows and 21744 columns (presolve time = 371s) ...
Presolve removed 7294 rows and 22514 columns (presolve time = 768s) ...
Presolve added 10595 rows and 0 columns
Presolve removed 0 rows and 21629 columns
Presolve time: 767.91s
Presolved: 46337933 rows, 46399930 columns, 247953743 nonzeros
Variable types: 0 continuous, 46399930 integer (46399740 binary)

Root simplex log...

Iteration    Objective       Primal Inf.    Dual Inf.      Time
       0    1.0500000e+05   1.115543e+05   0.000000e+00   4008s
     126    1.0500000e+05   1.965220e+05   0.000000e+00   4044s
    2606    1.0504549e+05   9.093705e+03   0.000000e+00   4151s
...
   49456    1.0505385e+05   2.422572e+02   0.000000e+00  26756s
   49626    1.0505385e+05   1.509375e+02   0.000000e+00  26955s
   49836    1.0505385e+05   2.865344e+00   0.000000e+00  27132s
   50006    1.0505385e+05   2.966176e+00   0.000000e+00  27325s
   50191    1.0505385e+05   0.000000e+00   4.253250e-04  27490s
   50301    1.0505385e+05   0.000000e+00   1.436733e-04  27566s
   50411    1.0505385e+05   0.000000e+00   6.542432e-04  27645s
   50511    1.0505385e+05   0.000000e+00   0.000000e+00  27748s
   50511    1.0505385e+05   0.000000e+00   0.000000e+00  27858s

Root relaxation: objective 1.050538e+05, 50511 iterations, 24270.77 seconds
Total elapsed time = 27933.29s
Total elapsed time = 28322.36s

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
     0     0 105053.849    0 11163 138000.000 105053.849  23.9%     - 28575s

Maybe this isn’t a good fit for solvers like Gurobi

Yeah it’s a pretty large problem…

You may be able to reformulate it as a linear program, or at least a linearly-constrained one. This may not help the memory, since it adds even more objects, but it may help the solver.

Since r_i_j and y_i_k are binary variables, their product can be replaced with an additional variable and four sparse inequalities. This is the McCormick envelope; since the variables are binary, it is an exact reformulation (e.g. cf. https://optimization.mccormick.northwestern.edu/index.php/McCormick_envelopes).

Setting x = ry, with r and y binary variables, we can replace any instance of ry with x and add the following:

x \geq 0, \qquad x \geq r + y - 1, \qquad x \leq r, \qquad x \leq y
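
In JuMP, a single product could be linearized like this (x is an illustrative new variable):

# exact linearization of x = r * y for binary r and y
@variable(model, 0 <= x <= 1)
@constraint(model, x >= r + y - 1)
@constraint(model, x <= r)
@constraint(model, x <= y)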

Also, I may be missing something, but could you not cancel or remove y_i_k[i, k] in the above constraint anyway since it does not depend on the index of summation?


Gurobi does this linearization automatically.

You can tell from the

Presolve added 10595 rows and 0 columns

Whether doing it manually would help is probably problem-dependent. There’s a trade-off between problem size and the cost of formulating the LP vs QCP.

Right, interesting.

Yes, we actually tried this! But RE: odow’s comment, we saw that Gurobi does the linearization automatically.
