JuMP writeMPS does not finish writing LP model to file

I’m trying to save a large linear program created with JuMP v0.18.5 in Julia v0.6.4 to a .mps file on disk using writeMPS, but the file stops growing at 711 MB and the Julia script writing it is eventually killed. I tried it three times to verify that the file stops at the same size each time. 'ulimit -a' says that the maximum file size is unlimited. The operating system is 64-bit Ubuntu Linux 18.04.4 LTS (bionic). Do you know what is causing this?

$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127958
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 127958
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

JuMP v0.18.5 in Julia v0.6.4

Now there’s something I haven’t seen in a while! I don’t have any idea why that might happen, or suggestions for how to debug or fix it. JuMP 0.18 and Julia 0.6 are unsupported. I suggest you update to JuMP 1.0 and Julia 1.6.

You might run into a few problems doing so, though, because a lot has changed since then (JuMP 0.18.5 was released in 2018). However, the benefit of updating is the compatibility guarantee that comes with JuMP and Julia both being 1.0 releases: the code you write from now on won’t break in future.

On the Julia front, you might want to first install Julia 0.7 (see Julia Downloads, Old releases), which emits deprecation warnings for everything that changed between Julia 0.6 and Julia 1.0. Fix those, and then moving from 0.7 to 1.6 should be pretty simple.

On the JuMP front … a lot has changed. In fact so much that I don’t have many good suggestions for how to update. Here’s our documentation: Introduction · JuMP

If you have places where you get stuck, post them here and we can point you in the right direction.

This constraint causes writeMPS to hang after writing 711MB: @constraint(m,[i=1:I,j=J_i[i]],0 <= y[j] - x[i,j])

If I remove that constraint, writeMPS completes successfully.

Here’s the code:

If you can provide a reproducible example, we might be able to help. But you should really update. Lots has changed (for the better):

In particular, we completely revamped how we do file writing, so this is likely fixed in more recent versions of JuMP.

Below is the code that creates the LP model and writes it to an MPS file. I is 14888 and J is 5284. For 1 <= i <= I, J_i[i] is a subset of {1,2,…,J}. c is a matrix of non-negative weights. p is a fixed positive integer like 12. If I = 14888, writeMPS hangs after writing 711 MB to the file. If I is small, like 500, then it works fine. I don’t know how big I can be before it breaks.

m = Model()
@variable(m,x[i=1:I,j=J_i[i]] >= 0)
@variable(m,1 >= y[j=1:J] >= 0)
@objective(m,Min,sum(c[i,j]*x[i,j] for i=1:I, j=J_i[i]))
@constraint(m,sum_x[i=1:I],sum(x[i,j] for j=J_i[i]) == 1)
@constraint(m,sum(y[j] for j=1:J) == p)
@constraint(m,[i=1:I,j=J_i[i]],0 <= y[j] - x[i,j])
writeMPS(m, "raven5.mps")

I don’t have any ideas for why it might hang after 711 MB of writing. You should just update to JuMP 1.0.

For your code above, the only change between JuMP 0.18 and JuMP 1.0 is:

# writeMPS(m, "raven5.mps")
write_to_file(m, "raven5.mps")

But if you’re using a solver, lots of things changed, so

m = Model(solver = GurobiSolver())

is now

m = Model(Gurobi.Optimizer)

and how we solve the model and deal with statuses and solutions has changed.

status = solve(model)
x_val = getvalue(x)

is now

optimize!(model)
status = termination_status(model)
x_val = value.(x)
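Putting those pieces together, your model translates almost verbatim. Here’s a sketch of the JuMP 1.0 version (with small placeholder values for I, J, J_i, c, and p, since the real data comes from your .jld file):

```julia
using JuMP

# Placeholder data so the sketch is self-contained; substitute your real inputs.
I, J, p = 3, 4, 2
J_i = [1:J for _ in 1:I]     # in your case, J_i[i] is a subset of 1:J
c = rand(I, J)               # non-negative weights

m = Model()
@variable(m, x[i = 1:I, j = J_i[i]] >= 0)
@variable(m, 1 >= y[j = 1:J] >= 0)
@objective(m, Min, sum(c[i, j] * x[i, j] for i in 1:I, j in J_i[i]))
@constraint(m, sum_x[i = 1:I], sum(x[i, j] for j in J_i[i]) == 1)
@constraint(m, sum(y[j] for j in 1:J) == p)
@constraint(m, [i = 1:I, j = J_i[i]], 0 <= y[j] - x[i, j])
write_to_file(m, "raven5.mps")
```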

Could the file writing issue be related to the max locked memory, which is 16384, as shown in my initial message?

Right now, I just need the MPS file so that I can try to solve it with a third-party LP solver like PDLP in Google OR-Tools. Before creating the model with JuMP, I also need to read in the inputs from a .jld file created with JLD v0.8.3 in Julia 0.6.4.

I,J,J_i,cost_mat = load("…/output_files/bso_data_in_syn_cplex1210_model_11_all.jld", "I", "J", "J_i", "cost_mat")

I installed Julia v1.7.2. The script is still eventually killed when write_to_file is called. 621MB are written to the MPS file. write_to_file works fine if I is small enough.

I installed Julia v1.7.2

Okay, now that’s interesting.

What is versioninfo()? Is this on a laptop or some server?

What happens if you just try to write to a file normally? How far can you scale N in these settings?

function main(io, N)
    for n in 1:N
        println("A B $n")
    end
end

io = IOBuffer()
main(io, 1_000_000)

open("tmp.txt", "w") do io
    main(io, 1_000_000)
end

versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Core™ i7-3930K CPU @ 3.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.1 (ORCJIT, sandybridge)

This is running on a desktop server. If I run your test script, output is written to the console. The last output is: "A B 1000000". However, nothing is written to tmp.txt.
$ ls -alh tmp.txt
-rw------- 1 0 May 22 22:52 tmp.txt

Oops. It should be println(io, "A B $n")

This time no output is written to the console, but tmp.txt is written to.

-rw------- 1 11M May 22 23:25 tmp.txt

$ tail tmp.txt
A B 999991
A B 999992
A B 999993
A B 999994
A B 999995
A B 999996
A B 999997
A B 999998
A B 999999
A B 1000000

This time no output is written to the console, but tmp.txt is written to.

And what happens as you increase N? 10_000_000? 100_000_000, 200_000_000?

To make longer lines, you could also try

println(io, "A B $(rand() * n)")

I’m trying to figure out if this is a problem in JuMP’s write_to_file (in which case, we will fix), or a more general problem with writing on your machine, in which case, you should contact your administrator. There’s probably some setting, but I wouldn’t know where to start.

For N = 100_000_000:
$ ls -alh tmp.txt
-rw------- 1 13G May 22 23:47 tmp.txt

Very weird. What if you try the LP file writer? Just use raven5.lp as the filename.

For N = 200_000_000 and using the random number output you suggested, the program is killed before writing tmp.txt.
$ julia_1.7.2 tprint.jl
Killed

Below is the test code in tprint.jl:
function main(io, N)
    for n in 1:N
        println(io, "A B $(rand() * n)")
    end
end

N = 2_000_000_000 # 1_000_000_000
io = IOBuffer()
main(io, N)

open("tmp.txt", "w") do io
    main(io, N)
end

The random number output script works for N = 2_000_000.

$ ls -alh tmp.txt
-rw------- 1 44M May 23 00:48 tmp.txt

write_to_file(m, "raven5.lp") is also killed, and no data is written to the .lp file.
-rw------- 1 0 May 23 01:48 raven5.lp

It looks like you actually ran with N = 2_000_000_000. Did you run out of disk space?

write_to_file(m, "raven5.lp") is also killed, and no data is written to the .lp file.

I don’t really understand the problem. We don’t do anything special in the MPS and LP writers. They just write to a file.

Are you running out of RAM or disk space? Can you try this on a different computer? Can you share a reproducible example with all of the data that I can run?
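One quick way to check the RAM theory from the Julia side (a sketch using standard Base.Sys functions; Sys.maxrss reports this process’s peak resident memory):

```julia
# Compare system memory to this process's peak usage; call before and after
# building the model, and again just before write_to_file.
println("Total RAM: ", round(Sys.total_memory() / 2^30; digits = 2), " GiB")
println("Free RAM:  ", round(Sys.free_memory() / 2^30; digits = 2), " GiB")
println("Peak RSS:  ", round(Sys.maxrss() / 2^30; digits = 2), " GiB")
```

If free RAM collapses while the model is being built or written, the kernel’s out-of-memory killer is the likely reason the process prints "Killed".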

Yes, I meant N = 2_000_000_000. It didn’t get to writing the file; tmp.txt was not even created. I believe the process was killed while writing to the IOBuffer.
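That would be consistent with memory, not file writing, being the bottleneck: an IOBuffer keeps everything it is given in RAM, so 2_000_000_000 lines of roughly 25 bytes each needs on the order of 50 GiB before a single byte reaches disk. A minimal sketch of the difference:

```julia
# An IOBuffer accumulates bytes in memory; an open file streams them to disk.
io = IOBuffer()
for n in 1:1_000
    println(io, "A B $n")
end
println("bytes held in RAM: ", position(io))  # grows with every println

open("tmp_small.txt", "w") do file
    for n in 1:1_000
        println(file, "A B $n")  # flushed to disk, not accumulated in RAM
    end
end
```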

I was able to write the MPS file successfully on Windows 10 running on a modern laptop.