JuMP creating a huge MPS file

Further to my question here, How to specify barrier method to use? - Specific Domains / Optimization (Mathematical) - JuliaLang, I have an example of a case where building the model, or writing an MPS file, takes too long. For a pure linear program with 20001 rows and 40001 columns, it takes about 30 minutes to build the JuMP model before the solve process begins. I wrote the model to an MPS file and the file turned out to be 13.4 GB!

Could someone please advise on what may be going wrong?

using JuMP, Xpress

function dense_lp(nvar, ncol)
    data = rand(ncol, nvar)
    model = direct_model(Xpress.Optimizer())
    @variable(model, x[1:nvar] >= 0)
    @variable(model, y[1:ncol] >= 0)
    @variable(model, slack >= 0.0)
    @constraint(model, x_limit, sum(x[i] for i = 1:nvar) == 1)
    @constraint(model, [j in 1:ncol], slack + y[j] >= sum(data[j, i] * x[i] for i in 1:nvar))
    @objective(model, Min, slack + 200 / ncol * sum(y[j] for j = 1:ncol))
    write_to_file(model, "dense_LP.mps")
    return model
end


So a few back-of-the-envelope calculations:

  • data has 400_000_000 Float64 elements. Each element requires 8 bytes, so the total is ~3GB.
  • However, MPS doesn’t store values in binary, it stores them as text. So each number is written out the way string prints it:
    julia> string(rand())
    "0.4216236118408911"   (illustrative; the actual digits will differ)
    There are a few complications, but call it 18 characters of 1 byte each. So now your 400e6 elements take ~6.7 GB to write out as text.
  • But we also need to record the variable and constraint name for each element. So add on a few more bytes etc.
  • Then we need to deal with the objective, the other variables, variable bounds, etc. and you can see how 13.4 GB is actually pretty reasonable.
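The arithmetic above can be sketched directly. This is a back-of-the-envelope estimate only; the 18-character figure is the one used in this thread, and the 14 bytes for row/column names per entry is my own rough assumption:

```julia
# Back-of-the-envelope MPS size for a dense 20_000 x 20_000 matrix.
nnz = 20_000 * 20_000                 # 400 million coefficients
binary_gb = nnz * 8 / 1e9             # Float64 in memory: ~3.2 GB
value_chars = 18                      # e.g. "0.4216236118408911"
name_chars = 14                       # rough guess for "C12345 X12345 " per entry
text_gb = nnz * (value_chars + name_chars) / 1e9
println("in memory:   ", binary_gb, " GB")   # ~3.2
println("as MPS text: ", text_gb, " GB")     # ~12.8, close to the observed 13.4 GB
```

Adding the objective, bounds, and section headers on top of the ~12.8 GB estimate makes the observed 13.4 GB entirely plausible.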

I don’t have many suggestions to make this faster. Could you try

model = Model(Xpress.Optimizer; add_bridges = false)

It’d also be useful to try a different solver like Gurobi. Are they all slow? Or just Xpress?


Thanks for the detailed response.

Setting add_bridges = false doesn’t reduce the time it takes to write the MPS file; it still takes 30 minutes for a problem of this size. I was using direct_model to create the model, which I think disables bridges by default.

I don’t have access to Gurobi or another commercial solver, but I don’t expect them to be faster when it comes to writing an MPS file through JuMP. Isn’t the time to write an MPS file solver-independent? When it takes that long to write an MPS file, the time it takes to generate the model or start the solve process should be comparable.

These are different things. Did Xpress take 30 minutes to start the solve process as well? Writing an MPS file should be slower than passing the model to a solver.

If you just use Model() with no solver, is it faster?
Xpress.jl is not optimized for very efficient queries of data like constraint coefficients.
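One way to isolate JuMP’s build time from the solver, as suggested above, is to build the same model with no optimizer attached. A minimal sketch, assuming only JuMP is installed (the function name and the small test sizes are mine, not from the thread):

```julia
using JuMP

# Build the dense LP with no solver attached, so @time measures
# pure JuMP model construction with nothing passed to a solver.
function dense_lp_build(nvar, ncol)
    data = rand(ncol, nvar)
    model = Model()  # no optimizer
    @variable(model, x[1:nvar] >= 0)
    @variable(model, y[1:ncol] >= 0)
    @variable(model, slack >= 0.0)
    @constraint(model, sum(x) == 1)
    @constraint(model, [j in 1:ncol], slack + y[j] >= sum(data[j, i] * x[i] for i in 1:nvar))
    @objective(model, Min, slack + 200 / ncol * sum(y))
    return model
end

@time dense_lp_build(2000, 2000)  # compare against the same sizes with a solver
```

If this is fast but direct_model(Xpress.Optimizer()) is slow at the same sizes, the time is going into the solver interface rather than JuMP itself.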


In the MWE, it took 22 minutes to write an MPS file. Starting the solve process was 6 minutes faster than writing the MPS file, but that is still 16 minutes. I have tried Clp as well, and it also takes a long time before I see any progress from the solver.

It looks like the long model-generation time is the result of JuMP spending too much time building the model, and is not solver-dependent.

How fast was model = Model(Xpress.Optimizer; add_bridges = false) to start solving?

Part of the problem is the density. JuMP isn’t optimized for these types of problems. It’s really intended for problems with millions of variables and constraints, but only a few non-zeros per constraint.
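To put rough numbers on that difference (the sparse-problem figures below are hypothetical, chosen only to illustrate the point):

```julia
# Total non-zeros: the dense LP in this thread vs. a typical
# large-but-sparse LP with a handful of non-zeros per constraint.
dense_nnz  = 20_000 * 20_000      # every coefficient is non-zero: 400M
sparse_nnz = 1_000_000 * 5        # 1M constraints, ~5 non-zeros each: 5M
println(dense_nnz ÷ sparse_nnz)   # dense problem has 80x the non-zeros
```

A problem with 50x fewer rows can still carry 80x more data when every coefficient is non-zero, which is where both the build time and the MPS file size come from.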

16 minutes on the MWE in this post, but about 26-30 minutes on another, similar example I tried. This 16 minutes is the time before I see anything on screen from the solver.

I have tried writing the MPS file using model = Model() instead of Xpress, and it was slightly faster: 25 minutes on the other example, compared with 30 minutes when using Xpress.

I will try Mosel or another official API and see if it will be faster with dense problems.

Can you try

function dense_lp2(nvar, ncol)
    data = rand(nvar, ncol)
    model = direct_model(Xpress.Optimizer())
    x = @variable(model, [1:nvar], lower_bound = 0.0)
    y = @variable(model, [1:ncol], lower_bound = 0.0)
    slack = @variable(model, lower_bound = 0.0)
    @constraint(model, sum(x) == 1)
    for j in 1:ncol
        @constraint(model, sum(data[i, j] * x[i] for i = 1:nvar) <= y[j] + slack)
    end
    @objective(model, Min, slack + 200 * sum(y) / ncol)
    return model
end

@time dense_lp2(2000, 2000);

On my machine this builds for Gurobi in 120 seconds, which seems reasonable.

julia> function dense_lp2(nvar, ncol)
           data = rand(nvar, ncol)
           model = direct_model(Gurobi.Optimizer())
           x = @variable(model, [1:nvar], lower_bound = 0.0)
           y = @variable(model, [1:ncol], lower_bound = 0.0)
           slack = @variable(model, lower_bound = 0.0)
           @constraint(model, sum(x) == 1)
           for j in 1:ncol
               @constraint(model, sum(data[i, j] * x[i] for i = 1:nvar) <= y[j] + slack)
           end
           @objective(model, Min, slack + 200 * sum(y) / ncol)
           return model
       end
dense_lp2 (generic function with 1 method)

julia> @time dense_lp2(2000, 2000);
Set parameter Username
Academic license - for non-commercial use only - expires 2022-06-19
  0.638469 seconds (90.22 k allocations: 372.590 MiB, 9.64% gc time)

julia> @time dense_lp2(2000, 20000);
Set parameter Username
Academic license - for non-commercial use only - expires 2022-06-19
  6.618951 seconds (846.25 k allocations: 3.634 GiB, 11.40% gc time)

julia> @time dense_lp2(20000, 20000);
Set parameter Username
Academic license - for non-commercial use only - expires 2022-06-19
124.733077 seconds (1.18 M allocations: 49.292 GiB, 4.27% gc time)

@time dense_lp2(20000, 20000) takes 245.955302 seconds on a second run, so significantly faster than before, but still slower than the run time you get. Is the speed-up coming from using anonymous variables?

I ran the same code, but added the following two attributes:

set_optimizer_attribute(model, "MOI_SOLVE_MODE", "b")
set_optimizer_attribute(model, "PRESOLVE", 0)

and made a call to optimize!(model). The code ran fine, but then I got the following error message:

Barrier method finished in 3005 seconds
Crossover starts
   Its         Obj Value      S   Ninf  Nneg        Sum Inf  Time
     0           .500030      P      1     0        .000000  3982
Problem is unfinished
ERROR: XpressError(32): Subroutine not completed successfully, possibly due to invalid argument. Unable to call `Xpress.lpoptimize`:

909 Error: Number of elements (2399810000) exceeds the limit (1073741823) for factorization.

I am not sure if this is anything to do with Xpress.jl or the internals of the solver.

Number of elements (2399810000) exceeds the limit (1073741823)

julia> div(typemax(Cint), 2)
1073741823

julia> 2399810000 > div(typemax(Cint), 2)
true

So Xpress runs out of space to store the coefficients. There is a 64-bit API, but I don’t think it’s hooked up, because to date there hasn’t been a request for it.

What’s the rationale for limiting JuMP to Int32? If solvers support 64bit surely that must be for a reason

It has nothing to do with JuMP, this is a design choice in Xpress.jl.


It’s the opposite, actually: Xpress originally did not support 64-bit integers (floats are always 64-bit in Xpress). The 64-bit API is very recent compared to the 32-bit API, and there are warnings in the manual saying that a problem of that size might not even be solvable.

Your case is a bit pathological because you have a 100% dense problem.
I have solved fairly big problems that took days with Xpress (hundreds of millions of variables, constraints, and non-zeros) and never needed to use the 64-bit API.

If you have such a dense problem most solvers might struggle to solve it.

If it is just a performance test for MPS writing, and the actual problem is sparse, as most real-world problems are, then I’d suggest changing the performance test, since it won’t be measuring the real thing.

If the actual problem you are trying to handle is as dense as this test, and you are 100% sure that such a dense formulation is the way to go, I can help guide you on a PR to support the 64bit API.

P.S. Given the error you got, it does not seem to be an issue that requires the 64-bit API for data input.
It seems the Xpress internals simply won’t handle that many non-zeros.

I would even say that this is not even an issue with a design choice in Xpress.jl, but a design choice in the Xpress solver.

I am not sure if a dense formulation is the way to go. Probably delayed column/constraint generation should be used for problems of this size, but I don’t know how to use these methods.
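For reference, a delayed-constraint-generation (row-generation) loop in JuMP typically looks like the sketch below. This is not code from this thread: the function name, the tolerance, and the violation rule are all my own illustrative choices, and `optimizer` stands for whatever solver constructor is available:

```julia
using JuMP

# Delayed constraint generation for the dense LP in this thread:
# start with no coupling constraints, solve, add only the rows the
# current solution violates, and repeat until nothing is violated.
function solve_with_row_generation(data, optimizer; tol = 1e-6)
    ncol, nvar = size(data)
    model = Model(optimizer)
    @variable(model, x[1:nvar] >= 0)
    @variable(model, y[1:ncol] >= 0)
    @variable(model, slack >= 0)
    @constraint(model, sum(x) == 1)
    @objective(model, Min, slack + 200 / ncol * sum(y))
    added = falses(ncol)
    while true
        optimize!(model)
        xv, yv, sv = value.(x), value.(y), value(slack)
        # Rows whose constraint  slack + y[j] >= data[j, :] ⋅ x  is violated:
        violated = findall(j -> !added[j] && data[j, :]' * xv > sv + yv[j] + tol, 1:ncol)
        isempty(violated) && break
        for j in violated
            @constraint(model, slack + y[j] >= sum(data[j, i] * x[i] for i in 1:nvar))
            added[j] = true
        end
    end
    return model
end
```

Whether this helps depends on how many rows end up binding at the optimum; if most of the 20000 constraints are active, the loop will eventually add nearly all of them anyway.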

I was able to solve the problem with the parallel simplex method by first writing the MPS file with JuMP and then using the Xpress solver command line. When I tried to solve it with the barrier method via Xpress.jl in JuMP, I got this error; the solver did several iterations before reporting it.

Here’s the section in the Xpress documentation on 64-bit models:
