JuMP/MOI performance overhead vs Xpress API

Solving exactly the same LP problem through the Xpress API directly is much faster than through JuMP/MOI: 2 secs vs 9 secs for a simple case, and 452 secs vs 1796 secs for a more complex case. Is this overhead a known issue? Is there a way to optimize performance with the JuMP interface?
Calling the Xpress API directly:
```julia
prob = Xpress.XpressProblem()
Xpress.readprob(prob, probPath, "")
Xpress.lpoptimize(prob, "")
```
Using JuMP/MOI:
```julia
model = read_from_file(probPath)
set_optimizer(model, () -> Xpress.Optimizer(DEFAULTALG = 2, PRESOLVE = 1); add_bridges = false)
set_optimizer_attribute(model, "OUTPUTLOG", 0)  # turn off solver log output
optimize!(model)
```

There will be some overhead.

1 - The JuMP reader is not optimized; the ideal input is a JuMP model.
2 - The second option requires compiling much more code.
3 - You are not using the same attributes (the default value for DEFAULTALG is 1).

You should compare the solve time reported in the output log files. This number should be very similar in both cases (maybe not identical, due to constraint ordering).
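
If you prefer not to parse the log, here is a minimal sketch of querying the in-solver time programmatically; `solve_time` is JuMP's accessor for the time reported by the solver itself:

```julia
using JuMP, Xpress

model = read_from_file(probPath)  # probPath as in the snippet above
set_optimizer(model, Xpress.Optimizer)
optimize!(model)

# Time spent inside the solver, excluding JuMP/MOI overhead:
println("solver time: ", solve_time(model), " seconds")
```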

Hard to say much more without the files.

Probably best to continue with concrete examples in an issue at https://github.com/jump-dev/Xpress.jl/issues.

Thank you for your response. I’ve checked the log file, and the algorithm used is “dual simplex”, exactly the same as the one used with the Xpress API:
| Value | Algorithm |
|---|---|
| 1 | Automatically determined. |
| 2 | Dual simplex. |
| 3 | Primal simplex. |
| 4 | Newton barrier. |

How are you timing this? Is the 452 vs 1796 the runtime of the solver, or everything, including JuMP reading the problem and then creating it in Xpress?

If it’s the former, this might be due to a different ordering of variables and constraints. Our file interfaces do not preserve ordering from the file to the solver.

If it’s the latter, this is expected. JuMP has to read the problem from the file, copy it into JuMP, and then copy it into Xpress, so there are three versions of the problem instead of one when you use the direct API.
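
To see where the time goes, you can time each stage separately. A sketch (the filename is a placeholder; note that the copy into Xpress happens inside the first `optimize!` call):

```julia
using JuMP, Xpress

@time model = read_from_file("model.mps")     # version 1: file -> JuMP
@time set_optimizer(model, Xpress.Optimizer)  # attach the solver (cheap)
@time optimize!(model)                        # copy into Xpress, then solve
```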

Thank you, Oscar, it is the latter.

Is it an MPS or LP file? Can you share the example?

It’s an LP file in my case. FICO developers then tried the same with a MIP, and they also confirmed the overhead. This is our client’s problem/model, so it cannot be shared. It’s quite large, but we have to use it to benchmark performance. I bet that with a smaller LP the time difference won’t be that drastic.

For the sake of completeness, here is the code that I used to time things:

```julia
using Xpress, JuMP, Dates

function julia(filename)
  model = read_from_file(filename)
  set_optimizer(model, () -> Xpress.Optimizer(logfile = "lp.log"))
  println("Julia start:  ", Dates.format(now(), "HH:MM:SS"))
  optimize!(model)
  println("Julia end:    ", Dates.format(now(), "HH:MM:SS"))
end;

function julia_unbridged(filename)
  model = read_from_file(filename)
  set_optimizer(model, () -> Xpress.Optimizer(logfile = "lp.log"); add_bridges = false)
  println("Julia unbridged start:  ", Dates.format(now(), "HH:MM:SS"))
  optimize!(model)
  println("Julia unbridged end:    ", Dates.format(now(), "HH:MM:SS"))
end;

function direct(filename)
  prob = Xpress.XpressProblem()
  Xpress.readprob(prob, filename, "")  # use the argument, not a hard-coded path
  println("Xpress start: ", Dates.format(now(), "HH:MM:SS"))
  Xpress.lpoptimize(prob, "")
  println("Xpress end:   ", Dates.format(now(), "HH:MM:SS"))
end;

@time julia("afiro.mps")
@time julia_unbridged("afiro.mps")
@time direct("afiro.mps")
```

With the model quoted below it prints:

```
Julia start:  16:45:04
Julia end:    16:45:09
  9.948826 seconds (26.37 M allocations: 1.465 GiB, 7.16% gc time, 99.33% compilation time)
Julia unbridged start:  16:45:09
Julia unbridged end:    16:45:09
  0.408376 seconds (356.03 k allocations: 18.724 MiB, 5.57% gc time, 98.99% compilation time)
Xpress start: 16:45:09
Xpress end:   16:45:09
  0.007924 seconds (90 allocations: 4.016 KiB)
```

We can see that not using bridges speeds things up, but there is still some overhead. I am not sure we can get rid of that; it may just be the price to pay for using a framework on top of the solver.

The time spent in the solver itself is always the same.

Model:

```
NAME          MODEL
ROWS
 N  __OBJ___
 E  R09     
 E  R10     
 L  X05     
 L  X21     
 E  R12     
 E  R13     
 L  X17     
 L  X18     
 L  X19     
 L  X20     
 E  R19     
 E  R20     
 L  X27     
 L  X44     
 E  R22     
 E  R23     
 L  X40     
 L  X41     
 L  X42     
 L  X43     
 L  X45     
 L  X46     
 L  X47     
 L  X48     
 L  X49     
 L  X50     
 L  X51     
COLUMNS
    X01       X48       0.301
    X01       R09       -1
    X01       R10       -1.06
    X01       X05       1
    X02       __OBJ___  -0.4
    X02       X21       -1
    X02       R09       1
    X03       X46       -1
    X03       R09       1
    X04       X50       1
    X04       R10       1
    X06       X49       0.301
    X06       R12       -1
    X06       R13       -1.06
    X06       X17       1
    X07       X49       0.313
    X07       R12       -1
    X07       R13       -1.06
    X07       X18       1
    X08       X49       0.313
    X08       R12       -1
    X08       R13       -0.96
    X08       X19       1
    X09       X49       0.326
    X09       R12       -1
    X09       R13       -0.86
    X09       X20       1
    X10       X45       2.364
    X10       X17       -1
    X11       X45       2.386
    X11       X18       -1
    X12       X45       2.408
    X12       X19       -1
    X13       X45       2.429
    X13       X20       -1
    X14       __OBJ___  -0.32
    X14       X21       1.4
    X14       R12       1
    X15       X47       -1
    X15       R12       1
    X16       X51       1
    X16       R13       1
    X22       X46       0.109
    X22       R19       -1
    X22       R20       -0.43
    X22       X27       1
    X23       __OBJ___  -0.6
    X23       X44       -1
    X23       R19       1
    X24       X48       -1
    X24       R19       1
    X25       X45       -1
    X25       R19       1
    X26       X50       1
    X26       R20       1
    X28       X47       0.109
    X28       R22       -0.43
    X28       R23       1
    X28       X40       1
    X29       X47       0.108
    X29       R22       -0.43
    X29       R23       1
    X29       X41       1
    X30       X47       0.108
    X30       R22       -0.39
    X30       R23       1
    X30       X42       1
    X31       X47       0.107
    X31       R22       -0.37
    X31       R23       1
    X31       X43       1
    X32       X45       2.191
    X32       X40       -1
    X33       X45       2.219
    X33       X41       -1
    X34       X45       2.249
    X34       X42       -1
    X35       X45       2.279
    X35       X43       -1
    X36       __OBJ___  -0.48
    X36       X44       1.4
    X36       R23       -1
    X37       X49       -1
    X37       R23       1
    X38       X51       1
    X38       R22       1
    X39       __OBJ___  10
    X39       R23       1
RHS
    B         X05       80
    B         X17       80
    B         X27       500
    B         R23       44
    B         X40       500
    B         X50       310
    B         X51       300
ENDATA
```

What happens if you run each function twice?

Also: what is the application? Who is generating the MPS file? And why use JuMP if it’s just to read an MPS file?

For a comparison, here is HiGHS (I don’t have Xpress):

```julia
julia> using JuMP, HiGHS

julia> function julia(filename)
         model = read_from_file(filename)
         set_optimizer(model, HiGHS.Optimizer)
         set_silent(model)
         optimize!(model)
       end
julia (generic function with 1 method)

julia> function julia_unbridged(filename)
         model = read_from_file(filename)
         set_optimizer(model, HiGHS.Optimizer; add_bridges=false)
         set_silent(model)
         optimize!(model)
       end
julia_unbridged (generic function with 1 method)

julia> function direct(filename)
         model = Highs_create()
         Highs_readModel(model, filename)
         Highs_setBoolOptionValue(model, "output_flag", false)
         Highs_run(model)
         Highs_destroy(model)
       end
direct (generic function with 1 method)

julia> @time julia("model.mps")
  8.906542 seconds (24.55 M allocations: 1.457 GiB, 4.80% gc time, 57.93% compilation time)

julia> @time julia_unbridged("model.mps")
  1.305653 seconds (1.90 M allocations: 113.475 MiB, 2.83% gc time, 2.24% compilation time)

julia> @time direct("model.mps")
  0.001125 seconds

julia> @time julia("model.mps")
  0.001708 seconds (3.74 k allocations: 305.234 KiB)

julia> @time julia_unbridged("model.mps")
  0.001502 seconds (2.87 k allocations: 266.609 KiB)

julia> @time direct("model.mps")
  0.001135 seconds
```

The first ~9 second run is dominated by Julia compilation. The second calls are much faster, and the overhead of reading the MPS file in Julia, copying it into JuMP, and then copying it into the solver is ~50%, which I think is pretty good.

The 9 second initial latency is bad, but it is a known problem that is actively being worked on.

If you are doing this in production and the 9 second penalty is problematic, you can use PackageCompiler (https://julialang.github.io/PackageCompiler.jl/stable/).
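
For example, a minimal sysimage sketch (the script and output names below are placeholders):

```julia
using PackageCompiler

# precompile.jl should exercise the code paths you care about,
# e.g. read a small MPS file with JuMP and solve it with Xpress.
create_sysimage(
    [:JuMP, :Xpress];
    sysimage_path = "jump_xpress.so",
    precompile_execution_file = "precompile.jl",
)
```

Then start Julia with `julia --sysimage jump_xpress.so`, and most of the compilation latency should disappear.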

Here are the results with Xpress and running things twice.

```
Julia:
 11.155115 seconds (26.31 M allocations: 1.461 GiB, 6.69% gc time, 99.35% compilation time)
  0.003539 seconds (5.30 k allocations: 348.781 KiB)
Julia without bridges:
  0.424023 seconds (367.86 k allocations: 19.386 MiB, 5.36% gc time, 98.96% compilation time)
  0.003167 seconds (4.72 k allocations: 333.766 KiB)
Xpress direct:
  0.010510 seconds (2 allocations: 48 bytes)
  0.000601 seconds (2 allocations: 48 bytes)
```

As with HiGHS, things go much faster in the second call.

> Also: what is the application? Who is generating the MPS file? And why use JuMP if it’s just to read an MPS file?

Only @Marie can answer this; I was only investigating the “Xpress angle” of this.

Thanks for re-running the test, Daniel. A couple of comments:

  • @marie you should read the Performance tips section of the JuMP documentation (see the `direct_model` sketch after this list), and you should look into PackageCompiler.
  • Aside from the compilation latency (which, yes, is a problem), we’re actually doing pretty well, considering the overhead of having to parse the problem in Julia, construct a JuMP model, and then copy it into the solver, compared with reading directly into the solver. There’s probably some room for improvement in the readers, but they haven’t been a high priority, because a key feature of JuMP is that you can create a problem in JuMP and pass it to the solver without going through file I/O.
  • There’s a 5x difference between Xpress direct and Julia without bridges, compared to a 1.5x difference with HiGHS. That suggests there is room for improvement in Xpress.jl. But so far Xpress.jl has been written by volunteer efforts of the community, and it is not officially supported by FICO. I try to make sure the open-source solvers are well maintained and efficient, but that isn’t the case for the commercial solvers (see “NumFOCUS signs agreement with MIT to provide ongoing maintenance and support” on jump.dev).
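
One of those performance tips is direct mode, which removes one of the three in-memory copies of the problem when you build the model in JuMP itself (a sketch; this does not combine with `read_from_file`):

```julia
using JuMP, Xpress

# Direct mode: JuMP talks straight to the underlying Xpress model,
# so there is no intermediate cached copy of the problem.
model = direct_model(Xpress.Optimizer())
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + y >= 1)
@objective(model, Min, 2x + 3y)
optimize!(model)
```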

Thank you, Oscar!!!
