JuMP/MOI performance overhead vs Xpress API

Thanks for re-running the test, Daniel. A couple of comments:

  • @marie you should read the JuMP performance tips (Performance tips · JuMP), and you should look into PackageCompiler to reduce the startup latency (there’s a sketch after this list).
  • Aside from the compilation latency (which, yes, is a problem), we’re actually doing pretty well, considering the overhead of parsing the problem in Julia, constructing a JuMP model, and then copying it into the solver, compared to reading the file directly into the solver. There’s probably some room for improvement in the readers, but they haven’t been a high priority, because a key feature of JuMP is that you can create a problem in JuMP and pass it to the solver without going through file I/O.
  • There’s a 5x difference between Xpress direct and Julia without bridges, compared to a 1.5x difference with HiGHS. That suggests there is room for improvement in Xpress.jl. But so far Xpress.jl has been written through the volunteer efforts of the community, and it is not officially supported by FICO. I try to make sure the open-source solvers are well maintained and efficient, but that isn’t the case for the commercial solvers (NumFOCUS signs agreement with MIT to provide ongoing maintenance and support | JuMP). If you want to reproduce the bridged/unbridged/direct comparison yourself, see the second sketch below.
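On the PackageCompiler point, here is a rough sketch of building a custom sysimage. It is untested; the file names `precompile.jl` and `jump_sysimage.so` are placeholders, and HiGHS stands in for Xpress because it is freely installable:

```julia
# precompile.jl: a warm-up script that exercises the code paths we
# want compiled ahead of time into the sysimage.
using JuMP, HiGHS
model = Model(HiGHS.Optimizer)
@variable(model, x >= 0)
@constraint(model, 2x >= 1)
@objective(model, Min, x)
optimize!(model)
```

```julia
# Build the sysimage, then launch Julia with
#   julia --sysimage jump_sysimage.so
# to skip most of the first-call compilation latency.
using PackageCompiler
create_sysimage(
    [:JuMP, :HiGHS];
    sysimage_path = "jump_sysimage.so",
    precompile_execution_file = "precompile.jl",
)
```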
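And to make the bridge and copy overhead concrete, here is a hedged sketch of the three configurations being compared, again with HiGHS standing in for Xpress and with `model.mps` as a placeholder file:

```julia
using JuMP, HiGHS

# 1. Default: JuMP reads into a cache, applies bridges, and copies
#    the whole problem into the solver when optimize! is called.
bridged = read_from_file("model.mps")
set_optimizer(bridged, HiGHS.Optimizer)
@time optimize!(bridged)

# 2. Without bridges: skips the bridging layer entirely, so any
#    constraint type the solver doesn't support will error instead
#    of being reformulated.
unbridged = read_from_file("model.mps")
set_optimizer(unbridged, HiGHS.Optimizer; add_bridges = false)
@time optimize!(unbridged)

# 3. Direct mode: no cache and no copy; each @variable/@constraint
#    call writes straight into the solver. read_from_file cannot
#    target a direct model, so the problem is built in code here.
direct = direct_model(HiGHS.Optimizer())
@variable(direct, x >= 0)
@objective(direct, Min, x)
@time optimize!(direct)
```

Remember to run each `@time` twice (or use BenchmarkTools) so you measure steady-state time rather than compilation.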