One stop shop for benchmark optimization problems?


Wouldn’t it be nice to have a single Julia package hosting benchmark optimization problems for the different classes of optimization, to make it easier for solver developers to test and benchmark their solvers? The problem is that there are too many benchmark sets, e.g. Netlib, MIPLIB, CUTEst, SDPLib, CSPLib, MacMINLP, GAMS World, COCONUT, mintOC, POLIP, CBLIB, and CEC. These problems come in all sorts of file formats, e.g. gms, ams, mod, and mps, to name a few. So I wonder if parsers have been written for some of these formats to, say, read the optimization problems into a JuMP model, so that a solver developer can operate on the JuMP model struct directly without having to worry about where it came from. Of course, it would be nearly insane to try to rewrite all these problems from scratch in JuMP syntax, because there are too many of them, and some are based on big data files.

So I wonder what the optimization solver developers in this forum think: is this doable, pointless, on the agenda, or perhaps already partially done? Any advice or pointers are appreciated.


Why not write parsers for each of these formats into native Julia/JuMP code?

For example, here is a preliminary workup of a gms parser,
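To make the idea concrete, here is a hypothetical, minimal sketch (not the parser above — the snippet, the regex, and the JuMP 1.x syntax are all my assumptions) of pulling plain `Variables` declarations out of a GAMS-style snippet and registering them in a JuMP model:

```julia
using JuMP

gms = """
Variables x1, x2, obj;
Positive Variables x1, x2;
"""

# Crude line-by-line tokenization of plain `Variables` declarations;
# a real parser would need a proper grammar for the full gms language.
decls = String[]
for line in split(gms, '\n')
    m = match(r"^\s*Variables\s+(.*);", line)
    m === nothing && continue
    append!(decls, strip.(split(m.captures[1], ',')))
end

model = Model()
# Anonymous JuMP variables keyed by their gms names.
vars = Dict(name => @variable(model, base_name = name) for name in decls)
```

The `Positive Variables` line is deliberately skipped here; a fuller sketch would record it and set lower bounds on the matching entries of `vars`.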


For the “vOptLib: Library of numerical instances for MultiObjective Linear Optimization problems”, several parsers in Julia are provided by the vOptSpecific API (part of the vOpt research project).


Yes, I wanted to write one if I needed to, but I was checking what’s available first. toJuMP looks pretty good; I will try it out, thanks.


This is pretty good, both the solver and the parser. Does it only support MOP files?


I think this could take a big effort, so you should plan beforehand how to get a publication out of it as a good motivator, and see if others want in.


True, but the most annoying part is writing the parsers.


Yes, only MOP files.


the solver developer can operate on the JuMP model struct directly without having to worry about where it came from

JuMP models were never designed to be canonical, permanent representations of optimization problems. We strongly encourage solver authors to implement their solvers, when possible, at the MathProgBase (and soon, MathOptInterface) level, which abstracts away the internal details of JuMP and makes it much easier to source a problem from a different file format or modeling interface. For example, we have demos of calling an MPB solver from AMPL via NL files and from CVXPY via a C interface, and I have a package to convert between CBLIB and MPB. As a solver author, you stand to benefit if others can access your solver without needing to rewrite their models in JuMP format. The only good reason to tie a solver to a JuMP model is when you exploit some higher-level structure that is not encodable through MPB/MOI.
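As a hedged illustration of what the MPB level looks like (Clp is just one example MPB-compliant LP solver, and this uses the pre-MOI MathProgBase API), here is a small LP fed to a solver with no JuMP model in sight — a parser could supply `A`, `c`, and the bounds from any file format:

```julia
using MathProgBase
using Clp  # example solver; any MPB-compliant LP solver would do

# min  c'x   s.t.  lb <= A*x <= ub,  l <= x <= u
c  = [1.0, 2.0]
A  = [1.0 1.0]            # one constraint: x1 + x2 >= 1
lb = [1.0]; ub = [Inf]
l  = [0.0, 0.0]; u = [Inf, Inf]

m = MathProgBase.LinearQuadraticModel(ClpSolver())
MathProgBase.loadproblem!(m, A, l, u, c, lb, ub, :Min)
MathProgBase.optimize!(m)
x = MathProgBase.getsolution(m)
```

The same `loadproblem!` call works regardless of whether the data came from an MPS file, a CBLIB instance, or a hand-written matrix.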

I wonder if some parsers have been written for some of these formats to say read the optimization problems into a JuMP model

Direct translations of large benchmark instances into flat JuMP code are also technically problematic. If an instance has 100,000 constraints, any direct translator into JuMP would produce a function with 100,000 standalone @constraint or @NLconstraint lines, which will simply crash the Julia compiler. Storing such an instance in a file format created for this purpose is a much better idea.
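The data-driven alternative can be sketched as follows (the random instance data is purely illustrative, standing in for whatever a translator would read from the file): dump the instance into arrays and build the constraints in a single loop, so the compiler only ever sees one small method body:

```julia
using JuMP

n = 1_000                        # number of variables
# Each row: (variable indices, coefficients, right-hand side).
rows = [(rand(1:n, 5), randn(5), randn()) for _ in 1:100_000]

model = Model()
@variable(model, x[1:n])
for (idx, coef, b) in rows
    @constraint(model, sum(coef[k] * x[idx[k]] for k in eachindex(idx)) <= b)
end
```

This builds 100,000 constraints while compiling the loop body exactly once, instead of asking the compiler to digest 100,000 distinct statements.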


Thanks for your comment. So I guess for big problems the translator should talk directly to MPB. But many of the benchmark problems are not that big, so a toJuMP translator would also partly fill the gap by making the models accessible to MPB solvers. Also, as you said, for problems with some structure, JuMP-like packages might be better; I am thinking of geometric/signomial programs, for example, where the parser’s job would just be to extract the data and lay it out in arrays that can be looped over in a few JuMP-like macros. While connecting the solver to MPB is the way to go as a long-term solution, in the prototyping phase it may be a bit too much commitment.

I suppose most solver developers have been handpicking a number of benchmark and example problems on their own, occasionally writing parsers for specific file formats and hooking them up to libraries, but there is no one place that gathers all these efforts yet. So the main point of this post is to compile a bunch of these efforts in one place for a start, and to identify the main gaps, which can then be filled by whoever needs them to test their own solvers. Certainly no one has enough free time to tackle all the gaps at once, but it is good to start somewhere. Also, as more translators and native Julia collections start showing up, the need for organization will only grow, to avoid duplicating work and to allow focusing more on writing the solver itself.