Trying to use `GRBreadmodel`

I’m trying something like

using JuMP
import Gurobi

grb_model = Gurobi.Optimizer()
Gurobi.GRBreadmodel(
    grb_model.env,
    "foo.lp",
    Ptr{Gurobi.GRBmodel}(pointer_from_objref(grb_model))
)

model = direct_model(grb_model)

I can call `optimize!(model)` on that (which works), but the JuMP model itself (obviously?) remains empty (the `obj` comes from the “name” in the LP file, which seems to be properly set as `ModelName`):

obj
Feasibility problem with:
Variables: 0
Model mode: DIRECT
Solver name: Gurobi

The optimizer shows the loaded problem:

julia> backend(model)
    sense  : minimize
    number of variables             = 464326
    number of linear constraints    = 586972
    number of quadratic constraints = 0
    number of sos constraints       = 0
    number of non-zero coeffs       = 1490227
    number of non-zero qp objective terms  = 0
    number of non-zero qp constraint terms = 0

Is there any way to use GRBreadmodel and then “easily” reconstruct an “equivalent” JuMP model (without manually checking each potentially existing “thing”, like GRBgetvars, …)?

I kinda tried too many different ways but all of them failed… Thanks!


If that sounds weird: the actual use case would have been to use Gurobi as an efficient LP file reader. I know that sounds absurd, but I did not manage to make read_from_file work, and thought this would be a quick workaround before re-implementing some parts of it manually. (As far as a quick profiling revealed, the main culprit seems to be the string(...) call in LP.jl.)
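For reference, a minimal sketch of the kind of profiling run that points at the reader (the tiny LP written here is only to make the snippet self-contained; the allocation behaviour only becomes painful on files in the hundreds of MB, and the file name and contents are illustrative):

```julia
using Profile
import MathOptInterface as MOI

# Write a tiny LP file so the snippet runs standalone.
write("tiny.lp", "minimize\nobj: x + 2 y\nsubject to\nc1: x + y >= 1\nend\n")

model = MOI.FileFormats.Model(format = MOI.FileFormats.FORMAT_LP)
@profile MOI.read_from_file(model, "tiny.lp")
# Look for hot lines attributed to the LP reader (LP.jl) in the flat report.
Profile.print(format = :flat, sortedby = :count)
```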

No, you cannot use GRBreadmodel to populate a JuMP or Gurobi.jl model.

The reason is that you will populate the .inner field of the Gurobi.Optimizer struct, but none of the others: Gurobi.jl/src/MOI_wrapper/MOI_wrapper.jl at 360e11e0df9ad378bd6a6ed60090e5065f38d0c0 · jump-dev/Gurobi.jl · GitHub
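For completeness, staying entirely at the C level does work, as long as you never expect a JuMP/MOI view of the problem. A hedged, untested sketch (requires a Gurobi license; "foo.lp" is a placeholder and error-code handling is abbreviated):

```julia
import Gurobi

# Load an environment and read the model via the raw C API.
env_p = Ref{Ptr{Gurobi.GRBenv}}(C_NULL)
ret = Gurobi.GRBloadenv(env_p, C_NULL)
@assert ret == 0
model_p = Ref{Ptr{Gurobi.GRBmodel}}(C_NULL)
ret = Gurobi.GRBreadmodel(env_p[], "foo.lp", model_p)
@assert ret == 0

# Solve and query only through GRB* calls; no JuMP model exists here.
Gurobi.GRBoptimize(model_p[])

# Clean up.
Gurobi.GRBfreemodel(model_p[])
Gurobi.GRBfreeenv(env_p[])
```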

but I did not manage to make read_from_file work,

This is a bug then. Do you have a reproducible example?

using JuMP, Gurobi
model = read_from_file("model.lp")
set_optimizer(model, Gurobi.Optimizer)
optimize!(model)

Our LP reader could really do with a rewrite: [FileFormats.LP] write a proper parser · Issue #2351 · jump-dev/MathOptInterface.jl · GitHub. It started as something pretty basic and then accreted various features, but it doesn’t really follow best practice by defining a grammar, etc.
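To illustrate the kind of cost involved (an illustrative sketch, not the actual LP.jl code): materializing every token with string(...) allocates a fresh String per token, whereas the SubString views returned by split share the parent buffer:

```julia
# Illustrative only: per-token copies vs. views on one "LP-like" line.
line = "x1 + 2 x2 + 3 x3 <= 10"
views  = split(line)                 # Vector{SubString}: no copies of the data
copies = [string(t) for t in views]  # one new String allocation per token
@assert copies == views              # same content, different allocation cost
```

Scaled up to millions of tokens, the copying variant is roughly what a profiler flags as allocation-heavy.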

Thanks! Yeah I thought so, but had the small hope of some efficient “reconstruct” magic hidden somewhere.

I’m sorry! Bad wording in my initial post. It works, it just doesn’t “work” for my use case: loading large problems that were generated by something else (e.g., Pyomo, gurobipy, …). The mentioned line seems to result in a lot of (string) allocations, which makes it slow.

MWE given below:

  • The test.LP model comes from Calliope (v0.6.10; national scale tutorial model; 1-year period).
  • The test.MPS was created using gurobi_cl by calling gurobi_cl TimeLimit=0 ResultFile=test.MPS test.LP
  • I’ve uploaded the compressed files (WeTransfer: https://we.tl/t-sEvYoxkX8H) as .7z (uncompressed size ~ 194 + 317 MB); I can provide them in some other way if needed.

Note: To be honest, I know it’s not really important; converting every LP file to MPS makes it a lot faster, and the memory usage is reasonable. It just makes handling “large” models a bit of a pain, which is why I thought there might be an easy solution. Further, I know that “efficient” LP file reading is probably hard because there are so many weird variants of the format. The only reason I’m using it is that there may be use cases where a “previous step” can only provide an LP file and no other format…


Using Gurobi:

grb_model = Gurobi.Optimizer()
Gurobi.GRBreadmodel(
    grb_model.env,
    "test.LP",
    pointer_from_objref(grb_model)
)

# Read LP format model from file test.LP
# Reading time = 1.69 seconds
# obj: 586972 rows, 464326 columns, 1490227 nonzeros

grb_model = Gurobi.Optimizer()
Gurobi.GRBreadmodel(
    grb_model.env,
    "test.MPS",
    pointer_from_objref(grb_model)
)

# Read MPS format model from file test.MPS
# Reading time = 0.81 seconds
# obj: 586972 rows, 464326 columns, 1490227 nonzeros

Making sure Gurobi actually reads variable names and not just constructs the coefficients:

varname = Ref{Cstring}()
Gurobi.GRBgetstrattrelement(
    grb_model,
    "VarName",
    1,
    pointer_from_objref(varname)
)
unsafe_string(varname[])

# "cost(monetary__region1_1__free_transmission_region1_)"

The same with read_from_file:

JuMP.read_from_file(
    "test.LP",
    format=JuMP.MOI.FileFormats.FORMAT_LP
)

# 23.481682 seconds (51.20 M allocations: 251.653 GiB, 10.56% gc time)

JuMP.read_from_file(
    "test.MPS",
    format=JuMP.MOI.FileFormats.FORMAT_MPS
)

# 6.243054 seconds (32.98 M allocations: 3.027 GiB, 29.22% gc time)

Is 20 seconds the bottleneck? Or are there larger models?

We can probably make improvements on the memory front.


No, that’s fine, it’s more about the memory usage, because:

Yes, the “tutorial model” that I used was just the only one I had readily available that I can also share. Roughly 50× that size would be a “large” model. Unfortunately, on memory-constrained PCs, reading the files scales badly even with smaller models.

Not sure if it’s worth the effort / priority right now, since I assume not a lot of people would even encounter this problem. I’ll stick to pre-converting to MPS for now :smiley: (I just wanted to check whether there was an easy way that I had overlooked.)
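For anyone landing here later: the pre-conversion can also be done without a Gurobi license, using only MOI’s FileFormats models. A hedged sketch (the function name `lp_to_mps` is mine, not a library API, and note this still pays the slow LP read once; the gurobi_cl route above is faster but needs a Gurobi installation):

```julia
import MathOptInterface as MOI

# One-off LP -> MPS conversion via MOI's file-format models (no solver needed).
function lp_to_mps(lp_file::String, mps_file::String)
    src = MOI.FileFormats.Model(format = MOI.FileFormats.FORMAT_LP)
    MOI.read_from_file(src, lp_file)          # slow LP read happens here, once
    dest = MOI.FileFormats.Model(format = MOI.FileFormats.FORMAT_MPS)
    MOI.copy_to(dest, src)                    # transfer variables/constraints
    MOI.write_to_file(dest, mps_file)
    return mps_file
end
```

Subsequent runs can then read the MPS file, which the thread’s timings show is much cheaper.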