API documentation for Gurobi.jl

I’ve been trying to solve an ILP in Python where the primary bottleneck appears to be model construction. Multiple sources state that model construction is heavily language-dependent (whereas the optimization itself is language-independent), so I’m trying to see if I can re-implement the same ILP in Julia.

Trouble is, I can’t seem to find any API documentation. Just translating the Python ILP (where it works) into Julia seems to be a huge hassle. Would someone be able to point me in the right direction? E.g.:

import copy

import gurobipy as gp
from gurobipy import GRB

I = 30
N = 50
J = ...   # not defined in this snippet
K = ...   # not defined in this snippet
T1 = ...  # threshold, not defined in this snippet
penalty = 1000

model = gp.Model("STT_rnd1")
r_i_j = model.addVars(I, J, vtype=GRB.BINARY, name="r_i_j")
y_i_k = model.addVars(I, K, vtype=GRB.BINARY, name="y_i_k")
z_i_j_k = model.addVars(I, K, vtype=GRB.INTEGER, name="z_i_j_k")
s_j_k = copy.deepcopy(dense_adj_mat)  # a NumPy matrix

for i in range(I):
    for k in range(K):
        model.addConstr(
            gp.quicksum(r_i_j[i, j] * y_i_k[i, k] * s_j_k[j, k] for j in range(J))
            >= T1 * y_i_k[i, k]
        )

is something I have tested already in Python, but I can’t get past instantiating some of these parameters in Julia:

model = Model(Gurobi.Optimizer)
Gurobi.add_vars!(model, 0.0, I, J, Gurobi.GRB_BINARY, "r_i_j")

throwing the following error:

MethodError: no method matching add_vars!(::Model, ::Float64, ::Int64, ::Int64, ::Int8, ::String)
Closest candidates are:
add_vars!(::Gurobi.Model, ::Union{Char, Int8, Array{Char,1}, Array{Int8,1}}, ::Array{T,1} where T, ::Union{Array{T,1}, T} where T<:Real, ::Union{Array{T,1}, T} where T<:Real) at C:\Users\b2jia\.julia\packages\Gurobi\VhpiN\src\grb_vars.jl:111

Everything you need should be in the JuMP documentation. Your model does not seem to use any Gurobi-specific features.

There are two ways to use Gurobi from Julia, and you are mixing the two:

model = Model(Gurobi.Optimizer)
Gurobi.add_vars!(model, 0.0, I, J, Gurobi.GRB_BINARY, "r_i_j")

The first line is using Gurobi through JuMP, which is the recommended way. Then you can create the binary variables with

@variable(model, r[1:I, 1:J], Bin)

as detailed in the docs.
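For reference, here is a sketch of how the whole Python snippet might translate to JuMP. The sizes J and K, the threshold T1, and the matrix s are placeholder assumptions standing in for your actual data:

```julia
using JuMP, Gurobi

I, J, K = 30, 50, 50   # placeholder sizes
T1 = 5                 # placeholder threshold
s = rand(0:1, J, K)    # stand-in for dense_adj_mat

model = Model(Gurobi.Optimizer)
@variable(model, r[1:I, 1:J], Bin)
@variable(model, y[1:I, 1:K], Bin)
@variable(model, z[1:I, 1:K], Int)

# The product of two binary variables makes this a quadratic
# constraint, which Gurobi accepts.
@constraint(model, [i in 1:I, k in 1:K],
    sum(r[i, j] * y[i, k] * s[j, k] for j in 1:J) >= T1 * y[i, k])
```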

The second line uses the Julia wrapper of the Gurobi C interface. If you want to use that interface without JuMP, then you should not call Model(Gurobi.Optimizer), as that creates a JuMP model. You should not use anything from JuMP in that case; Gurobi.jl simply wraps the C interface as-is (the Julia interface is even generated automatically with Clang.jl), so check the C-interface documentation on the Gurobi website.
Again, I would recommend using the JuMP interface directly. It is much easier, and it will let you switch to another solver without any change to your code (except for the Model(Gurobi.Optimizer) line, of course).
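Concretely, switching solvers is then a one-line change, provided the other solver supports your constraint types (HiGHS is just an illustrative example here):

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)  # instead of Model(Gurobi.Optimizer)
```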


I see that I conflated the two (JuMP, Gurobi). Thank you for pointing this out.

Note that the Gurobi Python wrapper is pretty efficient. You should not expect massive speed gains just from rewriting the model in JuMP.

It is worth spending some time understanding why model construction is the bottleneck; that is uncommon, particularly when you are solving MILPs. Are you solving this problem multiple times with different data? If so, try modifying an existing model rather than building a new one for every solve.
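That modify-in-place pattern can be sketched on a toy JuMP model (set_normalized_rhs and set_objective_coefficient are standard JuMP functions for this):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
@variable(model, x >= 0)
@constraint(model, c, 2x <= 10)
@objective(model, Max, x)
optimize!(model)

# Re-solve with new data by editing the model in place
# instead of rebuilding it from scratch:
set_normalized_rhs(c, 20.0)               # new right-hand side for c
set_objective_coefficient(model, x, 3.0)  # new objective coefficient
optimize!(model)
```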
