Gurobi not using barrier method


I have a model (the LP mentioned in my earlier post) for which, when solved from Julia, the logs indicate that Gurobi uses the primal simplex algorithm, but when I dump it to MPS and solve it from the command line, gurobi_cl uses the barrier algorithm (and solves about 3 times faster).

I’m not aware of setting any solver attributes that might influence its choice of method (it’s just M = Model(Gurobi.Optimizer)), but I might be doing something else that causes this?

As far as I can tell from searching Gurobi.jl, there’s no code in there that sets Method, so I’m not sure where it’s happening. Is there an undocumented get_solver_attribute equivalent to set_solver_attribute that might allow me to track Method and see if it’s getting changed?

Thanks again,

We don’t change the default, so I guess Gurobi chooses a different method via the command line than via the C API.

I would just set the method manually if there’s that much of a difference.


Here’s the attribute to set:
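Presumably Gurobi’s Method parameter, set from JuMP along these lines (a minimal sketch; the codes in the comment are from Gurobi’s parameter documentation):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
# Gurobi's "Method" parameter: -1 = automatic (default), 0 = primal simplex,
# 1 = dual simplex, 2 = barrier.
set_optimizer_attribute(model, "Method", 2)  # force barrier
```

Running this requires a Gurobi license, so it is only a sketch here.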


As far as I can tell, the problem is that I’m solving the same problem more than once (sequential objectives):

  • Setting Method to -1 (let Gurobi choose) before every call to optimize! uses barrier then dual simplex on the first solve, but on subsequent calls to optimize!, the -1 makes no difference and it starts with dual simplex.
  • Setting Method to 2 (barrier) before every call to optimize! sort of gets what I want every time (the barrier method), but I’d really rather it chose from scratch each time.

Not sure I understand the problem.

  • You’re solving a problem, modifying the objective, and then resolving?
  • With default settings, Gurobi uses barrier on the first solve, and then dual simplex on the second?
  • You’d prefer barrier-barrier?
  • Is the second solution close to the first? If so, you might be better forcing barrier then primal simplex. That way, Gurobi will warm-start with a primal feasible basis for the second solve.
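The barrier-then-primal-simplex suggestion might look like this on a toy model (a sketch, again assuming Gurobi’s Method codes: 2 = barrier, 0 = primal simplex):

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
@variable(model, x[1:2] >= 0)
@constraint(model, x[1] + x[2] <= 10)
@objective(model, Max, x[1])
set_optimizer_attribute(model, "Method", 2)  # barrier for the cold solve
optimize!(model)
@objective(model, Max, x[2])                 # modify the objective
set_optimizer_attribute(model, "Method", 0)  # primal simplex can warm-start
optimize!(model)                             # from the crossover basis
```

As with the snippet above, this needs a Gurobi license to run.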

I think this is a case of “the defaults aren’t optimal,” so you’re stuck doing some manual poking of the Method parameter. You might want to open a support ticket with Gurobi. I’m sure they’d be interested in this sort of thing.


That’s a pretty good summary :) And yep, I’m talking to Gurobi about this (the “why is it not using barrier?” question came from them).

So, at the moment:

  • We’re doing a sequential-objective-type solve: what’s the most you can do of X, and then what’s the most you can do of Y while maintaining that much of X?
  • the best option I’ve found is forcing barrier for each solve
  • I would expect the solutions are close enough that warm starts would help (keeping in mind this is an LP), but there are hiccups I haven’t got my head around yet
  • at the moment, I chuck in a constraint to maintain the objective found in the previous solve, but when I get some time, I’ll try reduced-cost fixing.
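That sequential-objective pattern with an objective-maintaining constraint might look like this (a sketch on a toy model, using GLPK so it isn’t Gurobi-specific; the 1e-4 tolerance is an assumption):

```julia
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, x[1:2] >= 0)
@constraint(model, x[1] + x[2] <= 10)

# First solve: how much X can we get?
@objective(model, Max, x[1])
optimize!(model)
best_x = objective_value(model)

# Maintain (almost) that much X, with a small tolerance, then maximise Y.
@constraint(model, x[1] >= best_x - 1e-4)
@objective(model, Max, x[2])
optimize!(model)
```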

With a little bit of hacking, you can expose Gurobi’s multi-objective support directly:

using JuMP, Gurobi
model = direct_model(Gurobi.Optimizer())
@variable(model, x[1:2] >= 0)
@constraint(model, sum(x) >= 1)
@constraint(model, x[1] + 2x[2] >= 1.5)
@constraint(model, x[2] >= 0.25)
@expression(model, f1, 2x[1] + x[2])
@expression(model, f2, x[1] + 3x[2])
@objective(model, Min, f1)
grb = backend(model)
MOI.set(grb, Gurobi.MultiObjectivePriority(1), 1)
MOI.set(grb, Gurobi.MultiObjectivePriority(2), 2)

The second barrier solve is likely more efficient because you’re adding in the objective constraint. Are you adding it exactly or with a small tolerance? You probably want to add @constraint(model, c'x <= opt + 1e-4) or similar.

As far as I know, Gurobi doesn’t warm-start the barrier, but it uses a basis warm-start for simplex. However, adding a constraint and modifying the objective simultaneously will invalidate the primal and dual bases.


Thanks! I was thinking about asking you if there was support for that stuff in JuMP.

At the moment, though, I’m trying to stay away from being Gurobi-specific - between users, developers, and CI, we don’t have many tokens, and so we need everything to run with GLPK as well.

We do add the constraint with a tolerance - and at the moment I’d like to learn how to (or if you can) do reduced-cost fixing with a tolerance.


I don’t think there is any explicit support for reduced-cost fixing in JuMP or Gurobi (may be wrong on the latter), but you can use reduced_cost(x) and fix(x, value(x); force = true) to do this (note the modify-query problem, so you’ll want to query all the information first, then do the fixing second):
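A sketch of that query-then-fix pattern on a toy model (GLPK again; the tol threshold, and the rule of fixing variables whose reduced cost exceeds the allowed objective degradation, are assumptions, not a tested implementation):

```julia
using JuMP, GLPK

model = Model(GLPK.Optimizer)
@variable(model, 0 <= x[1:3] <= 1)
@constraint(model, sum(x) <= 1.5)
@objective(model, Max, x[1] + 2x[2] + 3x[3])
optimize!(model)

tol = 1e-4  # allowed objective degradation (an assumption)

# Query everything first (the modify-query problem), ...
rc = reduced_cost.(x)
xv = value.(x)

# ... then do the fixing second: a variable at a bound whose reduced cost
# exceeds `tol` can't move without costing more than `tol` in the objective.
for i in eachindex(x)
    if abs(rc[i]) > tol
        fix(x[i], xv[i]; force = true)
    end
end
```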

And adding support for multi-objective stuff in JuMP is on my TODO list for after the JuMP 1.0 release.