About the optimization problem choice in JuMP v0.19

I hope this isn’t a dumb question…

With the previous version of JuMP, to define the kind of optimization problem (LP, MIP, …), the syntax was

Model(solver=GLPKSolverMIP(args…))

or

Model(solver=GLPKSolverLP())

I read in the documentation that the syntax is now:

Model(with_optimizer(GLPK.Optimizer, args…))

So I don’t understand where to define the kind of optimization problem… I tried several things but couldn’t figure it out…

If you can enlighten me :slight_smile:!

Thanks


I would also enjoy a clear and succinct summary of what happens with the various options, including bridges and direct_model. The documentation drops lots of hints, but I still feel a little fuzzy on exactly what is going on when I call optimize!.

Thanks again to all those who worked on JuMP 0.19 by the way! The Gurobi wrapper has been fixed and tagged so I’m now happily using 0.19 for my “big” problems. :smile:

Model(with_optimizer(GLPK.Optimizer))

will just work.


In the previous version, GLPK had two types of solvers (LP and MIP), and we forced the user to choose.

Now, the wrapper will choose for you. To be clear:

Model(solver = GLPKSolverLP())
# becomes
Model(with_optimizer(GLPK.Optimizer))

Model(solver = GLPKSolverMIP())
# becomes
Model(with_optimizer(GLPK.Optimizer))
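To make the point concrete, here is a minimal sketch (assuming JuMP 0.19 with GLPK.jl installed; the variable names and coefficients are invented for illustration) showing that the same `GLPK.Optimizer` handles both cases; the wrapper picks GLPK's LP or MIP routine based on the model it receives:

```julia
using JuMP, GLPK

model = Model(with_optimizer(GLPK.Optimizer))
@variable(model, x >= 0, Int)      # the Int restriction makes this a MIP;
@variable(model, 0 <= y <= 10)     # drop it and the same optimizer solves an LP
@objective(model, Max, 5x + 3y)
@constraint(model, 2x + y <= 12)
optimize!(model)
termination_status(model)          # check the solve status
```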

The Solvers · JuMP pages of the documentation are intended to cover this. Please open issues if there are points that are unclear in the docs.

See also slides 41-42 at http://www.juliaopt.org/meetings/bordeaux2018/lubin.pdf. They’re a bit out of date though (from JuMP-dev in July, 2018).


I think my main confusion right now is around the following: I’ll tell you how I think it works and please correct me if I’m wrong:

If you create a model with direct_model, all changes you make to that model, such as adding variables or constraints, are applied to the back-end immediately when the JuMP calls are made (e.g. @variable in JuMP results in a bunch of “add variable” C++ calls). If instead you create a model with Model(with_optimizer(opt)), then a JuMP object is created representing the problem, and this does not translate into actual C++ (or whatever) calls until the user calls optimize!. If a LazyBridgeOptimizer is used, the form of the problem created in the back-end is equivalent but not necessarily identical to what you just created in JuMP.

Correct.

Not quite. JuMP itself stores no copies of the model. The model data is stored in a CachingOptimizer (defined in MOI.Utilities) that manages loading and synchronizing the model data with the underlying optimizer (e.g., Gurobi). A CachingOptimizer can be in one of three states: NO_OPTIMIZER, EMPTY_OPTIMIZER, and ATTACHED_OPTIMIZER.

JuMP.optimize! triggers a switch to ATTACHED_OPTIMIZER if the CachingOptimizer is not already in that state, but you can also control it manually.

direct_model and JuMP’s MANUAL mode did not exist prior to JuMP 0.19. JuMP’s AUTOMATIC mode is essentially what happened in 0.18, but in a more ad hoc and less transparent way.
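For concreteness, here is a sketch of the three ways to create a model in 0.19 (assuming GLPK.jl; the `caching_mode` keyword is my reading of the 0.19 API, so double-check it against the docs):

```julia
using JuMP, GLPK
using MathOptInterface
const MOIU = MathOptInterface.Utilities

# AUTOMATIC (default): model data is cached in the CachingOptimizer;
# the underlying optimizer is attached and synchronized by optimize!.
m_auto = Model(with_optimizer(GLPK.Optimizer))

# MANUAL: same cache, but you control attachment/synchronization yourself.
m_manual = Model(with_optimizer(GLPK.Optimizer), caching_mode = MOIU.MANUAL)

# DIRECT: no cache at all; every @variable/@constraint call goes
# straight through to the underlying GLPK problem object.
m_direct = direct_model(GLPK.Optimizer())
```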

Bridges sit in the middle of all this and perform a minimal set of transformations (i.e., computed by a shortest path algorithm) between the constraints that the user wrote down and the constraints that a solver natively accepts.
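As a hedged illustration of such a transformation (the variables and coefficients here are invented; SplitIntervalBridge is the MOI bridge I believe would apply in this situation), consider an interval constraint:

```julia
using JuMP, GLPK

model = Model(with_optimizer(GLPK.Optimizer))
@variable(model, x)
@variable(model, y)
# Written by the user as a single interval constraint:
@constraint(model, 1 <= x + 2y <= 5)
# If the attached solver did not natively support MOI.Interval sets,
# the bridge layer (MOI.Bridges.SplitIntervalBridge) would rewrite it
# as the equivalent pair  x + 2y >= 1  and  x + 2y <= 5  before
# passing the problem to the solver.
optimize!(model)
```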


Thank you all for your answers. It is clearer now.

I’m not sure if it’s necessary to open an issue for the docs, but it’s true that I haven’t found any mention of how to select the kind of optimization problem, even in the sections
http://www.juliaopt.org/JuMP.jl/v0.19.0/solvers/#Automatic-and-Manual-modes-1
and Solvers · JuMP mentioned above.

Thanks again for your help.

This used to be a special case for GLPK. It no longer is. You don’t need to choose a priori what kind of optimization problem you are building.