I would also enjoy a clear and succinct summary of what happens with the various options, including bridges and direct_model. The documentation drops lots of hints, but I still feel a little fuzzy on exactly what is going on when I call optimize!.
Thanks again to all those who worked on JuMP 0.19, by the way! The Gurobi wrapper has been fixed and tagged, so I’m now happily using 0.19 for my “big” problems.
I think my main confusion right now is around the following. I’ll tell you how I think it works, and please correct me if I’m wrong:
If you create a model with direct_model, all changes you make to that model, such as adding variables or constraints, are applied to the back-end immediately when the JuMP calls are made (e.g. @variable in JuMP results in a bunch of “add_variable” C++ calls). If instead you create a model with Model(with_optimizer(opt)), then a JuMP object is created representing the problem, and this does not translate into actual C++ (or whatever) calls until the user calls optimize!. If a LazyBridgeOptimizer is used, the form of the problem created in the back-end is equivalent but not necessarily identical to what you wrote down in JuMP.
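To make that concrete, here is roughly what I have in mind, as a sketch with Gurobi using the 0.19 calls as I understand them:

```julia
using JuMP, Gurobi

# Direct mode: as I understand it, @variable/@constraint below turn into
# calls on the Gurobi model right away.
direct = direct_model(Gurobi.Optimizer())
@variable(direct, x >= 0)

# Cached mode: my (possibly wrong) mental model is that the problem is held
# in a JuMP-side object and only pushed to Gurobi when optimize! is called.
cached = Model(with_optimizer(Gurobi.Optimizer))
@variable(cached, y >= 0)
```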
Not quite. JuMP itself stores no copies of the model. The model data is stored in a CachingOptimizer (defined in MOI.Utilities) that manages loading and synchronizing the model data with the underlying optimizer (e.g., Gurobi). A CachingOptimizer can be in one of three states:

- NO_OPTIMIZER: no optimizer is attached; only the cached copy of the model exists.
- EMPTY_OPTIMIZER: an optimizer is attached, but the cached model has not been loaded into it.
- ATTACHED_OPTIMIZER: the optimizer holds a copy of the model and is kept in sync with the cache.
JuMP.optimize! triggers a switch to ATTACHED_OPTIMIZER if the CachingOptimizer is not already in that state, but you can also control the transitions manually:
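For example, here is a minimal sketch of MANUAL mode, using Gurobi; the caching_mode keyword and the MOIU.MANUAL / MOIU.attach_optimizer spellings follow the JuMP 0.19-era API and may differ slightly in other releases:

```julia
using JuMP, Gurobi, MathOptInterface
const MOI = MathOptInterface
const MOIU = MOI.Utilities

# MANUAL caching mode: JuMP will not move the CachingOptimizer between
# states on its own; in the default AUTOMATIC mode, optimize! does it for you.
model = Model(with_optimizer(Gurobi.Optimizer), caching_mode = MOIU.MANUAL)

@variable(model, x >= 0)
@objective(model, Min, x)

MOIU.state(backend(model))             # inspect the current state (typically EMPTY_OPTIMIZER here)
MOIU.attach_optimizer(backend(model))  # load the cached model into Gurobi
MOIU.state(backend(model))             # ATTACHED_OPTIMIZER: cache and solver now in sync

optimize!(model)
```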
direct_model and JuMP’s MANUAL mode did not exist prior to JuMP 0.19. JuMP’s AUTOMATIC mode is essentially what JuMP 0.18 did, except that 0.18 did it in a more ad hoc and less transparent way.
Bridges sit in the middle of all this and perform a minimal set of transformations (chosen by a shortest-path algorithm) between the constraints the user wrote down and the constraints the solver natively accepts.
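For instance, here is a rough sketch of how this looks from the user’s side (Gurobi again; the bridge_constraints keyword is from the JuMP 0.19 API, and whether any bridge actually fires depends on which constraint types the solver reports as supported):

```julia
using JuMP, Gurobi

# bridge_constraints = true (the default) wraps the solver in a
# MOI.Bridges.LazyBridgeOptimizer.
model = Model(with_optimizer(Gurobi.Optimizer), bridge_constraints = true)

@variable(model, 0 <= x <= 1)

# A ranged (Interval) constraint. If the solver only accepts separate
# >= and <= rows, the split-interval bridge rewrites it as two constraints;
# if the solver supports intervals natively, it is passed through untouched.
@constraint(model, 1 <= 2x + 1 <= 3)

@objective(model, Max, x)
optimize!(model)
```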