How do I extract a trace of the optimization steps in JuMP?

During optimization via an interior-point method or similar, the solver computes the value of the objective (and probably a bunch of other things) at each iteration, and one should see it going down.

Is there a standard way to capture this into a vector or similar?

Or is the best I can do to turn on printing as a solver option, redirect stdout into a stream, and then parse it with a regex?

There is no standard way. What solver?


Unknown, but probably HiGHS. Maybe ECOS or Gurobi. But let's say HiGHS.

Not possible, and I don’t even know if the log would be very helpful.

Most solvers only care about the final solution, not about the path to get there.


Currently, solvers like CPLEX/Gurobi have a “dynamic search” mode that they are trying to make the default, which is their name for “we no longer want to guarantee our users that our core algorithm is branch-and-bound”. This new dynamic search is a black box: inside it, they can do whatever they want.

Some solvers (mostly the commercial ones) allow you to query some information through a callback. Depending on the solver, you may be able to extract, e.g., the current simplex objective value and primal/dual infeasibility (this is from Gurobi callback docs). You’d need a solver-specific callback for that.
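For concreteness, here is a minimal sketch of that pattern with Gurobi.jl, following the callback example in its README; it records the barrier primal objective at each iteration. The GRB_CB_* constants come from Gurobi's C API, which Gurobi.jl exposes.

```julia
# Sketch: record the barrier primal objective at each iteration via
# Gurobi's solver-specific callback. Requires a direct-mode model.
using JuMP, Gurobi

model = direct_model(Gurobi.Optimizer())
# ... build the model here ...

barrier_obj = Float64[]  # trace of primal objective values

function my_callback(cb_data, cb_where::Cint)
    if cb_where == GRB_CB_BARRIER
        # Query the current primal objective of this barrier iteration.
        objP = Ref{Cdouble}()
        GRBcbget(cb_data, cb_where, GRB_CB_BARRIER_PRIMOBJ, objP)
        push!(barrier_obj, objP[])
    end
    return
end

MOI.set(model, Gurobi.CallbackFunction(), my_callback)
optimize!(model)
```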

On the academic solver side, SCIP would allow you to do anything you want at the C level (through so-called “event handlers” and plugins), but I don’t know how much of that is exposed in Julia (and I’ve never used any of them myself). As far as I know, HiGHS doesn’t (yet) have callbacks / they are not exposed in Julia.

Another option would be to set logging to the highest level, and hopefully the information you want will be printed there. It will likely have an impact on performance though (lots of I/O operations).
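As a sketch of that route with HiGHS: "output_flag" and "log_file" are real HiGHS options, but the log format is not a stable API, so the regex below is only a placeholder you would adapt to the iteration lines your version actually prints.

```julia
# Sketch: send the HiGHS log to a file and scrape it afterwards.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_optimizer_attribute(model, "output_flag", true)
set_optimizer_attribute(model, "log_file", "highs.log")
# ... build the model here ...
optimize!(model)

# Pull whatever numeric column you care about out of the iteration lines.
objectives = Float64[]
for line in eachline("highs.log")
    m = match(r"^\s*\d+\s+(-?\d\.\d+e[+-]\d+)", line)  # placeholder pattern
    m === nothing || push!(objectives, parse(Float64, m.captures[1]))
end
```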

1 Like

I guess the particular thing I really want to know is: when I terminate with a termination status like ALMOST_OPTIMAL or ITERATION_LIMIT, roughly where were the objective value, the derivatives, etc., so I can see about relaxing my tolerances so those runs count as success.

Also, especially in the case of ITERATION_LIMIT, I want to know whether my objective value had more or less plateaued recently (albeit not to the level of flatness that would count as ALMOST_OPTIMAL), or whether it was still in a steep area and just hadn't made it.
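For concreteness, the post-solve check I have in mind is something like the sketch below; the HiGHS tolerance name is taken from its options list, so double-check it for your version.

```julia
# Sketch: inspect how a run ended and, if it was "almost there",
# loosen tolerances and re-solve. "ipm_optimality_tolerance" is a
# HiGHS option; other solvers use different names.
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
# ... build the model here ...
optimize!(model)

status = termination_status(model)
@show status raw_status(model)

if status in (MOI.ALMOST_OPTIMAL, MOI.ITERATION_LIMIT, MOI.SLOW_PROGRESS)
    set_optimizer_attribute(model, "ipm_optimality_tolerance", 1e-6)
    optimize!(model)
end
```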

About ITERATION_LIMIT: every solver I have worked with prints to the log when it finds a new incumbent (i.e., a better solution than any found before), along with the time at which this happened. I think most of those that support callbacks also have a callback for when an incumbent is found. So this information, at least, is available.
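For Gurobi specifically, a minimal sketch of such an incumbent logger using its solver-specific callback: GRB_CB_MIPSOL fires whenever a new incumbent is found, and GRB_CB_RUNTIME gives the elapsed time.

```julia
# Sketch: log (elapsed seconds, objective) for each new incumbent.
using JuMP, Gurobi

model = direct_model(Gurobi.Optimizer())
# ... build a MIP here ...

incumbents = Tuple{Float64,Float64}[]

function incumbent_logger(cb_data, cb_where::Cint)
    if cb_where == GRB_CB_MIPSOL
        objP, timeP = Ref{Cdouble}(), Ref{Cdouble}()
        GRBcbget(cb_data, cb_where, GRB_CB_MIPSOL_OBJ, objP)
        GRBcbget(cb_data, cb_where, GRB_CB_RUNTIME, timeP)
        push!(incumbents, (timeP[], objP[]))
    end
    return
end

MOI.set(model, Gurobi.CallbackFunction(), incumbent_logger)
optimize!(model)
```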

You mentioned an interior-point algorithm and HiGHS/ECOS/Gurobi as solvers; are you considering purely continuous, convex, possibly nonlinear problems? (If not, you can ignore what follows.)

If so, and if Mosek is an option for you, they also have callbacks that may provide some of the information you’re looking for.

More generally though: if you're indeed playing with continuous, nonlinear convex problems, the only two reasons I know of for IPM solvers to terminate with ALMOST_OPTIMAL / ITERATION_LIMIT / SLOW_PROGRESS are numerical issues (a badly scaled problem) and loss of constraint qualification. Infeasible/unbounded problems can also cause trouble for solvers that don't use a homogeneous embedding algorithm (AFAIK it's the default choice for Mosek/ECOS). In my experience, for the former two, you'll often see a plateau in the objective value, yes.
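If you do manage to record an objective trace (e.g., via one of the callback sketches above), checking for a plateau is then just a few lines of plain Julia; the window length and relative-change threshold below are arbitrary choices.

```julia
# Sketch: decide whether an objective trace had plateaued near the end.
function plateaued(trace::Vector{Float64}; window::Int = 10, rtol::Float64 = 1e-6)
    length(trace) < window + 1 && return false
    recent = trace[end-window:end]            # last `window + 1` values
    span = maximum(recent) - minimum(recent)  # how much it still moved
    return span <= rtol * max(1.0, abs(recent[end]))
end

# e.g. plateaued(barrier_obj) on the vector recorded in the earlier sketch
```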