Hi all,
I work with sequential convex programming (SCP) for trajectory optimization, in which a nonlinear programming problem is approximated by a sequence of convex subproblems obtained by linearization. The structure of the problem does not change between iterations, but the coefficients do. The subproblems (including the convexified constraints) can contain second-order cone (SOC) constraints and, in some cases, SDP constraints.
Currently, I use JuMP.jl to set up the convex subproblems, manually deleting and re-adding the linearized constraints at each iteration. A simple implementation can be found here:
For some solvers, such as MOSEK, this works well with seemingly low computational overhead, but for others, e.g. Gurobi, the overhead is much higher (multiples of the total solve time). Profiling shows that a significant amount of time is spent in many calls to GRBupdatemodel.
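For reference, the delete/re-add pattern I use (approach 2 in the list below) looks roughly like the following sketch; the problem, the coefficients `A` and `b`, and the choice of Gurobi are just placeholders for illustration:

```julia
using JuMP, Gurobi  # Gurobi is where I see the overhead; any conic solver works

model = Model(Gurobi.Optimizer)
@variable(model, x[1:2])
@variable(model, t >= 0)
@constraint(model, [t; x] in SecondOrderCone())  # structure fixed across iterations
@objective(model, Min, t)

# Placeholder linearized constraints, replaced every iteration
lin_con = @constraint(model, [i = 1:2], 0.0 * x[i] <= 1.0)
for iter in 1:10
    # ... compute new linearization coefficients A, b at the current iterate ...
    A, b = randn(2, 2), ones(2)                       # dummy data for illustration
    delete.(model, lin_con)                           # remove last iteration's cuts
    global lin_con = @constraint(model, A * x .<= b)  # re-add with new coefficients
    optimize!(model)
end
```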
It would be useful to know the intended workflow within JuMP for these types of problems. I currently see 4 possible approaches:
- Create a full new JuMP problem at each iteration
- Remove and add new constraints at each iteration
- Use JuMP parameters (possibly with ParametricOptInterface)
- Use set_normalized_coefficient to modify the existing constraints in place
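A minimal sketch of approach 3, assuming ParametricOptInterface and using HiGHS purely for illustration (I have not checked which conic constraint types POI supports, which is the concern raised below):

```julia
using JuMP, HiGHS
import ParametricOptInterface as POI

# POI wraps the inner optimizer and substitutes parameter values as data,
# so a parameter can appear in a constraint coefficient
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x >= 0)
@variable(model, p in Parameter(2.0))  # p enters the model as data, not a decision
@constraint(model, p * x >= 1.0)       # POI turns p * x into an affine term
@objective(model, Min, x)
optimize!(model)                       # solves with p = 2

set_parameter_value(p, 4.0)            # update only the coefficient data
optimize!(model)                       # re-solve without rebuilding the model
```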
I would be interested to know whether there are any performance benefits to taking approaches 3 or 4 over approach 2. Approach 3 may be limited by the constraint types supported by ParametricOptInterface, and approach 4 may be difficult to implement because the normalized form of each constraint must be taken into account (adding complication).
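To make the approach-4 concern concrete, here is a sketch of modifying a constraint in place (again with HiGHS as a stand-in solver); the values passed must match JuMP's normalized form (variable terms on the left, constant on the right), not the constraint as originally written:

```julia
using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
@variable(model, x[1:2] >= 0)
@objective(model, Min, sum(x))
con = @constraint(model, 1.0 * x[1] + 1.0 * x[2] >= 1.0)

# Approach 4: overwrite the stored coefficients instead of deleting the
# constraint. These setters act on the *normalized* form a'x >= b, so any
# constant terms the original expression had on the left must be folded
# into the right-hand side by hand.
set_normalized_coefficient(con, x[1], 2.0)
set_normalized_rhs(con, 3.0)
optimize!(model)
```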