I would like to solve several similar optimization problems in JuMP whose mathematical formulation (cost and constraints) is the same, but whose model parameter values, or the sizes of those parameters, differ.

Currently, I’m rebuilding each model from scratch for each problem, but this seems inefficient.

Is there an efficient way to solve these problems (e.g. reusing the model, caching the modeling result, etc.)?

Which parts of the formulation are similar? When it comes to constraints, you can delete them, modify them, etc. pretty easily. Check out the docs here.
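For example, here is a minimal sketch of in-place constraint modification using the JuMP functions `set_normalized_rhs`, `set_normalized_coefficient`, and `delete` (the model and constraint here are illustrative, not from your problem):

```julia
using JuMP

model = Model()
@variable(model, x >= 0)
@constraint(model, con, 3x <= 6)

# Change the right-hand side:  3x <= 6  ->  3x <= 10
set_normalized_rhs(con, 10)

# Change a coefficient:  3x <= 10  ->  5x <= 10
set_normalized_coefficient(con, x, 5)

# Or remove the constraint entirely
delete(model, con)
```

Note that these operate on the *normalized* form of the constraint (all variable terms on the left, constant on the right), which may differ from how you originally wrote it.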

Also check out the warm start functionality here, although to be honest I have not found this to really speed up solving (I usually work with LPs of ~3000 variables)
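For reference, passing a primal start point to the solver looks like this in JuMP (a sketch; whether the solver actually uses the hint depends on the solver):

```julia
using JuMP

model = Model()
@variable(model, x >= 0)

# Suggest a starting value for x to solvers that support warm starts
set_start_value(x, 0.5)
start_value(x)  # returns 0.5
```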

Thank you for your comment.

> Which parts of the formulation are similar?

Both the cost and the constraints. I modified my question. Deleting constraints looks useful; I will try it.

It’s usually best to modify a constraint rather than delete it when possible. For modifying the cost/objective, see here.
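Concretely, you can either replace the objective wholesale with another `@objective` call, or change a single coefficient in place with `set_objective_coefficient`; a small sketch:

```julia
using JuMP

model = Model()
@variable(model, x >= 0)
@objective(model, Min, x + 1)

# Replace the whole objective function...
@objective(model, Min, 2x)

# ...or change just the coefficient of x in place: 2x -> 3x
set_objective_coefficient(model, x, 3.0)
```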

Note that if the solver does not support your modification, the model may be silently discarded and recreated from scratch. You could use the MANUAL mode to control this and get an error instead of a silent rebuild. See also Direct mode for even more control.

Ah, I didn’t know that. I thought LPs can use a previous solution as a starting feasible point if, e.g. only the objective changes. Shouldn’t that be more efficient in theory since the solver can skip the initialization (feasibility problem) step?

In some cases the solver may warm-start from a basic feasible solution retained internally from a previous solve. But this is independent of `set_warm_start`.

```
using JuMP, Gurobi
model = Model(Gurobi.Optimizer)
@variable(model, x >= 0)
@constraint(model, 2x >= 1)
@objective(model, Min, x)
optimize!(model)
@objective(model, Min, 2x)
optimize!(model) # Warmstarts from previous solve
```

To be precise: if the new problem has the same feasible set as the previous one, then yes, the solution of the previous problem should be usable as a feasible initial point. If the feasible set changes (for example, when cuts are added in MINLP frameworks), the previous solution may no longer be primal feasible; however, it stays dual feasible.

This is the reason why MINLP frameworks usually work with the dual simplex (i.e. the simplex method applied to the dual problem): you maintain dual feasibility throughout and converge once primal feasibility is attained.
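With Gurobi (used in the earlier snippet), you can force the dual simplex explicitly through its `Method` solver parameter; a sketch:

```julia
using JuMP, Gurobi

model = Model(Gurobi.Optimizer)
# Gurobi's "Method" parameter: 0 = primal simplex, 1 = dual simplex, 2 = barrier
set_optimizer_attribute(model, "Method", 1)
```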

Dual active-set methods are brilliant in this regard.

Thank you. Your answer is exactly what I was looking for; I will try it.