Most often, modifications involve changing the “right-hand side” of a linear constraint. This presents a challenge for JuMP because it leads to ambiguities. For example, what is the right-hand side term of @constraint(model, 2x + 1 <= x - 3)? This applies more generally to any constant term in a function appearing in the objective or a constraint.

To avoid these ambiguities, JuMP includes the ability to fix variables to a value using the fix function. Fixing a variable sets its lower and upper bound to the same value. Thus, changes in a constant term can be simulated by adding a dummy variable and fixing it to different values. Here is an example:
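A minimal sketch of that pattern (the variable names here are illustrative, not from the original example):

```julia
using JuMP

model = Model()
@variable(model, x >= 0)
@variable(model, const_term)  # dummy variable standing in for the constant
fix(const_term, 1.0)          # sets lower bound = upper bound = 1.0
@constraint(model, 2x <= const_term)

# Later, "change the right-hand side" by re-fixing the dummy variable:
fix(const_term, 2.0)
```

Re-fixing an already-fixed variable simply moves both bounds to the new value.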

Even though const_term is fixed, it is still a decision variable. Thus, const_term * x is bilinear. Fixed variables are not replaced with constants when communicating the problem to a solver.

In my problem I have vector constraints like const_term * x and I would like to repeatedly solve for different values of const_term.
The program is linear for any fixed value of const_term, and, as stated above, if I use fix for const_term the program is recognized as quadratic/bilinear by Gurobi, which leads to problems.

What is a good workaround for solving a repeatedly parameterized problem in JuMP where the coefficients in the constraints change?

I haven’t used it before, but I think that’s the goal of ParameterJuMP. And with Convex.jl instead of JuMP (which may or may not be a viable alternative, depending on the problem) you could use fix! and free!. I recently helped fix some bugs regarding fix! and free! there, so I figured I’d advertise the package.

A solution for my purposes was just to continually deepcopy the model and then add the changed constraint.
I assume this is not very efficient, but for the time being it seems to work ok.

Is there any reason why set_coefficient doesn’t work?
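For reference, a sketch of that approach; in current JuMP the function is called set_normalized_coefficient (earlier versions exposed it as set_coefficient), and the model below is illustrative:

```julia
using JuMP

model = Model()
@variable(model, x)
@variable(model, y)
con = @constraint(model, x + 1.0 * y <= 1)

for c in 1:3
    # Change the coefficient of y in the existing constraint in place,
    # instead of rebuilding the model from scratch.
    set_normalized_coefficient(con, y, Float64(c))
    # optimize!(model)  # re-solve here once a solver is attached
end
```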

We try to discourage the use of deepcopy. Instead, you could use a function to rebuild the base model:

using JuMP, Gurobi

function base_model()
    model = Model(with_optimizer(Gurobi.Optimizer))
    @variable(model, x)
    @variable(model, y)
    @constraint(model, x >= 0)
    return model
end

for c in 1:3
    m = base_model()
    @constraint(m, m[:x] + c * m[:y] <= 1)
    optimize!(m)
end

But if you have time, could you elaborate on why deepcopy is discouraged?

Also, for the base model workaround, my model is quite large and the part I want to adjust is only a small component of it. Would that make this solution less attractive?
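As an aside, when only a small piece of a large model changes, another option (a sketch, not something proposed above) is to build the large fixed part once and then delete and re-add only the changing constraint using JuMP's delete:

```julia
using JuMP

model = Model()
@variable(model, x)
@variable(model, y)
# ... imagine many constraints forming the large fixed part here ...
con = @constraint(model, x + 1.0 * y <= 1)

for c in 2:3
    delete(model, con)  # remove only the changing constraint
    con = @constraint(model, x + c * y <= 1)
    # optimize!(model)  # re-solve here once a solver is attached
end
```

This avoids rebuilding the whole model in each iteration.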

The underlying solver object is accessed via a C API. deepcopy’ing a model copies the Julia part, but only copies the pointer to the underlying solver, so both copies end up sharing the same solver instance. If you then modify one of the models, the other is silently out of date. See, e.g.,

using JuMP, Gurobi
model = Model(with_optimizer(Gurobi.Optimizer))
@variable(model, x >= 0)
@objective(model, Min, x)
optimize!(model)
@assert value(x) == 0
model_2 = deepcopy(model)
@constraint(model_2, model_2[:x] >= 1)
optimize!(model_2)
@assert value(model_2[:x]) == 1
optimize!(model)
@show value(x) # <-- actually 1 not 0!