Workaround for modifying coefficients in constraints in JuMP?

In the 0.19 documentation on this page it says:

Modifying a constant term

Most often, modifications involve changing the “right-hand side” of a linear constraint. This presents a challenge for JuMP because it leads to ambiguities. For example, what is the right-hand side term of @constraint(model, 2x + 1 <= x - 3)? This applies more generally to any constant term in a function appearing in the objective or a constraint.

To avoid these ambiguities, JuMP includes the ability to fix variables to a value using the fix function. Fixing a variable sets its lower and upper bound to the same value. Thus, changes in a constant term can be simulated by adding a dummy variable and fixing it to different values. Here is an example:

julia> @variable(model, const_term)
const_term

julia> @constraint(model, con, 2x <= const_term)
con : 2 x - const_term <= 0.0

julia> fix(const_term, 1.0)

Note

Even though const_term is fixed, it is still a decision variable. Thus, const_term * x is bilinear. Fixed variables are not replaced with constants when communicating the problem to a solver.

In my problem I have vector constraints with terms like const_term * x, and I would like to repeatedly solve for different values of const_term.
The program is linear for any fixed value of const_term, but (as stated above) if I use fix for const_term, Gurobi recognizes the program as quadratic/bilinear, which leads to problems.
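To illustrate the issue, here is a minimal sketch (no solver attached, variable names are made up for the example) of why fix does not help: the product of a fixed variable and another variable is still a bilinear expression to JuMP, which is what the solver then sees:

```julia
using JuMP

model = Model()
@variable(model, x)
@variable(model, const_term)

# Fixing sets the lower and upper bound to the same value,
# but const_term remains a decision variable.
fix(const_term, 2.0)

# The product of a fixed variable and another variable is a
# quadratic (bilinear) expression, not an affine one:
expr = const_term * x
println(typeof(expr))  # a GenericQuadExpr, not a GenericAffExpr
```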

What is a good workaround for solving a repeatedly parameterized problem in JuMP where the coefficients in the constraints change?

I haven’t used it before, but I think that’s the goal of ParameterJuMP. And with Convex.jl instead of JuMP (which may or may not be a viable alternative, depending on the problem) you could use fix! and free!. (I recently helped fix some bugs regarding fix! and free! there, so I figured I’d advertise the package.)

There’s Constraints · JuMP, the paragraph just below the part of the documentation you posted. You could also think about using Parametron.jl (GitHub - tkoolen/Parametron.jl), which efficiently solves instances of a parameterized family of (possibly mixed-integer) linear/quadratic optimization problems in Julia. This has been discussed a number of times before. See e.g. Deleting containerized constraints - #13 by tkoolen for a Parametron demo.

Thank you for the suggestions.

A solution for my purposes was just to continually deepcopy the model and then add the changed constraint.
I assume this is not very efficient, but for the time being it seems to work ok.

Is there any reason why set_coefficient doesn’t work?

We try to discourage the use of deepcopy. Instead, you could use a function to rebuild the base model:

function base_model()
    # Attach a solver here, e.g. Model(with_optimizer(GLPK.Optimizer)),
    # so that optimize! below has something to call.
    model = Model()
    @variable(model, x)
    @variable(model, y)
    @constraint(model, x >= 0)
    return model
end

for c in 1:3
    m = base_model()
    @constraint(m, m[:x] + c * m[:y] <= 1)
    optimize!(m)
end

You’re right: set_coefficient works.
I had interpreted the “scalar” qualification in the manual http://www.juliaopt.org/JuMP.jl/v0.19.2/constraints/#Modifying-a-variable-coefficient-1 to mean that it wouldn’t work for a sum of multiple variables.
However, this works:

m = Model()
@variable(m, x[1:2])
@constraint(m, con, 5 * x[1] + 6 * x[2] == 0)
set_coefficient(con, x[1], 4)
println(m)

# output:
# Feasibility
# Subject to
#  4 x[1] + 6 x[2] = 0.0

So, that solves my immediate problem.
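In case it helps anyone else, the repeated-solve use case can then be sketched like this (assuming GLPK as a stand-in solver; the bounds and objective are just placeholders):

```julia
using JuMP, GLPK

m = Model(with_optimizer(GLPK.Optimizer))
@variable(m, 0 <= x[1:2] <= 1)
@constraint(m, con, 5 * x[1] + 6 * x[2] <= 10)
@objective(m, Max, x[1] + x[2])

# Re-solve for several values of the coefficient on x[1]
# without rebuilding the model.
for c in 1:3
    set_coefficient(con, x[1], Float64(c))
    optimize!(m)
    println(c, " => ", value.(x))
end
```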

But if you have time, could you elaborate on why deepcopy is discouraged?

Also, for the base model workaround, my model is quite large and the part I want to adjust is only a small component of it. Would that make this solution less attractive?

The underlying solver object is accessed via a C API. deepcopy’ing a model copies the Julia part, but only copies the pointer to the underlying solver object. So if you make modifications to one of the models, the other is now out of date. See, e.g.,

using JuMP, Gurobi

model = Model(with_optimizer(Gurobi.Optimizer))
@variable(model, x >= 0)
@objective(model, Min, x)
optimize!(model)
@assert value(x) == 0

model_2 = deepcopy(model)
@constraint(model_2, model_2[:x] >= 1)
optimize!(model_2)
@assert value(model_2[:x]) == 1

optimize!(model)
@show value(x) # <-- actually 1 not 0!

Makes sense! Thanks for the explanation.

If the docs were confusing, please help us make them better!

First, scroll to the top of Constraints · JuMP and click “Edit on Github”

Then click the pencil “Edit this file”

Scroll down, make changes to the file, add a commit message, then click “Propose file change”

Let us know if you need any help!

Ok, I modified the example and opened a pull request.
