 # Fixing vs Constraining a variable in JuMP

I was wondering if someone could explain the differences between fixing and constraining a variable in a little more detail than the JuMP documentation does.

If you use `fix(...; force=true)`, my understanding is that this clamps the variable to your given value and removes any way for it to change, presumably by setting both the lower and upper bound of the variable to that value.

If you instead add a @constraint setting the variable equal to your value, what is the difference? The variable's bounds could be set farther away, but so long as the system behaves, this shouldn't matter: the optimal solution will give the variable your set value.

These two things don’t seem to be equivalent though, so can someone tell me when I should be using one case or the other?

The problem I’m working on (Julia 1.1.0, JuMP.jl 0.19.2, Ipopt.jl 0.5.4, Ipopt 3.12.13) has this variable T_{at}:

```julia
N = 60;
@variable(model, 0.0 <= Tₐₜ[1:N] <= 20.0);
...
@constraint(model, [i = 1:N-1],
    Tₐₜ[i+1] == Tₐₜ[i] + ξ₁*((FORC[i+1] - λ*Tₐₜ[i]) - ξ₃*(Tₐₜ[i] - Tₗₒ[i])));
```


The initial value of T_{at} is known, so I wish to set it:

```julia
JuMP.fix(Tₐₜ[1], tatm₀; force=true);
```


On a normal run, this works as I’d expect. A solution is found without problems.
Now I wish to tighten the upper bound so that T_{at} \leq 2.0:

```julia
for i in 2:N
    JuMP.set_upper_bound(vars.Tₐₜ[i], 2.0);
end
```


We run from 2:N since the first value is fixed. Without getting into too much detail about the rest of my system: Ipopt can't solve this problem. It becomes concave and ultimately gets stuck in some local well, iterating forever in the same place:

```
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -0.0000000e+00 1.75e+04 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
 250r-3.0806001e+04 1.46e+04 6.84e+03   2.2 2.98e+03    -  3.98e-02 1.88e-03f  1
 500 -1.8223300e+04 3.99e+03 1.02e+02  -1.0 5.18e+05  -8.2 2.38e-03 4.56e-05h  7
 750r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1250r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1500r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1750r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2250r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2500r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2750r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
3000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
```

(When lg(mu) << 0 the problem is concave and Ipopt is going to have a hard time AFAIK).

Instead, if I unfix the value, add a constraint, and then bound the first value too:

```julia
JuMP.unfix(Tₐₜ[1]);
JuMP.set_lower_bound(Tₐₜ[1], 0.0);  # Same as the initial lower bound
@constraint(model, Tₐₜ[1] == tatm₀);
for i in 1:N  # Set the first element too this time
    JuMP.set_upper_bound(Tₐₜ[i], 2.0);
end
```


This problem solves to optimal without complaint.

What’s going on here? Should I just avoid using fix altogether? Am I using fix in the wrong context? Any additional information around these differences would be greatly appreciated.

`fix(x, value)` sets the lower and upper variable bounds of `x` to `value`.

`@constraint(model, x == value)` adds a new linear constraint whose left-hand side is the function `1.0 * x + 0.0` and whose right-hand side is `value`.

In general, it's better to use `fix`, since it's one fewer constraint for the solver to deal with, and solvers can exploit variable bounds efficiently.
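To make the difference concrete, here is a minimal sketch (the variable `x` and the value `0.8` are placeholders, not taken from the script above) contrasting the two formulations and showing how `fix` interacts with existing bounds:

```julia
using JuMP

model = Model()
@variable(model, 0.0 <= x <= 20.0)

# Option 1: fix the variable. force=true is required because x already
# has bounds; fixing deletes them and clamps x to the value.
fix(x, 0.8; force=true)
@assert is_fixed(x)
@assert fix_value(x) == 0.8

# Option 2: keep the bounds and add an explicit equality constraint.
# unfix removes the fixing (and leaves x unbounded), so the original
# bounds must be restored by hand.
unfix(x)
set_lower_bound(x, 0.0)
set_upper_bound(x, 20.0)
@constraint(model, x == 0.8)
```

Both formulations describe the same feasible point, but the first is communicated to the solver as variable bounds, the second as an extra constraint row.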

It's unusual that Ipopt struggles, but it's hard to say why without a minimal working example.

What value is `tatm₀`?


Thanks. That’s what I thought.

`tatm₀ = 0.8`, so it's still well within the range I'm trying to bound.

Yeah, it's odd that Ipopt has this issue in this case then. I've tried to cut my project down to an MWE before for another problem, but it's pretty tightly coupled, and the issue doesn't seem to occur when I'm not using the full system.

Here's a link to a script which will show you what I'm seeing; it's the smallest I can get it, sorry.

To run it, just make a new folder `fixtest`, then:

```
julia> ]
(v1.1) pkg> activate .
(fixtest) pkg> add JuMP
(fixtest) pkg> add Ipopt
```


Then add `fixtest.jl` to `fixtest/src/`.
Finally, from the `src` directory, run `julia --project=@. fixtest.jl`.

At the moment, the solver will spool forever and not reach an end condition.
If you comment out lines 250-253, and replace them with 256-261, you’ll get a solution quite quickly.

Note that I've been testing on Ipopt 3.12.13; I can't say for sure that the bundled 3.12.10 will give the same results, as I've had issues with it in the past. Just in case, here's what I see:

Spooling version, using fix (killed early)

```
This is Ipopt version 3.12.13, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:     3912
Number of nonzeros in inequality constraint Jacobian.:      176
Number of nonzeros in Lagrangian Hessian.............:      944

Total number of variables............................:     1544
                     variables with only lower bounds:      536
                variables with lower and upper bounds:      178
                     variables with only upper bounds:       59
Total number of equality constraints.................:     1374
Total number of inequality constraints...............:       59
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:       59

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -0.0000000e+00 9.00e+03 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
 250r-6.4597620e+13 1.04e+10 2.32e-03  -3.3 8.42e-03    -  1.00e+00 3.05e-05f 16
 500r-6.4597620e+13 1.04e+10 2.31e-03  -3.3 8.42e-03    -  1.00e+00 1.53e-05f 17
 750r-6.4597620e+13 1.04e+10 2.30e-03  -3.3 8.42e-03    -  1.00e+00 1.53e-05f 17
```

Working version, with constraints

```
This is Ipopt version 3.12.13, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).

Number of nonzeros in equality constraint Jacobian...:     3916
Number of nonzeros in inequality constraint Jacobian.:      176
Number of nonzeros in Lagrangian Hessian.............:      945

Total number of variables............................:     1545
                     variables with only lower bounds:      536
                variables with lower and upper bounds:      179
                     variables with only upper bounds:       59
Total number of equality constraints.................:     1375
Total number of inequality constraints...............:       59
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:       59

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -0.0000000e+00 9.00e+03 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
In iteration 59, 1 Slack too small, adjusting variable bound
In iteration 62, 1 Slack too small, adjusting variable bound

Number of Iterations....: 228

                                   (scaled)                 (unscaled)
Objective...............:  -2.6944230209485045e+03   -2.6944230209485045e+03
Dual infeasibility......:   5.6843418860808015e-14    5.6843418860808015e-14
Constraint violation....:   9.7067947429918533e-11    9.7067947429918533e-11
Complementarity.........:   2.5059126891810398e-09    2.5059126891810398e-09
Overall NLP error.......:   2.5059126891810398e-09    2.5059126891810398e-09

Number of objective function evaluations             = 311
Number of objective gradient evaluations             = 199
Number of equality constraint evaluations            = 311
Number of inequality constraint evaluations          = 311
Number of equality constraint Jacobian evaluations   = 238
Number of inequality constraint Jacobian evaluations = 238
Number of Lagrangian Hessian evaluations             = 228
Total CPU secs in IPOPT (w/o function evaluations)   =     86.921
Total CPU secs in NLP function evaluations           =      1.485

EXIT: Optimal Solution Found.
```

Interesting. It'd really help to have a simpler example. Some tips for getting the script smaller:

• Try setting constants to 1.0 (or 0.0), then removing them.
• Try commenting out constraints.
• Try fixing variables to 0, then removing them.
• Try hard-coding parameter values, then removing the parameters.

Sure thing. I’ve got to head out of town now and won’t be back until Monday. I’ll try to get you a smaller example then.

BTW, Ipopt has special treatment for variables with the same upper and lower bounds. This option may be of interest: https://www.coin-or.org/Ipopt/documentation/node44.html#opt:fixed_variable_treatment
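As a sketch of how that option could be passed through the JuMP 0.19-era syntax (the value `"make_constraint"` below is just an example; `"make_parameter"` is Ipopt's default):

```julia
using JuMP, Ipopt

# fixed_variable_treatment controls how Ipopt handles variables whose
# lower and upper bounds coincide. Valid values are "make_parameter"
# (the default), "make_constraint", and "relax_bounds".
model = Model(with_optimizer(Ipopt.Optimizer,
    fixed_variable_treatment = "make_constraint"))
```

With `"make_constraint"`, a fixed variable is handled internally much like the explicit `@constraint` formulation discussed above, which makes it a useful knob for comparing the two behaviours.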

Thanks @ccoffrin, good to know. Looks like I need the default value there though, so no need to override this.

@odow: it seems my system is quite brittle, at least in the sense of producing a smaller example of this particular issue. I'll keep working on putting a smaller example together, but here's something interesting/odd to note in the meantime:

Removing the initial condition constants from the top of the script and hard-coding them:

```julia
JuMP.fix(K[1], 135.0; force=true);
JuMP.fix(Mₐₜ[1], 830.4; force=true);
JuMP.fix(Mᵤₚ[1], 1527.0; force=true);
JuMP.fix(Mₗₒ[1], 1527.0; force=true);
JuMP.fix(Tₐₜ[1], 0.8; force=true);
JuMP.fix(Tₗₒ[1], 0.0068; force=true);
```


swaps the concave/spooling issue. With this arrangement, setting the T_{at} upper bound on the fix version results in an optimal solution, whereas the constraint option now spools…

Edit: As you can see I copy/pasted the M_{lo} value wrong. The above isn’t actually the case…

OK, so I've taken out a number of parameters and hard-coded things where I can to make it simpler.

Updated script

It is difficult to whittle the example down to something that gives me the same type of result (where fix fails and the constraint doesn't), so it's had to stay relatively the same.

That being said, I’ve identified four different ways that the fix solution would also solve to optimal, which are listed under ADJUSTMENTS at the top of the file:

1. Set \alpha to 0.0
2. Set \gamma_e to 0.0
3. Set \theta_1 to 1.0
4. Set L to 1.0, and remove the scaling from PERIODU, CPC and YGROSS.

In addition to this, the system is quite brittle for reasons I can’t understand.

```julia
const scale = 5/3.666;
```


is used in two places (CCA and M_{at}). Replacing these values directly makes even the constraint version fail.

```julia
CCA[i+1] == CCA[i] + Eind[i]*scale
# -->
CCA[i+1] == CCA[i] + Eind[i]*(5/3.666)
```


Perhaps I’m looking at two different issues here, but I fail to see how this change isn’t a bug somewhere deeper than my code.

Constraints like \Omega[i] == 0.0 seem to be necessary for stability; if I drop that, the system becomes concave. You can see DAMAGES is also zero and should be droppable, but it cannot be, for the same reason.

Does this (semi-)cut down version help in identifying the problem here?

I can’t reproduce this. When I run your updated script, I get Optimal Solution Found. Although, when I run the original one, I can reproduce the stall.

Keep working on a simpler example. You still have a lot of things you could remove, like DAMAGES, CPRICE, etc.

It’s very interesting that hard-coding the coefficients results in a different result. Does it still occur if you use @NLconstraint for the CCA[i+1] constraints instead of @constraint? JuMP re-orders some of the operations, but I wouldn’t expect there to be a quantitative difference.