I was wondering if someone could explain the difference between fixing and constraining a variable in a little more detail than the JuMP documentation does.

My understanding is that `fix(x, value; force=true)` clamps the variable to the given value and removes any way for it to change, presumably by replacing the variable's existing bounds with an equality at that value.

If you instead add an equality `@constraint` on the variable, what is the difference? The variable's bounds could sit farther away, but as long as the system plays nice, that shouldn't matter: the optimal result will still put the variable at your set value.

These two things don't seem to be equivalent, though, so can someone tell me when I should be using one or the other?
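To make the question concrete, here is the behavior I can observe in a throwaway model (a minimal sketch, nothing from my actual problem; `x` is just a placeholder variable):

```julia
using JuMP

model = Model()
@variable(model, 0.0 <= x <= 20.0)

# Fixing deletes the variable's separate bounds and records a single
# equality bound in their place.
fix(x, 1.0; force=true)
@show is_fixed(x)          # true
@show has_lower_bound(x)   # false: the original bounds are gone
@show has_upper_bound(x)   # false

# Undo the fix and use an equality constraint instead:
unfix(x)
set_lower_bound(x, 0.0)
set_upper_bound(x, 20.0)
@constraint(model, x == 1.0)
@show is_fixed(x)          # false: the bounds and the constraint coexist
@show has_upper_bound(x)   # true
```

So the queryable state of the variable really is different in the two cases, even though both should pin it to the same value at the optimum.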
The problem I'm working on (Julia 1.1.0, JuMP.jl 0.19.2, Ipopt.jl 0.5.4, Ipopt 3.12.13) has this variable `Tₐₜ`:
```julia
N = 60
@variable(model, 0.0 <= Tₐₜ[1:N] <= 20.0)
...
@constraint(model, [i = 1:N-1], Tₐₜ[i+1] == Tₐₜ[i] + ξ₁*((FORC[i+1] - λ*Tₐₜ[i]) - ξ₃*(Tₐₜ[i] - Tₗₒ[i])))
```
The initial value of `Tₐₜ` is known, so I wish to set it:

```julia
JuMP.fix(Tₐₜ[1], tatm₀; force=true)
```
On a normal run, this works as I’d expect. A solution is found without problems.
Now, I wish to tighten the upper bound so that `Tₐₜ[i] <= 2.0`:
```julia
for i in 2:N
    JuMP.set_upper_bound(Tₐₜ[i], 2.0)
end
```
We run from `2:N` since the first value is fixed. Without getting into too much detail about the rest of my system: Ipopt can't solve this problem. It becomes concave and ultimately gets stuck in some local well, iterating forever in the same place:
```
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0 -0.0000000e+00 1.75e+04 1.00e+00  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
 250r-3.0806001e+04 1.46e+04 6.84e+03   2.2 2.98e+03    -  3.98e-02 1.88e-03f  1
 500 -1.8223300e+04 3.99e+03 1.02e+02  -1.0 5.18e+05 -8.2 2.38e-03 4.56e-05h  7
 750r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1250r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1500r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
1750r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2250r-4.7288246e+13 4.08e+09 5.77e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2500r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
2750r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
3000r-4.7288246e+13 4.08e+09 5.76e-10  -4.2 1.91e-03    -  1.00e+00 1.00e+00h  1
```
(When `lg(mu)` << 0 the problem is concave and Ipopt is going to have a hard time, AFAIK.)
Instead, if I `unfix` the value, add an equality constraint, and then bound the first element as well:

```julia
JuMP.unfix(Tₐₜ[1])
JuMP.set_lower_bound(Tₐₜ[1], 0.0)  # same as the original lower bound
@constraint(model, Tₐₜ[1] == tatm₀)

for i in 1:N  # set the first element too this time
    JuMP.set_upper_bound(Tₐₜ[i], 2.0)
end
```
This problem solves to optimal without complaint.
What's going on here? Should I just avoid using `fix` altogether? Am I using `fix` in the wrong context? Any additional information about these differences would be greatly appreciated.