Difference in output with change in variable structure

I am solving a day-ahead optimization problem. I am using the same set of variables and constraints and the same objective function in two cases, but I am getting different outcomes. The first case considers a single hydro plant with Nk units. The second generalizes the code to account for multiple hydro plants, each with a different number of units. So everything is the same, with the same input. My question is: can changes in the data structure of the variables result in a change in the optimization result?
The following is the first case:

@variables(PO, begin
    0 ≤ Ph[h=1:Nk, t=1:T] ≤ Phmax[h]
    Phbase[t=1:T] >= 0
    Phbuffer[t=1:T] >= 0
    xh[1:T], Bin
end)
@constraints(PO, begin
        Koynabuffer[t=1:T], sum(Ph[h,t] for h in 1:Nk) == Phbase[t] + Phbuffer[t]
        EAvail_Base, sum(Phbase[t] for t in 1:T) * δₜ <= Eh_base
        EAvail_Buffer, sum(Phbase[t] + Phbuffer[t] for t in 1:T) * δₜ <= Eh_base + Eh_buffer
        MinH_Koyna[t=1:T], sum(Ph[h, t] for h = 1:Nk) ≥ KoynaP_min
        PAvail_Koyna[t=1:T], sum(Ph[h, t] for h = 1:Nk) ≤ (Koyna_DC - KoynaP_min) * xh[t] + KoynaP_min
        # Koyna Ramp Constraints
        RampUpKoyna[h=1:Nk, t=2:T], Ph[h, t] - Ph[h, t-1] ≤ RU_Koyna[h] * AF_hydro[h] * δₜm
        RampDnKoyna[h=1:Nk, t=2:T], Ph[h, t] - Ph[h, t-1] ≥ -RU_Koyna[h] * AF_hydro[h] * δₜm
end)

The second case is as follows:

@variables(PO, begin
    0 ≤ Ph[p=1:N_Hydro_Plants, k=1:Nk[p], t=1:T] ≤ Hydro_IC_unit[p][k]  # Power per unit
    Phbase[p=1:N_Hydro_Plants, t=1:T] >= 0                              # Base TMC per plant
    Phbuffer[p=1:N_Hydro_Plants, t=1:T] >= 0
    xh[p=1:N_Hydro_Plants, t=1:T], Bin
end)
@constraints(PO, begin
        # Plant-level Hydro Generation Constraints (like original)
        # Total plant power = base + buffer
        Hydrobuffer[p=1:N_Hydro_Plants, t=1:T], sum(Ph[p,k,t] for k in 1:Nk[p]) == Phbase[p,t] + Phbuffer[p,t]

        # Plant-level energy availability constraints
        EAvail_Base[p=1:N_Hydro_Plants], sum(Phbase[p,t] for t in 1:T) * δₜ <= Eh_base[p]
        EAvail_Buffer[p=1:N_Hydro_Plants], sum(Phbase[p,t] + Phbuffer[p,t] for t in 1:T) * δₜ <= Eh_base[p] + Eh_buffer[p]

        # Plant-level minimum power constraint (total plant generation >= TM)
        MinH_Hydro[p=1:N_Hydro_Plants, t=1:T], sum(Ph[p,k,t] for k in 1:Nk[p]) >= HydroP_min[p]

        # Plant-level capacity constraint using plant binary (original formulation)
        # If xh=0: Ph_total <= TM, if xh=1: Ph_total <= DC
        PAvail_Hydro[p=1:N_Hydro_Plants, t=1:T], sum(Ph[p,k,t] for k in 1:Nk[p]) <= (Hydro_DC[p] - HydroP_min[p]) * xh[p,t] + HydroP_min[p]

        # Unit-level Hydro Ramp Constraints (include AF factor as in original)
        RampUpHydro[p=1:N_Hydro_Plants, k=1:Nk[p], t=2:T], Ph[p,k,t] - Ph[p,k,t-1] <= RU_Hydro_unit[p][k] * AF_Hydro_unit[p][k]
        RampDnHydro[p=1:N_Hydro_Plants, k=1:Nk[p], t=2:T], Ph[p,k,t] - Ph[p,k,t-1] >= -RD_Hydro_unit[p][k] * AF_Hydro_unit[p][k]

end)

In a non-convex problem, where the solver finds only a local optimum, it’s conceivable that a change of variables will lead the algorithm to converge to a different local optimum.

A good way to check whether this is happening is to pass your previous local optimum as the starting point for the optimization in the new variable structure. Local optimization should then ordinarily find the same optimum. If it doesn’t, then your change of variables probably has a bug that accidentally changed the problem.
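In JuMP, this check can be sketched with `set_start_value`. This is a minimal sketch, not code from the post: the second-model names (`PO2`, `Ph2`, `xh2`) and the assumption that plant `p = 1` in the multi-plant model corresponds to the original single plant are illustrative.

```julia
using JuMP

# Solve the original single-plant model and record its solution.
optimize!(PO)
Ph_star = value.(Ph)      # Nk × T matrix of per-unit power setpoints
xh_star = value.(xh)

# Seed the multi-plant model with that solution; plant p = 1 is
# assumed to be the plant from the single-plant formulation, and
# Nk here is the per-plant unit-count vector of the second model.
for k in 1:Nk[1], t in 1:T
    set_start_value(Ph2[1, k, t], Ph_star[k, t])
end
for t in 1:T
    set_start_value(xh2[1, t], xh_star[t])
end

optimize!(PO2)
```

If the two formulations really encode the same problem, the second solve started from this point should report the same objective value.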

Hi @Gagan_Meena,

It looks like your problem is a MILP. There are two things that could be happening when you change the order of variables or constraints:

  1. you may find different primal solutions with slightly different objective values, because most solvers terminate with a non-zero MIP gap. The objective values should be within, say, 0.1% of each other (the default tolerance depends on the solver).
  2. you may find a different optimal primal solution. In that case, the objective values should be identical. (That is, there may be multiple optimal primal solutions; this is sometimes called dual degeneracy.)

Getting a different primal solution may happen even if you don’t change the order of variables and constraints, but simply change computers or change some of the solver settings. In practice, solvers provide very little control over which primal solution is returned; they guarantee only that the returned solution is optimal (within tolerance).
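One practical way to make the two runs comparable is to tighten the solver’s relative MIP gap and then compare objective values rather than primal solutions. A sketch, assuming Gurobi’s attribute name (other solvers use different names, e.g. `"mip_rel_gap"` for HiGHS):

```julia
using JuMP

# Tighten the relative MIP gap so both formulations are solved to the
# same optimality tolerance before their objectives are compared.
set_attribute(PO, "MIPGap", 1e-6)   # Gurobi's name for the relative gap
optimize!(PO)

println(objective_value(PO))   # compare this value across formulations
println(relative_gap(PO))      # gap actually achieved at termination
```

With a tight gap, any remaining difference in objective values between the two formulations points to a modeling discrepancy rather than solver tolerance.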

I’m not sure the term “local optimum” is well defined for a MILP (I think it applies only to continuous optimization, i.e., nonlinear programming). In the MILP context we usually say “suboptimal”.

But the warm-start idea for the second solve still applies.

I don’t think Steven noticed it was a MIP.
