Why do I get an infeasible solution? (JuMP with the Ipopt optimizer)

Can you post the code?

Which solver are you using? JuMP is a convenient way to write models in a solver-agnostic way, not a solver.

2 Likes

I am using Ipopt as the solver… yes, I will post the code now.

using JuMP, Ipopt
using CSV
using DataFrames


include("structPS.jl")
include("basicfunctions.jl")
include("loadflow.jl")

LINES_DATAFRAME= create_dataframe("C:/Users/mayar.madboly/Documents/M A S T E R S S S S S S/chapter_4/work/excel/L3.CSV")
BUSES_NUMBER=3
LINES_NUMBER=3
LINES_ARRAY=create_array(Line,LINES_NUMBER)
set_array(LINES_ARRAY,Line,LINES_DATAFRAME)
Y_BUS=create_ybus(BUSES_NUMBER,LINES_ARRAY)
Y_LINES=get_ylines(BUSES_NUMBER,LINES_ARRAY)
G = real(Y_LINES);
B = imag(Y_LINES);
Nodes_set=[1,2,3]
Gen_set=[1,2,3]
Pd=[0,-4,0]
Qd=[0,-2.5,0]
S=zeros(3,3)
S=[2 2 2; 2 2 2; 2 2 2]

m = JuMP.Model(JuMP.with_optimizer(Ipopt.Optimizer, print_level =0))
# 2.1.Variables
JuMP.@variable(m, 0.1≤ v[i in Nodes_set] ≤ 1.06)
JuMP.@variable(m, -π/4 ≤ δ[i in Nodes_set] ≤ π/4)
JuMP.@variable(m, 0.1 ≤ p[i in Gen_set] ≤ 5)
JuMP.@variable(m, 0.1 ≤ q[i in Gen_set] ≤ 5)
JuMP.@variable(m, pij[i = Nodes_set, j = Nodes_set])
JuMP.@variable(m, qij[i = Nodes_set, j = Nodes_set])
JuMP.@constraint(m, Pg2,(p[2] ==  0.0))  # bus 2 is a PQ bus; there is no generator at bus 2
JuMP.@constraint(m, Qg2,(q[2] ==  0.0))
JuMP.@constraint(m, ReferenceAngle,(δ[1] ==  0.0))

JuMP.@NLconstraint(m, p_line[i in Nodes_set, j in Nodes_set],
     (pij[i,j] == (v[i]^2*G[i,j]-v[i]*v[j]*(G[i,j]*cos(δ[i]-δ[j])+B[i,j]*sin(δ[i]-δ[j])))))

JuMP.@NLconstraint(m, q_line[i in Nodes_set, j in Nodes_set],
      (qij[i,j] == (-1*v[i]^2*B[i,j]-v[i]*v[j]*(G[i,j]*sin(δ[i]-δ[j])-B[i,j]*cos(δ[i]-δ[j])))))

JuMP.@NLconstraint(m, Pnodal[i in Nodes_set],
   (sum(pij[i,j] for j in Nodes_set) == p[i]+Pd[i]))


JuMP.@constraint(m, Qnodal[i in Nodes_set],
  (sum(qij[i,j] for j = Nodes_set) == q[i]+Qd[i]))

JuMP.@NLconstraint(m, Smax[i in Nodes_set, j in Nodes_set],
  pij[i,j]^2 + qij[i,j]^2 ≤ S[i,j]^2)

    #OBJECTIVE
    JuMP.@NLobjective(m,Min,p[1]+p[3])
    JuMP.optimize!(m)
    println("Optimal solutions:")
    println("p1:",JuMP.value(p[1]))
    println("p3:",JuMP.value(p[3]))
    println("p12:",JuMP.value(pij[1,2]))
    println("p21:",JuMP.value(pij[2,1]))
    println("p13:",JuMP.value(pij[1,3]))
    println("p31:",JuMP.value(pij[3,1]))
    println("p23:",JuMP.value(pij[2,3]))
    println("p32:",JuMP.value(pij[3,2]))

status = JuMP.termination_status(m)
println(status)

I will send you the lines array so you do not need the functions at the beginning, okay?

using JuMP, Ipopt

include("structPS.jl")
include("basicfunctions.jl")
include("loadflow.jl")

Y_LINES=[0.0+0.0im  10.0-20.0im  10.0-30.0im; 0.0+0.0im   0.0+0.0im   16.0-32.0im; 0.0+0.0im   0.0+0.0im    0.0+0.0im]
G = real(Y_LINES);
B = imag(Y_LINES);
Nodes_set=[1,2,3]
Gen_set=[1,2,3]
Pd=[0,-4,0]
Qd=[0,-2.5,0]
S=zeros(3,3)
S=[2 2 2; 2 2 2; 2 2 2]

m = JuMP.Model(JuMP.with_optimizer(Ipopt.Optimizer, print_level =0))
# 2.1.Variables
JuMP.@variable(m, 0.1≤ v[i in Nodes_set] ≤ 1.06)
JuMP.@variable(m, -π/4 ≤ δ[i in Nodes_set] ≤ π/4)
JuMP.@variable(m, 0.1 ≤ p[i in Gen_set] ≤ 5)
JuMP.@variable(m, 0.1 ≤ q[i in Gen_set] ≤ 5)
JuMP.@variable(m, pij[i = Nodes_set, j = Nodes_set])
JuMP.@variable(m, qij[i = Nodes_set, j = Nodes_set])
JuMP.@constraint(m, Pg2,(p[2] ==  0.0))  # bus 2 is a PQ bus; there is no generator at bus 2
JuMP.@constraint(m, Qg2,(q[2] ==  0.0))
JuMP.@constraint(m, ReferenceAngle,(δ[1] ==  0.0))

JuMP.@NLconstraint(m, p_line[i in Nodes_set, j in Nodes_set],
     (pij[i,j] == (v[i]^2*G[i,j]-v[i]*v[j]*(G[i,j]*cos(δ[i]-δ[j])+B[i,j]*sin(δ[i]-δ[j])))))

JuMP.@NLconstraint(m, q_line[i in Nodes_set, j in Nodes_set],
      (qij[i,j] == (-1*v[i]^2*B[i,j]-v[i]*v[j]*(G[i,j]*sin(δ[i]-δ[j])-B[i,j]*cos(δ[i]-δ[j])))))

JuMP.@NLconstraint(m, Pnodal[i in Nodes_set],
   (sum(pij[i,j] for j in Nodes_set) == p[i]+Pd[i]))


JuMP.@constraint(m, Qnodal[i in Nodes_set],
  (sum(qij[i,j] for j = Nodes_set) == q[i]+Qd[i]))

JuMP.@NLconstraint(m, Smax[i in Nodes_set, j in Nodes_set],
  pij[i,j]^2 + qij[i,j]^2 ≤ S[i,j]^2)

    #OBJECTIVE
    JuMP.@NLobjective(m,Min,p[1]+p[3])
    JuMP.optimize!(m)
    println("Optimal solutions:")
    println("p1:",JuMP.value(p[1]))
    println("p3:",JuMP.value(p[3]))
    println("p12:",JuMP.value(pij[1,2]))
    println("p21:",JuMP.value(pij[2,1]))
    println("p13:",JuMP.value(pij[1,3]))
    println("p31:",JuMP.value(pij[3,1]))
    println("p23:",JuMP.value(pij[2,3]))
    println("p32:",JuMP.value(pij[3,2]))

status = JuMP.termination_status(m)
println(status)

I suggest you try to simplify your problem as much as possible. It’s hard to offer advice on a big model with included files. Take a read of the first post in Please read: make it easier to help you - #19.

You should also try to provide a feasible starting point using the start keyword:

@variable(model, x, start = 1.0)

Moreover, instead of having fixed constraints p[2] == 0, you can use fix(p[2], 0.0; force = true).
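To make both suggestions concrete, here is a minimal sketch (bounds copied from the model above, start values hypothetical) of setting start values and fixing a variable. Note that in the posted model, p has a lower bound of 0.1, so the constraint p[2] == 0 conflicts with that bound; fix deletes the variable's bounds first, which avoids exactly that kind of infeasibility.

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
Nodes_set = [1, 2, 3]

# Start values (a "flat start" near a plausible operating point) are set
# per-variable with the start keyword:
@variable(model, 0.1 <= v[i in Nodes_set] <= 1.06, start = 1.0)    # voltages near 1 p.u.
@variable(model, -pi/4 <= δ[i in Nodes_set] <= pi/4, start = 0.0)  # flat angles
@variable(model, 0.1 <= p[i in Nodes_set] <= 5, start = 1.0)

# Instead of @constraint(model, p[2] == 0), fix the variable.
# force = true is required because p[2] already has bounds; fix then
# deletes the 0.1/5 bounds, so they no longer conflict with p[2] = 0.
fix(p[2], 0.0; force = true)
```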

It would be helpful to see the Ipopt log as well.

Finally, since you’re solving OPF type problems, you should check out GitHub - lanl-ansi/PowerModels.jl: A Julia/JuMP Package for Power Network Optimization

3 Likes

At first, I had to use the included files to create Y_LINES, but then I wrote it out explicitly and removed the included files.
I provided a starting point, but it did not work though.
I came across PowerModels, but since I want to make my own formulation, I am trying not to use it.

Is there a feasible solution for your problem data? Here is a version I got to solve, but the solution has nonzero entries for p[2] and q[2].

Y_LINES=[0.0+0.0im  10.0-20.0im  10.0-30.0im; 0.0+0.0im   0.0+0.0im   16.0-32.0im; 0.0+0.0im   0.0+0.0im    0.0+0.0im]
G = real(Y_LINES);
B = imag(Y_LINES);
Nodes_set=[1,2,3]
Gen_set=[1,2,3]
Pd=[0,-4,0]
Qd=[0,-2.5,0]
S=zeros(3,3)
S=[2 2 2; 2 2 2; 2 2 2]
m = Model(Ipopt.Optimizer)
@variables(m, begin
    0.1 ≤ v[i in Nodes_set] ≤ 1.06
    -π/4 ≤ δ[i in Nodes_set] ≤ π/4
end)
fix(δ[1], 0.0; force = true)
@NLexpressions(m, begin
    pij[i in Nodes_set, j in Nodes_set],
        v[i]^2 * G[i, j] - v[i] * v[j] * (G[i, j] * cos(δ[i] - δ[j]) + B[i, j] * sin(δ[i] - δ[j]))
    qij[i in Nodes_set, j in Nodes_set],
        -v[i]^2 * B[i, j] - v[i] * v[j] * (G[i, j] * sin(δ[i] - δ[j]) - B[i, j] * cos(δ[i] - δ[j]))
    p[i in Nodes_set], sum(pij[i, j] for j in Nodes_set) - Pd[i]
    q[i in Nodes_set], sum(qij[i, j] for j in Nodes_set) - Qd[i]
end)
@NLconstraints(m, begin
    Smax[i in Nodes_set, j in Nodes_set], pij[i,j]^2 + qij[i,j]^2 ≤ S[i,j]^2
    # p[2] == 0
    # q[2] == 0
end)
@NLobjective(m, Min, p[1]+p[3])
optimize!(m)

What does it mean to add these @NLexpressions? Is it equivalent to adding @NLconstraints?

I did simulate my test case, and it does have an operating point. @odow

Try using the operating point as the start value.

Here’s the documentation: Nonlinear Modeling · JuMP. @NLexpression is just like @expression: it isn’t a constraint, just a named collection of terms.
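A minimal sketch of that distinction (toy model, not from the thread): an @NLexpression adds nothing to the model by itself; it only matters once it appears inside an @NLconstraint or @NLobjective.

```julia
using JuMP, Ipopt

model = Model(Ipopt.Optimizer)
set_silent(model)
@variable(model, 0.5 <= x <= 2, start = 1.0)

# A named nonlinear term; no constraint is added here.
@NLexpression(model, y, x^2 + sin(x))

# The expression takes effect only when it is used:
@NLconstraint(model, y <= 3)   # equivalent to x^2 + sin(x) <= 3
@NLobjective(model, Min, y)
optimize!(model)
```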

1 Like

I managed to get a local solution. I tried setting the starting point and running again, and I got the same solution.
Ipopt guarantees global convergence, and this showed in my experiments with and without a starting point.
I need to reach an optimal solution:
- should I change the solver, or
- should I change the convergence options?

@odow

Ipopt guarantees global convergence

This is only true for convex problems, and your problem is non-convex.
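For a non-convex model, one cheap heuristic (not mentioned above; shown here on a toy objective, with hypothetical start points) is to re-solve from several start values and keep the best local solution:

```julia
using JuMP, Ipopt

# Multistart sketch: Ipopt is a local solver, so different start values
# can land in different local optima of a non-convex problem.
function multistart(starts)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, -3 <= x <= 3)
    @NLobjective(model, Min, x^4 - 3x^2 + x)  # has two local minima
    best_obj, best_x = Inf, NaN
    for x0 in starts
        set_start_value(x, x0)
        optimize!(model)
        if termination_status(model) == MOI.LOCALLY_SOLVED &&
           objective_value(model) < best_obj
            best_obj, best_x = objective_value(model), value(x)
        end
    end
    return best_obj, best_x
end

best_obj, best_x = multistart([-2.0, 0.0, 2.0])
```

The same loop applies to the OPF model; it improves your odds of finding the global optimum, but it still proves nothing.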

@ccoffrin is the expert when it comes to Ipopt and OPF. He might have some suggestions.

2 Likes

Okay, so I need to change my solver now.
@ccoffrin Can you please suggest other solvers that I should look into?

The topic of proving global optimality of AC-OPF is a bit of a long story with no simple answer at this moment.

The short answer I can give you is that, for the polar form of the equations you have put in this post, Couenne is the only solver I know of that will give you optimality proofs off the shelf. However, last I checked, this solver cannot provide proofs for networks above 30 buses within a day or so.

If you switch to the rectangular form of the power flow equations, you have more solver options, for example SCIP, BARON, Alpine, and Gurobi. Last I checked, these were getting stuck around 50-100 buses.

As far as I know, the best approach for global optimality proofs is developing upper bounds with Ipopt and using Optimization Based Bound Tightening + Convex Relaxations to develop tight lower bounds on the objective value. Here is a recent study about this approach and the implementation is available here.

I would note, however, that if your primary need is just high-quality solutions to the OPF problem, Ipopt is a relatively safe bet: in the vast majority of cases the local solution it finds is globally optimal, or nearly so.

5 Likes

I’ll also note that if your primary interest is to debug an infeasibility issue in your OPF formulation, convex relaxations are a great tool for this. I discuss that approach a little bit, around halfway through, in this video.

2 Likes

What is the advantage of fix(x, 0.0; force = true) over x == 0 ?

fix changes variable bounds. @constraint(model, x == 0) adds an affine constraint.
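A minimal sketch of that difference (toy bounds, assuming the current JuMP API):

```julia
using JuMP

model = Model()

# fix replaces the variable's bounds with a fixing constraint:
@variable(model, 0.9 <= x <= 1.0)
fix(x, 0.0; force = true)      # deletes the 0.9/1.0 bounds, then fixes x = 0

# An equality constraint is added alongside the bounds, which stay in place:
@variable(model, 0.9 <= y <= 1.0)
@constraint(model, y == 0.0)   # y == 0 now conflicts with 0.9 <= y
```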

2 Likes

EDIT: the question is answered in the documentation:

If the variable already has variable bounds and force=false , calling fix will throw an error. If force=true , existing variable bounds will be deleted, and the fixing constraint will be added. Note a variable will have no bounds after a call to unfix .

=====

In that case,

@variable(model, x)
@constraint(model, 0.9 <= x <= 1)
...
@constraint(model, x == 0)

would result in an infeasible model, while

@variable(model, x)
@constraint(model, 0.9 <= x <= 1)
...
fix(x, 0.0, force=true)

would not?

1 Like

Correct. I didn’t read clearly enough.

No, both would be infeasible. @constraint(model, 0.9 <= x <= 1) adds a ScalarAffineFunction-in-Interval constraint, and fix only deletes variable bounds, not constraints.