Bilevel JuMP

How do I define the objective function correctly in JuMP? Is there a problem with my definition? In addition, if the objective function is composed of multiple sub-functions and cannot be represented by a single formula, the sub-functions need to be defined first; how should I express them?
Here is my source code

using JuMP, BilevelJuMP, Cbc

model = BilevelModel(Cbc.Optimizer, mode = BilevelJuMP.SOS1Mode())
T=1;
IMG=3;
a_m1=0.043;
b_m1=20;
c_m1=0;
a_m2=0.075;
b_m2=17.5;
c_m2=0;
PGm1min=10;
PGm1max=600;
PGm2min=10;
PGm2max=150;
PGD_EXmin=-150; # lower limit of tie-line exchange power
PGD_EXmax=150; # upper limit of tie-line exchange power
Plpgmin=0;
Plpgmax=50;

Cb=[0.47 0.47 0.47];
Cs=[0.43 0.43 0.43];
Pload=[420.13];

PPV=[0 0 0];
PWT=[40 40 40];
PDL=[ 0 10 15];
PES_BT_D=[45 45 48];
PES_BT_C=[0 0 0];
PES_CD_D=[0 0 5];
PES_CD_C=[0 0 0];
PES_HC_D=[0 0 10];
PES_HC_C=[0 0 0];
PMTmin=0;
PMTmax=60*ones(T,3)
@variable(Lower(model), PMT)

@variable(Upper(model), PGm1[1,1])
@variable(Upper(model), PGm2[1,2])
@variable(Upper(model), PGD_EX[1,3])
@variable(Upper(model), Plpg[1,1])

@objective(Upper(model), Min, a_m1*PGm1^2+sum(a_m2*PGm2^2)-sum((Cb+Cs).*PGD_EX))

@constraints(Upper(model), begin
PGm1min<=PGm1<=PGm1max
PGm2min<=PGm2<=PGm2max
PGm1+sum(PGm2,2)+Plpg==sum(PGD_EX,2)+Pload
PGD_EXmin<=PGD_EX<=PGD_EXmax
Plpgmin<=Plpg<=Plpgmax
end)

@objective(Lower(model), Min, 6*PMT)
@constraints(Lower(model), begin
PGD_EX+PWT+PPV+PMT+PES_BT_D==PDL+PES_BT_C
PMTmin<=PMT<=PMTmax
end)

optimize!(model)

objective_value(model) # = 3 * (3.5 * 8/15) + 8/15 # = 6.13…
value(PGD_EX) # = 3.5 * 8/15 # = 1.86…
value(PMT)
value(PGm1)
value(PGm2) # = 8/15 # = 0.53.

[screenshot of the sub-function formulas]

Please read the first post of "Please read: make it easier to help you" to learn how to format your code correctly.

There are multiple issues.

First, you seem to want a quadratic objective with PGm1^2, but you are using Cbc, which is a linear solver.

Second, PGm1 is a 1x1 matrix, so you need to index it as PGm1[1, 1]. Do you just want it to be a scalar? Same goes for PGm2.

Third, it looks like you want PGD_EX to be a vector? Use @variable(Upper(model), PGD_EX[1:3]).
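For example, something like this (just a sketch; adjust the dimensions to what you actually intend):

@variable(Upper(model), PGm1)         # a single scalar variable
@variable(Upper(model), PGm2[1:2])    # a vector of two variables (or drop the index for a scalar)
@variable(Upper(model), PGD_EX[1:3])  # a vector of three variables
@variable(Upper(model), Plpg)         # a single scalar variable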

I suggest you read the JuMP documentation and try solving some simpler models before moving to bilevel problems: Variables · JuMP


@dty It's very nice that you are interested in BilevelJuMP.

@odow just left excellent suggestions; I strongly recommend following them.

Starting simpler is the way to go. If you are having trouble, you might start with single-level problems directly in JuMP; it is more foolproof and mature.
Once the basic model works in JuMP, porting it to BilevelJuMP should be easy.
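For example, here is a minimal single-level sketch of the upper problem in plain JuMP, using the numbers from your post (the solver is just an illustration; any solver that supports quadratic objectives works):

using JuMP, Ipopt  # assumes Ipopt, or another QP-capable solver, is installed

m = Model(Ipopt.Optimizer)
@variable(m, 10 <= PGm1 <= 600)
@variable(m, 10 <= PGm2[1:2] <= 150)
@variable(m, -150 <= PGD_EX[1:3] <= 150)
@variable(m, 0 <= Plpg <= 50)
# power balance with Pload = 420.13 from your post
@constraint(m, PGm1 + sum(PGm2) + Plpg == sum(PGD_EX) + 420.13)
# quadratic generation cost minus the exchange term, with Cb + Cs = 0.9
@objective(m, Min, 0.043 * PGm1^2 + 0.075 * sum(PGm2 .^ 2) - 0.9 * sum(PGD_EX))
optimize!(m)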

Last, but not least, if you can follow "PSA: make it easier to help you", it will be much easier to help.

Hello, I’m sorry to disturb you again!
Here is my bilevel optimization code

using JuMP, BilevelJuMP, Cbc
using CSV

model = BilevelModel(Cbc.Optimizer, mode = BilevelJuMP.SOS1Mode())


T=24
a_m1=0.043;
b_m1=20;
c_m1=0;
a_m2=0.075;
b_m2=17.5;
c_m2=0;
PGm1min=10
PGm1max=600
PGm2min=10
PGm2max=150
PGD_EXmin=-150
PGD_EXmax=150
Plpgmin=0;
Plpgmax=50;

Cb=CSV.read("E:\\Julia程序\\Cb.csv")
Cs=CSV.read("E:\\Julia程序\\Cs.csv")
Pload=CSV.read("E:\\Julia程序\\Pload.csv")
PPV=CSV.read("E:\\Julia程序\\PPV.csv")
PWT=CSV.read("E:\\Julia程序\\PWT.csv")
PDL=CSV.read("E:\\Julia程序\\PDL.csv")

Cb=convert(Array,Cb)
Cs=convert(Array,Cs)
Pload=convert(Array,Pload)
PPV=convert(Array,PPV)
PWT=convert(Array,PWT)
PDL=convert(Array,PDL)

@variable(Lower(model), PMT[1:T,1:3])

@variable(Upper(model), PGD_EX[1:T,1:3])
@variable(Upper(model), PGm1[1:T,1:1])
@variable(Upper(model), PGm2[1:T,1:2])
@variable(Upper(model), Plpg[1:T,1:1])
@objective(Upper(model), Min, a_m1*sum(PGm1.*PGm1)+b_m1*sum(PGm1)+c_m1+a_m2*sum(sum(PGm2.*PGm2))+b_m2*sum(sum(PGm2))+c_m2+sum((Cb+Cs).*PGD_EX))

@constraints(Upper(model), begin


    PGm1+2*sum(PGm2,dims=2)+Plpg.==sum(PGD_EX,dims=2)+Pload
    PGm1min .<=PGm1 .<= PGm1max
    PGm2min .<=PGm2 .<= PGm2max
    0 .<=Plpg .<=50
    -150 .<=PGD_EX .<=150
end)
@objective(Lower(model), Min, sum(sum(0.3*PMT)))
@constraints(Lower(model), begin
    PGD_EX+PWT+PPV+PMT.==PDL
end)

optimize!(model)

objective_value(model)

I think solvers such as Gurobi and Cbc can solve quadratic programming (QP) problems, but the errors are as follows.
In my model, the upper-level objective function is quadratic and the constraints are linear, so it is a QP problem.
So how can I change the format to fit the way the BilevelJuMP package expects it to be written?

@joaquimg

Cbc is a linear solver. It does not support quadratic objectives.


As @odow said, you cannot use Cbc with a quadratic objective (in the upper-level problem).
You can use Gurobi.
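Something like this (a sketch; it assumes Gurobi.jl is installed and licensed):

using JuMP, BilevelJuMP, Gurobi

# Gurobi supports both the quadratic upper-level objective and the SOS1 constraints used by this mode
model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.SOS1Mode())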

Thank you very much for your correction

@odow @joaquimg
First of all, thank you very much for your correction
I’ve changed to the Gurobi solver, but there are still the following errors. I tried simply running a program in plain JuMP and there is no error. However, in BilevelJuMP, the following errors appear for the lower-level inequality constraints in lines 77-79. I don’t know where I went wrong (maybe my variable definitions do not conform to the required form?). Please correct me.

using JuMP, BilevelJuMP, Gurobi
using CSV
using XLSX

model = BilevelModel(Gurobi.Optimizer, mode = BilevelJuMP.SOS1Mode())


T=24
a_m1=0.043;
b_m1=20;
c_m1=0;
a_m2=0.075;
b_m2=17.5;
c_m2=0;
PGm1min=10
PGm1max=600
PGm2min=10
PGm2max=150
PGD_EXmin=-150
PGD_EXmax=150
Plpgmin=0;
Plpgmax=50;
PMTmin=0
PMTmax=60


#=Cs=XLSX.readdata("E:\\Julia程序\\Cs.xlsx","Sheet1","A1:C24")
Cb=XLSX.readdata("E:\\Julia程序\\Cb.xlsx","Sheet1","A1:C24")
Pload=XLSX.readdata("E:\\Julia程序\\Pload.xlsx","Sheet1","A1:A24")
PWT=XLSX.readdata("E:\\Julia程序\\PWT.xlsx","Sheet1","A1:C24")
PPV=XLSX.readdata("E:\\Julia程序\\PPV.xlsx","Sheet1","A1:C24")
PDL=XLSX.readdata("E:\\Julia程序\\PDL.xlsx","Sheet1","A1:C24")=#
Cb=CSV.read("E:\\Julia程序\\Cb.csv")
Cs=CSV.read("E:\\Julia程序\\Cs.csv")
Pload=CSV.read("E:\\Julia程序\\Pload.csv")
PPV=CSV.read("E:\\Julia程序\\PPV.csv")
PWT=CSV.read("E:\\Julia程序\\PWT.csv")
PDL=CSV.read("E:\\Julia程序\\PDL.csv")
PCL=CSV.read("E:\\Julia程序\\PCL.csv")
PHL=CSV.read("E:\\Julia程序\\PHL.csv")

Cb=convert(Array,Cb)
Cs=convert(Array,Cs)
Pload=convert(Array,Pload)
PPV=convert(Array,PPV)
PWT=convert(Array,PWT)
PDL=convert(Array,PDL)

@variable(Lower(model), PMT[1:T,1:3])
@variable(Lower(model), PES_BT_D[1:T,1:3])
@variable(Lower(model), PES_BT_C[1:T,1:3])

@variable(Upper(model), PGD_EX[1:T,1:3])
@variable(Upper(model), PGm1[1:T,1:1])
@variable(Upper(model), PGm2[1:T,1:2])
@variable(Upper(model), Plpg[1:T,1:1])
@objective(Upper(model), Min, a_m1*sum(PGm1.*PGm1)+b_m1*sum(PGm1)+c_m1+a_m2*sum(sum(PGm2.*PGm2))+b_m2*sum(sum(PGm2))+c_m2+sum((Cb+Cs).*PGD_EX))

@constraints(Upper(model), begin


    PGm1+2*sum(PGm2,dims=2)+Plpg.==sum(PGD_EX,dims=2)+Pload
    10 .<=PGm1 .<= 600
    10 .<=PGm2 .<= 150
    0 .<=Plpg .<=50
    -150 .<=PGD_EX .<=150
end)
@objective(Lower(model), Min, sum(sum(0.025*PPV+0.029*PWT+0.025*2*PMT+0.028*(PES_BT_D+PES_BT_C))))
@constraints(Lower(model), begin
    PGD_EX+PWT+PPV+2*PMT+PES_BT_D .== PDL+PES_BT_C
    0 .<=PMT .<= 60
    0 .<=PES_BT_D .<=80
    0 .<=PES_BT_C .<=40
end)

optimize!(model)

objective_value(model)

In the figure, I give the error information and mark the error lines (the inequality constraints in the lower level). When I comment them out, the problem can run.

Please take another read of the PSA. In particular, post the text of an error instead of screenshots to make it easier to read.

The error message tells you what is wrong. Interval constraints aren’t supported (yet).

You can add variable bounds via the @variable macro:
@variable(model, 1 <= x <= 2)
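For your model that would look something like this (a sketch using the bounds from your own constraints):

@variable(Lower(model), 0 <= PMT[1:T, 1:3] <= 60)
@variable(Lower(model), 0 <= PES_BT_D[1:T, 1:3] <= 80)
@variable(Lower(model), 0 <= PES_BT_C[1:T, 1:3] <= 40)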

@odow sorry, I’ve pasted the contents of the error message here.

LoadError: Constraints of function MathOptInterface.ScalarAffineFunction{Float64} in the Set MathOptInterface.Interval{Float64} are not implemented
Stacktrace:
 [1] error(::String, ::Type{T} where T, ::String, ::Type{T} where T, ::String) at .\error.jl:42
 [2] supported_constraints(::Array{Tuple{DataType,DataType},1}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\supported.jl:9
 [3] dualize(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::Dualization.DualProblem{Float64,Dualization.DualizableModel{Float64}}, ::Dualization.DualNames, ::Array{MathOptInterface.VariableIndex,1}, ::Bool) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\dualize.jl:27
 [4] #dualize#1 at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\dualize.jl:8 [inlined]
 [5] build_bilevel(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::Dict{MathOptInterface.VariableIndex,MathOptInterface.VariableIndex}, ::Array{MathOptInterface.VariableIndex,1}, ::BilevelJuMP.SOS1Mode{Float64}, ::Dict{MathOptInterface.VariableIndex,MathOptInterface.ConstraintIndex}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\moi.jl:276
 [6] optimize!(::BilevelModel; lower_prob::String, upper_prob::String, bilevel_prob::String, solver_prob::String) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\jump.jl:954
 [7] optimize!(::BilevelModel) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\jump.jl:919
 [8] top-level scope at untitled-af4ea65fb7eb9c3d3d30fce1a8ca4a82:66
in expression starting at untitled-af4ea65fb7eb9c3d3d30fce1a8ca4a82:66

For the same kind of constraint, I did not get an error in the upper level, for example the upper and lower bound inequality constraints on PGm1 and PGm2. Why does the lower level have this problem? In addition, in @variable you can only use <=, not .<=, but the effect is the same, right? The bounds apply to all variables in the matrix?

Why does the lower level have this problem?

Because it tries to dualize the lower-level problem, and as the message says, ScalarAffineFunction-in-Interval constraints are not implemented.
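As for your other question: yes, in @variable you use the plain <= syntax (not .<=), and the bounds apply elementwise to every variable in the container, for example:

@variable(model, 0 <= x[1:24, 1:3] <= 60)  # every x[t, j] gets lower bound 0 and upper bound 60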

Note that the following are not equivalent. The first sets variable bounds:

@variable(model, 0 <= x <= 1)

while this one adds a row to the constraint matrix and leaves the variable bounds at -Inf and Inf:

@variable(model, x)
@constraint(model, 0 <= x <= 1)

Thank you! I should use the first method you mentioned to constrain all variables in the matrix.

@joaquimg @odow
I put the lower-level variable bounds in @variable, which handles most of the lower-level constraints.
But for constraints that need a for loop (such as ramping constraints), there are new errors. From the JuMP.jl manual I only know how to write a for loop in @constraint; I don’t know how to write a for loop in @variable.
I show the error lines below.

T=24
@variable(Lower(model), 0 <=PMT[1:T,1:3] <= 60)
PMT0=[30 30 30]
PMT1=reshape(PMT[1,:],1,:)
@variable(Lower(model), -30 <= (PMT1-PMT0) <= 30)
@variable(Lower(model),-30<=diff(PMT)<=30)

The error report is as follows

LoadError: MethodError: no method matching iterate(::typeof(diff))
Closest candidates are:
  iterate(::Core.SimpleVector) at essentials.jl:603
  iterate(::Core.SimpleVector, ::Any) at essentials.jl:603
  iterate(::ExponentialBackOff) at error.jl:253
  ...
Stacktrace:
 [1] _piterate at .\iterators.jl:963 [inlined]
 [2] iterate at .\iterators.jl:971 [inlined]
 [3] iterate(::JuMP.Containers.VectorizedProductIterator{Tuple{typeof(diff),Array{BilevelJuMP.BilevelVariableRef,2}}}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\JuMP\YXK4e\src\Containers\vectorized_product_iterator.jl:62
 [4] iterate at .\generator.jl:44 [inlined]
 [5] collect(::Base.Generator{JuMP.Containers.VectorizedProductIterator{Tuple{typeof(diff),Array{BilevelJuMP.BilevelVariableRef,2}}},JuMP.Containers.var"#28#29"{var"#55#56"}}) at .\array.jl:665
 [6] map(::Function, ::JuMP.Containers.VectorizedProductIterator{Tuple{typeof(diff),Array{BilevelJuMP.BilevelVariableRef,2}}}) at .\abstractarray.jl:2098
 [7] container(::Function, ::JuMP.Containers.VectorizedProductIterator{Tuple{typeof(diff),Array{BilevelJuMP.BilevelVariableRef,2}}}, ::Type{JuMP.Containers.DenseAxisArray}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\JuMP\YXK4e\src\Containers\container.jl:85
 [8] container(::Function, ::JuMP.Containers.VectorizedProductIterator{Tuple{typeof(diff),Array{BilevelJuMP.BilevelVariableRef,2}}}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\JuMP\YXK4e\src\Containers\container.jl:65
 [9] top-level scope at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\JuMP\YXK4e\src\macros.jl:79
in expression starting at E:\Julia程序\8.1.jl:81

But I don’t know how to define the for loop in @variable

I don’t understand what this means.

This doesn’t make sense. diff(PMT) is a vector of expressions. How is this a variable? Do you want it to be a constraint?

julia> model = Model()

julia> @variable(model, x[1:5]);

julia> diff(x)
4-element Array{GenericAffExpr{Float64,VariableRef},1}:
x[2] - x[1]
x[3] - x[2]
x[4] - x[3]
x[5] - x[4]

julia> @constraint(model, diff(x) .>= 0)
4-element Array{ConstraintRef{Model,MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64},MathOptInterface.GreaterThan{Float64}},ScalarShape},1}:
 x[2] - x[1] ≥ 0.0
 x[3] - x[2] ≥ 0.0
 x[4] - x[3] ≥ 0.0
 x[5] - x[4] ≥ 0.0

Again, I suggest you try something simple in pure JuMP before moving to BilevelJuMP.


I’m sorry, my statement may not have been clear.
As you said before, because it tries to dualize the lower-level problem,
the bounds on the lower-level variables should be placed in @variable.

I tried to write it in two ways: one is vectorized, the other uses a for loop.

@variable(Lower(model), 0 <=PMT[1:T,1:3] <= 60)
PMT0=[30 30 30]
@constraint(Lower(model),  -30 .<=PMT[1,:]-PMT0[1,:] .<= 30)
@constraint(Lower(model),  -30 .<= diff(PMT,dims=1) .<= 30)


@variable(Lower(model), 0 <=PMT[1:T,1:3] <= 60)
PMT0=[30 30 30]
for i =1:3
    @constraint(Lower(model),  -30 <=PMT[1,i]-PMT0[1,i] <= 30)
end


for i=1:3
    for t=2:T
        @constraint(Lower(model),  -30 <=PMT[t,i]-PMT[t-1,i] <= 30)
        
    end    
end

Do these two methods mean the same thing?

Ah. Okay, now I get the confusion.

Yes, those things are the same.

the bounds on the lower-level variables should be placed in @variable

Before, you were adding variable bounds on a single variable, so you can put them in @variable.

The real issue with dualization is that it can’t dualize two-sided constraints that would appear in the constraint matrix (i.e., things other than variable bounds). So you need to write something like

@constraint(Lower(model),  diff(PMT,dims=1) .<= 30)
@constraint(Lower(model),  diff(PMT,dims=1) .>= -30)

Just now, I found this problem by trying things out. I can’t define both sides at the same time in a lower-level constraint. So it’s not that it can’t be written in @constraint(Lower(model)). Your method is correct, I just need to split it into two steps!
I finally understand! Thank you very much. You are awesome! :laughing:


@joaquimg @odow
I seem to have a little problem again.
I want to add 0-1 integer variables in BilevelJuMP, but it is not allowed.
Please help me take a look.
The relevant code is shown below.

@constraints(Lower(model), begin
@variable(Lower(model), 0<=W1[1:T,1:3]<=1 ,Int)
@variable(Lower(model), 0<=W2[1:T,1:3]<=1 ,Int)
    0 .<= PES_BT_D.*W1
    PES_BT_D.*W1 .<= 40
    0 .<= PES_BT_C.*W2
    PES_BT_C.*W2 .<= 80
    W1+W2 .<= 1
end)

The error is as follows

LoadError: Constraints of function MathOptInterface.ScalarQuadraticFunction{Float64} in the Set MathOptInterface.LessThan{Float64} are not implemented
Stacktrace:
 [1] error(::String, ::Type{T} where T, ::String, ::Type{T} where T, ::String) at .\error.jl:42
 [2] supported_constraints(::Array{Tuple{DataType,DataType},1}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\supported.jl:9
 [3] dualize(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::Dualization.DualProblem{Float64,Dualization.DualizableModel{Float64}}, ::Dualization.DualNames, ::Array{MathOptInterface.VariableIndex,1}, ::Bool) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\dualize.jl:27
 [4] #dualize#1 at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\Dualization\LvJXR\src\dualize.jl:8 [inlined]
 [5] build_bilevel(::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::MathOptInterface.Utilities.CachingOptimizer{MathOptInterface.AbstractOptimizer,MathOptInterface.Utilities.UniversalFallback{MathOptInterface.Utilities.Model{Float64}}}, ::Dict{MathOptInterface.VariableIndex,MathOptInterface.VariableIndex}, ::Array{MathOptInterface.VariableIndex,1}, ::BilevelJuMP.SOS1Mode{Float64}, ::Dict{MathOptInterface.VariableIndex,MathOptInterface.ConstraintIndex}) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\moi.jl:276
 [6] optimize!(::BilevelModel; lower_prob::String, upper_prob::String, bilevel_prob::String, solver_prob::String) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\jump.jl:954
 [7] optimize!(::BilevelModel) at C:\Users\61940\.juliapro\JuliaPro_v1.4.2-1\packages\BilevelJuMP\Ll9sk\src\jump.jl:919
 [8] top-level scope at untitled-cbe91f2d43f6b6063e382eac38d1cec3:227
in expression starting at untitled-cbe91f2d43f6b6063e382eac38d1cec3:227
The @variable and @constraint lines above are in the wrong order; there is no mistake in my program, the paste is just out of order.