Help creating a JuMP variable array

Can anyone help me create a JuMP variable that is a 2×2 array? I want it to look like:

X=diag([X0,X1]) 

where X0 and X1 are complex scalars varied in the optimization problem. However, I need help expressing this specifically in the form of a JuMP variable.

Any help is greatly appreciated

You should be able to fill any type of AbstractArray with JuMP variables. You can do, for example

@variable(m, xdiag[i=1:N])  # first create the variables that will fill the array
x = Diagonal(xdiag)  # create the array with those variables along its diagonal

Thank you! I used:

N=2
@variable(m, Qdiag[i=1:N])  
Q = Diagonal(Qdiag)

However I get an error that my dimensions are mismatched.
DimensionMismatch("dimensions must match")

Stacktrace:
 [1] promote_shape(::Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}, ::Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}) at .\indices.jl:79
 [2] +(::Array{JuMP.GenericAffExpr{Float64,JuMP.Variable},2}, ::Array{JuMP.GenericQuadExpr{Float64,JuMP.Variable},2}) at .\arraymath.jl:38
 [3] +(::Array{JuMP.GenericAffExpr{Float64,JuMP.Variable},2}, ::Array{JuMP.GenericAffExpr{Float64,JuMP.Variable},2}, ::Array{JuMP.GenericQuadExpr{Float64,JuMP.Variable},2}) at .\operators.jl:424

Do you know why this would be? I assume it is something to do with my N=2 statement

What you have works for me. It looks like the error is coming from a different line. Try posting a minimum working example of your problem.

Thanks, it was a different error, and restarting the kernel helped. Now I have a new, closely related error. Q works for me, but now I would like to make an identical variable, except it will be an anti-diagonal (flipped diagonal) rather than a normal diagonal matrix. Can you help with what’s wrong with this code?:

@variable(m, Rdiag[i=1:N]) 
R = flipdim(Diagonal(Rdiag),2)

MethodError: Cannot convert an object of type JuMP.GenericAffExpr{Float64,JuMP.Variable} to an object of type JuMP.Variable This may have arisen from a call to the constructor JuMP.Variable(...), since type constructors fall back to convert methods.

I can post a minimum working example:

using NLopt
using JuMP

m=Model(solver=NLoptSolver(algorithm=:DIRECT))


N=2
L=100000

@variable(m, Rdiag[i=1:N]) 
R = flipdim(Diagonal(Rdiag),2)

This is because Diagonal only stores the diagonal elements, and uses the zero function to get the off-diagonal elements when needed for generic methods. Since

julia> typeof(zero(Rdiag[1]))
JuMP.GenericAffExpr{Float64,JuMP.Variable}

we have typeof(Diagonal(Rdiag)[1, 2]) != typeof(Diagonal(Rdiag)[1, 1]), and since flipdim uses the element type of its input matrix as the element type of its output, you get an error saying that you can’t put a GenericAffExpr in a matrix of Variables.

You can work around this using

flipdim(Matrix(Diagonal(Rdiag)), 2)

Edit: opened this issue: https://github.com/JuliaLang/julia/issues/27494.


Have you read the Nonlinear Modeling chapter of the JuMP 0.18 documentation?


No, thank you that is just what I needed.


So I have been cleaning it up, and now I have the same error with a GenericAffExpr rather than a quadratic expression. I am thinking that I need to simplify e_c and create expressions for each of its components, such as a particular JuMP constraint for e.^(*(T,L)), one for kron(C(Q,R),C(conj(Q),conj(R))), etc.? I am sorry; I am understanding more and more thanks to your help and the syntax notes, but I am missing some part of the big picture here.

Just to clarify first, do you want a matrix exponential or an elementwise scalar exponential?

Also, what are C and c?

Alright, getting a little bit closer, but still quite a few big problems. First, @variable(m,T) creates a scalar variable called T, and you’re currently trying to constrain each element of the matrix T_ to be equal to T. That’s probably not what you want.

Second, are you aware that JuMP’s variables are assumed to be real? conj applied to a real matrix A just returns A. Are you looking for adjoint/transpose instead?

Let’s go for a matrix exponential for now.

OK, in Julia, the . in e.^A is used to signify a broadcasted operation (in this case, an elementwise-applied exponential function). If you were operating on a numeric array, you’d want expm(A) for a matrix exponential (or just exp(A) in the upcoming version of Julia).
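To make the distinction concrete, here is a small sketch on a plain numeric matrix (Julia 0.6-era syntax, matching the rest of this thread; `A` is just an illustrative example):

```julia
A = [0.0 1.0; -1.0 0.0]

e.^A     # elementwise: a matrix whose (i,j) entry is e^A[i,j]
expm(A)  # matrix exponential: I + A + A^2/2! + ...; for this A, a rotation matrix
```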

Next, it’s important to note the difference between linear/quadratic modeling and nonlinear modeling. All of the expressions you construct up to @constraint(m,Exp1.==e.^(*(T,L))) are linear or quadratic in the decision variables you create using @variable. That code works because various operators (like +) and functions are overloaded for e.g. JuMP’s Variable and GenericAffExpr types. But JuMP’s nonlinear interface unfortunately doesn’t work quite the same way as its linear/quadratic interface. It’s quite a bit harder to use.

The following nonlinear syntax notes are especially relevant to your problem:

All expressions must be simple scalar operations.

This means that creating an expression that represents the matrix exponential of T * L unfortunately can’t work, as it is not a scalar expression.

There is no operator overloading provided to build up nonlinear expressions

and

All nonlinear expressions must be inside of macros.

Before, you were trying to construct a nonlinear expression outside of a JuMP macro, which is not supported, so the code in your latest post is better in that sense, but still not quite right. @expression is only for linear/quadratic modeling, so you’d need @NLexpression, but again, @NLexpression can only be used to construct scalar nonlinear expressions.
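A minimal sketch of the scalar-only rule (JuMP 0.18 syntax; `x` and `y` are illustrative variables, not from your model):

```julia
using JuMP

m = Model()
@variable(m, x)
@variable(m, y)

# OK: a scalar nonlinear expression, built inside a macro
@NLexpression(m, ex, exp(x * y))

# Not OK: passing a matrix to a nonlinear function inside @NLexpression,
# or building a nonlinear expression outside a macro, e.g. `z = exp(x * y)`
```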

Here’s a slightly updated version of the first part of your problem formulation:

using Ipopt # just because that's what I have installed right now
using JuMP

C(A, B) = A * B - B * A
m = Model(solver=IpoptSolver())

N = 2
L = 100000

@variable(m, Rdiag[i=1:N])  # first create the variables that will fill the array
R = flipdim(Matrix(Diagonal(Rdiag)),2)  # put those variables along the anti-diagonal
@variable(m, Qdiag[i=1:N]) 
Q = Diagonal(Qdiag)
T  = kron(Q, Diagonal(ones(2))) + kron(Diagonal(ones(2)), conj(Q)) + kron(R, conj(R))
LT = L * T

# since syntax notes state that
# > 'AffExpr and QuadExpr objects cannot currently be used inside nonlinear expressions'
# need to create matrix of slack variables `LT_` and set them equal to `LT`
@variable(m, LT_[1:size(LT, 1), 1:size(LT, 2)])
for i in eachindex(LT)
    @constraint(m, LT_[i] == LT[i])
end

The next part, involving the nonlinear expressions for expm and for e.g. kron(C(Q,R),C(conj(Q),conj(R))) (a matrix of quartic expressions) is the tricky bit, but unfortunately this is all I have time for right now. Some hints for you or others who want to help out:

  1. if you use the same slack variable trick as above for C(Q, R) and C(conj(Q),conj(R)), you can get kron(C(Q,R),C(conj(Q),conj(R))) as a matrix of quadratic expressions, and then you can use the slack variable trick again to reduce this to a matrix of plain variables.
  2. since you can only use scalar nonlinear expressions, you’ll have to do the whole computation of the objective given LT, C(Q, R) and C(conj(Q),conj(R)) in one function that you then register with JuMP. Nonlinear functions can’t take matrix or vector arguments though; search Discourse for some solutions to this problem.
  3. there’s currently no generic expm function, so automatic differentiation won’t work; you’ll have to supply derivatives of the manually registered function from 2. yourself (not too hard in this case)
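For example, hint 1 might look like the following sketch (JuMP 0.18 syntax, continuing from the code above with `m`, `C`, `Q`, and `R` already defined; this is only an illustration of the slack-variable trick, not tested against the full model):

```julia
CQR = C(Q, R)  # Q*R - R*Q: a matrix of quadratic expressions in the variables

# one slack variable per entry, constrained equal to the quadratic expression
@variable(m, CQR_[1:size(CQR, 1), 1:size(CQR, 2)])
for i in eachindex(CQR)
    @constraint(m, CQR_[i] == CQR[i])
end

# CQR_ is now a matrix of plain variables, so a product such as
# kron(CQR_, CQR_) is only quadratic, and the trick can be applied again.
```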

I am new to coding, Julia, and the science of the project itself, so I am sorry for my ignorance.

Your model looks very difficult to formulate even for people who are experienced in JuMP. It might be helpful to play around with some simpler models completely unrelated to this in order to learn some of the basics of JuMP:
https://github.com/JuliaOpt/JuMP.jl/tree/master/examples

Also, as far as I can tell, you are only doing unconstrained optimization. If so, JuMP is probably the wrong choice. You may want to look at Optim.jl (https://github.com/JuliaNLSolvers/Optim.jl) instead.


Thank you so much, that was very helpful. You’ve laid everything out in a very understandable way. For conj(matrix), I was looking for a conjugate transpose. I really cannot thank you enough for the time you’ve been willing to put in; you have helped me learn a lot. I will work off of what you have given me thus far.

I will also look into Optim to see if it is better suited to my needs, because I do believe I want unconstrained optimization. Thank you for those references; I will check them out.

Yeah, Optim may be a better choice at this point in time. Hopefully JuMP’s nonlinear interface will become a little easier to use for cases like this in the future.
