JuMP: solver errors when reformulating a complex NLP as a real NLP

I’m trying to convert the following complex NLP to a real NLP and I think I have a formulation issue.

Complex problem:

\underset{v, m, f, a} {min} \quad v^H Q v + f^T f + c_1^T \log(m) + c_2^T a
s.t. \qquad v = m \circ \exp(j\, a) \quad \text{(elementwise product)}

with variables v \in \mathbb{C}^n, m \in \mathbb{R}^n, a \in \mathbb{R}^n, f \in \mathbb{R}^n, and data Q \succeq 0, c_1 \in \mathbb{R}^n, c_2 \in \mathbb{R}^n.

Real problem:

\underset{m, f, a} {min} \quad (m \cos(a))^T\, Q\, (m \cos(a)) + (m\sin(a))^T\, Q\, (m\sin(a)) + f^T f + c_1^T \log(m) + c_2^T a
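(To spell out the substitution: componentwise v_i = m_i e^{j a_i}, so \Re(v) = m \cos(a) and \Im(v) = m \sin(a), both elementwise. Assuming Q is real symmetric, which is consistent with the data shown below, the imaginary cross terms cancel and

v^H Q v = \Re(v)^T\, Q\, \Re(v) + \Im(v)^T\, Q\, \Im(v)

which gives the first two terms of the real objective, and the constraint is eliminated.)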

Below is my code for the real problem, splitting the complex v into rectangular coordinates:

## packages
using JuMP, Ipopt

## inputs
# N
# c1
# c2
# Q
# f_init, a_init, m_init

## model
nlp = Model(solver=IpoptSolver())  ## or MosekSolver()

## variables
@variable(nlp, m[1:N])
@variable(nlp, -2.0*pi <= a[1:N] <= 2.0*pi)
@variable(nlp, f[1:N])

## initialize
setvalue(f, f_init)
setvalue(a, a_init)
setvalue(m, m_init)
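## note: m_init must be strictly positive elementwise, since log(m) appears in the objective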

## helpers
@NLexpression(nlp, vQv,
    sum( sum( ( m[i] * cos(a[i]) * m[j] * cos(a[j])
              + m[i] * sin(a[i]) * m[j] * sin(a[j]) ) * Q[i,j]
         for i = 1:N)
    for j = 1:N))
@NLexpression(nlp, fIf, sum(f[i]^2 for i=1:N))
@NLexpression(nlp, c1logm, sum(c1[i] * log(m[i]) for i=1:N))
@NLexpression(nlp, c2a, sum(c2[i] * a[i] for i=1:N))  ## linear, but needs an NL expression because it appears in @NLobjective

## objective
@NLobjective(nlp, Min, vQv + fIf + c1logm + c2a)
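The solve and value-extraction calls are the standard old-style JuMP API (a minimal sketch):

## solve and extract results
status = solve(nlp)              ## returns a symbol, e.g. :Optimal
println(getobjectivevalue(nlp))
m_opt = getvalue(m)
a_opt = getvalue(a)
f_opt = getvalue(f)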

A bit of info on Q: it is positive semidefinite but numerically singular (the smallest eigenvalue is on the order of machine epsilon):

eig(full(Q))[1]
145-element Array{Float64,1}:
  1.17301e-15
  0.0574616 
  ⋮          
 16.0423     
 22.1229     

Also, DegeneracyHunter’s printInfeasibleEquations(nlp, 0.001) and printVariableDiagnostics(nlp) report no issues at the initial point.


When I try to solve with Ipopt, I get an error:

Warning: Cutting back alpha due to evaluation error
                                   (scaled)                 (unscaled)
Objective...............:  -1.3065000882646967e+02   -1.3065000882646967e+02
Dual infeasibility......:   2.4882922922388329e+10    2.4882922922388329e+10
Constraint violation....:   0.0000000000000000e+00    0.0000000000000000e+00
Complementarity.........:   0.0000000000000000e+00    0.0000000000000000e+00
Overall NLP error.......:   2.4882922922388329e+10    2.4882922922388329e+10

EXIT: Error in step computation (regularization becomes too large?)!

I thought I had a formulation error, so I tried a couple of things.

  1. I removed the c_1^T \log(m) term and was able to solve. This makes sense to me.

  2. I introduced an auxiliary variable logm and a constraint m[i] == exp(logm[i]), and reintroduced the c_1^T \log(m) term as c_1^T logm (sketched below). When solving, I get the following message: EXIT: Iterates diverging; problem might be unbounded. Why does Ipopt detect possible unboundedness in this case but not in the original formulation?
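For concreteness, a minimal sketch of the attempt-2 reformulation, reusing the model objects above (c1logm_aux is a fresh name so it does not clash with the original c1logm; initializing logm from m_init assumes m_init > 0 elementwise):

## attempt 2 (sketch): replace log(m) with an auxiliary variable
@variable(nlp, logm[1:N])
setvalue(logm, log.(m_init))   ## assumes m_init > 0 elementwise
@NLconstraint(nlp, [i=1:N], m[i] == exp(logm[i]))
@NLexpression(nlp, c1logm_aux, sum(c1[i] * logm[i] for i=1:N))
@NLobjective(nlp, Min, vQv + fIf + c1logm_aux + c2a)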