JuMP: Real Variable used in Complex expression

Possibly related to How Define a Complex JuMP Variable

I’m wondering if it’s possible to define objective and/or constraint functions that take in real variables and return a real number (i.e. f(x): ℝ^n → ℝ) but internally use some complex-number arithmetic.

Let’s say we wish to solve the following simple system of equations

min θ
s.t. |e^{iθ} +1| = 0
where 0≤θ≤2π.

With JuMP, I’d write this as

@variable(model, 0 <= θ <= 2π)
@objective(model, Min, θ)
@constraint(model, con, abs(exp(im*θ) +1) == 0)

but that gives me an error ERROR: InexactError: Float64(im). Is there no way to express a function like this?

The replies in the other post still reflect the current state of JuMP.

You should add two JuMP variables, one for the real and one for the imaginary parts.

Also, @constraint only handles linear (and quadratic/conic) constraints; for a nonlinear expression like this you are looking for @NLconstraint (see the sketch below). Take a read of the JuMP documentation.

http://www.juliaopt.org/JuMP.jl/v0.19.0/nlp/
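
A minimal sketch of that advice for the toy problem, assuming JuMP 0.19 syntax and Ipopt as the nonlinear solver (neither is prescribed in the reply): since e^{iθ} = cos θ + i sin θ, the modulus can be squared and written in purely real arithmetic.

using JuMP, Ipopt  # Ipopt is only one example of a nonlinear solver

model = Model(with_optimizer(Ipopt.Optimizer))  # JuMP 0.19-style constructor

@variable(model, 0 <= θ <= 2π)
@objective(model, Min, θ)

# |e^{iθ} + 1| = 0, squared and split into real arithmetic:
# |e^{iθ} + 1|^2 = (cos(θ) + 1)^2 + sin(θ)^2
@NLconstraint(model, (cos(θ) + 1)^2 + sin(θ)^2 == 0)

optimize!(model)
value(θ)  # the solution should approach π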

In my use case, the objective function is a simple sum of some of the variables, but the constraint function involves various matrix exponentials (for Hamiltonians, i.e., \exp(it \sum_i H_i) for H_i Hermitian). It’s possible to break it down into real and imaginary terms, but that would become quite involved, I think.
I have an implementation in Optim.jl, but it has some issues with constraints (it cannot combine non-constant constraints with a solver for non-differentiable functions, or handle functional constraints such as the example above), and JuMP looks more polished.

NLopt can handle this sort of thing. However, you should generally try very hard to recast your problem in a differentiable form if you want good convergence from nonlinear optimization. (My understanding is that JuMP can do this for you in many cases.) For example, you know you can write absolute-value constraints in a differentiable form, right?

(For real-valued functions of complex variables, being differentiable as a function of the real and imaginary parts separately is basically the same as being differentiable in the sense of CR calculus, but I guess JuMP doesn’t do CR calculus yet.)
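
For reference (a standard rewrite, not spelled out in the reply above), the absolute-value constraints mentioned can be recast without the nonsmooth |·|:

|g(x)| \le \varepsilon \;\Longleftrightarrow\; -\varepsilon \le g(x) \le \varepsilon \quad (g real),
|g(x)| \le \varepsilon \;\Longleftrightarrow\; \operatorname{Re}(g(x))^2 + \operatorname{Im}(g(x))^2 \le \varepsilon^2 \quad (g complex),

both of which are differentiable whenever g is.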

Perhaps the problem is not necessarily nondifferentiable. The main issue lies in a term ‖U - f(\vec t, \vec x)‖ ≤ ε, where U ∈ ℂ^{n \times n} is a known unitary matrix and \vec t ∈ ℝ^j, \vec x ∈ ℝ^k are vectors of parameters with 0 ≤ \vec t_i, \vec x_i ≤ 2π. We wish to bound the distance of f(\vec t, \vec x)\colon ℝ^j \times ℝ^k \to ℂ^{n\times n} from U in some norm (we use the Frobenius norm).

The structure of f(\vec t, \vec x) is as described previously, i.e., roughly,

f(\vec t, \vec x) = \prod_i \exp(-i \vec t_i H_i),

where we have a constant number of terms in the product (say three), and each H_i is a Hermitian matrix determined by a distinct subset of the entries of \vec x.
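
As a purely illustrative sketch (how the Hermitian generators H_i are built from \vec x is left out; they are passed in precomputed here), such an f can be assembled with the matrix exponential from LinearAlgebra:

using LinearAlgebra

# Sketch only: f(t, x) = ∏_i exp(-im * t[i] * H_i), where each H_i is a
# Hermitian matrix built from its own subset of the entries of x.
# Here the H_i are assumed to have been constructed already.
function f_product(t::AbstractVector, Hs::Vector{<:AbstractMatrix})
    prod(exp(-im * tᵢ * Hᵢ) for (tᵢ, Hᵢ) in zip(t, Hs))
end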

We wish to solve the following:

minimize Σ_i \vec t_i
such that ‖U - f(\vec t, \vec x)‖ ≤ ε
where 0≤\vec t_i, \vec x_i ≤ 2π

Getting a differentiable form may be nontrivial. The hope was that it would be sufficient to optimize without one for small problem sizes.

It would be helpful if anyone has pointers to solvers that could (attempt to) handle problems of this form, at least for small sizes.

I don’t understand: your problem is trivially differentiable (just square the constraint to \Vert U - f \Vert^2 \le \varepsilon^2 to make it differentiable, and your f itself is also obviously differentiable assuming H(x) is differentiable in x).
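
Concretely (spelling out the squaring step), with the Frobenius norm the squared constraint is

\Vert U - f(\vec t, \vec x)\Vert_F^2 = \sum_{j,k} |U_{jk} - f_{jk}(\vec t, \vec x)|^2 = \sum_{j,k} \left[(\operatorname{Re} U_{jk} - \operatorname{Re} f_{jk})^2 + (\operatorname{Im} U_{jk} - \operatorname{Im} f_{jk})^2\right] \le \varepsilon^2,

which is a smooth function of \vec t and \vec x wherever f is differentiable.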

Whether it is automatically differentiable by JuMP is another story, of course. (In my own work, I rarely use automatic differentiation — my problems (usually PDE-constrained optimization) are inevitably too complicated, typically involving external libraries, for AD to handle without a lot of effort. At some point you just have to learn how adjoint methods work so that you can take derivatives on your own.)

If you want to use a derivative-free method, you are fine as long as you write it in a form that is in principle differentiable as above (since many derivative-free algorithms internally assume differentiability). For example, you could try the COBYLA algorithm in NLopt. You might want to simply minimize \Vert U - f \Vert^2 first to make sure you have a feasible starting point for minimizing \sum t_i (for nonconvex optimization you typically need a feasible starting point to guarantee convergence to a local optimum).
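
A minimal sketch of that suggestion using NLopt.jl, where the dimensions, U, ε, and the placeholder f_tx are all assumptions standing in for the real problem (COBYLA is derivative-free, so the grad arguments are ignored):

using NLopt, LinearAlgebra

nt, nx = 3, 6                     # assumed numbers of t- and x-parameters
U = Matrix{ComplexF64}(I, 4, 4)   # placeholder target unitary
ε = 1e-2                          # assumed tolerance
f_tx(t, x) = Matrix{ComplexF64}(I, 4, 4)  # stand-in for ∏_i exp(-im * t[i] * H_i(x))

opt = Opt(:LN_COBYLA, nt + nx)    # optimization variables are z = [t; x]
lower_bounds!(opt, zeros(nt + nx))
upper_bounds!(opt, fill(2π, nt + nx))
xtol_rel!(opt, 1e-8)

# Objective: minimize the sum of the t-parameters.
min_objective!(opt, (z, grad) -> sum(z[1:nt]))

# Constraint in differentiable form: ‖U - f(t, x)‖_F^2 - ε^2 ≤ 0.
inequality_constraint!(opt, (z, grad) -> begin
    t, x = z[1:nt], z[nt+1:end]
    norm(U - f_tx(t, x))^2 - ε^2
end, 1e-10)

# Starting point; as noted above, a feasibility pre-solve on ‖U - f‖^2 may be needed.
minf, minz, ret = optimize(opt, fill(float(π), nt + nx))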