Convex Semi-definite Problem

I am trying to solve a convex semi-definite optimization problem, and I don't know what the right approach to implementing it in Julia would be. The problem is the following:

`min c'x`

where `c` and `x` are vectors in `R^n` subject to

`F(x) <= Y`

where `F(x)` is a symmetric `m * m` matrix for any `x`, and `Y` is a fixed matrix. The main problem is that `F` is a highly non-linear convex function, and each evaluation requires a lot of computation that is not implemented in Julia (I have it in FreeFEM++). Solving the problem would therefore require running an external script for every evaluation of `F`.
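For concreteness, the external evaluation could be wrapped like this. Everything here is an assumption about your setup: the file names `x_in.txt` / `F_out.txt` and the `FreeFem++` invocation are placeholders to adapt to your script's conventions.

```julia
using DelimitedFiles, LinearAlgebra

# Hypothetical plumbing for the external evaluation of F. File names and the
# FreeFem++ command are assumptions -- adapt them to your own script.
function evaluate_F(x::AbstractVector; cmd = `FreeFem++ compute_F.edp`)
    writedlm("x_in.txt", x)               # hand x to the external solver
    run(cmd)                              # one (expensive) external evaluation
    F = readdlm("F_out.txt")              # read the resulting m x m matrix back
    return Symmetric(Matrix{Float64}(F))  # enforce exact symmetry downstream
end
```

Since each call pays the full external cost, it is probably worth caching results for repeated `x` values.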

Do you have any recommendations on where I can start? I was thinking of using Mosek and JuMP, but I don't really know if it would be feasible to implement in Julia.

Any help would be appreciated, thanks!

What do you mean by `F(x) <= Y`? Do you mean an element-wise inequality, or that `Y - F(x)` is positive semidefinite?

You cannot use Mosek (or really, any conic SDP solver) for this because they don’t support external functions.

Depending on what you mean by `<=`, a second question is: can you differentiate `F(x)`?

I mean that the difference is positive-semidefinite.

And yes, `F` is Fréchet-differentiable. I would have to compute its derivative with a finite-difference scheme, though.
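A minimal sketch of what finite differencing would look like here, for a scalar function such as `g(x) = eigmin(Y - F(x))` (the names are illustrative). Note that each gradient costs `n + 1` evaluations of `g`, hence `n + 1` external runs:

```julia
# Forward-difference gradient of a scalar function g. Each gradient costs
# n + 1 evaluations of g -- which is why this gets expensive fast when every
# evaluation means an external run.
function fd_gradient(g, x::AbstractVector{<:Real}; h = 1e-6)
    g0 = g(x)
    grad = zeros(length(x))
    for i in eachindex(x)
        xp = Float64.(x)          # fresh copy so x is never mutated
        xp[i] += h
        grad[i] = (g(xp) - g0) / h
    end
    return grad
end
```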

This is a pretty tricky problem, and I don't have an immediate suggestion for you. What is the dimension of `x`? Finite differencing sounds expensive.

One approach might be to use a nonlinear optimizer with

`g(x) = LinearAlgebra.eigmin(Y - F(x))` and the constraint `g(x) >= 0`.

but you’d still need some way of supplying derivatives.
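The `eigmin` reformulation can be written down directly with the LinearAlgebra stdlib. One way to hand it to a generic nonlinear optimizer is a crude exact-penalty objective; the penalty weight `rho` below is an assumption to be tuned, not part of the suggestion above:

```julia
using LinearAlgebra

# x is feasible iff the smallest eigenvalue of Y - F(x) is nonnegative.
g(F, Y, x) = eigmin(Symmetric(Y - F(x)))

# Crude exact-penalty objective: minimize c'x + rho * max(0, -g(x)).
# rho is a hand-tuned assumption: too small and infeasible points win,
# too large and the problem becomes ill-conditioned.
penalty_objective(F, Y, c, x; rho = 1e3) = dot(c, x) + rho * max(0.0, -g(F, Y, x))
```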

I'll note that you probably shouldn't use JuMP for this.

Other readers on this forum will probably chime in with some clever ideas that presently escape me.

If the solver is not in Julia but you know exactly what it does, maybe ImplicitDifferentiation.jl can help to get derivatives of `F` with respect to `x`?

Thanks to both of you. It's clear I shouldn't use JuMP for this problem, and I will look into ImplicitDifferentiation.jl.

I will read about the interior-point methods usually used for SDP problems and see if I can take any ideas from there. Any recommendations?

On the other hand, I know for a fact that `F` is decreasing (with respect to the component-wise partial order on `x` and the semidefinite order on `F(x)`). Can I use this information somehow when implementing a method?
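One way this monotonicity could be exploited, sketched under the assumptions above: since `F` is decreasing, the feasible set `{x : F(x) <= Y}` is upward closed (if `x` is feasible and `x' >= x` component-wise, then `F(x') <= F(x) <= Y` in the semidefinite order). So along any ray `x0 + t*d` with `d >= 0`, feasibility of `g(t) = eigmin(Y - F(x0 + t*d))` is monotone in `t`, and the boundary can be located by bisection. All names here are illustrative, and each call to `g` costs one external run of `F`:

```julia
# Bisection for the feasibility boundary along a ray, assuming g is monotone
# increasing in t (which follows from F being decreasing and d >= 0).
function feasibility_bisect(g, tlo, thi; tol = 1e-8)
    @assert g(tlo) < 0  "tlo should be infeasible"
    @assert g(thi) >= 0 "thi should be feasible"
    while thi - tlo > tol
        tm = (tlo + thi) / 2
        g(tm) >= 0 ? (thi = tm) : (tlo = tm)
    end
    return thi  # approximately the smallest feasible step along the ray
end
```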