[ANN] SequentialConvexRelaxation.jl: non-convex optimization on top of Convex.jl

Hi all,

I’d like to introduce the package SequentialConvexRelaxation.jl.
The package implements a heuristic method on top of Convex.jl for attempting to solve non-convex problems that can be expressed as a convex problem, say min_x f(x), subject to one or more additional non-convex constraints of the form A(x) * P * B(x) == C(x).
Here, A(x), B(x) and C(x) are Variables/AbstractExprs you can model with Convex.jl, and P can be any non-zero matrix.
The convex problem can also contain any convex constraint on the variables x, A(x), B(x), and C(x). For example, a constant C matrix gives a matrix factorization problem, and A(x) == B(x) gives quadratic equality constraints. There is special support for binary constraints, x ∈ {0,1}.
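To give a feel for the intended usage, here is a minimal sketch (constructor names and solver-call syntax may differ slightly between versions; the examples folder has the authoritative, runnable code):

```julia
using Convex, SCS
using SequentialConvexRelaxation

# Minimal sketch: minimize (x - 2)^2 subject to the non-convex
# constraint x * x == 1, i.e. A(x) = x, P = 1.0, B(x) = x, C = 1.
x = Variable()
p = minimize(square(x - 2))

# A(x) * P * B(x) == C(x); constructor signature sketched here,
# see the examples folder for authoritative usage.
bc = BilinearConstraint(x, 1.0, x, 1.0)

bp = BilinearProblem(p, bc)
solve!(bp, SCS.Optimizer)  # exact solver-call syntax depends on your Convex.jl version

evaluate(x)  # ideally close to 1, the feasible point nearest x = 2
```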

In general, the additional constraint makes the problem non-convex. This package solves a sequence of (modified) convex optimization problems in an attempt to find a good, feasible solution.
The idea is similar in spirit to how non-convex cardinality/sparsity constraints are commonly replaced with convex L1-norm regularization. Here, however, the package uses the nuclear norm (and L1 regularization for binary variables) to promote rank constraints that are equivalent to the non-convex bilinear/bi-affine constraint above.
Since the problem is non-convex, success and global optimality are not guaranteed, and results may depend on tuning and initial values. An initial feasible solution is not required, though.
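To sketch where the rank constraint comes from, assume for simplicity that P is square and invertible (the package itself only requires P to be non-zero, and its internal reformulation may differ in detail). A standard Schur-complement rank identity gives:

```latex
% Rank additivity via the Schur complement, valid when P^{-1} exists:
\operatorname{rank}\begin{pmatrix} P^{-1} & B(x) \\ A(x) & C(x) \end{pmatrix}
  = \operatorname{rank}\left(P^{-1}\right)
  + \operatorname{rank}\bigl(C(x) - A(x)\,P\,B(x)\bigr)
% So A(x) P B(x) = C(x) holds exactly when the block matrix has rank
% equal to rank(P), and the nuclear norm is the usual convex surrogate
% for pushing the rank of that block matrix down to this level.
```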

There are several examples in the examples folder: minimizing the Rosenbrock function, solving a sudoku, training a very simple neural network with ReLU activation functions, and a robust controller design problem. All of these can be written in the form above; the Rosenbrock case is sketched below.
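The Rosenbrock function (1 - x)^2 + 100(y - x^2)^2 becomes convex in (x, y, z) once the non-convexity is isolated in a bilinear constraint z == x * x (again a rough sketch, not the exact code from the examples folder):

```julia
using Convex, SCS
using SequentialConvexRelaxation

# Rosenbrock: min (1 - x)^2 + 100 * (y - x^2)^2.
# Substituting z for x^2 makes the objective convex; the non-convexity
# moves into the bilinear constraint x * 1.0 * x == z.
x = Variable()
y = Variable()
z = Variable()
p = minimize(square(1 - x) + 100 * square(y - z))

bc = BilinearConstraint(x, 1.0, x, z)  # sketched constructor, see the examples folder

bp = BilinearProblem(p, bc)
solve!(bp, SCS.Optimizer)  # global optimum is at (x, y) = (1, 1)
```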

I’m excited to hear what you think, and feedback is appreciated.

Cool! This seems related to https://github.com/madeleineudell/MultiConvex.jl.

Hey,
The package you mention uses alternating minimization, but the technique in this package is fundamentally different: it optimizes over all variables simultaneously.
