# Suggestions needed: global binary optimization

The problem is very simple: x is a vector whose entries take values 0 or 1; S is a real symmetric matrix. The objective is to minimize `dot(x .+ c, S, x .+ c)` subject to the constraint `sum(x) = N`, where N is an integer smaller than `length(x)` and c is a real scalar.

The issue is that the length of x is very large (~10^4), and I believe an exact, deterministic solution is not tractable at this scale (maybe I am wrong?). I have little knowledge of mathematical optimization and could use suggestions on which algorithms are worth investigating for such a problem.
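To make the problem concrete, here is a minimal sketch of the objective and the feasibility check (the helper names `f`, `feasible`, and `n_feasible` are my own, not from the thread), plus a count of the feasible set showing why brute force is out of reach:

```julia
using LinearAlgebra

# The objective and the feasibility test as stated above:
# S real symmetric, c a real scalar, x a 0/1 vector with exactly N ones.
f(x, S, c) = dot(x .+ c, S, x .+ c)
feasible(x, N) = sum(x) == N && all(xi == 0 || xi == 1 for xi in x)

# Brute force is hopeless at this scale: there are binomial(M, N)
# feasible points, which is astronomically large already for M ~ 10^4.
n_feasible(M, N) = binomial(big(M), big(N))
```

For example, `n_feasible(10_000, 100)` exceeds 10^200, so any exhaustive enumeration is ruled out immediately.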

What if you do an eigenvalue decomposition of `S` and set x non-zero at the positions corresponding to the N largest eigenvalues?


I didn’t quite understand that. How exactly do you suggest using the eigenvalue decomposition? Here’s a non-optimal snippet which minimizes the function by brute force for a random S and c, followed by the list of eigenvalues. How would your suggested algorithm pick the first and fourth positions?

```julia
using LinearAlgebra
using Combinatorics
import Random

Random.seed!(42)
M = 4
N = 2
A = rand(M, M)
S = A + A'
c = rand()

f(x) = dot(x .+ c, S, x .+ c)

# Evaluate f on the 0/1 vector with ones at the positions in cmb.
function fc(f, cmb)
    x = zeros(M)
    x[cmb] .= 1
    f(x)
end

val, x = minimum((fc(f, cmb), cmb) for cmb in combinations(1:M, N))
println("minimum: x=$x, val=$val")

println(eigvals(S))
```

output:

```
minimum: x=[1, 4], val=16.61444763159244
[-0.023080673209075427, 0.3946150562493981, 1.0832708395200012, 4.200607767705681]
```

Given the size of the problem, you will probably have to resort to heuristic methods such as simulated annealing or extremal optimization. You will find an implementation of extremal optimization at
https://carlobaldassi.github.io/RRRMC.jl/dev/algorithms/#RRRMC.extremal_opt
and simulated annealing can easily be coded with the same package by running an MCMC with decreasing temperature.
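As a rough illustration of the simulated-annealing idea (this is my own sketch, not code from RRRMC.jl; the cooling schedule and step count are placeholder choices, not tuned values), the constraint `sum(x) = N` can be preserved by proposing swap moves between a selected and an unselected index:

```julia
using LinearAlgebra
import Random

# Simulated annealing sketch for min_x (x .+ c)' S (x .+ c) with sum(x) = N.
# Each move swaps one selected index with one unselected index, so every
# visited state stays feasible.
function anneal(S, c, N; steps=10_000, T0=1.0, Tmin=1e-3,
                rng=Random.default_rng())
    M = size(S, 1)
    # Random feasible start: exactly N ones.
    x = zeros(M)
    x[Random.randperm(rng, M)[1:N]] .= 1
    f(x) = dot(x .+ c, S, x .+ c)
    fx = f(x)
    best, fbest = copy(x), fx
    for k in 1:steps
        T = T0 * (Tmin / T0)^(k / steps)     # geometric cooling
        i = rand(rng, findall(==(1), x))     # a selected index
        j = rand(rng, findall(==(0), x))     # an unselected index
        x[i], x[j] = 0, 1                    # propose the swap
        fnew = f(x)
        if fnew <= fx || rand(rng) < exp(-(fnew - fx) / T)
            fx = fnew                        # accept
            if fx < fbest
                fbest, best = fx, copy(x)
            end
        else
            x[i], x[j] = 1, 0                # reject: undo the swap
        end
    end
    return best, fbest
end
```

Note that re-evaluating `f` from scratch costs O(M^2) per move; for M ~ 10^4 you would instead want to update the objective incrementally, since the change under a single swap can be computed in O(M) from the rows `S[i, :]` and `S[j, :]`.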

I thought about it a bit more and realized it’s much more complex. If it were a norm constraint on x, and `c` were 0, you’d select the eigenvector belonging to the smallest eigenvalue (since we are minimizing). Now, you could start by solving the norm-constrained quadratic program and then projecting the solution onto the feasible set, but I do not believe that this will always give you the optimal solution.

I believe the feasible set does not form a vector space. What do you mean by projection?

I simply mean a reasonable transformation that takes a vector and produces another one that is close to it in some sense. For example, keeping the N values closest to 1, or the N largest values, etc.
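One possible reading of this relax-then-project heuristic, sketched below (the function name `project_heuristic` is my own; this is not a verified method from the thread, and as noted above it need not find the optimum): take the eigenvector of the smallest eigenvalue of S as the relaxed minimizer, then set the N largest entries to 1 and the rest to 0.

```julia
using LinearAlgebra

# Relax-then-project sketch: solve the norm-constrained relaxation via the
# eigenvector of the smallest eigenvalue, then project onto the feasible
# set by keeping the N largest entries. A heuristic, not an exact method.
function project_heuristic(S, c, N)
    M = size(S, 1)
    v = eigen(Symmetric(S)).vectors[:, 1]  # eigenvector, smallest eigenvalue
    x_from(v) = begin
        x = zeros(M)
        x[partialsortperm(v, 1:N; rev=true)] .= 1  # N largest entries -> 1
        x
    end
    f(x) = dot(x .+ c, S, x .+ c)
    xa, xb = x_from(v), x_from(-v)  # eigenvector sign is arbitrary; try both
    f(xa) <= f(xb) ? xa : xb
end
```

The sign ambiguity of the eigenvector is handled by projecting both `v` and `-v` and keeping whichever gives the lower objective.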

Thank you. I’ll take a look.