Hi,
I have a tricky optimization problem where I need to minimize a specific eigenvalue of a matrix M(\theta) that depends continuously on a set of parameters \theta. The problem is that this eigenvalue is usually degenerate, which makes the minimization problem non-differentiable. There are known ways around this, and with help from the excellent resources shared before on this forum ([1], [2] – thank you for these!), I have implemented a basic solver that works reasonably well for my problem, as long as the matrices are not too large (no bigger than about 500x500, say).
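(For concreteness, my understanding of where the non-smoothness comes from: if \lambda is a k-fold semisimple eigenvalue with right eigenvector basis U and left basis W, normalized so that W^* U = I, then a perturbation \delta M splits \lambda to first order into the eigenvalues of the k \times k matrix W^* \delta M U, so \lambda(\theta) generally has only directional derivatives where branches meet. Please correct me if I have this wrong.)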
However, my matrices are usually very sparse, and I would like to take advantage of this somehow. The central question for me is: how can I efficiently and reliably find the full degenerate eigenspace using sparse solvers? My (very rudimentary) understanding of Krylov methods and the like is that a single run will yield an essentially random eigenvector of the degenerate space. Is it safe to just keep running sparse eigensolves with different initial guesses, record all eigenvectors, and stop once the latest vector is linearly dependent on the ones already found? A sketch of what I mean is below.
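Here is a minimal sketch of the idea, assuming SciPy's ARPACK wrapper (`scipy.sparse.linalg.eigs`); the function name and the parameters `nzero` (the known number of zero eigenvalues), `mult_guess` (a guess for the multiplicity of the target eigenvalue), and `tol` are just placeholders:

```python
# Minimal sketch of the "repeat with random starts" idea, assuming SciPy.
import numpy as np
from scipy.sparse.linalg import eigs

def collect_degenerate_space(M, nzero, mult_guess=2, tol=1e-8,
                             max_runs=20, seed=None):
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    basis = np.empty((n, 0))
    for _ in range(max_runs):
        v0 = rng.standard_normal(n)
        # rightmost eigenvalues: the nzero zeros plus the target cluster
        vals, vecs = eigs(M, k=nzero + mult_guess, which='LR', v0=v0)
        vals = vals.real                      # spectrum is real here
        nonzero = vals < -tol                 # drop the known zero modes
        target = vals[nonzero].max()          # largest non-zero eigenvalue
        cluster = nonzero & (np.abs(vals - target) < tol)
        candidate = np.hstack([basis, vecs[:, cluster].real])
        # numerical rank (via SVD) decides linear (in)dependence
        u, s, _ = np.linalg.svd(candidate, full_matrices=False)
        rank = int((s > tol * s[0]).sum())
        if rank == basis.shape[1]:
            break                             # nothing new -> space exhausted
        basis = u[:, :rank]                   # orthonormal basis found so far
    return basis
```

As far as I understand, in exact arithmetic a single-vector Krylov run can only ever see a one-dimensional slice of a degenerate eigenspace, which is why I am considering repeated runs with random starts; a block method would presumably be the more principled alternative, if one is available.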
If there is a package that can deal with this optimization problem out of the box and makes my question obsolete, that would of course be even better.
In case it is relevant: my matrix has only real elements and is not symmetric, but I know on physical grounds that all its eigenvalues are nevertheless real and \leq 0, for all values of the parameters \theta. The eigenvalue I am interested in is the largest non-zero eigenvalue, and I know the index of this eigenvalue beforehand. This is because I know a priori how many zero eigenvalues there are (usually between 3 and 10), and I even know the eigenspace of all the zero eigenvalues – in principle I could project them out, but I am not sure whether that is worth the effort once the matrix has more than a few hundred rows. A sketch of the deflation I have in mind follows below.
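If I understand Wielandt-type deflation correctly, knowing the kernel makes this cheap: with Q an orthonormal basis of the known zero-eigenspace (so M Q = 0, an invariant subspace), subtracting shift * Q Q^T moves the zero eigenvalues to -shift while leaving every other eigenvalue of M unchanged, so the eigenvalue I want becomes the rightmost one. A rough sketch, again assuming SciPy (`LinearOperator` plus `eigs`); `shift` is a placeholder that should exceed the spectral radius of M:

```python
# Rough sketch of deflating the known zero-eigenspace, assuming SciPy.
# V holds the known zero-eigenvectors as columns (so M @ V == 0).
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

def deflated_operator(M, V, shift):
    Q, _ = np.linalg.qr(V)   # orthonormal basis of the known kernel
    def matvec(x):
        # sparse product plus a rank-m correction (m = number of zeros);
        # since M @ Q == 0, this shifts the zeros to -shift and leaves
        # all other eigenvalues of M untouched
        return M @ x - shift * (Q @ (Q.T @ x))
    return LinearOperator(M.shape, matvec=matvec, dtype=float)

# usage sketch: with the zeros out of the way, plain ARPACK can target
# the (formerly largest non-zero) eigenvalue directly
# op = deflated_operator(M, V, shift=...)
# vals, vecs = eigs(op, k=3, which='LR')
```

Unless I am missing something, the correction only costs O(n * m) per matvec with m between 3 and 10 here, so it should be cheap even for large matrices – but I would be glad to hear whether this is the right way to go about it.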
Any comments or suggestions are highly appreciated!