IterativeSolvers and Positive Definite Matrices

Hello dear Julia Users,

To compute eigenvalues and eigenvectors I am currently using the package IterativeSolvers, but I often get an error saying that my input matrices are not positive definite. My call looks like this:

result = lobpcg(K, G, false, nte1; maxiter = 200)

where K is the stiffness matrix of a mesh and G the mass matrix, so from a mathematical point of view both matrices have to be positive definite. Somehow lobpcg() disagrees and throws an error. Before IterativeSolvers I used eigs() from the package Arpack and never had any issue of that kind. My matrix assembly should be correct, since I compared the eigs() results against commercial solvers. Does anybody know how I can solve this?
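For reference, here is roughly how I check the matrix properties before the call (a minimal sketch using only LinearAlgebra; the small K and G here are stand-ins for my actual assembled matrices):

```julia
using LinearAlgebra

# Small stand-in matrices; in my code K and G come from the FE assembly.
K = [4.0 -1.0; -1.0 4.0]   # stiffness-like symmetric positive definite matrix
G = [2.0  0.5;  0.5 2.0]   # mass-like symmetric positive definite matrix

# Properties an iterative generalized eigensolver typically relies on:
@show issymmetric(K), issymmetric(G)
@show isposdef(K), isposdef(G)
@show eigmin(Symmetric(K)), eigmin(Symmetric(G))  # both should be > 0
```

These checks pass for the stand-ins, and they pass for my real matrices too, which is why the error puzzles me.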

Best regards,
Jonas

Hi Jonas,

From what you describe, this sounds a lot like some similar issues I had with IterativeSolvers’ lobpcg a while ago when using it for our DFTK code. See the issue I opened in the repo, which describes the symptoms I observed.

Since our applications sometimes need rather tight tolerances (1e-10 and below), we ended up writing our own LOBPCG with numerical stability specifically in mind. It is unfortunately not yet properly packaged for use outside of DFTK, but the implementation in the file src/eigen/lobpcg_hyper_impl.jl of our repo should be fairly self-contained (modulo the @timing annotations, which you can just remove). Perhaps it helps you solve your problem. I should add, though, that we currently use it mainly for standard eigenproblems; even though the code has worked for generalised eigenproblems at some point, that path is currently not covered by our CI, so it might well be broken.

Hope that helps
Michael


Hello Michael,

thank you very much for your answer, I will definitely look into your code :smiley:
The strange thing is that even if I change nothing at all in the code (only the mesh might change slightly), the run sometimes finishes without any problem, and on the next run I get the error. This is the first time that annoyed, repeated re-running of the code has actually benefited me, haha.

Yes, I observed the same thing in the issue I reported. Sometimes you seem to hit magic numbers in the matrix size or something like that, and it works; sometimes it does not. That’s the tricky part with numerical instabilities: if there is some randomness (e.g. in the initial guesses), sometimes you get lucky and sometimes you don’t.

Try wrapping the matrix in Symmetric. In Julia, a matrix that is not exactly symmetric is considered not positive definite. Your matrix may be symmetric only up to a small roundoff error; the Symmetric wrapper tells Julia to treat it as exactly symmetric (it only reads one triangle of the matrix).
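For example (a minimal sketch; the tiny perturbation stands in for assembly roundoff):

```julia
using LinearAlgebra

A = [2.0 1.0; 1.0 2.0]   # exactly symmetric and positive definite
B = copy(A)
B[1, 2] += 1e-14         # now symmetric only up to roundoff

isposdef(A)              # true
isposdef(B)              # false: Julia sees a non-symmetric matrix
isposdef(Symmetric(B))   # true: the wrapper uses only one triangle
```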

Thank you for the idea, but unfortunately this did not work either. I also tried it with Hermitian, again without success.

I haven’t looked at the lobpcg() code, but are you sure it’s complaining about your K and G matrices?

The LOBPCG algorithm builds up its own subspace basis B as it goes along, and my recollection is that if the implementor does not occasionally re-orthogonalize this basis, it can become numerically ill-conditioned as the algorithm progresses. Maybe the lobpcg() code is missing this re-orthogonalization step and is running into a situation where B^T B (which it has to invert at some point) is nearly singular.
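The effect is easy to demonstrate with a toy basis (a sketch of the phenomenon, not of the actual lobpcg internals):

```julia
using LinearAlgebra

v = [1.0, 0.0, 0.0]
w = [0.0, 1.0, 0.0]
B = [v  (v .+ 1e-7 .* w)]   # two nearly parallel "basis" vectors

cond(B' * B)                # huge: the Gram matrix is nearly singular

Q = Matrix(qr(B).Q)         # re-orthogonalize the basis via QR
cond(Q' * Q)                # ≈ 1: the Gram matrix is well-conditioned again
```

Without such periodic re-orthogonalization, inverting the Gram matrix B^T B amplifies roundoff error badly.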


That’s precisely the issue discussed in https://github.com/JuliaMath/IterativeSolvers.jl/issues/246.


Thank you for your effort.
Best regards.

I think you are right. I will look into the issues that mohamed82008 has mentioned.
Best regards and thank you :smiley:

You can also try this:

It is not the same as lobpcg, but it should be more general. Though it has been a while since I last looked at this, and it is not extensively tested.


I will look into it, thanks :smiley:

Since it is a stiffness matrix K, the deformation vector u due to a force vector F should be:

u = K \ F   (mathematically the same as inv(K)*F, but the backslash solve avoids forming the inverse)

Try solving K * u = F with a force vector F where only one element is one and the others are zero. The response vector u might give you a hint about which part of the matrix needs to be corrected. It sounds as if the system is not constrained correctly and can move freely. Good luck!
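As a toy illustration (a free-free spring chain with made-up numbers; the point is that missing boundary conditions make K singular, hence not positive definite):

```julia
using LinearAlgebra

k = 1.0
# 3-node spring chain with NO boundary conditions: one rigid-body mode
K_free = k * [ 1.0 -1.0  0.0;
              -1.0  2.0 -1.0;
               0.0 -1.0  1.0]

isposdef(Symmetric(K_free))   # false
eigmin(Symmetric(K_free))     # ≈ 0: the free translation mode

# Constrain node 1 (delete its row and column): now positive definite
K_con = K_free[2:3, 2:3]
isposdef(Symmetric(K_con))    # true

F = [0.0, 1.0]                # unit force on the last node
u = K_con \ F                 # finite, well-behaved displacements
```

If your real K behaves like K_free here, the mesh's boundary conditions are the place to look.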

You worry me when you say “it is not extensively tested”.

By which I meant: there are tests to make sure the correct result is obtained for simple problems, but I do not use the generalized eigenvalue problem often in my own applications. Furthermore, I am not a specialist in numerical mathematics, so I may be unaware of shortcomings or possible improvements.