`isposdef` inconsistent results, numerically unstable?

I am checking here before opening an issue.

https://www.dropbox.com/s/qgefu7t85rcr7xe/posdef_symm_sparse_matrix.jld?dl=0

Using the above .jld file, which holds a symmetric matrix A, I get the following very weird results:

julia> using JLD  # provides `load`; assumes the JLD package is installed

julia> d = load("posdef_symm_sparse_matrix.jld");

julia> A = d["A"];

julia> typeof(A)
Symmetric{Float64,SparseMatrixCSC{Float64,Int64}}

julia> isposdef(A.data)
false

julia> isposdef(Array(A.data))
false

julia> isposdef(Array(A))
true

julia> Array(A) ≈ Array(A.data)
true

julia> Array(A) == Array(A.data)
false

julia> min(eigvals(Array(A.data))...)
0.00017220258699615022

julia> min(eigvals(Array(A))...)
0.0001722025869961005

Is there a more reliable way to check whether a sparse matrix is positive definite in v0.6.2? The eigmin function fails on sparse matrices in v0.6.2, and computing the whole eigenvalue decomposition is overkill.

I thought I just mentioned it here:

But I guess not. Order of operations matters when taking floating point error into account. Multithreading can change the order of operations, making multithreaded results stochastic in the last few digits due to changing floating point error. I’m not sure what eigs is calling, but I would assume that’s what’s happening here.
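
A self-contained illustration of that non-associativity (any recent Julia version):

julia> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)  # same terms, different grouping
false

A multithreaded reduction that partitions the terms differently on each run can therefore land on slightly different trailing digits.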

I would’ve suggested iterative solvers, but they find the eigenvalues in the other order…

True, but I thought that with Float64 I could get more than just 3 decimal places! The smallest eigenvalue is not even that small. If the error from machine precision overshoots that, then that’s a numerical instability of the algorithm, so perhaps something can be done about it.

Well, if they work on negative definite matrices, I could run them on -A instead! I didn’t check that path yet, but I thought there must be a simpler and more reliable way of doing this very simple operation.

The check for A.data fails because the underlying data is not exactly symmetric. The Symmetric{Float64,SparseMatrixCSC{Float64,Int64}} type enforces symmetry but unfortunately the isposdef method doesn’t work for Symmetric{Float64,SparseMatrixCSC{Float64,Int64}} on 0.6.x. This is fixed on 0.7 so things should be better soon. Meanwhile, you could convert to a SparseMatrixCSC, i.e.

julia> isposdef(convert(SparseMatrixCSC{Float64,Int}, A))
true

Is this really the reason? The discrepancy is very very small.

julia> norm(Array(A.data) - Array(A.data)', 1)
3.4701245783114176e-17

This works, thanks.

3 decimal places? Your post shows it consistently getting 0.0001722025869961 and just having errors/changes in the last 3 decimal places if I read it correctly?

Iterative solvers will order them by norm. I didn’t know about eigmin, but now that I’ve heard of it, I’m curious what it’s doing.

Oh, I meant that the smallest eigenvalue is only 0 if you keep just the first 3 decimal places. So the isposdef function is treating it as less than or equal to 0 when it clearly is not, if only you take the fourth decimal place into account. In other words, a discrepancy as small as 3.4701245783114176e-17 in the matrix can make isposdef miss the real minimum eigenvalue by a good 0.0001722 or so and conclude it is actually 0, which seems too unstable for my taste.

Are you sure it’s even checking the minimum eigenvalue? I’m not seeing that. It checks if the matrix is Hermitian, and if so it uses LAPACK’s dpotrf to do a Cholesky factorization and then grabs something from it (the second return value). From the result it seems like an algorithm bug (fixed on v0.7 according to @andreasnoack) and not something due to numerical instability in a minimum-eigenvalue calculation.
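
For reference, that second return value is LAPACK’s info status code: zero means the factorization completed successfully. A minimal sketch of the check in current (post-0.7) syntax — not the actual Base source:

using LinearAlgebra

function isposdef_sketch(A::Matrix{Float64})
    ishermitian(A) || return false                        # exact, tolerance-free symmetry test
    _, info = LinearAlgebra.LAPACK.potrf!('U', copy(A))   # attempt a Cholesky factorization (dpotrf)
    return info == 0                                      # info == 0 ⟺ factorization succeeded
end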

I have no idea what it’s doing, but mathematically if the algorithm is led to conclude that the matrix is not positive definite, that’s equivalent to using an eigenvalue algorithm that misses the minimum eigenvalue by a good margin. I was just using mathematical intuition to reason about what the algorithm should return, perhaps not the most direct/relevant discussion though!

Another simple way to view the weirdness of isposdef above is as a classifier that will mis-classify a data point if you perturb its features by 3.4701245783114176e-17! I wouldn’t use such a classifier in any critical application.

Our definition of positive definiteness in isposdef is “symmetric (Hermitian) and can be Cholesky factorized”, which is pretty close to having positive eigenvalues but not as robust. We have had some discussions about using a tolerance for the symmetry check, but we concluded it was better to make users enforce symmetry via the Symmetric wrapper instead of coming up with a possibly arbitrary tolerance for symmetry. Hence, as long as you keep your matrix wrapped in the Symmetric type, a small perturbation shouldn’t change the result of isposdef unless the matrix is almost singular.
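
A small made-up 2×2 demonstration of that last point (current syntax, after using LinearAlgebra):

julia> B = [2.0 1.0; 1.0 2.0];  # clearly positive definite

julia> B[1, 2] += 1e-15;        # tiny asymmetric perturbation

julia> isposdef(B)              # fails the exact symmetry check
false

julia> isposdef(Symmetric(B))   # the wrapper enforces symmetry
true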

I see, so it’s failing the first condition; that makes sense now. I was under the wrong impression that it was using an unstable algorithm and reaching a wrong conclusion. isposdef didn’t work on Symmetric in v0.6.2 but works in v0.7, so I guess I won’t face this issue in the future as long as I wrap the matrix in Symmetric before checking isposdef. Thanks again!

But shouldn’t the definition be that the symmetric component can be Cholesky factorized? Being positive definite has nothing to do with being symmetric, and (A + A')/2 being positive definite is equivalent to A being positive definite.
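
For real matrices the equivalence follows from x'Ax = x'((A + A')/2)x, so a check using the general definition could be a one-liner; isposdef_general is a made-up name, and the snippet assumes using LinearAlgebra:

# x'Ax > 0 for all real x ≠ 0  ⟺  the symmetric part is positive definite
isposdef_general(A::AbstractMatrix{<:Real}) = isposdef(Symmetric((A + A') / 2))

julia> isposdef_general([1.0 1.0; -1.0 1.0])  # not symmetric, yet x'Ax = x₁² + x₂² > 0
true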

There are two different definitions: the one you mention here and the one I mentioned. We are using the latter. To cite Wikipedia:

More generally, an n × n Hermitian matrix M is said to be positive definite if …

Some authors use more general definitions of “positive definite” that include some non-symmetric real matrices, or non-Hermitian complex ones.

It has come up a couple of times previously, but not many, so it seems that in most applications matrices are in fact symmetric (or supposed to be) when people check for positive definiteness. Using the stricter definition also makes the implementation slightly cheaper and simpler.

This statement says that if M is n × n and Hermitian → it is positive definite when… It doesn’t say that every positive definite matrix is Hermitian!

Can you point me to an established, preferably old, linear algebra book/reference that uses the “less general” definition of positive definiteness?

Perhaps an option would make everyone happy? The user still has the option of using a Symmetric wrapper, and if they don’t, the function checks for symmetry first thing anyway, so computing the symmetric component would only be relevant in the following cases (see the sketch after this list):

  1. The user has entered a non-symmetric matrix and expects isposdef to use the general definition. This is like me now, because I expect isposdef to return true even if the matrix is not perfectly symmetric.
  2. The user has wrongly entered a non-symmetric matrix where he/she should have entered a symmetric one; in that case they probably should have used a Symmetric wrapper, or checked for symmetry separately. I think even those people would agree that a non-symmetric matrix can be positive definite, at least according to some authors as you mention, and if they don’t want their matrix treated as non-symmetric, they should go through the extra “burden” of using a Symmetric wrapper.
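
A hypothetical shape such an option could take (a sketch only, not a proposed Base API; assumes using LinearAlgebra and real matrices):

function isposdef_opt(A::AbstractMatrix{<:Real}; general::Bool = false)
    general || return isposdef(A)             # default: today's strict, cheap behavior
    return isposdef(Symmetric((A + A') / 2))  # general definition via the symmetric part
end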

An option which defaults to the cheaper behavior would make me happy, but if it defaults to the more general/correct behavior I would be even happier. I am genuinely interested what @stevengj thinks about this.

Although I’m on your side here, the opposition is formidable.

Limitation by association with quadratic forms goes back at least to Gantmacher (1950s). Springer’s Encyclopedia of Mathematics is in this camp. Gil Strang has promulgated the “restricted” version in his textbooks, which may have influenced the Julia developers.

Most others (e.g. Golub & van Loan, LAPACK, Mathematica) are careful to write “symmetric (or Hermitian) positive definite” where that applies.

Since it seems obvious to many of us that definiteness is a feature of bilinear forms without regard to symmetry, the LinAlg documentation should be clarified if your suggestions are not followed. Or maybe there should just be separate functions: ishpd, isspd, isposdef?
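
If the separate-functions route were taken, the strict variants could be thin wrappers; ishpd and isspd here are just the illustrative names from above, not existing functions (assumes using LinearAlgebra):

ishpd(A::AbstractMatrix) = ishermitian(A) && isposdef(Hermitian(A))           # Hermitian positive definite
isspd(A::AbstractMatrix{<:Real}) = issymmetric(A) && isposdef(Symmetric(A))   # symmetric positive definite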

I see, that explains the division. I don’t see a reason to lean to one side though. I am a fan of Golub & van Loan’s masterpiece and there is a very clear distinction between the concept of symmetry and the concept of definiteness in their book.

I second these suggestions.

I would be very interested in learning how non-symmetric positive (semi) definite matrices are useful in practice.

If you are solving a system of equations Ax = b and A is unsymmetric and positive definite or indefinite, this will dictate the choice of decomposition or iterative solver, so it is important to know which class A belongs to. Just because many problems have a symmetric A doesn’t mean there isn’t a whole theory for unsymmetric matrices! A quick Google search brought up Markov chains as an application that involves solving unsymmetric systems of equations.

I am somewhat skeptical about this for two reasons:

  1. Usually, I know that a matrix has certain properties from the problem setup. If I didn’t, then establishing this numerically is an error-prone endeavour, as you have found out.
  2. The methods for establishing whether a matrix is pd are themselves costly, and get you a long way towards a factorization anyway in most cases.

By your argument, one might conclude that there is no need for the isposdef function altogether, but I need it! And maybe someone will need to check the positive definiteness of some unsymmetric matrix, for example to make sure it is invertible before solving a system; there is no reason to assume that won’t happen. Yes, it’s expensive, and yes, there are probably other methods one might use as an optimization or to prove the class of a matrix analytically, but the argument here is to make isposdef a convenient tool for checking the positive definiteness of any matrix, symmetric or not, because that’s what a group of people will justifiably expect it to do.