What is the fastest way to check if a hermitian matrix is positive semi-definite?

Hello, I would like to check whether a Hermitian matrix is positive semi-definite (i.e. its smallest eigenvalue is ≥ 0). The simplest implementation would be

using LinearAlgebra

is_positive(A::Hermitian) = eigmin(A) ≥ 0

and running

using BenchmarkTools

A = hermitianpart(rand(ComplexF64, 6, 6))
@benchmark is_positive($A)

gives me

BenchmarkTools.Trial: 10000 samples with 9 evaluations.
 Range (min … max):  2.232 μs … 119.110 μs  ┊ GC (min … max): 0.00% … 87.23%
 Time  (median):     2.303 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   2.551 μs ±   3.433 μs  ┊ GC (mean ± σ):  4.06% ±  2.99%

  ▅██▆▅▄▂▁               ▂▂▂▂▂▂▁▁                             ▂
  ██████████▇▇▆▇▇▇▇▇▇▇▇▇██████████▇▇▆▅▅▄▅▅▅▄▅▅▅▄▅▆▆▆▅▆▅▅▅▄▅▃▄ █
  2.23 μs      Histogram: log(frequency) by time      3.93 μs <

 Memory estimate: 6.12 KiB, allocs estimate: 12.

Is it possible to do any better? Right now I’m only interested in matrices of size at most 6×6, but it would be nice to have a version of the function that is also optimized for larger sizes.

This is probably related (replace largest with smallest):

As a side note, for small arrays, you should get a significant speedup with StaticArrays.jl

No speedup when using KrylovKit, as suggested in that post. Also no speedup with StaticArrays.

I noticed that there is an extremely fast isposdef in the LinearAlgebra package, but it checks whether the matrix is positive definite, not semi-definite.

As a comparison, here is the benchmark of isposdef with the same 6×6 matrix, from a call along the lines of
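@benchmark isposdef($A)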

BenchmarkTools.Trial: 10000 samples with 909 evaluations.
 Range (min … max):  118.406 ns … 985.947 ns  ┊ GC (min … max): 0.00% … 76.71%
 Time  (median):     135.907 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   145.978 ns ±  60.921 ns  ┊ GC (mean ± σ):  3.73% ±  7.44%

  ▅██▅▃▃▃▂                                                      ▂
  █████████▇▆▆▆▅▇▇▆▆▆▅██▇▅▃▅▅▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▄▅▄▄▅▅▆ █
  118 ns        Histogram: log(frequency) by time        583 ns <

 Memory estimate: 672 bytes, allocs estimate: 1.

Finding the largest eigenvalue/eigenvector (in absolute value) using the power method is fast. If this eigenvalue, t, is positive, subtract t*I from the matrix and find the eigenvalue of largest magnitude again with the power method. If the matrix was positive definite, it should be smaller than the previous t.

The logic of this is pretty simple, given that the eigenvalues are real. As noted, numerical issues make this determination delicate unless there is a good gap between 0 and the smallest eigenvalue.

(based on intuition and a Stack Overflow answer)
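A minimal sketch of this idea (the fixed iteration count, the random starting vector, and the tolerance are arbitrary choices, and degenerate cases such as the zero matrix are not handled):

using LinearAlgebra

# Rayleigh-quotient estimate of the eigenvalue of largest magnitude.
function dominant_eig(A; iters = 200)
    x = randn(eltype(A), size(A, 1))
    for _ in 1:iters
        x = A * x
        x ./= norm(x)
    end
    return real(dot(x, A * x))
end

function is_psd_power(A::Hermitian; tol = sqrt(eps(real(eltype(A)))) * norm(A))
    t = dominant_eig(A)
    t < -tol && return false     # the dominant eigenvalue is already negative
    s = dominant_eig(A - t * I)  # converges to λ_min - t, which is ≤ 0
    return t + s ≥ -tol          # i.e. λ_min ≥ -tol
end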

Numerically, the difference between ≥ 0 and > 0 is rather blurred anyway. Can you maybe use isposdef(A + x*I) for some really small x?
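For instance (a sketch; the scale of the shift here is just a guess tied to the matrix norm, and the exactly-zero matrix would need special-casing):

using LinearAlgebra

# Accept A as semidefinite when a slightly shifted copy is positive definite.
is_psd_shift(A::Hermitian; rtol = sqrt(eps(real(eltype(A))))) =
    isposdef(A + rtol * norm(A) * I)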


I’d try a Cholesky decomposition.
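A sketch of that route, with check = false so a failed factorization is reported via issuccess instead of an exception (the shift mirrors the suggestion above; this is essentially what isposdef does internally, but it keeps the factor around in case you want to reuse it):

using LinearAlgebra

function is_psd_chol(A::Hermitian; rtol = sqrt(eps(real(eltype(A)))))
    C = cholesky(A + rtol * norm(A) * I; check = false)  # no exception on failure
    return issuccess(C)  # true iff the shifted matrix is positive definite
end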


Just to set expectations: in the presence of numerical errors, the most you can expect to do is to find out whether A is close to a matrix that is positive semidefinite. Even if you did have an exactly positive semidefinite matrix, as soon as you do any computations you at best get results with a small backward error, i.e. results for a matrix near A, and you are quite likely to see small negative eigenvalues. So in practice you might want to make sure there are no negative eigenvalues of magnitude much larger than the unit roundoff (or much larger than whatever errors might have already been in your matrix).

Pivoted Cholesky can be used to compute a factorization of a semidefinite matrix, and if that succeeds, you know that the matrix was close to being semidefinite. But it’s a little tricky to use: it bails out at the first negative pivot and isn’t stable on indefinite matrices. I don’t think you can conclude much from the last pivot computed if it’s possible that your matrix was really indefinite, although if the factorization closely approximates A, you certainly know that A is close to semidefinite.
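One way to make that last check concrete (a sketch, assuming a Julia version with RowMaximum() pivoting; the tolerance is an arbitrary choice): truncate the factor at the reported rank and compare the product against A.

using LinearAlgebra

function is_near_psd_pivoted(A::Hermitian; rtol = sqrt(eps(real(eltype(A)))))
    C = cholesky(A, RowMaximum(); check = false)  # pivoted Cholesky; may stop early
    L1 = C.L[:, 1:C.rank]                         # columns computed before any bailout
    # If the truncated factorization reproduces A (up to the permutation C.p),
    # then A is close to a semidefinite matrix.
    return norm(A[C.p, C.p] - L1 * L1') ≤ rtol * norm(A)
end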

I think it’s simpler to interpret the results if you compute all the pivots in a way that is stable and reasonably rank-revealing on general Hermitian matrices, so that you have complete information about the inertia of your matrix. A fairly reliable and fast way to do that is LDL^T with rook pivoting. The bunchkaufman function has an option for rook pivoting:

julia> B = randn(6, 4); A = B*B';

julia> bunchkaufman(A, true)
BunchKaufman{Float64, Matrix{Float64}, Vector{Int64}}
D factor:
6×6 Tridiagonal{Float64, Vector{Float64}}:
 2.92642e-16   0.0           ⋅         ⋅         ⋅        ⋅ 
 0.0          -5.88677e-16  0.0        ⋅         ⋅        ⋅ 
  ⋅            0.0          0.121139  0.0        ⋅        ⋅ 
  ⋅             ⋅           0.0       0.534579  0.0       ⋅ 
  ⋅             ⋅            ⋅        0.0       7.60426  0.0
  ⋅             ⋅            ⋅         ⋅        0.0      0.966522
U factor:
6×6 UnitUpperTriangular{Float64, Matrix{Float64}}:
 1.0  -0.651647  -0.837972  -0.65639   -0.0999792   0.0403725
  ⋅    1.0        0.590353  -0.696288   0.699611    0.116026
  ⋅     ⋅         1.0       -0.127138   0.435091   -1.12129
  ⋅     ⋅          ⋅         1.0       -0.343945    0.981844
  ⋅     ⋅          ⋅          ⋅         1.0        -0.630295
  ⋅     ⋅          ⋅          ⋅          ⋅          1.0
permutation:
6-element Vector{Int64}:
 1
 2
 3
 4
 5
 6

So in this case I got a factorization of a matrix that turned out to be indefinite, but just barely.

Rook pivoting has a decent degree of reliability in revealing rank and is robust enough that A is close to semidefinite if and only if all nonnegligible diagonal elements of D are positive, where it’s up to you to define “negligible”. If A is positive definite by some larger margin, you won’t get any negligible elements. It’s also possible to get 2×2 blocks on the diagonal of D; the inertia includes the signs of the eigenvalues of those blocks. So there is some potential inconvenience in handling 2×2 blocks (a sketch of the block scan follows below), but it would be significantly faster than computing eigenvalues for larger matrices. I don’t know about something as small as 6×6, however.
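For illustration, a sketch of that block scan over D (the threshold for "negligible" is an arbitrary placeholder; with check = false an exactly singular D still comes back, which is fine for a semidefiniteness test):

using LinearAlgebra

function is_near_psd_bk(A::Hermitian; rtol = sqrt(eps(real(eltype(A)))))
    F = bunchkaufman(A, true; check = false)  # rook pivoting
    D = F.D                                   # tridiagonal; block diagonal with 1×1 and 2×2 blocks
    tol = rtol * norm(A)                      # your definition of "negligible"
    n, k = size(D, 1), 1
    while k ≤ n
        if k < n && !iszero(D[k+1, k])        # 2×2 block: check both of its eigenvalues
            any(real.(eigvals(Matrix(D[k:k+1, k:k+1]))) .< -tol) && return false
            k += 2
        else                                  # 1×1 block: the pivot itself
            real(D[k, k]) < -tol && return false
            k += 1
        end
    end
    return true
end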
