Hello,

I was playing with random matrices when I noticed this strange behaviour:

```
julia> using LinearAlgebra
julia> a=randn(10,11);
julia> isposdef(a*a')
true
julia> isposdef(a'*a)
false
```

I expected positive definite matrices either way, since both products can be seen as covariance matrices of random data.

Am I doing something wrong here?

Strictly speaking, this is not correct: such products are only guaranteed to be positive *semi*definite, since they can have zero eigenvalues. E.g.

```
julia> isposdef(zeros(3,3) |> x -> x'*x)
false
```

What you are seeing is probably near-zero, possibly slightly negative, eigenvalues resulting from floating-point error, which is normal.
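A quick way to see the size of the error involved (a sketch; the exact value varies per run, but the smallest eigenvalue of `a'*a` sits within roundoff of zero and its sign is essentially random):

```
julia> using LinearAlgebra

julia> a = randn(10, 11);

julia> minimum(eigvals(Symmetric(a'*a)))  # within ~1e-14 of zero, sign varies
```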

Normally, reproducing the exact matrix would be needed to investigate this kind of issue, but in this case one can easily generate examples with

```
using LinearAlgebra
global x
for _ in 1:1000
    a = randn(10, 11)
    if !isposdef(a'*a)
        @info "found one"
        global x = a
        break
    end
end
eigen(x'*x)  # observe the eigenvalues
```

`a'*a` has 0 as an eigenvalue (it has rank at most 10).
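The rank claim can be checked directly (a sketch; `rank` uses an SVD-based tolerance, so it sees through the roundoff that `isposdef` trips over):

```
julia> using LinearAlgebra

julia> a = randn(10, 11);

julia> rank(a'*a)  # at most 10, even though the matrix is 11×11
10
```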


But it seems that the smaller dimension is favored:

```
julia> using Statistics
julia> mean([ isposdef(randn(11,10) |> x->x*x') for i in 1:1000])
0.463
julia> mean([ isposdef(randn(10,11) |> x->x*x') for i in 1:1000])
1.0
```

Think of it this way: if `a = [1;0;0]`, then `a*a'` = diag(1,0,0), which is not positive definite (but is semidefinite). However, `a'*a = 1`, which is positive definite. In general, if `a` is N × M, then `a*a'` has rank at most M.
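That toy example can be checked at the REPL (a sketch; here diag(1,0,0) means the 3×3 matrix with a single 1 in the top-left corner):

```
julia> using LinearAlgebra

julia> a = [1.0; 0.0; 0.0];

julia> isposdef(a*a')  # 3×3, rank 1: only semidefinite
false

julia> a'*a            # scalar 1, positive definite
1.0
```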


Thanks! I forgot about that.

And numerical error is not how you get `!isposdef`. In fact, it's the other way around: the reason you get a positive definite `a'*a` at all is that numerical error shifts the `0` eigenvalue to the positive side…
