I have constructed a Gram matrix from a Gaussian kernel function and tried to draw some samples from a Gaussian process with zero mean and the Gram matrix as its covariance.
However, MvNormal throws a "matrix is not positive definite" error because some of the eigenvalues are very small but negative. I believe this is due to floating-point round-off, but I have no idea how to proceed. Adding a small perturbation to the Gram matrix seems to circumvent the issue, but then the next question is: what would be a good rule of thumb for how small that perturbation should be?
What would be the typical solution to this problem?
using Distributions, LinearAlgebra, Plots
k(x,y) = exp(-(x-y)^2)
N = 100
x = range(-1,1, length=N)
K = [k(x[i], x[j]) for i=1:N, j=1:N]
# K += 1e-6*I makes K positive definite and lets the calculation proceed...
# (note: 10^-6 would throw a DomainError for integer exponents; use 1e-6)
z = MvNormal(zeros(N), K)
r = rand(z, 10)
p = plot()
for c in eachcol(r)
plot!(x, c)
end
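Regarding the "how small should the jitter be" question in the original post: a common heuristic (an assumption on my part, not an official rule) is to start from machine epsilon times the scale of the diagonal and grow the jitter geometrically until the Cholesky factorization succeeds. A sketch, with `jitter_chol` being a hypothetical helper name:

```julia
using LinearAlgebra

# Try cholesky with increasing jitter until it succeeds.
# Starting point: eps scaled by the largest diagonal entry (the largest
# possible eigenvalue scale for a kernel matrix with unit diagonal).
function jitter_chol(K; jitter = eps(eltype(K)) * maximum(diag(K)), maxtries = 10)
    for _ in 1:maxtries
        F = cholesky(Symmetric(K + jitter * I); check = false)
        issuccess(F) && return F, jitter
        jitter *= 10           # grow geometrically until factorization succeeds
    end
    error("could not make K positive definite within $maxtries tries")
end
```

You can then pass `K + jitter * I` to `MvNormal`, knowing the perturbation is close to the smallest one that works rather than a hard-coded `1e-6`.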
Another natural solution is tweaking the Cholesky factorization routine so that, when it encounters a non-positive diagonal element A_{k,k}, it replaces A_{k:n, k:n} with zeros and stops. This still lets you draw samples from the distribution, since all you need is a low-rank factor L with K ≈ LLᵀ (even if it is not strictly positive definite).
This approach has the benefit that it does not require choosing a 'magic' threshold value, and that it also works when your variables are linearly dependent and the covariance matrix is only positive semidefinite.
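A minimal sketch of that early-stopping factorization (hand-rolled here for illustration; it is not a routine from the standard library):

```julia
using LinearAlgebra

# Outer-product Cholesky that stops at the first non-positive pivot and
# returns the columns computed so far as a (possibly low-rank) factor L,
# so that L * L' ≈ K.
function truncated_cholesky(K)
    A = Matrix(float.(K))
    n = size(A, 1)
    L = zeros(eltype(A), n, n)
    for k in 1:n
        pivot = A[k, k] - sum(abs2, @view L[k, 1:k-1])
        pivot <= 0 && return L[:, 1:k-1]   # treat the rest as zero and stop
        L[k, k] = sqrt(pivot)
        for i in k+1:n
            L[i, k] = (A[i, k] - dot(@view(L[i, 1:k-1]), @view(L[k, 1:k-1]))) / L[k, k]
        end
    end
    return L
end
```

Sampling then bypasses `MvNormal` entirely: if `L` has `r` columns, `L * randn(r)` has covariance `L * L' ≈ K`.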
Thank you for your advice. Wow, that's a very beautiful idea.
Although the standard library does not perform such a trick, it's a great comfort to learn that there is a way to remove the need for a magic threshold.
I'm not sure if it works in this case, but you can sometimes get around this by writing your matrix in a form that forces the numerical computations to give you a posdef matrix. That often isn't worth the effort, though, given how well @fph's solution works.
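One concrete instance of the "posdef by construction" idea, for this particular kernel (an illustration of the suggestion above, not necessarily what was meant): approximate the Gaussian kernel with random Fourier features, so that K ≈ ΦΦᵀ, which is positive semidefinite exactly by construction. For k(x,y) = exp(-(x-y)²) the frequencies are drawn from N(0, 2). The helper name `rff_features` is my own:

```julia
using LinearAlgebra

# Random Fourier feature matrix Φ (N×D) such that Φ*Φ' ≈ K for
# k(x, y) = exp(-(x - y)^2). Larger D gives a better approximation.
function rff_features(x, D)
    w = sqrt(2) .* randn(D)                    # frequencies ~ N(0, 2)
    b = 2π .* rand(D)                          # random phases ~ U(0, 2π)
    return sqrt(2 / D) .* cos.(x * w' .+ b')   # feature matrix Φ
end
```

Sampling is then `Φ * randn(D)`, whose covariance ΦΦᵀ is a Gram matrix and hence never fails a positive-(semi)definiteness check, at the cost of working with an approximation of K.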