Now that LKJCholesky is supported in Turing v0.28.2 (see here), I'm trying to adapt the example in this post to replace all of the TransformVariables and PDMats code with the LKJCholesky distribution. Below is the code using the LKJ prior, which is hit or miss when sampling. Would you mind showing how to modify it to use LKJCholesky instead?
using Turing, Distributions, LinearAlgebra, Random
Random.seed!(123)
# generate data
sigma = [1,2,3]
Omega = [1   0.3 0.2;
         0.3 1   0.1;
         0.2 0.1 1]
Sigma = Diagonal(sigma) * Omega * Diagonal(sigma)
N = 100
J = 3
y = rand(MvNormal(Sigma), N)'
# model
@model correlation(J, N, y) = begin
    sigma ~ filldist(truncated(Cauchy(0., 5.), lower = 0), J) # prior on the standard deviations
    Omega ~ LKJ(J, 1.0) # LKJ prior on the correlation matrix
    Sigma = Symmetric(Diagonal(sigma) * Omega * Diagonal(sigma))
    for i in 1:N
        y[i, :] ~ MvNormal(Sigma) # sampling distribution of the observations
    end
    return Sigma
end
# samples about half the time, but otherwise fails with ERROR: PosDefException: matrix is not positive definite; Cholesky factorization failed.
chain = sample(correlation(J, N, y), HMC(0.01, 5), 1000)
# fails more often than not with the same ERROR: PosDefException: matrix is not positive definite; Cholesky factorization failed.
chain = sample(correlation(J, N, y), NUTS(), 1000)
I've attempted modifying it as follows, but I'm fairly certain this isn't correct:
@model correlation_chol(J, N, y) = begin
    sigma ~ filldist(truncated(Cauchy(0., 5.), lower = 0), J) # prior on the standard deviations
    Omega ~ LKJCholesky(J, 1.0)
    Sigma = Symmetric(Diagonal(sigma) * Omega.L)
    for i in 1:N
        y[i, :] ~ MvNormal(Sigma) # sampling distribution of the observations
    end
    return Sigma
end
# same ERROR: PosDefException: matrix is not positive definite; Cholesky factorization failed.
chain2 = sample(correlation_chol(J, N, y), NUTS(), 1000)
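For context, here is the direction I think the fix should take. This is only a sketch based on my reading of the docs, not something I've verified: it assumes PDMats is loaded, and it relies on the `Cholesky(L::LowerTriangular)` constructor from LinearAlgebra and on `PDMat` accepting a `Cholesky`. The key idea is that `LKJCholesky` returns the Cholesky factor `F` of the correlation matrix, so the Cholesky factor of `Sigma = D * Omega * D` (with `D = Diagonal(sigma)`) is just `D * F.L`, and `MvNormal` can reuse that factorization instead of re-factorizing a dense matrix (which is what throws the PosDefException when rounding pushes it slightly off positive definite):

```julia
using Turing, Distributions, LinearAlgebra, PDMats, Random

Random.seed!(123)
# generate data (same setup as above)
sigma_true = [1, 2, 3]
Omega_true = [1   0.3 0.2;
              0.3 1   0.1;
              0.2 0.1 1]
Sigma_true = Diagonal(sigma_true) * Omega_true * Diagonal(sigma_true)
N, J = 100, 3
y = rand(MvNormal(zeros(J), Sigma_true), N)'

@model function correlation_chol(J, N, y)
    sigma ~ filldist(truncated(Cauchy(0.0, 5.0); lower = 0), J)
    # F is a LinearAlgebra.Cholesky object, not a plain matrix
    F ~ LKJCholesky(J, 1.0)
    # if Omega = F.L * F.L', then Sigma = D * Omega * D has
    # Cholesky factor D * F.L, so build Sigma from that factor directly
    Sigma_L = LowerTriangular(Matrix(Diagonal(sigma) * F.L))
    Sigma = PDMat(Cholesky(Sigma_L))  # no re-factorization of a dense matrix
    for i in 1:N
        y[i, :] ~ MvNormal(zeros(J), Sigma)
    end
    return Sigma
end

chain2 = sample(correlation_chol(J, N, y), NUTS(), 1000)
```

If this is roughly right, I'd also like to know whether the `Matrix(...)` copy around `Diagonal(sigma) * F.L` is needed, or whether the triangular product can be passed to `Cholesky` as-is.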