# Issue with MvNormal and Turing

I’m trying to recreate the model from chapter 5.1 (inferring a correlation) of the book Bayesian Cognitive Modeling: A Practical Course using Turing. However, I keep running into an issue with MvNormal: it rejects the covariance matrix, either reporting that it is not positive semi-definite and that the Cholesky factorization has failed, or throwing a LinearAlgebra.SingularException(2).

The code to reproduce these errors:

```julia
using Turing, Distributions, Random, Statistics
using Plots, StatsPlots, MCMCChains

dataset1 = [0.8 102; 1 98; 0.5 100; 0.9 105; 0.7 103; 0.4 110; 1.2 99; 1.4 87; 0.6 113; 1.1 89; 1.3 93]

dataset2 = [dataset1; dataset1]

println("Number of observations in dataset 1: ", size(dataset1, 1))
println("Number of observations in dataset 2: ", size(dataset2, 1))

# scatter(dataset1[:, 1], dataset1[:, 2], xlabel="Response Time (sec)", ylabel="IQ", title="Dataset 1", legend=false)
```

```julia
@model corr_model(x) = begin
    # Priors
    r ~ Uniform(-1, 1)
    mu ~ MvNormal(zeros(2), 1/sqrt(0.001)) # performance tip from the Turing tutorials

    lambda = Vector{Real}(undef, 2)
    lambda[1] ~ Gamma(0.001, 0.001)
    lambda[2] ~ Gamma(0.001, 0.001)
    sigma = 1 ./ sqrt.(lambda)

    Σ = [1/lambda[1]         r*sigma[1]*sigma[2];
         r*sigma[1]*sigma[2] 1/lambda[2]]

    # Likelihood
    for i = 1:size(x, 1)
        x[i, :] ~ MvNormal(mu, Σ)
    end
end

model1 = corr_model(dataset1)
chain1 = sample(model1, NUTS(), 5000, progress=true)

density(chain1[:r], xlabel="Correlation", ylabel="Posterior Density", legend=false)
```


Any help would be appreciated.


I’m currently translating all the models from this book. Are you doing the same thing, or just focussing on that one example?

I’m just focusing on that one example. Looking at your problem, it seems that you were originally using standard deviations instead of variances, and once you fixed that it worked for you?

Yep. The Normal distribution in JAGS is parameterised by mean and precision, but in most other approaches it’s by mean and std. So I’ve just been setting priors over std instead of exactly duplicating what’s in the text.
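Concretely, the reparameterisation is just σ = 1/√τ. A minimal sketch (assuming only Distributions.jl; the numbers are the vague prior from the model above):

```julia
using Distributions

# JAGS/WinBUGS: dnorm(mu, tau) is parameterised by precision tau.
# Distributions.jl: Normal(mu, sigma) is parameterised by standard deviation.
tau = 0.001              # the "vague" precision used in the book's model
sigma = 1 / sqrt(tau)    # equivalent standard deviation, ≈ 31.6

d = Normal(0, sigma)
var(d)                   # ≈ 1/tau = 1000
```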

This is my full (and working) code. You can probably come up with more sensible priors than uniform over std but…

```julia
using Turing, StatsPlots

x = [0.8 102; 1 98; 0.5 100; 0.9 105; 0.7 103; 0.4 110; 1.2 99; 1.4 87; 0.6 113; 1.1 89; 1.3 93]

@model function pearson_correlation(x)
    rows = size(x, 1)
    # priors
    μ₁ ~ Normal(0, 100)
    μ₂ ~ Normal(0, 100)
    σ₁ ~ Uniform(0, 100)
    σ₂ ~ Uniform(0, 100)
    r ~ Uniform(-0.99, 0.99)
    Σ = [σ₁*σ₁   r*σ₁*σ₂;
         r*σ₁*σ₂ σ₂*σ₂]
    # likelihood
    for i = 1:rows
        x[i, :] ~ MvNormal([μ₁, μ₂], Σ)
    end
end

chain = sample(pearson_correlation(x), NUTS(), 10000)
```
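As a quick sanity check on the output (my own addition, not from the thread): the posterior for r should concentrate near the classical sample correlation of the two columns.

```julia
using Statistics

# Same dataset as above: response times vs IQ
x = [0.8 102; 1 98; 0.5 100; 0.9 105; 0.7 103; 0.4 110; 1.2 99; 1.4 87; 0.6 113; 1.1 89; 1.3 93]

cor(x[:, 1], x[:, 2])   # ≈ -0.81, so the posterior density of r should peak near there
```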


Thanks for the link, Mohamed. It’s the same problem set up in a different way. Unfortunately, I still can’t see why my model is causing the error in the covariance matrix.

Perhaps convert the working code step by step into what you have, to see which bit causes the issue?

I suspect the issue is the Gamma priors here. If you check rand(Gamma(0.001, 0.001)), you get a lot of very small or underflow-to-zero results. The Distributions.jl Gamma distribution is parameterized with shape and scale (i.e. the k and \theta parameterization listed on the Wikipedia page). You probably want your lambda priors to be Gamma(0.001, 1 / 0.001) so that it has mean one.
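To see this concretely (a quick sketch with Distributions.jl; the seed is arbitrary):

```julia
using Distributions, Random

Random.seed!(123)  # arbitrary seed, for reproducibility only

# Shape 0.001, scale 0.001: the mean is shape*scale = 1e-6, and roughly
# half the probability mass lies below the smallest positive Float64,
# so many draws underflow to exactly 0.0; a zero precision then breaks Σ.
draws = rand(Gamma(0.001, 0.001), 10_000)
count(iszero, draws) > 0       # true

# Shape 0.001, scale 1/0.001: the mean is 1, matching the usual
# vague-precision intent.
mean(Gamma(0.001, 1 / 0.001))  # ≈ 1.0
```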

I think I am one step closer to a solution, and @jkbest2 has the right idea. Distributions.jl parameterizes the Gamma distribution with shape α and scale θ, while WinBUGS uses shape α and rate β = 1/θ. So Gamma(0.001, 0.001) in WinBUGS corresponds to Gamma(0.001, 1000) in Julia, which has mean αθ = 1.

But now I often get warnings about numerical errors when I run this model.

```julia
@model function corr_model(x, ::Type{T}=Float64) where T
    # Priors
    r ~ Uniform(-0.999, 0.999)
    # mu ~ MvNormal(zeros(2), 1/sqrt(0.001))
    mu = Vector{T}(undef, 2)
    mu[1] ~ Normal(0, 1/sqrt(0.001))
    mu[2] ~ Normal(0, 1/sqrt(0.001))

    lambda = Vector{T}(undef, 2)
    lambda[1] ~ Gamma(0.001, 1000)
    lambda[2] ~ Gamma(0.001, 1000)
    sigma = 1 ./ sqrt.(lambda)

    Σ = [1/lambda[1]         r*sigma[1]*sigma[2];
         r*sigma[1]*sigma[2] 1/lambda[2]]

    # Likelihood
    for i = 1:size(x, 1)
        x[i, :] ~ MvNormal(mu, Σ)
    end
end
```
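A note on where those numerical-error warnings likely come from (my own sketch, not from the model above): with shape 0.001, the Gamma prior puts roughly half its mass below the smallest positive Float64 regardless of the scale, so sampled precisions can underflow to exactly zero, and the resulting Σ contains Inf entries that no Cholesky factorization can handle. A hypothetical extreme draw shows the failure mode:

```julia
using Distributions

# Roughly half the mass of a Gamma with shape 0.001 lies below the
# smallest positive Float64, so draws often underflow to exactly 0.0:
any(iszero, rand(Gamma(0.001, 1), 10_000))    # true (with near certainty)

# A zero precision then makes the covariance matrix invalid:
r, lambda = 0.5, [0.0, 1.0]                   # hypothetical extreme draw
sigma = 1 ./ sqrt.(lambda)                    # sigma[1] == Inf
Σ = [1/lambda[1]         r*sigma[1]*sigma[2];
     r*sigma[1]*sigma[2] 1/lambda[2]]
all(isfinite, Σ)                              # false: MvNormal will reject this Σ
```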