Singular Exception with LKJCholesky

This seems to happen when a divergence is hit during warm-up. A divergence can cause the parameters to fly off to infinity in unconstrained space. For the Cholesky factor, this can drive a diagonal entry to zero, at which point the covariance matrix is no longer positive definite but only positive semi-definite. PDMats.invquad then throws the error you are seeing when it calls ldiv! with the singular triangular Cholesky factor.
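The failure mode is easy to reproduce outside of Turing. Here is a minimal sketch using only LinearAlgebra: a Cholesky factor with a zero on its diagonal is singular, and solving against it throws the same kind of exception:

```julia
using LinearAlgebra

# A Cholesky factor with a zero diagonal entry: the corresponding
# covariance matrix is only positive semi-definite, not positive definite.
U = UpperTriangular([1.0 0.5; 0.0 0.0])
F = Cholesky(U)

# The triangular solve hits the zero pivot and throws, which is the
# error that PDMats.invquad surfaces via ldiv!.
err = try
    F \ [1.0, 1.0]
catch e
    e
end
println(typeof(err))  # SingularException
```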

I suspect Stan samples this just fine for at least one of two reasons:

  1. Turing's default target acceptance rate is 0.65, while Stan's is 0.8. A higher target acceptance rate forces smaller step sizes, which reduces the number of divergences in general, so probably also during warm-up.
  2. Stan may have checks so that, instead of simply throwing an error in this case, a log-density of -Inf is returned, which would be logged as a divergence.

We can try both. With NUTS(0.8), I never once saw the error across 20 runs.
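The first option is a one-line change at the call site. A minimal self-contained sketch (using a toy model here rather than the model below, just to show the API):

```julia
using Turing

# Toy model; stands in for any Turing model.
@model function demo()
    x ~ Normal(0, 1)
end

# NUTS's argument is the target acceptance rate (Stan's adapt_delta).
# Turing defaults to 0.65; matching Stan's 0.8 makes adaptation choose
# smaller step sizes, so fewer divergences during warm-up.
chain = sample(demo(), NUTS(0.8), 1_000; progress = false)
```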

Here’s how we could implement the second option:

```julia
using Turing, TransformVariables, PDMats, LinearAlgebra

@model function correlation_cholesky(J, N, y)
  sigma ~ filldist(truncated(Cauchy(0.0, 5.0); lower = 0), J)
  # LKJ work-around: sample the unconstrained parameters with a flat prior,
  # then apply the transform and its log-Jacobian correction manually.
  n, eta = J, 1.0

  trans = CorrCholeskyFactor(n)
  R_tilde ~ filldist(Flat(), dimension(trans))
  R_U, logdetJ = transform_and_logjac(trans, R_tilde)
  F = Cholesky(R_U)
  Turing.@addlogprob! logpdf(LKJCholesky(n, eta), F) + logdetJ
  Σ_L = LowerTriangular(collect((sigma .+ 1e-6) .* R_U'))
  Sigma = PDMat(Cholesky(Σ_L))

  if any(i -> iszero(Σ_L[i]), diagind(Σ_L))
    # Reject the sample: a -Inf log-density is logged as a divergence
    # instead of throwing a SingularException.
    Turing.@addlogprob! -Inf
  else
    for i in 1:N
      y[i, :] ~ MvNormal(Sigma)
    end
  end
  return (; Omega = Matrix(F))
end
```

When I sample from this model, I sometimes get numerical-error warnings, but these seem to occur only during warm-up, so no problems there.
