Matrix inverse and multivariate normal distribution

Hello, I am fitting a spatio-temporal Bayesian model with covariates in Julia. I have a regression vector with prior distribution \vec{\beta}_t \sim \mathcal{N}_p(\vec{b}_0, s^2 I) (where p is the number of covariates and I is the p \times p identity matrix). At every fitting iteration it has to be updated (as do the other parameters of the model). The updating rule derives from the full conditional, which corresponds to the following computation:
f(\vec{\beta}_t|-) \propto \exp \left\{ -\frac{1}{2} \left[ \vec{\beta}_t^T \underbrace{\left( \frac{1}{s^2}I + \sum_i \frac{\vec{x}_{it}\vec{x}_{it}^T}{\sigma^2_{it}}\right)}_{A_\star}\vec{\beta}_t - 2 \underbrace{\left( \frac{\vec{b}_0}{s^2} + \sum_i \frac{\text{stuff}_{it} \vec{x}_{it}}{\sigma^2_{it}}\right)^T}_{\vec{b}_\star^T} \vec{\beta}_t \right] \right\}
which is proportional to

\propto \exp \left\{ -\frac{1}{2} (\vec{\beta}_t - A_\star^{-1}\vec{b}_\star)^T A_\star (\vec{\beta}_t - A_\star^{-1}\vec{b}_\star) \right\}
which is the kernel of another multivariate Gaussian with mean A_\star^{-1}\vec{b}_\star and covariance matrix the inverse (sadly) of A_\star; you can see from the derivation above why the inverse appears.
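The intermediate step is the usual completing of the square,

\vec{\beta}_t^T A_\star \vec{\beta}_t - 2\vec{b}_\star^T \vec{\beta}_t = (\vec{\beta}_t - A_\star^{-1}\vec{b}_\star)^T A_\star (\vec{\beta}_t - A_\star^{-1}\vec{b}_\star) - \vec{b}_\star^T A_\star^{-1} \vec{b}_\star,

where the last term is constant in \vec{\beta}_t and gets absorbed into the proportionality, so \vec{\beta}_t|- \sim \mathcal{N}_p(A_\star^{-1}\vec{b}_\star,\, A_\star^{-1}).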

That math translates into the following code:

using LinearAlgebra, Distributions

sum_Y = zeros(p)
A_star = I(p) / s2_beta            # prior precision (1/s^2) * I
for j in 1:n
	X_jt = @view Xlk_covariates[j, :, t]
	# data contribution to b_star: residual * x / sigma^2
	sum_Y += (Y[j,t] - muh_iter[Si_iter[j,t],t] - eta1_iter[j]*Y[j,t-1]) * X_jt / sig2h_iter[Si_iter[j,t],t]
	# data contribution to A_star: outer product x*x' / sigma^2
	A_star += (X_jt * X_jt') / sig2h_iter[Si_iter[j,t],t]
end
b_star = beta0 / s2_beta + sum_Y
Am1_star = inv(Symmetric(A_star))
# Symmetric needed to solve an inversion and hermitian problem
# https://discourse.julialang.org/t/isposdef-and-eigvals-do-not-agree/118191/10
beta_iter[t] = rand(MvNormal(Am1_star * b_star, Am1_star))

which would work fine, were it not for the required matrix inversion, which appears to be a major problem. Here is, for example, an A_star matrix obtained during the fit, its condition number, and the product with its computed inverse:

A_star = [89.53344398111983 18.907579544068415 -22.32917044622947 -26.313826364434874; 18.907579544068415 34.821884016362716 -18.103513531335015 -30.76131476315723; -22.32917044622947 -18.103513531335015 199.9111114786475 21.85563450347075; -26.313826364434874 -30.76131476315723 21.85563450347075 75.35950798333245]
#  89.5334   18.9076  -22.3292  -26.3138
#  18.9076   34.8219  -18.1035  -30.7613
# -22.3292  -18.1035  199.911    21.8556
# -26.3138  -30.7613   21.8556   75.3595
cond(A_star)
# 12.015947669881697, which doesn't seem too bad to me
A_star * inv(A_star)   # should be the identity matrix
#  0.130423    -0.0446254   0.00778601   0.0250668
# -0.0446254    0.47336     0.0190656    0.172111
#  0.00778601   0.0190656   0.0531558   -0.00491498
#  0.0250668    0.172111   -0.00491498   0.21313

So I was looking for some paths to avoid that inversion.

I don’t actually know much, but does this help:

One particular use of the precision matrix is in the context of Bayesian analysis of the multivariate normal distribution: for example, Bernardo & Smith prefer to parameterise the multivariate normal distribution in terms of the precision matrix, rather than the covariance matrix, because of certain simplifications that then arise.[10] For instance, if both the prior and the likelihood have Gaussian form, and the precision matrix of both of these exist (because their covariance matrix is full rank and thus invertible), then the precision matrix of the posterior will simply be the sum of the precision matrices of the prior and the likelihood.
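If I read your derivation right, that is exactly what your A_\star already is:

A_\star = \underbrace{\frac{1}{s^2}I}_{\text{prior precision}} + \underbrace{\sum_i \frac{\vec{x}_{it}\vec{x}_{it}^T}{\sigma^2_{it}}}_{\text{likelihood precision}}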

The MvNormalCanon distribution is parameterized in terms of the precision matrix (inverse covariance).
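So something like this should let you skip the inversion entirely (an untested sketch; the A_star and b_star below are random stand-ins for the ones you accumulate in your loop):

using LinearAlgebra, Distributions

# Stand-in values, only for the sketch; in the sampler these would be the
# A_star and b_star built in the loop over j.
p = 4
X = randn(10, p)
A_star = Symmetric(X' * X + I)   # a positive-definite precision matrix
b_star = randn(p)

# MvNormalCanon(h, J) takes the precision matrix J and the potential vector
# h = J * mu, so its mean is J \ h and its covariance is inv(J). The draw
# below has the same distribution as
# rand(MvNormal(inv(A_star) * b_star, inv(A_star)))
# but never forms inv(A_star) explicitly.
beta_draw = rand(MvNormalCanon(b_star, A_star))

In your code the last two lines would collapse to beta_iter[t] = rand(MvNormalCanon(b_star, Symmetric(A_star))).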
