I have a model in Turing that is exceptionally slow, and I'm hoping for some help speeding it up. I'm new to Turing, so any pointers are welcome.
My model is
using Turing, LinearAlgebra

@model function many_linreg(X, y; predictors=size(X, 2))
    # priors
    α ~ Normal(0, sqrt(250))
    β ~ filldist(Uniform(0, 1e8), predictors)
    σ ~ Uniform(0, 1e4)
    N = length(y)
    # likelihood: each y[i] gets the same MvNormal with mean α .+ X * β
    for i in 1:N
        y[i] ~ MvNormal(α .+ X * β, σ^2 * I)
    end
end
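For context, model2 in the sampling call below is just this model applied to my data, roughly along these lines (Q_ast here stands for my QR-transformed predictor matrix, described further down; the exact preprocessing isn't shown):

# rough sketch only; Q_ast is the QR-transformed predictor matrix (see below)
model2 = many_linreg(Q_ast, y)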
I sample it with
chain2 = sample(model2, NUTS(), MCMCThreads(), 1_000, 20, adtype=AutoReverseDiff(true))
I'd like to run this eventually on ~30k observation vectors, but I can't get it to run in a reasonable time on even 5k: sampling just sits at 0% completion after the initial step sizes are found.
I have tried the common tricks I know of to speed things up. For instance, I'm using a QR-decomposed matrix of predictors and reverse-mode differentiation with a compiled tape. I tried to vectorize the for loop, but I get errors when I call filldist on it. Should I be using arraydist over single Normals instead? A rough sketch of what I mean is below.
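To make that concrete, here is roughly the preprocessing and the vectorized likelihood I have in mind. The toy data and the names Q_ast, R_ast, and many_linreg_vec are just for illustration, and I'm not sure the arraydist line is the right way to write it:

using Turing, LinearAlgebra

# toy data only so the sketch stands alone
X = randn(100, 3)
y = X * [2.0, 3.0, 4.0] .+ randn(100)

# the QR reparameterization I'm using for the predictors
Q, R = qr(X)
Q_ast = Matrix(Q) * sqrt(size(X, 1) - 1)
R_ast = R / sqrt(size(X, 1) - 1)

# the vectorized likelihood I was aiming for: one MvNormal over the whole
# response vector instead of the per-observation loop above
@model function many_linreg_vec(X, y; predictors=size(X, 2))
    α ~ Normal(0, sqrt(250))
    β ~ filldist(Uniform(0, 1e8), predictors)
    σ ~ Uniform(0, 1e4)
    y ~ MvNormal(α .+ X * β, σ^2 * I)
    # or, the arraydist version over single Normals that I asked about:
    # y ~ arraydist(Normal.(α .+ X * β, σ))
end

model_vec = many_linreg_vec(Q_ast, y)

Again, this is just a sketch of the direction I mean, not code I have working.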