This is a general question about understanding running times of sample() from Turing.jl when I know how long it takes to evaluate a custom log-likelihood function.
I have a custom log-likelihood function, my_loglik(), for a p-dimensional parameter (p = 7 in this specific example). Evaluating my_loglik(vec) for a specific parameter value vec takes about 0.125 seconds. I have a Bayesian model with Normal(0, 1) priors on all parameters and the custom log-likelihood my_loglik(), which I am implementing as follows:
@model function my_model(my_loglik)
    # Sample from priors
    par ~ filldist(Normal(0, 1), 7)
    # Add the custom log-likelihood to the model's joint log density
    Turing.@addlogprob!(my_loglik(par))
end

model = my_model(my_loglik)
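For context on where the time may be going: NUTS needs the gradient of the joint log density at every leapfrog step, so the relevant unit cost is arguably the time per gradient of my_loglik rather than the 0.125 s of a plain evaluation. A minimal sketch of timing that, assuming Turing's default ForwardDiff backend (this may differ by Turing version) and using a hypothetical stand-in for the real my_loglik:

```julia
using ForwardDiff  # assumption: Turing's default AD backend

# Hypothetical stand-in for the real my_loglik, just for illustration
my_loglik(par) = -sum(abs2, par) / 2

par0 = randn(7)
ForwardDiff.gradient(my_loglik, par0)        # warm-up call (compilation)
@time ForwardDiff.gradient(my_loglik, par0)  # cost of one gradient evaluation
```

If one gradient call is many times more expensive than one plain my_loglik call, that alone can account for a large share of the per-iteration time.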
Now I time the sampling with the NUTS() algorithm, using a specific initial point (an educated guess in the high-probability region), for 100 iterations:
# Run the MCMC sampler
Random.seed!(123)
N = 100
@time sample(model, NUTS(), N; init_params=init)
The code seems to be doing what it is supposed to (judging by the output), and it takes ~850 seconds (consistent over several test runs), so roughly 8.5 seconds per iteration. I find this quite slow given how long a single my_loglik evaluation takes: 8.5 s per iteration corresponds to roughly 68 plain log-likelihood evaluations' worth of time, even before accounting for any other operations.
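The back-of-the-envelope arithmetic behind that estimate, spelled out:

```julia
total_time = 850.0    # seconds for the whole run (measured)
n_iter     = 100      # number of NUTS iterations
t_loglik   = 0.125    # seconds per my_loglik call (measured)

per_iter = total_time / n_iter   # 8.5 s per iteration
evals    = per_iter / t_loglik   # 68 plain log-likelihood evaluations per iteration
```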
Is there something I can do to make the sampling faster? Or is there anything wrong with the way I am defining my_model?
Apologies if my question is too vague.