How to use ParetoSmooth.jl

LOO is completely independent of the inference method, so there is no need to use IS() here. In fact, IS() seems to have some problems, so let's use NUTS() instead.

You can retrieve the likelihood matrix either with Turing.pointwise_loglikelihoods or with ParetoSmooth.pointwise_log_likelihoods, which formats the matrix the way ParetoSmooth expects (see the sketch after the REPL output below). But there is also a convenience function that computes the log-likelihoods for you:

julia> ch = sample(test(Xₜ, Yₜ, 1), NUTS(), MCMCThreads(), 1_000, 4);
┌ Info: Found initial step size
└   ϵ = 0.21250000000000002
┌ Info: Found initial step size
└   ϵ = 0.30000000000000004
┌ Info: Found initial step size
└   ϵ = 0.225
┌ Info: Found initial step size
└   ϵ = 0.0125
Sampling (4 threads) 100%|█████████████████████████████████████████████████████████████████| Time: 0:00:00

julia> psis_loo(test(Xₜ, Yₜ, 1), ch)
[ Info: No source provided for samples; variables are assumed to be from a Markov Chain. If the samples are independent, specify this with keyword argument `source=:other`.
┌ Warning: Some Pareto k values are extremely high (>1). PSIS will not produce consistent estimates.
└ @ ParetoSmooth ~/.julia/packages/ParetoSmooth/A6x5U/src/InternalHelpers.jl:43
Results of PSIS-LOO-CV with 4000 Monte Carlo samples and 1 data points. Total Monte Carlo SE of 0.37.
┌───────────┬───────┬──────────┬───────┬─────────┐
│           │ total │ se_total │  mean │ se_mean │
├───────────┼───────┼──────────┼───────┼─────────┤
│   cv_elpd │  5.65 │      NaN │  5.65 │     NaN │
│ naive_lpd │  8.19 │      NaN │  8.19 │     NaN │
│     p_eff │  2.53 │      NaN │  2.53 │     NaN │
└───────────┴───────┴──────────┴───────┴─────────┘
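If you prefer the explicit two-step path mentioned above, something along these lines should work. This is a minimal sketch, not output from the thread: it assumes ParetoSmooth.pointwise_log_likelihoods accepts the Turing model and chain and that psis_loo accepts the resulting log-likelihood array, and it reuses the test(Xₜ, Yₜ, 1) model and the chain ch sampled earlier.

using Turing
using ParetoSmooth

# Step 1: compute the pointwise log-likelihoods in the array layout
# that ParetoSmooth expects (one entry per observation, draw, and chain).
log_likes = ParetoSmooth.pointwise_log_likelihoods(test(Xₜ, Yₜ, 1), ch)

# Step 2: run PSIS-LOO-CV directly on the log-likelihood array
# instead of passing the model and chain to psis_loo.
result = psis_loo(log_likes)

This gives the same kind of result table as the convenience call above, but it lets you inspect or reuse the log-likelihood array yourself.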