My question concerns Bayesian model comparison and, by extension, model selection.
I have built 3 alternative models (each sampled with NUTS(0.65), 4 chains, 8000 draws) and fitted them to my data with the Turing.jl library.
However, I could not find a way to compute LOO diagnostics within Turing itself. Based on some discussions I found online, I implemented the following for each model via the ArviZ.jl interface:
```julia
ℓ = Turing.pointwise_loglikelihoods(param_mod, chains)
ℓ_mat = reduce(hcat, values(ℓ))
ℓ_arr = reshape(ℓ_mat, 1, size(ℓ_mat)...)  # (chain_idx, sample_idx, parameter_idx)
data = ArviZ.from_mcmcchains(
    chains;
    library = "Turing",
    log_likelihood = Dict("y" => ℓ_arr),
)
criterion = ArviZ.loo(data)
```
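Since I repeat these steps for all three models, I wrapped them in a small helper (the function name and the `param_mod1`/`chains1` variables are my own; the body only uses the calls shown above):

```julia
# My own wrapper around the per-model PSIS-LOO computation above.
function model_loo(param_mod, chains)
    # Pointwise log-likelihoods per observed variable, as a Dict.
    ℓ = Turing.pointwise_loglikelihoods(param_mod, chains)
    ℓ_mat = reduce(hcat, values(ℓ))
    # Add a leading chain axis: (chain_idx, sample_idx, parameter_idx).
    ℓ_arr = reshape(ℓ_mat, 1, size(ℓ_mat)...)
    data = ArviZ.from_mcmcchains(
        chains;
        library = "Turing",
        log_likelihood = Dict("y" => ℓ_arr),
    )
    return ArviZ.loo(data)
end

criterion1 = model_loo(param_mod1, chains1)  # hypothetical per-model names
```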
Finally, I built a Dict and fed it into the compare function:
```julia
compare_dict = Dict(
    "model 1" => criterion1,
    "model 2" => criterion2,
    "model 3" => criterion3,
)
compare("compare_dict", ic = "loo")
```
I also tried feeding in the ArviZ `InferenceData` objects directly, which does not work either. I always get an uninformative error message:
```
LoadError: PyError ($(Expr(:escape, :(ccall(#= C:\Users\xx\.julia\packages\PyCall\7a7w0\src\pyfncall.jl:43 =# @pysym(:PyObject_Call), PyPtr, (PyPtr, PyPtr, PyPtr), o, pyargsptr, kw))))) <class 'AttributeError'>
AttributeError('Encountered error in ic computation of compare.')
```
From previous discussions here in the forum I know about the StatisticalRethinkingJulia / ParetoSmoothedImportanceSampling.jl package, but I could not find a compare function there.
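In case it helps to show what I am after: my understanding is that one could rank models by hand from the per-model elpd estimates. The following is an untested sketch, assuming (as in the package's examples, which I may be misreading) that `psisloo` takes a draws × observations matrix of pointwise log-likelihoods and returns `(loo, loos, pk)`:

```julia
using ParetoSmoothedImportanceSampling

# Untested sketch of a manual model comparison.
# `ll_by_model`: Dict mapping a model name to its pointwise
# log-likelihood matrix (draws × observations).
function manual_compare(ll_by_model::Dict{String, Matrix{Float64}})
    # First element of psisloo's return is (I assume) the total elpd_loo.
    scores = Dict(name => psisloo(ll)[1] for (name, ll) in ll_by_model)
    # Higher elpd is better, so sort in descending order.
    return sort(collect(scores); by = last, rev = true)
end
```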
From the discussion in Bayesian Model Selection - #8 by ANasc, I gather that the ModelComparisons package was, in the end, not added to the MCMCChains repo. Is that correct?
Does anyone have a recommendation for how to approach this problem? How can I find out how the compare function works, i.e. which arguments it takes and in which format? Is there any other package that includes model comparison?
I would be very grateful for any pointers!