I want to repeatedly evaluate the likelihood of the same Turing model at different parameter values and for different possible observations. What is the best way to do this in Turing? Here is a minimal example of the type of function I am trying to evaluate as fast as possible:
```julia
using Turing

# θ : unknown parameters
# X : experimental design, to be optimized so that θ is estimated as precisely as possible
# y : measurements
@model function lm(X, y)  # 2 factors, main effects only, no intercept
    θ ~ MvNormal(ones(2), 1.0)
    y ~ MvNormal(X * θ, 0.1)
    return θ, y
end

function expected_KL_div(model_given_design; n_in = 100, n_out = 100)
    X_design = model_given_design.args.X
    val_out = 0.0
    for i in 1:n_out
        θ_outer, y_outer = model_given_design()
        val_out += logprob"y = y_outer | model = model_given_design, X = X_design, θ = θ_outer"
        # inner Monte Carlo estimate of the marginal likelihood p(y_outer | X):
        # average the likelihood (not the log-likelihood) over prior draws,
        # computed stably as a log-mean-exp
        inner_lps = Vector{Float64}(undef, n_in)
        for j in 1:n_in
            θ_inner, _ = model_given_design()  # `_` discards the sampled y
            inner_lps[j] = logprob"y = y_outer | model = model_given_design, X = X_design, θ = θ_inner"
        end
        m = maximum(inner_lps)
        val_out -= m + log(sum(exp.(inner_lps .- m)) / n_in)
    end
    return val_out / n_out
end

X_test = [1.0 1.0; 1.0 -1.0; -1.0 1.0; -1.0 -1.0]
model_given_design = lm(X_test, missing)
expected_KL_div(model_given_design)
# optimal_design = maximize(expected_KL_div)
```
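For clarity, I believe the quantity I am after is the expected information gain (the expected KL divergence from prior to posterior), estimated with the usual nested Monte Carlo form, where the outer sum has $N$ = `n_out` terms and the inner sum has $M$ = `n_in` terms:

```latex
\[
\widehat{\mathrm{EIG}}(X)
  = \frac{1}{N}\sum_{n=1}^{N}\left[
      \log p\left(y_n \mid \theta_n, X\right)
      - \log\left(\frac{1}{M}\sum_{m=1}^{M} p\left(y_n \mid \theta_m, X\right)\right)
    \right],
\quad \theta_n,\theta_m \sim p(\theta),\; y_n \sim p(y \mid \theta_n, X).
\]
```

Note the inner average is over likelihoods, not log-likelihoods, which is why every inner term requires a fresh likelihood evaluation.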
I am mainly wondering whether calling `logprob"..."` inside a loop like this is a good idea, or whether there is a faster way to re-evaluate the same model many times.
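In case the string macro is the bottleneck, here is a sketch of the kind of call I was hoping exists. This assumes a recent Turing/DynamicPPL where `loglikelihood(model, values::NamedTuple)` is available and where `MvNormal` takes a covariance matrix (both are my assumptions, not something from the example above); `y_obs` is a hypothetical measurement vector:

```julia
using Turing
using LinearAlgebra: I

# same linear model, written against the covariance-based MvNormal constructor
@model function lm(X, y)
    θ ~ MvNormal(ones(2), 1.0 * I)        # prior: unit variance on each factor
    y ~ MvNormal(X * θ, 0.01 * I)         # σ = 0.1, i.e. covariance 0.01 * I
end

X_test = [1.0 1.0; 1.0 -1.0; -1.0 1.0; -1.0 -1.0]
y_obs  = [1.9, 0.1, -0.1, -2.1]           # hypothetical fixed measurements

m = lm(X_test, y_obs)                     # model with y observed

# re-evaluate the likelihood at different θ without the logprob macro;
# assumes loglikelihood(model, ::NamedTuple) is available from DynamicPPL
lp = loglikelihood(m, (θ = [1.0, 1.0],))
```

Would evaluating the model this way (or via a cached `VarInfo`) avoid whatever overhead the string macro incurs per call?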