Repeatedly calculating likelihood using Turing

I want to repeatedly calculate likelihoods of the same Turing model at different parameter values, and also with different possible observations. What is the best way to do this in Turing? Below is a minimal example of the type of function I am trying to evaluate as fast as possible:

using Turing
# θ unknown parameters
# X experimental design that has to be optimized, to estimate θ as precisely as possible
# y measurements
@model function lm(X, y) # 2 factors, main effects only and no intercept
    θ ~ MvNormal(ones(2), 1.0)
    y ~ MvNormal(X * θ, 0.1)
    return θ, y
end
function expected_KL_div(model_given_design; n_in=100, n_out=100)
    val_out = 0.0
    for i = 1:n_out
        # draw θ and a simulated observation y from the prior (predictive)
        θ_outer, y_outer = model_given_design()
        val_out += logprob"y = y_outer | model = model_given_design, X = model_given_design.args.X, θ = θ_outer"
        val_in = 0.0
        for j = 1:n_in
            # fresh prior draw of θ; the simulated y is discarded
            θ_inner, _ = model_given_design()
            val_in += logprob"y = y_outer | model = model_given_design, X = model_given_design.args.X, θ = θ_inner"
        end
        val_out -= val_in/n_in
    end
    return val_out/n_out
end
X_test = [1.0 1.0; 1.0 -1.0; -1.0 1.0; -1.0 -1.0]
model_given_design = lm(X_test, missing)
expected_KL_div(model_given_design)
# optimal_design = maximize(expected_KL_div)
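
Written out, the two loops above compute the nested Monte Carlo estimate (this just restates the code in formula form, with n_out, n_in, θ, and y as in the function):

$$
\hat{U}(X) = \frac{1}{n_\text{out}} \sum_{i=1}^{n_\text{out}} \left[ \log p(y_i \mid \theta_i, X) \;-\; \frac{1}{n_\text{in}} \sum_{j=1}^{n_\text{in}} \log p(y_i \mid \theta_{ij}, X) \right],
$$

where $(\theta_i, y_i)$ are joint draws from the prior predictive and the $\theta_{ij}$ are independent prior draws.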

I am mainly wondering if using logprob"..." inside a loop is a good idea.

Yes, there is nothing wrong with that. One thing you can do to speed it up is to pre-allocate the internal data structure we use in logprob and pass it on the RHS to reuse it. For example:

varinfo = Turing.VarInfo(model_given_design)
logprob"... | varinfo = varinfo, ...."

Thank you, this gave about a 5x speedup when evaluating the objective once, and about a 10x speedup when the objective was passed to an optimization routine.
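
In case it is useful to others, this is roughly how the objective can be passed to an optimizer. A sketch using Optim.jl's NelderMead (the choice of Optim.jl and of the flatten/reshape convention is incidental, not from the discussion above; since the objective is a noisy Monte Carlo estimate, a gradient-free method is the easy option and results will vary between runs):

using Optim
# Optim minimizes, so negate the expected KL divergence.
# The design matrix is flattened to a vector for the optimizer
# and reshaped back inside the objective (4 runs, 2 factors).
function neg_expected_KL(x)
    X = reshape(x, 4, 2)
    -expected_KL_div(lm(X, missing))
end
result = optimize(neg_expected_KL, vec(X_test), NelderMead())
optimal_design = reshape(Optim.minimizer(result), 4, 2)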