hi everyone - I’m thinking I may have mis-specified a Turing model.
I want to combine measurements of X from 2 tests: test A has a known precision (`test_A_error`); test B has an unknown precision (`test_B_error`) and an unknown bias (`test_B_bias`, with prior `test_B_bias_pr`).
I’m expecting that increasing the amount of data from test A and from test B should reduce the uncertainty in all my parameters: `mean_X`, `sd_X`, `X`, `test_B_bias`, and `test_B_error`. However, this is not the case. Specifically, `mean_X`, `sd_X`, and `X` do not appear to be shifting from their (weakly informative) priors.
I will continue to investigate, but I’d like to know if anyone sees anything fundamentally wrong with the model:
```julia
@model function test_model(test_A_data, test_A_error, test_B_data)
    # priors (the *_pr prior distributions are defined in the enclosing scope)
    mean_X ~ mean_X_pr
    sd_X ~ sd_X_pr
    test_B_bias ~ test_B_bias_pr
    test_B_error ~ test_B_error_pr
    # latent true value
    X ~ Normal(mean_X, sd_X)
    # likelihood for the test data
    for test_A_id in eachindex(test_A_data)
        test_A_data[test_A_id] ~ Normal(X, test_A_error)
    end
    for test_B_id in eachindex(test_B_data)
        test_B_data[test_B_id] ~ Normal(X + test_B_bias, test_B_error)
    end
end
```
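In case it helps anyone reproduce this: here is a sketch of how I'd sanity-check the shrinkage claim, by simulating data at two sample sizes and comparing posterior spreads. The concrete priors, seed, and true values below are placeholder assumptions for the sketch, not the ones from my actual run, and it assumes `test_model` from above is already defined:

```julia
using Turing, Distributions, Statistics, Random

# placeholder priors -- assumptions for this sketch, not my real ones
mean_X_pr       = Normal(0, 10)
sd_X_pr         = truncated(Normal(0, 5); lower=0)
test_B_bias_pr  = Normal(0, 5)
test_B_error_pr = truncated(Normal(0, 5); lower=0)

Random.seed!(1)
true_X = 3.0
for n in (5, 50)
    # simulate: test A has known sd 0.5; test B has bias 1.0 and sd 2.0
    A = true_X .+ 0.5 .* randn(n)
    B = true_X .+ 1.0 .+ 2.0 .* randn(n)
    chain = sample(test_model(A, 0.5, B), NUTS(), 1_000; progress=false)
    println("n = $n  posterior sd of X: ", std(chain[:X]),
            "  posterior sd of mean_X: ", std(chain[:mean_X]))
end
```

With this check I'd expect the posterior sd of `X` to shrink as `n` grows, and I'm trying to understand why `mean_X` and `sd_X` don't follow.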