Let me see if I understand. You have measurements of variables x, y, z, a model such that if you know x and some parameters you can predict y, and a model such that if you know y and some parameters you can predict z.
And you have a data structure where for each bin value in x there is a dataset in y, and for each bin value in y there is a dataset in z.
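Concretely, I'm picturing something like this made-up Python sketch (all names and numbers are mine, not yours):

```python
# Hypothetical sketch of the data structure I'm picturing -- invented for illustration.
y_data_by_x_bin = {
    0.5: [1.2, 1.4, 1.1],   # y measurements for the x bin centered at 0.5
    1.5: [2.3, 2.1, 2.6],   # y measurements for the x bin centered at 1.5
}
z_data_by_y_bin = {
    1.0: [0.9, 1.1],        # z measurements for the y bin centered at 1.0
    2.0: [1.8, 2.2],        # z measurements for the y bin centered at 2.0
}

def predict_y(x, params):
    ...  # model 1: y as a function of x and the parameters

def predict_z(y, params):
    ...  # model 2: z as a function of y and the parameters
```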
Is that about the size of it?
Here is my question: evidently you have data for x, y, z values that are not compatible with a single parameter value… That is, for a given parameter value you narrow down the x possibilities, which narrows down the y, which narrows down the z… But for a different parameter value you will use different x, y, z values! That's not going to give you a consistent inference.
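In a made-up Python sketch, the pattern I'm worried about looks like this; the selection rule and the predictions are stand-ins I invented, not your actual model:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sketch of the likelihood I *think* is being described, where the
# parameter value itself decides which slice of the data gets compared.
def log_lik_data_depends_on_theta(theta, y_data_by_x_bin, z_data_by_y_bin):
    x_bin = min(y_data_by_x_bin, key=lambda b: abs(b - theta))    # theta picks the x bin
    y_obs = np.asarray(y_data_by_x_bin[x_bin])                    # different theta -> different y data
    y_bin = min(z_data_by_y_bin, key=lambda b: abs(b - y_obs.mean()))
    z_obs = np.asarray(z_data_by_y_bin[y_bin])                    # ...and therefore different z data
    y_pred = theta * x_bin          # stand-in for predict_y(x_bin, theta)
    z_pred = theta * y_obs.mean()   # stand-in for predict_z(y_bin, theta)
    # The observations entering the sum change with theta, so the "dataset" in
    # the likelihood is not fixed -- that's the inconsistency I'm worried about.
    return norm.logpdf(y_obs, y_pred, 1.0).sum() + norm.logpdf(z_obs, z_pred, 1.0).sum()
```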
Imagine you are trying to find out something about mammals in North America… If your parameter is less than 0 you compare the results to opossums, if the parameter is between 0 and 1 you compare to dogs, and if it's greater than 1 you compare to bears…
It makes no sense. So I'm guessing I'm missing something.
For example, it would make perfect sense to me if there were 3 parameters: one for the opossums, with a prior that constrains it to less than 0; one for the dogs, with a prior that constrains it to (0, 1); and one for the bears, with a prior that constrains it to be greater than 1…
Then I grab the value of the opossum parameter, predict the opossum data and compare it to the actual data, predict the dog data from the dog parameter, and the bear data from the bear parameter… Ultimately I'm comparing to the full dataset every time!
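A toy sketch of that, sticking with the mammal analogy (data, names, and numbers are all invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# Toy sketch of the 3-parameter version: each parameter has its own constrained
# prior and predicts its own chunk of the data, so every evaluation of the log
# density touches the FULL dataset.
def log_posterior(theta_opossum, theta_dog, theta_bear, data):
    # hard constraints standing in for the priors I described
    if not (theta_opossum < 0 and 0 < theta_dog < 1 and theta_bear > 1):
        return -np.inf
    lp = 0.0
    lp += norm.logpdf(data["opossum"], loc=theta_opossum, scale=1.0).sum()
    lp += norm.logpdf(data["dog"],     loc=theta_dog,     scale=1.0).sum()
    lp += norm.logpdf(data["bear"],    loc=theta_bear,    scale=1.0).sum()
    return lp

data = {
    "opossum": np.array([-0.7, -1.2, -0.4]),
    "dog":     np.array([0.3, 0.6, 0.5]),
    "bear":    np.array([1.8, 2.4, 2.1]),
}
print(log_posterior(-0.8, 0.5, 2.0, data))   # all three chunks enter every time
```

Here each parameter only ever predicts its own chunk, but the full dataset enters every evaluation, so the inference stays consistent.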