Turing.jl or SciML?: training and testing on different datasets

I am new to Bayesian inference, so I wanted to start with a simple regression (here, assumed to be linear). I decided to use Turing.jl, and through the tutorials and previous questions I was able to get the linear regression working.

Working with a single dataset (training and testing on the same data) appeared to be fine. However, I am actually interested in training and testing on different datasets: that is, training on data of the form y = a*x + b, and then testing on data of the form y = c*x + d. I assumed this could readily be done with a simple model (as in the examples), but when I tried implementing it, the final coefficients came out as a mix of the two forms (y = a*x + b and y = c*x + d) instead of matching just the test data (y = c*x + d).
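To make the "mixing" concrete, here is a small self-contained sketch of the behaviour I am seeing. It is an illustration only, not my actual code: it uses Python/NumPy with a conjugate-normal linear regression (known noise variance) rather than Julia and MCMC, and the coefficients a=2, b=1, c=5, d=-3 are made up. If the posterior is conditioned on both datasets in sequence, the resulting coefficients land between the two lines:

```python
import numpy as np

# Conjugate Bayesian linear regression with known noise variance.
# This only illustrates the "mixing" behaviour, not my Turing.jl model.
rng = np.random.default_rng(0)
sigma2 = 0.25  # assumed (known) observation noise variance, std = 0.5

def design(x):
    # Design matrix with columns [x, 1], so the weights are [slope, intercept].
    return np.column_stack([x, np.ones_like(x)])

def update(m0, S0, x, y):
    # Standard Gaussian conjugate update: prior N(m0, S0) -> posterior N(mn, Sn).
    X = design(x)
    S0_inv = np.linalg.inv(S0)
    Sn = np.linalg.inv(S0_inv + X.T @ X / sigma2)
    mn = Sn @ (S0_inv @ m0 + X.T @ y / sigma2)
    return mn, Sn

x = np.linspace(0.0, 1.0, 50)
y_train = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)  # "training" line y = a*x + b
y_test = 5.0 * x - 3.0 + rng.normal(0.0, 0.5, x.size)   # "test" line y = c*x + d

m, S = np.zeros(2), 10.0 * np.eye(2)  # broad prior on [slope, intercept]
m, S = update(m, S, x, y_train)       # condition on the training data
print("after training:", m)           # slope near 2, intercept near 1
m, S = update(m, S, x, y_test)        # then condition on the test data as well
print("after both:", m)               # a compromise between the two lines
```

With equal amounts of data from each line, the final slope ends up roughly midway between 2 and 5, which matches the mixing I observe, so I suspect I am conditioning on both datasets rather than doing whatever "testing on new data" is supposed to look like.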

Should this mixing of coefficients be happening, or am I implementing the training and testing incorrectly? Or is Bayesian inference only able to perform analysis on a single dataset (although the set may be multi-dimensional), so that the only way to train on one dataset and test on another is to use machine learning (e.g. perhaps a BNN implemented in Turing.jl)? Thank you for your help.