Think about what it means, in proper Bayesian inference, for Beta to stay at its prior distribution: it means the likelihood function is constant with respect to Beta, which in turn means Beta tells you nothing about what your data might be.
All you have to do, then, is simply not use Beta.
As the distribution for the biased Beta tends towards infinitely wide, the distribution for the unbiased Beta tends towards its prior.
There is no way to do actual Bayesian inference that uses the distribution for Beta without distorting it in the posterior. If you make the bias prior infinitely wide, you don't use the Beta information; if you make it less than infinitely wide, you use the Beta information but distort Beta somewhat. It's a continuum of possibilities: simply choose the place on the continuum where you're happy.
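Here's a minimal numerical sketch of that continuum, assuming everything is normal (a hypothetical setup, not any particular clinical model): B gets a prior N(a, s²) centered at the clinical value a, and the data C are n observations with known noise sd sigma. The posterior mean of B is then a precision-weighted average of a and the data mean, and the prior width s is exactly the knob that slides you along the continuum.

```python
# Sketch of the bias-width continuum in a conjugate normal-normal model.
# B ~ N(a, s^2): a is the clinical value, s is the "bias width" knob.
# Data: n observations with mean ybar and known noise sd sigma.

def posterior_mean_B(a, s, ybar, n, sigma):
    prior_prec = 1.0 / s**2      # how hard the prior pulls B toward a
    data_prec = n / sigma**2     # how hard the data pull B toward ybar
    return (prior_prec * a + data_prec * ybar) / (prior_prec + data_prec)

a, ybar, n, sigma = 0.0, 10.0, 25, 5.0

narrow = posterior_mean_B(a, s=0.1, ybar=ybar, n=n, sigma=sigma)
wide = posterior_mean_B(a, s=100.0, ybar=ybar, n=n, sigma=sigma)

# Narrow prior (small s): B stays near the clinical value a.
# Very wide prior (large s): B goes where the data lie, and a is ignored.
print(narrow, wide)
```

With s small the posterior mean sits near a; with s huge it sits near ybar; intermediate widths give every compromise in between.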
Think about it like this: you have three balls connected by springs. The ball on the right is your data, C; the ball in the middle is your biased parameter, B; and the ball on the left is your clinical Beta, A. Like this:
A ----- B ----- C
If you make the spring between A and B very soft, then when you pull on C to put it where the real data lie, you'll bring B along with you, but A will stay behind because B isn't pulling on it very much.
If you tighten up the spring between A and B, you'll pull A along with B somewhat. If you connect A and B by a rigid rod of zero length, A will come along and always sit exactly where B is.
The model you started with is the one where A and B are connected by a rigid rod of zero length. By partially decoupling A and B with a "soft spring" (namely a wide prior on B centered at the location of A), A will remain mostly where it was. If you decouple them completely by giving B some other prior that doesn't involve A at all, then you'll sample from the clinical prior for A, but it won't affect your choice of B at all.
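The spring picture can be sketched numerically too, again assuming normals throughout (a hypothetical setup): A ~ N(a0, t0²) is the clinical prior, B | A ~ N(A, s²) is the spring, and the data mean ybar | B ~ N(B, σ²/n) is the pull from C. Marginalizing out B, the data see A through a N(A, s² + σ²/n) likelihood, so the spring softness s directly controls how much A gets dragged toward the data.

```python
# Sketch of the three-ball model: A --spring(s)-- B --data-- C.
# A ~ N(a0, t0^2), B | A ~ N(A, s^2), ybar | B ~ N(B, sigma^2/n).
# Integrating out B gives ybar | A ~ N(A, s^2 + sigma^2/n).

def posterior_mean_A(a0, t0, s, ybar, n, sigma):
    like_var = s**2 + sigma**2 / n   # a softer spring widens the likelihood for A
    prior_prec = 1.0 / t0**2
    like_prec = 1.0 / like_var
    return (prior_prec * a0 + like_prec * ybar) / (prior_prec + like_prec)

a0, t0, ybar, n, sigma = 0.0, 1.0, 10.0, 25, 5.0

rigid = posterior_mean_A(a0, t0, s=1e-6, ybar=ybar, n=n, sigma=sigma)  # rigid rod: A is B
soft = posterior_mean_A(a0, t0, s=100.0, ybar=ybar, n=n, sigma=sigma)  # soft spring: A decoupled

# Rigid rod: A is dragged toward the data along with B.
# Soft spring: A barely moves, staying near its clinical prior at a0.
print(rigid, soft)
```

With the rigid rod A ends up partway to the data (wherever the prior-versus-data balance puts it), while with the soft spring A's posterior is essentially its prior: B went where C pulled it, and A stayed behind.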