Thanks to suggestions by member Lowpro, I have been looking into using Bayesian inference for a project I am working on. I am hacking through the weeds of the numerical methods and have gotten far enough to realize that you start with some prior beliefs, collect new data, and then revise the priors to yield posterior estimates that more closely match the data, thereby changing your prior beliefs to some degree. However, the posterior is not just a model fitted to the new data; it is also influenced by the prior.
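For anyone following along, the prior-plus-data blending can be seen in the simplest conjugate case: a normal prior on a mean, normal data with known sampling SD. The numbers below (prior mean 50, data centered near 55) are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Prior belief about the mean: Normal(mu0, sigma0^2)  (hypothetical values)
mu0, sigma0 = 50.0, 10.0

# New data: 25 observations with an assumed known sampling SD
sigma = 8.0
data = rng.normal(55.0, sigma, size=25)

# Conjugate normal-normal update: precisions (1/variance) add,
# and the posterior mean is a precision-weighted blend of prior and data.
prior_prec = 1.0 / sigma0**2
data_prec = len(data) / sigma**2
post_prec = prior_prec + data_prec

post_mean = (prior_prec * mu0 + data_prec * data.mean()) / post_prec
post_sd = post_prec**-0.5

print(post_mean, post_sd)  # posterior mean sits between mu0 and the data mean
```

The posterior mean always lands between the prior mean and the sample mean, which is the "influenced by the prior" point in numerical form.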

It is likewise clear that, as a practical matter, a prior built on many observations (n=hundreds to thousands) carries a lot more weight and pulls the posterior much harder than a prior built on just a few (n=tens). Obviously I shouldn't be using the accumulated world of knowledge from the literature as my prior, or else it will overwhelm the new data in the experiment (I am sure I could easily find n=tens of thousands in the literature).
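The "weight of the prior" effect is easy to demonstrate by treating the prior as worth m pseudo-observations (so its variance is sigma^2/m) and comparing a literature-scale m to a modest one. Again, all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 8.0                              # assumed known sampling SD
data = rng.normal(55.0, sigma, size=40)  # new trial data, true mean 55
mu0 = 50.0                               # prior mean, e.g. from the literature

def posterior_mean(m):
    # Prior worth m pseudo-observations: prior variance = sigma^2 / m.
    prior_prec = m / sigma**2
    data_prec = len(data) / sigma**2
    return (prior_prec * mu0 + data_prec * data.mean()) / (prior_prec + data_prec)

heavy = posterior_mean(10_000)  # literature-scale prior: barely budges off mu0
light = posterior_mean(20)      # modest prior: the 40 new observations dominate

print(heavy, light)
```

With m=10,000 the 40 new observations get a weight of 40/10,040, so the posterior essentially parrots the prior; with m=20 the data carry two-thirds of the weight.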

So, practically speaking, what do all of you do in your experiments? The trial I have envisioned will have n=60 with 2:1 asymmetric randomization, giving 40 treatment (Tx) and 20 comparator subjects. My thought is to take the 20 comparator observations at each time point (t=0, 180, 365 days), use those to form the priors, and then apply Bayesian updating with the 40 Tx observations at the matching times to generate the posteriors. In effect this says my prior assumption is that there is no material difference in the Tx group, but it allows the new Tx data the possibility of convincing me otherwise by materially shifting the posterior distribution. Obviously, if I do this right and the randomization is proper, there will be no meaningful difference at t=0, but maybe a difference at the later time points if the Tx works (or, heaven forbid, if the treatment is actually worse than the disease).
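A minimal sketch of that design at one time point, under the simplifying assumptions of a normal model with a known common SD and entirely made-up outcome values: fit the prior from the 20 comparator subjects, update with the 40 Tx subjects, then read off the posterior probability that the Tx mean sits below the comparator mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 12.0  # assumed common sampling SD (hypothetical)

# Hypothetical outcome data at one time point (e.g. t=365):
comparator = rng.normal(100.0, sigma, size=20)  # 20 comparator subjects
treatment  = rng.normal(92.0,  sigma, size=40)  # 40 Tx subjects

# Step 1: prior for the mean, built from the comparator arm
# ("no material difference" is baked in by centering the prior there).
mu0 = comparator.mean()
prior_prec = len(comparator) / sigma**2

# Step 2: conjugate update with the treatment observations.
data_prec = len(treatment) / sigma**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * mu0 + data_prec * treatment.mean()) / post_prec
post_sd = post_prec**-0.5

# Posterior probability that the Tx mean is below the comparator mean.
p_lower = stats.norm.cdf(mu0, loc=post_mean, scale=post_sd)
print(post_mean, post_sd, p_lower)
```

One design note worth flagging: because the comparator data serve as the prior here, the posterior effectively pools both arms rather than contrasting them, so it is conservative by construction, and the same analysis would be repeated independently at each of the three time points.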

Suggestions are welcome!