Correct update of Bayesian posteriors

The question may look a bit artificial, but I simply tried to strip away unnecessary details.

There are two persons, A and B. Each is given a number, a and b respectively, and their task is to guess the number c = (a+b)/2. Each of them knows his/her own number but not the number of the other.

If B's prior beliefs are distributed normally (or by any distribution known to B) with a mean of b, how should B update his posterior probabilities if we inform B that a < K (where K is some fixed number)?

In other words, we just tell B that c < (K+b)/2.

I guess he should 'cut' the tail of his distribution to the right of the cutoff and then redistribute that probability mass over the region to the left of it. Is that correct? What is the right way to do it?
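That truncate-and-renormalize idea can be sketched numerically. Here is a minimal Python sketch; the concrete values of b, K, and the prior's spread sigma are my own made-up assumptions for illustration, with B's prior on c taken as N(b, sigma²):

```python
import numpy as np

# Hypothetical numbers for illustration only.
b, K = 10.0, 12.0
cutoff = (K + b) / 2.0            # B learns that c < cutoff
sigma = 1.0                       # assumed spread of B's prior on c

# B's prior on c, evaluated on a grid: N(b, sigma^2).
c = np.linspace(b - 8 * sigma, b + 8 * sigma, 16001)
dc = c[1] - c[0]
prior = np.exp(-0.5 * ((c - b) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# 'Cut' the tail to the right of the cutoff...
posterior = np.where(c < cutoff, prior, 0.0)
# ...and renormalize what is left, so it integrates to 1 again.
posterior /= np.sum(posterior) * dc

print(round(np.sum(posterior) * dc, 4))   # total mass is 1.0 by construction
```

Renormalizing amounts to dividing the surviving part of the prior density by the prior probability of the event {c < cutoff}, which is exactly what the Bayes-theorem answer below this question formalizes.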


TS Contributor
I am not a Bayes expert; here is just my guess:

By Bayes' theorem, we have the following formula

\( p(\theta|x) = \frac {p(x|\theta)p(\theta)} {p(x)} \)

which is used to update the posterior distribution when we have actually observed a realization of the data \( X = x \).
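As a concrete (entirely made-up) instance of this point-observation formula: with a normal prior and a normal likelihood the posterior has a closed form, and a brute-force grid evaluation of \( p(\theta|x) \) agrees with it. All numbers below are assumptions for illustration:

```python
import numpy as np

# Assumed model: prior theta ~ N(mu0, tau0^2), likelihood X|theta ~ N(theta, sigma^2),
# and a single observed data point x. All values are hypothetical.
mu0, tau0 = 0.0, 2.0
sigma, x = 1.0, 1.5

theta = np.linspace(-10, 10, 8001)
dtheta = theta[1] - theta[0]

prior = np.exp(-0.5 * ((theta - mu0) / tau0) ** 2)   # p(theta), up to a constant
lik = np.exp(-0.5 * ((x - theta) / sigma) ** 2)      # p(x|theta), up to a constant
posterior = prior * lik                               # numerator of Bayes' theorem
posterior /= np.sum(posterior) * dtheta               # dividing by p(x) = normalizing

# Normal-normal conjugacy gives the posterior mean in closed form.
prec = 1 / tau0**2 + 1 / sigma**2
post_mean_exact = (mu0 / tau0**2 + x / sigma**2) / prec
post_mean_grid = np.sum(theta * posterior) * dtheta
print(round(post_mean_grid, 3), round(post_mean_exact, 3))  # both 1.2
```

The constants dropped from the prior and likelihood cancel in the final normalization, which is why \( p(x) \) can be treated as "whatever makes the posterior integrate to 1".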

Now you have observed censored data instead: \( X < k \).

So we have

\( p(\theta|X<k) = \frac {p(X<k|\theta)p(\theta)} {p(X<k)} \)

The LHS is still your desired posterior; on the RHS, the numerator involves the conditional CDF \( P(X<k|\theta) \) and the denominator involves the marginal CDF \( P(X<k) \) (obtained by integrating \( \theta \) out).
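A numeric sketch of this censored update, under my own assumed model (\( \theta \sim N(0,1) \), \( X|\theta \sim N(\theta,1) \), and \( k = 0.5 \); none of these values come from the question):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

k = 0.5                                   # hypothetical censoring bound
theta = np.linspace(-6, 6, 4001)
dtheta = theta[1] - theta[0]

# Prior p(theta): assumed N(0, 1).
prior = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)

# Conditional CDF P(X < k | theta) with X|theta ~ N(theta, 1): this is the
# "likelihood" of the censored observation.
lik = np.array([Phi(k - t) for t in theta])

# Marginal CDF P(X < k): integrate theta out of the numerator.
marginal = np.sum(lik * prior) * dtheta

# Posterior p(theta | X < k), by the censored-data Bayes formula above.
posterior = lik * prior / marginal

print(round(np.sum(posterior) * dtheta, 4))   # integrates to ~1.0
```

In this assumed model the marginal can be checked in closed form: X is marginally N(0, 2), so \( P(X<k) = \Phi(k/\sqrt{2}) \), and the grid value of `marginal` agrees with it. Note that in the original question the "likelihood" of the event a < K is an indicator (B is told it with certainty), so this formula reduces there to cutting the tail and renormalizing, as the asker guessed.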