Consider an experiment consisting of repeated trials involving two Bernoulli (i.e. binary) random variables, A and B. Each trial consists of multiple samples, each yielding an outcome for both A and B. Every trial has the same number of samples, and the underlying joint distribution of A and B is the same in every trial.
Of course, it's not possible to infer the joint probability distribution from the marginals of a single trial, since we don't assume A and B to be independent.
But is it possible to estimate the joint probability distribution from the set of marginals obtained across the trials?
Just to make things clearer: here's an example.
Trial 1 gives:
p(a=0) = .3 (so p(a=1)=.7)
p(b=0) = .2 (so p(b=1)=.8)
Trial 2 gives:
p(a=0) = .1
p(b=0) = .5
Trial 3 gives:
p(a=0) = .7
p(b=0) = .9
Trial 4 gives:
p(a=0) = .4
p(b=0) = .6
My question is: how can I estimate p(a=0,b=0), p(a=0,b=1), p(a=1,b=0) and p(a=1,b=1)?
Remember that I don't have access to the individual results of each sample within a trial; I only know the marginals of each trial.
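For concreteness, here is a minimal sketch (Python is my choice; the variable names are just illustrative) that encodes the example marginals above and prints, for each trial, the Fréchet–Hoeffding bounds that any joint distribution consistent with those marginals must satisfy. It only restates the constraints implied by the data; it is not itself an estimation method.

```python
# Per-trial marginals from the example above (nothing else is assumed).
trials = [
    {"p_a0": 0.3, "p_b0": 0.2},  # trial 1
    {"p_a0": 0.1, "p_b0": 0.5},  # trial 2
    {"p_a0": 0.7, "p_b0": 0.9},  # trial 3
    {"p_a0": 0.4, "p_b0": 0.6},  # trial 4
]

for i, t in enumerate(trials, start=1):
    pa0, pb0 = t["p_a0"], t["p_b0"]
    # Any joint probability p(a=0, b=0) consistent with these marginals
    # must lie within the Fréchet-Hoeffding bounds:
    lower = max(0.0, pa0 + pb0 - 1.0)
    upper = min(pa0, pb0)
    # Independence would give pa0 * pb0, shown only for reference.
    print(f"trial {i}: {lower:.2f} <= p(a=0,b=0) <= {upper:.2f} "
          f"(independence would give {pa0 * pb0:.2f})")
```

Note that once p(a=0,b=0) is fixed, the other three cells follow from the marginals (p(a=0,b=1) = p(a=0) - p(a=0,b=0), and so on), so each trial pins the joint down only to a one-parameter family within these bounds.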