Correct form of likelihood function for data which has only an upper or lower bound



kurros
07-06-2010, 03:43 AM
(i.e. say we have done some experiments and a negative result was found, ruling out values for some physical quantity below (above) some limit defined by the sensitivity of the experiment)

So, I have this problem I am tackling where I am doing a Bayesian scan of a multi-dimensional model. Most of the quantities predicted by the model have likelihood functions which are normal distributions (as functions of the possible data values); however, there are some pieces of experimental data for which only an upper or lower bound exists. For example, experiments have been done which show the observable must be above a certain value at the 95% confidence level, with no upper bound.

What is the correct form of the likelihood function to use for such a quantity? In the literature I have seen two possibilities: one is a compound function which is a normal distribution below the limit (if it is a lower bound) and a uniform distribution above it, while the other is an error function centred on the bound. I have seen no theoretical justification for either of these distributions and was wondering if anybody knew of any.
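To write the two candidates down explicitly (notation mine; say the lower bound is at L, with a width \sigma set by the quoted confidence level, and d is the value of the observable):

\mathcal{L}_1(d) \propto \begin{cases} \exp\left(-\frac{(d - L)^2}{2\sigma^2}\right) & d < L \\ 1 & d \geq L \end{cases}

\mathcal{L}_2(d) \propto \Phi\left(\frac{d - L}{\sigma}\right) = \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{d - L}{\sigma\sqrt{2}}\right)\right]

where \Phi is the standard normal CDF.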

The half-gaussian at first seems sensible, because the maximum likelihood is obtained at the value of the limit and anywhere above it, with the gaussian falling off below the limit at a rate set by the quoted confidence level. However, on thinking about it some more, I now think that the error function is probably more justified, although I can't prove why, or even formulate why I think this very well. I guess it seems to me that even if the theory gives a perfect match with whatever value was observed in the experiment, it shouldn't actually be the maximum likelihood value, since it would be quite the fluke that this should happen.

Perhaps I should explain my scenario some more to make this make sense. One particular observable I am concerned with is the relic density of dark matter. If dark matter is made of only one type of particle, then the relic density as calculated by the astrophysics guys can be used to constrain the relic density of a given dark matter candidate particle in one's favourite model with a gaussian likelihood function. However, if dark matter is assumed to be composed of this candidate particle plus other stuff, then the relic density calculated by the astrophysicists can only provide an upper bound on the relic density of the candidate particle, since one can't disfavour the model if it fails to reach the required relic density (because we have allowed for the possibility that other stuff is lurking out there beyond what is taken care of by the model). So in this latter case, what likelihood function is appropriate for the relic density of the partial dark matter candidate? We need to kill the model if the relic density gets too large, but we don't want to penalise it if the relic density is lower than the observed astrophysical value.

Sorry if this is a bit incoherent; I could probably have made that all much clearer. If you need me to clarify anything, or indeed write an equation down, let me know. Mostly I am looking for some nice probability theory justification for which kind of likelihood function makes the most sense here. I have searched around quite a lot and been unable to find anything. Papers I have read where similar things are done seem to skip over the justification part... and I suspect that many authors actually just guess...
Maybe both forms of likelihood function are valid for different cases. I'm a little lost on this one.
In practice it doesn't make a huge difference which of the two possibilities I mentioned is used, but I'd like to know which one is really right!

BGM
07-06-2010, 11:08 AM
I am not sure about the exact form of the data you have, but if you have censored observations, you can try to use the cumulative distribution function.
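For example, here is a rough sketch in Python of what I mean (the gaussian measurement model and the numbers are just assumptions for illustration): the censored observation contributes the probability mass consistent with the bound, rather than a density.

from scipy.stats import norm

# Hypothetical numbers for illustration only: a lower limit at 100 (in
# whatever units the observable has), with sigma expressing how "soft"
# the quoted 95% limit is.
limit = 100.0
sigma = 5.0

def likelihood_measured(mu_theory, d_obs):
    # Ordinary two-sided measurement: gaussian density at the observed value.
    return norm.pdf(d_obs, loc=mu_theory, scale=sigma)

def likelihood_lower_bound(mu_theory):
    # Censored observation "the quantity is above the limit": the likelihood
    # of a parameter point is the probability that a measurement centred on
    # mu_theory would land above the limit, i.e. the survival function
    # (1 - CDF). It tends to 1 well above the limit and to 0 well below it.
    return norm.sf(limit, loc=mu_theory, scale=sigma)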

kurros
07-06-2010, 09:11 PM
Ok I will give an example :).

Say one is looking for new particles at a particle accelerator. One spins an electron beam one way around a huge ring and a positron beam the other way; at some point the beams cross and the electrons and positrons smash into each other (sometimes). They collide and burst into vast arrays of new particles, which fly through a ring of detectors centred on the interaction zone; the detector signals can then be used to reconstruct the tracks of the particles that passed through them. One can use these reconstructed tracks to determine if any new and unusual particles popped into existence for a brief time after the original collision.
Say a model predicts that a new particle will be created in one in a billion collisions at a certain energy (you need to provide enough energy to create the mass of the particle), and you do 100 billion collisions and don't find anything. This doesn't rule out the possibility that you will find said particle if you do another 100 billion collisions, but it is getting less likely. At some point the experimenters will write a paper saying that they have found nothing at the 95% confidence level, or some such. This information can be used to declare, with some confidence, that if this new particle exists then it must have a mass above some limit (i.e. outside the ability of the particle accelerator to create).
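(As a worked example of the counting logic, assuming simple Poisson statistics with no background: those numbers give an expected count of 10^{11} \times 10^{-9} = 100 events, so the probability of seeing zero is e^{-100}, vanishingly small, and that model is dead. More generally, observing zero events excludes any expected count above about 3 at the 95% level, since e^{-3} \approx 0.05.)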
I now want to construct a likelihood function for the mass of this particle, based on the data telling us that it must be above the limit measured by the collider. Generally people are quite slack about how to do this properly, and just take the 95% confidence limit as a hard cut on the mass. However, since we are only 95% confident about what the collider has and has not ruled out, we should be incorporating this uncertainty into the likelihood function as some kind of standard-deviation-like quantity, as we would do if we had actually seen the particle and measured its mass to some accuracy. However, I am not at all sure what form this likelihood function should take. If we had a direct measurement of the mass, I believe it is quite acceptable to model the measurement uncertainties as gaussian (strictly, the experimenters have some distribution of measured masses and tell us the mean and standard error of these measurements) and so model the mass likelihood function as a normal distribution (over the possible mass values), i.e.

P(\mu_d \mid \theta, M) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(\mu(\theta) - \mu_d)^2}{2\sigma^2}\right)

where \theta is a parameter of the model M we are assuming, \mu_d is the mean measured experimental value of the particle mass, \mu(\theta) is the particle mass predicted by the model M at the point \theta in the model parameter space, and \sigma is the standard deviation of the experimental measurement, although we could include theoretical uncertainties (in the model value) here as well. This is a likelihood function, so it is the \theta parameter we take to be varying.

There is a bit of a conceptual barrier for me here, because we don't have any measurement of the mass; we only have measurements telling us something about what the mass is not. It screws with my mind a bit to work out what the correct thing to do in this situation is.
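My best guess at formalising it (and I may be mangling this): treat the exclusion as a censored observation, so the likelihood of a parameter point is the probability that the experiment would have failed to see the particle, i.e. something like

P(\text{no detection} \mid \theta, M) = \Phi\left(\frac{\mu(\theta) - \mu_{\mathrm{lim}}}{\sigma}\right)

where \mu_{\mathrm{lim}} is the quoted 95% limit and \Phi is the standard normal CDF, which is just the error function form from my first post.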

edit: Oh ****, my latex didn't work the way I guessed. How does it work in these forums? -update: fixed!
edit 2: Censored observations, eh? I will go read about those, because the cumulative distribution function has the form I would guess for this kind of thing. Perhaps you already gave me the answer I need; I'll get back to you :p.
edit 3: Ok, having read a tiny bit, this concept seems to be exactly what I needed, thanks :). I have a slightly different scenario I also want to ask about, but it is possible it will also be covered by this censored observations concept, so I'll get back to you once I figure that out.

kurros
07-06-2010, 11:09 PM
Ok, so I have found that this idea of censored data might help me, but I'm having a hard time finding much literature on it. Any good books you can suggest? If you know of any that deal particularly with scenarios like the one I described above, that would be especially awesome.