
Thread: Probability that Bernoulli p-parameter is greater than some value, given N samples?

  1. #1
    Points: 147, Level: 2
    Level completed: 94%, Points required for next Level: 3

    Posts
    5
    Thanks
    1
    Thanked 0 Times in 0 Posts

    Question Probability that Bernoulli p-parameter is greater than some value, given N samples?




    Hello everybody,

    I have the following question. Assume I observe N outcomes from a Bernoulli random variable with unknown success parameter p. Based on these outcomes (or samples), how likely is it that the real parameter p is larger (or smaller) than a given value eps (e.g., 1%)?

    Any ideas how this can be done?

    Thanks a lot,
    Hansel

  2. #2
    TS Contributor
    Points: 22,410, Level: 93
    Level completed: 6%, Points required for next Level: 940

    Posts
    3,020
    Thanks
    12
    Thanked 565 Times in 537 Posts

    Re: Probability that Bernoulli p-parameter is greater than some value, given N sample

    In frequentist settings the parameters, e.g. the proportion p of a Bernoulli random variable, are deterministic constants. Therefore we do not ask for the probability that p is greater than a certain value: that probability is either 1 or 0 (the statement is simply true or false). It is much more natural to ask this in a Bayesian setting.

    And the question is different if you just want to do hypothesis testing with, say, H_1: p > 1%.
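
    If it helps to make the hypothesis-testing route concrete, here is a minimal sketch in Python. The counts (3 successes out of 100 trials) are made-up illustrative numbers, and SciPy's exact binomial test is just one possible choice of test:

    Code:
    # Sketch of an exact binomial test of H_0: p <= 0.01 against H_1: p > 0.01.
    # The data y = 3 successes out of n = 100 trials are illustrative only.
    from scipy.stats import binomtest

    y, n = 3, 100
    result = binomtest(y, n, p=0.01, alternative="greater")
    # p-value of the one-sided exact test; small values are evidence that p exceeds 1%
    print(result.pvalue)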

  3. The Following User Says Thank You to BGM For This Useful Post:

    hansel (08-11-2014)

  4. #3
    Points: 147, Level: 2
    Level completed: 94%, Points required for next Level: 3

    Posts
    5
    Thanks
    1
    Thanked 0 Times in 0 Posts

    Re: Probability that Bernoulli p-parameter is greater than some value, given N sample

    Thanks a lot for the quick response! I was thinking about using confidence intervals for the binomial distribution, but that does not really answer the question. In the Bayesian setting, how would I proceed? Do I need to assume a prior distribution on p? And for the hypothesis test, what is the way to go there?

  5. #4
    Human
    Points: 12,676, Level: 73
    Level completed: 57%, Points required for next Level: 174
    Awards:
    Master Tagger
    GretaGarbo
    Posts
    1,362
    Thanks
    455
    Thanked 462 Times in 402 Posts

    Re: Probability that Bernoulli p-parameter is greater than some value, given N sample

    Quote Originally Posted by hansel View Post
    how likely is it that the real parameter p is larger (or smaller) than a given value eps (e.g., 1%)?
    I agree with what BGM said, but didn't the original poster simply ask for:

    "what is the likelihood, given the observed data with say x successes out of N trials, for different values of the unknown parameter p"?

    The usual thing is to calculate the maximum likelihood estimate of the parameter p, but you can also calculate a curve: one likelihood value for each of a number of values of p (given the observed x and N).

    Say x is 3 and N is 100 (you have had 3 successes in 100 trials). Use the binomial distribution and calculate the probability of 3 out of 100 for many values of p, say 50 values of p between 0 and 5%. Then plot these likelihood values on the y-axis against p on the x-axis (see the sketch below). That graph is the most revealing, I would say. You can also construct a "likelihood interval" (search the internet for it).
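
    Here is a minimal sketch of that calculation in Python; the values x = 3, N = 100 and the 50-point grid between 0 and 5% are just the illustrative numbers above, and scipy and matplotlib are assumed to be available:

    Code:
    # Likelihood curve for p, given x = 3 successes in N = 100 trials.
    import numpy as np
    from scipy.stats import binom
    import matplotlib.pyplot as plt

    x, N = 3, 100
    p_grid = np.linspace(0.0, 0.05, 50)      # 50 values of p between 0 and 5%
    likelihood = binom.pmf(x, N, p_grid)     # P(X = 3 | N = 100, p) for each p

    plt.plot(p_grid, likelihood)
    plt.xlabel("p")
    plt.ylabel("likelihood of observing 3 out of 100")
    plt.show()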

    And you could also do a usual confidence interval.

    But it is not meaningful to try to calculate, from the likelihood, the probability that p is larger than, say, 1%, since the likelihood does not integrate to one. Such reasoning leads to Bayesian thinking, as BGM pointed out.

  6. #5
    TS Contributor
    Points: 7,081, Level: 55
    Level completed: 66%, Points required for next Level: 69

    Location
    Copenhagen , Denmark
    Posts
    515
    Thanks
    71
    Thanked 123 Times in 116 Posts

    Re: Probability that Bernoulli p-parameter is greater than some value, given N sample

    Quote Originally Posted by hansel View Post
    In the Bayesian setting, how would I proceed? Do I need to assume a prior distribution on p? And for the hypothesis test, what is the way to go there?
    Yes, you need to assume a prior.
    The beta family of priors is the conjugate family for binomial observations, meaning that if the prior is beta, so is the posterior. Hence it is easy to find the posterior without worrying about any integration, which is one reason to choose a prior from the beta family.

    Specifically, if the prior is beta(a, b), then the posterior is beta(a + y, b + n - y), where n is the number of observations, y is the number of successes, and the beta density is g(p; a, b) \propto p^{a-1}(1-p)^{b-1}.
    If you have no prior knowledge about the value of p, one option is to use a uniform prior, which is the same as beta(1, 1).
    It is common to use the expectation of p with respect to the posterior distribution as an estimate of p, that is E[p \mid y].
    And having the distribution of p given the data, you can calculate probabilities such as Pr(p > eps \mid y) (or Pr(p < eps \mid y)) and construct credibility intervals, which are commonly used as a Bayesian analog to hypothesis testing.
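
    A minimal sketch of this recipe in Python, assuming the uniform beta(1, 1) prior and reusing the illustrative 3-successes-out-of-100 data from the post above; the threshold eps = 1% is the value from the original question:

    Code:
    # Beta-binomial conjugate update: prior beta(a, b), posterior beta(a + y, b + n - y).
    # The data y = 3, n = 100 and the threshold eps = 0.01 are illustrative.
    from scipy.stats import beta

    a, b = 1, 1          # uniform prior beta(1, 1)
    y, n = 3, 100        # observed successes and trials
    eps = 0.01

    posterior = beta(a + y, b + n - y)
    print("Posterior mean E[p | y]:", posterior.mean())
    print("Pr(p > eps | y):", posterior.sf(eps))       # the probability asked for originally
    print("95% credibility interval:", posterior.interval(0.95))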
    Last edited by JesperHP; 08-12-2014 at 08:40 AM.

  7. #6
    Points: 147, Level: 2
    Level completed: 94%, Points required for next Level: 3

    Posts
    5
    Thanks
    1
    Thanked 0 Times in 0 Posts

    Re: Probability that Bernoulli p-parameter is greater than some value, given N sample


    Thanks both of you. I am rather familiar with Bayesian statistics so I should be able to do that.
