
Thread: Power and Alpha combinations for same sample size

  1. #1

    Power and Alpha combinations for same sample size




    Let's say I have a sample size established with alpha = 0.05 and power = 0.8 (based on the time constraints of the study).

    So, the same sample size can be achieved with any of:

    alpha = 0.001 and power = 0.31
    alpha = 0.01 and power = 0.58
    alpha = 0.25 and power = 0.95
    alpha = 0.999 and power = 0.998

    What I want to know is what levels of risk to expect from a study of this size: what alpha I can aim for and what power I can hope to achieve.

    Which combination of alpha and power do I adopt? Why?
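    The trade-off behind these pairs can be checked with a quick normal-approximation calculation. This is only a sketch, not the poster's actual computation: it assumes a two-sided two-sample test and a hypothetical effect size of d = 0.5, neither of which is stated in the thread.

    ```python
    from statistics import NormalDist

    N = NormalDist()

    def n_per_group(alpha, power, d=0.5):
        """Normal-approximation sample size per group for a
        two-sided two-sample test detecting effect size d."""
        z_alpha = N.inv_cdf(1 - alpha / 2)
        z_beta = N.inv_cdf(power)
        return 2 * ((z_alpha + z_beta) / d) ** 2

    # Different (alpha, power) pairs give roughly the same n (~63 per group here):
    for alpha, power in [(0.05, 0.80), (0.01, 0.58), (0.25, 0.95)]:
        print(f"alpha={alpha}, power={power}, n per group ~ {n_per_group(alpha, power):.0f}")
    ```

    Under these assumptions, loosening alpha buys power (and vice versa) at a fixed n, which is exactly the family of trade-offs listed above.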
    Last edited by vladmalik; 09-22-2014 at 08:54 PM.

  2. #2

    Re: Power and Alpha combinations for same sample size


  3. #3
    CowboyBear (TS Contributor, New Zealand)

    Re: Power and Alpha combinations for same sample size


    Quote Originally Posted by vladmalik View Post
    I want to know is what levels of risk to expect from the study of this size: what alpha I can aim for and what power I can hope to achieve.
    In reality, most people set an alpha of 0.05, because that's what most people set (i.e., it's tradition). This usually implies that alpha is smaller than 1 − power (i.e., smaller than the Type II error rate, beta). Possibly this would be justified if we felt that the consequences of a Type I error were much more disastrous than those of a Type II error. But it's obviously ridiculous to claim that this holds true in all of science, i.e., that the relative costs of Type I and Type II errors are everywhere such that alpha = 0.05 is the right choice. So again, it's just tradition.

    A commenter on your post on Stack Exchange brings up the idea of conceptualising significance testing as a ROC analysis. And that can be helpful: we could try to select an alpha that is "optimal" in some specific sense (e.g., most likely to result in the correct decision). However, methods for selecting an optimal cutoff in ROC analysis pretty much always take into account the prior probability that the case is a "disease" (i.e., that the null hypothesis is false). And no one ever really approaches significance testing in this way; people generally aren't willing to specify a prior probability that the null hypothesis is false. Unless they're Bayesians, in which case they wouldn't be using this framework in the first place.
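    The ROC-style idea can be sketched numerically to show why it stalls without a prior. Everything below is a hypothetical illustration: it assumes a two-sample z-test approximation with n = 63 per group and effect size d = 0.5 (neither given in the thread), and `p_null` stands for the prior probability that the null is true, which is precisely the quantity people won't specify.

    ```python
    from statistics import NormalDist

    N = NormalDist()

    def power_at(alpha, n=63, d=0.5):
        """Approximate power of a two-sided two-sample z-test
        with n observations per group and true effect size d."""
        ncp = d * (n / 2) ** 0.5  # noncentrality of the test statistic
        return 1 - N.cdf(N.inv_cdf(1 - alpha / 2) - ncp)

    def expected_error(alpha, p_null, n=63, d=0.5):
        """P(wrong decision) = P(H0 true) * alpha + P(H0 false) * (1 - power)."""
        return p_null * alpha + (1 - p_null) * (1 - power_at(alpha, n, d))

    # The "optimal" alpha shifts substantially with the prior, which is the problem:
    alphas = [i / 1000 for i in range(1, 500)]
    for p_null in (0.2, 0.5, 0.8):
        best = min(alphas, key=lambda a: expected_error(a, p_null))
        print(f"P(H0 true)={p_null}: alpha minimising expected error ~ {best}")
    ```

    The more plausible the null is a priori, the smaller the error-minimising alpha, so no single cutoff is "optimal" without committing to a prior.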

    tl;dr there is no sensible answer to this question. Abandon significance testing, all ye who enter here.
