# Thread: alpha in one sided tests

1. ## alpha in one sided tests

hello all,

I have a theoretical question. I have found that in medical research, when performing a one-sided test (instead of the usually preferable two-sided one), the FDA expects alpha to be 0.025, not 0.05.

i wonder why...

Alpha is the probability of making a type I error. If we set it to 5% for a two-sided test, why should we take 2.5% for a one-sided one?

I read that the strength of evidence of a 5% two-sided test is like that of a 2.5% one-sided test. I fail to see why; it makes no sense to me. Can you explain?

thanks

Edit: here is a quote I just found that may help you help me:

"The approach of setting type I errors for one-sided tests at half the conventional type I error used in two-sided tests is preferable in regulatory settings. This promotes consistency with the two-sided confidence intervals that are generally appropriate for estimating the possible size of the difference between two treatments."
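As a quick sketch of the arithmetic behind the quote (the z value is an assumed example, not from the thread): for a z statistic, the one-sided p-value is exactly half the two-sided p-value, because the two-sided test sums both tails.

```python
# One-sided vs two-sided p-values for a z statistic (illustrative sketch).
from statistics import NormalDist

z = 1.96          # an assumed observed z statistic
sd = NormalDist() # standard normal distribution

p_two_sided = 2 * (1 - sd.cdf(abs(z)))  # probability mass in both tails
p_one_sided = 1 - sd.cdf(z)             # upper tail only: exactly half

print(p_two_sided)  # ~0.05
print(p_one_sided)  # ~0.025
```

So a one-sided test at alpha = 0.025 rejects exactly when a two-sided test at alpha = 0.05 rejects in the hypothesized direction, which is the "same level of evidence" claim.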

2. ## Re: alpha in one sided tests

Hi WeeG

It is nice to meet a new contributor who has actually been around for quite a while!

I guess if we predetermine our hypothesis in one direction (for example, whether A is greater than B, instead of whether A and B differ in either direction), the alpha should remain at 0.05. However, I suppose some people might cheat and run a one-sided test AFTER seeing the results. Their motivation would be to boost the apparent significance and obtain a significant P value, since a one-sided test in the favourable direction yields a P value half that of the two-sided test on the same data.

So I guess some authorities might want to neutralize this by dividing the routine alpha (0.05) by 2 in advance, to offset the above way of halving the P value. In that case, switching to a one-sided test will not help an author suddenly obtain a significant P value when the real two-sided test was not significant.
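The inflation this post describes can be checked by simulation (my own sketch, not from the thread): under the null, if the tail is picked after looking at the data, a nominal one-sided alpha of 0.05 rejects about 10% of the time, while halving alpha to 0.025 restores the 5% error rate.

```python
# Simulate "choose the direction after seeing the data" under H0.
import random
from statistics import NormalDist

random.seed(0)
sd = NormalDist()
n_sims = 100_000

def post_hoc_reject(alpha):
    """Type I error rate when the favourable tail is chosen post hoc."""
    rejections = 0
    for _ in range(n_sims):
        z = random.gauss(0.0, 1.0)  # H0 is true: z is standard normal
        p = 1 - sd.cdf(abs(z))      # one-sided p in the lucky direction
        rejections += (p < alpha)
    return rejections / n_sims

rate_cheat = post_hoc_reject(0.05)   # inflated: close to 0.10
rate_fixed = post_hoc_reject(0.025)  # halved alpha: back near 0.05
print(rate_cheat, rate_fixed)
```

Using `abs(z)` is exactly the post-hoc trick: it always tests in whichever direction the data happened to lean, so both tails effectively contribute to the rejection rate.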

3. ## Re: alpha in one sided tests

hi :-)

Your argument makes sense, I guess one could cheat that way.

But what if one plans a one-sided test prior to calculating the sample size? Why would the regulator insist on 0.025 then?

I was thinking of another option. Often in one-sided tests a CI is used, and depending on the direction, either the lower or the upper limit is consulted. When only one limit of a 95% CI is used, the matching alpha is 2.5%. Am I making sense?
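This CI correspondence can be sketched numerically (the estimated difference and standard error below are assumed illustrative values): checking only the lower limit of a 95% two-sided CI against 0 is the same decision as a one-sided test at alpha = 0.025.

```python
# Lower limit of a 95% two-sided CI vs a one-sided test at alpha = 0.025.
from statistics import NormalDist

sd = NormalDist()
z_975 = sd.inv_cdf(0.975)  # ~1.96: each tail of the 95% CI holds 2.5%

# Assumed example: estimated treatment difference and its standard error.
diff, se = 1.2, 0.5

lower_95 = diff - z_975 * se      # lower limit of the 95% two-sided CI
z_stat = diff / se
p_one_sided = 1 - sd.cdf(z_stat)  # one-sided p for H1: diff > 0

# The lower CI limit excludes 0 exactly when the one-sided p < 0.025.
print(lower_95 > 0, p_one_sided < 0.025)
```

The equivalence holds for any `diff` and `se`, because "lower limit above 0" and "z statistic above the 97.5th percentile" are the same inequality rearranged.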

5. ## Re: alpha in one sided tests

> but what if one plans a one sided test prior calculating the sample size? why would the regulator insist on 0.025 then?

I think this is not valid, or at least not proper conduct, by the regulators. But if I were in charge, I would assume that most authors who use a one-sided test are choosing it after the experiment. My reasoning would be that a two-sided test of the hypothesis A = B has an equal chance of being rejected as the one-sided test of A <= B, so why would a researcher use the latter when he could use the more comprehensive version? So I "guess" perhaps this is why they have opted to treat everyone as a potential cheater. However, there are of course people who really have determined the one-sided test beforehand. In that case, it is an invalid approach (IMHO) to cut alpha in half. I hope statisticians or other experts on the forum can respond to this.

> I was thinking on another option. often in one sided tests, a CI is being used. depending on the direction, either the lower or upper limit is being used. In this case when only one limit is being used on a 95% CI, the matching alpha is 2.5%. Am I making sense?

I guess so, but again this whole assumption discards the matter of time (i.e., whether the test type was decided before or after the experiment). So it is again analogous to the fact that P values of a one-sided test are half the P values of the same comparisons using a two-sided test.

Our original point was that it is not the magnitude of the P value (nor the width of the CI) that matters in validating one-sided versus two-sided tests; it is the factor of time, i.e., our pre-determination of a one-sided versus a two-sided test. If we pre-determine a one-sided test of the null A <= B (alternative A > B), then it is correct that our chance of rejecting when A really is greater is doubled (or the P value is halved, or only one of the CI limits is used), but this comes at the cost of losing any chance to reject the null when the effect runs the other way (A < B), whereas in a two-sided test either A > B or A < B would reject the null.

So I think their approach of forcing the author to use alpha = 0.025 is invalid if the author really did pre-determine the test type, rather than switching tests after seeing the results in order to halve the P value.

On a side note, I remembered our conversations with contributors here. I think Fisherians like to discard the matter of time and act as if they are looking at all times at once (I might be one of them, according to GretaGarbo!). Maybe from such a perspective there is no pre-determination or post-determination, and such an alpha reduction might be valid. However, thinking it through more deeply just now, I guess even Fisherians might notice that when we "pre-determine" a test type, we are paying for a better chance of rejecting in the chosen direction with zero chance of rejecting in the opposite one.
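The point above, that a genuinely pre-specified one-sided test at alpha = 0.05 is not "cheating", can also be checked by simulation (my own sketch): when the direction is fixed before the data are seen, the type I error rate under the null is exactly the nominal 5%.

```python
# Type I error of a PRE-specified one-sided test at alpha = 0.05 under H0.
import random
from statistics import NormalDist

random.seed(1)
sd = NormalDist()
n_sims = 100_000
alpha = 0.05

rejections = 0
for _ in range(n_sims):
    z = random.gauss(0.0, 1.0)  # H0 is true
    p = 1 - sd.cdf(z)           # direction (upper tail) fixed in advance
    rejections += (p < alpha)

rate = rejections / n_sims
print(rate)  # close to 0.05, the nominal level
```

Contrast this with the post-hoc version earlier in the thread, where testing `abs(z)` in the data's favourable direction inflates the error rate to about 10%.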

6. ## Re: alpha in one sided tests

I just found this in an article:

"a two-sided symmetric 0.05 test has a greater level of evidence than a one-sided 0.05 test, but the same level of evidence as a one-sided 0.025 test that demonstrates the hypothesized beneficial outcome".

I am not sure I understand this.

