P-value from test statistic

#1
I'm given a test statistic of 2.4 in one of those H0-type problems with a 0.05 significance level. In the z-table, if you look up 2.4, it says 0.0082. The question asks for the p-value. The professor says the correct answer is 0.0164. I understand you simply multiply 0.0082 by 2 to get the p-value; my question is why?

Both 0.0082 and 0.0164 are listed in the answer choices. On a test, I might have just selected 0.0082, since that's the number I found in the z-table. How do I know when I need to multiply that number by 2?

Thank you!
 
#2
Whether you multiply the p-value by 2 depends on the type of hypothesis test (one-tailed or two-tailed). But I'd rather divide the level of significance by 2. If it's a one-tailed test (alternative hypothesis < or >), you don't have to divide or multiply anything: just look up the p-value and compare it to the level of significance. If it's a two-tailed test (alternative hypothesis ≠), find the one-tailed p-value and compare it to the level of significance divided by 2 (alpha/2). In your example, I'd take the table value (0.0082) and compare it to 0.025 (0.05/2).

edit:
btw, take a look at this: http://cnx.org/content/m17025/latest/hyptest22_cmp1.png
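For anyone who wants to verify the arithmetic, here is a minimal sketch in Python (scipy is my addition here; it is not part of the original question):

from scipy.stats import norm

z = 2.4
p_upper = norm.sf(z)          # upper-tail area beyond z = 2.4, about 0.0082
p_two_tailed = 2 * p_upper    # area in both tails, about 0.0164

alpha = 0.05
# Two-tailed test: compare p_two_tailed to alpha, or equivalently
# compare p_upper to alpha / 2; the decision is the same either way.
print(p_upper < alpha / 2, p_two_tailed < alpha)   # True, True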
 
#3
Sorry to resurrect an ancient thread, but I thought it was better than posting an entirely new thread on such a similar topic.

I have a related question, but it's about SPSS. (Actually, it's about the free version, PSPP, but close enough).

I have no excuse for not knowing this by now, but what can I say -- I'm a bit rusty I guess.

I am trying to remember exactly how SPSS/PSPP get the p-value of the test statistic, which they report as "Sig. (2-tailed)".

Here's what I've found on ats.ucla.edu:

"Sig. (2-tailed) - The p-value is the two-tailed probability computed using the t distribution. It is the probability of observing a t-value of equal or greater absolute value under the null hypothesis. For a one-tailed test, halve this probability."

I guess what I don't understand is why the p-value depends on the type of test you are doing. Isn't the p-value the probability of obtaining a sample mean as extreme as, or more extreme than, the one that you obtained? If so, then this number shouldn't change depending on whether the test is one-tailed or two-tailed. The only thing that should change is the number you compare it to: for a one-tailed test, compare it to alpha; for a two-tailed test, compare it to alpha/2.

However, if what I've just said is correct, then I don't understand why you would HALVE the p-value for a one-tailed test. When SPSS/PSPP report the p-value, are they actually reporting a DOUBLED p-value, on the assumption that we are going to automatically compare whatever they give us with our full alpha (and that we are too dumb to realize that for a two-tailed test we need to compare to alpha/2)?

Thanks,

bruin
 

hlsmith

Omega Contributor
#5
Of course someone can help you. The confusion probably comes from whether you are talking about the probability from the test (p-value) or the level of significance (e.g., 0.05).

Another scenario would be if you are conducting multiple tests. Say I am comparing a group to 3 other groups. I need to correct my level of significance to account for the extra chances of a false positive, since the 1-in-20 standard won't hold up if I am performing three tests. So I say, hey, let's do a Bonferroni correction, and I divide the level of significance by 3, which gives a cut-off of approximately 0.0167. Well, this awkward number is kind of ugly, and when I go to explain it to a study team member they may ask why I am using that number.

So another option is to multiply my three p-values by 3. Then I still use the 0.05 cut-off; the p-values are now three times larger, and I see how they line up with the familiar cut-off.
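Here is a sketch of the two equivalent views in Python, with made-up p-values:

alpha = 0.05
n_tests = 3
p_values = [0.004, 0.020, 0.300]   # hypothetical results of three tests

# View 1: shrink the cut-off to alpha / n_tests (about 0.0167)
print([p < alpha / n_tests for p in p_values])        # [True, False, False]

# View 2: inflate the p-values and keep the familiar 0.05 cut-off
adjusted = [min(p * n_tests, 1.0) for p in p_values]  # [0.012, 0.06, 0.9]
print([p < alpha for p in adjusted])                  # [True, False, False], same decisions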

I didn't know whether viewing your problem from a different perspective might help. In the future you could start your own thread and just put a link to the relevant thread as a reference or background.
 
#6
Thanks for the reply and I'll take your advice on starting a new thread next time.

Can we keep it simple, for instance a one-sample t-test? When SPSS/PSPP report the p-value for a one-sample t-test, are they actually reporting a DOUBLED p-value, on the assumption that we are going to automatically compare whatever they give us with our full alpha (and that we are too dumb to realize that for a two-tailed test we need to compare to alpha/2)?

If this is not the case, if the software is not doubling the p-value, then I really do NOT understand why a one-tailed test should require halving the p-value. The p-value should not change based on how many tails you have, right? It's the comparison value that should change -- compare the p-value to alpha for a one-tailed test, compare it to alpha/2 for a two-tailed test. Right?


EDIT: I think I finally figured this out, so please tell me if I am right. For a two-tailed test, SPSS/PSPP is plotting the plus AND minus of the obtained test statistic. Then it is reporting the proportion of the distribution that lies beyond those TWO values. But for a one-tailed test, it is only reporting the proportion of the distribution beyond that ONE test statistic (either plus OR minus). That's why it's twice as big for a two-tailed test. Is that it?
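Here is a sketch of that picture in Python (the t statistic and degrees of freedom are made up):

from scipy.stats import t

t_obs, df = 2.4, 20   # hypothetical observed statistic and degrees of freedom

# Two-tailed: area beyond BOTH +t_obs and -t_obs
p_two = t.sf(t_obs, df) + t.cdf(-t_obs, df)

# One-tailed: area beyond the ONE observed value
p_one = t.sf(t_obs, df)

print(p_two, 2 * p_one)   # equal, because the t distribution is symmetric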
 

BGM

TS Contributor
#7
Actually the \( T \) statistic has a nice property: it is symmetric about 0 under the null distribution.

So for a two-tailed t-test, you may interpret the test as follows:

Reject \( H_0 \) when \( |T| \) is large.

Given the observed test statistic is \( t \) (can be either positive or negative),

the p-value is \( \Pr\{|T| > |t|\} = \Pr\{T < -|t|\} + \Pr\{T > |t|\} = 2\Pr\{T > |t|\} \) by symmetry, from which you can see how the p-value gets doubled.

The more natural way of thinking is that a test statistic far out in either tail gives strong evidence against the null hypothesis, and we want to calculate the probability of obtaining a test statistic at least that extreme. For the t distribution we are glad that it is symmetric; for an asymmetric null distribution like the chi-square, the two-tailed p-value is debatable, but doubling the smaller tail area is a popular approach.
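A sketch of both cases in Python (the observed statistics and degrees of freedom are made up):

from scipy.stats import t, chi2

# Symmetric null (t distribution): Pr{|T| > |t|} is twice the one-tail area,
# and the sign of the observed statistic does not change the answer.
t_obs, df = -2.1, 15
p_t = t.sf(abs(t_obs), df) + t.cdf(-abs(t_obs), df)
print(p_t, 2 * t.sf(abs(t_obs), df))   # the same number

# Asymmetric null (chi-square): one popular convention doubles the
# smaller of the two tail areas, capped at 1.
x, k = 3.2, 10
p_chi2 = min(2 * min(chi2.cdf(x, k), chi2.sf(x, k)), 1.0)
print(p_chi2)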
 
#8
Hi BGM, thanks for your reply!

"Given the observed test statistic is \( t \) (can be either positive or negative),"
This quoted part is what has me confused. The sign of an observed test statistic is meaningful: a positive sign indicates the sample mean is above the hypothesized population mean, while a negative sign indicates it is below. So how can we (or SPSS) say that the observed statistic can be treated as either positive or negative? To me, it seems to matter greatly whether it is above or below, and more importantly, it can only be one OR the other, above OR below the hypothesized population mean, not both at the same time.

In terms of probability/proportion, then, it seems to me that the p-value should ALWAYS be "one-tailed" (so to speak), whether we are splitting ALPHA across two tails (two-tailed test) or lumping it into one tail (one-tailed test). Yet SPSS seems to report a p-value that combines the probability of obtaining a t below the negative version of our observed test statistic PLUS the probability of obtaining a t above the positive version. I just don't see why this is legitimate, given that the sign of an observed test statistic is meaningful.

Thanks again
 
#9
Hi bruin,
I am having the same doubt as you. I wonder if you have managed to convince yourself why we need to multiply the p-value by 2. Care to share?