# Thread: p value of .051

1. ## p value of .051

Hello everyone,

If I have a p-value of .051, is it considered over .05?
Should I therefore not reject the null hypothesis?

Thanks

2. ## Re: p value of .051

If you're using an alpha of .05 then a p-value of 0.051 wouldn't allow you to reject the null.
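The fixed-alpha decision rule above can be sketched in a few lines of Python (the alpha and p-value here are just the numbers from this thread, not output from a real test):

```python
# Minimal sketch of the fixed-alpha (Neyman-Pearson) decision rule.
alpha = 0.05      # significance level chosen before seeing the data
p_value = 0.051   # the p-value from the thread

# The rule is a strict comparison: .051 exceeds .05, however slightly.
if p_value <= alpha:
    decision = "reject H0"
else:
    decision = "fail to reject H0"

print(decision)  # fail to reject H0
```

The point is that the rule itself has no notion of "close": any p-value above alpha yields the same decision.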


4. ## Re: p value of .051

But you could just report the p-value that you got.

(If I were reading a journal article and all it reported was "fail to reject H0," and I later found out that the authors actually got p = .051, I would be kind of annoyed at their level of NHST fundamentalism.)

5. ## Re: p value of .051

I agree with Dason and bruin. Reporting the p-value and the effect size (e.g., its components) will also let the reader judge whether your test may have been underpowered.

6. ## Re: p value of .051

Depending on your field of research, you might consider reporting the p-value you got while noting that it is not significant but is close to significance. This is sometimes called marginal significance. Depending on the field, this is usually done with p-values up to about 0.075 or 0.1.

Besides, significance isn't everything. Another important factor is effect size. Sometimes a p-value of 0.051 is much more exciting than one of 0.001, if the first has a big effect size and the second a small one.

7. ## Re: p value of .051

Originally Posted by Shachar
Depending on your field of research, you might consider reporting the p-value you got while noting that it is not significant but is close to significance. This is sometimes called marginal significance. Depending on the field, this is usually done with p-values up to about 0.075 or 0.1.
I find this practice similar to people claiming "quasi-randomized" when referencing group allocation. The process is either random or it is not. Similarly, a given p-value shouldn't be "marginally significant" when it exceeds the alpha cutoff; it is simply nonsignificant at the chosen alpha level, per the a priori decision criteria. That is, the test is either significant at a given alpha level or it is not. However, reporting the p-value along with your decision lets the reader see exactly what that decision means and mitigates the dichotomization issue people worry about.

Originally Posted by Shachar
Besides, significance isn't everything. Another important factor is effect size. Sometimes a p-value of 0.051 is much more exciting than one of 0.001, if the first has a big effect size and the second a small one.
This is a good point: p-values should be reported alongside an effect size, ideally with a confidence interval or the values needed to calculate one.
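To illustrate the effect-size point, here is a minimal stdlib-only sketch of Cohen's d for two independent groups; the data are made up for this example and do not come from the thread:

```python
# Sketch: report an effect size (Cohen's d) alongside the p-value.
# The two groups below are hypothetical illustrative data.
from math import sqrt
from statistics import mean, stdev

group_a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2]
group_b = [4.6, 4.9, 4.4, 4.7, 4.5, 4.8, 4.3, 4.6]

n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation of the two samples
sp = sqrt(((n_a - 1) * stdev(group_a) ** 2 +
           (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2))

# Cohen's d: standardized mean difference
d = (mean(group_a) - mean(group_b)) / sp
print(f"Cohen's d = {d:.2f}")
```

Reporting d (or a confidence interval for the mean difference) next to the p-value lets a reader distinguish a small effect with a big sample from a large effect with a small one.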
