Reporting results p > .05 justified?

DvdR

New Member
#1
Hi all,

Recently I did a field study where I investigated recycling behaviour in the office. We did a baseline and a post-intervention measurement of the response rate (i.e. how much of the total amount of a certain type of waste ends up in the right bin?). For 2 x 2 weeks we collected and analyzed all the trash on a daily basis, resulting in 10 + 10 (baseline + post) datapoints.

On four floors we tested interventions to improve recycling and we compared these to two control conditions. With a small N = 20 we found strong effects (Cohen's d > .80). For some effects the p-value was only marginally significant (.05-.10), but we still reported those effects. I was taught that if you find strong effects with a small N that are close to being significant, you can assume the effect is actually there. After all, the p-value will automatically decrease when N increases.
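(To illustrate that last point: the sketch below is not our data, just a simulation showing how, for a fixed true effect size of d = 0.8, the p-value of a two-sample t-test tends to shrink as the sample size grows. The group labels and effect size are assumptions for illustration only.)

```python
# Minimal simulation sketch: for a fixed true effect (d = 0.8), the typical
# p-value of a two-sample t-test shrinks as N per group grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d = 0.8  # assumed standardized mean difference between conditions

for n_per_group in (10, 20, 50, 100):
    pvals = []
    for _ in range(2000):  # average over many simulated studies
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_d, 1.0, n_per_group)
        pvals.append(stats.ttest_ind(treated, control).pvalue)
    print(f"n per group = {n_per_group:3d}  median p = {np.median(pvals):.4f}")
```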

We also reported effects with a p-value between .10 and .15, with an explicit disclaimer that these effects should be interpreted with great caution and that a follow-up study is required.

Someone, however, heavily criticized us for reporting effects that have a p-value above .05. I personally think that this person is too rigid about the p-value, considering the small N and the large effects. But maybe I'm wrong and I shouldn't have reported these results.

Can anybody tell me whether it was justified to report these results or not (also taking into account the disclaimer we used)? Thanks!

Best,
Danny
 

hlsmith

Omega Contributor
#2
"After all, the p-value will automatically decrease when N increases." perhaps it would if you had a random sample representative of the population.


Here is an idea: just don't use p-values and report all effects along with confidence intervals. P-values are sample-size dependent and tell you nothing about the precision, magnitude, or direction of an effect.
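For example, here is a minimal sketch of reporting an effect size with a bootstrap confidence interval instead of leaning on the p-value alone. The daily response rates below are made up for illustration, not taken from the study.

```python
# Sketch: Cohen's d with a percentile bootstrap 95% CI for one floor.
import numpy as np

rng = np.random.default_rng(0)

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical daily response rates (proportions), baseline vs post-intervention.
baseline = np.array([0.42, 0.38, 0.45, 0.40, 0.36, 0.44, 0.39, 0.41, 0.37, 0.43])
post     = np.array([0.55, 0.60, 0.52, 0.58, 0.61, 0.54, 0.57, 0.59, 0.53, 0.56])

d_hat = cohens_d(post, baseline)

# Percentile bootstrap: resample each condition with replacement.
boot = [cohens_d(rng.choice(post, len(post), replace=True),
                 rng.choice(baseline, len(baseline), replace=True))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Cohen's d = {d_hat:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

An interval that excludes zero (or that is wide enough to include trivial effects) tells the reader far more about the evidence than whether p landed just above or just below .05.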