Odd statistical argument needs resolution, thanks.

Hi all. I'm having a silly argument online and I need help, as my statistics skills are self-taught and spotty at best.

I assert that, in a given field, the probability that a complete set of 600 "independent, well-designed, well-conducted, not fraudulent" studies will all find "no effect" is related to the published significance threshold, p < 0.05, by the formula (1-p)^600. Since that number is tiny, such a set of studies, if it exists, would be an indication of some kind of bias in the field. I don't understand the argument used to refute my claim. Can anyone explain what I'm not getting?

I say:

If there really are ZERO studies out of 600 that "claim negative effects" (that is, that reject the null hypothesis), then the probability that this happened by chance, rather than through some kind of bias, is (0.95)^600 (assuming p < 0.05): even with no real effect, each study still has a 5% chance of a false positive, so the chance that all 600 come up negative is 0.95 compounded 600 times. That is a very small number indeed.
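
To put numbers on that, here is a quick back-of-the-envelope sketch in Python. It assumes the 600 studies are genuinely independent and that each has exactly a 5% false-positive rate, which are of course idealizations:

    # Chance that 600 independent studies ALL find "no effect" purely by
    # luck, assuming a true null and an exact 5% false-positive rate each.
    n, alpha = 600, 0.05
    p_all_negative = (1 - alpha) ** n      # probability of zero positives
    expected_positives = n * alpha         # false positives expected anyway
    print(p_all_negative)                  # ~4.3e-14
    print(expected_positives)              # 30.0

In other words, even if there were no effect at all, you would still expect roughly 30 of the 600 studies to reject the null just by chance.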

My opponent makes the argument:

Some variation of this seems to be your main complaint about GMO research every time you post here, but you're fundamentally misunderstanding something about statistics.
Taking p < 0.05 as the level of significance is just a standard that is answered at the yes/no level: is there more or less than a 5% chance that this result could have happened if there was really an effect present? Every paper which claims "no negative effects at the p < 0.05 level" has to have had less than a 5% chance of getting that result if there was really an effect, but (and I want to emphasise that this is the point you need to understand) it would show the same result if there was less than a 0.1% chance, or any arbitrarily low number you want, of getting that result if there was an effect.
It won't even be mentioned in the paper, because the significance level has to be chosen before the experiment starts; you can't just go back afterwards and say "these results are significant at the p < 0.001 level", even if you get results that would have passed that significance level.
Saying there's only a (0.95)^600 chance that 600 papers could all show no negative effects is assuming that all 600 papers only just missed the cutoff for showing an effect. You're assuming the worst possible interpretation of your opponents' results, and arguing that this proves there's something fishy. This lack of statistical understanding on your part is why you're not getting more traction with your arguments.
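
For anyone who wants to check the arithmetic the two of us are fighting over, here is a small simulation. It is a sketch only: it assumes independent studies, a true null, and an exact 5% false-positive rate per study, and it uses numpy:

    # Simulate many hypothetical "fields", each containing 600 independent
    # studies of a true null (no real effect). Each study falsely rejects
    # the null with probability 0.05; count the positives per field.
    import numpy as np

    rng = np.random.default_rng(0)
    n_fields, n_studies, alpha = 100_000, 600, 0.05
    positives = rng.binomial(n_studies, alpha, size=n_fields)

    print(positives.mean())         # ~30 positive studies per field on average
    print((positives == 0).mean())  # share of fields with zero positives:
                                    # 0.0 here; theory says about 4.3e-14

Note that this only checks the arithmetic behind the (0.95)^600 figure; it says nothing about whether the independence and exact-5% assumptions hold for real studies.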