Hi!
Would it be OK to use a significance level of 0.1 (instead of the traditional 0.05) when using non-parametric tests in bivariate analyses, if clinical significance is suspected? If so, could you please point me to some papers supporting this approach?
Thanks a lot!!
I don't personally know of any references supporting this idea at all. You would need to really go out of your way, in my opinion, to justify the use of this threshold, parametric or otherwise.
I'm not sure you would need to really go out of your way to justify this.
0.05 is not set in stone; however, it is the generally accepted and expected alpha value in most disciplines.
As long as you justify your choice with sensible reasoning, you should be fine - and backing this up with a peer-reviewed article wouldn't hurt.
I have used 0.1 in the past when, for environmental impact statements, we wanted a more "relaxed" threshold to act as an early warning signal between reference and impact sites (as an example).
The earth is round: P<0.05
I suppose, coming from the behavioral sciences, I may not have an accurate understanding of acceptable practices in other realms such as the environmental sciences.
I agree with bugman that there is no reason you should not use a confidence threshold different from 0.05, as long as you say what threshold you are using.
What I can't see how to justify, though, is using one confidence threshold for parametric tests and another for non-parametric tests. There is nothing fundamentally different about what confidence means for a parametric versus a non-parametric test.
Pick your desired confidence level and pick the appropriate test, but those are independent decisions based on entirely different criteria.
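To make the point concrete, here is a minimal sketch (pure Python, with made-up reference/impact numbers, using a Monte Carlo permutation test as the nonparametric test): the test produces one p-value, and that same p-value is simply compared against whichever alpha you chose in advance. Nothing about the test itself depends on the threshold.

```python
# Hypothetical example: the choice of test and the choice of alpha are
# independent decisions. The permutation test below knows nothing about
# alpha; the threshold only enters at the final comparison.
import random

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided Monte Carlo permutation test for a difference in means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Made-up data in the spirit of bugman's reference-vs-impact example.
reference = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
impact    = [4.6, 4.4, 4.9, 4.5, 4.7, 4.3]

p = permutation_p_value(reference, impact)
for alpha in (0.05, 0.10):
    verdict = "reject H0" if p < alpha else "fail to reject H0"
    print(f"p = {p:.4f} vs alpha = {alpha:.2f}: {verdict}")
```

The same p-value feeds both comparisons; switching from 0.05 to 0.1 changes only the last line of reasoning, never the test.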
Thanks a lot for your quick answers!!
Hi Juanpe000,
I am facing the same problem that you did. Did you manage to find any reference on that? Thanks a lot.
It is really a matter of risk. If you are performing exploratory studies with plans for future confirmatory studies AND the risk associated with making a Type I error is low, then 0.10 is an acceptable alpha. If, on the other hand, the risk associated with a Type I error is very high (e.g., potential for injury/harm), then 0.01 might be more appropriate.
Alpha should always be established based on risk, not because it is "traditional" or generally accepted.
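To see what that risk means operationally, here is a hypothetical pure-Python simulation (using a simple one-sample z-test with known sigma, chosen only because its p-value is easy to compute by hand): when the null hypothesis is true, the long-run fraction of rejections at threshold alpha is alpha itself, so choosing alpha = 0.10 literally means accepting a 10% false-alarm rate.

```python
# Hypothetical simulation: alpha IS the Type I error rate you accept.
# Repeat an experiment many times with H0 true; the share of spurious
# rejections at each threshold converges to that threshold.
import math
import random

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided one-sample z-test p-value (sigma assumed known)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_experiments, n = 20_000, 25
rng = random.Random(42)
# H0 really is true here: every sample is drawn from N(0, 1).
p_values = [z_test_p([rng.gauss(0, 1) for _ in range(n)])
            for _ in range(n_experiments)]

rates = {}
for alpha in (0.01, 0.05, 0.10):
    rates[alpha] = sum(p < alpha for p in p_values) / n_experiments
    print(f"alpha = {alpha:.2f}: empirical Type I rate = {rates[alpha]:.3f}")
```

Each empirical rate lands close to its alpha, which is exactly why the threshold should be set by how much false-alarm risk the application can tolerate.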
Some serious thread necromancy going on here, but I'll take the opportunity for a couple of choice quotes:
Originally Posted by Rosnow & Rosenthal, 1989 (I'm not sure how God feels about the .1, though)
Originally Posted by Cohen, 1990: "So the marginal probability of a Type I error in the real world* is 0."
That all said, while the probability of a Type I error may be 0, the probability that a reviewer will spit the dummy at the prospect of alpha = 0.1 is approximately 1.
*May depend on your definition of real world.
Originally Posted by Cohen, 1990: "So the marginal probability of a Type I error in the real world* is 0."
Yes, but often we don't know the direction of the effect for sure.
With kind regards
K.