1. Contradictory Results re. Normality Tests

Hello,

I've got an issue regarding normality. I have tried to figure it out myself, but I need some help, please.

I checked the variable for normality in SPSS, and this is the output I got.....
n = 2945
Skewness = 0.447
Kurtosis = 0.119
Kolmogorov-Smirnov Sig. value = 0.000
Shapiro-Wilk Sig. value = 0.000
Histogram/Stem-and-Leaf Plot = close to bell shaped

My problem is that the output seems contradictory. Skewness and kurtosis suggest the data are normally distributed, while the K-S test suggests they're not. I'm confused!

Any help really appreciated....

2. Re: Contradictory Results re. Normality Tests

Originally Posted by eoinmc
Hello,

I've got an issue regarding normality. I have tried to figure it out myself, but I need some help, please.

I checked the variable for normality in SPSS, and this is the output I got.....
n = 2945
Skewness = 0.447
Kurtosis = 0.119
Kolmogorov-Smirnov Sig. value = 0.000
Shapiro-Wilk Sig. value = 0.000
Histogram/Stem-and-Leaf Plot = close to bell shaped

My problem is that the output seems contradictory. Skewness and kurtosis suggest the data are normally distributed, while the K-S test suggests they're not. I'm confused!

Any help really appreciated....

The K-S (or S-W) test is rejecting (p<.001) because of your large sample size (almost 3000). This is sometimes referred to as "excessive power".
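You can see this effect with a quick simulation (Python with numpy/scipy). This is hypothetical data, not yours: a gamma distribution chosen so its skewness roughly matches the value in your SPSS output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2945

# A gamma(20) sample: skewness 2/sqrt(20) ~ 0.447, i.e. roughly the same
# mild right skew as in the SPSS output above (illustrative data only).
x = rng.gamma(20.0, size=n)

print("skewness:", stats.skew(x))
print("excess kurtosis:", stats.kurtosis(x))

# At this sample size the formal tests easily detect the mild skew,
# even though a histogram of x looks close to bell-shaped.
print("K-S p:", stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue)
print("Shapiro-Wilk p:", stats.shapiro(x).pvalue)
```

So a Sig. value of .000 from K-S/S-W is entirely compatible with a nearly bell-shaped histogram when n is close to 3000.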

3. Re: Contradictory Results re. Normality Tests

I'm still confused though....should I disregard the result of the K-S test because the sample size is too large?

With 'excessive power', is it better to judge normality (or not) from skewness/kurtosis and a look at the histogram? Is another test possible?

Thanks again...

4. Re: Contradictory Results re. Normality Tests

Well what do you need normality for?

5. Re: Contradictory Results re. Normality Tests

Originally Posted by Dason
Well what do you need normality for?
To know whether to use parametric or non-parametric tests for statistical analysis. Should I present my data as median +/- IQR or mean +/- SD? Should I compare means using Student's t-test or the Mann-Whitney U-test?

Surely establishing whether or not the distribution is normal first is paramount!?

6. Re: Contradictory Results re. Normality Tests

Well, you're going to have the same issue in that you'll have excessive power. With a sample size that large you'll be fine using a t-test (it assumes normality of the sampling distribution, which you definitely have).

7. Re: Contradictory Results re. Normality Tests

Using the non-parametric Mann-Whitney test would be safer. If your effect is non-parametrically significant, it is really there.

Remember, a t-test with N more than a dozen or so is basically a z-test, so obviously the normality assumption is relevant for the exact calculation of a tail probability. But as long as the data aren't totally non-normal at a visual level, and as long as your t-test result is reasonably significant, a slight departure from normality will only modify P a little, and so will only be important if P is very close to your critical value. For example, if a t-test gives P=2.13E-6, well, maybe the true value is P=2.15E-6, but that hardly matters -- your effect is definitely significant. But if a t-test gives P=0.049, well, maybe the true value is P=0.051, so you can't really be sure that your effect is significant at the 95% confidence level.
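To put made-up numbers on the "reasonably significant" case (Python/scipy, hypothetical mildly skewed samples with a clear true shift, not anyone's real data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two mildly right-skewed samples with a clear location shift between them.
a = rng.gamma(20.0, size=1500)
b = rng.gamma(20.0, size=1500) + 1.0

t_p = stats.ttest_ind(a, b).pvalue
u_p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
print(f"t-test p = {t_p:.2g}, Mann-Whitney p = {u_p:.2g}")
```

Here both tests reject comfortably, so the mild non-normality can't change the conclusion; it's only when P sits near the critical value that the distinction starts to bite.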

8. Re: Contradictory Results re. Normality Tests

But with a sample size greater than 2000 the sampling distribution of the mean is basically guaranteed to be normally distributed so the p-value should essentially be exact. Not that I'm saying the t-test is the ultimate "end-all" test. Just that in this situation you wouldn't be breaking the normality assumption.

9. Re: Contradictory Results re. Normality Tests

Originally Posted by Dason
Well you're going to have the same issue in that you'll have excessive power. With a sample size that large you'll be fine using a t-test (it assumes normality of the sampling distribution which you definitely have).
Originally Posted by ichbin
Using the non-parametric Mann-Whitney test would be safer. If your effect is non-parametrically significant, it is really there.

Remember, a t-test with N more than a dozen or so is basically a z-test, so obviously the normality assumption is relevant for the exact calculation of a tail probability. But as long as the data aren't totally non-normal at a visual level, and as long as your t-test result is reasonably significant, a slight departure from normality will only modify P a little, and so will only be important if P is very close to your critical value. For example, if a t-test gives P=2.13E-6, well, maybe the true value is P=2.15E-6, but that hardly matters -- your effect is definitely significant. But if a t-test gives P=0.049, well, maybe the true value is P=0.051, so you can't really be sure that your effect is significant at the 95% confidence level.
Thanks both for your thoughtful replies, although if I understand correctly, you seem to be giving contradictory advice.

I don't understand why I can't just conclude, based on the K-S result above (p<0.001), that the data are not normally distributed.

If both skewness/kurtosis and K-S help establish normality, why have they contradicted each other so significantly here?

10. Re: Contradictory Results re. Normality Tests

We haven't really offered contradictory advice. Both the t-test and the Mann-Whitney tests are reasonable in this situation. Of course you're going to have excessive power in either case.

I think you're confused about a few things. Just because the skewness/kurtosis seem close to what they should be for a normal distribution doesn't mean they match it exactly. With a sample size as large as yours, even that small departure ends up rejecting the null hypothesis that the data are normally distributed.

11. Re: Contradictory Results re. Normality Tests

I see how the advice looks contradictory, but really we are just emphasizing different aspects of the same not-entirely-simple whole.

Look, if the point of your study were to determine whether these data are normally distributed, your K-S test would be all you need: the data are not normally distributed. They may be close, but your large sample allowed you to detect a statistically significant departure from normality. Case closed.

But the point of your study is presumably not to determine whether the data are normally distributed. The point is to compare two data sets to see whether the average values are different between them. You want to know whether to use a t-test or an MW test to do that.

My point was that the MW test is always "safe" -- you don't have to worry about parametric assumptions.

Dason's point was that the t-test may be safer than you think. The assumption behind the t-test isn't really that the data are normal (that's sufficient but not necessary), it's that the sampling distribution of the mean is normal. By the central limit theorem, that assumption will get better and better satisfied as N increases, regardless of what the distribution of the data is (except for a small class of troublesome distributions, like the Cauchy distribution, that do not satisfy the requirements of the central limit theorem).
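The CLT effect is easy to see numerically (a Python/scipy sketch, with a deliberately skewed distribution -- exponential -- standing in for "non-normal data"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Draw many samples of size n from a strongly skewed distribution...
n, reps = 2945, 1000
x = rng.exponential(size=(reps, n))

# ...the raw draws are very skewed, but the sample means are not.
means = x.mean(axis=1)
print("skewness of raw draws:", stats.skew(x.ravel()))  # ~2 for exponential
print("skewness of sample means:", stats.skew(means))   # ~2/sqrt(n), near 0
```

So even for data this skewed, at n of about 3000 the sampling distribution of the mean that the t-test relies on is essentially normal.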

I agree entirely with Dason's point and think it's an important one for you and everyone doing statistical tests to digest. I am just adding the caveat that, since the CLT is only true in the limiting sense, you only have reason to believe that the sampling distribution of the mean is approximately normal, and you have no quantitative limit on just how close to normal that is. This is not likely to be a problem for tail probabilities far smaller than your critical threshold -- it doesn't matter if those are a little bit off. But it could be a problem for tail probabilities that are very close to your critical threshold, because then even a small correction could matter.

12. Re: Contradictory Results re. Normality Tests

Very well said ichbin.

13. Re: Contradictory Results re. Normality Tests

Originally Posted by ichbin

My point was that the MW test is always "safe" -- you don't have to worry about parametric assumptions.

I would favor the MW test as well. I would also point out that Monte Carlo studies (e.g. Blair & Higgins) have empirically demonstrated that nonparametric tests can be more powerful than the usual OLS parametric (t or F) tests in such situations (i.e. the values of skew and kurtosis given above in the initial post).
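A crude Monte Carlo along those lines (Python/scipy; my own illustrative setup, not Blair & Higgins' actual design): a small location shift between two skewed (exponential) samples, counting how often each test rejects at the 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power(pvalue_fn, reps=500, n=100, shift=0.3):
    """Fraction of simulated skewed-data experiments where the test rejects."""
    hits = 0
    for _ in range(reps):
        a = rng.exponential(size=n)
        b = rng.exponential(size=n) + shift
        hits += pvalue_fn(a, b) < 0.05
    return hits / reps

t_power = power(lambda a, b: stats.ttest_ind(a, b).pvalue)
u_power = power(lambda a, b: stats.mannwhitneyu(a, b, alternative="two-sided").pvalue)
print(f"t-test power: {t_power:.2f}, Mann-Whitney power: {u_power:.2f}")
```

On samples this skewed, the Mann-Whitney test rejects noticeably more often than the t-test for the same shift -- here the nonparametric test is not just the "safe" choice but also the more powerful one.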

14. Re: Contradictory Results re. Normality Tests

Ok, while I was unable to fully understand some of the above points, I think I've gathered enough information to answer my question for now. I'll go away and think it over with a better understanding.
Again, I'm really grateful for all the advice, cheers guys...

15. Re: Contradictory Results re. Normality Tests

Does the Central Limit Theorem apply? If so, the sampling distribution of the mean is approximately normal, whatever the raw data look like. You then verify this by checking the data for gross departures. So either do not do any testing at all, or do it only after you think about whether the CLT applies.