Using the Anderson-Darling test for normality instead of the Shapiro-Wilk

I have to test the normality of my data in R, but I'm not sure whether the Anderson-Darling test does the same job as the Shapiro-Wilk one (my number of observations is too high, more than 50,000, so the Shapiro-Wilk test cannot be used). I've read that the Anderson-Darling test is a variant of the Kolmogorov-Smirnov test; I haven't used the Kolmogorov-Smirnov test itself because my data has ties, so the computed values might be erroneous.

Can someone tell me whether the Anderson-Darling test is a suitable choice here and whether it is valid for checking normality?

NOTE: Besides using the AD test, I'm also checking the associated QQ plots to verify that my data is not normal.
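For reference, a minimal sketch of this kind of check. The thread is about R (where the Anderson-Darling test is available as `ad.test` in the `nortest` package); the sketch below uses Python's `scipy.stats` equivalents, with simulated normal data standing in for the real 50,000-observation dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50_000)  # stand-in for the real dataset

# Anderson-Darling test against the normal family
# (mean and variance are estimated from the sample)
res = stats.anderson(x, dist="norm")
print("A^2 statistic:", res.statistic)
for crit, sig in zip(res.critical_values, res.significance_level):
    print(f"  reject at {sig}% level: {res.statistic > crit}")

# QQ-plot companion check: probplot returns the ordered data against
# normal quantiles, plus the correlation r of that plot (close to 1
# when the data is approximately normal)
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist="norm")
print("probability-plot correlation:", r)
```

Note that `scipy.stats.anderson` reports the statistic against a table of critical values rather than a p-value, so the decision is read off per significance level.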


No cake for spunky
I have never read anything positive about such tests in general. I switched to using QQ plots specifically because of the criticisms of the standard normality tests. If you reject the null, which is pretty likely for some of these tests once you have enough cases, what does that actually tell you?

Of course, if you have 5000 cases, normality almost certainly does not matter anyhow. :) Why are you testing for normality with so many cases? If you are running a regression model, ANOVA, etc., non-normality will have essentially no impact with that many cases.
Thanks for your reply! :)

I tested the normality of my data because I wanted to decide whether to report the mean and standard deviation or the median, Q25, and Q75. Since my data is not normal, I used the second option (median and quartiles), not only for building a table but also for representing my data graphically (using the median instead of the mean).

In the statistics course at my university, we used the Shapiro-Wilk test to determine the normality of datasets, while QQ plots were used to check the hypothesis of normality of the residuals in a model... Why are these tests not recommended?



No cake for spunky
I would compute both the mean and the median and see whether they differ much. That is always a good idea.
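That comparison is quick to sketch. Below is a Python example (the thread itself uses R) with a hypothetical right-skewed lognormal sample standing in for the real data; for skewed data like this, the mean sits well above the median:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical right-skewed stand-in data (lognormal),
# where mean and median diverge noticeably
x = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)

mean, sd = x.mean(), x.std(ddof=1)
median = np.median(x)
q25, q75 = np.percentile(x, [25, 75])
print(f"mean={mean:.3f} sd={sd:.3f}")
print(f"median={median:.3f} Q25={q25:.3f} Q75={q75:.3f}")
```

If the two summaries are close, it matters little which one is reported; a large gap is itself evidence of skew.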

There are several reasons they are not recommended, and I have not read the material in a while (once I read the critique, I quit using them).

All those tests have a null hypothesis, namely normality. But when you have a lot of cases, and therefore a lot of statistical power, you are likely to reject the null regardless. More importantly, what does rejecting the null really mean? Say it means your data is non-normal. But how non-normal, and in what form? I would guess all real-world data is somewhat non-normal (there is a joke that if the data is normal, someone made it up). QQ plots address both problems to some extent: they show you how, and in what form, the data is non-normal.

Most people, although this is not true in your case, are concerned about normality because it is an assumption of regression and ANOVA. But it turns out that with a lot of cases it rarely matters for these methods if the data is non-normal. They still teach it in classes, but most who run models ignore it as long as they have a hundred or so cases.
