I think you need to do hypothesis testing. Please see the link below:
I know that when running a Shapiro-Wilk test you need a W value close to 1 for normality to be adequately met. But how close is close? For example, what does it mean if your W value is .812? I can't find guidelines on this anywhere.
I'm still finding these terms confusing at this point. In the following output from the Shapiro-Wilk test, is .812 the W value? Does this mean that normality has not been met?
Statistic = .812
df = 99
Sig = .000
Does your software require input of the alpha (the risk of rejecting the null hypothesis that the distribution is normal)? If not, it may use a default alpha such as 0.05, in which case you appear to have strong evidence against normality.
On further review, I retract the above, since we are not clear on what the SPSS output represents. I would suggest consulting the software's help to clarify the output.
Last edited by Chris; 05-05-2009 at 09:19 AM. Reason: misunderstanding of Labtec's question.
Thank you. I'm using SPSS, which doesn't ask for the alpha, and as far as I can tell, is testing for normality.
So, if the statistic were higher, say .987 or thereabouts, would that mean the data were normal? I'm still not clear on what cut-off value tells you whether the data are normal or not.
The following URL should help.
This example shows SPSS output. While it reports the W value, the important number is the "Sig" (the p-value): if it is low (< .05), you reject the null hypothesis that the data come from a normal distribution. Since Labtec's Sig is .000, the data are significantly non-normal.
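To see the same decision rule outside SPSS, here is a minimal sketch in Python using SciPy (assuming SciPy and NumPy are available; the sample data here are made up for illustration). It computes the Shapiro-Wilk W statistic and its p-value, then compares the p-value to alpha = 0.05 rather than judging W against some universal cut-off:

```python
# Sketch of the Shapiro-Wilk decision rule: the p-value, not W alone,
# determines whether to reject normality at a chosen alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two illustrative samples of size 99 (matching the df in the thread):
normal_data = rng.normal(loc=0, scale=1, size=99)   # drawn from a normal
skewed_data = rng.exponential(scale=1, size=99)     # heavily skewed

for name, data in [("normal sample", normal_data),
                   ("skewed sample", skewed_data)]:
    w, p = stats.shapiro(data)
    verdict = "reject normality" if p < 0.05 else "cannot reject normality"
    print(f"{name}: W = {w:.3f}, p = {p:.4f} -> {verdict}")
```

For the skewed sample the p-value is far below .05, so normality is rejected even though W might not look dramatically far from 1. This is why there is no fixed W cut-off: how far W may fall below 1 by chance depends on the sample size, and the p-value accounts for that.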
Thank you - this makes much more sense now.