My understanding is that the t-test is a special case of a one-way ANOVA.
I am confused by highly discrepant results when comparing the means of 2 variables. I am using Stata 10.
We have a control group and a treated group each with n = 20
control mean = 0.08 sd = .58
treated mean = 0.9 sd = .56
We first note that Bartlett's test for equal variances gives: chi2(3) = 12.5823, Prob>chi2 = 0.006
Hence, using a t-test with unequal population variances:
ttest control == treated, unpaired unequal
gives a p value < 0.0001
The command:
ttesti 20 0.08 .58 20 .9 .56, unequal welch
also gives p < 0.0001
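As a cross-check on the Welch result (done here in Python with scipy rather than Stata; the summary statistics are taken from the figures above):

```python
from scipy import stats

# Welch's unequal-variance t-test from the posted summary statistics:
# control: n=20, mean=0.08, sd=0.58; treated: n=20, mean=0.90, sd=0.56
t, p = stats.ttest_ind_from_stats(
    mean1=0.08, std1=0.58, nobs1=20,
    mean2=0.90, std2=0.56, nobs2=20,
    equal_var=False,  # Welch, matching ttesti ..., unequal welch
)
print(t, p)  # |t| is about 4.5 and p < 0.0001, agreeing with Stata
```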
But
oneway control treated
gives an F = 2.46
p = .1206
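For two groups, the one-way ANOVA F statistic should equal the square of the pooled (equal-variance) t statistic, so the summary statistics above predict an F far larger than 2.46. A minimal sketch of that calculation (Python assumed; formulas are the standard pooled-variance ones, not taken from the Stata output):

```python
import math
from scipy import stats

n1 = n2 = 20
m1, s1 = 0.08, 0.58  # control
m2, s2 = 0.90, 0.56  # treated

# Pooled (equal-variance) two-sample t statistic
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t = (m2 - m1) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# For two groups, one-way ANOVA's F equals t squared
F = t**2
p = stats.f.sf(F, 1, n1 + n2 - 2)
print(F, p)  # F is about 20.7 with p < 0.0001, not the reported F = 2.46
```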
Manual calculation with a log-likelihood ratio test matches the ANOVA p value almost exactly.
A nonparametric Kruskal-Wallis test gives p = .3121
My question is:
What assumptions are being violated such that the t-test gives a wildly different answer from the ANOVA?
Cheers
Rob