Non-significant paired t-test, what happens next??

#1
Hi,

I'm analysing my data for my master's dissertation and I'm having problems!

I have 3 "before" and "after" treatment groups (n=20 each) and I've applied paired samples t-test to the before and after data and found no significant differences between either of the groups at a p level of 0.05.

My vice-principal, who is also in charge of the statistics module, is telling us that we need to go on and do a one-way ANOVA anyway (she's been wrong before and no one knows what they're talking about!!).

So I have 2 questions:

1) Is an ANOVA really necessary, since none of the groups show a statistically significant difference on the t-tests?

2) I did go ahead and run a one-way ANOVA just to see, and despite the fact that my 3 paired t-tests are not significant, the ANOVA IS significant (p=0.005!)... How is this possible and what does it mean??
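For reference, the one-way ANOVA corresponds to something like the sketch below (again Python/scipy with made-up data; I'm not even sure whether the change scores or the raw after scores are the right input, which is part of my confusion):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical change scores (after minus before) for the three groups, 20 per group
change_a = rng.normal(0.0, 2, 20)
change_b = rng.normal(1.5, 2, 20)
change_c = rng.normal(3.0, 2, 20)
res = stats.f_oneway(change_a, change_b, change_c)   # one-way ANOVA across the 3 groups
print(res.statistic, res.pvalue)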

I'm very confused, please help!!
 

noetsi

No cake for spunky
#3
It is possible that your t-tests failed because you have too few cases to detect an effect that does exist; that is, your power is too low. I am surprised that your ANOVA worked with 60 cases.

It is nearly always better to use ANOVA rather than multiple t-tests because of the issue of familywise error. If you can do an ANOVA in this case, you should.
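If it helps to see why familywise error matters, here is a rough simulation sketch (Python, purely illustrative, assuming normal data and no true effect anywhere): with three separate tests at alpha = .05, you get at least one false "significant" result far more often than 5% of the time.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 5000, 20, 0.05
false_alarms = 0
for _ in range(n_sims):
    # three null comparisons, n = 20 per arm, no real effect anywhere
    ps = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue for _ in range(3)]
    if min(ps) < alpha:          # at least one test "significant" by chance alone
        false_alarms += 1
print(false_alarms / n_sims)     # roughly 1 - 0.95**3, i.e. about 0.14, not 0.05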

Regardless, if you found an effect in the ANOVA, it likely means there is one. If I were you, in my write-up I would simply say that it is better to use ANOVA than multiple t-tests (you can find this covered in nearly any text that addresses ANOVA) and not even mention the t-tests (20 cases is very little data to draw conclusions from, even without familywise error issues).

Your next step should be to do something like a post-hoc test (Tukey HSD is recommended by many) and find out what specifically differed from what (I assume you only did an omnibus F test).
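If your data are in long format, a Tukey HSD follow-up looks something like this (a Python sketch with statsmodels and made-up data; whatever package you use will have an equivalent):

import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# hypothetical long-format data: one outcome value per subject, 20 subjects per group
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 20),
    "score": rng.normal(10, 2, 60),
})
print(pairwise_tukeyhsd(endog=df["score"], groups=df["group"], alpha=0.05).summary())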

If you can find more data, that would be a very good thing.
 
#4
Thank you for your post.

My understanding (please correct me if I'm wrong) is that I should use an ANOVA rather than multiple INDEPENDENT-samples t-tests to establish variation between the 3 groups, but that a DEPENDENT t-test is separate from that rule, as it is not comparing groups against each other but the before-and-after effect of a given treatment approach.

I am under the strong impression that this is a 2-step process, and that the steps are not interchangeable: a dependent t-test to see whether each treatment works, and then an ANOVA to see which of the 2 treatments works best. Is that not the case?

Are you saying that I should skip the DEPENDENT t-test and only use the ANOVA?

Or, since the ANOVA showed significant results, should I assume that the DEPENDENT t-tests (which showed no before/after difference) simply failed to detect a difference that does exist?

I realise that my power is low and bigger samples would be best, but we're very limited in resources and that's not going to be possible...

I have already done a Tukey HSD test on the ANOVA result in SPSS (and found a significant difference between groups B & C) and a variance test (Levene's = not significant) for the comparison between the 3 groups, but I don't know what to make of the fact that none of the DEPENDENT t-tests are significant.
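(For completeness, the Levene's test corresponds to something like this sketch in Python with scipy, with made-up data again:)

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# hypothetical scores for groups A, B and C (20 each)
a, b, c = rng.normal(10, 2, 20), rng.normal(12, 2, 20), rng.normal(14, 2, 20)
res = stats.levene(a, b, c)        # homogeneity of variances across the three groups
print(res.statistic, res.pvalue)   # non-significant p = no evidence the variances differ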

Sorry if I'm repeating myself, but I'm very confused and my study field is a world away from stats...

thanks for your help
 

CB

Super Moderator
#5
My understanding (please correct me if I'm wrong) is that I should use an ANOVA rather than multiple INDEPENDENT-samples t-tests to establish variation between the 3 groups, but that a DEPENDENT t-test is separate from that rule, as it is not comparing groups against each other but the before-and-after effect of a given treatment approach.
The two situations you mention are not really that different. Whether we are dealing with dependent or independent samples, it's usually still better to start with an ANOVA rather than multiple t-tests. However, the big caveat is that you need to be using a repeated-measures ANOVA, not a one-way ANOVA. In fact, in your case there would be both a between-subjects factor (group) and a within-subjects factor (time). You can then follow up the RM ANOVA with post-hoc tests to see specifically where the differences lie. It could just be that there are differences in mean DV level between the three groups, but no time effect.
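If you go down that route, a mixed-design (between + within) ANOVA looks roughly like this in code (a Python sketch using the pingouin package and invented column names; SPSS and other packages have their own equivalents):

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
# hypothetical long-format data: 60 subjects (20 per group) x 2 time points
df = pd.DataFrame({
    "id": np.repeat(np.arange(60), 2),
    "group": np.repeat(["A", "B", "C"], 40),    # between-subjects factor
    "time": np.tile(["before", "after"], 60),   # within-subjects factor
    "score": rng.normal(10, 2, 120),
})
# mixed ANOVA: group (between), time (within), and their interaction
print(pg.mixed_anova(data=df, dv="score", within="time", between="group", subject="id"))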
 
#6
Thanks for that, I'll look into it

Now the next stupid question(!):

If I am applying a dependent t-test to group A's before and after data, and then doing the same for groups B & C, but without crossing groups over (i.e. no A vs B comparison):

Am I doing repeated t-tests?

I thought that since I'm only applying one t-test to one set of data for each group, that would be OK (since I'm not applying a t-test to the same data twice)?


It's slowly becoming clearer, although now I'm starting to wonder about the use of t-tests full stop!... but that's for another time! Thanks.
 

CB

Super Moderator
#7
Yes, you are doing repeated t-tests in that scenario. Whether that implies you need to apply a correction for the familywise Type I error rate is not an easy question to answer.
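If you did decide to correct, the mechanics are simple: adjust the three p-values with something like Bonferroni or Holm. A rough Python sketch with statsmodels (the p-values here are made up):

from statsmodels.stats.multitest import multipletests
# hypothetical p-values from the three paired t-tests (groups A, B and C)
pvals = [0.08, 0.04, 0.20]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(reject, p_adj)   # the adjusted p-values control the familywise Type I error rate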
 

noetsi

No cake for spunky
#8
To me, the safest approach if you are concerned about familywise error is simply to do a post-hoc test like Tukey's, which automatically corrects for it. Then, whether inflation occurs or not, it is corrected for. Having said that, I am sure that Tukey's HSD or a similar test is not a magic bullet that works all the time, but it's better than nothing if you don't know whether familywise error is an issue or not.