For a three-way repeated measures ANOVA: do I use simple main effects syntax, or just run t-tests in SPSS, for my follow-up analyses (pairwise comparisons)? The situation is a non-significant three-way interaction, one significant two-way interaction, and three significant main effects. I'm interested not just in the interaction but also in the pairwise comparisons. Should these post hoc tests be reported as F values or t values? Elsewhere on the net I found advice to run "four t-tests", while other places said to add syntax and run simple main effects (and report F values). Confused. Thanks!! (stats/SPSS newbie)
It would be much easier to respond if we knew a bit more about the design. Are all 3 factors repeated (so every subject is in every cell)? How many levels does each have? Which of the two factors interacted?
Thank you for your reply! (I'm totally new to stats so apologies in advance for weird wording and/or logic!)
Sorry, yes: each factor has two levels, all repeated measures (every subject in every cell). If the factors are A, B, C, then the A*C interaction was significant, and so were all three main effects. So I guessed the next step is to do either post hoc tests or simple main effects in SPSS to describe the interaction? I just read that the post hoc tests could be paired-samples t-tests with the alpha adjusted (Bonferroni), and that I could just report the p values. But somewhere else I read that I should use the Estimated Marginal Means and the Simple Main Effects syntax in SPSS, and again just report the p values? I tried both: in the t-test result, one of the comparisons had a p value of .08, so I concluded it was not significant, but when I ran the simple effects syntax the p value for that same comparison was .026. Is alpha still .05 in this latter case, and is that comparison therefore now significant? So sorry if these are silly questions!!
I wrote a more detailed reply but I got logged out and lost it, ugh. I've realised that those two different p values actually came from the two post hoc t-tests I ran; because of the way the column variables are set up, I had to make the comparisons at separate levels of B. What I want to do is compare the A1C1 cell with the A2C1 cell, and the A1C2 cell with the A2C2 cell, but I can't see where this is in the Simple Main Effects output. I can see the tables Estimates, Pairwise Comparisons, and Multivariate Tests under the heading 'A', for example, but it seems to compare only A1 with A2, and I can't see how that's different from a main effect? Confused :/
It's best to compose long replies in an editor like Notepad, so you can save them! Even without saving, an editor on your machine is likely to be more stable; typing directly into an online form risks losing the answer. (Took my own advice -- good thing I did, since I nearly lost this reply! Note: if your reply is still visible when you get logged out, select it and copy it into an editor, then paste it back.)
I gather you already understand that no followup is needed for the main effects. With only two levels, the p-value for them *is* your comparison of the two levels they represent.
If you had 3 non-repeated factors, the issue would be a bit more complicated, because you would not want to compare all the subjects in A1C1 to A2C1 with a regular unpaired t-test while some of each were in B1 and others in B2. Given a main effect of B, you would be greatly increasing the variability of subjects within the two groups. To avoid that, you would use the error term from within the individual cells in an appropriate way.
Manipulating repeated measures error terms is more difficult, but I think you have an easy solution. For every subject, take the average of his or her values in A1B1C1 and A1B2C1, and so on -- in other words, average across the two levels of B for each subject. Then do the analysis on just A and C.
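If it helps, here's a rough sketch in Python (not SPSS -- the data, subject labels, and cell values are all invented) of what "average across B for each subject" means:

```python
# Hypothetical data: one score per subject per A x B x C cell.
scores = {
    "s1": {("A1", "B1", "C1"): 10, ("A1", "B2", "C1"): 12,
           ("A1", "B1", "C2"): 8,  ("A1", "B2", "C2"): 9,
           ("A2", "B1", "C1"): 15, ("A2", "B2", "C1"): 17,
           ("A2", "B1", "C2"): 7,  ("A2", "B2", "C2"): 6},
    "s2": {("A1", "B1", "C1"): 11, ("A1", "B2", "C1"): 13,
           ("A1", "B1", "C2"): 9,  ("A1", "B2", "C2"): 10,
           ("A2", "B1", "C1"): 16, ("A2", "B2", "C1"): 18,
           ("A2", "B1", "C2"): 8,  ("A2", "B2", "C2"): 7},
}

# Collapse B: average each subject's B1 and B2 scores within each
# A x C cell, leaving a 2x2 (A x C) fully repeated design.
collapsed = {}
for subj, cells in scores.items():
    for a in ("A1", "A2"):
        for c in ("C1", "C2"):
            collapsed[(subj, a, c)] = (cells[(a, "B1", c)]
                                       + cells[(a, "B2", c)]) / 2

print(collapsed[("s1", "A1", "C1")])   # 11.0 = mean of 10 and 12
```

The collapsed table then goes into an ordinary 2x2 repeated measures ANOVA on A and C.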
I'd guess that the main effects of A and C, and the A*C interaction, will have the same F and p as they did in the overall analysis; for terms that don't directly involve B, the analysis averages over it anyway.
So now you can do paired t-tests to compare your cells of interest. Using a pooled error term from the whole design could give you more degrees of freedom, and perhaps a smaller p value, but the benefit is small unless your n is tiny. Plus, doing that makes extra assumptions. I don't know that I could guide you on *exactly* how to do that without digging up some research or trying some examples.
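For what it's worth, the paired t-test is simple enough to sketch by hand. This little Python example (numbers invented; the lists stand for one collapsed score per subject in each cell) computes the same t that SPSS's paired-samples dialogue reports:

```python
# Paired t-test by hand: t = mean(d) / (sd(d) / sqrt(n)) on the
# per-subject differences, with n - 1 degrees of freedom.
from math import sqrt
from statistics import mean, stdev

a1c1 = [11.0, 12.0, 10.5, 13.0, 11.5]   # subject means in cell A1C1
a2c1 = [16.0, 17.0, 14.5, 18.0, 15.5]   # same subjects in cell A2C1

d = [x - y for x, y in zip(a1c1, a2c1)]
n = len(d)
t = mean(d) / (stdev(d) / sqrt(n))      # stdev = sample SD (n - 1)
df = n - 1
print(t, df)   # compare |t| against the t distribution with df = n - 1
```

The p value then comes from the t distribution with those df, exactly as in the SPSS output.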
Thank you very much for your answer! It's helped me understand the SPSS output much better now! (I say 'much' although I've still a whole lot of stats concepts to learn I'm sure! not least 'pooled error term' :s) I averaged across the two levels of B as you said then ran a 2x2 RM ANOVA with A and C. A and C main effects and interaction had the same F and P just as you said. I then tried both the methods again to compare A2C1 and A1C1: Method (1) Simple Effects - using syntax in SPSS - using " /EMMEANS=TABLES(A*C) COMPARE(A) ADJ(BONFERRONI)" and method (2) instead, just using the "compare means/paired sample t-test" dialogue boxes. The output p value is exactly the same, p = .026, in both cases (happily). However, in the case of (2) wouldn't I need to adjust for multiple comparisons (e.g. bonferroni), and so in fact the alpha would be .05/2 (I'd also compare A2C2 and A1C2) and therefore .026 is not significant? In the case of (1) the SPSS output says, "mean difference, 11.137* / Sig.b = .026 / where *the mean difference is significant at the .05 level. b = Adjustment for multiple comparisons: Bonferroni. In this output would I still need to divide alpha by 2 or is SPSS saying that taking everything into account and after adjustments that the .026 is significant? Thanks v much!
Glad you are getting useful results that make sense!
I'm not sure exactly what SPSS does with the Bonferroni adjustment in the EMMEANS output when there are only 2 means. Check whether the F from that output is the square of the t value from the paired t-test, and whether the df are the same. If both match, then it didn't adjust the p value in either case.
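To make that F = t-squared check concrete, here's a quick numeric demonstration with made-up numbers: a two-level repeated factor analysed both as a paired t-test and as a one-way repeated measures ANOVA gives the same test.

```python
# Numeric check of the F = t^2 identity for a two-level repeated factor.
from math import sqrt, isclose
from statistics import mean, stdev

x = [11.0, 12.0, 10.5, 13.0, 11.5]   # level 1, one score per subject
y = [16.0, 17.0, 14.5, 18.0, 15.5]   # level 2, same subjects

# Paired t
d = [a - b for a, b in zip(x, y)]
n = len(d)
t = mean(d) / (stdev(d) / sqrt(n))

# One-way repeated measures ANOVA with 2 levels, by sums of squares
grand = mean(x + y)
ss_treat = n * ((mean(x) - grand) ** 2 + (mean(y) - grand) ** 2)
ss_subj = 2 * sum((mean([a, b]) - grand) ** 2 for a, b in zip(x, y))
ss_total = sum((s - grand) ** 2 for s in x + y)
ss_error = ss_total - ss_treat - ss_subj
F = ss_treat / (ss_error / (n - 1))   # df = 1, n - 1

print(F, t ** 2)
assert isclose(F, t ** 2)             # same test, same p value
```

So if the df agree and F equals t squared, the two outputs are the same test dressed differently.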
Do we need an adjustment? Not sure. I usually adjust when I have multiple levels of one independent variable. When following up to interpret an interaction, I don't adjust for the number of simple effect comparisons -- the point there is to pull apart the interaction, not to agonize over whether a given simple effect is significant. If one is significant and the other is not, you can say that, because the interaction itself is supporting the idea that there are differences between the simple effects.
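The mechanics of a Bonferroni correction are trivial either way you state it; a two-line sketch, using k = 2 comparisons and the p value from this thread:

```python
# Bonferroni by hand: either compare p against alpha / k, or report
# the adjusted p = p * k (capped at 1) against alpha. Equivalent views.
k = 2             # number of planned comparisons
alpha = 0.05
p_raw = 0.026     # the p value from the thread

significant = p_raw < alpha / k       # False: .026 is not below .025
p_adjusted = min(p_raw * k, 1.0)      # .052, compared against .05
print(significant, p_adjusted)
```

So with a strict Bonferroni over two comparisons, .026 would just miss; whether to apply it at all is the judgment call discussed above.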
Of course, if it is critically important theoretically to show that the A effect is significant specifically in C1 and you're going to be writing, "Future studies can restrict to C1, because that's where the effect is" then I WOULD Bonferroni correct. Some of this is judgment.
You know, in truth, sometimes I don't do simple effects at all, depending on what I want to show. If the A difference is 11 for C1 and -2 for C2, and the interaction is significant, you can state that the effect of A (in the direction A1-A2) was significantly more positive in C1 than C2. Just based on the interaction. That may answer the question.
This approach becomes essential if the A effect is 6 for C1 (p = 0.2) and -6 for C2 (p = 0.2). Neither is significant, but you can still interpret the interaction!
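In a 2x2 fully repeated design, that interaction test is itself just a paired t-test on each subject's "difference of differences" -- a sketch with invented numbers:

```python
# The A x C interaction as a paired t-test on per-subject
# difference-of-differences: (A1 - A2 at C1) - (A1 - A2 at C2).
from math import sqrt
from statistics import mean, stdev

a1c1 = [11.0, 12.0, 10.5, 13.0, 11.5]   # one score per subject per cell
a2c1 = [16.0, 17.0, 14.5, 18.0, 15.5]
a1c2 = [8.5, 9.5, 8.0, 10.0, 9.0]
a2c2 = [6.5, 7.5, 6.0, 8.0, 7.0]

dd = [(w - x) - (y - z)
      for w, x, y, z in zip(a1c1, a2c1, a1c2, a2c2)]
n = len(dd)
t = mean(dd) / (stdev(dd) / sqrt(n))
print(t, n - 1)   # a significant t licenses "the A effect differed
                  # between C1 and C2", with no simple effects needed
```

That single test is what lets you say the A effect was significantly more positive (or negative) at one level of C than the other.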