# Thread: significant interaction but non-significant simple effects?

1. ## significant interaction but non-significant simple effects?

Hello everyone,

I conducted a 2x2 mixed-design ANOVA. The between-subjects factor was group (high vs. low), and the within-subjects factor was word valence (threat vs. neutral). I have a significant interaction between group and valence (p=.048), but when I try to break down the interaction using one-way ANOVAs there are no significant differences at any level...

1) High: threat vs neutral p=.130
2) Low: threat vs neutral p=.203
3) Threat: high vs low p=.914
4) Neutral: high vs low p=.358

It's numbers 1 and 4 that I would have expected to be significant looking at my graph.

I didn't know this was possible and I don't know how to interpret my results. Is it because my interaction is only just significant? The only paper I have been able to find that briefly discusses this situation says that the null hypothesis isn't rejected, but shouldn't be accepted either.

Thanks so much for any help

2. The interaction F test in the ANOVA and the simple effects tests are actually testing two different things.

The interaction tests whether the difference in means on one factor in one condition of the other factor is unequal to that same difference in the other condition. I.e., is the difference between high and low the same during threat as during neutral?

Simple effects tell us whether the difference in means in each condition is unequal to 0. I.e., is the difference between high and low during threat equal to 0? It's not the same hypothesis.

Especially in something like a crossover interaction, the difference in means can be positive in one condition and negative in the other. The two differences might be significantly different from each other even if neither is different from 0.

Or it could simply be that your interaction is barely significant.

Take a look at your pattern of means. If you have a crossover, that's probably it.
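To make the crossover point concrete, here is a toy arithmetic sketch in Python. The cell means are made up for illustration (not spiral's actual data), but they show how the interaction contrast can be twice the size of either simple effect:

```python
# Hypothetical cell means showing a 2x2 crossover pattern (illustrative only).
high_threat, high_neutral = 5.2, 4.8
low_threat, low_neutral = 4.7, 5.1

# Simple effects: is each within-group difference different from 0?
d_high = high_threat - high_neutral   # +0.4, small, may not reach significance
d_low = low_threat - low_neutral      # -0.4, small, may not reach significance

# Interaction contrast: are the two differences different from EACH OTHER?
interaction = d_high - d_low          # 0.8, twice the size of either simple effect

print(round(d_high, 1), round(d_low, 1), round(interaction, 1))  # 0.4 -0.4 0.8
```

Because the simple effects point in opposite directions, they partially cancel against zero individually, yet their difference (what the interaction F tests) is comparatively large.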

Karen

3. I just noticed something in your post, btw.

In order to do simple effects without raising Type I error (which doesn't really seem to be a problem at this point) in post-hoc tests, you need to replace the MSE of the one-way ANOVAs with the MSE of the original ANOVA. See Geoffrey Keppel's book for more info.
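The effect of swapping in the omnibus error term can be sketched numerically. All the mean squares below are made-up illustrative values, not from anyone's data in this thread; the point is simply that the same simple-effect mean square divided by a smaller, better-estimated pooled MSE yields a larger F:

```python
# Hypothetical mean squares (illustrative only) for one simple-effect test.
ms_simple = 2.10      # mean square for the simple effect (1 numerator df)
mse_oneway = 1.40     # error term from a one-way ANOVA on the subset (fewer df)
mse_omnibus = 1.05    # pooled error term from the full 2x2 ANOVA (more error df)

f_oneway = ms_simple / mse_oneway     # F using the subset's own error term
f_pooled = ms_simple / mse_omnibus    # F using the pooled error term

print(round(f_oneway, 2), round(f_pooled, 2))  # 1.5 2.0
```

The pooled test also has more error degrees of freedom, which lowers the critical F value; note this pooling assumes homogeneity of variance across all cells of the design.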

Karen

4. I'm unsure how I ought to explain these results... am I right in thinking I can say I have a significant interaction but can't conclude where the significant difference is?

And thank you for the information about the MSE

5. Originally Posted by spiral

I'm unsure how I ought to explain these results... am I right in thinking I can say I have a significant interaction but can't conclude where the significant difference is?

And thank you for the information about the MSE

You can say you have a significant interaction and you do know where the difference is, since it's just a 2x2. If it were even a 2x3, you'd have problems. But you have only two differences in means, and you know, at p=.048, that they are significantly different from each other.

Karen

6. I see, thanks again

7. Originally Posted by TheAnalysisFactor
I just noticed something in your post, btw.

In order to do simple effects without raising Type I error (which doesn't really seem to be a problem at this point) in post-hoc tests, you need to replace the MSE of the one-way ANOVAs with the MSE of the original ANOVA. See Geoffrey Keppel's book for more info.

Karen
I just happened to notice your post and I have to ask (although it's not directly related to the thread): do you think that using the overall error term reduces the risk of a Type I error? I would say that you use the overall error term in order to get a better estimate with more degrees of freedom, and the upshot is that you increase power. If you want to reduce the risk of a Type I error you should use corrected alphas (or orthogonal contrasts).

8. Yup, you're right. I misspoke (or mis-thought?).

Simple effects tests are not multiple comparison tests, and therefore don't control Type I error. Using the overall MSE increases power, but I believe the main reason to use it is to increase the precision of the variance estimate.

I found a nice article about it with more info, although I could only access part of it online:

Geoffrey Keppel (2001). Identifying the Appropriate MSE Term for an F Test.
Journal of Consumer Psychology, Vol. 10, No. 1/2, Methodological and Statistical Concerns of the Experimental Behavioral Researcher (2001), pp. 11-15. http://www.jstor.org/pss/1480449

9. Hi! I have the exact same situation as Spiral: a sig. interaction from a 2x2 ANOVA with crossover. I'm also confused about how to interpret this result. So I want to make sure I understand your answer. You say:

Originally Posted by TheAnalysisFactor
You can say you have a significant interaction and you do know where the difference is, since it's just a 2x2. If it were even a 2x3, you'd have problems. But you have only two differences in means and you know at p=.048, that they are significantly different.
In other words, you can say that the two differences in means are statistically different from each other? You cannot say that either of the two differences is itself significant, though, since that is what the simple effects tests check, right?

In other words, if you have:

A x B = sig (the interaction)
A1 - B1 = ns (the simple effect tests)
A2 - B2 = ns
A1 - A2 = ns
B1 - B2 = ns

Then in interpreting it you can simply say that A1 - B1 is significantly different from A2 - B2? Or am I confused? That seems not very useful. And since simple effects were not significant, I can't say, for example, that, "A switched from being more prevalent than B to being less prevalent," can I?

Many thanks for any help! Like Spiral, I'm stuck on this point...

10. Hmmm, in reading up on this some more, I came across an explanation in a book and an article posted by SPSS (ftp://ftp.spss.com/pub/spss/statisti...s/testsnoa.txt) which say similar things. As I understand it (though I am quite possibly mistaken!), it is a logical impossibility to have a significant interaction but no significant simple effects. However, since the tests use only statistical likelihoods, it is nonetheless possible to get such logically contradictory results, and they are due to sampling error. It's said in similar terms here (excerpt from that article):

In the case of a significant omnibus F-statistic and nonsignificant pairwise comparisons, some people have proposed the explanation that while no two means are different, some more complicated contrast among the means is nonzero, leading to the significant omnibus F. Such an explanation mistakes the mechanics of the methodology of the F-statistic for the hypothesis being tested. That is, while the F-statistic can be constructed as a function of the maximal single degree of freedom contrast computable from the sample data, the hypothesis tested is still that the population means are all equal, and the contrast value can only be nonzero in the population if at least one population mean is different from the others.

Does this mean that I should conclude that my significant interaction is a result of sampling error, or that the insignificant simple effect comparison results are due to sampling error? Or do I just present the result along with a large heap of salt to take grains from whilst interpreting?

This would seem to be a different course than what was advised for Spiral...

Thanks again!

11. Originally Posted by magpie
Does this mean that I should conclude that my significant interaction is a result of sampling error, or that the insignificant simple effect comparison results are due to sampling error? Or do I just present the result along with a large heap of salt to take grains from whilst interpreting?

This would seem to be a different course than what was advised for Spiral...

Thanks again!
Hi Magpie,

No, you shouldn't. That article is not referring to an interaction and simple effects (at least the excerpt isn't). It's referring to multiple comparison tests, like a Tukey test, that you do after a one-way ANOVA or after a significant main effect in a more complicated model.

It is NOT a logical impossibility to have a significant interaction but no significant simple effects. They are testing two different, but related, hypotheses.

You were correct in your first post that the interaction tests whether A1 - B1 = A2 - B2. The simple effects test whether A1 - B1 = 0 and A2 - B2 = 0 (the null hypotheses) or not.

If you have a crossover interaction, A1 - B1 can be slightly positive and A2 - B2 slightly negative. While neither is significantly different from 0, they can be significantly different from each other.

And for many research questions it is highly useful to know whether the difference in means in one condition equals the difference in means in the other. It may be true that the interaction isn't testing a hypothesis you're interested in, but in many studies, all the interesting effects are in the interactions.

Karen

12. Hi Karen,

Thanks for the quick and helpful reply! For some reason I found it quite difficult to get my head around this particular issue, but I see now that you are right.

It's an odd situation, statistically, but probably a result of a lack of power in my design (probably due to small sample size). Thanks again for the help!

Cheers!

13. ## Same Problem Encountered - Any References?

Hi Karen. Me too: I got the same results as Spiral and Magpie (a significant interaction while the simple-effect t-tests are all non-significant). As you explained, my interaction is a crossover, and it is also barely significant (p=0.0475). I was wondering if there are specific references that explain this case, so that I can cite them.
Thanks.
Alex2il.

14. Hi Alex2il,

Sure. There is a very nice, detailed explanation of simple effects in Geoffrey Keppel's book "Design and Analysis: A Researcher's Handbook." In the edition I have, Ch 11 is "Detailed Analyses of Main Effects and Simple Effects."

Karen
