# 2x3 ANOVA: interaction is marginally significant (p = .08), do I proceed to post hoc or increase power?!

#### Jake

Re: 2x3 ANOVA: interaction is marginally significant (p = .08), do I proceed to post hoc or increase power

do you mean here that Z is the usual multiplicative product of X1 and X2 or is Z just multiplying X2?
Z is X2. So what I mean is this:
```r
lm(y ~ x1 + x1:x2)
```

There is disagreement on this point (note that normally you need two main effects to have an interaction, which I don't see in your example, but I assume is there). Some say it does not really matter whether the main effects are significant or not: don't interpret them in the presence of a significant interaction (be that regression or ANOVA). Others say that you can interpret them if the interaction is not disordinal, that is, when the levels of the categorical variable don't shift their relative order; they simply are not parallel to each other at different levels of the other IV. For instance, if females always have higher results on the DV than males (but the gap between them varies at different levels of another IV), you can interpret the main effect despite the interaction.

Others say just do simple effects.
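The ordinal case described above can be made concrete with a tiny sketch (all cell means below are invented for illustration): the gaps between groups differ, so there is an interaction, but they never change sign, so the ordering is preserved.

```python
# invented cell means for an "ordinal" interaction: females outscore
# males at every level of the other IV, but the gap is not constant,
# so the lines are non-parallel without ever crossing
means = {
    ("female", "low"): 10.0, ("female", "high"): 16.0,
    ("male",   "low"):  8.0, ("male",   "high"): 11.0,
}

gap_low  = means[("female", "low")]  - means[("male", "low")]   # 2.0
gap_high = means[("female", "high")] - means[("male", "high")]  # 5.0

# unequal gaps -> an interaction; gaps with the same sign -> ordinal,
# so (on this view) the main effect of sex is still interpretable
print(gap_low, gap_high)  # 2.0 5.0
```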
I think we're not quite on the same page here but I have a few thoughts anyway.

In my view, there is simply no such thing as a "main effect" (in what I view as the proper sense of the phrase) of a particular predictor when you have relaxed the assumption of additivity by including a higher-order product of that predictor. In that case, there are only the various simple effects of the predictor at different levels of either the other predictors or of itself, nothing more and nothing less. It doesn't make any difference whether the coefficient for the product is large or small, significant or nonsignificant, "ordinal" or "disordinal," etc. Now, that doesn't mean that you can't or shouldn't interpret those effects. For instance, it can still be sensible to talk about the simple effect of a predictor at the average level of another predictor, or at the average level of itself. But we shouldn't call that a "main effect" in the strict sense, because the real answer to the question "what is the predictor's effect?" is just "it depends." Phrased as is, that is only a well-formed question if we assume that the effect of the variable is strictly additive, an assumption we have not made if we have included a higher-order product term.
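The "it depends" point can be shown in a minimal sketch (pure Python; the coefficient values are made up): once a product term is in the model, the slope of x1 is a function of x2, not a single number.

```python
# made-up coefficients for y = b0 + b1*x1 + b2*x2 + b3*x1*x2
b0, b1, b2, b3 = 1.0, 2.0, 0.5, 1.5

def slope_of_x1(x2):
    # simple effect (slope) of x1 at a given level of x2
    return b1 + b3 * x2

# there is no single "effect of x1": it changes with x2
print(slope_of_x1(-1.0))  # 0.5
print(slope_of_x1(0.0))   # 2.0 (the simple effect at x2 = 0, e.g. the mean of a centered x2)
print(slope_of_x1(1.0))   # 3.5
```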

Edit: Dason, I didn't have one case in mind and it doesn't matter much to me which one you want to talk about. Whichever you think is easier to explain. Or both if you're feeling ambitious.

#### spunky

##### Can't make spagetti
Re: 2x3 ANOVA: interaction is marginally significant (p = .08), do I proceed to post hoc or increase power

Z is X2. So what I mean is this:
```r
lm(y ~ x1 + x1:x2)
```
thanks for clarifying the notation. the Z kind of threw me off, so i was a bit confused there. anyways, so... i'm not particularly sure whether this will answer your question (i was thinking about it on my way home) but i figured i would at least try something out and see what happens. i've always kinda liked interactions in the general linear model.

let us compare two easy models here: the "full model" $$y= b_{0}+b_{1}X_{1}+b_{2}X_{2}+b_{3}X_{1}X_{2}$$ and a "restricted" one where i only have $$y= b_{0}+b_{1}X_{1}+b_{3}X_{1}X_{2}$$. let me focus first on the full model and throw a little calculus at it so i can see better how things change in the regression surface, using partial derivatives. if i take the first partial derivative with respect to X1 i get $$\frac{\partial y}{\partial X_{1}}=b_{1}+b_{3}X_{2}$$ and with respect to X2 i get something pretty similar, $$\frac{\partial y}{\partial X_{2}}=b_{2}+b_{3}X_{1}$$. this is something that kinda makes sense intuitively (a lot of things here are not gonna be very rigorous and i'm sorry for that, mind is not working properly at 1am anymore...) when you think about an interaction: the change/partial derivative across X1 depends linearly both on the coefficient of X1 AND on the values of X2. the same can be said about X2, and this works generally well with the definition of how an interaction is supposed to act, right? simple effects don't give us the whole story; we need a little bit of that other variable to fully account for the change.
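the two partial derivatives of the full model can be checked numerically with central differences; a quick sketch (coefficients and evaluation point are arbitrary):

```python
# arbitrary coefficients for the full model y = b0 + b1*x1 + b2*x2 + b3*x1*x2
b0, b1, b2, b3 = 1.0, 2.0, -0.5, 0.8

def y(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b3 * x1 * x2

# central differences at an arbitrary point should match the analytic
# derivatives dy/dx1 = b1 + b3*x2 and dy/dx2 = b2 + b3*x1
h = 1e-6
x1, x2 = 1.3, -0.7
dy_dx1 = (y(x1 + h, x2) - y(x1 - h, x2)) / (2 * h)
dy_dx2 = (y(x1, x2 + h) - y(x1, x2 - h)) / (2 * h)

print(abs(dy_dx1 - (b1 + b3 * x2)) < 1e-6)  # True
print(abs(dy_dx2 - (b2 + b3 * x1)) < 1e-6)  # True
```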

now, let's focus on the restricted model. if i take the partial derivative with respect to X1 i again get $$\frac{\partial y}{\partial X_{1}}=b_{1}+b_{3}X_{2}$$ BUT when i take my partial derivative with respect to X2 i only end up with $$\frac{\partial y}{\partial X_{2}}=b_{3}X_{1}$$. by not allowing a b2 coefficient to account for simple or main effects of X2, it's almost as if you're forcing an interaction into the model, regardless of whether it's warranted or not. the only way in which any change can happen through X2 is via an interaction, even if such an interaction may or may not really be present... which sort of concerns me (at least) because i know (i can provide the lit later on for anyone who's interested) that it is easier to find statistically significant main/simple effects than it is to find interactions. (again, it is also very well possible for a model to have no significant main effects and a significant interaction only, but i do remember reading somewhere in the moderated-regression lit that people complain widely about trying to find interactions and coming back empty-handed.) so you may very well find change in X2, but that change comes from X2's influence by itself, which is masked by an interaction-only model. it reminds me a little bit of the whole "regression-through-the-origin" kind of situation... why force the regression line to go through a point that may or may not even be present in your data? i dunno, in my mind the analogy kind of works here as well, in that you're restricting the model to behave in a certain way.
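this worry can be sketched with a quick simulation (numpy only; the data-generating values are invented): generate noiseless data with a real main effect of X2 and no interaction, then fit both models by least squares. the full model recovers the truth exactly, while the restricted model has no term that can represent X2's main effect, so it can never fit such data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# noiseless truth: a real main effect of x2, NO interaction (b3 = 0)
y = 1.0 + 2.0 * x1 + 0.5 * x2

ones = np.ones(n)
X_full = np.column_stack([ones, x1, x2, x1 * x2])  # y ~ x1 + x2 + x1:x2
X_restr = np.column_stack([ones, x1, x1 * x2])     # y ~ x1 + x1:x2

b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)
b_restr, *_ = np.linalg.lstsq(X_restr, y, rcond=None)

print(np.allclose(b_full, [1.0, 2.0, 0.5, 0.0]))  # True: truth recovered exactly
print(np.allclose(X_restr @ b_restr, y))          # False: x2's main effect is lost
```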

anyways, that's the \$0.02 for today before bed...