testing whether the role of a predictor differs between two dependent variables

#1
I have some questions about an analysis I am running. Sorry if they are very basic questions or completely off course, but some help would be greatly appreciated!

I have two dependent variables, Y1 and Y2, which are related to each other (they are two measures from the same sample). I have a regression model with three independent predictors, A, B and C, and all their two- and three-way interactions entered. I would like to run this regression on each of my two dependent variables separately. I would then like to take the regression coefficient for the A*B interaction term (which is the one of interest) from each model and compare them (by dividing the difference between betas by the square root of the sum of the squared SE’s), to show that the A*B term predicts significantly more of the variance of dependent variable Y1 than it does of dependent variable Y2.
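In symbols, the comparison I have in mind is

\[
z = \frac{\hat{\beta}_{A \times B}^{(Y_1)} - \hat{\beta}_{A \times B}^{(Y_2)}}{\sqrt{SE_{Y_1}^{2} + SE_{Y_2}^{2}}}
\]

where each beta and standard error for the A*B term comes from its own model.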

My questions are:

a) Is running two identical regressions on Y1 and Y2 an acceptable method of comparing the effect of a specific term on Y1 and Y2?

b) The overall model (and the A*B term) for Y1 is significant, whereas the overall model (and the A*B term) for Y2 is not (which follows our predictions). When comparing the two A*B regression coefficients, do I take them from the two full regressions with all the A, B, C and interaction terms included? Or can I reduce the model for Y1 by removing all the C terms (given that C is not significant, and neither are its interactions) and use the reduced (still overall significant) model to provide my A*B regression coefficient?

c) And finally, if I do take the A*B regression coefficient from the reduced model for Y1, do I also reduce the model for Y2 so I can compare regression coefficients from identical models?

Many thanks!
 

jpkelley

TS Contributor
#2
My answers to your questions:
(a) The regressions don't even need to be exactly the same as far as model structure goes. As long as the A + B + A*B terms are in each model, it doesn't matter what other terms are included. This relates to question (b).

(b) Most people would try some model simplification process (backwards selection, AIC or BIC model selection, etc.), and your two separate regression equations might well end up with different forms. You might want to use an AIC-based or backwards-selection approach for the Y1 and Y2 models. Again, just make sure the A + B + A*B terms stay in all models (i.e. you don't simplify past that point).

(c) No need to make the model parameterization equivalent. You can compare the coefficients between models.
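If it helps, here is a rough Python/statsmodels sketch of the kind of comparison being discussed. The data frame below is simulated placeholder data; the column names Y1, Y2, A, B and C just mirror the variables in this thread.

# Rough sketch only: simulated placeholder data standing in for the real variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"A": rng.normal(size=n),
                   "B": rng.normal(size=n),
                   "C": rng.normal(size=n)})
df["Y1"] = df["A"] + df["B"] + 0.5 * df["A"] * df["B"] + rng.normal(size=n)
df["Y2"] = df["A"] + df["B"] + rng.normal(size=n)

# Fit the same full model (A, B, C and all their interactions) to each outcome.
m1 = smf.ols("Y1 ~ A * B * C", data=df).fit()
m2 = smf.ols("Y2 ~ A * B * C", data=df).fit()

# Pull the A:B coefficient and its standard error from each model,
# then form the z-type statistic described in post #1.
b1, se1 = m1.params["A:B"], m1.bse["A:B"]
b2, se2 = m2.params["A:B"], m2.bse["A:B"]
z = (b1 - b2) / np.sqrt(se1**2 + se2**2)
print("z for the difference in A:B coefficients:", round(z, 2))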

I hope this helps.
 

jpkelley

TS Contributor
#4
Yes, jrai is correct. My previous response assumed you had already tested for multicollinearity. There are mixed opinions about this, however; that is, the type of correlation (positive or negative) should loosely dictate whether you can reasonably eliminate one of the correlated terms before beginning a model selection procedure. Generally, if there is positive correlation between terms, it's reasonable to eliminate one of them.
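As a quick illustration of what I mean by checking the correlations, here is a rough Python sketch (again with simulated placeholder data, where C is constructed to correlate positively with A*B, as described in this thread):

# Rough sketch only: simulated placeholder data for a collinearity check.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"A": rng.normal(size=n), "B": rng.normal(size=n)})
df["C"] = 0.7 * df["A"] * df["B"] + rng.normal(size=n)  # C built to correlate with A*B
df["AxB"] = df["A"] * df["B"]

# Pairwise correlations among the candidate terms; a clear positive correlation
# between C and AxB is the kind of redundancy discussed above.
print(df[["A", "B", "C", "AxB"]].corr().round(2))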
 
#5
Thanks to both of you for your great help. Yes, C is positively correlated with A*B (and with A), and it does not contribute significantly to either Y1 or Y2 in terms of F-change statistics when added to the models, so I guess it is OK to remove C from both models? I take this to mean that A*B is already explaining most of the variance that C could explain, and doing so ‘better’ than C could, so C is redundant... is this roughly the right way of thinking about it? Removing C also leads to an improvement in my AIC scores for both models, so I guess that justifies its removal. However, I am new to model selection using AIC, so I need to read up on it!

Given that my model is fairly simple anyway (A, B, C, A*B, A*C, B*C), removing C leaves just A, B and A*B in both the Y1 and Y2 models, which is the minimum needed to test my hypothesis. I have one further related question: the A, B and A*B model gives the best AIC score for Y1, but the best AIC score for Y2 comes from a model with A alone, excluding B and the interaction term. However, given that I want to show statistically that A*B is a significant predictor of Y1 but not of Y2, I guess the fact that the Y2 model could be reduced further is irrelevant (and none of the models for Y2, however much they are reduced, are significant at p < .05). Does this make sense?
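For anyone following along, here is a rough Python/statsmodels sketch of the kind of AIC comparison I mean (simulated placeholder data again; the real Y1, Y2, A, B and C would go in their place):

# Rough sketch only: compare AIC for the full and reduced models on each outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({"A": rng.normal(size=n),
                   "B": rng.normal(size=n),
                   "C": rng.normal(size=n)})
df["Y1"] = df["A"] + df["B"] + 0.5 * df["A"] * df["B"] + rng.normal(size=n)
df["Y2"] = df["A"] + rng.normal(size=n)

for y in ["Y1", "Y2"]:
    full = smf.ols(f"{y} ~ A * B * C", data=df).fit()   # all two- and three-way terms
    reduced = smf.ols(f"{y} ~ A * B", data=df).fit()    # keeps only A, B and A:B
    print(y, "full AIC:", round(full.aic, 1), "reduced AIC:", round(reduced.aic, 1))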

Many thanks for your help; it is much appreciated. I am learning a lot!
 
#8
[SOLVED] testing whether the role of a predictor differs between two dependent variables

Thanks everyone for this. Very clear and helpful answers!