I am running two regressions, each with the same independent variables but with two different dependent variables.
Y = b1 + b2*X + b3*C (1)
Z = b1 + b2*X + b3*C (2)
I need to find out whether the difference between the coefficients on X in the two regressions is statistically significant. Is there any test for that?
Many thanks.
Mike
What? Why do you want to do this? Are Y and Z related in any way?
Yes, in a way. Y and Z both represent a measure of consensus earnings forecasts, but each one is measured differently. I have found that the coefficient on X is a bit higher in regression (1). I need to know whether the difference in the coefficient on X between (1) and (2) is significant.
What are C(1) and C(2) representing? Also, since you are specifying two separate models in your original post, it would be a good idea to give the parameters different names, i.e.
Y = b0 + b1*X + b2*C(1)
Z = b3 + b4*X + b5*C(2)
Unless you're saying that b0 = b3, b1 = b4 (which is apparently what you're trying to test), and b2 = b5.
My bad, (1) and (2) should be model (1) and model (2). It has nothing to do with C per se. I am assuming that b0 = b3, etc. What I am trying to test is whether the coefficient on X (in my case it's a measure of quality) is "really" different between models (1) and (2), or whether they are basically the same. In other words, is X associated more with Y or with Z?
I have looked for a test of this sort and have found none so far. The best one I have got is here http://www.psy.surrey.ac.uk/cfs/p5.htm but it seems that it's for different groups rather than different dependent variables.
Wouldn't it be better to use a structural equation model then? Then you can bring in both Y and Z as dependent variables in one model instead of two separate models...
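A full joint model may not even be needed here. A minimal sketch of one possible shortcut, on simulated data (none of these numbers come from the thread): when both equations share exactly the same regressors and observations, testing whether the X coefficient differs between (1) and (2) is equivalent to regressing the paired difference D = Y - Z on the same X and C and testing whether its X coefficient is zero.

```python
# Sketch, assuming models (1) and (2) are fitted to the same observations.
# All data below are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
c = rng.normal(size=n)
y = 1.0 + 0.8 * x + 0.5 * c + rng.normal(size=n)   # model (1): true slope 0.8
z = 1.0 + 0.5 * x + 0.5 * c + rng.normal(size=n)   # model (2): true slope 0.5

d = y - z                                  # paired difference of the two DVs
X = np.column_stack([np.ones(n), x, c])    # same design matrix as (1) and (2)

beta, _, _, _ = np.linalg.lstsq(X, d, rcond=None)
resid = d - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[1] / np.sqrt(cov[1, 1])      # t-statistic for the X coefficient
p_val = 2 * stats.t.sf(abs(t_stat), dof)   # two-sided p-value
print(f"b2(Y) - b2(Z) estimate: {beta[1]:.3f}, t = {t_stat:.2f}, p = {p_val:.4f}")
```

The estimated X coefficient of the difference regression is exactly b2(Y) - b2(Z), and the usual t-test on it tests the equality of the two slopes while accounting for the correlation between Y and Z within each observation.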
I would suggest multivariate regression, i.e. the dependent variable is multivariate (bivariate here). The parameter estimates will be exactly the same, but the standard errors will differ. Then you can ask for the covariances between all the parameters, and assuming normality of the parameters you can work out theoretically the probability that one coefficient exceeds the other. But you can also bootstrap (simple or naive: just resample the rows each time) and see how many times one coefficient comes out larger than the other; that is an estimate of the probability.
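The bootstrap part of the suggestion above can be sketched in a few lines. This is a hedged illustration on simulated data (the slopes, sample size, and variable names are my own assumptions, not from the thread): resample whole rows with replacement, refit both regressions on each resample, and count how often the Y coefficient on X beats the Z one.

```python
# Naive (row-resampling) bootstrap estimate of P(b2_Y > b2_Z).
# Simulated data for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
c = rng.normal(size=n)
y = 1.0 + 0.9 * x + 0.5 * c + rng.normal(size=n)   # model (1)
z = 1.0 + 0.4 * x + 0.5 * c + rng.normal(size=n)   # model (2)
X = np.column_stack([np.ones(n), x, c])

def coef_on_x(design, v):
    """OLS slope on x (column 1 of the design matrix)."""
    return np.linalg.lstsq(design, v, rcond=None)[0][1]

B = 2000
wins = 0
for _ in range(B):
    idx = rng.integers(0, n, size=n)   # resample rows, keeping (x, c, y, z) together
    Xb = X[idx]
    if coef_on_x(Xb, y[idx]) > coef_on_x(Xb, z[idx]):
        wins += 1

print(f"Estimated P(b2_Y > b2_Z) ~ {wins / B:.3f}")
```

Resampling entire rows preserves the within-observation correlation between Y and Z, which is what makes the comparison of the two refitted coefficients meaningful.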
Thinking about your question, I don't think it would be valid to test whether the coefficients are significantly different between the models. Model 1 gives the average change in Y for a unit change in X, and model 2 gives the average change in Z for a unit change in X. Let me give you an example.
How much food I eat is correlated with how much weight I gain. How much food I eat is also correlated with my waist size. However, if I were to set up two models with the amount of food eaten as my independent variable, then 1) the units and magnitudes of the parameter estimates would be completely different, and 2) the inference I would be drawing from comparing them wouldn't make sense.
It could be that I'm just not understanding what you're trying to do clearly. I do welcome more clarity. Perhaps I might be able to help.
I guess it doesn't have any practical benefit for you, but you can compare the models using R-squared and the Akaike, Schwarz, and Hannan-Quinn criteria; you can do it in EViews. The source of the difference is the sample. You can also compare the SSEs.
You would be right if waist size and weight were truly different quantities. But in my case, X is the same in models (1) and (2). What changes is the dependent variable, Y versus Z. And Y and Z are not totally different: they measure the same thing, but each one is based on a different set of distributions. To be more precise, Y is the median of earnings forecasts from accurate analysts and Z is the median of earnings forecasts from less accurate analysts. What I have found is that the coefficient on X is higher in magnitude for Y than for Z. I need to find out whether that higher magnitude is significant.
It is possible that I don't need any test, since X is significant in both models, but I don't know that, and that's why I am asking. Anyhow, here is what I have done; please let me know if it makes sense. I am running my analysis on panel data, so I have performed a yearly regression for each of 19 years. I counted the number of times the coefficient on X is higher for model (1) than for model (2). I got 12/19. Can I use that to support my results?
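The 12-out-of-19 count can at least be given an exact p-value with a sign (binomial) test. This is my own suggestion, not something proposed in the thread, and it assumes the 19 yearly comparisons are independent; under the null that neither coefficient tends to be larger, each year is a fair coin flip, so the count is Binomial(19, 0.5).

```python
# Exact sign test for 12 "wins" out of 19 yearly regressions.
from math import comb

n_years, wins = 19, 12
# One-sided: probability of seeing 12 or more wins by chance alone.
p_one_sided = sum(comb(n_years, k) for k in range(wins, n_years + 1)) / 2 ** n_years
p_two_sided = min(1.0, 2 * p_one_sided)
print(f"one-sided p = {p_one_sided:.3f}, two-sided p = {p_two_sided:.3f}")
```

Note that this test ignores the magnitudes of the yearly differences, and 12/19 works out to a one-sided p of about 0.18, so the count by itself would not reject the null at conventional levels.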
I think you misunderstood my example. Weight and waist size were the dependent variables (Y and Z). How much you eat is the independent variable (X) that is in both models. X would be interpreted differently in each model.
I think what you're saying is that Y and Z are the same quantity measured through different methods. Let me think about this. I'll get back to you later if you haven't figured it out.
I want to second Masteras's idea to use MANOVA. It's a great idea for this type of thing; it is almost as if it was invented for precisely this type of problem!
Sorry for the thread-o-mancy; I just came across this question via Google. The problem here is one that is often poorly addressed in statistics. There are two types of regression comparisons: nested (2 or more groups, same regression) and non-nested (1 group, 2 or more regressions). The OP is asking about non-nested regression comparisons. For this you need either Stata or SAS, and you need to use either the Vuong or the Clarke test. Check the papers by Clarke (2001, 2003) for more. It's relatively new stuff, so that's probably why it hasn't slipped into common usage yet.
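For readers who land here the same way: a minimal sketch of the Vuong test mentioned above, in its classical setting of two non-nested OLS models for the SAME dependent variable (it does not directly cover two different dependent variables, which is part of why the OP's problem is tricky). Everything here is simulated for illustration.

```python
# Vuong-style z-test comparing two non-nested OLS models of the same y.
# Simulated data; the setup deliberately favours model 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.7 * x1 + rng.normal(size=n)      # truth uses x1, not x2

def pointwise_loglik(design, y):
    """Per-observation Gaussian log-likelihood at the OLS fit."""
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    resid = y - design @ beta
    sigma2 = resid @ resid / len(y)          # ML variance estimate
    return -0.5 * np.log(2 * np.pi * sigma2) - resid ** 2 / (2 * sigma2)

X1 = np.column_stack([np.ones(n), x1])       # model 1: y ~ x1
X2 = np.column_stack([np.ones(n), x2])       # model 2: y ~ x2
m = pointwise_loglik(X1, y) - pointwise_loglik(X2, y)
z = np.sqrt(n) * m.mean() / m.std(ddof=1)    # Vuong z-statistic
p = 2 * stats.norm.sf(abs(z))
print(f"Vuong z = {z:.2f}, p = {p:.4f}  (positive z favours model 1)")
```

The statistic is just a standardized mean of the per-observation log-likelihood differences; under the null that the two models are equally close to the truth it is asymptotically standard normal.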