I have recently been working on a mediation analysis in which inclusion of an additional variable (say, C1) reduces the effect of a focal predictor (X).

Now, moving further, I add a new variable to the model (say, C2). Its impact on Y is negative; however, the impact of the focal regressor (X) on Y increases.

Had inclusion of C2 decreased the impact of X, I would have suspected it acts as a suppressor (considering C2 has a negative impact on Y). But I am a bit confused by C2 boosting X's impact. Note that there are no signs of multicollinearity: predictors are correlated at .3 at most, and both the VIF and the condition number are near their minimums (1.5 and 3, respectively).
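For concreteness, here is a minimal simulation that mimics the pattern I am seeing (variable names and coefficients are hypothetical, chosen only so that C2 is modestly correlated with X and has a negative unique effect on Y):

```r
set.seed(1)
n  <- 1e4
x  <- rnorm(n)
c2 <- 0.3 * x + rnorm(n)            # correlation with x around .3, as in my data
y  <- 1.0 * x - 0.8 * c2 + rnorm(n) # c2 has a negative unique effect on y

coef(lm(y ~ x))["x"]       # about 0.76: omitting c2 attenuates x's coefficient
coef(lm(y ~ x + c2))["x"]  # about 1.00: controlling for c2 restores it
```

So the boost in X's coefficient is exactly what happens when the omitted variable's effect on Y and its correlation with X have opposite-signed products.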

Your comments are greatly appreciated :)

I conducted some diagnostic tests:

(i) The Breusch-Pagan LM and the Pesaran CD tests for cross-sectional dependence (XSD) in panels indicate that there is cross-sectional dependence.

(ii) The Breusch-Godfrey/Wooldridge test for serial correlation in panel models suggests that there is serial correlation.

(iii) The Breusch-Pagan test says that heteroskedasticity is present.
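In case the exact calls matter, this is roughly how I ran the three tests (using the variables and pdata defined further below; `fe` is just shorthand for my fitted model):

```r
library(plm)     # pcdtest(), pbgtest()
library(lmtest)  # bptest()

fe <- plm(INV ~ ALB + CY + AD + EX, data = pdata, model = "within")

pcdtest(fe, test = "lm")   # (i)  Breusch-Pagan LM test for XSD
pcdtest(fe, test = "cd")   # (i)  Pesaran CD test for XSD
pbgtest(fe)                # (ii) Breusch-Godfrey/Wooldridge serial correlation test
bptest(INV ~ ALB + CY + AD + EX, data = pdata)  # (iii) Breusch-Pagan heteroskedasticity
```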

Croissant and Millo's (2008, p. 28) article "Panel Data Econometrics in R: The plm Package" states that there are no robust covariance matrix estimators that allow valid inference in the presence of cross-sectional dependence.

I want to allow for heteroskedasticity and serial correlation. Thus, I decided to use "arellano" as a robust estimator of the covariance matrix of the coefficients:

library(plm)     # plm(), vcovHC()
library(lmtest)  # coeftest()

# Random effects estimator

random <- plm(Y ~ X, data=pdata, model= "random")

summary(random)

coeftest(random, vcovHC(random, method = "arellano"))

and

# Fixed effects estimator

fixed <- plm(Y ~ X, data=pdata, model= "within")

summary(fixed)

coeftest(fixed, vcovHC(fixed, method = "arellano"))

, where

Y <- cbind(INV)

X <- cbind(ALB, CY, AD, EX)

pdata <- plm.data(mydata, index=c("Entity","Time"))
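(One thing I noticed while reading around: newer versions of plm appear to provide Driscoll-Kraay standard errors via vcovSCC(), which, if I understand correctly, are also robust to cross-sectional dependence. A sketch, assuming the `fixed` model above:)

```r
# Driscoll-Kraay covariance: robust to heteroskedasticity, serial correlation,
# and -- unlike vcovHC -- cross-sectional dependence
coeftest(fixed, vcovSCC(fixed))
```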


(Q3) How can I compute a Hausman test employing the robust standard errors? Or are the Hausman test assumptions not satisfied?
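My best guess so far, based on my reading of the plm documentation (please correct me if this is wrong): phtest() has a regression-based variant (method = "aux") that accepts a robust covariance estimator, e.g.:

```r
# regression-based ("auxiliary") Hausman test; the vcov argument makes it
# use the arellano-robust covariance instead of the classical one
phtest(fixed, random, method = "aux",
       vcov = function(x) vcovHC(x, method = "arellano"))
```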

Best and many thanks in advance,

Wooly

I wondered if anybody can explain to me how I can use the AIC for manual reduction of my GLM. I fit a GLM with the following independent variables (full model):

a, b, c, d

a*b, a*c, a*d, b*c, b*d, c*d

a*b*c, a*b*d, a*c*d, b*c*d

Some of those main effects/interactions are significant, others are not. Is there a stepwise reduction procedure (based on the AIC) that I can follow to drop some of the non-significant main effects/interactions?

I already did a quick web search and consistently found that the AIC can be used to compare two (or more) models (the one with the lowest AIC being the better one), but I am looking for a procedure to reduce my full model, similar to the classical stepwise backward reduction procedure based on p-values.

Please don't just suggest R commands like step() or stepAIC(). That would not help, since I'm not working with R. But it would probably help if you know what those commands do. Maybe I can perform those procedures manually in my statistics software.
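From what I could gather, backward elimination with step()/stepAIC() amounts to the following loop (written in R syntax here only because it is concrete; the logic should be reproducible by hand in any package that reports AIC):

```r
# What backward stepAIC() does, written out explicitly. drop1() computes the
# AIC of every model obtained by deleting one admissible term; it respects
# marginality, i.e. a main effect is not dropped while its interactions remain.
backward_aic <- function(fit) {
  repeat {
    tab  <- drop1(fit, k = 2)             # k = 2 gives the usual AIC penalty
    best <- rownames(tab)[which.min(tab$AIC)]
    if (best == "<none>") return(fit)     # no single deletion lowers AIC: stop
    fit <- update(fit, as.formula(paste(". ~ . -", best)))  # drop term, refit
  }
}
```

In words: at each step, refit the model once per admissible term, delete the single term whose removal yields the lowest AIC, and stop as soon as keeping the model unchanged ("&lt;none&gt;") has the lowest AIC.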

Thanks!

Fred.