Help with interpretation

arps

New Member
#1
Hi All,

Here is what I need help with. I am working with a small sample size, and this is the result so far.

X and Y are not correlated; however, when I place X in a stepwise multiple regression predicting Y, alongside three other (related) variables (A, B, C), X and two of the other variables (A and B) are significant predictors of Y. Note that A and B are significantly correlated with Y outside of the regression.

How should I interpret these findings? X predicts unique variance in Y, but since X and Y are not correlated (Pearson), this is somewhat difficult to interpret.

My other concern, I suppose, is how to interpret it practically rather than statistically or mathematically. Say, for example, swimming speed and trait anxiety are not correlated, but trait anxiety is a significant predictor of swimming speed in a multiple regression alongside other predictors. How can this make sense, practically?

Thanks in advance
 

rogojel

TS Contributor
#2
hi,
I do not think a factor needs to be correlated with the DV for it to possibly turn out significant once other factors are accounted for. I would see it as an effect that is masked by the total variation due to all the other factors. Once that variation is reduced, by accounting for the effects of the other factors, the effect might become visible, which is what the multiple regression does.

regards
 

arps

New Member
#3
Thank you so much for replying :)
Could you please elaborate a little on how, even after partial correlation, X (the IV) is found to have no significant relationship with Y (the DV), yet stepwise multiple regression shows X is a predictor of Y?

Regards
Arps
 

rogojel

TS Contributor
#4
hi,
I only have an iPad so I can't generate any data, but this is how I think this works; hopefully someone will correct me if I got it wrong.
Imagine the "true" relationship is Y = a1*X1 + a2*X2 + eps, where eps is the residual error.

If you only use X1, you get a model Y = a1*X1 + EPS with a much larger error term, because the influence of X2 is now hidden within the error. To calculate the significance, you compare the variance explained by X1 to the variance of the residuals (EPS), and since this residual variance is large, you might not see the effect of X1 as significant.

In the better model, the error term is much smaller because the effects of both X1 and X2 have been accounted for, so the test comparing the variance explained by X1 to the residual variance (eps) has a much higher chance of coming out significant.
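Since I can't run anything here, a quick sketch of what I mean in Python — all numbers are made up for illustration (a1 = 0.5, a2 = 3.0, n = 30, and the `ols` helper is just a hand-rolled least-squares fit, not any particular package):

```python
import numpy as np

# Hypothetical simulation of the argument above: Y = a1*X1 + a2*X2 + eps,
# with a2 much larger than a1, independent predictors, and a small sample.
rng = np.random.default_rng(42)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(X, y):
    """OLS with intercept; returns coefficients, t-statistics, and RSS."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss = resid @ resid
    sigma2 = rss / (len(y) - X.shape[1])  # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se, rss

# Simple regression: X2's influence sits in the error term, inflating it.
_, t_simple, rss_simple = ols(x1.reshape(-1, 1), y)
# Multiple regression: accounting for X2 shrinks the residual variance.
_, t_full, rss_full = ols(np.column_stack([x1, x2]), y)

print(f"Y ~ X1 alone: t(X1) = {t_simple[1]:.2f}, RSS = {rss_simple:.1f}")
print(f"Y ~ X1 + X2:  t(X1) = {t_full[1]:.2f}, RSS = {rss_full:.1f}")
```

The residual sum of squares can only go down when X2 enters the model, so the t-test for X1 is run against a smaller error variance and stands a better chance of reaching significance.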

I hope this helps