Comparing betas of different regression analyses for significance

#1
Hi all,

I have been struggling with this for a while now, so I really hope one of you can help me out:

I am trying to compare two different multiple regression analyses to see which association is stronger. I was looking at the beta coefficients to do this, but the problem is that this way I can't tell whether one association is significantly stronger than the other.

To illustrate, the hypotheses I want to test are the following:

"I expect (1) psychopathy to be positively associated with both physical and social aggression, with a stronger association with physical than with social aggression
(2) narcissism to be positively associated with both physical and social aggression, with a stronger association with social than with physical aggression.

So to test the relation between psychopathy and narcissism (the independent variables) and physical and social aggression (the dependent variables), I wanted to run two multiple regression analyses. But that still doesn't tell me whether one association is significantly stronger than the other.
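To make it concrete, this is roughly what I had in mind (a rough sketch in Python/statsmodels; the file name and column names are just placeholders for my actual data):

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aggression_data.csv")  # placeholder for my data set

# Standardize everything so the coefficients come out as betas
# (standardized regression coefficients).
z = (df - df.mean()) / df.std()

# Same two predictors in both equations.
X = sm.add_constant(z[["psychopathy", "narcissism"]])

model_physical = sm.OLS(z["physical_aggression"], X).fit()
model_social = sm.OLS(z["social_aggression"], X).fit()

print(model_physical.params)  # betas for physical aggression
print(model_social.params)    # betas for social aggression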

Does anyone know how to do this? Or, maybe more importantly in my case, how to formulate this correctly in a method section?

Many, many thanks to anyone who can help me out!
 

noetsi

Fortran must die
#2
I am not sure what it is that you want to compare. If it is the overall impact of all regressors on the DV, adjusted R square might work, but only if you use the same sample for the two regression equations; otherwise it's an invalid comparison, I believe. Or do you want to know, within a given regression, which regressor has the greatest impact? In that case this might help:

http://www.talkstats.com/showthread.php/68516-Relative-impact-of-regressors-on-Y.

If you want to know whether a specific regressor had a greater impact on one DV than on another, I think you have to use the same sample for both analyses, although I could be wrong. If you do use the same sample, I would think you could use the approach in the thread above.
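And if the betas themselves are what you want to compare, one way I could imagine testing whether a specific regressor's beta differs between the two equations (again, only with the same sample) is to bootstrap the difference between its standardized coefficients and see whether the interval excludes zero. Just a sketch of the idea in Python, with placeholder file and column names, not something I have tested:

import numpy as np
import pandas as pd
import statsmodels.api as sm

def beta_diff(data, regressor="psychopathy"):
    # Standardize within the (re)sample so the coefficients are betas.
    z = (data - data.mean()) / data.std()
    X = sm.add_constant(z[["psychopathy", "narcissism"]])
    b_physical = sm.OLS(z["physical_aggression"], X).fit().params[regressor]
    b_social = sm.OLS(z["social_aggression"], X).fit().params[regressor]
    return b_physical - b_social

df = pd.read_csv("aggression_data.csv")  # same sample used for both equations
rng = np.random.default_rng(0)

observed = beta_diff(df)

# Resample the same cases with replacement and recompute the difference.
boot = np.array([
    beta_diff(df.iloc[rng.integers(0, len(df), size=len(df))])
    for _ in range(2000)
])

# 95% percentile interval; if it excludes 0, the two betas differ for this sample.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"difference = {observed:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")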