# Thread: Relative impact of predictors in linear regression

1. ## Relative impact of predictors in linear regression

I want to rank variables from most to least important in terms of impact on the DV. My predictors are either interval or dummy; my DV is interval. I can generate standardized beta weights, but I have heard conflicting opinions about whether that is a good way to rank relative impact.

2. ## Re: Relative impact of predictors in linear regression

Define "important." Is important the magnitude of the slope, or the % of the DV's variation explained?

If it is the slope, standardize your IVs and use the coefficients. If it is the % of variation explained, use eta^2 or a similar measure.

Depending on your goal, one may be more important than the other. For example, say your DV is a left-skewed survey rating bounded between 0 and 10. If I want to move all responses higher, I might be interested in the IV with the larger coefficient. However, if I want to focus on moving the low responses in the tail, I would be interested in which IVs explain the greatest variation.
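A minimal numpy sketch of the two notions of "important" above. The data, variable names, and effect sizes are made up for illustration; the "variance explained" measure here is the drop-one decrease in R^2, one common eta^2-style choice, not necessarily the one Miner has in mind:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(0, 1, n)        # IV on a narrow scale, large slope
x2 = rng.normal(0, 10, n)       # IV on a wide scale, small slope
y = 2.0 * x1 + 0.3 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]          # raw slopes

# Standardized betas: raw slope scaled by sd(x) / sd(y)
beta = b[1:] * np.array([x1.std(), x2.std()]) / y.std()

def r2(X, y):
    """R^2 of an OLS fit."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    return 1 - resid.var() / y.var()

# Eta^2-style share: how much R^2 falls when each IV is dropped
full = r2(X, y)
eta_like = [full - r2(np.delete(X, j, axis=1), y) for j in (1, 2)]
print(beta, eta_like)
```

With these invented numbers, x2 wins on the standardized slope while both measures are well-defined and can disagree in general, which is the point of asking "important how?"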


4. ## Re: Relative impact of predictors in linear regression

That is a very good question, because this is an area where substance and statistics don't neatly mesh, at least from my understanding. I think here "important" means: relative to the other predictors, which predictor causes the greater change in the dependent variable (i.e., which causes hours and income to grow faster)? I would guess that is the slope, but the details you mention are not ones I am familiar with. We have no experts in vocational rehabilitation here, so there is no theory to build on.

Some argue that standardizing dummy variables [which have only two levels] is not valid. What do you think about that argument, Miner?

5. ## Re: Relative impact of predictors in linear regression

If you are after greater change, that would be the coefficients (slopes).

I agree on the dummy variables not needing standardization. You standardize continuous variables because the coefficient depends on the scale on which the IV is measured, and standardizing puts all IVs on an equal footing. This does not apply to a discrete IV.
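A quick sketch of why standardizing a 0/1 dummy is mostly a rescaling. The data are synthetic and the group difference (3.0) is an assumption; the standardized slope is just the raw slope times the dummy's SD, and the raw slope keeps its natural reading as the group mean difference:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
d = rng.integers(0, 2, n).astype(float)          # 0/1 dummy IV
y = 5.0 + 3.0 * d + rng.normal(0, 1, n)          # true group difference = 3

X = np.column_stack([np.ones(n), d])
b_raw = np.linalg.lstsq(X, y, rcond=None)[0][1]  # ~ group mean difference

d_std = (d - d.mean()) / d.std()
Xs = np.column_stack([np.ones(n), d_std])
b_std = np.linalg.lstsq(Xs, y, rcond=None)[0][1]

# b_std equals b_raw * sd(d); the group-difference meaning is lost
print(b_raw, b_std, b_raw * d.std())
```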


7. ## Re: Relative impact of predictors in linear regression

Lol, it never occurred to me that the way a dummy variable is measured does not really matter, perhaps because I was thinking of the categorical variable before it was transformed (or maybe because I never thought about it at all). When the measured levels are just 1 and 0, the scale hardly matters.

8. ## Re: Relative impact of predictors in linear regression

Here is an interesting question. In one case I had to rank 5 dummy variables. When you compare the raw coefficients and the standardized beta weights, two of the variables switch places in terms of relative impact (the variable with the second-highest impact on the raw scale became third when standardized, and the variable with the third-highest impact became second).

So what do you use here, the raw or the standardized slopes, to determine relative impact? I decided to go with the standardized slopes, simply because other variables in the model that we are not being asked to rank now may be ranked in the future, and some of those are interval. But that is pretty ad hoc logic.
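This kind of flip is easy to reproduce, because a dummy's SD depends on its prevalence (sd = sqrt(p(1-p))). A synthetic sketch (the prevalences and slopes are invented, not from the actual data in the thread) where the raw and standardized rankings disagree:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
d1 = (rng.random(n) < 0.5).astype(float)    # common event, sd ~ 0.50
d2 = (rng.random(n) < 0.05).astype(float)   # rare event, sd ~ 0.22
y = 2.0 * d1 + 3.0 * d2 + rng.normal(0, 1, n)   # d2 has the larger raw slope

X = np.column_stack([np.ones(n), d1, d2])
b = np.linalg.lstsq(X, y, rcond=None)[0][1:]            # raw slopes
beta = b * np.array([d1.std(), d2.std()]) / y.std()     # standardized

# Raw slopes rank d2 first; standardized betas rank d1 first
print(b, beta)
```

The rare dummy has the bigger per-person effect, but because it varies less it moves less of the DV's variance, so the two rankings answer different questions.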

9. ## Re: Relative impact of predictors in linear regression

It sounds like you have a more complicated situation. Let's start with a simple case:

Simple case: No interactions
Impact of IVs ~ standardized coefficients (beta weights) of each IV

Complex case: Interactions
Impact of the IVs must be assessed by adding the standardized coefficient of the interaction at each level of the interacting factor to the standardized coefficient of the IV alone. Therefore, the impact of an IV could change depending on the level of the interacting IV.
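The complex case above can be sketched as a "simple slopes" calculation. Everything here is synthetic and assumes already-standardized IVs; the point is just that with an interaction, the slope of x1 is b1 + b12 * x2, so it must be reported at chosen levels of x2:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(0, 1, n)                   # standardized IV
x2 = rng.normal(0, 1, n)                   # standardized interacting IV
y = 0.5 * x1 + 0.2 * x2 + 0.4 * x1 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Slope of x1 at low / mean / high x2 (-1, 0, +1 SD)
slopes = {level: b[1] + b[3] * level for level in (-1.0, 0.0, 1.0)}
print(slopes)
```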


11. ## Re: Relative impact of predictors in linear regression

I had not considered interaction. There is no theory that I am aware of suggesting interaction is occurring (although theory in this area is likely non-existent; VR is not big on quantitative analysis).

Are you suggesting that you add the interaction term's value to the main effect's value to generate the relative ranking? One problem with that: if some IVs interact with, say, one other IV and others with two (or with none), the IVs with more interactions will always have the greatest impact (because you are adding more values together).

12. ## Re: Relative impact of predictors in linear regression

I reread your prior post in light of your latest. I think you are taking the correct approach with the standardized slopes. It would not be unexpected for the rankings to shift after standardizing, because the original rankings were based on IVs with different relative magnitudes. Standardizing equalizes these magnitudes and forces a re-ranking; the larger the original differences in magnitude, the greater the impact on the re-ranking.

Regarding your question on interactions, yes, you would add them. The more interactions an IV is part of, the more impact it has on the DV, so this makes sense.
