# Interpreting regression coefficients (transformed)

#### Ochotona

##### New Member
Hello,

I ran a GLM (poisson) analysis and have a list of my coefficients with their SEs. I need help interpreting the betas because the predictor variable data were log transformed prior to analysis.

Here is the example:
Y = dependent variable
A = independent variable
B = natural log of (A+1)

The regression equation is:
Y ~ constant + B

So, if variable B had a beta estimate of -0.19 +/- 0.02, then how do I interpret this in terms of A's effect on Y? ie, the effect of untransformed data on Y?

Do I simply back-transform the beta estimate and SE? How do I back-transform the SE's?

#### fed1

##### TS Contributor
I think you should interpret the coefficient as is.

Notice that the slope in your model (the coefficient on B) is the effect of changing A from some value, say A0, to base*(A0 + 1) - 1, where "base" is the base of your log. If the base were 10, the slope would be the effect of increasing A tenfold and adding 9.

There is no need to alter the standard error from your original model: the estimate is asymptotically (large-n) normal no matter how we interpret it, so the standard errors remain valid (asymptotically).

#### jamesbond6

##### New Member
Hi,

I ran into a similar problem and I want to make sure whether it is solvable (at least by my method). I have a model ln(Y) = b*X + ...

In my dataset X has a lot of negative values and a roughly lognormal shape. I want to take its natural logarithm, but then I lose half of the observations. So I make a constant C equal to the absolute value of the lowest value of X, plus 1. Then I have the model:

ln(Y) = b*ln(X+C) + ...

After estimation I wanted to interpret the b coefficient and couldn't. Computing the elasticity leaves me with:

b*[d(ln(X+C))/d(ln X)]

I doubt it is solvable and interpretable. Maybe there is some way to work around the negative-values problem?
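To make the problem concrete, the bracketed derivative works out to X/(X+C), so the implied elasticity is b*X/(X+C) and varies with X rather than being a constant. A quick sketch (b and C below are made-up illustration values, not estimates):

```python
# Sketch: in the model ln(Y) = b*ln(X + C) + ..., the elasticity of Y
# with respect to X is b * X / (X + C), since
# d ln(X+C) / d ln X = X / (X + C).
# b and c here are hypothetical values for illustration only.

def elasticity(b, x, c):
    """Elasticity of Y w.r.t. X at the point x: b * x / (x + c)."""
    return b * x / (x + c)

b, c = 0.5, 10.0  # made-up coefficient and shift constant
for x in (-5.0, 1.0, 50.0, 500.0):
    print(f"x = {x:6.1f}  elasticity = {elasticity(b, x, c):+.3f}")
# The elasticity approaches b only when x >> C, and changes sign with x,
# which is why the shifted-log coefficient has no single clean reading.
```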

#### fed1

##### TS Contributor
Not familiar with elasticity. What is it?

#### Ochotona

##### New Member
> I think you should interpret the coefficient as is.
>
> Notice that the slope in your model (the coefficient on B) is the effect of changing A from some value, say A0, to base*(A0 + 1) - 1. If the base were 10, the slope would be the effect of increasing A tenfold and adding 9.
>
> There is no need to alter the standard error from your original model: the estimate is asymptotically (large-n) normal no matter how we interpret it, so the standard errors remain valid (asymptotically).

Thanks for your feedback. I can see the reasoning for leaving it 'as is'; however, for the purposes of describing the results in the text of my report/manuscript I would like to be able to discuss it in such a way that a layperson could understand. Any tips?

#### fed1

##### TS Contributor
OOPS!

You are running Poisson regression with a log link, right? So you are modelling

Log( E(Y; A) ) = b + m*log(A + 1)

Consider the ratio

g(m) = E(Y; A = k*A0 - 1) / E(Y; A = A0 - 1)

= e^b * (k*A0)^m / ( e^b * (A0)^m )

= k^m

so g(m) is the fold increase in the expected value (rate) of the Poisson process in moving from A = A0 - 1 to A = k*A0 - 1.

Standard errors for this function can be found using the delta method; if you are so inclined, I can help you with this. Wiki has a page as well.
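As a rough sketch of that calculation (using the -0.19 ± 0.02 estimate posted above and a hypothetical k = 2; not a substitute for the full delta-method machinery):

```python
import math

# Fold-change sketch for a Poisson GLM with log(E[Y]) = b + m*log(A + 1):
# multiplying (A + 1) by k multiplies E[Y] by k**m.
# m and se_m are the example values from the thread; k = 2 is hypothetical.
m, se_m = -0.19, 0.02

k = 2.0                      # doubling A + 1
fold = k ** m                # fold change in E[Y]

# Delta method on the log scale: log(fold) = m*log(k),
# so se(log(fold)) = |log(k)| * se(m).
se_log_fold = abs(math.log(k)) * se_m
lo = math.exp(m * math.log(k) - 1.96 * se_log_fold)
hi = math.exp(m * math.log(k) + 1.96 * se_log_fold)

print(f"fold change for k = {k}: {fold:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
```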

#### Ochotona

##### New Member
Thanks again for your help. I need to ask a follow up question about interpreting a specific variable (specifically variable B) in the original example. Hopefully I have improved my explanation:

Y = dependent variable (count data)
A = independent variable (count data)

The regression equation is:
Y ~ constant - 0.19*(log(A+1)) [I used natural logs]

I want to be able to say that changing A by one unit has a corresponding ___% decrease in Y.

How do I calculate the % change in Y caused by changes in A?

Thanks,
Shawn


#### fed1

##### TS Contributor
Sorry, I did not mean to confuse you. I will be more direct.
Here is what goes in your blank: 1 - ((A+2)/(A+1))^(-0.19).

The change in Y depends on the value of A you start from. I would phrase the result like this (insert an actual number for x):

"Increasing A from a value of x to x+1 gave a (1 - ((x+2)/(x+1))^(-0.19))*100 percent decrease in Y."

Choose an x relevant to your situation. Do you want to calculate standard errors?

Now here is the explanation. You can skip this.
------------------------------------------------------------------
First, plot the function log( E(Y) ) = constant - 0.19*log(A+1) in Excel.

Notice that it is decreasing in A. Also, the % change in Y caused by changing A one unit depends on the value of A you start from.

What is the % change in Y caused by changing A one unit?

Notice,
log( E(Y) ) = constant - 0.19*log(A+1)  =>  E(Y) = e^( constant - 0.19*log(A+1) )

Now,
E(Y when A = a + 1) / E(Y when A = a)
= e^( constant - 0.19*log(a+2) ) / e^( constant - 0.19*log(a+1) )
= ( (a+2)/(a+1) )^(-0.19)

So the % decrease in Y is 1 - ( (a+2)/(a+1) )^(-0.19).

For example: increasing A from 2 to 3 gives a (100 - 94.7)% ≈ 5.3% decrease in Y.
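A quick numeric check of that formula (beta = -0.19 is the slope from the example; the helper function is just for illustration):

```python
# Check of the % decrease formula for the fitted model
# log(E[Y]) = constant - 0.19*log(A + 1): the ratio
# E[Y | A = a+1] / E[Y | A = a] equals ((a+2)/(a+1))**(-0.19).
beta = -0.19  # slope from the thread's example

def pct_decrease(a, beta=beta):
    """Percent decrease in E[Y] when A goes from a to a + 1."""
    ratio = ((a + 2) / (a + 1)) ** beta
    return (1 - ratio) * 100

for a in (0, 2, 10, 100):
    print(f"A: {a} -> {a + 1}: {pct_decrease(a):.1f}% decrease in E[Y]")
# A: 2 -> 3 gives about a 5.3% decrease; the effect shrinks as A grows.
```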

#### Ochotona

##### New Member
Thank you very much for your help! I think I've got it now.

#### zofia

##### New Member
hello,
I have a similar problem. I ran a regression where the dependent variable was in natural logarithm; some independent variables were also in logarithms, others were dummy variables. I need to interpret the effect of the dummies.
The B coefficient for a dummy equals -.723.
What I did was:
exp(-.723) = .48
so my conclusion was that when dummy = 1, the dependent variable decreases by 52%.

This seems to be wrong, though.
Any suggestions?

#### fed1

##### TS Contributor

E[ ln(Y) | dummy = 1 ] = B_dummy + whatever_else.

How to estimate E[ Y | dummy = 1]?

use exp( E[ Y | dummy = 1] )= exp( B_dummy + whatever_else).
{consistent by the delta method, invariance of the MLE, ... tech detail}

So we have

E[ Y | dummy = 1] / E[ Y | dummy = 0] == exp( B_dummy) = .48

or if you like

1 - .48 = .52 = [ E[ Y | dummy = 0] - E[ Y | dummy = 1] ]/ E[ Y | dummy = 0]
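In numbers (a minimal check using the posted B = -.723; nothing here is re-estimated):

```python
import math

# Interpreting a dummy coefficient when the DV is ln(Y):
# exp(B_dummy) estimates the ratio E[Y | dummy=1] / E[Y | dummy=0]
# (up to the technical caveats noted above).
b_dummy = -0.723  # coefficient from the post

ratio = math.exp(b_dummy)        # multiplicative effect of the dummy
pct_change = (ratio - 1) * 100   # relative change in percent

print(f"ratio = {ratio:.3f}, i.e. about {abs(pct_change):.1f}% lower when dummy = 1")
```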

#### modus23

##### New Member
Confused over interpretation of dummy IV's when DV is log transformed...

> hello,
> I have a similar problem. I ran a regression where the dependent variable was in natural logarithm; some independent variables were also in logarithms, others were dummy variables. I need to interpret the effect of the dummies.
> The B coefficient for a dummy equals -.723.
> What I did was:
> exp(-.723) = .48
> so my conclusion was that when dummy = 1, the dependent variable decreases by 52%.
>
> This seems to be wrong, though.
> Any suggestions?
I also have this issue, but the exp(B) values for the dummies in my model are positive and over 1. Do I use the same method? Or is something really wrong?

Dummy #1: B = .694, exp(.694) = 2.00

Dummy #2: B = .758, exp(.758) = 2.13

Dummy #3: B = .452, exp(.452) = 1.57

Do I interpret Dummy #1 as, for example, 200%? Or as 1 - 2 = -1, i.e. -100%?

Or is it simply: when dummy = 1, the DV will increase by 200%?

Sorry, I'm really confused.

#### fed1

##### TS Contributor

Let me amend my earlier post. I had

use exp( E[ Y | dummy = 1] ) = exp( B_dummy + whatever_else ).

It should be

use exp( E[ ln(Y) | dummy = 1] ) = exp( B_dummy + whatever_else ).

Sorry if confusion ensued.

Dummy #1: B = .694, exp(.694) = 2.00
implies that
E[ Y | dummy = 1] / E[ Y | dummy = 0] = 2.00
i.e. E(Y) is twice the size when the dummy is 'on'.

It turns out that for you, unlike zofia, E[ Y | dummy = 1] > E[ Y | dummy = 0],
hence the negative value in

1 - 2 = -1, i.e. -100%

This is fine, really, but you may want to change the sign and report

[ E[ Y | dummy = 1] - E[ Y | dummy = 0] ] / E[ Y | dummy = 0]

so there is a 100% relative increase in E(Y) when the dummy is turned on.
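A minimal check of the three posted coefficients (the B values are from the post above, not re-estimated):

```python
import math

# For a positive dummy coefficient with ln(Y) as the DV, exp(B) is the
# multiplicative effect on E[Y]; exp(B) - 1 is the relative increase.
for name, b in [("Dummy #1", 0.694), ("Dummy #2", 0.758), ("Dummy #3", 0.452)]:
    mult = math.exp(b)
    print(f"{name}: exp({b}) = {mult:.2f} -> {100 * (mult - 1):.0f}% increase when on")
```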

#### helenoftroy

##### New Member
> hello,
> I have a similar problem. I ran a regression where the dependent variable was in natural logarithm; some independent variables were also in logarithms, others were dummy variables. I need to interpret the effect of the dummies.
> The B coefficient for a dummy equals -.723.
> What I did was:
> exp(-.723) = .48
> so my conclusion was that when dummy = 1, the dependent variable decreases by 52%.
>
> This seems to be wrong, though.
> Any suggestions?

I have a similar issue. In my regression the dependent variable (salary in $000's) is transformed using the natural logarithm. The beta coefficient on my female indicator variable (1 for females, 0 for males) is -0.09. I tried exp(-0.09) = 0.914, meaning salary is 8.6% lower for females. However, I was told this is incorrect and that I should do exp(0.09) = 1.094, meaning salary is $1,094 lower for females.

I am really confused and despite my searching google for hours I can't seem to find the correct answer. I'm hoping someone here can help explain it to me.

Thanks (can't wait to start some econometrics and stats classes next semester!)
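Here is the arithmetic as I understand it, as a sketch (B = -0.09 is the coefficient from my regression; exp(B) - 1 is the reading I tried):

```python
import math

# Log-salary dummy sketch: with ln(salary) as the DV and a female
# indicator with coefficient B = -0.09, exp(B) - 1 gives the approximate
# relative difference in salary, not a dollar amount.
b_female = -0.09

exact_pct = (math.exp(b_female) - 1) * 100   # exact percent difference
approx_pct = b_female * 100                  # small-coefficient shortcut: B*100

print(f"exact: {exact_pct:.1f}%   rough approximation: {approx_pct:.1f}%")
```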

#### ssamdani

##### New Member
Hi all. Though this thread is quite old, I am hoping someone will help out, as my problem is also related to back-transformation of regression coefficients, though perhaps a bit simpler than the ones discussed here. My DV is square-root transformed; all IVs are in their original form, and there are quite a few of them. How do I interpret my unstandardized regression coefficients so that they represent an increase or decrease in the DV rather than in the sqrt of the DV? Do I just square them? Is it that simple? I am new to this and confused.

regards
S
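P.S. Playing with the algebra, I suspect simply squaring is not it: if sqrt(Y) = a + b*X, then a one-unit increase in X changes Y by 2*b*sqrt(y0) + b^2, which depends on the starting value y0. A sketch with made-up a and b:

```python
# Why squaring the coefficient doesn't work: with sqrt(Y) = a + b*X,
# a one-unit increase in X changes Y by 2*b*sqrt(y0) + b**2, which
# depends on the starting value y0.
# a and b are made-up illustration values, not estimates.
a, b = 1.0, 0.5

def delta_y(x):
    """Change in Y when X goes from x to x + 1 under sqrt(Y) = a + b*X."""
    y0 = (a + b * x) ** 2
    y1 = (a + b * (x + 1)) ** 2
    return y1 - y0

for x in (0, 2, 10):
    print(f"X: {x} -> {x + 1}: Y changes by {delta_y(x):.2f} (b**2 alone is {b**2:.2f})")
```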