Logistic regression with significant coefficient but .000 Exp(B)

#1
I am getting an output for a binary logistic regression I can't quite figure out how to interpret.

It is a 0/1 binary logit with 10 numeric IVs. The scale of each IV is a percentage expressed as a two-decimal number (i.e., .62).

4 of the IVs are significant at .05. So far so good.

Three of the betas are negative (-17, -12, and -5), which makes the odds ratios, Exp(B), essentially zero (they display as .000).

I'm not sure how to explain or interpret that. If the odds are that low, I would have thought it would not be significant.

I don't run a lot of logistic regressions and have not seen this before. I have seen odds ratios less than 1 (like .62), but never .00000005.

Thoughts or comments?
 

noetsi

No cake for spunky
#2
Actually, I believe the odds ratio, exp(b), is less than 1, not 0, when the slope is negative. It is hard to imagine a zero odds ratio, because the odds in the numerator would have to be 0, and I don't think odds of 0 exist (what would it mean substantively to say the odds of something occurring are 0?). Nor do I believe negative odds ratios exist.
 
#4
Well, I know mathematically it is correct, as I checked it on a calculator. When you exponentiate a beta of -17 (e^-17), you get about .00000004 as the odds ratio. I have never seen it before either.

I have read that you can get negative betas in a logistic regression, but I can't find any literature that talks about a zero odds ratio. Mathematically, though, the more negative the beta, the closer the odds ratio gets to zero.
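For example, a quick sketch in Python, just plugging betas into exp() (the -0.48 is only included because exp(-0.48) is roughly the .62 I am used to seeing):

```python
# Exp(B) is always positive, but it collapses toward zero quickly
# as the beta becomes more strongly negative.
import math

for b in (-17, -12, -5, -0.48):
    print("B = %6.2f  ->  Exp(B) = %.8g" % (b, math.exp(b)))

# B = -17.00 -> Exp(B) ~ 4.1e-08
# B =  -5.00 -> Exp(B) ~ 0.0067
# B =  -0.48 -> Exp(B) ~ 0.62
```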
 
#5
Let me add that if an odds ratio (OR) is close to 1.0, it is less likely to be significant, while an OR far from 1 (close to zero, or very large) is much more likely to be significant.

noetsi said:
Three of the betas are negative (-17, -12, and -5), which makes the odds ratios, Exp(B), essentially zero (they display as .000).

I'm not sure how to explain or interpret that. If the odds are that low, I would have thought it would not be significant.
The OR is the ratio of the odds of event A divided by the odds of event B. So when you have an OR = 10,000, it means the odds of A are 10,000 times the odds of B. When the OR is 0.0001, it means the opposite: the odds of B are 10,000 times the odds of A. That is the interpretation template.
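For instance, a tiny worked version of that template (the odds values here are made up purely for illustration):

```python
odds_a = 0.002                      # odds of event A (hypothetical)
odds_b = 20.0                       # odds of event B (hypothetical)

or_ab = odds_a / odds_b             # 0.0001 -> the odds of A are 1/10,000 of the odds of B
or_ba = odds_b / odds_a             # 10,000 -> the odds of B are 10,000 times the odds of A
print("OR(A vs B) = %g, OR(B vs A) = %g" % (or_ab, or_ba))
```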

noetsi said:
I don't run a lot of logistic regressions and have not seen this before. I have seen odds ratios less than 1 (like .62), but never .00000005.
Your data seems to be pathological. You should fix that first.

noetsi said:
I guess in theory you could have an odds ratio of 0 (but not negative), although what an OR of 0 means or how it could occur is beyond me.
I think if one of the events is completely impossible, the OR can become 0 or infinity. However, I don't think it is practical to even talk about (let alone do research on) something that never happens. For example, the odds of the Sun turning into a teddy bear are exactly zero (at least they seem so to me). So the OR of a sun-to-teddy-bear transformation versus a sun-to-white-dwarf transformation is zero. But who cares about such an OR? So perhaps this is why I have not seen any ORs equal to 0.
 

maartenbuis

TS Contributor
#6
First of all, as others have already noted, an odds ratio near 0 is a huge negative effect. "No effect" in an odds ratio is represented by the value 1, not 0.

I suspect that the problem has to do with the scaling of your independent/explanatory/x-variables. The odds ratio gives you the expected factor by which the odds change for a one-unit change in an independent variable. In your case your independent variables are proportions, so a unit change represents a change from the theoretical minimum (0) to the theoretical maximum (1). It is not surprising that you get unrealistically large effect sizes, as that is probably a (large) extrapolation.

The easiest solution is to just multiply your independent variables by 100, which means that your variables are now percentages and the unit is a 1 percentage-point change. In your case the largest negative effect will now be exp(-0.17) = .84. So a percentage-point change in that explanatory variable is associated with a change in the odds of success by a factor of .84, or (.84-1)*100% = -16%.
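To make that concrete, here is a minimal sketch (simulated data with an assumed true slope of -17 on the proportion scale, not the original poster's model) showing what happens to B and Exp(B) when the predictor is multiplied by 100:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
prop = rng.uniform(0, 1, n)                        # predictor on the 0-1 proportion scale
p = 1.0 / (1.0 + np.exp(-(2.0 - 17.0 * prop)))     # assumed true slope of -17 per 0 -> 1 change
y = rng.binomial(1, p)

# Proportion scale: the estimated B comes out around -17, so Exp(B) is astronomically small
fit1 = sm.Logit(y, sm.add_constant(prop)).fit(disp=0)
print("proportion scale: B = %.2f, Exp(B) = %.2e" % (fit1.params[1], np.exp(fit1.params[1])))

# Percentage scale: B shrinks by a factor of 100, and Exp(B) is now the odds ratio
# per 1 percentage-point change (around .84)
fit2 = sm.Logit(y, sm.add_constant(prop * 100)).fit(disp=0)
print("percentage scale: B = %.2f, Exp(B) = %.2f" % (fit2.params[1], np.exp(fit2.params[1])))
```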
 

hlsmith

Less is more. Stay pure. Stay poor.
#7
I like your response, maartenbuis; it helps with the interpretation. In addition, it is always prudent to include confidence intervals.
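For example, one common way to report an odds-ratio confidence interval is to exponentiate the Wald interval for the coefficient (the B and SE values below are made up for illustration):

```python
import math

b, se = -0.17, 0.03                        # hypothetical coefficient and standard error
lo, hi = b - 1.96 * se, b + 1.96 * se
print("OR = %.3f, 95%% CI [%.3f, %.3f]" % (math.exp(b), math.exp(lo), math.exp(hi)))
# prints roughly: OR = 0.844, 95% CI [0.795, 0.895]
```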


Though I am a little more intrigued about the Sun turning into a teddy bear.
 

noetsi

No cake for spunky
#8
However, I don't think it is practical to even talk about (let alone do research on) something that never happens.
That would make a great sig, although there are practical advantages in research to doing exactly this. No one can ever show you are wrong empirically if what you study does not actually exist (or cannot be measured, which is essentially the same thing). Social science research has been full of this for the last half century....
 

noetsi

No cake for spunky
#10
Long ago someone wrote, in one of the elite journals, a tongue-in-cheek set of rules for how to get published. One of them was to write articles on things no one could ever disprove because they could not be operationalized (another was to be so vague that you could come down on either side of a winning issue, or just come down on both sides). Sadly, they were talking about very real researchers and research, in the best of journals.

I doubt anything has changed.