Hi,
I ran a logistic regression in SPSS with two dichotomous categorical predictors and their interaction to predict my dichotomous outcome variable.
I ended up with one of the predictors being significant, which was expected; however, the exp(B) for this variable seemed on the large side: it was 58.24.
I am wondering why it would be so large and if this is a problem?
This significant predictor is sector of work, and the prevalence of the outcome is about 97% in one sector compared to 45% in the other, so I would expect a high odds ratio, but surely it shouldn't be this high? Perhaps it can be, though!
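A quick back-of-envelope check, using only the two prevalences quoted above (97% and 45%; the exact cell counts are not given), shows what odds ratio those figures imply:

```python
# Implied odds ratio from the two quoted prevalences
# (97% in one sector vs 45% in the other -- figures from the post,
# not exact counts).
p1, p2 = 0.97, 0.45
odds1 = p1 / (1 - p1)    # 0.97 / 0.03, about 32.3
odds2 = p2 / (1 - p2)    # 0.45 / 0.55, about 0.82
or_hand = odds1 / odds2
print(round(or_hand, 1))  # prints 39.5
```

So an OR in the tens is entirely consistent with those prevalences; 58 is not wildly out of line once the smaller stratum sizes and the interaction term are taken into account.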
Thanks for any advice with this
I would assume your interaction term is significant if you left it in the model. If so, you no longer calculate an OR for a variable - you have to stratify by the other variable and calculate two ORs, one for each of the group levels.
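To make the stratification concrete, here is a minimal sketch. The counts below are made up for illustration; you would substitute your own cross-tabulation of outcome by sector within each level of the other predictor:

```python
# Sketch: separate (stratified) odds ratios when an interaction is
# in the model. All cell counts here are hypothetical.
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a/b = outcome present/absent in group 1,
    c/d = outcome present/absent in group 2."""
    return (a / b) / (c / d)

# Stratum 1 of the second predictor
print(odds_ratio(40, 10, 25, 25))  # prints 4.0
# Stratum 2 of the second predictor
print(odds_ratio(30, 20, 10, 40))  # prints 6.0
```

If the two stratum-specific ORs differ meaningfully, that is exactly what the interaction term is capturing.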
That is what is called a simple effect in ANOVA (although the same thing can be done in regression).
I think this might help.
http://www-01.ibm.com/support/docvie...id=swg21481351
"Very few theories have been abandoned because they were found to be invalid on the basis of empirical evidence...." Spanos, 1995
Hi,
Thanks for your replies. The interaction was actually NOT significant, so I'm not sure how to go forward with this. I have read somewhere that with a categorical predictor with two levels you can get an inflated odds ratio (exp(B)), and moreover one of the conditions has almost 100% of one level of the other factor (sector) falling within it, so there isn't much variance there. Could this all lead to my result? I just feel uncomfortable saying one group is 58 times more likely to show the outcome...
Thanks
First, you need to drop the interaction term out of the model if it is not significant.
In linear regression, and I assume logistic regression, slopes can be attenuated with dummy variables when 90 or more percent of the data is in one level of an IV. This has nothing to do with interaction and I am not sure how this influences an odds ratio exactly. Ultimately this effect is tied to reduced variation due to so many in one level. If your interaction term is not significant you should remove it from the model as hlsmith said.
"Very few theories have been abandoned because they were found to be invalid on the basis of empirical evidence...." Spanos, 1995
How does the model fit?
I have had huge ORs like that before. I just make sure they are in the ballpark via calculating them by hand as well.
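One way to do the hand check: with a single binary predictor (and no other covariates), exp(B) from a logistic regression equals the cross-product ratio ad/bc of the 2x2 table. The counts below are hypothetical, chosen only to match the rough 97%/45% pattern in the post:

```python
# Cross-product ratio check against exp(B).
# Hypothetical counts matching the rough prevalences in the thread.
a, b = 97, 3    # sector A: outcome present / absent
c, d = 45, 55   # sector B: outcome present / absent
print(round((a * d) / (b * c), 1))  # prints 39.5
```

If the model's exp(B) lands in the same ballpark as this ratio, the estimate is at least internally consistent.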
Thanks for your responses. I had a Cox and Snell R² of .20.
I actually noticed something, however: I did not specify these variables as categorical in SPSS, so perhaps they were being read incorrectly. The exp(B) went down dramatically but is still significant.
However, now the odds ratio seems TOO LOW. It went from 58 to less than 1 (0.032). This is strange; I'm not sure what to rely on now...
what program are you using?
I'm using SPSS 20...
Awh, you are doing everything correctly. You just don't have the right reference group: 1/0.032 = 31.25, which, if I remember correctly, was approximately the OR when calculating it by hand. Just figure out how to flip the reference group in SPSS; if you can't figure it out, you can always recode the data.
That is all assuming you had your data coded correctly the first time.
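The arithmetic behind this: flipping the 0/1 coding of a binary predictor negates its coefficient, so the odds ratio simply inverts, exp(-B) = 1/exp(B):

```python
# Flipping the reference group inverts the odds ratio.
or_reported = 0.032            # exp(B) as reported by SPSS
or_flipped = 1 / or_reported   # OR with the other group as reference
print(round(or_flipped, 2))    # prints 31.25
```

So 0.032 and 31.25 are the same result, just stated from opposite reference groups.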