
Thread: Logistic Regression: Classification table vs. Chi-square?

  1. #1

    Logistic Regression: Classification table vs. Chi-square?




    Hello,

    I'm using binary logistic regression for my bachelor thesis (psychology).

    My problem is that the classification table does not change between the null model and the model with the predictor, but the chi-square is significant. I don't know how to interpret this: does it mean that the model with the predictor is not different from the null model, or is the difference so small that it does not show up in the classification table? And if I reported that the model was insufficient, how would I explain this? Would pointing to the unchanged classification table be enough?
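    To illustrate what may be going on: a predictor can be clearly significant while every predicted probability stays on the same side of the .5 cutoff, so the classification table never moves. A minimal simulation in Python (using numpy and statsmodels; all numbers are invented to mimic a roughly 80% base rate):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)

    # True model: the predictor matters, but the intercept is large enough
    # that almost every true probability stays above the .5 cutoff.
    p = 1 / (1 + np.exp(-(1.8 + 0.5 * x)))
    y = rng.binomial(1, p)

    model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)

    pred = (model.predict() >= 0.5).astype(int)
    accuracy = (pred == y).mean()
    base_rate = max(y.mean(), 1 - y.mean())  # null model: always guess the majority class

    print("model chi-square p-value:", model.llr_pvalue)  # clearly significant
    print("null-model accuracy:", base_rate)
    print("fitted-model accuracy:", accuracy)             # essentially unchanged
    ```

    With numbers like these, the likelihood-ratio chi-square is highly significant, yet the classification table at the default cutoff is essentially unchanged, because the predictor shifts probabilities around without pushing cases across .5.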

    I've attached the SPSS output as a PDF. It's in German, but I translated the classification table, so you can probably understand it anyway. If not, please tell me.

    I know that my problem is quite similar to this one, but my classification table only shows a worse overall percentage when I change the cutoff value, so it's probably not the same problem?

    I appreciate any help. Even ideas for search keywords would be helpful.

    Best regards,
    kazu

  2. #2
    hlsmith

    Re: Logistic Regression: Classification table vs. Chi-square?

    So your null model correctly classifies 81% of cases, then you changed the model and it still only classifies 81%, correct?

    Please describe the change you made to the model in more detail. Did you add another variable? If so, was it significant by its Wald test? Did the AIC or anything else change?


  4. #3

    Re: Logistic Regression: Classification table vs. Chi-square?

    hlsmith, thank you very much for your reply.

    I only added one predictor to the model (which did not have any predictors before), and the Wald test for this predictor was significant. Additionally, the Nagelkerke R squared changed to .20.

    As far as I understand it, everything except the classification table indicates that there is an effect. My problem is probably that I don't quite understand how the classification table relates to Nagelkerke R-squared and the chi-square and Wald tests. The most plausible explanation I can imagine is that there is an effect, but it is too weak to show up in the classification table. However, I don't know if this is possible at all.

    Best regards,
    kazu

  5. #4
    noetsi

    Re: Logistic Regression: Classification table vs. Chi-square?

    None of the analyses I have done in logistic regression actually discuss classification tables, because logistic regression really is an alternative to those tables.

    One thing you can do is generate one of the goodness-of-fit statistics, such as Pearson's goodness of fit or the Hosmer-Lemeshow test, and see how the fit compares between the intercept-only model and the one where you add the predictor. If you are really lucky, the intercept-only model won't fit the data well (its p-value will be below .05) and the model with the predictor will. But even if both fit, you can get a general sense of which is better by looking at how high the p-value is in the goodness-of-fit test for each of the two models (this is of course tied to the deviance of the two models).
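    The Hosmer-Lemeshow statistic can also be computed by hand if the software won't print it for a given model: group the cases by predicted probability and compare observed to expected event counts in each group. A rough sketch (Python; splitting into ten equal-size groups is one common convention, and the result does depend on the number of groups):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p_hat, g=10):
        """Hosmer-Lemeshow goodness-of-fit test: split the cases into g
        groups by predicted probability, then compare observed and expected
        event counts in each group with a chi-square statistic on g-2 df."""
        order = np.argsort(p_hat)
        stat = 0.0
        for idx in np.array_split(order, g):
            n_g = len(idx)
            obs = y[idx].sum()       # observed events in the group
            exp = p_hat[idx].sum()   # expected events under the model
            stat += (obs - exp) ** 2 / exp
            stat += ((n_g - obs) - (n_g - exp)) ** 2 / (n_g - exp)
        return stat, chi2.sf(stat, g - 2)

    # Example with simulated data that the model fits by construction:
    rng = np.random.default_rng(42)
    p_true = rng.uniform(0.1, 0.9, size=500)
    y = rng.binomial(1, p_true)
    stat, p_value = hosmer_lemeshow(y, p_true)
    print(stat, p_value)  # a high p-value here means no evidence of misfit
    ```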

    Another alternative, I think, is to do a chi-square difference test between the intercept-only model and the model with the predictor. But you would have to look that up.
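    That chi-square difference test is the likelihood-ratio test: twice the difference in log-likelihoods between the two nested models, referred to a chi-square distribution with degrees of freedom equal to the number of added predictors. This is the same test SPSS reports as the omnibus model chi-square. A sketch (Python with statsmodels; data simulated just for illustration):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(1)
    x = rng.normal(size=300)
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 0.8 * x))))

    full = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    null = sm.Logit(y, np.ones_like(y)).fit(disp=0)  # intercept-only model

    lr_stat = 2 * (full.llf - null.llf)  # difference in -2 log-likelihood (deviance)
    p_value = chi2.sf(lr_stat, df=1)     # df = number of predictors added

    print(lr_stat, p_value)
    # statsmodels computes the same test automatically:
    print(full.llr, full.llr_pvalue)
    ```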

    It's difficult to compare pseudo R-squared values in logistic regression across models because they have no intuitive meaning. Also, as far as I know, there is no equivalent of the F-change test (the test in linear regression that shows whether the predictive power of the model increased significantly when you added a variable).


  7. #5

    Re: Logistic Regression: Classification table vs. Chi-square?

    Quote Originally Posted by noetsi
    None of the analyses I have done in logistic regression actually discuss classification tables, because logistic regression really is an alternative to those tables.
    Thank you, this information helps me very much. All the examples of logistic regression I've read only noted that there was an increase in the overall percentage, as if that always happened automatically.

    I can't get SPSS to print the goodness-of-fit statistics for the null model. If you have the time to answer: do you know whether SPSS is able to do this?

  8. #6

    Re: Logistic Regression: Classification table vs. Chi-square?


    I know that this thread is old, but I've come across exactly the same problem as you, Kazu.

    Entering all three categorical variables at the same time gives me a significantly improved model. The Hosmer-Lemeshow goodness-of-fit test is non-significant, which indicates good fit. The area under the ROC curve is significant, at 0.72: not good, but with some predictive value. Entering the variables one by one, one of the variables shows a significant improvement of the model.

    But the classification table does not show any percentage improvement in the predictive value of the model. Did you ever find out what happened?
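    One way to reconcile these two findings: the AUC measures discrimination across all possible cutoffs, while the classification table looks at only one (usually .5). With an imbalanced outcome, an AUC around 0.7 can coexist with a .5-cutoff accuracy that never beats the base rate. A sketch of computing the AUC without any particular package, via the rank-sum identity (Python; toy scores for illustration, and ties are ignored, which is fine for continuous scores):

    ```python
    import numpy as np

    def auc(y, score):
        """AUC via the Mann-Whitney rank-sum identity: the probability that
        a randomly chosen event case scores higher than a randomly chosen
        non-event case."""
        order = np.argsort(score)
        ranks = np.empty(len(score))
        ranks[order] = np.arange(1, len(score) + 1)
        n1 = y.sum()
        n0 = len(y) - n1
        return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

    y = np.array([0, 0, 1, 1])
    print(auc(y, np.array([0.1, 0.2, 0.8, 0.9])))  # 1.0: perfect separation
    print(auc(y, np.array([0.8, 0.9, 0.1, 0.2])))  # 0.0: perfectly wrong
    ```

    Scanning cutoffs other than .5 is exactly what the ROC curve does implicitly, which is why it can register discrimination that the default classification table misses.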
