
Thread: Comparing different, related measures of the same thing to see which is most sig?

  1. #1

    Comparing different, related measures of the same thing to see which is most sig?




    I want to start out by saying my stats knowledge is weak. I am writing my Master's thesis and I'm struggling to work out how to get the answers I want. I am looking at NFL head coaches and whether they get fired.

    What I'm really trying to determine is whether actual winning percentage is a more significant predictor than the winning percentage that the Pythagorean Win Expectation (PWE) formula predicts for each team. Because the two are closely related (teams that win a lot tend to have a high PWE, and vice versa), I understand that putting both in one binary logistic regression is a bad idea.

    If I run two separate binary logistic regressions on the same data set, one using coaching tenure and win percentage and the other using coaching tenure and PWE, can I compare the outputs to find which predictor is more significant? Is there a better way? If this is the right way, is it as simple as comparing the value of B, or is there some sort of equation I need to use (potentially involving the SE, the Wald statistic, and maybe the p-value)?
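    For context, the Pythagorean expectation estimates a team's expected winning percentage from points scored and allowed, which is why it tracks the actual record so closely:

        \text{PWE} = \frac{PF^{\,x}}{PF^{\,x} + PA^{\,x}}

    where PF is points for, PA is points against, and the exponent x is commonly set to around 2.37 for the NFL.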

    Thanks in advance for the help. Any advice would be useful at this stage.

  2. #2
    staassis

    Re: Comparing different, related measures of the same thing to see which is most sig?


    The easiest approach is to compare the two models using the Akaike Information Criterion (AIC). In several statistical packages (such as SPSS or Stata), the AIC score is printed automatically when the model is estimated. The smaller the AIC score, the better the model.
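    In R, for example, a minimal sketch might look like this (the data frame and column names below are hypothetical, so substitute your own):

        # Compare the two competing logistic regressions by AIC.
        # Assumes a data frame `coaches` with columns:
        #   fired   - 1 if the coach was fired, 0 otherwise
        #   tenure  - the coach's tenure
        #   win_pct - actual winning percentage
        #   pwe     - Pythagorean win expectation
        m_win <- glm(fired ~ tenure + win_pct, data = coaches, family = binomial)
        m_pwe <- glm(fired ~ tenure + pwe, data = coaches, family = binomial)
        AIC(m_win, m_pwe)  # the model with the smaller AIC is preferred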

    Your analysis would gain robustness and credibility if you also used cross-validation based on the misclassification rate as a second way of comparing the two models. This would require some programming; languages like R or Matlab would be best for that.
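    As a rough sketch in R (again using the hypothetical `coaches` data frame from above), k-fold cross-validation could look like this:

        # 10-fold cross-validated misclassification rate for a given model formula.
        # Assumes the response column is `fired`, coded 0/1.
        cv_misclass <- function(formula, data, k = 10) {
          folds <- sample(rep(1:k, length.out = nrow(data)))  # random fold labels
          errs <- sapply(1:k, function(i) {
            fit  <- glm(formula, data = data[folds != i, ], family = binomial)
            prob <- predict(fit, newdata = data[folds == i, ], type = "response")
            pred <- as.integer(prob > 0.5)            # classify at the 0.5 cutoff
            mean(pred != data$fired[folds == i])      # fold misclassification rate
          })
          mean(errs)
        }

        cv_misclass(fired ~ tenure + win_pct, coaches)
        cv_misclass(fired ~ tenure + pwe, coaches)

    The model with the lower cross-validated error rate predicts firings better out of sample, which is a more direct answer to your question than comparing coefficients across the two models.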
