Why do I never find these threads on time?
Anyway, I think "easy to explain" is a low-priority criterion when selecting a model (or any analysis, for that matter). When talking about binary models I try to focus on the expected values, which are what the model predicts. In this case the expected value is a probability, so I just skip odds ratios and talk about probabilities, which I think is an easier concept for non-statisticians to grasp. As far as I know, both logit and probit models are estimated with maximum likelihood methods, and you can get probabilities out of both, so for me there is little practical difference in interpretation (at least when presenting results).
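To make the "probabilities from both" point concrete, here is a minimal sketch of my own (not taken from any particular package) of the two inverse link functions that turn the same linear predictor into a probability:

```python
import math

def logit_prob(xb):
    # Inverse logit (logistic CDF): the logit model's predicted probability.
    return 1.0 / (1.0 + math.exp(-xb))

def probit_prob(xb):
    # Standard normal CDF via erf: the probit model's predicted probability.
    return 0.5 * (1.0 + math.erf(xb / math.sqrt(2.0)))

# Both links map any linear predictor into a probability in (0, 1),
# and both give 0.5 when the linear predictor is zero.
for xb in (-2.0, 0.0, 2.0):
    print(f"xb = {xb:+.1f}: logit {logit_prob(xb):.3f}, probit {probit_prob(xb):.3f}")
```

The curves differ a bit in the tails (more on that below the fold), but in the middle of the range they are nearly indistinguishable, which is another reason interpretation rarely hinges on the choice.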

So I'd like to mention the only conceptual reason I know of to choose between logistic and probit regression: probit models can be hard to estimate when the outcome is rare. In cases where the number of 1's in your dataset is low, the model may run into trouble even if the dataset is huge. Logit models, on the other hand, handle rare outcomes much better. For reasons I have yet to determine, most of the datasets I have worked with have had a very low number of successes (is that the right plural?). I assume most studies are about understanding the probability of something unusual happening, so logit models may be more appropriate in those situations. That is why I think logit is far more common.
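One numerical intuition for the rare-outcome problem (my own sketch, under the assumption that the trouble comes from the tails of the two link functions): the normal tail used by probit decays much faster than the logistic tail, so an observed "1" at an extreme linear predictor is astronomically unlikely under probit, which can destabilize the likelihood.

```python
import math

def logit_prob(xb):
    # Logistic CDF: heavy (exponential) tails.
    return 1.0 / (1.0 + math.exp(-xb))

def probit_prob(xb):
    # Normal CDF: tails decay like exp(-xb**2 / 2), much faster.
    return 0.5 * (1.0 + math.erf(xb / math.sqrt(2.0)))

# At a moderately extreme linear predictor, the two links disagree
# by several orders of magnitude about how "impossible" a success is.
xb = -6.0
print(f"logit:  {logit_prob(xb):.3e}")   # about 2.5e-03
print(f"probit: {probit_prob(xb):.3e}")  # about 9.9e-10
```

A single observed success out there contributes a log-likelihood of roughly -6 under logit but roughly -21 under probit, so a handful of rare events can dominate (and destabilize) a probit fit in a way they don't for logit.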

By the way, the log odds are not necessarily linear; in fact, you have to check that linearity assumption in your model. If the relationship is not linear, you may have to adjust the model, just as you would with OLS, by transforming variables or using Fractional Polynomials.
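A quick, informal way to eyeball that assumption (the counts below are made up for illustration): bin the continuous predictor, compute the empirical log odds in each bin, and see whether they increase roughly linearly.

```python
import math

# Hypothetical grouped data: (bin midpoint of x, events, trials).
bins = [(10, 2, 50), (20, 5, 50), (30, 11, 50), (40, 20, 50)]

log_odds = []
for x_mid, events, n in bins:
    p = events / n
    lo = math.log(p / (1 - p))  # empirical log odds in this bin
    log_odds.append(lo)
    print(f"x ~ {x_mid}: empirical log odds = {lo:.2f}")
```

Here the log odds climb by roughly one unit per bin, which is consistent with linearity; large, systematic bends in that sequence would be a sign that a transformation or fractional polynomial is needed.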