Nonlinear odds-to-probs conversion

  • The New England Patriots are playing the Buffalo Bills.
  • The fractional odds are Patriots 1/2 and the Bills 2/1, where a bet on the Patriots risks $2 to win $1, and a bet on the Bills risks $1 to win $2.
  • A rational bettor has $1 to bet.
  • A $1 wager on the favorite Patriots yields a $0.50 profit if they win.
  • Alternatively, a $1 wager on the underdog Bills yields a $2.00 profit if they win.
  • Therefore, the relative payout on the Bills is 4x that of the Patriots (2.00/0.50; this simple ratio ties the teams’ odds together mathematically, making the odds-to-probability relationship nonlinear).
  • Since odds and probabilities have an inverse relationship, the probability that the Bills win is one quarter that of the Patriots.
  • Therefore, the Bills have a 20% chance of winning, which is one quarter of the Patriots’ 80% chance of winning.
Can you show that this is NOT the case?
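The 'nonlinear' conversion described in the bullets above can be sketched as follows: probabilities are taken proportional to 1/odds and then normalized to sum to 1 (fractional odds written as decimals, so 1/2 -> 0.5 and 2/1 -> 2.0):

```python
def nonlinear_probs(odds):
    """Probabilities proportional to 1/odds, normalized to sum to 1."""
    inv = [1.0 / o for o in odds]
    total = sum(inv)
    return [x / total for x in inv]

# Patriots at 1/2 (0.5) and Bills at 2/1 (2.0)
probs = nonlinear_probs([0.5, 2.0])
print(probs)  # [0.8, 0.2] -- the 80% / 20% split from the bullet list
```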

The conventional conversion of odds to probability is inverse-linear (see link below), where each outcome's probability is derived independently from its own odds. This overstates the probability of the underdog while understating the probability of the favorite ... hence the well-known 'longshot bias' (see link below).
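For comparison, the conventional inverse-linear conversion on the same game looks like this (implied probability = 1/(odds + 1), computed independently per outcome; fractional odds as decimals):

```python
def implied_prob(odds):
    """Conventional implied probability from fractional odds."""
    return 1.0 / (odds + 1.0)

pats = implied_prob(0.5)   # Patriots at 1/2 -> about 0.667
bills = implied_prob(2.0)  # Bills at 2/1  -> about 0.333
print(pats, bills, pats + bills)  # these two already sum to 1 (no vig)
```

Note the underdog's probability here (about 33%) is well above the 20% from the nonlinear method, which is the overstatement the post attributes to longshot bias.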

Linear odds-to-probs conversion:

Longshot bias:
This chart shows the odds-to-probs 'misunderstanding' -- assuming the argument above is correct -- in the implied probability from the underdog's odds (the difference between the inverse-linear and nonlinear conversions).

As noted earlier, this could account for 'longshot bias' in betting.

Note: The absolute misunderstanding -- the difference between the linear and nonlinear conversions -- appears to shrink as the underdog's odds increase, but the relative error goes through the roof.
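The shrinking absolute gap and the exploding relative gap can be checked numerically. This sketch assumes a two-outcome event with the underdog at d/1 and the favorite at 1/d (a fair book with no house take), which is my own simplifying assumption, not something fixed by the post:

```python
def linear_prob(d):
    """Conventional implied probability of an underdog at fractional odds d/1."""
    return 1.0 / (d + 1)

def nonlinear_prob(d):
    """Nonlinear probability of the underdog at d/1 vs. a favorite at 1/d."""
    return (1.0 / d) / (1.0 / d + d)  # simplifies to 1 / (1 + d**2)

for d in [2, 5, 10, 20, 50]:
    abs_gap = linear_prob(d) - nonlinear_prob(d)
    rel_gap = abs_gap / nonlinear_prob(d)
    print(f"odds {d}/1: abs gap = {abs_gap:.4f}, relative gap = {rel_gap:.1f}x")
```

At long odds the absolute gap tapers off while the relative gap grows roughly in proportion to d.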
Assuming that 'odds' means the fractional odds for an outcome (i.e., the payout on a $1 bet), the odds and the implied probability are inversely proportional: Prob.x ∝ 1/Odds.x and Odds.x ∝ 1/Prob.x.

Therefore, for any zero-sum event with n competing outcomes:

Odds.1 x Prob.1 = Odds.2 x Prob.2 = ... = Odds.n x Prob.n

In the example of the hypothetical football game:

Patriots: 1/2 x 0.8 = 0.4
Bills: 2/1 x 0.2 = 0.4
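The claimed invariant is easy to verify for the hypothetical game, using the nonlinear probabilities from the example (fractional odds as decimals):

```python
odds = [0.5, 2.0]   # Patriots 1/2, Bills 2/1
probs = [0.8, 0.2]  # nonlinear probabilities from the example

# Odds.i * Prob.i should come out to the same constant for every outcome
products = [o * p for o, p in zip(odds, probs)]
print(products)  # [0.4, 0.4]
```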

Using the conventional inverse-linear odds-to-probs conversion -- where 1) Prob.x = 1/(Odds.x + 1), and then 2) the overround is normalized away -- distorts this natural relationship between odds and probability.
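A minimal sketch of that two-step conventional procedure, on made-up odds that include a house take (the 2/5 and 9/5 numbers here are hypothetical, not from the example game):

```python
odds = [0.4, 1.8]                        # hypothetical 2/5 and 9/5 lines
raw = [1.0 / (o + 1.0) for o in odds]    # step 1: 1/(odds + 1) per outcome
overround = sum(raw)                     # exceeds 1 when the book takes a cut
probs = [p / overround for p in raw]     # step 2: normalize the overround away
print(overround, probs)
```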

As a functional test of any odds-to-probs conversion, take the listed odds in a zero-sum event and convert them to implied probabilities for each outcome. Then take those calculated probabilities and convert them back into their implied odds (with no house take). How do those implied odds compare to the listed odds?
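One way to run that round-trip test, shown here under the conventional conversion (probability = 1/(odds + 1), fair odds back out as 1/prob - 1); the nonlinear method would need its own inverse:

```python
def conventional_probs(odds):
    """Implied probabilities via 1/(odds+1), with the overround normalized."""
    raw = [1.0 / (o + 1.0) for o in odds]
    s = sum(raw)
    return [p / s for p in raw]

def fair_odds(probs):
    """Fractional odds with no house take: odds = 1/prob - 1."""
    return [1.0 / p - 1.0 for p in probs]

listed = [0.5, 2.0]                      # Patriots 1/2, Bills 2/1
probs = conventional_probs(listed)
implied = fair_odds(probs)
print(probs)    # roughly [0.667, 0.333]
print(implied)  # compare against the listed odds
```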