Betting Statistics - Calculation of odds accuracy - what am I measuring here?

#1
Hello folks!

Before anything, an important disclaimer: I am an amateur at statistics and I'm trying to learn on my own. Please bear with me if the questions below sound like utter nonsense to you guys:

In an experiment where Team A plays a game of any sport against Team B, both teams had their odds of winning calculated by a bookmaker. For the sake of argument, let's assume that these odds are expressed as a "chance of winning %" for each team, as represented by the decimals in the spreadsheet below:

[spreadsheet of win probabilities and match results omitted]
My intention is to find out by how much Team A's odds of winning are overrated or underrated. The initial idea was to do the following calculation:

Total number of games won minus the sum of all of Team A's win probabilities. In other words: the number of matches won or lost beyond the number of matches the team should have won or lost given the odds of winning.

Results are expressed as 0 or 1, with 0 being a loss and 1 a win.

For Match 1, for example: Team A was supposed to win only 35% of the time, but it won the match, so in hindsight the "correct" probability would have been 100%. Therefore, the odds were off by 65%: 1 - 0.35 = 0.65

For Match 2: the team was supposed to win 52% of the time, but it lost: 0 - 0.52 = -0.52

...and so on.

I assumed that:

● If the final result is NEGATIVE, it should mean that the team lost more than the odds indicated it should have. Therefore, the probabilities are OVERRATED (higher than they should be).

● If the final result is POSITIVE, it should mean that the team won more than the odds indicated it should have. Therefore, the probabilities are UNDERRATED (lower than they should be).

● If the final result is ZERO, it should mean that the odds are fair and completely correct.

After doing this calculation for all the matches: 8 - 7.41 = 0.59

Meaning that in this set, the team won 0.59 matches more than the odds indicated it should have. The odds are, therefore, underrated. If I divide the result (0.59) by the number of games, I assume I should be getting the average of how much the odds were wrong (positively or negatively) per game.
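For concreteness, here is a minimal sketch of that calculation in Python, using placeholder data (only the 0.35 and 0.52 from Matches 1 and 2 come from the example above; the rest is made up, since the spreadsheet isn't reproduced here):

```python
# Placeholder data: bookmaker's win probabilities for Team A and the results
# (1 = win, 0 = loss). Only the first two probabilities come from the post.
probs = [0.35, 0.52, 0.61, 0.48]
outcomes = [1, 0, 1, 1]

# Per-match difference: actual result minus stated probability
diffs = [o - p for o, p in zip(outcomes, probs)]

total_diff = sum(diffs)             # equals (total wins) - (sum of probabilities)
avg_diff = total_diff / len(probs)  # average signed "error" per game

print(diffs)       # e.g. [0.65, -0.52, ...]
print(total_diff)  # positive -> won more than expected (odds "underrated")
print(avg_diff)
```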

It feels to me like I am making a lot of dangerous assumptions without any strong argument for why they should work, since I don't have the necessary knowledge to validate these ideas. I am not sure if the calculation I am experimenting with holds any ground mathematically speaking.

Also, after doing more research, I found out about the Brier Score / Mean Squared Error, which apparently can also be used for this - but I am not sure whether those methods apply here either (they seem to measure something like the variance of the set, so the number can never be negative; what I am calculating seems to be different from that).
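To illustrate the difference, here is a hedged sketch (same placeholder data as above) that computes both the signed average difference and the Brier score. The Brier score squares each difference before averaging, so it can never be negative: it says how far off the probabilities were, but not in which direction. The signed average keeps the direction (overrated vs. underrated):

```python
# Placeholder data as before (not the actual spreadsheet values)
probs = [0.35, 0.52, 0.61, 0.48]
outcomes = [1, 0, 1, 1]
n = len(probs)

# Signed bias: can be negative (overrated) or positive (underrated)
bias = sum(o - p for o, p in zip(outcomes, probs)) / n

# Brier score: mean of squared differences, always >= 0,
# so it measures accuracy but loses the sign/direction
brier = sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / n

print(bias, brier)
```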

Could anyone please help me (in as much of an ELI5 way as you possibly can) by:

a) telling me whether what I am measuring with this calculation really is "by how much the odds of the team winning are wrong";
b) pointing out mistakes in this thought process;
c) offering better suggestions on how to proceed with this idea.

Thank you in advance, and pardon me for any mistakes.
 
#2
If you just want to see if A wins more games than expected, you could take the average chance of Team A winning (column A added up and divided by the number of games) and compare it to the actual percentage of games won.
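For example, a quick sketch of that comparison (placeholder numbers, not the actual spreadsheet):

```python
# Placeholder data: predicted win chances for Team A and actual results
probs = [0.35, 0.52, 0.61, 0.48]
outcomes = [1, 0, 1, 1]

avg_predicted = sum(probs) / len(probs)      # mean of column A
actual_rate = sum(outcomes) / len(outcomes)  # wins / games

print(avg_predicted, actual_rate)
# If actual_rate > avg_predicted, Team A won more often than the odds implied.
```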
 
#3
Hey CE479, thanks for the reply.

What I am aiming for here is to understand clearly by how much the given odds are underrated or overrated.

I believe that another way to find out whether the team is winning more games than expected is to calculate the Expected Value - but that is also not what I'm looking for here (even though my calculation produces something that, in my opinion, sort of resembles EV).