Hi Forum,

I'm working on a problem involving win rates and can't find a good way to calculate the standard error of my results.

I have two groups playing games against each other. I want to calculate the win rate for Team A (the WR of Team B is then 100% minus the WR of Team A). To do so, I observe a lot of matches, but far fewer than 1% of all matches.

Each observation gives me a result for a certain player covering the past few matches he played. The number of matches is usually fairly low (1-5), so an individual sample may look like 1:0, 0:1, 2:1 or something similar.

I calculate the total win rate by pooling all observations. For Team A this is:
WRa = sum of wins for Team A / (sum of wins for Team A + sum of losses for Team A)
and for Team B:
WRb = sum of wins for Team B / (sum of wins for Team B + sum of losses for Team B)
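
In Python, the pooling step looks roughly like this (just a minimal sketch; the name pooled_win_rate and the sample data are made up for illustration):

```python
def pooled_win_rate(observations):
    """Pool (wins, losses) pairs and return wins / (wins + losses)."""
    total_wins = sum(w for w, _ in observations)
    total_losses = sum(l for _, l in observations)
    return total_wins / (total_wins + total_losses)

team_a_obs = [(1, 0), (0, 1), (2, 1)]  # samples like the ones above
wr_a = pooled_win_rate(team_a_obs)     # 3 wins / 5 matches = 0.6
```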

Because WRa and WRb should always sum to 100%, I calculate
F = 100% / (WRa + WRb)
and rescale WRa = WRa * F and WRb = WRb * F whenever F is not equal to 1.
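
As a sketch, the rescaling is simply (the example rates here are made up):

```python
wr_a, wr_b = 0.51, 0.52          # example pooled rates that overshoot 100%
f = 1.0 / (wr_a + wr_b)          # f == 1 when the rates already sum to 100%
wr_a, wr_b = wr_a * f, wr_b * f  # now wr_a + wr_b == 1 exactly
```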

I hope this approach is right so far. Now I want to quantify the precision of my analysis. The only idea I have is to compute the standard error from the variance. Unfortunately, the individual samples consist of only a few matches, as mentioned above, so the individual win rates per sample are not Gaussian-distributed at all. For 10000 samples the distribution looks like:
~ 4500x 0:1 (= 0% WR)
~ 4500x 1:0 (= 100% WR)
~ 1000x something else

Most of my samples sit at the extremes, so I get a sigma of 46.7% when I treat this as a Gaussian distribution.

The final result is a standard error of 0.46% for 10000 samples and a WR of 48% to 52% for Team A against Team B.
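
In code, my error calculation looks roughly like this (a sketch with made-up sample counts that mimic the distribution above):

```python
import math
import statistics

# Fake data shaped like my observations: (wins, losses) per sample.
samples = [(0, 1)] * 4500 + [(1, 0)] * 4500 + [(2, 1)] * 1000

# Win rate of each individual sample, then sigma across all samples
# and the standard error of the mean.
per_sample_wr = [w / (w + l) for w, l in samples]
sigma = statistics.pstdev(per_sample_wr)      # ~0.48 for this mix
std_error = sigma / math.sqrt(len(samples))   # ~0.0048, i.e. ~0.48%
```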

Is this the correct way to calculate the error for my problem? Does anyone have a better idea?

Thank you in advance!