1. ## Game probability problem

Two armies of attackers and defenders do battle. If A = 1, then there is 1 attacking die; if A = 2, there are 2 attacking dice; if A >= 3, there are 3 attacking dice. If D = 1, there is 1 defending die; if D >= 2, there are 2 defending dice, where A and D are the numbers of attacking and defending armies. (You may recognize this as Risk, and you'd be right.)

Then, the highest-scoring attacking die is compared with the highest-scoring defending die: if the defender's score is greater than or equal to the attacker's, the defender wins the pair, and so 1 army is removed from the attacker (i.e. A drops to A - 1). If the attacker's score is greater, the attacker wins and 1 army is removed from the defender (D becomes D - 1). If there are 2 (or 3) attacking dice and 2 defending dice being rolled, then there will be a second pair as well, where the second-highest-scoring attacking die is compared with the second-highest-scoring defending die, with a loss for either side costing that side 1 army.

The probabilities of every 1-turn outcome are known, having been ascertained largely with the help of BGM in a previous thread.

We can write the probability that, when 1 attacking die is rolled and 2 defending dice are rolled, the attacker will win as P(Attacker wins 1, 1, 2). The probability that each side will win one pair when 3 attacking dice and 2 defending dice are rolled is P(Both win 1, 3, 2). In general, P(Outcome, Attacking Dice, Defending Dice) is my suggested notation.
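These one-turn probabilities can be recovered by brute-force enumeration of all possible dice rolls. Here is a minimal sketch (the function name and return format are mine, not from the thread; it assumes the standard Risk comparison rules described above):

```python
from itertools import product

def turn_outcome_probs(n_att, n_def):
    """Enumerate all dice rolls to get the one-turn outcome distribution.

    Returns a dict mapping (attacker_losses, defender_losses) to probability,
    for n_att attacking dice vs n_def defending dice (sorted highest against
    highest, defender wins ties).
    """
    counts = {}
    total = 0
    for att in product(range(1, 7), repeat=n_att):
        for dfn in product(range(1, 7), repeat=n_def):
            a = sorted(att, reverse=True)
            d = sorted(dfn, reverse=True)
            pairs = min(n_att, n_def)
            att_loss = sum(1 for i in range(pairs) if d[i] >= a[i])
            outcome = (att_loss, pairs - att_loss)
            counts[outcome] = counts.get(outcome, 0) + 1
            total += 1
    return {k: v / total for k, v in counts.items()}
```

For example, `turn_outcome_probs(1, 1)[(0, 1)]` gives the familiar 15/36 chance that a single attacking die beats a single defending die.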

Then, in terms of these different probabilities (suggest your own notation if you want, but I'd like it algebraic) - all their values are known to me, I can provide them if necessary - what is the probability of an attacking side with A attackers finally winning over the defending side with D defenders? And in such a battle, what is the expected number of losses the side with more armies would suffer?

2. ## Re: Game probability problem

Yes, I was right to guess in the previous post that you are talking about the board game Risk.

I think this problem can be analyzed using a Markov chain, with the states (d, a), where d is the number of defending armies and a is the number of attacking armies, 0 <= d <= D, 0 <= a <= A

(not all states are possible to reach of course)

The initial state is at (D, A), while in each turn it may have 3 different transitions, like (d, a) -> (d-2, a), (d, a) -> (d-1, a-1) and (d, a) -> (d, a-2) when d and a are sufficiently large.

The states (d, 0) and (0, a) are absorbing states. Now you just need to use standard matrix algebra to compute the absorbing probabilities for those states.

The overall winning probability for each side will just be the sum of the absorbing probabilities for that side's states. To calculate the expected value, you just use the absorbing probabilities as your pmf.

3. ## Re: Game probability problem

Sorry I just complicated the matter.

From a practical perspective, you can just start by calculating the required probability and expectation at the smallest states (1, 1), (1, 2) and (2, 1). Then you can calculate the larger states, (2, 2), etc., recursively by the law of total probability/expectation. So in general you can build a really large table for use. Actually you are doing the same thing as the previous post suggests, but the previous one is from a more theoretical point of view.

4. ## Re: Game probability problem

So let's say I have calculated P(1,1), P(1,2) and P(2,1) (where P(d, a) is P(defender number, attacker number)). These should be easy for me; I'll save them for the end. Then in terms of those, and all the probabilities we found in our previous thread, what recurrence relations in terms of d and a do we use to calculate the overall probability of the attacker winning?

5. ## Re: Game probability problem

OK, maybe I'll give a very simple example.

Let's say you know how to calculate P(end at (1, 0) | (1, 1))

and

P(end at (0, 1) | (1, 1))

The former and the latter are the probabilities that you end up at (1, 0) and (0, 1) respectively, given you are currently at (1, 1).

Now consider you are currently at (2, 2). Now you may want to calculate the probability of the possible endings: (2, 0), (1, 0), (0, 2), (0, 1).

You know that at (2, 2) there are 3 possible transitions

(2, 2) -> (0, 2), (2, 2) -> (1, 1), (2, 2) -> (2, 0)

and you already know how to calculate those transition probabilities. One good thing is that P(end at (1, 0) | (0, 2)) = 0 and P(end at (1, 0) | (2, 0)) = 0 respectively, as once the chain reaches (0, 2) or (2, 0), it is impossible to end at (1, 0) or (0, 1).

Next denote by P(end at (x, y) | (d, a)) the probability that the chain is eventually absorbed at state (x, y), given it is currently at state (d, a).

By law of total probability and the Markov property (this is called a first step analysis),

P(end at (1, 0) | (2, 2)) = P((2, 2) -> (1, 1)) P(end at (1, 0) | (1, 1)) + P((2, 2) -> (0, 2)) x 0 + P((2, 2) -> (2, 0)) x 0 = P((2, 2) -> (1, 1)) P(end at (1, 0) | (1, 1))

as the latter two terms vanish. And you know how to calculate this term.

Now the defenders win if the game ends at (2, 0) or (1, 0), so the probability is P(end at (2, 0) | (2, 2)) + P(end at (1, 0) | (2, 2)); and similarly the probability of the attacker winning is P(end at (0, 2) | (2, 2)) + P(end at (0, 1) | (2, 2)).

The expected number of defenders at the end is

2 P(end at (2, 0) | (2, 2)) + 1 x P(end at (1, 0) | (2, 2))

More generally, if you want to calculate P(end at (x, y) | (d, a)), using first step analysis we have

P(end at (x, y) | (d, a)) = P((d, a) -> (d-2, a)) P(end at (x, y) | (d-2, a)) + P((d, a) -> (d-1, a-1)) P(end at (x, y) | (d-1, a-1)) + P((d, a) -> (d, a-2)) P(end at (x, y) | (d, a-2))

You should know how to calculate the transition probabilities (because each is just the probability of the outcome of a turn); and the absorbing probabilities you just need to calculate recursively. Once you get all the probabilities, you can calculate the winning probability and expectation. Of course, note that the above transition holds for d >= 2 and a >= 2; once either of them reaches 1, you need to modify it slightly.
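This first-step recursion could be coded as a short memoized sketch. The names here are mine, and it assumes (as elsewhere in the thread) that a side with a armies rolls min(a, 3) attacking dice and a side with d armies rolls min(d, 2) defending dice:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def transition_probs(d, a):
    """One-turn loss distribution {(att_loss, def_loss): prob} from state (d, a),
    by enumerating all rolls of min(a, 3) attacking vs min(d, 2) defending dice."""
    n_att, n_def = min(a, 3), min(d, 2)
    counts = {}
    for roll in product(range(1, 7), repeat=n_att + n_def):
        att = sorted(roll[:n_att], reverse=True)
        dfn = sorted(roll[n_att:], reverse=True)
        pairs = min(n_att, n_def)
        a_loss = sum(1 for i in range(pairs) if dfn[i] >= att[i])
        key = (a_loss, pairs - a_loss)
        counts[key] = counts.get(key, 0) + 1
    total = 6 ** (n_att + n_def)
    return {k: v / total for k, v in counts.items()}

@lru_cache(maxsize=None)
def p_attacker_wins(d, a):
    """First-step analysis: condition on the first turn's outcome."""
    if d == 0:
        return 1.0   # absorbed: attacker has won
    if a == 0:
        return 0.0   # absorbed: defender has won
    return sum(p * p_attacker_wins(d - dl, a - al)
               for (al, dl), p in transition_probs(d, a).items())
```

The boundary states handle the "modify it slightly" cases automatically, since `transition_probs` only compares as many pairs as there are dice on the weaker side.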

6. ## Re: Game probability problem

Originally Posted by BGM
More generally, if you want to calculate P(end at (x, y) | (d, a)), using first step analysis we have

P(end at (x, y) | (d, a)) = P((d, a) -> (d-2, a)) P(end at (x, y) | (d-2, a)) + P((d, a) -> (d-1, a-1)) P(end at (x, y) | (d-1, a-1)) + P((d, a) -> (d, a-2)) P(end at (x, y) | (d, a-2))

You should know how to calculate the transition probabilities (because each is just the probability of the outcome of a turn); and the absorbing probabilities you just need to calculate recursively. Once you get all the probabilities, you can calculate the winning probability and expectation. Of course, note that the above transition holds for d >= 2 and a >= 2; once either of them reaches 1, you need to modify it slightly.
It looks like for each of the absorbing probabilities there will be threefold branching. Won't this make things pretty complicated?

7. ## Re: Game probability problem

Yes, it is quite complex. It looks like you are doing backward substitution to solve a large linear system. It certainly requires you to write a good algorithm to do this.

8. ## Re: Game probability problem

I see. Is there a non-recursive method of solving the problem using matrices or something like that? If so, what matrix do we write/solve?

The following paper may help us, though I can't say I understand much of the method.

9. ## Re: Game probability problem

Well, I have taken a quick look, and I think the paper is presenting the same general framework - first writing out the recursive relationship. Then the paper seems to contribute a method to solve the recursive relation in a neat way.

10. ## Re: Game probability problem

Since the paper appears to reach a general solution rather than just a threefold-branching expansion, I would like to look into the method used there.

Edit: Before we discuss this, it might be easier not to continue looking at this paper and to look at the one I linked in my post below instead, since that (seems to) offer a much quicker and more direct solution without all these strange undefined summations.

The best question would be: what is the general equation for the winning probability, in terms of the paper's intermediate quantities, or better still in full form? After all, the summation indices are ultimately dummy variables, but we first need to be able to write out the summand (equations 15+16) before we then sum over it (equation 17).

To break this down, the questions would be: firstly, what is the summand? Our first step should be to express that in terms of the dummy variables and M, N, m and n. Secondly, how do we quantitatively write the summation in equation 5, and shouldn't there be 3 summations, as there are 3 dummy variables (L, s, and k)? Finally, given that we can now write the summand in a fully understood form, how do we transform that into the overall winning probability?

11. ## Re: Game probability problem

This PDF (http://web.archive.org/web/200609192...earch/RISK.pdf) offers a similar approach, again, but appears to be slightly condensed. The section entitled "The Probability of Winning a Battle" and below is what is relevant to us; I believe the summations at the bottom of page 5 answer the question of how probable it is that an attacker will win a battle with a given number of attackers and defenders. The relevant probability is specified as a sum to infinity slightly higher up the page; so then, the question is, how do we calculate the terms of that sum, according to the equation at the bottom of page 4? (I don't understand how to calculate the probabilities it lists there.) The remaining symbols are dummy variables found on pages 4 and 5, and so we can compute solutions once we get the summand as a function of them. Or have I misinterpreted it?

It's probably best if we only look at one PDF or the other; I leave which one up to you, but it does look like these reports find mathematical solutions rather than purely algorithmic ones, so I would like to figure out how they calculate the probabilities in one of those reports mathematically. The second report seems to have found a simpler method, and the results are the same.

12. ## Re: Game probability problem

Can you help, for either PDF?

13. ## Re: Game probability problem

OK, both PDFs continue to outline the method I suggested in the previous posts and summarize it. At best I can try to work through the simple example with you, the calculation starting at (2, 2).

We have the following transition matrix with the six possible states, where the final 4 are absorbing states (the state positions are not important, but we put those absorbing states at the bottom for notational convenience). Ignore the notation in #5; I just arbitrarily rename those transition probabilities. With the states ordered (2,2), (1,1), (2,0), (1,0), (0,2), (0,1):

P =
[ 0  p1  p2  0   p3  0  ]
[ 0  0   0   q1  0   q2 ]
[ 0  0   1   0   0   0  ]
[ 0  0   0   1   0   0  ]
[ 0  0   0   0   1   0  ]
[ 0  0   0   0   0   1  ]

where p1 = P((2,2) -> (1,1)), p2 = P((2,2) -> (2,0)), p3 = P((2,2) -> (0,2)), q1 = P((1,1) -> (1,0)), q2 = P((1,1) -> (0,1)) are transition probabilities which you already know how to calculate.

In general you may list (D+1)(A+1) - 1 = 8 states in this case, but some states are not possible to reach (the states (2,1) and (1,2) in this case), so we may safely omit them. We have a -1 here because the state (0,0) does not exist.

Some background knowledge:
Since we have a deterministic initial state (2, 2), we can use

pi_0 = (1, 0, 0, 0, 0, 0)

to denote the probability distribution of the initial state. Then

pi_n = pi_0 P^n

will be the probability distribution of the states after n transitions. When n goes sufficiently large, we get the limiting distribution, which is what we want.

For a transition matrix with absorbing state(s), we may use block matrix notation to partition the matrix:

P =
[ Q  R ]
[ 0  I ]

where I is the identity matrix, with dimension the same as the number of absorbing states; 0 is a matrix with all entries 0; and Q is a square matrix of course.

With simple matrix multiplication, we have

P^n =
[ Q^n  (I + Q + ... + Q^(n-1)) R ]
[ 0    I                         ]

With the infinite geometric series formula

I + Q + Q^2 + ... = (I - Q)^(-1)

and the fact that Q^n -> 0 for the transient part, it suggests that, as n -> infinity,

P^n ->
[ 0  (I - Q)^(-1) R ]
[ 0  I              ]

(I have abused the notation here, as the dimensions of the two identity matrices are not necessarily equal; the new one here of course must match the dimension of Q.)

Therefore you just need to look at the matrix (I - Q)^(-1) R for the limiting distribution of the absorbing states; and the first-row entries will be the solution, given that the Markov chain surely begins at the first state.

Ok back to the problem. In this example we have

Q =
[ 0  p1 ]
[ 0  0  ]

R =
[ p2  0   p3  0  ]
[ 0   q1  0   q2 ]

and you may calculate (I - Q)^(-1) R accordingly. One special feature of this game, different from a general absorbing Markov chain, is that this game always ends within a finite number of transitions; since at least one army is removed each turn, at most it will take

D + A - 1

turns to end the game (you may refine this bound by considering odd/even numbers, since two armies are removed per turn while both sides have at least 2).

In this example D = A = 2, so the game ends at most at turn 2.

So you may not need to find the matrix inversion; instead you may just find the first row of

(I + Q) R

(with Q^k = 0 for k >= 2)

After extracting the first row, you can calculate the probabilities of winning and the expectation as before. In this example, the limiting distribution over the absorbing states (2,0), (1,0), (0,2), (0,1) will be

(p2, p1 q1, p3, p1 q2)

To obtain the expectation for each side, we multiply by "the matrix of the absorbing states":

(p2, p1 q1, p3, p1 q2)
[ 2  0 ]
[ 1  0 ]
[ 0  2 ]
[ 0  1 ]
= (2 p2 + p1 q1, 2 p3 + p1 q2)

giving the expected numbers of remaining defenders and attackers at the end. And the winning probabilities of each side will be

(p2, p1 q1, p3, p1 q2)
[ 1  0 ]
[ 1  0 ]
[ 0  1 ]
[ 0  1 ]
= (p2 + p1 q1, p3 + p1 q2)

where those 1's are non-zero indicators of the absorbing states belonging to each side.

One more good thing is that with this approach, you will also calculate the solution if the game begins at (1, 1). In that case you just extract the 2nd row and do the same calculation. So if you list more states than required, that is also OK.
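To make the worked example concrete numerically, here is a sketch with numpy, plugging in the standard one-turn Risk dice probabilities for 2-vs-2 dice and 1-vs-1 dice (these are the usual published values; substitute your own if they differ):

```python
import numpy as np

# Transition probabilities for the (2,2) example, standard Risk dice values:
p1 = 420 / 1296   # (2,2) -> (1,1): each side loses one army
p2 = 581 / 1296   # (2,2) -> (2,0): attacker loses both pairs
p3 = 295 / 1296   # (2,2) -> (0,2): defender loses both pairs
q1 = 21 / 36      # (1,1) -> (1,0): defender wins the single pair
q2 = 15 / 36      # (1,1) -> (0,1): attacker wins the single pair

# States ordered (2,2), (1,1) | (2,0), (1,0), (0,2), (0,1)
Q = np.array([[0.0, p1],
              [0.0, 0.0]])
R = np.array([[p2, 0.0, p3, 0.0],
              [0.0, q1, 0.0, q2]])

# Absorption distribution: B = (I - Q)^(-1) R; first row = start at (2,2)
B = np.linalg.inv(np.eye(2) - Q) @ R
defender_wins = B[0, 0] + B[0, 1]            # end at (2,0) or (1,0)
attacker_wins = B[0, 2] + B[0, 3]            # end at (0,2) or (0,1)
expected_defenders = 2 * B[0, 0] + 1 * B[0, 1]  # remaining defender armies
```

The first row of B works out to (p2, p1 q1, p3, p1 q2), exactly as in the algebra above.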

14. ## Re: Game probability problem

So we want the first row of the matrix (I - Q)^(-1) R.

And this first row is our probability distribution, of which we sum up the probabilities of the states (d, 0) with d > 0 to find the probability of the defender winning, for notation (D,A) where D is number of defenders and A is number of attackers.

Then I have only 2 questions for now.

Firstly, is the method starting from the initial state (D,A) identical to the one you listed, except that it will have an arbitrary number of states (8 for the (2,2) case you solved; however, you left out the (2,1) and (1,2) rows, as they cannot be reached from (2,2), only because you wanted to shorten the solution here, i.e. it would not have affected the solution to include them, and in my method I will have to include all states)?

Secondly, what is the exact method for deducing the matrices Q, I and R from the general matrix P that I write for (D,A) in the same way as you wrote for (2,2)?

Following the writing of matrices Q, I and R from P for (D,A), I would then perform the combination listed above, and from the first row, pertaining to probabilities that start from initial state (D,A): P(Defender wins, given (D,A) initial state) = sum of probabilities for states in the first row which correspond to states (d,0) where d is greater than 0, and P(Attacker wins, given (D,A) initial state) = sum of probabilities for states in the first row which correspond to states (0,a) where a is greater than 0.
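This whole procedure can be sketched as one general function. Everything here is a hypothetical naming of mine: `turn_probs(d, a)` is a caller-supplied callback returning the one-turn loss distribution {(attacker losses, defender losses): probability} at a transient state (d, a):

```python
import numpy as np

def battle_distribution(D, A, turn_probs):
    """Absorption distribution for a battle starting at (D, A).

    Lists every grid state except (0,0), puts the absorbing states at the
    bottom, partitions the transition matrix into Q and R, and solves
    (I - Q) B = R for the absorption probabilities. Returns a dict mapping
    each absorbing state (d, 0) or (0, a) to its probability.
    """
    states = [(d, a) for d in range(D + 1) for a in range(A + 1)
              if (d, a) != (0, 0)]
    transient = [(d, a) for (d, a) in states if d > 0 and a > 0]
    absorbing = [(d, a) for (d, a) in states if d == 0 or a == 0]
    order = transient + absorbing              # absorbing states at the bottom
    idx = {s: i for i, s in enumerate(order)}
    n, k = len(order), len(absorbing)

    P = np.zeros((n, n))
    for (d, a) in transient:
        for (al, dl), p in turn_probs(d, a).items():
            P[idx[(d, a)], idx[(d - dl, a - al)]] += p
    for s in absorbing:
        P[idx[s], idx[s]] = 1.0                # absorbing: stay put

    Q, R = P[:n - k, :n - k], P[:n - k, n - k:]
    B = np.linalg.solve(np.eye(n - k) - Q, R)  # (I - Q)^(-1) R
    start = idx[(D, A)]                        # row for the initial state
    return {s: B[start, i] for i, s in enumerate(absorbing)}
```

Summing the returned probabilities over states (d, 0) with d > 0 gives P(Defender wins), and over (0, a) with a > 0 gives P(Attacker wins), exactly as described above.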

15. ## Re: Game probability problem

1) Including non-reachable states is OK - it just lengthens your calculations. It will not affect your final solution. So for a computer it may be easier to list all the rectangular grid points (d, a), 0 <= d <= D, 0 <= a <= A, excluding (0, 0).

2) I think it is not difficult to list all the states. At most transient states there are only 3 possible transitions, and the transition probabilities are easy to obtain.

As long as you put all the absorbing states at the bottom, you will be fine. I think it is not that hard to figure out - you just partition the transition matrix according to the number of absorbing states.

In the previous example, there are k = 4 absorbing states, so the identity matrix I is 4 x 4.

Q is always a square matrix, with dimension (n - k) x (n - k) when there are a total of n states listed.

As a result, the dimension of R is (n - k) x k. Its number of rows must agree with that of Q, and its number of columns with that of I.
