
Thread: Game probability problem

  1. #1

    Game probability problem




    Two armies of n attackers and m defenders do battle. If n=1, then there is 1 attacking die; if n=2 there are 2 attacking dice; if n \geq 3 there are 3 attacking dice. If m=1 there is 1 defending die; if m \geq 2 there are 2 defending dice. (You may recognize this as Risk, and you'd be right.)

    Then, the highest-scoring attacking die is compared with the highest-scoring defending die: if the defender's score is greater than or equal to the attacker's, the defender wins the pair, and so 1 army is removed from the attacker (i.e. n drops to n-1). If the attacker's score is greater, the attacker wins and 1 army is removed from the defender (m becomes m-1). If there are 2 (or 3) attacking dice and 2 defending dice being rolled, then there will be a second pair as well, where the second-highest scoring attacking die is compared with the second-highest scoring defending die, with a loss for either side costing that side 1 army.

    The probabilities of every 1 turn outcome are known, having been ascertained largely with the help of BGM in this thread:

    http://www.talkstats.com/showthread....bility-problem

    We can write the probability that, when 1 attacking die and 2 defending dice are rolled, the attacker will win as P(Attacker wins 1, 1, 2). The probability that each side will win one pair when 3 attacking dice and 2 defending dice are rolled is P(Both win 1, 3, 2). In general, P(Outcome, Attacking Dice, Defending Dice) is my suggested notation.
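
    For reference, these one-turn probabilities can be reproduced by brute-force enumeration of the dice rolls. A minimal sketch (the function name turn_outcome_probs and its (attacker losses, defender losses) output format are just my own illustration):

    ```python
    from itertools import product
    from fractions import Fraction

    def turn_outcome_probs(a_dice, d_dice):
        """P((attacker losses, defender losses)) for one turn with the given dice counts."""
        counts, total = {}, 0
        for roll in product(range(1, 7), repeat=a_dice + d_dice):
            attack = sorted(roll[:a_dice], reverse=True)
            defend = sorted(roll[a_dice:], reverse=True)
            # Compare the highest pair, then the second pair if both sides rolled
            # at least 2 dice; the defender wins ties.
            a_loss = sum(d >= a for a, d in zip(attack, defend))
            d_loss = min(a_dice, d_dice) - a_loss
            counts[(a_loss, d_loss)] = counts.get((a_loss, d_loss), 0) + 1
            total += 1
        return {k: Fraction(v, total) for k, v in counts.items()}

    # e.g. P(Attacker wins 1, 1, 2) in the notation above:
    print(turn_outcome_probs(1, 2)[(0, 1)])   # 55/216
    ```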

    Then, in terms of these probabilities (suggest your own notation if you want, but I'd like it algebraic; all their values are known to me, and I can provide them if necessary), what is the probability of an attacking side with n attackers finally winning over the defending side with m defenders? And in such a battle, what is the expected number of losses for the side with more armies?

  2. #2
    TS Contributor

    Re: Game probability problem

    Yes, I was right when I guessed in the previous thread that you are talking about the board game Risk.

    I think this problem can be analyzed using a Markov chain, with the states

    (x, y), x \in \{0, 1, \ldots, m\}, y \in \{0, 1, \ldots, n\}
    (not all (m + 1) \times (n + 1) states are possible to reach of course)

    The initial state is (m, n), and in each turn there can be 3 different transitions, namely (m, n) \to (m - 2, n), (m, n) \to (m - 1, n - 1) and (m, n) \to (m, n - 2), when m, n are sufficiently large.

    The states (x, 0) and (0, y) are absorbing states. Now you just need to use standard matrix algebra to compute the absorbing probabilities for those states.

    The overall winning probability of each side is just the sum of the absorbing probabilities of that side's absorbing states. To calculate the expected values, you just use the absorbing probabilities as your pmf.
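
    As a rough illustration of the state space and its transitions (a sketch only; the (defenders, attackers) ordering and the function name are mine, not anything standard):

    ```python
    def next_states(defenders, attackers):
        """Reachable states after one turn from (defenders, attackers).
        Each turn compares min(attacker dice, defender dice) <= 2 pairs of dice,
        and every compared pair costs one side exactly one army."""
        pairs = min(min(attackers, 3), min(defenders, 2))
        return [(defenders - d_loss, attackers - (pairs - d_loss))
                for d_loss in range(pairs + 1)]

    print(next_states(5, 5))   # [(5, 3), (4, 4), (3, 5)]
    print(next_states(1, 5))   # [(1, 4), (0, 5)]
    ```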

  3. #3
    TS Contributor

    Re: Game probability problem

    Sorry I just complicated the matter.

    From a practical perspective, you can start by calculating the required probability and expectation at (m, n) = (1, 1), (1, 2), (2, 1). Then you can calculate (2, 2), (2, 3), (3, 2), etc. recursively by the law of total probability/expectation, so in general you can build up a really large table for later use. This is actually the same thing as the suggestion in my previous post, just from a practical rather than theoretical point of view.
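
    As an illustration only, a sketch of that table-building (it reuses the turn_outcome_probs enumeration sketched in the first post, and the (defenders, attackers) indexing is just my convention):

    ```python
    # Table of P(attacker eventually wins | d defenders, a attackers), filled
    # bottom-up by the law of total probability.  Assumes turn_outcome_probs
    # from the sketch in the first post.
    M, N = 10, 10                      # table size: up to 10 defenders / 10 attackers
    win = {}
    for d in range(M + 1):
        for a in range(N + 1):
            if a == 0:
                win[d, a] = 0          # no attackers left: the defender has won
            elif d == 0:
                win[d, a] = 1          # no defenders left: the attacker has won
            else:
                a_dice, d_dice = min(a, 3), min(d, 2)
                win[d, a] = sum(p * win[d - d_loss, a - a_loss]
                                for (a_loss, d_loss), p
                                in turn_outcome_probs(a_dice, d_dice).items())

    print(win[3, 2])                   # P(attacker wins | 3 defenders, 2 attackers)
    ```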

  4. #4

    Re: Game probability problem

    So let's say I have calculated (where P(m,n) is P(defender number, attacker number)) P(1,1)=p_1, P(1,2)=p_2, P(2,1)=p_3. These should be easy for me; I'll save them for the end. Then, in terms of p_1, p_2, p_3 and all the probabilities we found in our previous thread, what recurrence relations in n and m do we use to calculate the overall probability of the attacker winning?

  5. #5
    TS Contributor

    Re: Game probability problem

    OK, maybe I should give a very simple example.

    Let's say you know how to calculate

    P((0, 1)|(1, 1)) and P((1, 0)|(1, 1))

    These are the probabilities that you end up at (0, 1) and (1, 0), respectively, given that you are currently at (1, 1).

    Now suppose you are currently at (2, 2) and you want to calculate the probabilities of the possible endings:

    p_1 = P((0, 1)|(2, 2))
    p_2 = P((0, 2)|(2, 2))
    p_3 = P((1, 0)|(2, 2))
    p_4 = P((2, 0)|(2, 2))

    You know that at (2, 2) there are 3 possible transitions

    (2, 2) \to (0, 2)
    (2, 2) \to (1, 1)
    (2, 2) \to (2, 0)

    and you already know how to calculate those transition probabilities. One nice thing is that p_2 = P((2, 2) \to (0, 2)) and p_4 = P((2, 2) \to (2, 0)), because once the chain has reached (1, 1) it can no longer end at (0, 2) or (2, 0).

    Next denote p_5 = P((2, 2) \to (1, 1))

    By the law of total probability and the Markov property (this is called first-step analysis),

    p_1 = P((0, 1)|(2, 2))

    = P((0, 1)|(1, 1))P((2, 2) \to (1, 1)) + P((0, 1)|(0, 2))P((2, 2) \to (0, 2)) + P((0, 1)|(2, 0))P((2, 2) \to (2, 0))

    = P((0, 1)|(1, 1))p_5

    as the latter two terms vanish. And you know how to calculate this term.

    Now the defender wins if the game ends at (1, 0) or (2, 0), so that probability is p_3 + p_4; similarly, the probability that the attacker wins is p_1 + p_2 = 1 - (p_3 + p_4)

    The expected number of attackers remaining at the end is 0 \times (p_3 + p_4) + 1 \times p_1 + 2 \times p_2 (and similarly the expected number of defenders remaining is 1 \times p_3 + 2 \times p_4)


    More generally, if you want to calculate P((x, 0)|(m, n)), first-step analysis gives

    P((x, 0)|(m, n))
    = P((x, 0)|(m-1,n-1))P((m, n) \to (m-1,n-1))
    + P((x, 0)|(m,n-2))P((m, n) \to (m,n-2))
    + P((x, 0)|(m-2,n))P((m, n) \to (m-2,n))

    You should know how to calculate the transition probabilities (each is just the probability of a one-turn outcome), and the absorbing probabilities you calculate recursively. Once you have all the probabilities, you can compute the winning probability and the expectation. Note that the recursion above holds for m, n \geq 2; once either of them reaches 1 you need to modify it slightly.
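
    If it helps, here is a rough sketch of that first-step recursion as code (again it assumes the turn_outcome_probs enumeration from the first post, so treat it as an illustration rather than a tested program):

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def ending_distribution(defenders, attackers):
        """P(absorbing state | current state (defenders, attackers)) by first-step analysis."""
        if defenders == 0 or attackers == 0:
            return {(defenders, attackers): 1}       # already absorbed
        a_dice, d_dice = min(attackers, 3), min(defenders, 2)
        dist = {}
        for (a_loss, d_loss), p in turn_outcome_probs(a_dice, d_dice).items():
            for end, q in ending_distribution(defenders - d_loss, attackers - a_loss).items():
                dist[end] = dist.get(end, 0) + p * q
        return dist

    dist = ending_distribution(2, 2)                 # the p_1, ..., p_4 above
    p_attacker_wins = sum(p for (d, a), p in dist.items() if d == 0)
    expected_attackers_left = sum(a * p for (d, a), p in dist.items())
    ```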

  6. #6

    Re: Game probability problem

    Thanks for the reply.

    Quote Originally Posted by BGM
    More generally, if you want to calculate P((x, 0)|(m, n)), first-step analysis gives

    P((x, 0)|(m, n))
    = P((x, 0)|(m-1,n-1))P((m, n) \to (m-1,n-1))
    + P((x, 0)|(m,n-2))P((m, n) \to (m,n-2))
    + P((x, 0)|(m-2,n))P((m, n) \to (m-2,n))

    You should know how to calculate the transition probabilities (each is just the probability of a one-turn outcome), and the absorbing probabilities you calculate recursively. Once you have all the probabilities, you can compute the winning probability and the expectation. Note that the recursion above holds for m, n \geq 2; once either of them reaches 1 you need to modify it slightly.
    It looks like for each of the absorbing probabilities there will be threefold branching. Won't this make things pretty complicated?

  7. #7
    TS Contributor

    Re: Game probability problem

    Yes, it is quite complex. It is like doing backward substitution to solve a large linear system, and it certainly requires you to write a good algorithm to do this.

  8. #8

    Re: Game probability problem

    I see. Is there a non-recursive method of solving the problem using matrices or something like that? If so, what matrix do we write/solve?

    The following paper may help us, though I can't say I understand much of the method.
    Attached Images
    Last edited by GameGod; 05-30-2013 at 04:51 PM.

  9. #9
    TS Contributor

    Re: Game probability problem

    Well, I have taken a quick look and I think the paper presents the same general framework, first writing out the recursive relationship. The paper's contribution seems to be a neat method for solving the recursive relation.

  10. #10

    Re: Game probability problem

    Since the paper appears to reach a general solution rather than just a threefold-branching expansion, I would like to look into the method used there.

    Edit: Before we discuss this, it might be easier not to continue with this paper and to look instead at the one I linked in my post below, since that one seems to offer a much quicker and more direct solution without all these strange undefined summations.

    The best question would be: what is the general equation for P*(M,N,m,n), in terms of P_{m,n}, or better still in full form? After all, m and n are ultimately dummy variables, but we first need to be able to write out P*(M,N,m,n) (equations 15 and 16) before we sum over all m, n (equation 17).

    To break this down, the questions would be: firstly, what is \delta_{m,M+N-n-2s}? Our first step should be to express that in terms of the dummy variables and M, N, m and n. Secondly, how do we write out the summation in equation 5 explicitly, and shouldn't there be 3 summations, since there are 3 dummy variables (L, s, and k)? Finally, given that we can then write P_{m,n} in a fully understood form, how do we transform that into P*(M,N,m,n)?
    Last edited by GameGod; 05-31-2013 at 01:32 PM.

  11. #11

    Re: Game probability problem

    This PDF (http://web.archive.org/web/200609192...earch/RISK.pdf) offers a similar approach again, but appears to be slightly more condensed. The section entitled "The Probability of Winning a Battle" and everything below it is what is relevant to us; I believe the summations at the bottom of page 5 answer the question of how probable it is that an attacker will win a battle with A attackers and D defenders. f_{AD,j} is specified as a sum to infinity slightly higher up the page; so the question is, how do we calculate f^n_{ij} according to the equation at the bottom of page 4? (I don't understand how to calculate the probabilities it lists there.) Then i=A*D, and n and j are dummy variables found on pages 4 and 5, so we can compute solutions once we get f^n_{ij} as a function of i, j and n. Or have I misinterpreted it?

    It's probably best if we only look at one PDF or the other; I leave which one up to you, but it does look like these reports find mathematical solutions rather than purely algorithmic ones, so I would like to figure out how the probabilities are calculated in one of them mathematically. The second report seems to have found a simpler method, and the results are the same.
    Last edited by GameGod; 05-31-2013 at 01:29 PM.

  12. #12

    Re: Game probability problem

    Can you help, for either PDF?

  13. #13
    TS Contributor

    Re: Game probability problem

    OK, both PDFs continue to outline the method I suggested in the previous posts and summarize it. At best I can try to work through the simple example with you, with the calculation starting from (2, 2).

    We have the following transition matrix \mathbf{P} over the six possible states, of which the final 4 are absorbing (the ordering of the states is not important, but we put the absorbing states at the bottom for notational convenience). Ignore the notation in #5; here I have just renamed the transition probabilities arbitrarily.

    \begin{matrix} (2,2) \\ (1,1) \\ (0,1) \\ (0, 2) \\ (1,0) \\ (2,0)\end{matrix}\begin{bmatrix}0 & p_1 & 0 & p_2 & 0 & p_3 \\0 & 0 & p_4 & 0 & p_5 & 0 \\0 & 0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 & 0 \\0 & 0 & 0 & 0 & 1 & 0 \\0 & 0 & 0 & 0 & 0 & 1\end{bmatrix}

    where p_i, i = 1, 2, \ldots, 5 are transition probabilities which you already know how to calculate.

    In general you may list (m + 1)(n + 1) - 1 states (i.e. (2 + 1)(2 + 1) - 1 = 8 in this case), but some states are not reachable (here, (1, 2) and (2, 1)), so we may safely omit them. The -1 is because the state (0, 0) cannot occur.

    Some background knowledge:
    Since we have a deterministic initial state (2, 2), we can use

    \mathbf{e}_1 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}

    to denote the probability distribution of the initial state. Then

    \mathbf{e}_1 \mathbf{P}^n

    will be the probability distribution over the states after n transitions. As n becomes sufficiently large, we get the limiting distribution that we want.

    For a transition matrix with absorbing state(s), we may use the block matrix notation to partition the matrix:

    \mathbf{P} = \begin{bmatrix} \mathbf{Q} & \mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{bmatrix}

    where \mathbf{I} is the identity matrix whose dimension equals the number of absorbing states, \mathbf{0} is a matrix of zeros, and \mathbf{Q} is of course a square matrix.

    With simple matrix multiplication, we have

    \mathbf{P}^2 = \begin{bmatrix} \mathbf{Q}^2 & \mathbf{QR+R} \\ \mathbf{0} & \mathbf{I} \end{bmatrix},\mathbf{P}^n = \begin{bmatrix} \mathbf{Q}^n & \sum_{k=0}^{n-1} \mathbf{Q}^k\mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{bmatrix}

    With the infinite geometric series formula

    \sum_{k=0}^{+\infty} q^k = (1 - q)^{-1}, |q| < 1, this suggests that

    \lim_{n\to+\infty} \mathbf{P}^n = \begin{bmatrix} \mathbf{0} & (\mathbf{I-Q})^{-1}\mathbf{R} \\ \mathbf{0} & \mathbf{I} \end{bmatrix}

    (I have abused notation here, as the dimensions of the two identity matrices are not necessarily equal; the new one must of course match the dimension of \mathbf{Q}.)

    Therefore you just need to look at the matrix (\mathbf{I-Q})^{-1}\mathbf{R} for the limiting distribution over the absorbing states; the entries of its first row are the solution, since the Markov chain surely begins at the first state.

    OK, back to the problem. In this example we have

    \mathbf{Q} = \begin{bmatrix} 0 & p_1 \\ 0 & 0 \end{bmatrix},\mathbf{R} = \begin{bmatrix} 0 & p_2 & 0 & p_3 \\ p_4 & 0 & p_5 & 0 \end{bmatrix}

    and you may calculate accordingly. One special feature of this game, different from a general absorbing Markov chain, is that it always ends within a finite number of transitions; it will take at most

    t = \left\lceil \frac {1} {2} \min\{m, n\}\right\rceil + \max\{m,n\}

    turns to end the game (you may refine this bound by considering odd/even numbers).

    In this example \mathbf{Q}^2 = \mathbf{0}, so the game ends after at most 2 turns.

    \mathbf{P}^2 = \mathbf{P}^3 = \mathbf{P}^4 = \ldots = \begin{bmatrix}0 & 0 & p_1p_4 & p_2 & p_1p_5 & p_3 \\0 & 0 & p_4 & 0 & p_5 & 0 \\0 & 0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 1 & 0 & 0 \\0 & 0 & 0 & 0 & 1 & 0 \\0 & 0 & 0 & 0 & 0 & 1\end{bmatrix}

    So you may not need to compute the matrix inverse; instead you may just take the first row of

    \sum_{k=0}^{t-1} \mathbf{Q}^k \mathbf{R} (with \mathbf{Q}^0 = \mathbf{I})

    as your probability distribution.

    After extracting the first row, you can calculate the probabilities of winning and the expectation as before. In this example the limiting distribution will be

    \pi = \begin{bmatrix} p_1p_4 & p_2 & p_1p_5 & p_3 \end{bmatrix}

    To obtain the expected number of remaining armies for each side, we multiply by "the matrix of the absorbing states":

    \pi \begin{bmatrix} 0 & 1 \\ 0 & 2 \\ 1 & 0 \\ 2 & 0 \end{bmatrix}

    And the winning probabilities of each side will be

    \pi \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 1 & 0 \\ 1 & 0 \end{bmatrix}

    where the 1's indicate the absorbing states in which each side wins.

    One more nice thing is that with this approach you also get the solution when the game begins at (1, 1): in that case you just extract the 2nd row and do the same calculation. So it is also fine if you list more states than required.
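
    To make the (2, 2) example fully concrete, here is a rough numpy sketch (the numbers plugged in for p_1, ..., p_5 are the usual one-turn dice probabilities for 2 dice against 2 and 1 die against 1, which should match the values from the earlier thread; treat them as placeholders if yours differ):

    ```python
    import numpy as np

    # One-turn transition probabilities from (2,2) and (1,1): the usual values for
    # 2 attacking dice vs 2 defending dice and for 1 die vs 1 die.
    p1, p2, p3 = 420/1296, 295/1296, 581/1296   # (2,2) -> (1,1), (0,2), (2,0)
    p4, p5     = 15/36, 21/36                   # (1,1) -> (0,1), (1,0)

    # States in the order (2,2), (1,1), (0,1), (0,2), (1,0), (2,0); last four absorbing.
    P = np.array([[0, p1, 0,  p2, 0,  p3],
                  [0, 0,  p4, 0,  p5, 0 ],
                  [0, 0,  1,  0,  0,  0 ],
                  [0, 0,  0,  1,  0,  0 ],
                  [0, 0,  0,  0,  1,  0 ],
                  [0, 0,  0,  0,  0,  1 ]])

    Q, R = P[:2, :2], P[:2, 2:]                  # partition into transient / absorbing blocks
    B = np.linalg.solve(np.eye(2) - Q, R)        # (I - Q)^{-1} R
    pi = B[0]                                    # start at (2,2): take the first row

    print("P(attacker wins) =", pi[0] + pi[1])   # absorbed at (0,1) or (0,2)
    print("P(defender wins) =", pi[2] + pi[3])   # absorbed at (1,0) or (2,0)
    print("E[attackers left] =", pi @ np.array([1, 2, 0, 0]))
    print("E[defenders left] =", pi @ np.array([0, 0, 1, 2]))
    ```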

  14. #14

    Re: Game probability problem

    So we want the first row of the matrix

    \mathbf{I} \mathbf{R} + \sum_{k=1}^{t-1} \mathbf{Q}^k \mathbf{R}

    And this first row is our probability distribution; we sum the probabilities of the states of the form (d, 0) with d > 0 to find the probability of the defender winning, using the notation (D, A) where D is the number of defenders and A is the number of attackers.

    Then I have only 2 questions for now.

    Firstly, is the method starting from the initial state (D, A) identical to the one you listed, except that it will have an arbitrary number of states (8 for the (2, 2) case you solved)? I take it you left out the (2, 1) and (1, 2) rows only because they cannot be reached from (2, 2) and you wanted to shorten the solution here, i.e. including them would not have affected the solution, whereas in my method I will have to include all states.

    Secondly, what is the exact method for deducing the matrices Q, I and R from the general matrix P that I write for (D,A) in the same way as you wrote for (2,2)?

    After writing the matrices Q, I and R from the P that I build for (D, A), in the same way as you did for (2, 2), I would then form the combination listed above. From its first row, which pertains to probabilities starting from the initial state (D, A): P(Defender wins, given (D, A) initial state) = the sum of the first-row probabilities corresponding to states (d, 0) with d greater than 0, and P(Attacker wins, given (D, A) initial state) = the sum of the first-row probabilities corresponding to states (0, a) with a greater than 0.
    Last edited by GameGod; 06-07-2013 at 12:41 PM.

  15. #15
    TS Contributor

    Re: Game probability problem


    1) Including non-reachable states is OK; it just lengthens your calculations and will not affect your final solution. For a computer it may be easier to simply list all (m + 1) \times (n + 1) rectangular grid points.

    2) I think it is not difficult to list all the states. At most transient states there are only 3 possible transitions, and the transition probabilities are easy to obtain.

    As long as you put all the absorbing states at the bottom, you will be fine. It is not hard to figure out \mathbf{Q} and \mathbf{R}: you just partition the transition matrix according to the number of absorbing states.

    In the previous example, there are 4 absorbing states, so the identity matrix is 4 \times 4.

    \mathbf{Q} is always a square matrix, with dimension (6 - 4) \times (6 - 4) = 2 \times 2 as there are a total of 6 states listed.

    As a result, the dimension of \mathbf{R} is 2 \times 4. It must agree with the number of rows of \mathbf{Q} and the number of columns of \mathbf{I}.
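
    In code the partitioning is just array slicing once the absorbing states are listed last; a minimal generic sketch (the function name is my own):

    ```python
    import numpy as np

    def absorption_probs(P, n_absorbing):
        """Rows of (I - Q)^{-1} R for a transition matrix P (numpy array)
        whose absorbing states are listed last."""
        t = P.shape[0] - n_absorbing              # number of transient states
        Q, R = P[:t, :t], P[:t, t:]               # Q is t x t, R is t x n_absorbing
        return np.linalg.solve(np.eye(t) - Q, R)  # row i = absorption distribution from state i

    # For the 6-state example in #13: absorption_probs(P, n_absorbing=4)[0]
    ```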
