Random walk on a clock

#1
Consider the numbers 1, 2, ..., 12 written around a ring, as they usually are on a clock. Consider a Markov chain that at each step jumps with equal probability to one of the two adjacent numbers. Denote this Markov chain by X_{n}.

What is the probability that X_{n} visits all the other states before returning to its starting position?


* First approach:

I think we may assume WLOG that the starting position is 1 (the probability we want to compute is invariant under changing the starting position, by symmetry).

Define for every state j = 1, 2, ..., 12 an r.v. T_{j} := min{ n > 0 | X_{n} = j }. This is the first time after time 0 that the Markov chain visits state j.


I think the probability we need to compute is then

P{ max(T_{2}, T_{3},...,T_{12}) < T_1 | X_{0} = 1 }.

But how do you calculate this quantity?

* Second approach (more intuitive). The probability asked for is the probability that you can go "around to the right" in 12 steps (exactly 10 possibilities), plus the probability that you can go around to the right in 14 steps, plus the probability that you can do it in 16 steps, and so on.

The problem is: consider for example the case of 14 steps. It is hard to count in how many ways this can be done, once you think about it.


I therefore think my first approach is probably the best, but how do you tackle this kind of problem?
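As a sanity check on the first formulation, the event max(T_{2}, ..., T_{12}) < T_{1} can at least be estimated by straightforward simulation. A quick Python sketch (function name and parameters are mine, not from the problem statement), which just runs the walk until its first return to the start and checks whether every state was seen:

```python
import random

def visits_all_before_return(n=12, trials=200_000, seed=2024):
    """Estimate P{ max(T_2, ..., T_n) < T_1 | X_0 = 1 } for a simple
    random walk on a ring of n states (states relabelled 0, ..., n-1)."""
    random.seed(seed)
    successes = 0
    for _ in range(trials):
        pos = 0
        visited = [False] * n
        visited[0] = True
        seen = 1                      # number of distinct states visited
        while True:
            pos = (pos + random.choice((-1, 1))) % n
            if not visited[pos]:
                visited[pos] = True
                seen += 1
            if pos == 0:              # first return to the start
                successes += (seen == n)
                break
    return successes / trials

print(visits_all_before_return())     # prints an estimate around 0.09
```

This only gives a numerical estimate, of course, not the exact answer asked for above.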


Thanks for your help!
 

BGM

TS Contributor
#2
I think you can simplify your problem a bit by considering the following
Markov chain X with the 13 states \( \{0, 1, 2, ..., 12\} \):

\( \Pr\{X_{k+1} = 0|X_{k} = 0\} = \Pr\{X_{k+1} = 12|X_{k} = 12\} = 1 \),
i.e. states 0 and 12 are both absorbing;

\( \Pr\{X_{k+1} = i - 1|X_{k} = i\} = \Pr\{X_{k+1} = i + 1|X_{k} = i\}
= \frac {1} {2}, ~ i = 1, 2, ..., 11, \)
i.e. a simple random walk in the interior;

\( X_{0} = 1 \) surely.

Absorption in state 0 means that you return to the original position without
having visited all the other states, while absorption in state 12 means that
you have already visited all the other states.

We start in state 1 because it represents the first adjacent state reached
from the original position.

So you want to calculate the probability of absorption in state 12.

Since in this case \( p = \frac {1} {2} \),
the required probability is simply \( \frac {1} {12} \)
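If it helps, the figure above can be checked numerically. Writing h(i) for the probability of absorption in state 12 starting from state i, first-step analysis gives h(0) = 0, h(12) = 1 and h(i) = (h(i-1) + h(i+1))/2 in between. A small Python sketch (function name is mine) that solves these equations by repeated Gauss-Seidel sweeps:

```python
def absorption_at_top(N=12, start=1, sweeps=10_000):
    """Solve h(i) = (h(i-1) + h(i+1)) / 2 with h(0) = 0, h(N) = 1
    by repeated sweeps; h(i) is the probability that the p = 1/2
    walk is absorbed in state N rather than state 0, starting from i."""
    h = [0.0] * (N + 1)
    h[N] = 1.0                        # boundary conditions
    for _ in range(sweeps):
        for i in range(1, N):         # interior first-step equations
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h[start]

print(absorption_at_top())   # 0.08333... = 1/12, matching the claim above
```

The sweeps converge to the exact solution h(i) = i/N of this tridiagonal system, which is the standard gambler's-ruin formula for p = 1/2.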
 
#3
[quoting BGM's solution from post #2 in full]
I am beginning to understand what you mean, and I think this approach should indeed work. Ingenious!

However, I think your conclusion may be false; I received a numerical solution from the professor (which I think is probably right) saying the answer is 1/11.

EDIT: I now fully understand your solution. And indeed, for the probability of ending up in the absorbing state 12 starting from 1, I get 1/12, like you.
 

Dason

Ambassador to the humans
#4
I ran a quick simulation and it suggests 1/11 as well. The code:
Code:
test <- function(){
  # One trial: walk on the clock until the first return to the start,
  # then report whether every state was visited along the way.
  reached <- logical(12)
  start <- 12
  reached[start] <- TRUE
  current <- start
  repeat{
    # Step to a uniformly chosen neighbour, wrapping around the clock.
    current <- current + (-1)^rbinom(1, 1, 0.5)
    if(current == 0) current <- 12
    if(current == 13) current <- 1
    reached[current] <- TRUE
    if(current == start) break
  }
  all(reached)
}

dat <- replicate(100000,test())
mean(dat)
The output:
Code:
> dat <- replicate(100000,test())
> mean(dat)
[1] 0.09068
 
#5
[quoting the reply from post #3 in full]
I think there was a minor mistake in translating the model stated in the problem into your setup. If you do exactly the same thing as you did, i.e. a simple random walk, but on the set {1, ..., 12} (or on {0, ..., 11}, which of course doesn't matter) instead of {0, 1, ..., 12}, with 1 and 12 the absorbing states, then it is correct. Translating the model here is a little subtle, I guess. We then get 1/11 as the result.
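For completeness, the corrected chain can be checked the same way: on {1, ..., 12} with 1 and 12 absorbing, first-step analysis gives h(1) = 0, h(12) = 1 and h(i) = (h(i-1) + h(i+1))/2 for the interior states. Starting from state 2 (my reading of the corrected setup: one step in from the losing boundary), a quick Python sketch gives 1/11:

```python
def hit_top_before_bottom(lo=1, hi=12, start=2, sweeps=10_000):
    """h(i) = P(absorbed at hi before lo | start at i) for the p = 1/2
    walk, solved by repeated sweeps of the first-step equations."""
    h = {i: 0.0 for i in range(lo, hi + 1)}
    h[hi] = 1.0                       # boundary conditions
    for _ in range(sweeps):
        for i in range(lo + 1, hi):   # interior first-step equations
            h[i] = 0.5 * (h[i - 1] + h[i + 1])
    return h[start]

print(hit_top_before_bottom())   # 0.090909... = 1/11
```

This is just the gambler's-ruin formula again, now over 11 intervals instead of 12, which is exactly the "one state fewer" correction.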

Do you agree, or what do you think?

Thanks anyway! This was a really ingenious idea.

But is there a more standard/elementary approach (one that might work more broadly) to this problem? This is a very nice idea, but it feels ad hoc.


@Dason: nice simulation, thanks.
 

BGM

TS Contributor
#6
Yes, you are correct; that was careless of me.
Once the chain has reached the last unvisited state, it has already fulfilled
the requirement of "visiting all states before returning to the original
position", no matter whether it afterwards reaches the original position from
the left or from the right.
So there should be one state fewer, and we get \( \frac {1} {11} \).
Thanks for correcting me.