Interpreting the limiting distribution of Markov chains?

#1
Hi all,

I have a few questions on the interpretation and application of the steady state, or limiting, distribution of a Markov chain transition matrix. Let's call this steady-state distribution X, where

X = [0.49 0.51]
The states are unemployed and employed from left to right, i.e. state 1 and state 2.

To arrive at this distribution, I used a transition matrix T that represents the transition probabilities between unemployment and employment within a year, together with the condition xT = x, where x is the stationary distribution (i.e. x1 = 0.49 and x2 = 0.51).
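To make the setup concrete, here is a minimal sketch of solving xT = x by power iteration. The actual T isn't shown in the post, so the matrix below is a hypothetical one I picked so that its stationary distribution works out to [0.49, 0.51]:

```python
# Hypothetical transition matrix (the post does not show the actual T);
# chosen so its stationary distribution comes out to [0.49, 0.51].
# Row = current state, column = next state: [unemployed, employed].
T = [[0.745, 0.255],
     [0.245, 0.755]]

# Power iteration: repeatedly apply x <- xT until x stops changing.
# The fixed point satisfies xT = x, i.e. it is the stationary distribution.
x = [1.0, 0.0]  # arbitrary starting distribution
for _ in range(200):
    x = [x[0] * T[0][0] + x[1] * T[1][0],
         x[0] * T[0][1] + x[1] * T[1][1]]

print(x)  # -> approximately [0.49, 0.51]
```

Note that the starting distribution doesn't matter here: for a regular (irreducible, aperiodic) chain, the iteration converges to the same x from any starting point, which is what makes it the *limiting* distribution.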

My first question is to clarify the time unit of X: since T describes transitions within a year, does X also represent probabilities on a per-year basis?

From what I understand about the steady state X, it means that in the long run, the probability of being unemployed in any given year is 0.49?

Based on the paper I read (www.columbia.edu/~ks20/stochastic-I/stochastic-I-MCI.pdf), we can also use the steady state to represent the "long run proportion of time that the chain spends" in a state. So for the above... does it mean that in the long run, a person spends about half of their time unemployed and a little more than half employed?
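The "long run proportion of time" reading can be checked by simulation: run the chain for many steps and count the fraction spent unemployed. This sketch assumes a hypothetical T with P(stay unemployed) = 0.745 and P(employed → unemployed) = 0.245, values I chose so the stationary distribution matches [0.49, 0.51]:

```python
import random

random.seed(0)

# Assumed (hypothetical) transition probabilities; the post's actual T is not shown.
p_stay_unemp = 0.745   # P(unemployed -> unemployed)
p_to_unemp = 0.245     # P(employed -> unemployed)

state = 1              # start employed (0 = unemployed, 1 = employed)
n_steps = 200_000
unemployed_steps = 0

for _ in range(n_steps):
    if state == 0:
        state = 0 if random.random() < p_stay_unemp else 1
    else:
        state = 0 if random.random() < p_to_unemp else 1
    unemployed_steps += (state == 0)

frac = unemployed_steps / n_steps
print(frac)  # close to the stationary probability 0.49
```

So the proportion is of the whole long-run horizon (about 49% of all years), not a split within a single year.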

Also, I have read that I can use X to determine the "expected number of steps (time) until the chain revisits" a state, e.g. E[time from state 1 to 1] = 1/x1. So this would be the expected number of steps until the chain revisits the unemployment state, given that it started there, i.e. 1/0.49 ≈ 2.04, or about 2.
Does this mean that if I'm unemployed and leave that state for employment... I should expect to be unemployed again after about 2 years?
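One subtlety worth checking by simulation: the mean recurrence time 1/x1 counts *all* returns to state 1, including staying unemployed for another year, which counts as a return after 1 step. So it is not quite the same quantity as "time until unemployed again after becoming employed". The sketch below estimates the mean return time under the same hypothetical transition probabilities as before (P(stay unemployed) = 0.745, P(employed → unemployed) = 0.245, chosen so the stationary distribution is [0.49, 0.51]):

```python
import random

random.seed(1)

# Assumed (hypothetical) transition probabilities; the post's actual T is not shown.
p_stay_unemp = 0.745   # P(unemployed -> unemployed)
p_to_unemp = 0.245     # P(employed -> unemployed)

# Count steps between successive visits to the unemployed state.
# Staying unemployed for another year counts as a return after 1 step.
state = 0              # start unemployed
returns = []
steps_since_visit = 0
for _ in range(500_000):
    if state == 0:
        state = 0 if random.random() < p_stay_unemp else 1
    else:
        state = 0 if random.random() < p_to_unemp else 1
    steps_since_visit += 1
    if state == 0:
        returns.append(steps_since_visit)
        steps_since_visit = 0

mean_return = sum(returns) / len(returns)
print(mean_return)  # close to 1/0.49 ≈ 2.04
```

Conditioning on having actually left the state first would give a longer expected wait than 2.04, since the many 1-step "returns" are excluded from that average.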

Therefore, if I apply the above to a person's working career before retirement, say 40 years... does it mean the person will spend about 20 years (40 × 0.49 ≈ 19.6) unemployed, and expect to become unemployed roughly every 2 years?
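The 40-year arithmetic can also be sanity-checked by simulating many careers and averaging the unemployed years. Again this assumes the same hypothetical transition probabilities (P(stay unemployed) = 0.745, P(employed → unemployed) = 0.245), with the starting state drawn from the stationary distribution:

```python
import random

random.seed(2)

# Assumed (hypothetical) transition probabilities; the post's actual T is not shown.
p_stay_unemp = 0.745   # P(unemployed -> unemployed)
p_to_unemp = 0.245     # P(employed -> unemployed)

def career_unemployed_years(n_years=40):
    # Draw the starting state from the stationary distribution [0.49, 0.51].
    state = 0 if random.random() < 0.49 else 1
    unemployed = 0
    for _ in range(n_years):
        unemployed += (state == 0)  # count this year's state
        if state == 0:
            state = 0 if random.random() < p_stay_unemp else 1
        else:
            state = 0 if random.random() < p_to_unemp else 1
    return unemployed

n_careers = 20_000
avg = sum(career_unemployed_years() for _ in range(n_careers)) / n_careers
print(avg)  # close to 40 * 0.49 = 19.6 years
```

The 19.6-year figure is an average over many hypothetical careers; any single career can deviate from it substantially, and the "every 2 years" reading carries the return-time caveat about consecutive unemployed years counting as 1-step returns.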