Answer:
If, for each j ∈ S, πj exists as defined in (3) and is independent of the initial state i, and ∑_{j∈S} πj = 1, then the probability distribution π = (π0, π1, . . .) on the state space S is called the limiting (or stationary, or steady-state) distribution of the Markov chain.

Recalling that P^m_{ij} is precisely the (i, j)th component of the matrix P^m, we conclude that (4) can be expressed in matrix form by

(1/n) ∑_{m=1}^{n} P^m → Π, as n → ∞,

where Π denotes the matrix whose rows are each equal to π.
That is, when we average the m-step transition matrices, each row converges to the vector of stationary probabilities π = (π0, π1, . . .). The ith row corresponds to the initial condition X0 = i in (4), and for each such fixed row i, the jth element of the averages converges to πj.

A nice way of interpreting π: if you observe the state of the Markov chain at some random time far out in the future, then πj is the probability that the state is j. To see this, let N (our random observation time) have a uniform distribution over the integers {1, 2, . . . , n}, independent of the chain; P(N = m) = 1/n, m ∈ {1, 2, . . . , n}. Now assume that X0 = i and that n is very large. Then by conditioning on N = m we obtain

P(X_N = j | X0 = i) = ∑_{m=1}^{n} P(N = m) P^m_{ij} = (1/n) ∑_{m=1}^{n} P^m_{ij} ≈ πj,

where the approximation holds for large n by (4).
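Both facts can be checked numerically. The sketch below uses a small hypothetical 3-state transition matrix P (an assumption for illustration, not from the text): it averages P^1, . . . , P^n to see every row approach the same vector π, and then simulates observing the chain at a uniform random time N to see the empirical law of X_N approach π as well.

```python
import numpy as np

# Hypothetical 3-state transition matrix P (illustrative only);
# each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

# Average the m-step transition matrices P^1, ..., P^n.
# Each row of the average converges to the same vector pi.
n = 500
avg = np.zeros_like(P)
Pm = np.eye(3)
for _ in range(n):
    Pm = Pm @ P          # now holds P^m
    avg += Pm
avg /= n

pi = avg[0]              # any row works; the rows are (nearly) equal
print("pi   =", np.round(pi, 4))
print("pi P =", np.round(pi @ P, 4))   # stationarity: pi P ~ pi

# Random-observation-time interpretation: start at X0 = i and
# observe at N ~ Uniform{1, ..., n}; the empirical law of X_N
# should be close to pi.
rng = np.random.default_rng(0)
C = P.cumsum(axis=1)     # row-wise CDFs for inverse-CDF sampling
C[:, -1] = 1.0           # guard against floating-point round-off
trials = 1000
counts = np.zeros(3)
for _ in range(trials):
    N = rng.integers(1, n + 1)   # uniform on {1, ..., n}
    x = 0                        # X0 = i = 0
    for _ in range(N):
        x = int(np.searchsorted(C[x], rng.random()))
    counts[x] += 1
print("empirical law of X_N =", counts / trials)
```

Averaging is used (rather than P^n alone) because, as in (4), the Cesàro average converges even for periodic chains where P^n itself does not.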