Consider an irreducible finite Markov chain with states 0, 1, ..., N. (a) (20 pts) Starting in state i, what is the probability that the process will visit state j at most 3 times in total? (b) (10 pts) Let xi = P{visit state N before state 0 | start in i}. Find a set of linear equations which the xi satisfy, i = 0, 1, ..., N. (c) (10 pts) If ∑_j j·P_{i,j} = i for i = 1, ..., N−1, show that xi = i/N is a solution to the equations in part (b).

Answer:

If for each j ∈ S, πj exists as defined in (3) and is independent of the initial state i, and ∑_{j∈S} πj = 1, then the probability distribution π = (π0, π1, . . .) on the state space S is called the limiting (or stationary, or steady-state) distribution of the Markov chain.

Recalling that P^m_{ij} is precisely the (i, j)-th component of the matrix P^m, we conclude that (4) can be expressed in matrix form; see attached image 1.
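To make the definition concrete, here is a minimal numerical sketch. The 3-state transition matrix below is an invented example (not from the notes); it shows the averaged powers of P having every row converge to π:

```python
import numpy as np

# Hypothetical 3-state transition matrix (assumed for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Average the m-step transition matrices: (1/n) * sum_{m=1}^{n} P^m.
n = 2000
avg = np.zeros_like(P)
Pm = np.eye(3)
for _ in range(n):
    Pm = Pm @ P          # Pm is now P^m
    avg += Pm
avg /= n

# pi is the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

print(avg)   # every row is approximately pi
print(pi)
```

Each row of the averaged matrix agrees with π to a few decimal places, which is exactly the convergence statement above.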

That is, when we average the m-step transition matrices, each row converges to the vector of stationary probabilities π = (π0, π1, . . .). The i-th row refers to the initial condition X0 = i in (4), and for each such fixed row i, the j-th element of the averages converges to πj.

A nice way of interpreting π: if you observe the state of the Markov chain at some random time far in the future, then πj is the probability that the state is j. To see this, let N (our random observation time) have a uniform distribution over the integers {1, 2, . . . , n} and be independent of the chain, so P(N = m) = 1/n for m ∈ {1, 2, . . . , n}. Now assume that X0 = i and that n is very large. Then, conditioning on N = m, we obtain (see image 2)
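The "observe at a random time" interpretation can be checked by simulation. This is a sketch under assumptions: the 3-state matrix is invented, the chain is started at X0 = 0, and the observation time is uniform on {1, ..., 100}:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-state chain (assumed for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
cum = P.cumsum(axis=1)   # cumulative rows for fast state sampling

# Stationary distribution via the left eigenvector for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Repeatedly: start at X0 = 0, draw an independent uniform time T in
# {1, ..., n}, run the chain for T steps, and record the observed state.
n, trials = 100, 4000
counts = np.zeros(3)
for _ in range(trials):
    state = 0
    T = rng.integers(1, n + 1)
    for _ in range(T):
        state = int(np.searchsorted(cum[state], rng.random()))
    counts[state] += 1
freq = counts / trials

print(freq)  # close to pi
print(pi)
```

The empirical frequencies of the state seen at the random time agree with π up to sampling error, matching the interpretation in the text.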

See attached images 1 and 2 (uploaded by rameenzaheer1).
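Returning to parts (b) and (c) of the question: the equations in (b) are x_0 = 0, x_N = 1, and x_i = ∑_j P_{i,j}·x_j for 0 < i < N. A minimal numerical check follows; the fair gambler's-ruin walk is my choice of chain (not specified in the question), with 0 and N made absorbing for the hitting-probability computation. It satisfies the mean condition ∑_j j·P_{i,j} = i of part (c), so the solution should be x_i = i/N:

```python
import numpy as np

N = 5  # illustrative chain size (my choice)

# Fair gambler's-ruin walk on {0, ..., N}: from 1..N-1 move +/-1 with
# probability 1/2 each; 0 and N are made absorbing so that x_i is the
# probability of reaching N before 0. This chain satisfies
# sum_j j * P[i, j] = i for i = 1, ..., N-1 (the hypothesis of part (c)).
P = np.zeros((N + 1, N + 1))
P[0, 0] = 1.0
P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = 0.5
    P[i, i + 1] = 0.5

# Part (b): x_i = sum_j P[i, j] * x_j for 0 < i < N, with x_0 = 0, x_N = 1.
A = np.eye(N + 1) - P
A[0, :] = 0.0; A[0, 0] = 1.0     # boundary condition x_0 = 0
A[N, :] = 0.0; A[N, N] = 1.0     # boundary condition x_N = 1
b = np.zeros(N + 1)
b[N] = 1.0
x = np.linalg.solve(A, b)

print(x)  # matches i/N, the part (c) solution
```

The solver returns exactly the linear profile x_i = i/N, confirming that it solves the first-step equations of part (b) under the hypothesis of part (c).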