# CMPSCI 250 Discussion #7: Markov Processes

#### 31 October 2007

We gave an informal presentation of Markov processes -- these are finite-state processes where at each time step, the process moves from its current state to a state chosen randomly according to a distribution depending only on the state. Markov processes are discrete (there are only finitely many states), probabilistic (in that their behavior is random), and memoryless (in that the behavior probability depends only on the state and not, for example, on the past history).

This presentation and the example we gave are from Excursion 8.4 in the text.

By the Path-Matrix Theorem, if A[i,j] is the probability that we go from state i to state j in one move, then A^k[i,j] is the probability that we go from state i to state j in exactly k moves. Most Markov processes tend toward a steady-state distribution. That is, for large k, A^k[i,j] depends only on j, not on i -- the process has "lost the memory" of where it started. The steady-state distribution can be given as a vector (a 1 by n matrix, where n is the number of states) consisting of a probability for each state, and this vector v has the property that vA = v. In the text there is an example of a matrix A where, by calculating A^4 and A^8, we can observe that the entries A[i,1], A[i,2], and A[i,3] on each row are in a 2:2:1 ratio. We guess that (0.4 0.4 0.2) is the steady-state distribution, and confirm this by verifying that vA = v.
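This convergence is easy to watch numerically. Below is a minimal Python sketch (using a small hypothetical 2-state transition matrix chosen for illustration, not the text's example) that computes A^k by repeated multiplication and checks vA = v exactly for the steady-state vector:

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, k):
    """Compute A^k by repeated multiplication (k >= 1)."""
    P = A
    for _ in range(k - 1):
        P = mat_mul(P, A)
    return P

# Hypothetical 2-state transition matrix (illustration only, not from the text):
A = [[Fraction(1, 2), Fraction(1, 2)],
     [Fraction(1, 4), Fraction(3, 4)]]

# For large k, both rows of A^k approach the same steady-state distribution.
A16 = mat_pow(A, 16)

# For this matrix the steady state is v = (1/3, 2/3); check vA = v exactly.
v = [Fraction(1, 3), Fraction(2, 3)]
vA = [sum(v[i] * A[i][j] for i in range(2)) for j in range(2)]
assert vA == v
```

Using `Fraction` keeps every entry exact, so the check vA = v is an equality rather than a floating-point approximation.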

Writing Exercise 1: Given the following matrix A for a 3-state Markov process, find the steady-state distribution:

``````
            (1 2 0)
A = (1/3) × (1 1 1)
            (0 1 2)
``````

We calculate some powers of A until a trend emerges:

``````
              (3 4 2)
A^2 = (1/9) × (2 4 3)
              (1 3 5)

               (19 34 28)
A^4 = (1/81) × (17 33 31)
               (14 31 36)
``````

The next thing would be to get A^8, but A^6 is easier for hand calculation:

``````
                (153 296 280)
A^6 = (1/729) × (148 293 288)
                (140 288 301)
``````

A 1:2:2 ratio seems to be emerging, so we calculate:

``````
          (1 2 0)
(1 2 2) × (1 1 1) = (3 6 6) = 3 × (1 2 2)
          (0 1 2)
``````

If we set v to be (1/5) (1 2 2), scaling it to be a vector of probabilities, then because A is (1/3) times the integer matrix above, we get exactly vA = v.
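As a sanity check on the hand calculation, a short Python fragment using exact rational arithmetic can reproduce A^6 and verify vA = v for v = (1/5)(1 2 2):

```python
from fractions import Fraction

# Transition matrix from Exercise 1: A = (1/3) times the integer matrix.
A = [[Fraction(n, 3) for n in row]
     for row in [[1, 2, 0], [1, 1, 1], [0, 1, 2]]]

def mat_mul(M, N):
    """Multiply two 3x3 matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = mat_mul(A, A)
A4 = mat_mul(A2, A2)
A6 = mat_mul(A4, A2)

# The first row of A^6 should be (1/729)(153 296 280), as computed by hand.
assert A6[0] == [Fraction(153, 729), Fraction(296, 729), Fraction(280, 729)]

# The claimed steady state v = (1/5)(1 2 2) satisfies vA = v exactly.
v = [Fraction(1, 5), Fraction(2, 5), Fraction(2, 5)]
vA = [sum(v[i] * A[i][j] for i in range(3)) for j in range(3)]
assert vA == v
```

The assertions pass, confirming both the A^6 entries above and the steady-state vector.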

Writing Exercise 2: Any two-state Markov process can be described by a matrix

``````
(1-a   a )
( b   1-b)
``````
for two real numbers a and b in the range from 0 through 1. (The matrix must have this form because the entries on each row must add to 1.) Determine the steady-state distribution, if any, of such a process in terms of a and b.

Solution: The steady-state distribution, if any, is a vector of the form v = (x y) where x and y are real numbers with x + y = 1. The matrix equation vA = v is equivalent to two equations on real numbers: (1-a)x + by = x for the first entry of the result and ax + (1-b)y = y for the second entry. Each of these solves to ax = by, and ax = by together with x + y = 1 tells us that x = b/(a+b) and y = a/(a+b), unless of course a = b = 0. Except in that case, we have a steady-state distribution with these probabilities. If a = b = 0, then the matrix is the identity matrix. If we start with a particular distribution in that case, the distribution remains the same because the process never changes the state, so there is no single steady-state distribution in that case.
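The formula is easy to spot-check numerically. Here is a small Python sketch (the sample values of a and b are arbitrary choices for illustration, not from the text):

```python
def two_state_steady(a, b):
    """Steady-state distribution (x, y) = (b/(a+b), a/(a+b)) of the
    two-state process with transition matrix ((1-a, a), (b, 1-b)).
    Assumes a + b > 0; if a = b = 0 the matrix is the identity and
    every distribution is fixed, so there is no single steady state."""
    return (b / (a + b), a / (a + b))

# Sample values chosen for illustration.
a, b = 0.3, 0.1
x, y = two_state_steady(a, b)

# Verify vA = v entrywise: (1-a)x + by = x and ax + (1-b)y = y.
assert abs((1 - a) * x + b * y - x) < 1e-12
assert abs(a * x + (1 - b) * y - y) < 1e-12
assert abs(x + y - 1) < 1e-12
```

With a = 0.3 and b = 0.1 this gives x = 0.25 and y = 0.75, and both steady-state equations hold.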