Suppose we want to construct a transition matrix for a Markov chain that has a given steady-state vector. We will examine one way to do this.

Assume there are \(n\) states, and the steady-state vector has the values

\[p=(p_1,p_2,\dots,p_n).\] We utilize the fact that a Markov chain in steady-state has a balance of agents moving between states. That is, the number of agents that move *into* a state is (approximately) the number that move *out of* the state at each time step. We strengthen this assumption by requiring that for any two states \(S_i\) and \(S_j\), the number of agents moving from \(S_i \rightarrow S_j\) (approximately) equals the number moving from \(S_j \rightarrow S_i\).

The steps to constructing the transition matrix \(P\) are as follows:

- Select an \(n \times n\) *proposal* transition matrix \(Q\). It is helpful, but not essential, that \(Q\) is symmetric, that is, \(Q_{i,j}=Q_{j,i}\) for all indices.
- Given states \(S_j\) and \(S_i\) with \(i \ne j\), define the transition probability \(P_{i,j}\):
  - If \(p_j \le p_i\), then transition with probability \(P_{i,j} = Q_{i,j}\).
  - If \(p_j > p_i\), then transition with probability \(P_{i,j} = \frac{p_i}{p_j} Q_{j,i}\).

- Define \(P_{i,i} = 1-\sum_{k \not = i} P_{k,i}\).

That’s all there is to it! It’s not the hardest thing in the world to show that this process results in a legitimate transition matrix with \(p\) as an eigenvector with eigenvalue 1.
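For the curious, the key fact is the pairwise balance built into the construction. Suppose \(p_j > p_i\); then

\[ p_j P_{i,j} = p_j \cdot \frac{p_i}{p_j} Q_{j,i} = p_i Q_{j,i} = p_i P_{j,i}, \]

because \(p_i \le p_j\) puts \(P_{j,i}\) in the first case of the rule (the case \(p_j < p_i\) is the same with the roles reversed, and ties are covered when \(Q\) is symmetric). Summing \(p_j P_{i,j} = p_i P_{j,i}\) over \(j\), and using the fact that each column of \(P\) sums to 1, gives \(\sum_j P_{i,j}\,p_j = p_i\), that is, \(Pp = p\).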

Let’s create a Markov chain on \(n=3\) states with \(p=(1/6,1/3,1/2)\) as the steady-state vector.

`p <- c(1/6,1/3,1/2)`

Next we need a proposal matrix: anything that defines a symmetric set of transitions between the three states will do. The easiest possible example is:

\[ Q = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \end{bmatrix}\]

or in R-ese:

`Q <- matrix(rep(1/3,9), nrow=3)`

Now let’s create the real transition matrix \(P\).

`P <- matrix(0, nrow=3, ncol=3)`

Since \(S_1\) has the smallest steady-state probability (\(p_1 = 1/6\)),

\[\begin{align*} P_{2,1} &= Q_{2,1}\\ P_{3,1} &= Q_{3,1} \end{align*} \]

and

\[ P_{1,1} = 1 - (P_{2,1} + P_{3,1}). \]

That is,

```
P[2,1] <- Q[2,1]
P[3,1] <- Q[3,1]
P[1,1] <- 1 - (P[2,1] + P[3,1])
```

For \(S_2\), only \(S_1\) has a lower probability, hence:

\[\begin{align*} P_{1,2} &= \frac{p_1}{p_2}Q_{2,1} \\ P_{3,2} &= Q_{3,2} \\ P_{2,2} &= 1 - (P_{1,2} + P_{3,2}) \end{align*}\]

In R, that is:

```
P[1,2] <- p[1]/p[2]*Q[2,1]
P[3,2] <- Q[3,2]
P[2,2] <- 1 - (P[1,2] + P[3,2])
```

Lastly, since \(S_3\) has the largest steady-state probability: \[\begin{align*} P_{1,3} &= \frac{p_1}{p_3}Q_{3,1}\\ P_{2,3} &= \frac{p_2}{p_3}Q_{3,2}\\ P_{3,3} &= 1 - (P_{1,3} + P_{2,3}) \end{align*}\]

That is:

```
P[1,3] <- p[1]/p[3]*Q[3,1]
P[2,3] <- p[2]/p[3]*Q[3,2]
P[3,3] <- 1 - (P[1,3] + P[2,3])
```

Take a peek at the final \(P\).

`P`

```
## [,1] [,2] [,3]
## [1,] 0.3333333 0.1666667 0.1111111
## [2,] 0.3333333 0.5000000 0.2222222
## [3,] 0.3333333 0.3333333 0.6666667
```
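Before touching eigenvectors, a quick sanity check is possible (a sketch that rebuilds \(P\) from the values printed above): every column should sum to 1, and by the balance assumption the quantities \(p_j P_{i,j}\) should form a symmetric matrix.

```r
## rebuild P from the printed values and check its basic properties
p <- c(1/6, 1/3, 1/2)
P <- cbind(c(1/3, 1/3, 1/3),
           c(1/6, 1/2, 1/3),
           c(1/9, 2/9, 2/3))
## columns of a transition matrix must sum to 1
colSums(P)
## detailed balance: entry (i,j) below is p_j * P[i,j],
## which should equal its transposed entry p_i * P[j,i]
flow <- P * matrix(p, 3, 3, byrow = TRUE)
all.equal(flow, t(flow))
```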

Now we can compute the steady-state vector for \(P\). Using the eigenvector corresponding to eigenvalue 1 (which `eigen` lists first, since eigenvalues are sorted by decreasing modulus):

```
eigenStuff <- eigen(P)
vecs <- eigenStuff$vectors
## the eigenvector for eigenvalue 1 comes first
ssVec <- vecs[,1]
## rescale so that the entries sum to 1
ssVec <- ssVec/sum(ssVec)
ssVec
```

`## [1] 0.1666667 0.3333333 0.5000000`

As desired.

*Note:* If you want, you could confirm this by applying a high power of the matrix \(P\) to an arbitrary starting distribution:

```
p0 <- c(1,0,0)
M <- 10000
for(m in 1:M){
  p0 <- P %*% p0
}
p0
```

```
## [,1]
## [1,] 0.1666667
## [2,] 0.3333333
## [3,] 0.5000000
```

The same thing.

We can write some general code that creates the transition matrix \(P\) from the steady-state vector \(p\). Here is one way to do it:

```
## fix the number of states
n <- length(p)
## define the proposal transition matrix Q
Q <- matrix(rep(1/n, n*n), nrow=n)
## P starts as Q
P <- Q
## loop over the columns
for(colIndex in 1:n){
  ## loop over the rows
  for(rowIndex in 1:n){
    ## skip the diagonal entries
    if(colIndex != rowIndex){
      ## check if going to a lower-probability state
      if(p[rowIndex] < p[colIndex]){
        P[rowIndex,colIndex] <- Q[colIndex,rowIndex]*p[rowIndex]/p[colIndex]
      }
    }
  }
}
## now fix the diagonal entries of P
for(i in 1:n){
  P[i,i] <- 1 - (sum(P[,i]) - P[i,i])
}
```

And here is \(P\) again:

`P`

```
## [,1] [,2] [,3]
## [1,] 0.3333333 0.1666667 0.1111111
## [2,] 0.3333333 0.5000000 0.2222222
## [3,] 0.3333333 0.3333333 0.6666667
```

Same as before.
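If you plan to reuse this construction, it is natural to package it as a function. Here is one possible sketch (the name `makeTransition` is mine, not from the text; the default proposal is the uniform \(Q\) used above):

```r
## package the construction above as a reusable function
## (makeTransition is a made-up name; any symmetric proposal Q can be passed in)
makeTransition <- function(p, Q = matrix(1/length(p), length(p), length(p))) {
  n <- length(p)
  P <- Q
  for (colIndex in 1:n) {
    for (rowIndex in 1:n) {
      ## off-diagonal move to a lower-probability state: scale by the ratio
      if (colIndex != rowIndex && p[rowIndex] < p[colIndex]) {
        P[rowIndex, colIndex] <- Q[colIndex, rowIndex] * p[rowIndex] / p[colIndex]
      }
    }
  }
  ## the diagonal absorbs whatever probability is left in each column
  for (i in 1:n) {
    P[i, i] <- 1 - (sum(P[, i]) - P[i, i])
  }
  P
}

P2 <- makeTransition(c(1/6, 1/3, 1/2))
```

Passing a different symmetric `Q` changes the off-diagonal structure but not the steady state.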

A nice feature of this construction is that the steady state probabilities do not have to be *normalized*; in other words, we can use frequencies. For example, instead of

\[p = (1/6,1/3,1/2)\]

we could use

\[p = (1,2,3).\] This is because the quantities only appear as ratios in the construction. Even better, in practice you only have to know the ratios \(p_i/p_j\) in order to construct the Markov chain.
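A small check of this scale-invariance (not part of the original construction): the matrix of ratios \(p_i/p_j\) does not change when \(p\) is rescaled.

```r
p1 <- c(1/6, 1/3, 1/2)  # normalized steady-state vector
p2 <- c(1, 2, 3)        # the same weights, scaled by 6
## the construction only ever uses the ratios p_i / p_j
ratios1 <- outer(p1, p1, "/")
ratios2 <- outer(p2, p2, "/")
all.equal(ratios1, ratios2)  # the two ratio matrices agree
```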

Suppose we have a Markov chain defined on a \(3 \times 3\) grid. There are a total of 9 states:

\[ \begin{array}{|c|c|c|} \hline S_1 & S_2 & S_3 \\ \hline S_4 & S_5 & S_6 \\ \hline S_7 & S_8 & S_9 \\ \hline \end{array} \]

The desired steady-state probability weights are:

\[ \begin{array}{|c|c|c|} \hline 1 & 2 & 1\\ \hline 2 & 4 & 2\\ \hline 1 & 2 & 1\\ \hline \end{array} \]

Notice that the probabilities add up to 16, not 1. Given what we said above, this is OK. However, if it makes you feel better, you may divide the weights by 16.
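Reading the grid row by row, so that \(S_1,\dots,S_9\) run left to right and top to bottom, the weight vector might be entered as:

```r
## grid weights in row-major order: S1..S3, S4..S6, S7..S9
w <- c(1, 2, 1,
       2, 4, 2,
       1, 2, 1)
sum(w)  # 16; divide by this if you prefer a normalized vector
```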

**Your task**: Construct a \(9 \times 9\) Markov chain transition matrix that has the desired steady-state distribution. Do this under each of the following assumptions:

1. Start with any proposal transition matrix you like. Find the transition matrix. Then confirm you have the right steady-state distribution in two ways:

   - Using eigenvectors/eigenvalues.
   - Starting with an arbitrary initial distribution (e.g., \(S_1 = 1\), the rest 0), calculate the distribution after a large number of steps.

2. For each state in this Markov chain, it will be possible to move only to states that are adjacent horizontally or vertically. Build this assumption into your proposal transition matrix \(Q\), assuming that you can move to any adjacent state connected horizontally or vertically with equal probability. The probability of transition to any other state is \(0\).

   For example, in the proposal transition matrix \(Q\):

   - \(S_1\) can move to \(S_2\) or \(S_4\), each with probability \(1/2\).
   - \(S_2\) can move to \(S_1\), \(S_3\), or \(S_5\), each with probability \(1/3\).
   - \(S_5\) can move to \(S_2\), \(S_4\), \(S_6\), or \(S_8\), each with probability \(1/4\).

   Does your transition matrix respect your assumption about the moves? Confirm that you have the correct steady-state distribution.

3. Now assume that for each state in this Markov chain, it will be possible to move only to states that are adjacent horizontally, vertically, or diagonally. Build this assumption into your proposal transition matrix \(Q\), and find the steady-state distribution. Does your transition matrix respect your assumption about the moves? Confirm that you have the correct steady-state distribution.

Submit your solutions to #1, 2, and 3 to the Markov chain inverse assignment on Moodle. Make sure your work is clear, but it is not necessary to provide a lot of discussion about your answers. Also, please remove all of the content above the *Assignment* heading in this file; just submit the problem and your solution.