Solutions to Stochastic Processes Ch.8

Solutions to Stochastic Processes, Sheldon M. Ross, Second Edition (pdf). This classic stochastic processes textbook has no official solution manual, so I wrote up these solutions as a record of my own study. Some solutions draw on material found on the web; the sources are too numerous and too unclear to credit individually, so let me thank those authors collectively here. I hope the solutions are helpful, but no correctness or accuracy is guaranteed; criticism and corrections are welcome. Excerpts and links may be used, provided that full and clear credit is given.

In Problems 8.1, 8.2, and 8.3, let \(\{X(t), t \geq 0\}\) denote a Brownian motion process.

8.1 Let \(Y(t) = tX(1/t)\).
(a) What is the distribution of \(Y(t)\)?
(b) Compute \(Cov(Y(s), Y(t))\).
(c) Argue that \(\{Y(t), t \geq 0\}\) is also a Brownian motion process.
(d) Let \(T = \inf\{t>0: X(t)=0\}\). Using (c), present an argument that \(P\{T = 0\} = 1\).


(a) \(X(1/t) \sim N(0, 1/t)\), so \(Y(t) = tX(1/t) \sim N(0, t^2 \cdot 1/t) = N(0, t)\).
(b) $$\begin{align}
Cov(Y(s), Y(t)) &= Cov(sX(1/s), tX(1/t)) \\
&= st\cdot Cov(X(1/s), X(1/t)) \\
&= st\cdot \min(1/s, 1/t) = \min(s, t) \end{align}$$
(c) Since \(\{X(t)\}\) is a Gaussian process, so is \(\{Y(t)\}\). By parts (a) and (b), \(\{Y(t)\}\) has the mean and covariance functions of Brownian motion, and a Gaussian process is determined by its mean and covariance functions; hence \(\{Y(t)\}\) is a Brownian motion.
(d) Since \(\{Y(t)\}\) is a Brownian motion, \(T_1 \equiv \sup\{t: Y(t) = 0\} = \infty\) with probability 1. Because \(X(t) = tY(1/t)\), zeros of \(Y\) at arbitrarily large times correspond to zeros of \(X\) at times arbitrarily close to 0, so \(\{T = 0\} = \{T_1 = \infty\}\). Thus \(P\{T = 0\} = 1\).
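A quick numerical sanity check of (a) and (b) (my own addition; the times and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate Cov(Y(s), Y(t)) for Y(t) = t*X(1/t), with X standard Brownian
# motion.  Since 1/t < 1/s when s < t, build X(1/t) first and add an
# independent Gaussian increment to get X(1/s).
s, t = 1.0, 4.0        # arbitrary example times, s < t
n = 200_000

x_inv_t = rng.normal(0.0, np.sqrt(1.0 / t), n)                      # X(1/t)
x_inv_s = x_inv_t + rng.normal(0.0, np.sqrt(1.0 / s - 1.0 / t), n)  # X(1/s)

ys = s * x_inv_s       # Y(s)
yt = t * x_inv_t       # Y(t)
cov = np.mean(ys * yt) - ys.mean() * yt.mean()
print(cov)             # should be close to min(s, t) = 1
```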


8.2 Let \(W(t) = X(a^2t)/a\) for \(a > 0\). Verify that \(W(t)\) is also Brownian motion.


\(W(0) = X(0)/a = 0\). Non-overlapping increments of \(W(t)\) map to non-overlapping increments of \(X(t)\). Thus increments of \(W(t)\) are independent. Further, for \(s < t\),
$$W(t) - W(s) = \frac{X(a^2t) - X(a^2s)}{a} \sim N(0, t-s)$$
Thus \(W(t)\) has stationary increments with required distribution. Therefore, \(W(t)\) is a Brownian motion.


8.5 A stochastic process \(\{X(t), t \geq 0\}\) is said to be stationary if \(X(t_1), \dots, X(t_n)\) has the same joint distribution as \(X(t_1+a), \dots, X(t_n +a)\) for all \(n, a, t_1, \dots, t_n\).
(a) Prove that a necessary and sufficient condition for a Gaussian process to be stationary is that \(Cov(X(s), X(t))\) depends only on \(t-s, s \leq t\), and \(E[X(t)] = c\).
(b) Let \(\{X(t), t \geq 0\}\) be Brownian motion and define
$$V(t) = e^{-\alpha t/2}X(\alpha e^{\alpha t})$$
Show that \(\{V(t), t \geq 0\}\) is a stationary Gaussian process. It is called the Ornstein-Uhlenbeck process.


(a) If the Gaussian process is stationary, then for \(t > s\), \((X(t), X(s))\) and \((X(t-s), X(0))\) have the same distribution. Thus \(E[X(s)] = E[X(0)]\) for all \(s\), and \(Cov(X(t), X(s)) = Cov(X(t-s), X(0))\) for all \(t > s\). Conversely, assume \(E[X(t)] = c\) and \(Cov(X(t), X(s)) = h(t-s)\). For any \(T = (t_1, \dots, t_k)\) define the vector \(X_T \equiv (X(t_1), \dots, X(t_k))\), and let \(\tilde{T} = (t_1-a, \dots, t_k -a)\). If \(\{X(t)\}\) is a Gaussian process, then both \(X_T\) and \(X_{\tilde{T}}\) are multivariate normal, so it suffices to show that they have the same mean and covariance. This follows directly from the fact that they have the same element-wise mean \(c\) and equal pairwise covariances, \(Cov(X(t_i-a), X(t_j -a)) = h(t_i-t_j) = Cov(X(t_i), X(t_j))\).
(b) Since all finite dimensional distributions of \(\{V(t)\}\) are normal, it is a Gaussian process. Thus from part (a) it suffices to show the following:
(i) \(E[V(t)] = e^{-\alpha t/2}E[X(\alpha e^{\alpha t})] = 0\). Thus \(E[V(t)]\) is constant.
(ii) For \(s \leq t\), $$\begin{align}
Cov(V(s), V(t)) &= e^{-\alpha(t+s)/2}Cov(X(\alpha e^{\alpha s}), X(\alpha e^{\alpha t}))\\
&= e^{-\alpha(t+s)/2}\min(\alpha e^{\alpha s}, \alpha e^{\alpha t}) \\
&= e^{-\alpha(t+s)/2}\alpha e^{\alpha s} = \alpha e^{-\alpha(t-s)/2} \end{align}$$
which depends only on \(t-s\).
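A simulation sketch (my own addition) of the covariance formula; \(\alpha\), the times, and the sample size below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate Cov(V(s), V(t)) for V(t) = exp(-alpha*t/2) * X(alpha*exp(alpha*t)).
alpha = 0.5            # arbitrary example parameter
s, t = 0.3, 1.1        # arbitrary times, s < t
n = 200_000

u1 = alpha * np.exp(alpha * s)   # inner Brownian time for V(s)
u2 = alpha * np.exp(alpha * t)   # inner Brownian time for V(t), u1 < u2
x1 = rng.normal(0.0, np.sqrt(u1), n)             # X(u1)
x2 = x1 + rng.normal(0.0, np.sqrt(u2 - u1), n)   # X(u2)

vs = np.exp(-alpha * s / 2) * x1
vt = np.exp(-alpha * t / 2) * x2
cov = np.mean(vs * vt) - vs.mean() * vt.mean()
theory = alpha * np.exp(-alpha * (t - s) / 2)
print(cov, theory)
```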


8.8 Suppose \(X(1) = B\). Characterize, in the manner of Proposition 8.1.1, \(\{X(t), 0 \leq t \leq 1\}\) given that \(X(1) = B\).


Conditional on \(X(1) = B\), \(X(t) \sim N(Bt, t(1-t))\) for \(0 \leq t \leq 1\). Hence \(Z(t) \equiv X(t) - Bt \sim N(0, t(1-t))\); in the manner of Proposition 8.1.1, the conditional process is a Brownian bridge shifted by the line \(Bt\).


8.9 Let \(M(t) = \max_{0 \leq s \leq t} X(s)\) and show that
$$P\{M(t) > a| M(t) = X(t)\} = e^{-a^2/2t}, \quad a > 0$$


From Section 8.3.1, we get
$$P\{M(t) > y, X(t) < x\} = \int_{2y-x}^{\infty}\frac{1}{\sqrt{2\pi t}}e^{-u^2/2t}du $$
By the Jacobian formula, we can derive the joint density function of \(M(t)\) and \(W(t) = M(t) - X(t)\), which we denote by \(f_{MW}\). Thus
$$f_W(w) = \int_0^{\infty}f_{MW}(m, w)dm \\
P\{M(t) > a | W(t) = 0\} = 1 - \int_0^a \frac{f_{MW}(m, 0)}{f_W(0)}dm$$
Carrying out the computation, \(f_{MW}(m, 0)/f_W(0) = (m/t)e^{-m^2/2t}\), so the last expression equals \(e^{-a^2/2t}\).


8.10 Compute the density function of \(T_x\), the time until Brownian motion hits \(x\).


$$\begin{align}
f_{T_x}(t) &= F_{T_x}^{\prime}(t) = \frac{d}{dt}\Big(\frac{2}{\sqrt{2\pi}}\int_{|x|/\sqrt{t}}^{\infty}e^{-y^2/2}dy\Big) \\
&= -\frac{2}{\sqrt{2\pi}} e^{-x^2/2t} \cdot \Big(-\frac{|x|}{2}t^{-3/2}\Big)\\
&= \frac{|x|}{\sqrt{2\pi}}t^{-3/2}e^{-x^2/2t}, \quad t > 0
\end{align}$$
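As a check, integrating this density over \((0, T]\) should reproduce \(P\{T_x \leq T\} = 2(1 - \Phi(|x|/\sqrt{T}))\), which for \(x > 0\) equals \(\text{erfc}(x/\sqrt{2T})\). A midpoint-rule sketch (the values of \(x\) and \(T\) are arbitrary):

```python
import math

def hitting_density(t, x):
    # density of T_x for x > 0: f(t) = x / sqrt(2*pi) * t^(-3/2) * exp(-x^2/(2t))
    return x / math.sqrt(2 * math.pi) * t ** -1.5 * math.exp(-x * x / (2 * t))

x, T = 1.0, 2.0        # arbitrary example values
n = 100_000
h = T / n
# midpoint rule on (0, T]; the integrand vanishes rapidly as t -> 0
integral = sum(hitting_density((k + 0.5) * h, x) * h for k in range(n))

cdf = math.erfc(x / math.sqrt(2 * T))   # P{T_x <= T}
print(integral, cdf)
```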


8.11 Let \(T_1\) denote the largest zero of \(X(t)\) that is less than \(t\) and let \(T_2\) be the smallest zero greater than \(t\). Show that
(a) \(P\{T_2 < s\} = (2/\pi)\arccos\sqrt{t/s}, s> t\).
(b) \(P\{T_1 < s, T_2 > y\} = (2/\pi)\arcsin\sqrt{s/y}, s < t< y\).


(a) $$\begin{align}
P\{T_2 < s\} &= 1 - P\{\text{no zeros in } (t, s)\} \\
&= 1 - \frac{2}{\pi}\arcsin\sqrt{t/s} \\
&= (2/\pi)\arccos\sqrt{t/s} \end{align}$$
(b) \(P\{T_1 < s, T_2 > y\} = P\{\text{no zeros in } (s, y)\} = \frac{2}{\pi}\arcsin\sqrt{s/y}\)


8.12 Verify the formulas given in (8.3.4) for the mean and variance of \(|X(t)|\).


$$f_Z(y) = \Big(\frac{2}{\sqrt{2\pi t}}\int_{-\infty}^y e^{-x^2/2t}dx - 1\Big)^{\prime} = \frac{2}{\sqrt{2\pi t}}e^{-y^2/2t}\\
E[Z(t)] = \int_{0}^{\infty}yf_Z(y)dy = -\frac{2t}{\sqrt{2\pi t}}e^{-y^2/2t}\Big|_0^{\infty} = \sqrt{2t/\pi} \\
Var(Z(t)) = E[Z^2(t)] - E^2[Z(t)] = E[X^2(t)] - E^2[Z(t)] = (1 - \frac{2}{\pi})t$$
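These formulas are easy to sanity-check numerically (the time \(t\) below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

t = 3.0                                            # arbitrary example time
z = np.abs(rng.normal(0.0, np.sqrt(t), 500_000))   # samples of |X(t)|

print(z.mean(), np.sqrt(2 * t / np.pi))   # mean: sqrt(2t/pi)
print(z.var(), (1 - 2 / np.pi) * t)       # variance: (1 - 2/pi) t
```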


8.13 For Brownian motion with drift coefficient \(\mu\), show that for \(x>0\)
$$P\{max_{0 \leq s \leq h} |X(s)| > x\} = o(h).$$



8.18 Let \(\{X(t), t \geq 0\}\) be a Brownian motion with drift coefficient \(\mu, \mu < 0\), which is not allowed to become negative. Find the limiting distribution of \(X(t)\).



8.19 Consider Brownian motion with reflecting barriers of \(-B\) and \(A, A >0, B > 0\). Let \(p_t(x)\) denote the density function of \(X_t\).
(a) Compute a differential equation satisfied by \(p_t(x)\).
(b) Obtain \(p(x) = \lim_{t \to \infty} p_t(x)\).



8.20 Prove that, with probability 1, for Brownian motion with drift \(\mu\).
$$\frac{X(t)}{t} \to \mu, \quad \text{ as } t \to \infty$$



8.21 Verify that if \(\{B(t), t \geq 0\}\) is standard Brownian motion, then \(\{Y(t), t \geq 0\}\) is a martingale with mean 1, where \(Y(t) = \exp\{cB(t) - c^2t/2\}\).


$$\begin{align}
E[Y(t)] &= \int_{-\infty}^{\infty} \exp\{cx - c^2t/2\}\frac{1}{\sqrt{2\pi t}}\exp\{-x^2/2t\}dx\\
&= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi t}}\exp\{-(x-ct)^2/2t\}dx = 1 \end{align}$$ Writing \(Y(t) = Y(s)\exp\{c(B(t) - B(s)) - c^2(t-s)/2\}\) and using the independence of the increment \(B(t) - B(s)\) from the past, $$\begin{align}
E[Y(t)|Y(u), 0 \leq u \leq s] &= Y(s) \cdot E[\exp\{c(B(t) - B(s)) - c^2(t-s)/2\}] \\
&= Y(s) \cdot E[Y(t-s)] = Y(s)
\end{align}$$
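A Monte Carlo sanity check of \(E[Y(t)] = 1\) (my own addition; \(c\) and \(t\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

c, t = 1.0, 2.0                               # arbitrary example values
b = rng.normal(0.0, np.sqrt(t), 1_000_000)    # B(t) ~ N(0, t)
y = np.exp(c * b - c * c * t / 2)             # Y(t)
print(y.mean())  # should be close to 1
```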


8.22 In Problem 8.16, find \(Var(T_a)\) by using a martingale argument.



8.23 Show that
$$p(x,t;y) \equiv \frac{1}{\sqrt{2\pi t}}e^{-(x - y - \mu t)^2/2t}$$
satisfies the backward and forward diffusion equations (8.5.1) and (8.5.2).


Just do it : )
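For completeness, a sketch of the computation (my own addition, taking \(y\) to be the initial state as in Section 8.5): write \(u = x - y - \mu t\), so \(p = (2\pi t)^{-1/2}e^{-u^2/2t}\). Then
$$\frac{\partial p}{\partial t} = \Big(-\frac{1}{2t} + \frac{u^2}{2t^2} + \frac{\mu u}{t}\Big)p, \quad \frac{\partial p}{\partial y} = \frac{u}{t}p, \quad \frac{\partial^2 p}{\partial y^2} = \Big(\frac{u^2}{t^2} - \frac{1}{t}\Big)p$$
so \(\mu\frac{\partial p}{\partial y} + \frac{1}{2}\frac{\partial^2 p}{\partial y^2} = \frac{\partial p}{\partial t}\), the backward equation. Similarly \(\frac{\partial p}{\partial x} = -\frac{u}{t}p\) and \(\frac{\partial^2 p}{\partial x^2} = \big(\frac{u^2}{t^2} - \frac{1}{t}\big)p\), giving \(-\mu\frac{\partial p}{\partial x} + \frac{1}{2}\frac{\partial^2 p}{\partial x^2} = \frac{\partial p}{\partial t}\), the forward equation.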


8.24 Verify Equation (8.7.2)


Let \(f(s, y) = \phi(se^{-\alpha y}) - 1\), where \(\phi\) is the moment generating function of \(X\), so that \(E[\exp\{sX(t)\}] = \exp\{\lambda\int_0^t f(s, y)dy\}\). Since \(f(0, y) = 0\),
$$\begin{align}
E[X(t)] &=\frac{d}{ds}E[\exp\{sX(t)\}]|_{s=0} \\
&= \exp\{\lambda\int_0^t f(0, y)dy\} \cdot \lambda \int_0^t \frac{\partial}{\partial s}f(s, y)|_{s=0} dy \\
&= \lambda E[X]\int_0^t e^{-\alpha y}dy = \lambda E[X](1 - e^{-\alpha t})/\alpha\\
Var(X(t)) &= E[X^2(t)] - E^2[X(t)] \\
&= \frac{d^2}{ds^2}E[\exp\{sX(t)\}]|_{s=0} - E^2[X(t)] \\
&= \lambda E[X^2]\int_0^t e^{-2\alpha y}dy = \lambda E[X^2](1 - e^{-2\alpha t})/2\alpha
\end{align}$$


8.25 Verify that \(\{X(t) = N(t + L) – N(t), t \geq 0\}\) is stationary when \(\{N(t)\}\) is a Poisson process.


For any \(t\), \(X(t) = N(t + L) - N(t)\) has the same distribution as \(N(L)\) by stationary increments, so \(E[X(t)] = \lambda L\). For \(s \geq 0\), the intervals \((t, t+L]\) and \((t+s, t+s+L]\) overlap on an interval of length \((L-s)^+\), and the non-overlapping parts are independent, so
$$E[X(t)] = \lambda L\\
Cov(X(t), X(t+s)) = Var(N((L-s)^+)) = \lambda (L-s)^+$$
which depends only on \(s\). Since the Poisson process has stationary independent increments, all joint distributions of \(\{X(t)\}\) are invariant under time shifts, so the process is stationary.


8.26 Let \(U\) be uniformly distributed over \((-\pi, \pi)\), and let \(X_n = cos(nU)\). By using trigonometric identity
$$\cos x \cos y = \frac{1}{2} [\cos(x+y) + \cos(x-y)]$$
verify that \(\{X_n, n \geq 1\}\) is a second-order stationary process.


$$\begin{align}
E[X_n] &= \frac{1}{2\pi}\int_{-\pi}^{\pi} \cos(nu)du = \frac{1}{2n\pi}\int_{-n\pi}^{n\pi} \cos x\,dx = 0\\
Cov(X_{n+L}, X_n) &= E[X_{n+L}X_n] - E[X_{n+L}]E[X_n] \\
&= \frac{1}{2}E[X_{2n+L} + X_L] = 0, \quad L \geq 1
\end{align}$$
(For \(L = 0\) the same identity gives \(Var(X_n) = \frac{1}{2}E[X_{2n} + 1] = 1/2\).) Since the mean is constant and the covariance depends only on \(L\), the process is second-order stationary.


8.27 Show that
$$\sum_{i=1}^n \frac{R(i)}{n} \to 0 \quad \text{implies} \quad {\sum\sum}_{i < j < n}\frac{R(j-i)}{n^2} \to 0$$
thus completing the proof of Proposition 8.8.1



8.28 Prove the Cauchy-Schwarz inequality:
$$(E[XY])^2 \leq E[X^2]E[Y^2]$$
(Hint: Start with the inequality \(2|xy| \leq x^2 + y^2\) and then substitute \(X/\sqrt{E[X^2]}\) for \(x\) and \(Y/\sqrt{E[Y^2]}\) for \(y\))


Since \(2xy \leq x^2 + y^2\), then
$$\begin{align}
2\frac{X}{\sqrt{E[X^2]}}\frac{Y}{\sqrt{E[Y^2]}} &\leq \frac{X^2}{E[X^2]} + \frac{Y^2}{E[Y^2]} \\
E[2\frac{X}{\sqrt{E[X^2]}}\frac{Y}{\sqrt{E[Y^2]}}] &\leq E[\frac{X^2}{E[X^2]} + \frac{Y^2}{E[Y^2]}]\\
2\frac{E[XY]}{\sqrt{E[X^2]E[Y^2]}} &\leq 2\\
(E[XY])^2 &\leq E[X^2]E[Y^2] \end{align}$$
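The final inequality is purely algebraic, so it also holds for the empirical moments of any sample; a quick illustration with arbitrary data (my own addition):

```python
import numpy as np

rng = np.random.default_rng(4)

# empirical version: (sum x*y / n)^2 <= (sum x^2 / n)(sum y^2 / n)
x = rng.standard_normal(10_000)
y = rng.exponential(1.0, 10_000) - 0.3 * x   # arbitrary correlated sample

lhs = np.mean(x * y) ** 2
rhs = np.mean(x * x) * np.mean(y * y)
print(lhs <= rhs)
```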


8.29 For a second-order stationary process with mean \(\mu\) for which \(\sum_{i=0}^{n-1}R(i)/n \to 0\), show that for any \(\varepsilon > 0\)
$$P\{|\bar{X}_n - \mu| > \varepsilon \} \to 0 \quad \text{as } n \to \infty$$




Solutions to Stochastic Processes Ch.7


7.1 Consider the following model for the flow of water in and out of a dam. Suppose that, during day \(n\), \(Y_n\) units of water flow into the dam from outside sources such as rainfall and river flow. At the end of each day, water is released from the dam according to the following rule: If the water content of the dam is greater than \(a\), then the amount \(a\) is released. If it is less than or equal to \(a\), then the total contents of the dam are released. The capacity of the dam is \(C\), and once at capacity any additional water that attempts to enter the dam is assumed lost. Thus, for instance, if the water level at the beginning of day \(n\) is \(x\), then the level at the end of the day (before any water is released) is \(\min(x + Y_n, C)\). Let \(S_n\) denote the amount of water in the dam immediately after the water has been released at the end of day \(n\). Assuming that the \(Y_n, n \geq 1\), are independent and identically distributed, show that \(\{S_n, n \geq 1\}\) is a random walk with reflecting barriers at 0 and \(C-a\).



7.2 Let \(X_1, \dots, X_n\) be equally likely to be any of the \(n!\) permutations of \((1,2,\dots, n)\). Argue that,
$$P\{\sum_{j=1}^njX_j \leq a\} = P\{\sum_{j=1}^njX_j\geq n(n+1)^2/2 -a\}$$


Since \((X_1, \dots, X_n)\) is a uniformly random permutation of \((1, \dots, n)\), so is \((n+1-X_1, \dots, n+1-X_n)\). Hence
$$\begin{align}
P\{\sum_{j=1}^njX_j \leq a \} &= P\{\sum_{j=1}^nj(n+1-X_j) \leq a\} \\
&= P\{n(n+1)^2/2 - \sum_{j=1}^njX_j \leq a\} \\
&= P\{\sum_{j=1}^njX_j\geq n(n+1)^2/2 -a\} \end{align}$$
using \(\sum_{j=1}^n j(n+1) = n(n+1)^2/2\).
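Since the identity concerns a finite uniform distribution, it can be verified exhaustively for a small \(n\) (the choice \(n = 5\) is arbitrary):

```python
from itertools import permutations

# Exhaustive check of the symmetry identity for a small n.
n = 5
total = n * (n + 1) ** 2 // 2
sums = [sum(j * x for j, x in enumerate(p, start=1))
        for p in permutations(range(1, n + 1))]

ok = all(
    sum(s <= a for s in sums) == sum(s >= total - a for s in sums)
    for a in range(min(sums), max(sums) + 1)
)
print(ok)
```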


7.3 For the simple random walk compute the expected number of visits to state \(k\).


Suppose \(p > 1/2\) and the walk starts at state \(i\). When \(i \leq k\), the walk reaches \(k\) with probability
$$P_{ik} = 1$$
and the probability of ever returning to \(k\) is \(f_{kk} = 2(1-p)\). Each visit to \(k\) is the last with probability \(1 - f_{kk}\), so the number of visits is geometric with mean
$$E = \frac{1}{1 - f_{kk}} = \frac{1}{2p - 1}$$
When \(i > k\), the walk ever reaches \(k\) with probability
$$P_{ik} = \Big(\frac{1-p}{p}\Big)^{i - k}$$
so the expected number of visits is \(E = P_{ik}/(2p-1)\). The case \(p < 1/2\) is symmetric, and when \(p = 1/2\) the walk is recurrent and the expected number of visits is infinite.


7.4 Let \(X_1, X_2, \dots, X_n\) be exchangeable. Compute \(E[X_1|X_{(1)}, X_{(2)}, \dots, X_{(n)}]\), where \(X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}\) are the \(X_i\) in ordered arrangement.


Since the \(X_i\) are exchangeable, conditional on the order statistics, \(X_1\) is equally likely to be any of the \(X_{(i)}\). Thus,
$$E[X_1|X_{(1)}, X_{(2)}, \dots, X_{(n)}] = \frac{1}{n}\sum_{i=1}^nX_{(i)}$$


7.6 An ordinary deck of cards is randomly shuffled and then the cards are exposed one at a time. At some time before all the cards have been exposed you must say “next”, and if the next card exposed is a spade then you win and if not then you lose. For any strategy, show that at the moment you call “next” the conditional probability that you win is equal to the conditional probability that the last card is spade. Conclude from this that the probability of winning is 1/4 for all strategies.


Let \(X_n\) indicate whether the \(n\)th card is a spade and let \(Z_n\) be the proportion of spades among the cards remaining after the \(n\)th card is exposed. Then \(E|Z_n| < \infty\) and
$$E[Z_{n+1}|Z_1, \dots , Z_n] = \frac{(52 -n)Z_n - 1}{52 -n-1}Z_n + \frac{(52-n)Z_n}{52-n-1}(1 - Z_n) = Z_n$$
Hence \(\{Z_n\}\) is a martingale.
Note that \(X_{52} = Z_{51}\). Thus
$$\begin{align}
E[X_{n+1}|X_1, \dots, X_n]&= E[X_{n+1}|Z_1, \dots, Z_n] = Z_n\\
&= E[Z_{51}|Z_1, \dots, Z_n] = E[X_{52}|X_1, \dots, X_n]
\end{align}$$
so at any moment the conditional probability that the next card is a spade equals the conditional probability that the last card is a spade. Finally, let \(N\) be the stopping time corresponding to saying “next” for a given strategy. By the martingale stopping theorem,
$$\begin{align}
P\{\text{Win}\} &= E[X_{N+1}] = E[E[X_{N+1}|N]] \\
&= E[Z_N] = E[Z_1] = 1/4
\end{align}$$
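A simulation sketch (my own addition): pick one concrete strategy and check that its winning probability is 1/4. The particular stopping rule below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# Arbitrary strategy: say "next" as soon as the remaining deck is more than
# one quarter spades; otherwise we are forced to bet on the last card.
# Any strategy should win with probability 1/4.
trials = 50_000
wins = 0
for _ in range(trials):
    deck = np.zeros(52, dtype=int)
    deck[:13] = 1                 # 1 marks a spade
    rng.shuffle(deck)
    call = 51                     # default: bet on the last card
    spades_left = 13
    for i in range(51):
        if spades_left / (52 - i) > 0.25:
            call = i              # bet on the next (i-th) card
            break
        spades_left -= deck[i]    # expose card i and update the count
    wins += deck[call]
print(wins / trials)   # ~ 0.25
```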


7.7 Argue that the random walk for which \(X_i\) only assumes the values \(0, \pm 1, \dots, \pm M\) and \(E[X_i] = 0\) is null recurrent.



7.8 Let \(S_n, n \geq 0\) denote a random walk for which
$$\mu = E[S_{n+1} – S_n] \neq 0$$
Let, for \(A >0, B > 0\),
$$N = min\{n: S_n \geq A \text{ or } S_n \leq -B\}$$
Show that \(E[N] < \infty\). (Hint: Argue that there exists a value \(k\) such that \(P\{S_k > A +B\} > 0\). Then show that \(E[N] \leq kE[G]\), where \(G\) is an appropriately defined geometric random variable.)


Suppose \(\mu > 0\) and let \(k > (A+B)/\mu\). Then
$$\begin{align}
k\mu - A - B &= E[S_k - A - B] \\
&= E[S_k - A - B|S_k > A + B]P\{S_k > A+B\} \\&+ E[S_k - A - B|S_k \leq A + B]P\{S_k \leq A+B\}\\
&\leq E[S_k - A - B|S_k > A + B]P\{S_k > A+B\}
\end{align}$$ Since the left side is positive, \(p \equiv P\{S_k > A +B\} > 0\). Let \(Y_i = \sum_{j = ik+1}^{(i+1)k} X_j\); the \(Y_i\) are i.i.d. with \(P\{Y_i > A + B\} = p\), and if any \(Y_i\) exceeds \(A+B\), the walk must have left \((-B, A)\) by time \((i+1)k\). Hence \(N \leq kG\), where \(G\) is a geometric random variable with mean \(1/p\), so \(E[N] \leq k/p < \infty\).


7.10 In the insurance ruin problem of Section 7.4 explain why the company will eventually be ruined with probability 1 if \(E[Y] \geq cE[X]\).



7.11 In the ruin problem of Section 7.4 let \(F\) denote the interarrival distribution of claims and let \(G\) be the distribution of the size of a claim. Show that \(p(A)\), the probability that a company starting with \(A\) units of assets is ever ruined, satisfies
$$p(A) = \int_0^{\infty}\int_0^{A + ct}p(A + ct -x)dG(x)dF(t) + \int_0^{\infty}\bar{G}(A+ct)dF(t)$$


Conditioning on the time and size of the first claim,
$$\begin{align}
p(A) &= P\{\text{ruined at the first claim}\} + P\{\text{not ruined at the first claim, ruined later}\} \\
&= \int_0^{\infty}\bar{G}(A+ct)dF(t) + \int_0^{\infty}\int_0^{A + ct}p(A + ct -x)dG(x)dF(t)
\end{align}$$


7.12 For a random walk with \(\mu = E[X] > 0\) argue that, with probability 1,
$$\frac{u(t)}{t} \to \frac{1}{\mu} \quad \text{as } t \to \infty$$
where \(u(t)\) equals the number of \(n\) for which \(0 \leq S_n \leq t\).



7.13 Let \(S_n = \sum_{i=1}^n X_i\) be a random walk and let \(\lambda_i, i > 0\), denote the probability that a ladder height equals \(i\) — that is, \(\lambda_i = P\){first positive value of \(S_n\) equals \(i\)}.
(a) Show that if
$$P\{X_i = j\} = \left\{\begin{array}{ll}
q \quad j = -1 \\
\alpha_j \quad j \geq 1 \\ \end{array}\right. \\
q + \sum_{j=1}^{\infty} \alpha_j = 1$$
then \(\lambda_i\) satisfies
$$\lambda_i = \alpha_i + q(\lambda_{i+1} + \lambda_1\lambda_i) \quad i > 0$$
(b) If \(P\{X_i = j\} = 1/5, j = -2,-1,0,1,2\), show that
$$\lambda_1 = \frac{1+\sqrt{5}}{3+\sqrt{5}} \quad \lambda_2 = \frac{2}{3+\sqrt{5}} $$



7.14 Let \(S_n, n\geq 0\), denote a random walk in which \(X_i\) has distribution \(F\). Let \(G(t,s)\) denote the probability that the first value of \(S_n\) that exceeds \(t\) is less than or equal to \(t+s\). That is,
$$G(t,s) = P\{\text{first sum exceeding } t \text{ is } \leq t+s\}$$
Show that
$$G(t, s) = F(t + s) – F(t) + \int_{-\infty}^t G(t-y, s)dF(y)$$


Given \(X_1 = y\), the walk \(\{S_n, n \geq 1\}\) is distributed as \(y\) plus an independent copy of itself. Thus if \(A\) = {first sum exceeding \(t\) is \(\leq t + s\)}, conditioning on \(X_1\) gives
$$\begin{align}
G(t,s) &\equiv P\{A\} = E[P\{A|X_1\}] \\
&= F(t+s) – F(t) + \int_{-\infty}^t G(t-y, s)dF(y)
\end{align}$$


Solutions to Stochastic Processes Ch.6

随机过程-第二版》(英文电子版Sheldon M. Ross 答案整理,此书作为随机过程经典教材没有习题讲解,所以将自己的学习过程记录下来,部分习题解答参考了网络,由于来源很多,出处很多也不明确,无法一一注明,在此一并表示感谢!希望对大家有帮助,水平有限,不保证正确率,欢迎批评指正,转载请注明出处。
Solutions to Stochastic Processes Sheldon M. Ross Second Edition(pdf)
Since there is no official solution manual for this book, I
handcrafted the solutions by myself. Some solutions were referred from web, most copyright of which are implicit, can’t be listed clearly. Many thanks to those authors! Hope these solutions be helpful, but No Correctness or Accuracy Guaranteed. Comments are welcomed. Excerpts and links may be used, provided that full and clear credit is given.

6.1 If \(\{Z_n, n \geq 1 \}\) is a martingale show that, for \(1 \leq k < n,\)
$$E[Z_n|Z_1, \dots, Z_k] = Z_k$$


$$\begin{align}
E[Z_n|Z_1, \dots, Z_k] &= E[E[Z_n|Z_1, \dots, Z_{n-1}]|Z_1, \dots, Z_k]\\
&= E[Z_{n-1}|Z_1, \dots, Z_k]\\
&\dots\\
&=E[Z_{k+1}|Z_1, \dots, Z_k] = Z_k
\end{align}$$


6.3 Verify that \(X_n/m^n, n\geq 1\), is a martingale when \(X_n\) is the size of the \(n\)th generation of a branching process whose mean number of offspring per individual is \(m\).


$$\begin{align}
E[\frac{X_{n+1}}{m^{n+1}}|\frac{X_1}{m}, \dots, \frac{X_n}{m^n}]&=E[\frac{\sum_{i=1}^{X_n}Z_i}{m^{n+1}}|\frac{X_1}{m}, \dots, \frac{X_n}{m^n}] \\
&= E[\frac{mX_n}{m^{n+1}}|\frac{X_1}{m}, \dots, \frac{X_n}{m^n}] \\
&= \frac{X_n}{m^n}
\end{align}$$


6.4 Consider the Markov chain which at each transition either goes up 1 with probability \(p\) or down 1 with probability \(q = 1 - p\). Argue that \((q/p)^{S_n}, n \geq 1\), is a martingale.


$$\begin{align}
E[(q/p)^{S_{n+1}}|(q/p)^{S_1}, \dots, (q/p)^{S_n}]&= E[(q/p)^{S_n}(q/p)^{X_{n+1}}|(q/p)^{S_1}, \dots, (q/p)^{S_n}] \\
&= (q/p)^{S_n} E[(q/p)^{X_{n+1}}] \\
&= (q/p)^{S_n}
\end{align}$$


6.5 Consider a Markov chain \(\{X_n, n \geq 0\}\) with \(p_{NN} = 1\). Let \(P(i)\) denote the probability that this chain eventually enters state \(N\) given that it starts in state \(i\). Show that \(\{P(X_n), n \geq 0\}\) is a martingale.


$$\begin{align}
E[P(X_{n+1})|P(X_0), \dots, P(X_n)]&= \sum_k P_{X_n, k}P(k) \\
&=P(X_n)
\end{align} $$


6.6 Let \(X(n)\) denote the size of the nth generation of a branching process, and let \(\pi_0\) denote the probability that such a process, starting with a single individual, eventually goes extinct. Show that \(\{\pi_0^{X_n}, n \geq 0\}\) is a martingale.


$$\begin{align}
E[\pi_0^{X_{n+1}}|\pi_0^{X_1}, \dots, \pi_0^{X_n}]&= \prod_{i=1}^{X_n}E[\pi_0^{Z_i}|\pi_0^{X_1}, \dots, \pi_0^{X_n}] \\
&=\pi_0^{X_n}
\end{align}$$


6.9 A process \(\{Z_n, n \geq 1\}\) is said to be a reverse, or backwards, martingale if \(E|Z_n| < \infty\) for all \(n\) and
$$E[Z_n|Z_{n+1}, Z_{n+2}, \dots] = Z_{n+1}$$
Show that if \(X_i, i \geq 1\), are independent and identically distributed random variables with finite expectation, then \(Z_n = (X_1 + \dots + X_n)/n, n \geq 1\), is a reverse martingale.


$$\begin{align}
&E[Z_n|Z_{n+1}, Z_{n+2}, \dots]\\
=& E[\frac{(n+1)Z_{n+1} – X_{n+1}}{n}|Z_{n+1}, Z_{n+2}, \dots ] \\
=& E[Z_{n+1}|Z_{n+1}, Z_{n+2}, \dots] + \frac{1}{n}E[\frac{\sum_{i=1}^{n+1}X_i}{n+1} – X_{n+1}| Z_{n+1}, Z_{n+2}, \dots] \\
=& Z_{n+1}
\end{align}$$


6.10 Consider successive flips of a coin having probability \(p\) of landing heads. Use a martingale argument to compute the expected number of flips until the following sequences appear:
(a) HHTTHHT
(b) HTHTHTH


(a) Use the standard gambling-team martingale: before each flip a new gambler arrives and bets fairly on the successive letters of the pattern, parlaying winnings. The casino's net gain after \(n\) flips, \(X_n = n - (\text{total paid out so far})\), is a zero-mean martingale. When HHTTHHT first appears at time \(N\), the gambler who started at flip \(N-6\) holds \(p^{-4}q^{-3}\), the gambler who started at flip \(N-2\) (whose letters HHT match the end of the pattern) holds \(p^{-2}q^{-1}\), and all other gamblers have lost their stakes. By the martingale stopping theorem,
$$0 = E[X_N] = E[N] - (p^{-4}q^{-3} + p^{-2}q^{-1})$$
so \(E[N] = p^{-4}q^{-3} + p^{-2}q^{-1}\).
(b) The overlaps of HTHTHTH with itself are HTHTHTH, HTHTH, HTH, and H, so the same argument gives $$E[N] = p^{-4}q^{-3} + p^{-3}q^{-2} + p^{-2}q^{-1} + p^{-1} $$
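Both answers can be checked by simulation for a fair coin, where the theoretical values are \(2^7 + 2^3 = 136\) and \(2^7 + 2^5 + 2^3 + 2 = 170\):

```python
import random

random.seed(6)

def waiting_time(pattern):
    # flip a fair coin until `pattern` appears; return the number of flips
    window = ""
    n = 0
    while True:
        n += 1
        window = (window + random.choice("HT"))[-len(pattern):]
        if window == pattern:
            return n

trials = 10_000
mean_a = sum(waiting_time("HHTTHHT") for _ in range(trials)) / trials
mean_b = sum(waiting_time("HTHTHTH") for _ in range(trials)) / trials
print(mean_a, mean_b)   # theory: 136 and 170
```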


6.11 Consider a gambler who at each gamble is equally likely to either win or lose 1 unit. Suppose the gambler will quit playing when his winnings are either \(A\) or \(-B, A > 0, B > 0\). Use an appropriate martingale to show that the expected number of bets is \(AB\).


Similar to Example 6.2(C): imagine two players who start with \(A\) and \(B\) coins, and at each bet a coin passes from one player to the other according to the gambler's result. The game ends when one player is eliminated, and by the martingale argument of that example the expected number of bets is \(AB\).


6.12 In Example 6.2(C), find the expected number of stages until one of the players is eliminated.


Let \(s = x + y + z\) be the total number of coins. It is easy to verify that
$$M_n = X_nY_nZ_n + ns/3$$
is a martingale. At the stopping time \(T\) one player has been eliminated, so \(X_TY_TZ_T = 0\) and \(M_T = Ts/3\). By the martingale stopping theorem,
$$E[Ts/3] = E[M_T] = E[M_0] = xyz$$
hence \(E[T] = 3xyz/s\).


6.13 Let \(Z_n = \prod_{i=1}^n X_i\), where \(X_i, i \geq 1\) are independent random variables with$$P\{X_i = 2\} = P\{X_i = 0\} = 1/2$$
Let \(N = Min\{n: Z_n = 0\}\). Is the martingale stopping theorem applicable? If so, what would you conclude? If not, why not?


Not applicable. Here \(N\) is a stopping time with \(E[N] = 2 < \infty\), yet \(Z_N = 0\) while \(E[Z_1] = 1\), so the conclusion \(E[Z_N] = E[Z_1]\) of the stopping theorem fails. The theorem does not apply because none of its sufficient conditions hold: the increments \(|Z_{n+1} - Z_n| = Z_n\) are unbounded, and the stopped process is not uniformly integrable.


6.14 Show that the equation
$$e^{\beta} – e^{-\beta} = 2\beta e^{\beta^2/2}$$
has no solution when \(\beta \neq 0\).
(Hint: Expand in a power series.)


$$\begin{align}
&e^{\beta} - e^{-\beta} - 2\beta e^{\beta^2/2}\\
=& 2\sum_{i=0}^{\infty} \beta^{2i+1}\Big(\frac{1}{(2i+1)!} - \frac{1}{i!2^i}\Big) \end{align}$$
The \(i = 0\) term vanishes, and \((2i+1)! > i!2^i\) for \(i \geq 1\), so every remaining coefficient is strictly negative. Hence for \(\beta > 0\) the sum is strictly negative, and since the left side is an odd function of \(\beta\), it is strictly positive for \(\beta < 0\). Thus it equals zero iff \(\beta = 0\).


6.15 Let \(X\) denote the number of heads in \(n\) independent flips of a fair coin. Show that:
(a) \(P\{X- n/2 \geq a\} \leq exp\{-2a^2/n\}\)
(b) \(P\{X- n/2 \leq -a\} \leq exp\{-2a^2/n\}\)


Let \(X_i\) denote the number of heads in the first \(i\) flips. Then \(Z_i = X_i - i/2\) is a zero-mean martingale with \(-1/2 \leq Z_i - Z_{i-1} \leq 1/2\), and both bounds follow from Azuma's inequality.


6.16 Let \(X\) denote the number of successes in \(n\) independent Bernoulli trials, with trial \(i\) resulting in a success with probability \(p_i\). Give an upper bound for \(P\{|X - \sum_{i=1}^n p_i| \geq a\}\).


\(Z_i = X_i - \sum_{k=1}^ip_k\) is a zero-mean martingale with \(-p_i \leq Z_i - Z_{i-1} \leq 1 - p_i\), so each increment has range 1. From Azuma's inequality,
$$ P\{X - \sum_{i=1}^n p_i \geq a\} \leq \exp\{-2a^2/n\} \\
P\{X - \sum_{i=1}^n p_i \leq -a\} \leq \exp\{-2a^2/n\} $$
hence \(P\{|X - \sum_{i=1}^n p_i| \geq a\} \leq 2\exp\{-2a^2/n\}\).


6.17 Suppose that 100 balls are to be randomly distributed among 20 urns. Let \(X\) denote the number of urns that contains at least five balls. Derive an upper bound for \(P\{X \geq 15\}\).


$$E[X] = 20[1 - \sum_{k=0}^4{100 \choose k}(\frac{1}{20})^k(\frac{19}{20})^{100-k}]$$
Regard \(X = h(x_1, \dots, x_{100})\) as a function of the urns \(x_i\) chosen by the 100 balls. Moving a single ball changes \(X\) by at most 1; that is, if \(x\) and \(y\) differ in at most one coordinate, then \(|h(x) - h(y)| \leq 1\). Thus, by Azuma's inequality,
$$P\{X - E[X] \geq a\} \leq \exp\{-a^2/200\}$$
Letting \(a = 15 - E[X]\) gives the bound.


6.18 Let \(p\) denote the probability that a random selection of 88 people will contain at least three with the same birthday. Use Azuma’s inequality to obtain an upper bound on \(p\). (It can be shown that \(p \approx 0.50\)).


Similarly to Problem 6.17, let \(X\) be the number of days that are the birthday of at least three of the 88 people; then
$$p \leq \exp\{-a^2/176\} \\
a = 1 - E[X]\\
E[X] = 365[1 - \sum_{i=0}^2{88 \choose i}(\frac{1}{365})^i(\frac{364}{365})^{88-i}]$$


6.19 For binary n-vectors \(\boldsymbol x\) and \(\boldsymbol y\) (meaning that each coordinate of these vectors is either 0 or 1) define the distance between them by
$$\rho(\boldsymbol x, \boldsymbol y) = \sum_{i=1}^n |x_i – y_i|$$
(This is called the Hamming distance). Let \(A\) be a finite set of such vectors, and let \(X_1, \dots, X_n\) be independent random variables that are each equally likely to be either 0 or 1. Set $$D = min_{y \in A} \rho(\boldsymbol X, \boldsymbol y)$$
and let \(\mu = E[D]\). In terms of \(\mu\), find an upper bound for \(P\{D \geq b\}\) when \(b > \mu\)


It is easy to see that \(D\), viewed as a function \(h\) of the coordinates \(X_1, \dots, X_n\), satisfies \(|h(x) - h(y)| \leq 1\) whenever \(x\) and \(y\) differ in one coordinate. Thus,
$$P\{D – E[D] \geq a\} \leq exp\{-a^2/2n\}$$
Let \(a + E[D] = b\), we get
$$P\{D \geq b\} \leq exp\{-(b – \mu)^2/2n\}$$


6.21 A group of \(2n\) people, consisting of \(n\) men and \(n\) women, are to be independently distributed among \(m\) rooms. Each woman chooses room \(j\) with probability \(p_j\) while each man chooses it with probability \(q_j, j = 1, \dots, m\). Let \(X\) denote the number of rooms that will contain exactly one man and one woman.
(a) Find \(\mu = E[X]\)
(b) Bound \(P\{|X – \mu| > b\}\) for \(b > 0\)


(a) \(E[X] = \sum_{i=1}^m{n \choose 1}p_i(1-p_i)^{n-1}{n \choose 1}q_i(1-q_i)^{n-1} \)
(b) \(X\) is a function of the \(2n\) independent room choices, and changing any one person's choice changes \(X\) by at most 2. Azuma's inequality with increment bound 2 then gives \(P\{|X - \mu| > b\} \leq 2\exp\{-b^2/16n\}\).


6.22 Let \(\{X_n, n \geq 0\}\) be a Markov process for which \(X_0\) is uniform on (0, 1) and, conditional on \(X_n\),
$$X_{n+1} = \left\{\begin{array}{ll}
\alpha X_n + 1 – \alpha \quad \text{with probability } X_n \\
\alpha X_n \quad \text{with probability } 1 – X_n \\ \end{array}\right. $$
where \(0 < \alpha < 1\). Discuss the limiting properties of the sequence \(X_n, n \geq 1\).


$$E[X_{n+1}|X_n] = X_n(\alpha X_n + 1 - \alpha) + (1 - X_n)\alpha X_n = X_n$$
Thus \(\{X_n\}\) is a bounded martingale, so \(X_n \to X_\infty\) with probability 1. The increment \(|X_{n+1} - X_n|\) equals \((1-\alpha)(1 - X_n)\) or \((1-\alpha)X_n\), so convergence forces \(X_n(1 - X_n) \to 0\); hence \(X_\infty \in \{0, 1\}\). Since \(E[X_\infty] = E[X_0] = 1/2\), the limit is a Bernoulli random variable with parameter 1/2.


6.23 An urn initially contains one white and one black ball. At each stage a ball is drawn and is then replaced in the urn along with another ball of the same color. Let \(Z_n\) denote the fraction of balls in the urn that are white after the nth replication.
(a) Show that \(\{Z_n, n \geq 1\}\) is a martingale.
(b) Show that the probability that the fraction of white balls in the urn is ever as large as 3/4 is at most 2/3.


(a) $$\begin{align}
&E[Z_{n+1}|Z_1, \dots, Z_n] \\
=&Z_n\frac{(n+2)Z_n + 1}{n+3} + (1-Z_n)\frac{(n+2)Z_n}{n+3}\\
=&Z_n\end{align}$$
(b) By the martingale maximal inequality, $$P\{\max(Z_1, \dots, Z_n) > 3/4\} \leq 4E[Z_n]/3 = 4E[Z_1]/3 = 2/3$$
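A simulation sketch (my own addition) of part (b); the number of trials and the truncation at 200 stages are arbitrary, and truncation can only undercount, so the observed frequency should respect the 2/3 bound:

```python
import random

random.seed(7)

# Simulate the urn for a fixed number of stages and record whether the
# white fraction ever reaches 3/4.
trials = 20_000
steps = 200
hits = 0
for _ in range(trials):
    white, total = 1, 2
    for _ in range(steps):
        if random.random() < white / total:
            white += 1            # drew white: add another white ball
        total += 1
        if white / total >= 0.75:
            hits += 1
            break
print(hits / trials)   # stays below 2/3
```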


6.24 Consider a sequence of independent tosses of a coin and let \(P\{head\}\) be the probability of a head on any toss. Let \(A\) be the hypothesis that \(P\{head\} = a\) and let \(B\) be the hypothesis that \(P\{head\} = b, 0 < a,b < 1\). Let \(X_i\) denote the outcome of the ith toss and let
$$Z_n = \frac{P\{X_1, \dots, X_n|A\}}{P\{X_1, \dots, X_n|B\}}$$
Show that if \(B\) is true, then:
(a) \(Z_n\) is a martingale, and
(b) \(\lim_{n \to \infty} Z_n\) exists with probability 1.
(c) If \(b \neq a\), what is \(\lim_{n \to \infty} Z_n\)?


(a) Let \(X_i=1\) if the outcome of the ith toss is head, \(X_i = 0\) if it is tail. From independence,
$$Z_n = \frac{P\{X_1, \dots, X_n|A\}}{P\{X_1, \dots, X_n|B\}} = Z_{n-1}\frac{P\{X_n|A\}}{P\{X_n|B\}}$$ Now,
$$E[\frac{P\{X_n|A\}}{P\{X_n|B\}}] = E[\frac{a^{X_n}(1-a)^{1-X_n}}{b^{X_n}(1-b)^{1-X_n}}] = \frac{a}{b}b + \frac{1-a}{1-b}(1-b) = 1$$
(b) Since \(Z_n\) is a nonnegative martingale, from Corollary 6.4.7, the limit exists.
(c) From (a), it is clear that \(Z_n\) can have a finite (random or constant) non-zero limit only if
$$\lim_{n \to \infty}\frac{P\{X_n|A\}}{P\{X_n|B\}} = 1$$
with probability 1. However for \(a\neq b\) it is not possible. Thus the limit is 0.


6.25 Let \(Z_n, n \geq 1\), be a sequence of random variables such that \(Z_1 \equiv 1\) and given \(Z_1, \dots, Z_{n-1}, Z_n\) is a Poisson random variable with mean \(Z_{n-1}, n > 1\). What can we say about \(Z_n\) for \(n\) large?


\(Z_n\) is a nonnegative martingale with \(E[Z_n] = E[Z_1] = 1\), so \(Z_n\) converges with probability 1. Since the \(Z_n\) are integer valued, convergence means \(Z_n\) is eventually constant at some value \(c\); but a Poisson random variable with mean \(c > 0\) equals \(c\) with probability less than 1, so the only possible limit is 0. Thus \(Z_n \to 0\) with probability 1, even though \(E[Z_n] \equiv 1\).


6.26 Let \(X_1, X_2, \dots\) be independent and such that
$$P\{X_i = -1\} = 1 - 1/2^i\\
P\{X_i = 2^i - 1\} = 1/2^i, \quad i \geq 1$$
Use this sequence to construct a zero mean martingale \(Z_n\) such that \(\lim_{n \to \infty}Z_n = -\infty\) with probability 1. (Hint: Make use of the Borel-Cantelli lemma)


It is easy to see that \(Z_n = \sum_{i=1}^nX_i\) is a martingale with mean 0, since \(E[X_i] = -(1 - 1/2^i) + (2^i - 1)/2^i = 0\). And
$$\sum_{i=1}^{\infty} P\{X_i = 2^i - 1\} = \sum_{i=1}^{\infty} 1/2^i = 1 < \infty$$
From the Borel-Cantelli lemma, with probability 1 only finitely many of the events \(\{X_i = 2^i - 1\}\) occur; that is, \(X_i = -1\) for all sufficiently large \(i\). Thus \(\lim_{n \to \infty}Z_n = -\infty\) with probability 1.


A continuous-time process \(\{X(t), t \geq 0\}\) is said to be a martingale if \(E[|X(t)|] < \infty\) for all \(t\) and, for all \(s < t\),
$$E[X(t)|X(u), 0 \leq u \leq s] = X(s)$$

6.27 Let \(\{X(t), t \geq 0\}\) be a continuous-time Markov chain with infinitesimal transition rates \(q_{ij}, i \neq j\). Give the conditions on the \(q_{ij}\) that result in \(\{X(t), t \geq 0\}\) being a continuous-time martingale.



Do Problems 6.28-6.30 under the assumptions that (a) the continuous-time analogue of the martingale stopping theorem is valid, and (b) any needed regularity conditions are satisfied.

6.28 Let \(\{X(t), t \geq 0\}\) be a nonhomogeneous Poisson process with intensity function \(\lambda(t), t \geq 0\). Let \(T\) denote the time at which the nth event occurs. Show that
$$n = E[\int_0^T \lambda(t)dt]$$



6.29 Let \(\{X(t), t \geq 0\}\) be a continuous-time Markov chain that will, in finite expected time, enter an absorbing state \(N\). Suppose that \(X(0) = 0\) and let \(m_i\) denote the expected time the chain is in state \(i\). Show that for \(j \neq 0, j\neq N\):
(a) \(E[\)number of times the chain leaves state \(j]=v_j m_j\), where \(1/v_j\) is the mean time the chain spends in \(j\) during a visit.
(b) \(E[\)number of times it enters state \(j] = \sum_{i \neq j}m_i q_{ij}\).
(c) Argue that
$$v_jm_j = \sum_{i \neq j}m_iq_{ij}, \quad j \neq 0 \\
v_0m_0 = 1 + \sum_{i \neq 0}m_iq_{i0}$$



6.30 Let \(\{X(t), t \geq 0\}\) be a compound Poisson process with Poisson rate \(\lambda\) and component distribution \(F\). Define a continuous-time martingale related to this process.



Solutions to Stochastic Processes Ch.5


5.1 A population of organisms consists of both male and female members. In a small colony any particular male is likely to mate with any particular female in any time interval of length \(h\) with probability \(\lambda h + o(h)\). Each mating immediately produces one offspring, equally likely to be male or female. Let \(N_1(t)\) and \(N_2(t)\) denote the number of males and females in the population at time \(t\). Derive the parameters of the continuous-time Markov chain \(\{N_1(t), N_2(t)\}\).


$$q[(n, m), (n+1, m)] = q[(n,m),(n, m+1)] = \frac{\lambda mn}{2}$$


5.2 Suppose that a one-celled organism can be in one of two states–either \(A\) or \(B\). An individual in state \(A\) will change to state \(B\) at an exponential rate \(\alpha\), an individual in state \(B\) divides into two new individuals of type \(A\) at an exponential rate \(\beta\). Define an appropriate continuous-time Markov chain for a population of such organisms and determine the appropriate parameters for this model.


$$\begin{align}
q[(n, m), (n-1, m+1)] &= \alpha n\\
q[(n, m), (n+2, m-1)] &= \beta m
\end{align}$$


5.4 For a pure birth process with birth parameters \(\lambda_n, n \geq 0\), compute the mean, variance, and moment generating function of the time it takes the population to go from size \(1\) to size \(N\).


Let \(T_i\) denote the time for the population to go from size \(i\) to size \(i+1\). The \(T_i\) are independent with \(T_i \sim \text{Exponential}(\lambda_i)\), and the time to go from size 1 to size \(N\) is \(\sum_{i=1}^{N-1}T_i\). Hence
$$
E[\sum_{i=1}^{N-1}T_i] = \sum_{i=1}^{N-1} E[T_i] = \sum_{i=1}^{N-1} 1/\lambda_i \\
Var(\sum_{i=1}^{N-1}T_i) = \sum_{i=1}^{N-1} Var(T_i) = \sum_{i=1}^{N-1} 1/\lambda_i^2 \\
\phi(t) = \prod_{i=1}^{N-1}\frac{\lambda_i}{\lambda_i - t}, \quad t < \min_i \lambda_i$$
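The mean and variance formulas can be sanity-checked by simulation; the Yule rates \(\lambda_n = n\lambda\) and the parameters below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(8)

# Yule process: lambda_n = n * lam, so the time from size 1 to size N is a
# sum of independent Exponential(i * lam) variables, i = 1, ..., N-1.
lam, N = 1.0, 10                     # arbitrary example parameters
trials = 200_000

times = sum(rng.exponential(1.0 / (i * lam), trials) for i in range(1, N))
mean_theory = sum(1.0 / (i * lam) for i in range(1, N))
var_theory = sum(1.0 / (i * lam) ** 2 for i in range(1, N))
print(times.mean(), mean_theory)
print(times.var(), var_theory)
```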


5.6 Verify the formula
$$A(t) = a_0 + \int_0^t X(s)ds,$$
given in Example 5.3(B).


Each of the \(X(s)\) individuals alive at time \(s\) ages at unit rate: the initial individual's age grows from \(a_0\) to \(a_0 + t\), and the other \(X(s) - 1\) individuals contribute the integral term. Hence
$$A(t) = a_0 + t + \int_0^t [X(s) - 1] ds = a_0 + \int_0^t X(s)ds$$


5.7 Consider a Yule process starting with a single individual and suppose that with probability \(P(s)\) an individual born at time \(s\) will be robust. Compute the distribution of the number of robust individuals born in \((0, t)\).


Let \(N(t)\) denote the number of individuals born in \((0, t)\), and \(R(t)\) of which are robust. Then
$$\begin{align}
&P\{R(t) = k\} \\
=& \sum_{n=k}^{\infty}P\{R(t) = k|N(t) = n\}P\{N(t) = n\}\\
=& \sum_{n=k}^{\infty}{n \choose k}[\int_0^t f(s)P(s)ds]^k[1 - \int_0^t f(s)P(s)ds]^{n-k} e^{-\lambda t}(1 - e^{-\lambda t})^{n}
\end{align}$$
where \(f(s) = \frac{\lambda e^{-\lambda(t-s)}}{1 – e^{-\lambda t}}\)


5.9 Prove Lemma 5.4.2.


$$\begin{align}
P_{ij}(t+s) &= P\{X(t+s) = j | X(0) = i\} \\
&= \sum_{k=0}^{\infty} P\{X(t+s) = j, X(t) = k | X(0) = i\} \\
&= \sum_{k=0}^{\infty} P\{X(t+s) = j| X(t) = k, X(0) = i\}P\{X(t) = k | X(0) = i\}\\
&= \sum_{k=0}^{\infty} P\{X(t+s) = j| X(t) = k\}P\{X(t) = k | X(0) = i\}\\
&= \sum_{k=0}^{\infty} P_{kj}(s)P_{ik}(t)\\
\end{align}$$


5.10 Let \(P(t) = P_{00}(t)\).
(a) Find \(\lim_{t \to 0}\frac{1 – P(t)}{t}\).
(b) Show that \(P(t)P(s) \leq P(t+s) \leq 1 – P(s) + P(s)P(t)\).
(c) Show \(|P(t) – P(s)| \leq 1 – P(t-s), \quad s < t\), and conclude that \(P\) is continuous.


(a) Since \(1 - P(t) = 1 - P_{00}(t)\) is the probability of having left state 0 by time \(t\), \(\lim_{t \to 0}\frac{1 - P(t)}{t} = v_0\), the rate of leaving state 0.
(b) $$\begin{align}
P(t+s) &= \sum_{k=0}^{\infty}P_{0k}(t)P_{k0}(s) \geq P(t)P(s)\\
P(t+s) &= P_{00}(s)P_{00}(t) + \sum_{k=1}^{\infty}P_{0k}(s)P_{k0}(t) \\
&\leq P(s)P(t) + \sum_{k=1}^{\infty}P_{0k}(s) \\
&= 1 - P(s) + P(s)P(t)
\end{align}$$
(c) From (b) we have
$$P(s)P(t-s) \leq P(t) \leq 1 - P(t-s) + P(t-s)P(s)\\
P(s)[P(t-s) - 1] \leq P(t) - P(s) \leq [1 - P(s)][1 - P(t-s)] \\
P(t-s) - 1 \leq P(t) - P(s) \leq 1 - P(t-s) $$
From the inequality, we have
$$\lim_{t \to s}|P(t) – P(s)| \leq \lim_{t \to s} 1 – P(t-s) = 1 – P(0) = 0$$


5.11 For the Yule process
(a) Verify that
$$P_{ij}(t) = {j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}$$
satisfies the forward and backward equations.
(b) Suppose that \(X(0) = 1\) and that at time \(T\) the process stops and is replaced by an emigration process in which departures occur in a Poisson process of rate \(\mu\). Let \(\tau\) denote the time taken after \(T\) for the population to vanish. Find the density function of \(\tau\) and show that \(E[\tau] = e^{\lambda T}/\mu\).



(a) $$\begin{align}
P_{ij}^{\prime}(t) &= {j-1 \choose i-1} \lambda e^{-i\lambda t} [-i(1 - e^{-\lambda t})^{j-i} + (j-i)e^{-\lambda t}(1 - e^{-\lambda t})^{j-i-1}] \\
&= {j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i)
\end{align}$$
For the backward equations,
$$\begin{align}
&\sum_{k \neq i} q_{ik}P_{kj}(t) - v_iP_{ij}(t) \\
=&q_{i,i+1}P_{i+1, j}(t) - v_iP_{ij}(t) \\
=&i\lambda[{j - 1 \choose i}e^{-(i+1)\lambda t}(1 - e^{-\lambda t})^{j-i-1} - {j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}]\\
=&i\lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1} [{j-1 \choose i}e^{-\lambda t} - {j-1 \choose i - 1}(1 - e^{-\lambda t})] \\
=&{j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i) = P_{ij}^{\prime}(t)
\end{align}$$
For the forward equations,
$$\begin{align}
&\sum_{k \neq j} q_{kj}P_{ik}(t) - v_jP_{ij}(t) \\
=&q_{j-1,j}P_{i, j-1}(t) - v_jP_{ij}(t) \\
=&\lambda[(j-1){j - 2 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-1-i} - j{j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}]\\
=&\lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1} [(j-1){j-2 \choose i - 1} - j{j-1 \choose i - 1}(1 - e^{-\lambda t})] \\
=&{j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i) = P_{ij}^{\prime}(t)
\end{align}$$
(b) $$\begin{align}
f_{\tau}(t) &= \sum_{n=1}^{\infty} f_{\tau|X(T)=n}(t)P\{X(T) = n\} \\
&= \sum_{n=1}^{\infty} \mu e^{-\mu t}\frac{(\mu t)^{n-1}}{(n-1)!}e^{-\lambda T}(1 - e^{-\lambda T})^{n-1} \\
&= \mu e^{-\lambda T}\exp(-\mu t e^{-\lambda T})
\end{align}$$
since given \(X(T) = n\), \(\tau\) is the sum of \(n\) independent Exponential(\(\mu\)) times, i.e. Gamma(\(n, \mu\)), and \(X(T)\) is geometric with parameter \(e^{-\lambda T}\). Thus \(\tau\) is exponential with rate \(\mu e^{-\lambda T}\). Alternatively, let \(t_i\) denote the time between the \((i-1)\)st and \(i\)th departure; by Wald's equation,
$$E[\tau] = E[\sum_{i=1}^{X(T)} t_i] = E[X(T)]E[t_i] = e^{\lambda T}/\mu $$
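Part (b) can be sanity-checked numerically: \(X(T)\) is geometric with parameter \(e^{-\lambda T}\), so averaging the conditional means \(n/\mu\) over the geometric distribution should reproduce \(e^{\lambda T}/\mu\) (the parameter values below are hypothetical).

```python
import math

# X(T) is geometric with parameter exp(-lam*T), so E[X(T)] = exp(lam*T);
# each departure takes an independent Exponential(mu) time
lam, T, mu = 0.8, 1.5, 2.0
pgeo = math.exp(-lam * T)

# E[tau] = sum_n E[tau | X(T) = n] P{X(T) = n} = sum_n (n/mu) * geometric pmf
etau = sum(n / mu * pgeo * (1 - pgeo) ** (n - 1) for n in range(1, 5000))
print(etau, math.exp(lam * T) / mu)
```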


5.12 Suppose that the “state” of the system can be modeled as a two-state continuous-time Markov chain with transition rates \(v_0 = \lambda, v_1 = \mu\). When the state of the system is \(i\), “events” occur in accordance with a Poisson process with rate \(\alpha_i, i = 0, 1\). Let \(N(t)\) denote the number of events in \((0, t)\).
(a) Find \(\lim_{t \to \infty}N(t)/t\).
(b) If the initial state is state 0, find \(E[N(t)]\).


(a) Define a renewal reward process as follows. A renewal occurs when the process enters state 0, and the reward in a cycle is the number of events in that cycle. The length \(X_n\) of the \(n\)th cycle is the sum of the times spent in states 0 and 1, say \(X_{0n}\) and \(X_{1n}\) respectively. Thus \(E[X_n] = E[X_{0n}] + E[X_{1n}] = \lambda^{-1} + \mu^{-1}\). Further, if \(R_n\) is the reward in the \(n\)th cycle, with \(R_{0n}\) and \(R_{1n}\) earned in states 0 and 1 respectively,
$$\begin{align}
E[R_n] &= E[R_{0n}] + E[R_{1n}] = E[E[R_{0n}| X_{0n}] + E[R_{1n}|X_{1n}]] \\
&= E[\alpha_0X_{0n}] + E[\alpha_1X_{1n}]\\
&= \alpha_0/\lambda + \alpha_1/\mu.
\end{align}$$Thus
$$\lim_{t \to \infty} \frac{N(t)}{t} = \lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{\alpha_0\mu + \alpha_1\lambda}{\lambda + \mu}$$

(b) Let \(T_i(t)\) denote the time spent in state \(i\) by the time \(t\).
$$\begin{align}
E[N(t)] &= \alpha_0E[T_0(t)] + \alpha_1(t - E[T_0(t)])\\
&= (\alpha_0 – \alpha_1)E[T_0(t)] + \alpha_1t \\
&= \alpha_1t + (\alpha_0 – \alpha_1)\int_0^t P_{00}(s)ds
\end{align}$$
By the forward equation, \(P_{00}^{\prime}(t) = - (\lambda + \mu)P_{00}(t) + \mu\), with initial condition \(P_{00}(0) = 1\). Solving this differential equation gives
$$\begin{align}
P_{00}(t) &= \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}e^{-(\lambda + \mu)t} \\
E[N(t)] &= \frac{\alpha_0\mu + \alpha_1\lambda}{\lambda + \mu}t + \frac{\alpha_1 – \alpha_0}{(\lambda + \mu)^2}\lambda(e^{-(\lambda + \mu)t} – 1).
\end{align}$$
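The closed form in (b) can be checked deterministically, with hypothetical rates, by Euler-integrating the forward equation and the integral for \(E[N(t)]\):

```python
import math

lam, mu = 1.0, 2.0      # transition rates v_0, v_1 (hypothetical)
a0, a1 = 3.0, 5.0       # event rates in states 0 and 1 (hypothetical)
t_end, n_steps = 2.0, 200000
dt = t_end / n_steps

# integrate P00'(t) = -(lam + mu) P00(t) + mu with P00(0) = 1,
# accumulating the integral of P00 as we go
p00, integral = 1.0, 0.0
for _ in range(n_steps):
    integral += p00 * dt
    p00 += (-(lam + mu) * p00 + mu) * dt

en_numeric = a1 * t_end + (a0 - a1) * integral
en_closed = ((a0 * mu + a1 * lam) / (lam + mu) * t_end
             + (a1 - a0) * lam / (lam + mu) ** 2
             * (math.exp(-(lam + mu) * t_end) - 1))
print(en_numeric, en_closed)
```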


5.13 Consider a birth and death process with birth rates \(\{\lambda_n\}\) and death rates \(\{\mu_n\}\). Starting in state \(i\), find the probability that the first \(k\) events are all births.


$$\prod_{n=i}^{i+k-1}\frac{\lambda_n}{\lambda_n + \mu_n}$$


5.15 Consider a population in which each individual independently gives birth at an exponential rate \(\lambda\) and dies at an exponential rate \(\mu\). In addition, new members enter the population in accordance with a Poisson process with rate \(\theta\). Let \(X(t)\) denote the population size at time \(t\).
(a) What type of process is \(\{X(t), t \geq 0\}\)?
(b) What are its parameters?
(c) Find \(E[X(t)|X(0)=i]\).


(a)(b) This is a linear birth/death process with immigration, having parameters \(\mu_n = n\mu\) and \(\lambda_n = n\lambda + \theta\).
(c) Note that
$$E[X(t+h)|X(0)] = E[X(t)|X(0)] + (\lambda – \mu)E[X(t)|X(0)]h + \theta h + o(h)$$
Defining \(M(t) \equiv E[X(t)|X(0)]\) we get the differential equation:
$$M^{\prime}(t) = (\lambda - \mu)M(t) + \theta$$Solving with \(M(0) = i\) gives
$$M(t) = \left\{\begin{array}{ll}
\theta t + i \quad \lambda = \mu \\
(i+\frac{\theta}{\lambda - \mu})e^{(\lambda-\mu)t}-\frac{\theta}{\lambda-\mu} \quad \text{otherwise} \\ \end{array}\right. $$
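A quick numeric check of both cases, with hypothetical parameter values, by Euler-integrating the differential equation for \(M(t)\):

```python
import math

def mean_population(lam, mu, theta, i, t, steps=200000):
    # Euler-integrate M'(t) = (lam - mu) M(t) + theta with M(0) = i
    m, dt = float(i), t / steps
    for _ in range(steps):
        m += ((lam - mu) * m + theta) * dt
    return m

def closed_form(lam, mu, theta, i, t):
    if lam == mu:
        return theta * t + i
    r = theta / (lam - mu)
    return (i + r) * math.exp((lam - mu) * t) - r

# hypothetical parameter values, covering both cases
for lam, mu in [(1.0, 1.0), (0.5, 1.5)]:
    print(mean_population(lam, mu, 2.0, 3, 1.0),
          closed_form(lam, mu, 2.0, 3, 1.0))
```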


5.16 In Example 5.4(D), find the variance of the number of males in the population at time \(t\).


$$\begin{align}
&E[Y^2(t+h)|X(t), Y(t)]\\
=& [Y(t) + 1]^2\lambda(1-p)X(t)h + [Y(t) - 1]^2vY(t)h\\ &+ Y^2(t)[1 -\lambda(1-p)X(t)h - vY(t)h] + o(h)\\
=& Y^2(t) - 2vhY^2(t) + 2\lambda(1-p)hY(t)X(t) + vY(t)h + \lambda(1-p)X(t)h + o(h)
\end{align}$$
Let \(M_{Y2}(t)\) denote \(E[Y^2(t)]\). Subtracting \(Y^2(t)\), dividing by \(h\), taking expectations, and letting \(h \to 0\),
$$M_{Y2}^{\prime}(t) = 2\lambda(1-p)M_{XY}(t) + \lambda(1-p)M_X(t) - 2vM_{Y2}(t) + vM_Y(t)$$
where \(M_{XY}(t) = E[X(t)Y(t)]\), \(M_X(t) = E[X(t)]\), and \(M_Y(t) = E[Y(t)]\). The variance is then \(Var(Y(t)) = M_{Y2}(t) - M_Y^2(t)\).


5.17 Let \(A\) be a specified set of states of a continuous-time Markov chain and \(T_i(t)\) denote the amount of time spent in \(A\) during the time interval \([0,t]\) given that the chain begins in state \(i\). Let \(Y_1, \dots, Y_n\) be independent exponential random variables with mean \(\lambda\). Suppose the \(Y_i\) are independent of the Markov chain, and set \(t_i(n) = E[T_i(Y_1 + \dots + Y_n)]\).
(a) Derive a set of linear equations for \(t_i(1), i \geq 0\).
(b) Derive a set of linear equations for \(t_i(n)\) in terms of the other \(t_j(n)\) and \(t_i(n-1)\).
(c) When \(n\) is large, for what value of \(\lambda\) is \(t_i(n)\) a good approximation of \(E[T_i(t)]\).



5.18 Consider a continuous-time Markov chain with \(X(0) = 0\). Let \(A\) denote a set of states that does not include 0 and set \(T = Min\{t > 0: X(t) \in A\}\). Suppose that \(T\) is finite with probability 1. Set \(q_i = \sum_{j \in A}q_{ij}\), and consider the random variable \(H = \int_0^Tq_{X(t)}dt\), called the random hazard.
(a) Find the hazard rate function of \(H\), That is, find \(\lim_{h \to 0} P\{s < H < s+h|H>s\}/h\)(Hint: Condition on the state of the chain at the time \(\tau\) when \(\int_0^Tq_{X(t)}dt = s\)).
(b) What is the distribution of \(H\)?



5.19 Consider a continuous-time Markov chain with stationary probabilities \(\{P_i, i \geq 0\}\), and let \(T\) denote the first time that the chain has been in state 0 for \(t\) consecutive time units. Find \(E[T|X(0)=0]\).



5.20 Each individual in a biological population is assumed to give birth at an exponential rate \(\lambda\) and to die at an exponential rate \(\mu\). In addition, there is an exponential rate of increase \(\theta\) due to immigration. However, immigration is not allowed when the population size is \(N\) or larger.
(a) Set this up as a birth and death model.
(b) If \(N=3\), \(\theta=\lambda=1\), \(\mu=2\), determine the proportion of time that immigration is restricted.



5.21 A small barbershop, operated by a single barber, has room for at most two customers. Potential customers arrive at a Poisson rate of three per hour, and the successive service times are independent exponential random variables with mean 1/4 hour.
(a) What is the average number of customers in the shop?
(b) What is the proportion of potential customers that enter the shop?
(c) If the barber could work twice as fast, how much more business would she do?



5.22 Find the limiting probabilities for the M/M/s system and determine the condition needed for these to exist.



5.23 If \(\{X(t), t \geq 0\}\) and \(\{Y(t), t \geq 0\}\) are independent time-reversible Markov chains, show that the process \(\{X(t), Y(t), t \geq 0\}\) is also.



5.24 Consider two M/M/1 queues with respective parameters \(\lambda_i, \mu_i, i=1,2\). Suppose they both share the same waiting room, which has finite capacity \(N\). (That is, whenever this room is full all potential arrivals to either queue are lost.) Compute the limiting probability that there will be \(n\) people at the first queue (1 being served and \(n-1\) in the waiting room) when \(n > 0\), and \(m\) at the second. (Hint: Use the result of Problem 5.23.)



5.25 What can you say about the departure process of the stationary M/M/1 queue having finite capacity.



5.26 In the stochastic population model of Section 5.6.2:
(a) Show that \(P(\underline{n})q(\underline{n},D_j\underline{n}) = P(D_j\underline{n})q(D_j\underline{n},\underline{n})\), where \(P(\underline{n})\) is given by (5.6.4) with \(\alpha_j = (\lambda/jv)(v/\mu)^j\).
(b) Let \(D(t)\) denote the number of families that die out in \((0,t)\). Assuming that the process is in steady state at time \(t = 0\), what type of stochastic process is \(\{D(t), t \geq 0\}\)? What if the population is initially empty at \(t = 0\)?



5.27 Complete the proof of the conjecture in the queueing network model of Section 5.7.1



5.28 \(N\) customers move about among \(r\) servers. The service times at server \(i\) are exponential at rate \(\mu_i\) and when a customer leaves server \(i\) it joins the queue (if there are any waiting–or else it enters service) at server \(j, j \neq i\), with probability \(1/(r-1)\). Let the state be \((n_1, \dots, n_r)\) when there are \(n_i\) customers at server \(i, i = 1, \dots, r\). Show the corresponding continuous-time Markov chain is time reversible and find the limiting probabilities.



5.29 Consider a time-reversible continuous-time Markov chain having parameters \(v_i, P_{ij}\) and having limiting probabilities \(P_j, j \geq 0\). Choose some state–say state 0–and consider the new Markov chain, which makes state 0 an absorbing state. That is, reset \(v_0\) to equal 0. Suppose now at time points chosen according to a Poisson process with rate \(\lambda\), Markov chains–all of the above type (having 0 as an absorbing state)–are started with the initial states chosen to be \(j\) with probability \(P_{0j}\). All the existing chains are assumed to be independent. Let \(N_j(t)\) denote the number of chains in state \(j, j > 0\), at time \(t\)
(a) Argue that if there are no initial chains, then \(N_j(t), j > 0\), are independent Poisson random variables.
(b) In steady state argue that the vector process \(\{(N_1(t), N_2(t), \dots)\}\) is time reversible with stationary probabilities
$$P(\underline{n}) = \prod_{j=1}^{\infty}e^{-\alpha_j}\frac{\alpha_j^{n_j}}{n_j!}, \quad \text{for } \underline{n} = (n_1, n_2, \dots),$$
where \(\alpha_j = \lambda P_j/P_0v_0\).



5.30 Consider an \(M/M/\infty\) queue with channels (servers) numbered \(1, 2, \dots\) On arrival, a customer will choose the lowest numbered channel that is free. Thus, we can think of all arrivals as occurring at channel 1. Those who find channel 1 busy overflow and become arrivals at channel 2. Those finding both channels 1 and 2 busy overflow channel 2 and become arrivals at channel 3, and so on.
(a) What fraction of time is channel 1 busy?
(b) By considering the corresponding M/M/2 loss system, determine the fraction of time that channel 2 is busy.
(c) Write down an expression for the fraction of time channel \(c\) is busy for arbitrary \(c\).
(d) What is the overflow rate from channel \(c\) to channel \(c+1\)? Is the corresponding overflow process a Poisson process? A renewal process? Express briefly.
(e) If the service distribution were general rather than exponential, which (if any) of your answer to (a)-(d) would change? Briefly explain.



5.31 Prove Theorem 5.7.1



5.32 (a) Prove that a stationary Markov process is reversible if, and only if, its transition rates satisfy
$$q(j_1, j_2)q(j_2, j_3)\dots q(j_{n-1}, j_n)q(j_n, j_1)=q(j_1, j_n)q(j_n, j_{n-1})\dots q(j_3, j_2)q(j_2, j_1) $$for any finite sequence of state \(j_1, j_2, \dots, j_n\)
(b) Argue that it suffices to verify that the equality in (a) holds for sequences of distinct states.
(c) Suppose that the stream of customers arriving at a queue forms a Poisson process of rate \(v\) and that there are two servers who possibly differ in efficiency. Specifically, suppose that a customer’s service time at server \(i\) is exponentially distributed with rate \(\mu_i\), for \(i=1, 2\), where \(\mu_1 + \mu_2 > v\). If a customer arrives to find both servers free, he is equally likely to be allocated to either server. Define an appropriate continuous-time Markov chain for this model, show that it is time reversible, and determine the limiting probabilities.



5.33 The work in a queueing system at any time is defined as the sum of the remaining service times of all customers in the system at that time. For the M/G/1 in steady state compute the mean and variance of the work in the system.



5.36 Consider the two-state Markov chain of Example 5.8(A), with \(X(0) = 0\).
(a) Compute \(Cov(X(s), X(t))\).
(b) Let \(S_0(t)\) denote the occupation time of state 0 by \(t\). Use (a) and (5.8.4) to compute \(Var(S_0(t))\).



5.37 Let \(Y_i = X_{(i)} – X_{(i-1)}, i = 1, \dots, n+1\) where \(X_{(0)} = 0\), \(X_{(n+1)} = t\), and \(X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}\) are the ordered values of a set of \(n\) independent uniform \((0,t)\) random variables. Argue that \(P\{Y_i \leq y_i, i=1, \dots, n+1\}\) is a symmetric function of \(y_1, \dots, y_n\).



Solutions to Stochastic Processes Ch.4


4.1 A store that stocks a certain commodity uses the following \((s, S)\) ordering policy; if its supply at the beginning of a time period is \(x\), then it orders
$$
\left\{
\begin{array}{ll}
0 \quad x \geq s \\
S-x \quad x < s \\
\end{array}
\right. $$
The order is immediately filled. The daily demands are independent and equal \(j\) with probability \(\alpha_j\). All demands that cannot be immediately met are lost. Let \(X_n\) denote the inventory level at the end of the nth time period. Argue that \(\{X_n, n \geq 1\}\) is a Markov chain and compute its transition probabilities.


Let \(Y_n\) denote the demand on the \(n\)th day. Since demands that cannot be met are lost, the inventory level satisfies
$$
X_n = \left\{
\begin{array}{ll}
(X_{n-1} - Y_n)^+ \quad X_{n-1} \geq s \\
(S-Y_n)^+ \quad X_{n-1} < s \\
\end{array}
\right.
$$
Since the \(Y_n\) are independent and identically distributed, \(X_n\) depends on the past only through \(X_{n-1}\), so \(\{X_n, n \geq 1\}\) is a Markov chain. For \(j \geq 1\) its transition probabilities are
$$P_{ij} = \left\{
\begin{array}{ll}
\alpha_{i -j} \quad i \geq s, i \geq j \\
\alpha_{S-j} \quad i < s \\
0 \quad i \geq s, i < j
\end{array}
\right.
$$
while \(P_{i0} = \sum_{k \geq i}\alpha_k\) for \(i \geq s\) and \(P_{i0} = \sum_{k \geq S}\alpha_k\) for \(i < s\).


4.2 For a Markov chain prove that
$$P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} = P\{X_n = j | X_{n_k} = i_k\}$$
whenever \(n_1 < n_2 < \dots < n_k < n\).


$$\begin{align}
&P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} \\
=&\frac{P_{i_kj}^{n-n_k}\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} – n_l}}{\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} – n_l}} \\
=&P_{i_kj}^{n-n_k} \\
=&P\{X_n = j | X_{n_k} = i_k\}
\end{align}$$


4.3 Prove that if the number of states is \(n\), and if state \(j\) is accessible from state \(i\), then it is accessible in \(n\) or fewer steps.


\(j\) is accessible from \(i\) if, for some \(k \geq 0\), \(P_{ij}^k > 0\). Now
$$P_{ij}^k = \sum\prod_{m=0}^{k-1} P_{i_mi_{m+1}}$$
where the sum is taken over all sequences \((i_0, i_1, \dots, i_k)\in\{1, \dots, n\}^{k+1}\) of states with \(i_0 = i\) and \(i_k = j\). Now, \(P_{ij}^k > 0\) implies that at least one term is positive, say \(\prod_{m=0}^{k-1}P_{i_mi_{m+1}} > 0\). If a state \(s\) occurs twice in this sequence, say \(i_a = i_b = s\) with \(a < b\) and \((a, b) \neq (0, k)\), then the shortened sequence \((i_0, \dots, i_{a-1}, i_b, \dots, i_k)\) also has positive probability. Repeating this, the sequence can be reduced to one, say \(j_0, \dots, j_r\), in which no state is repeated except possibly \(j_0 = j_r\). If \(i \neq j\) this gives \(r \leq n-1\), so \(j\) is accessible in at most \(n-1\) steps; if \(i = j\), the repetition \(j_0 = j_r\) cannot be removed, so \(r\) may equal \(n\). In either case \(j\) is accessible from \(i\) in \(n\) or fewer steps.


4.4 Show that$$P_{ij}^n = \sum_{k=0}^n f_{ij}^kP_{jj}^{n-k}$$


$$\begin{align}
&P_{ij}^n\\
=&\sum_{k=0}^n P\{\text{visit state j at step n }|\text{first visit state j at step k}\}P\{\text{first visit j at step k}\} \\
=&\sum_{k=0}^n f_{ij}^kP_{jj}^{n-k}
\end{align}$$


4.5 For states \(i, j, k, k \neq j\), let
$$P_{ij/k}^n = P\{X_n = j, X_l \neq k, l = 1, \dots, n-1|X_0 = i\}.$$
(a) Explain in words what \(P_{ij/k}^n\) represents.
(b) Prove that, for \(i \neq j, P_{ij}^n = \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k}\).


(a) \(P_{ij/k}^n\) is the probability of being in \(j\) at time \(n\), starting in \(i\) at time 0, while avoiding \(k\).
(b) Let \(N\) denote the last time the chain is in state \(i\) before time \(n\). Then since \(0 \leq N \leq n\),
$$\begin{align}
&P_{ij}^n = P\{X_n = j | X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, N=k|X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, X_k = i, X_l \neq i: k + 1 \leq l \leq n | X_0 = i\} \\
=& \sum_{k=0}^n P\{X_n = j, X_l \neq i : k + 1 \leq l \leq n|X_0 = i, X_k = i\}P\{X_k = i|X_0=i\}\\
=& \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k}
\end{align}$$


4.6 Show that the symmetric random walk is recurrent in two dimensions and transient in three dimensions.


For the symmetric random walk in \(d\) dimensions, Stirling's approximation gives
$$P_{00}^{2n} \sim \left(\frac{1}{\sqrt{\pi n}}\right)^d$$
where \(d\) is the dimension. Then we get,
$$
\sum_{n=1}^{\infty} P_{00}^n = \left\{
\begin{array}{ll}
\sum_{n=1}^{\infty} \frac{1}{n\pi} = \infty \quad d=2 \\
\sum_{n=1}^{\infty} \frac{1}{n^{3/2}} < \infty \quad d=3 \\
\end{array}
\right.
$$Thus the result follows from Proposition 4.2.3.
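The \(d = 2\) case rests on the identity \(P_{00}^{2n} = [{2n \choose n}(\frac{1}{2})^{2n}]^2\) (the walk decomposes into two independent one-dimensional walks along the diagonals), from which the \((\pi n)^{-1}\) asymptotic follows. A small sketch verifying this identity by exact dynamic programming:

```python
import math

def return_prob_2d(steps):
    # exact probability that the 2-D simple random walk (4 directions,
    # probability 1/4 each) is at the origin after `steps` steps
    probs = {(0, 0): 1.0}
    for _ in range(steps):
        nxt = {}
        for (x, y), pr in probs.items():
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                key = (x + dx, y + dy)
                nxt[key] = nxt.get(key, 0.0) + pr / 4.0
        probs = nxt
    return probs.get((0, 0), 0.0)

for n in (1, 2, 3, 4):
    exact = (math.comb(2 * n, n) / 2 ** (2 * n)) ** 2
    print(n, return_prob_2d(2 * n), exact)
```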


4.7 For the symmetric random walk starting at 0:
(a) What is the expected time to return to 0?
(b) Let \(N_n\) denote the number of returns by time \(n\). Show that
$$E[N_{2n}] = (2n + 1){2n \choose n}(\frac{1}{2})^{2n} - 1$$
(c) Use (b) and Stirling’s approximation to show that for \(n\) large \(E[N_n]\) is proportional to \(\sqrt{n}\).


(a) From Equation 3.7.2, we know,
$$P\{Z_1 \neq 0, \dots, Z_{2n-1} \neq 0, Z_{2n} = 0\} = \frac{{2n \choose n}(\frac{1}{2})^{2n}}{2n-1}$$
Then the expected time to return to 0 is
$$\begin{align}
E[T] &= \sum_{k=1}^{\infty}2k\cdot\frac{1}{2k-1}{2k \choose k}(\frac{1}{2})^{2k}\\
&= \sum_{k=0}^{\infty} {2k \choose k}(\frac{1}{2})^{2k} \\
&= \lim_{x \to (1/4)^-}\frac{1}{\sqrt{1 - 4x}} = \infty
\end{align}$$
using the identity \(\frac{2k}{2k-1}{2k \choose k}(\frac{1}{2})^{2k} = {2(k-1) \choose k-1}(\frac{1}{2})^{2(k-1)}\).
(b) Let \(I_{2k}\) denote the indicator of the event that the walk is at 0 at step \(2k\). Then,
$$E[N_{2n}] = \sum_{k=1}^n E[I_{2k}]= \sum_{k=1}^n P_{00}^{2k}$$
We verify the formula by induction. It clearly holds for \(n = 0\). Assume it holds for \(n = k\); then for \(n = k+1\),
$$\begin{align}
E[N_{2k+2}] &= E[N_{2k}] + P_{00}^{2k+2} \\
&= (2k+1){2k \choose k}(\frac{1}{2})^{2k} - 1 + {2k+2 \choose k+1}(\frac{1}{2})^{2k+2}\\
&= (2k + 3){2k+2 \choose k+1}(\frac{1}{2})^{2k+2} - 1
\end{align}$$
(c) By Stirling's approximation, \({2n \choose n}(\frac{1}{2})^{2n} \sim \frac{1}{\sqrt{\pi n}}\), so
$$E[N_{2n}] = (2n + 1){2n \choose n}(\frac{1}{2})^{2n} - 1 \sim \frac{2n}{\sqrt{\pi n}} = \sqrt{\frac{2}{\pi}}\cdot\sqrt{2n}$$
that is, \(E[N_n] \sim \sqrt{2n/\pi}\) is proportional to \(\sqrt{n}\) for large \(n\).
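Parts (b) and (c) can be verified numerically: the partial sums \(\sum_{k=1}^n P_{00}^{2k}\) should match the closed form, and the ratio \(E[N_{2n}]/\sqrt{2n}\) should approach \(\sqrt{2/\pi}\).

```python
import math

def p00(k):
    # P_{00}^{2k} for the symmetric one-dimensional walk
    return math.comb(2 * k, k) / 2 ** (2 * k)

def expected_returns(n):
    # E[N_{2n}] = sum_{k=1}^{n} P_{00}^{2k}
    return sum(p00(k) for k in range(1, n + 1))

def closed_form(n):
    return (2 * n + 1) * math.comb(2 * n, n) / 2 ** (2 * n) - 1

for n in (1, 5, 25):
    print(n, expected_returns(n), closed_form(n))

# part (c): E[N_{2n}] / sqrt(2n) approaches sqrt(2/pi)
n = 20000
print(closed_form(n) / math.sqrt(2 * n), math.sqrt(2 / math.pi))
```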


4.8 Let \(X_1, X_2, \dots\) be independent random variables such that \(P\{X_i = j\} = \alpha_j, j \geq 0\). Say that a record occurs at time \(n\) if \(X_n > max(X_1, \dots, X_{n-1})\), where \(X_0 = -\infty\), and if a record does occur at time \(n\) call \(X_n\) the record value. Let \(R_i\) denote the ith record value.
(a) Argue that \(\{R_i, i \geq 1\}\) is a Markov chain and compute its transition probabilities.
(b) Let \(T_i\) denote the time between the ith and \((i + 1)\)st record. Is \(\{T_i, i \geq 1\}\) a Markov chain? What about \(\{(R_i, T_i), i \geq 1\}\)? Compute transition probabilities where appropriate.
(c) Let \(S_n = \sum_{i=1}^n T_i, n \geq 1\). Argue that \(\{S_n, n \geq 1\}\) is a Markov chain and find its transition probabilities.


(a) $$
P_{ij} = \left\{
\begin{array}{ll}
0 \quad i \geq j \\
\alpha_j/\sum_{k=i+1}^{\infty} \alpha_k \quad i < j \\
\end{array}
\right. $$


4.9 For a Markov chain \(\{X_n, n \geq 0\}\), show that
$$P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} = P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\}$$


$$\begin{align}
& P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} \\
=& \frac{P\{X_k = i_k, \text{for all } k\} }{ P\{X_j = i_j, \text{for all } j \neq k\}} \\
=& \frac{\prod_{j=0}^{\infty} P_{i_ji_{j+1}}}{\prod_{j \neq k-1, k}^{\infty} P_{i_ji_{j+1}}P^2_{i_{k-1}i_{k+1}} } \\
=& \frac{P\{X_{k} = i_{k} | X_{k-1} = i_{k-1}\}P\{X_{k+1} = i_{k+1} | X_{k} = i_{k}\} }{P\{X_{k+1} = i_{k+1} | X_{k-1} = i_{k-1}\}} \\
=&\frac{P\{X_{k} = i_{k} | X_{k-1} = i_{k-1}\}P\{X_{k+1} = i_{k+1} | X_{k} = i_{k}, X_{k-1} = i_{k-1}\} P\{X_{k-1} = i_{k-1}\}}{P\{X_{k+1} = i_{k+1} | X_{k-1} = i_{k-1}\}P\{X_{k-1} = i_{k-1}\} } \\
=& P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\}
\end{align}$$


4.11 If \(f_{ii} < 1\) and \(f_{jj} < 1\), show that:
(a) $$\sum_{n=1}^{\infty} P_{ij}^n < \infty$$
(b) $$f_{ij} = \frac{\sum_{n=1}^{\infty} P_{ij}^n}{1 + \sum_{n=1}^{\infty}P_{jj}^n}$$


(a) Since \(f_{jj} < 1\), state \(j\) is transient, so the expected number of visits to \(j\) is finite from any starting state; in particular, starting from \(i\), \(\sum_{n=1}^{\infty} P_{ij}^n < \infty\).
(b) $$\begin{align}
&E[\text{# of visits j starting from i}] \\
=& \sum_{n=1}^{\infty} P_{ij}^n \\
=&\sum_{n=1}^{\infty}E[\text{# of visits j starting from i}|\text{first visit j at step n}]f_{ij}^n \\
=& \sum_{n=1}^{\infty}f_{ij}^n(1 + E[\text{# of visits j starting from j}]) \\
=& f_{ij}(1 + \sum_{n=1}^{\infty}P_{jj}^n)
\end{align}$$


4.12 A transition probability matrix \(P\) is said to be doubly stochastic if
$$\sum_i P_{ij} = 1 \quad \text{for all } j.$$
That is, the column sums all equal 1. If a doubly stochastic chain has \(n\) states and is ergodic, calculate its limiting probabilities.


Doubly stochastic means that each column, as well as each row, sums to 1, so we may guess that the stationary distribution is uniform over the states. Indeed, \(\pi = (1/n, \dots, 1/n)\) satisfies \(\pi = \pi P\), and by the uniqueness of the stationary distribution of an ergodic chain, the limiting probabilities are \(\pi_j = 1/n\) for all \(j\).


4.15 In the \(M/G/1\) system (Example 4.3(A)) suppose that \(\rho < 1\) and thus the stationary probabilities exist. Compute \(\pi^{\prime}(s)\) and find, by taking the limit as \(s \to 1, \sum_0^{\infty}i\pi_i\).


Differentiating \(\pi(s) = (1 - A^{\prime}(1))(s-1)A(s)/[s - A(s)]\) gives
$$\pi^{\prime}(s) = (1 - A^{\prime}(1))\frac{A(s) - A^2(s) + s(s-1)A^{\prime}(s)}{[s-A(s)]^2}$$
Both the numerator and the denominator vanish to second order at \(s = 1\), so applying L'Hôpital's rule twice, and using \(A^{\prime}(1) = \lambda E[S]\) and \(A^{\prime\prime}(1) = \lambda^2E[S^2]\) (the number of arrivals during a service time \(S\) is Poisson with mean \(\lambda S\)),
$$\sum_0^{\infty}i\pi_i = \lim_{s \to 1} \pi^{\prime}(s) = A^{\prime}(1) + \frac{A^{\prime\prime}(1)}{2[1 - A^{\prime}(1)]} = \lambda E[S] + \frac{\lambda^2E[S^2]}{2(1 - \lambda E[S])}$$
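As a numeric check, consider the M/M/1 special case (exponential service with rate \(\mu\)), where \(\lambda E[S] + \lambda^2E[S^2]/2(1-\lambda E[S])\) reduces to \(\rho/(1-\rho)\); the parameters and truncation levels below are hypothetical choices. Iterating the embedded chain to its stationary distribution and comparing means:

```python
# M/M/1 special case: the number of arrivals during one service is
# geometric, a_j = (mu/(lam+mu)) * (lam/(lam+mu))**j
lam, mu = 1.0, 2.0
K, J = 120, 80                    # state truncation, arrival truncation
a = [(mu / (lam + mu)) * (lam / (lam + mu)) ** j for j in range(J)]

def step(pi):
    # one transition of the embedded chain: X' = max(X - 1, 0) + arrivals
    new = [0.0] * K
    for i, pr in enumerate(pi):
        if pr:
            base = max(i - 1, 0)
            for j, aj in enumerate(a):
                if base + j < K:
                    new[base + j] += pr * aj
    return new

pi = [1.0] + [0.0] * (K - 1)
for _ in range(500):
    pi = step(pi)
mean_numeric = sum(i * pr for i, pr in enumerate(pi)) / sum(pi)

ES, ES2, rho = 1 / mu, 2 / mu ** 2, lam / mu
mean_formula = lam * ES + lam ** 2 * ES2 / (2 * (1 - lam * ES))
print(mean_numeric, mean_formula, rho / (1 - rho))
```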


4.18 Jobs arrive at a processing center in accordance with a Poisson process with rate \(\lambda\). However, the center has waiting space for only \(N\) jobs and so an arriving job finding \(N\) others waiting goes away. At most 1 job per day can be processed, and processing of this job must start at the beginning of the day. Thus, if there are any jobs waiting for processing at the beginning of a day, then one of them is processed that day, and if no jobs are waiting at the beginning of a day then no jobs are processed that day. Let \(X_n\) denote the number of jobs at the center at the beginning of day \(n\).
(a) Find the transition probabilities of the Markov chain \(\{X_n, n \geq 0\}\).
(b) Is this chain ergodic? Explain.
(c) Write the equations for the stationary probabilities.


Let \(p(j) = \lambda^je^{-\lambda}/j!\)
(a) $$\begin{align}
P_{0j} &= \left\{
\begin{array}{ll}
p(j) \quad 0 \leq j < N \\
\sum_{k=N}^{\infty}p(k) \quad j = N \\
\end{array}
\right. \\
P_{ij} &= \left\{
\begin{array}{ll}
p(j-i+1) \quad i – 1 \leq j < N \\
\sum_{k=N – i+1}^{\infty}p(k) \quad j = N \\
\end{array}
\right. \end{align}$$
(b) The chain is irreducible, since \(P_{0j} > 0\) for all \(j\) and \(P_{j0}^{j} \geq (p(0))^j > 0\), and aperiodic, since \(P_{00} > 0\). A finite-state Markov chain that is irreducible and aperiodic is ergodic (it is not possible for all states to be transient, or for any state to be null recurrent).
(c) There is no particularly elegant way to write these equations. Since \(\pi_j = \sum_{k=0}^{j+1}\pi_kP_{kj}\) for \(j < N\), they can be rewritten as a recursion:
$$\pi_{j+1} = \frac{\pi_j(1 - P_{jj}) - \sum_{k=0}^{j-1}\pi_kP_{kj}}{P_{j+1, j}}$$
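A sketch of this recursion for hypothetical values of \(\lambda\) and \(N\); the computed vector can then be checked against the full stationary equations:

```python
import math

lam, N = 1.5, 5                    # hypothetical arrival rate and capacity
p = [lam ** j * math.exp(-lam) / math.factorial(j) for j in range(60)]

def P(i, j):
    # transition probabilities from part (a)
    if i == 0:
        return p[j] if j < N else 1 - sum(p[:N])
    if j < i - 1:
        return 0.0
    return p[j - i + 1] if j < N else 1 - sum(p[:N - i + 1])

# solve the stationary equations by the recursion, then normalize
pi = [1.0] + [0.0] * N
for j in range(N):
    num = pi[j] * (1 - P(j, j)) - sum(pi[k] * P(k, j) for k in range(j))
    pi[j + 1] = num / P(j + 1, j)
total = sum(pi)
pi = [x / total for x in pi]
print(pi)
```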


4.19 Let \(\pi_j, j \geq 0\), be the stationary probabilities for a specified Markov chain.

(a) Complete the following statement: \(\pi_iP_{ij}\) is the proportion of all transitions that . . . .
Let \(A\) denote a set of states and let \(A^c\) denote the remaining states.
(b) Finish the following statement: \(\sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij}\) is the proportion of all transitions that . . . .
(c) Let \(N_n(A, A^c)\) denote the number of the first \(n\) transitions that are from a state in \(A\) to one in \(A^c\); similarly, let \(N_n(A^c, A)\) denote the number that are from a state in \(A^c\) to one in \(A\). Argue that
$$|N_n(A, A^c) – N_n(A^c, A) | \leq 1$$
(d) Prove and interpret the following result:
$$ \sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij} = \sum_{j \in A^c}\sum_{i \in A} \pi_jP_{ji} $$


(a) are from state \(i\) to state \(j\).
(b) are from a state in \(A\) to one in \(A^c\).
(c) The transitions alternate: between any two transitions from \(A\) to \(A^c\) there must be a transition from \(A^c\) back to \(A\), and vice versa. Hence the two counts can differ by at most 1.
(d) Divide the inequality in (c) by \(n\) and let \(n \to \infty\): the long-run rate of transitions from \(A\) to \(A^c\) equals the long-run rate of transitions from \(A^c\) to \(A\). The left side of the identity is the former rate and the right side is the latter, which proves the result.


4.22 Compute the expected number of plays, starting in \(i\), in the gambler’s ruin problem, until the gambler reaches either 0 or \(N\).


From Example 4.4(A), with \(B\) denoting the number of plays until the gambler reaches 0 or \(N\),
$$E[B] = \frac{1}{2p-1}\{\frac{N[1 - (q/p)^i]}{1 - (q/p)^N} - i\}, \quad p \neq 1/2$$
and \(E[B] = i(N-i)\) when \(p = 1/2\).


4.23 In the gambler’s ruin problem show that
\(P\{\)she wins the next gamble|present fortune is \(i\), she eventually reaches \(N\}\)
$$=\left\{
\begin{array}{ll}
p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\
(i+1)/2i \quad p = 1/2 \\
\end{array}
\right. $$


Let \(A\) denote the event that she wins the next gamble and \(B\) the event that she eventually reaches \(N\), both given that her present fortune is \(i\). Then,
$$\begin{align}
P\{A|B\} &= \frac{P\{AB\}}{P\{B\}}\\
&=\left\{\begin{array}{ll}
\frac{p[1- (q/p)^{i+1}]/[1-(q/p)^N]}{[1- (q/p)^{i}]/[1-(q/p)^N]} \quad p \neq 1/2 \\
\frac{(i+1)/2N}{i/N} \quad p = 1/2 \\
\end{array}
\right.\\
&=\left\{
\begin{array}{ll}
p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\
(i+1)/2i \quad p = 1/2 \\
\end{array}
\right.
\end{align}$$


4.24 Let \(T = \{1, \dots, t\}\) denote the transient states of a Markov chain, and let \(Q\) be, as in Section 4.4, the matrix of transition probabilities from states in \(T\) to states in \(T\). Let \(m_{ij}(n)\) denote the expected amount of time spent in state \(j\) during the first \(n\) transitions given that the chain begins in state \(i\), for \(i\) and \(j\) in \(T\). Let \(M_n\) be the matrix whose element in row \(i\), column \(j\) is \(m_{ij}(n)\).
(a) Show that \(M_n = I + Q + Q^2 + \dots + Q^n\).
(b) Show that \(M_n – I + Q^{n+1} = Q[I + Q + Q^2 + \dots + Q^n]\).
(c) Show that \(M_n = (I – Q)^{-1}(I – Q^{n+1})\).


(a) The element of \(Q^k\) in row \(i\), column \(j\) is \(P_{ij}^k\) (with \(Q^0 = I\)), and \(m_{ij}(n) = \sum_{k=0}^n P_{ij}^k\), which is the \((i, j)\) element of \(I + Q + Q^2 + \dots + Q^n\).
(b) From (a), both sides equal \(Q + Q^2 + \dots + Q^{n+1}\).
(c) By (a) and (b), \((I - Q)M_n = I - Q^{n+1}\); since \(T\) consists of transient states, \(I - Q\) is invertible, so \(M_n = (I - Q)^{-1}(I - Q^{n+1})\).


4.25 Consider the gambler’s ruin problem with \(N = 6\) and \(p = 0.7\). Starting in state 3, determine:
(a) the expected number of visits to state 5.
(b) the expected number of visits to state 1.
(c) the expected number of visits to state 5 in the first 7 transitions.
(d) the probability of ever visiting state 1.


Calculate \(M = (I – Q)^{-1}\), then
(a) \(m_{3,5} = 1.3243\) (b) \(m_{3,1} = 0.2432\) (d) \(f_{3,1} = m_{3,1}/m_{1,1} = 0.1717\)
(c) From Problem 4.24 (c), compute \(M_{7} = M(I – Q^8)\), then the (3, 5) element of \(M_7\) is \(0.9932\).
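The numbers above can be reproduced in pure Python by inverting \(I - Q\) with Gauss-Jordan elimination (a sketch; state \(k\) corresponds to row/column \(k-1\)):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv(A):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        d = aug[c][c]
        aug[c] = [x / d for x in aug[c]]
        for r in range(n):
            if r != c:
                f = aug[r][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [row[n:] for row in aug]

p, q, N = 0.7, 0.3, 6
n = N - 1                                     # transient states 1..5
Q = [[p if j == i + 1 else q if j == i - 1 else 0.0
      for j in range(n)] for i in range(n)]
I = [[float(i == j) for j in range(n)] for i in range(n)]
M = mat_inv([[I[i][j] - Q[i][j] for j in range(n)] for i in range(n)])

m35, m31, m11 = M[2][4], M[2][0], M[0][0]
print(round(m35, 4), round(m31, 4), round(m31 / m11, 4))

# part (c): M_7 = M (I - Q^8); its (3, 5) element
Q8 = Q
for _ in range(3):
    Q8 = mat_mul(Q8, Q8)                      # Q^8 by repeated squaring
M7 = mat_mul(M, [[I[i][j] - Q8[i][j] for j in range(n)] for i in range(n)])
print(round(M7[2][4], 4))
```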


4.26 Consider the Markov chain with states \(0, 1, \dots, n\) and transition probabilities
$$P_{0,1} = 1 = P_{n, n-1}, \quad P_{i, i+1} = p = 1 - P_{i, i-1}, \quad 0 < i < n.$$
Let \(\mu_{i,n}\) denote the mean time to go from state \(i\) to state \(n\).
(a) Derive a set of linear equations for the \(\mu_{i, n}\).
(b) Let \(m_i\) denote the mean time to go from state \(i\) to state \(i+1\). Derive a set of equations for \(m_i, i=0, \dots, n-1\), and show how they can be solved recursively, first for \(i=0\), then \(i = 1\), and so on.
(c) What is the relation between \(\mu_{i, n}\) and the \(m_j\).
Starting at state 0, say that an excursion ends when the chain either returns to 0 or reaches state \(n\). Let \(X_j\) denote the number of transitions in the jth excursion (that is, the one that begins at the jth return to 0), \(j \geq 1\)
(d) Find \(E[X_j]\)
(Hint: Relate it to the mean time of a gambler’s ruin problem)
(e) Let \(N\) denote the first excursion that ends in state \(n\), and find \(E[N]\)
(f) Find \(\mu_{0,n}\).
(g) Find \(\mu_{i,n}\).


(a) \(\mu_{i, n} = 1 + p\mu_{i+1,n} + (1-p)\mu_{i-1, n}\) for \(0 < i < n\), with \(\mu_{0,n} = 1 + \mu_{1,n}\) and \(\mu_{n,n} = 0\).
(b) $$\begin{align}
m_i &= p + (1-p)(1 + m_{i-1} + m_{i}), \quad \text{i.e.,} \quad m_i = \frac{1}{p} + \frac{1-p}{p}m_{i-1}\\
m_i + \frac{1}{1 - 2p} &= \frac{1-p}{p}\left(m_{i-1} + \frac{1}{1-2p}\right) \\
m_0 &= 1, \quad m_1 = \frac{2-p}{p}, \quad \dots
\end{align}$$
(c) \(\mu_{i, n} = \sum_{j=i}^{n-1}m_j\)
(d) Each excursion begins with a forced step from 0 to 1 and then proceeds as a gambler's ruin from 1 with absorbing barriers at 0 and \(n\); in particular the \(X_j\) are i.i.d. With \(q = 1-p\), $$E[X_j] = 1 + \frac{1}{2p-1}\left\{\frac{n[1 - (q/p)]}{1 - (q/p)^n} - 1\right\}$$
(e) \(N\) is a geometric random variable with parameter \( a = (1 - q/p)/[1 - (q/p)^n]\), the probability that a gambler starting at 1 reaches \(n\) before 0; thus \(E[N] = 1/a\).
(f) Since \(\{N \geq j\}\) depends only on the first \(j-1\) excursions, Wald's equation gives
$$\mu_{0,n} = E[N]E[X_1] = \sum_{j=0}^{n-1}m_j = \frac{2p(p-1)}{(2p - 1)^2}\left[1 - \left(\frac{1-p}{p}\right)^n\right] - \frac{n}{1-2p}$$
(g) $$\frac{2p(p-1)}{(2p-1)^2}[(\frac{1-p}{p})^i – (\frac{1-p}{p})^n] – \frac{n-i}{1-2p}$$
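A quick numerical cross-check of the linear system in (a) against the closed form in (g), for arbitrary test values of \(n\) and \(p\):

```python
import numpy as np

# Solve the linear system of part (a) for mu_{i,n} and compare with the
# closed form of part (g); n and p below are arbitrary test values.
n, p = 7, 0.6
q = 1 - p
# Unknowns mu_{0,n}, ..., mu_{n-1,n}; mu_{n,n} = 0 is substituted in.
A = np.zeros((n, n))
b = np.ones(n)
A[0, 0], A[0, 1] = 1.0, -1.0      # mu_0 = 1 + mu_1
for i in range(1, n):
    A[i, i] = 1.0                  # mu_i = 1 + p mu_{i+1} + q mu_{i-1}
    if i + 1 < n:
        A[i, i + 1] = -p
    A[i, i - 1] = -q
mu = np.linalg.solve(A, b)

r = q / p
closed = [2*p*(p-1)/(2*p-1)**2 * (r**i - r**n) - (n - i)/(1 - 2*p)
          for i in range(n)]
print(np.allclose(mu, closed))
```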


4.27 Consider a particle that moves along a set of \(m+1\) nodes, labeled \(0, 1, \dots, m\). At each move it either goes one step in the clockwise direction with probability \(p\) or one step in the counterclockwise direction with probability \(1-p\). It continues moving until all the nodes \(1, 2, \dots, m\) have been visited at least once. Starting at node 0, find the probability that node \(i\) is the last node visited, \(i = 1, \dots, m\).


As shown in Example 1.9(B),
$$\begin{align}
&P\{i \text{ is the last node visited}\}\\
=&(1-f_{m-1, m})P\{\text{visit } i-1 \text{ before } i+1\} + f_{1,m-1}P\{\text{visit } i+1 \text{ before } i-1\} \\
=&(1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 - f_{m-i, m-1}) \\
\end{align}$$


4.28 In Problem 4.27, find the expected number of additional steps it takes to return to the initial position after all nodes have been visited.


With \(P_i\) the probability that node \(i\) is the last node visited (Problem 4.27) and \(B_i\) the mean number of steps for the walk to return from node \(i\) to node 0 (a gambler's-ruin mean duration on the \(m+1\) nodes),
$$\begin{align}
P_i &= (1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 - f_{m-i, m-1}) \\
q &= 1 - p\\
B_i &= \frac{1}{2p-1}\left\{ \frac{(m+1)[1 - (q/p)^i]}{1 - (q/p)^{m+1}} - i\right\} \\
E &= \sum_{i=1}^{m} B_iP_i
\end{align}$$


4.29 Each day one of \(n\) possible elements is requested, the ith one with probability \(P_i, i \geq 1, \sum_{i=1}^n P_i = 1\). These elements are at all times arranged in an ordered list that is revised as follows: the element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Define the state at any time to be the list ordering at that time.
(a) Argue that the above is a Markov chain.
(b) For any state \(i_1, \dots, i_n\)(which is a permutation of \(1,2, \dots, n\)), let \(\pi(i_1, \dots, i_n)\) denote the limiting probability. Argue that
$$\pi(i_1, \dots, i_n) = P_{i_1}\frac{P_{i_2}}{1 - P_{i_1}}\dots\frac{P_{i_{n-1}}}{1 - P_{i_1} - \dots - P_{i_{n-2}}}$$


(a) The next list ordering is determined by the current ordering and the element requested, and the requests are independent of the past; hence the process is a Markov chain.
(b) In steady state the list reads in order of most recent requests. Scanning the i.i.d. request sequence backward from the present, the first element encountered is \(i_1\) with probability \(P_{i_1}\); given that, the first element other than \(i_1\) encountered is \(i_2\) with probability \(P_{i_2}/(1 - P_{i_1})\), and so on, which yields the stated product.

4.31 A spider hunting a fly moves between locations 1 and 2 according to a Markov chain with transition matrix \(\begin{bmatrix} 0.7 & 0.3 \\ 0.3 & 0.7\end{bmatrix}\) starting in location 1. The fly, unaware of the spider, starts in location 2 and moves according to a Markov chain with transition matrix \(\begin{bmatrix} 0.4 & 0.6 \\ 0.6 & 0.4\end{bmatrix}\). The spider catches the fly and the hunt ends whenever they meet in the same location.
Show that the progress of the hunt, except for knowing the location where it ends, can be described by a three-state Markov chain where one absorbing state represents hunt ended and the other two that the spider and fly are at different locations. Obtain the transition matrix for this chain.
(a) Find the probability that at time \(n\) the spider and fly are both at their initial locations.
(b) What is the average duration of the hunt?


Let state 1 denote spider at 1, fly at 2; state 2 denote spider at 2, fly at 1; state 3 denote they are at same location. Then the transition matrix is$$
P = \begin{bmatrix} 0.28 & 0.18 & 0.54 \\ 0.18 & 0.28 & 0.54\\ 0 & 0 & 1\end{bmatrix} $$
(a) By diagonalizing the transition matrix, we can compute
$$P^n =
\begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & 1\\ 1 & 0 & 0\end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.1^n & 0\\ 0 & 0 & 0.46^n\end{bmatrix}
\begin{bmatrix} 0 & 0 & 1 \\ -0.5 & 0.5 & 0\\ 0.5 & 0.5 & -1\end{bmatrix} $$
thus the (1, 1) entry gives the desired probability, \((0.1^n + 0.46^n)/2\)
(b) At each step the hunt ends with probability 0.54, so the duration is a geometric random variable with mean \(1/0.54 = 50/27\)
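A short NumPy check of the diagonalization against direct matrix powers:

```python
import numpy as np

# Transition matrix of the hunt chain
# (states: spider@1/fly@2, spider@2/fly@1, caught).
P = np.array([[0.28, 0.18, 0.54],
              [0.18, 0.28, 0.54],
              [0.00, 0.00, 1.00]])

for n in range(1, 10):
    Pn = np.linalg.matrix_power(P, n)
    # (a) both at their initial locations at time n
    assert abs(Pn[0, 0] - (0.1**n + 0.46**n) / 2) < 1e-12

# (b) duration ~ geometric(0.54), mean 1/0.54 = 50/27
print(1 / 0.54, 50 / 27)
```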


4.32 Consider a simple random walk on the integer points in which at each step a particle moves one step in the positive direction with probability \(p\), one step in the negative direction with probability \(p\), and remains in the same place with probability \(q = 1 - 2p\ (0 < p < 1/2)\). Suppose an absorbing barrier is placed at the origin–that is, \(P_{00}=1\)–and a reflecting barrier at \(N\)–that is, \(P_{N, N-1} = 1\)–and that the particle starts at \(n\ (0 < n < N)\).
Show that the probability of absorption is 1, and find the mean number of steps.


Let \(A_i\) denote the probability of absorption for a particle starting at \(i\). Then
$$A_i = pA_{i-1} + pA_{i+1} + (1-2p)A_i, \quad A_0 = 1, \quad A_{N} = A_{N-1}$$
Solving these equations, we get \(A_i = 1\) for all \(i\).
Let \(T_i\) denote the expected number of steps starting at \(i\), then
$$\begin{align}
E[T_i] &= 1 + pE[T_{i-1}] + pE[T_{i+1}] + (1-2p)E[T_i],\\
E[T_0] &= 0, \\
E[T_N] &= 1 + E[T_{N-1}]\end{align}$$
Solving the equations above, we get \(E[T_n] = \frac{n(2N - n + 2p -1)}{2p}\)


4.33 Given that \(\{X_n, n \geq 0\}\) is a branching process:
(a) Argue that either \(X_n\) converges to 0 or infinity.
(b) Show that
$$Var(X_n|X_0 = 1) = \left\{\begin{array}{ll}
\sigma^2\mu^{n-1} \frac{\mu^n – 1}{\mu – 1} \quad \mu \neq 1 \\
n\sigma^2 \quad \mu = 1 \\
\end{array}
\right. $$
where \(\mu\) and \(\sigma^2\) are the mean and variance of the number of offspring an individual has.


(a) If \(P_0 > 0\), then state 0 is absorbing and every positive state is transient, so any finite set of positive states is visited only finitely often. Hence \(X_n\) cannot stay forever in a bounded nonzero range: the population either dies out or its size converges to infinity.
(b) $$\begin{align}
&Var(X_n|X_0 = 1)\\
=& E[Var(X_n|X_{n-1}, X_0=1)] + Var(E[X_n|X_{n-1}, X_0 = 1]) \\
=& \sigma^2 E[X_{n-1}|X_0=1] + Var(\mu X_{n-1}|X_0 = 1) \\
=& \sigma^2\mu^{n-1} + \mu^2Var(X_{n-1}|X_0=1) \\
=& \sigma^2(\mu^{n-1} + \mu^n + \dots + \mu^{2n-2}) \\
=& \left\{\begin{array}{ll}
\sigma^2\mu^{n-1} \frac{\mu^n – 1}{\mu – 1} \quad \mu \neq 1 \\
n\sigma^2 \quad \mu = 1 \\
\end{array}
\right. \end{align}$$
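The variance formula can be verified exactly for a concrete offspring law, say Bin(2, p) (an arbitrary test choice), by convolving pmfs to get the exact distribution of \(X_n\):

```python
import numpy as np

# Exact distribution of X_n for Bin(2, p) offspring, starting from X_0 = 1,
# by convolving the offspring pmf; compare Var(X_n) with
# sigma^2 mu^{n-1} (mu^n - 1)/(mu - 1).
p = 0.3
offspring = np.array([(1-p)**2, 2*p*(1-p), p**2])   # Bin(2, p) pmf
mu, sig2 = 2*p, 2*p*(1-p)

dist = np.array([0.0, 1.0])          # X_0 = 1
for n in range(1, 6):
    # P(X_n = k) = sum_j P(X_{n-1} = j) * P(sum of j offspring counts = k)
    new = np.zeros(1)
    conv = np.array([1.0])           # offspring-sum pmf for j = 0 parents
    for j, pj in enumerate(dist):
        if len(conv) > len(new):
            new = np.pad(new, (0, len(conv) - len(new)))
        new[:len(conv)] += pj * conv
        conv = np.convolve(conv, offspring)
    dist = new
    k = np.arange(len(dist))
    var = (dist * k**2).sum() - (dist * k).sum()**2
    assert abs(var - sig2 * mu**(n-1) * (mu**n - 1) / (mu - 1)) < 1e-10
print("variance formula verified for n = 1..5")
```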


4.34 In a branching process the number of offspring per individual has a binomial distribution with parameters \(2, p\). Starting with a single individual, calculate:
(a) the extinction probability;
(b) the probability that the population becomes extinct for the first time in the third generation.
Suppose that, instead of starting with a single individual, the initial population size \(Z_0\) is a random variable that is Poisson distributed with mean \(\lambda\). Show that, in this case, the extinction probability is given, for \(p > 1/2\), by
$$exp\{\lambda(1-2p)/p^2\}$$


(a) The extinction probability \(\pi_0\) is the smallest nonnegative solution of $$\pi_0 = \sum_{k=0}^2 \pi_0^k{2 \choose k}p^k(1-p)^{2-k},$$ which gives
$$\pi_0 = \left\{\begin{array}{ll}
1 \quad p \leq 0.5 \\
(1 - 1/p)^2 \quad p > 0.5 \\
\end{array}
\right. $$
(b) Let \(\phi_n(s) = E[s^{X_n}]\). Then \(\phi_n(s) = \phi_1(\phi_{n-1}(s))\), and it’s easy to see that \(\phi_1(s) = (sp + 1 -p)^2, P\{X_n = 0\} = \phi_n(0)\). Thus,
$$\begin{align}
&\phi_3(0) – \phi_2(0) \\
=& \phi_1(\phi_1(\phi_1(0))) – \phi_1(\phi_1(0))\\
=& 4p^2(1-p)^4 + 6p^3(1-p)^5 + 6p^4(1-p)^6 + 4p^5(1-p)^7 + p^6(1-p)^8
\end{align}$$
(c) $$\begin{align}
&\sum_{k=0}^{\infty} (1 - 1/p)^{2k} \frac{\lambda^ke^{-\lambda}}{k!} \\
=& e^{\lambda(1-2p)/p^2} \sum_{k=0}^{\infty} \frac{[\lambda(1 - 1/p)^2]^k}{k!}e^{-\lambda(1 - 1/p)^2 }\\
=&exp\{\lambda(1-2p)/p^2\}
\end{align}$$
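A numerical sketch of (a) and (b) for an arbitrary \(p > 1/2\), iterating the generating function:

```python
# phi_1(s) = (s p + 1 - p)^2 for Bin(2, p) offspring; iterate to get the
# extinction probability and P(first extinct in generation 3); p = 0.7 is
# an arbitrary test value with p > 1/2.
p = 0.7
phi = lambda s: (s * p + 1 - p) ** 2

# (a) extinction probability: limit of phi_n(0), compared with (1 - 1/p)^2
s = 0.0
for _ in range(200):
    s = phi(s)
assert abs(s - (1 - 1/p) ** 2) < 1e-12

# (b) P(first extinct in generation 3) = phi_3(0) - phi_2(0)
phi2 = phi(phi(0.0))
phi3 = phi(phi2)
q = 1 - p
poly = 4*p**2*q**4 + 6*p**3*q**5 + 6*p**4*q**6 + 4*p**5*q**7 + p**6*q**8
assert abs((phi3 - phi2) - poly) < 1e-12
print(round(s, 6), round(phi3 - phi2, 6))
```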


4.35 Consider a branching process in which the number of offspring per individual is Poisson distributed with mean \(\lambda, \lambda > 1\). Let \(\pi_0\) denote the probability that, starting with a single individual, the population eventually becomes extinct. Also, let \(a, a < 1\), be such that$$ae^{-a} = \lambda e^{-\lambda}$$
(a) Show that \(a = \lambda\pi_0\).
(b) Show that, conditional on eventual extinction, the branching process follows the same probability law as the branching process in which the number of offspring per individual is Poisson with mean \(a\).


(a) $$\begin{align}
\pi_0 &= \sum_{k=0}^{\infty} \frac{\pi_0^k\lambda^k}{k!} e^{-\lambda}\\
&= e^{\lambda\pi_0-\lambda}\sum_{k=0}^{\infty}\frac{(\lambda\pi_0)^k}{k!}e^{-\lambda\pi_0}\\
&= e^{\lambda\pi_0-\lambda} \\ \end{align}$$
Thus,
$$ae^{-a} = \lambda e^{-\lambda} = \lambda\pi_0e^{-\lambda\pi_0}$$
Since \(xe^{-x}\) is strictly increasing on \([0, 1)\), and both \(a < 1\) and \(\lambda\pi_0 = \phi'(\pi_0) < 1\) (the derivative of the offspring generating function at its smallest fixed point), it follows that \(a = \lambda\pi_0\).
(b) $$\begin{align}
&P\{X = k| \text{eventually die out}\} \\
=&\frac{P\{X=k\}\pi_0^k}{\pi_0} \\
=&\frac{(\pi_0\lambda)^{k-1}\lambda e^{-\lambda}}{k!}\\
=&\frac{a^ke^{-a}}{k!}
\end{align}$$


4.36 For the Markov chain model of Section 4.6.1, namely,
$$P_{ij} = \frac{1}{i – 1}, \quad j = 1, \dots, i-1, \quad i > 1,$$
suppose that the initial state is \(N \equiv {n \choose m}\), where \(n>m\). Show that when \(n, m\) and \(n-m\) are large the number of steps to reach 1 from state \(N\) has approximately a Poisson distribution with mean
$$m[c\log\frac{c}{c-1} + \log(c-1)],$$
where \(c = n/m\). (Hint: Use Stirling’s approximation.)


From Proposition 4.6.2(iii), the number of steps \(T_N\) is approximately Poisson distributed with mean \(\log N = \log {n \choose m}\).
$$\begin{align}
\log {n \choose m} &= \log n! – \log m! – \log (n-m)! \\
&\sim n\log n – n – m\log m + m -(n-m)\log(n-m) + (n-m) \\
&= n\log\frac{n}{n-m} + m\log\frac{n-m}{m}\\
&= m[c\log\frac{c}{c-1} + \log(c-1)]
\end{align}$$
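A quick check of the approximation against \(\log {n \choose m}\) computed with `lgamma` (the values of \(n, m\) below are arbitrary large test values):

```python
from math import lgamma, log

# Compare log C(n, m) with the approximation m[c log(c/(c-1)) + log(c-1)],
# c = n/m, for large n and m.
def log_binom(n, m):
    return lgamma(n + 1) - lgamma(m + 1) - lgamma(n - m + 1)

n, m = 10_000_000, 4_000_000
c = n / m
approx = m * (c * log(c / (c - 1)) + log(c - 1))
exact = log_binom(n, m)
print(exact, approx, abs(exact - approx) / exact)
```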


4.37 For any infinite sequence \(x_1, x_2, \dots\), we say that a new long run begins each time the sequence changes direction. That is, if the sequence starts 5, 2, 4, 5, 6, 9, 3, 4, then there are three long runs–namely, (5, 2), (4, 5, 6, 9), and (3, 4). Let \(X_1, X_2, \dots\) be independent uniform (0, 1) random variables and let \(I_n\) denote the initial value of the nth long run. Argue that \(\{I_n, n \geq 1\}\) is a Markov chain having a continuous state space with transition probability density given by
$$p(y|x) = e^{1-x} + e^x – e^{|y-x|} – 1.$$


Similarly to Section 4.6.2, if the run containing \(I_n\) is ascending (it ends when a value falls),
$$\begin{align}
&P\{I_{n+1} \in (y, y+dy), L_n = m | I_n = x\} \\
=&\left\{\begin{array}{ll}
\frac{(1-x)^{m-1}}{(m-1)!}dy \quad y < x \\
\frac{(1-x)^{m-1}}{(m-1)!}[1 - (\frac{y-x}{1-x})^{m-1}]dy \quad y > x \\
\end{array}\right.
\end{align} $$
If instead the run is descending (it ends when a value rises),
$$\begin{align}
&P\{I_{n+1} \in (y, y+dy), L_n = m | I_n = x\} \\
=&\frac{x^{m-1}}{(m-1)!}dy\,P\{min(X_1, \dots, X_{m-1}) < y | X_i < x, i = 1, \dots, m-1\} \\
=&\left\{\begin{array}{ll}
\frac{x^{m-1}}{(m-1)!}[1 - (\frac{x-y}{x})^{m-1}]dy \quad y < x \\
\frac{x^{m-1}}{(m-1)!}dy \quad y > x \\
\end{array}\right.
\end{align}$$Thus,
$$\begin{align}
p(y|x) &= \left\{\begin{array}{ll}
\sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!} + \frac{x^{m-1}}{(m-1)!}[1 - (\frac{x-y}{x})^{m-1}]\} \quad y < x \\
\sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!}[1 - (\frac{y-x}{1-x})^{m-1}] + \frac{x^{m-1}}{(m-1)!}\} \quad y > x \\
\end{array}\right. \\
&= e^{1-x} + e^x - e^{|y-x|} - 1
\end{align}$$
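As a sanity check, \(p(y|x)\) should integrate to 1 over \(y \in (0, 1)\) for every \(x\); a short numerical sketch:

```python
import numpy as np

# p(y|x) ought to be a probability density in y on (0, 1); integrate it
# numerically for a few arbitrary values of x.
def p(y, x):
    return np.exp(1 - x) + np.exp(x) - np.exp(np.abs(y - x)) - 1

y = np.linspace(0.0, 1.0, 200_001)
for x in (0.1, 0.37, 0.5, 0.9):
    vals = p(y, x)
    total = ((vals[:-1] + vals[1:]) / 2 * np.diff(y)).sum()  # trapezoid rule
    assert abs(total - 1) < 1e-6
print("p(y|x) integrates to 1 in y")
```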


4.38 Suppose in Example 4.7(B) that if the Markov chain is in state \(i\) and the random variable distributed according to \(q_{ij}\) takes on the value \(j\), then the next state is set equal to \(j\) with probability \(a_j/(a_j + a_i)\) and equal to \(i\) otherwise. Show that the limiting probabilities for this chain are \(\pi_j = a_j/\sum_j a_j\).


From the construction, the chain has \(P_{ij} = q_{ij}a_j/(a_i + a_j)\) for \(j \neq i\). With \(\pi_j = a_j/\sum_k a_k\),
$$\pi_iP_{ij} = \frac{a_i}{\sum_k a_k}\frac{q_{ij}a_j}{a_i+a_j} = \pi_jP_{ji},$$
which holds since \(q_{ij} = q_{ji}\) in Example 4.7(B). Hence the chain is time reversible with stationary, and therefore limiting, probabilities \(\pi_j = a_j/\sum_j a_j\).


4.39 Find the transition probabilities for the Markov chain of Example 4.3(D) and show that it is time reversible.


$$P_{ij} = \sum_{k=0}^{min(i,j)} {i \choose k}(1-p)^kp^{i-k}\frac{e^{-\lambda}\lambda^{j-k}}{(j-k)!}$$
Thus by the formula \(\pi_j = e^{-\lambda/p}(\lambda/p)^j/j!\), we get \(\pi_iP_{ij} = \pi_jP_{ji}\)
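Detailed balance can be checked numerically over a truncated state range (the test values \(\lambda = 2\), \(p = 0.4\) are arbitrary):

```python
from math import comb, exp, factorial

# Numerical check of pi_i P_ij = pi_j P_ji for the chain of Example 4.3(D),
# over states 0..14 with arbitrary test values lambda = 2.0, p = 0.4.
lam, p = 2.0, 0.4

def P(i, j):
    return sum(comb(i, k) * (1 - p)**k * p**(i - k)
               * exp(-lam) * lam**(j - k) / factorial(j - k)
               for k in range(0, min(i, j) + 1))

def pi(j):
    return exp(-lam / p) * (lam / p)**j / factorial(j)

for i in range(15):
    for j in range(15):
        assert abs(pi(i) * P(i, j) - pi(j) * P(j, i)) < 1e-12
print("detailed balance holds")
```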


4.41 A particle moves among \(n\) locations that are arranged in a circle (with the neighbors of location \(n\) being \(n-1\) and 1). At each step, it moves one position either in the clockwise position with probability \(p\) or in the counterclockwise position with probability \(1 – p\)
(a) Find the transition probabilities of the reverse chain.
(b) Is the chain time reversible?


(a) The chain is doubly stochastic, so \(\pi_i = 1/n\), and the reverse chain has transition probabilities \(P^*_{ij} = \pi_jP_{ji}/\pi_i = P_{ji}\): it moves one position clockwise with probability \(1-p\) and counterclockwise with probability \(p\).
(b) The chain is time reversible only when \(p = 1/2\), in which case \(P^*_{ij} = P_{ij}\).


4.42 Consider the Markov chain with states \(0, 1, \dots, n\) and with transition probabilities
$$P_{0,1} = P_{n, n-1} = 1, \quad P_{i, i+1} = p_i = 1 – P_{i, i-1}, \quad i = 1, \dots, n-1.$$
Show that this Markov chain is of the type considered in Proposition 4.7.1 and find its stationary probabilities.


The chain is a birth-death chain, i.e., a random walk on an edge-weighted linear graph, so Proposition 4.7.1 applies. Choose \(w_{01} = 1\) and recursively \(w_{i,i+1} = w_{i-1,i}\,p_i/(1-p_i)\) for \(i = 1, \dots, n-1\). Then \(\sum_j w_{ij} = w_{i-1,i} + w_{i,i+1} = w_{i-1,i}/(1-p_i)\), so \(w_{i,i+1}/\sum_j w_{ij} = p_i = P_{i,i+1}\), and the boundary states match trivially. The stationary probabilities are therefore
$$\pi_i = \frac{\sum_j w_{ij}}{\sum_i\sum_j w_{ij}}$$
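A sketch verifying the weight construction \(w_{i,i+1} = w_{i-1,i}\,p_i/(1-p_i)\) and the resulting stationary probabilities for randomly chosen \(p_i\):

```python
import numpy as np

# Birth-death chain of Problem 4.42 with random p_i; compare the
# weighted-graph stationary probabilities with stationarity pi P = pi.
rng = np.random.default_rng(0)
n = 6
p = rng.uniform(0.2, 0.8, size=n + 1)    # p[1..n-1] used; ends are forced
P = np.zeros((n + 1, n + 1))
P[0, 1] = P[n, n - 1] = 1.0
for i in range(1, n):
    P[i, i + 1] = p[i]
    P[i, i - 1] = 1 - p[i]

# Edge weights: w_{01} = 1, w_{i,i+1} = w_{i-1,i} p_i/(1-p_i)
w = np.zeros(n)                          # w[i] holds w_{i,i+1}
w[0] = 1.0
for i in range(1, n):
    w[i] = w[i - 1] * p[i] / (1 - p[i])
strength = np.zeros(n + 1)               # sum_j w_{ij} at each node
strength[0], strength[n] = w[0], w[n - 1]
for i in range(1, n):
    strength[i] = w[i - 1] + w[i]
pi_w = strength / strength.sum()

assert np.allclose(pi_w @ P, pi_w)       # stationarity
print(np.round(pi_w, 4))
```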


4.44 Consider a time-reversible Markov chain with transition probabilities \(P_{ij}\) and limiting probabilities \(\pi_i\), and now consider the same chain truncated to the states \(0, 1, \dots, M\). That is, for the truncated chain its transition probabilities \(\bar{P_{ij}}\) are$$
\bar{P_{ij}} = \left\{
\begin{array}{ll}
P_{ij} + \sum_{k > M}P_{ik} \quad 0 \leq i \leq M, j = i \\
P_{ij} \quad 0 \leq i \neq j \leq M \\
0 \quad \text{otherwise}.
\end{array}
\right. $$
Show that the truncated chain is also time reversible and has limiting probabilities given by$$\bar{\pi_i} = \frac{\pi_i}{\sum_{i=0}^M\pi_i}$$


$$\begin{align}
\bar{\pi_i}\bar{P_{ij}} &= \frac{\pi_i}{\sum_{i=0}^{M}\pi_i}(P_{ij} + I_{\{i=j\}}\sum_{k>M}P_{ik}) \\
&= \frac{\pi_j}{\sum_{i=0}^{M}\pi_i}(P_{ji} + I_{\{i=j\}}\sum_{k>M}P_{jk}) \\
&= \bar{\pi_j}\bar{P_{ji}}
\end{align}$$
For \(i \neq j\) the middle equality is just the reversibility of the original chain; for \(i = j\) it is trivial.
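A numerical sketch with a concrete reversible chain (a birth-death chain chosen only for the test) and a truncation level \(M\):

```python
import numpy as np

# Reversible birth-death chain on {0,...,9} (test example), truncated to
# {0,...,M}; check reversibility and the claimed limiting probabilities.
K, M = 10, 5
p = 0.35                                  # up-probability
P = np.zeros((K, K))
for i in range(K):
    if i + 1 < K: P[i, i + 1] = p
    if i - 1 >= 0: P[i, i - 1] = 1 - p
P[0, 0] = 1 - P[0].sum()                  # hold at the boundaries
P[K - 1, K - 1] = 1 - P[K - 1].sum()

# stationary distribution of the full chain via detailed balance
pi = np.ones(K)
for i in range(1, K):
    pi[i] = pi[i - 1] * P[i - 1, i] / P[i, i - 1]
pi /= pi.sum()

# truncated chain on {0,...,M}
Pbar = P[:M + 1, :M + 1].copy()
for i in range(M + 1):
    Pbar[i, i] += P[i, M + 1:].sum()
pibar = pi[:M + 1] / pi[:M + 1].sum()

assert np.allclose(pibar @ Pbar, pibar)   # limiting probabilities
F = pibar[:, None] * Pbar                 # detailed-balance matrix
assert np.allclose(F, F.T)                # time reversible
print(np.round(pibar, 4))
```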


4.45 Show that a finite state, ergodic Markov chain such that \(P_{ij} > 0\) for all \(i \neq j\) is time reversible if, and only if,
$$P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji} \quad \text{for all } i, j, k$$


Fix a state \(i\) and suppose \(\pi_j = cP_{ij}/P_{ji}\), where \(c\) is chosen so that \(\sum_j \pi_j = 1\). Then \(\pi_jP_{jk}= \pi_kP_{kj}\) if and only if
$$\frac{cP_{ij}P_{jk}}{P_{ji}} = \frac{cP_{ik}P_{kj}}{P_{ki}}, $$
that is, if and only if \(P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji}\). Thus the cycle condition implies detailed balance, so the chain is time reversible. Conversely, if the chain is time reversible, multiplying \(\pi_iP_{ij} = \pi_jP_{ji}\), \(\pi_jP_{jk} = \pi_kP_{kj}\), and \(\pi_kP_{ki} = \pi_iP_{ik}\) together gives the cycle condition.


4.47 \(M\) balls are initially distributed among \(m\) urns. At each stage one of the balls is selected at random, taken from whichever urn it is in, and placed, at random, in one of the other \(m-1\) urns. Consider the Markov chain whose state at any time is the vector \(n_1, \dots, n_m\), where \(n_i\) denotes the number of balls in urn \(i\). Guess at the limiting probabilities for this Markov chain and then verify your guess and show at the same time that the Markov chain is time reversible.


Guess the limiting distribution is the Multinomial distribution:
$$\pi(n_1, n_2, \dots, n_m) = \frac{M!}{n_1!\dots n_m!m^M}$$ where \(n_i \geq 0, n_1 + \dots + n_m = M\). Now for two vectors \( (n_1, n_2, \dots, n_m)\) and \( (n_1, n_2, \dots, n_i+1, \dots, n_j -1, \dots, n_m)\), if \(i \neq j, n_j > 0\), then the transition probability is
$$\begin{align}
&P((n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m ),(n_1, \dots, n_m))\\
=&\frac{n_i + 1}{M}\frac{1}{m-1}\end{align}$$
The reverse transition has probability \(n_j/[M(m-1)]\), and since \(\pi(n_1, \dots, n_i+1, \dots, n_j-1, \dots, n_m)(n_i+1) = \pi(n_1, \dots, n_m)n_j\), detailed balance holds, establishing time reversibility. Summing over \(i \neq j\) also verifies the guess directly:
$$\begin{align}
\sum_{1 \leq i \neq j \leq m} &\pi(n_1, \dots, n_i + 1, \dots, n_j - 1, \dots, n_m)\\
&P((n_1, \dots, n_i + 1, \dots, n_j – 1, \dots, n_m ),(n_1, \dots, n_m)) \\
=&\sum_{1 \leq i \neq j \leq m} \frac{M!}{n_1!\dots (n_i+1)!\dots(n_j-1)!\dots n_m!m^M}\frac{n_i+1}{M}\frac{1}{m-1} \\
=& \frac{M!}{n_1!\dots n_m!m^M} \sum_{1 \leq i \neq j \leq m} \frac{n_j}{M(m-1)}\\
=& \pi(n_1, \dots, n_m)
\end{align}$$
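A brute-force check of detailed balance for small test values \(M = 4\), \(m = 3\):

```python
from math import factorial
from itertools import product

# Check detailed balance for the multinomial guess, M = 4 balls, m = 3 urns.
M, m = 4, 3

def pi(state):
    num = factorial(M)
    for c in state:
        num //= factorial(c)
    return num / m**M

states = [s for s in product(range(M + 1), repeat=m) if sum(s) == M]
for s in states:
    for i, j in product(range(m), repeat=2):
        if i == j or s[j] == 0:
            continue
        t = list(s); t[i] += 1; t[j] -= 1; t = tuple(t)  # move a ball j -> i
        Pts = (t[i] / M) * (1 / (m - 1))   # P(t -> s): ball from urn i to j
        Pst = (s[j] / M) * (1 / (m - 1))   # P(s -> t): ball from urn j to i
        assert abs(pi(t) * Pts - pi(s) * Pst) < 1e-12
print("time reversible:", len(states), "states checked")
```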


4.48 For an ergodic semi-Markov process.
(a) Compute the rate at which the process makes a transition from \(i\) into \(j\).
(b) Show that \(\sum_{i} P_{ij}/\mu_{ii} = 1/\mu_{jj}.\)
(c) Show that the proportion of time that the process is in state \(i\) and headed for state \(j\) is \(P_{ij}\eta_{ij}/\mu_{ii}\), where \(\eta_{ij} = \int_0^{\infty}\bar{F_{ij}}(t)dt.\)
(d) Show that the proportion of time that the state is \(i\) and will next be \(j\) within a time \(x\) is$$\frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x),$$ where \(F_{i,j}^e\) is the equilibrium distribution of \(F_{ij}\).


(a) Define a (delayed) renewal reward process: a renewal occurs each time state \(i\) is entered, and the reward \(R_n\) earned in the \(n\)th cycle equals 1 if the state visited after \(i\) in that cycle is \(j\), and 0 otherwise. Let \(R_{ij}(t)\) be the total number of transitions from \(i\) to \(j\) by time \(t\). We have
$$\sum_{n=0}^{N(t)}R_n \leq R_{ij}(t) \leq \sum_{n=0}^{N(t) + 1}R_n \leq \sum_{n=0}^{N(t)} R_n + 1.$$ Thus the rate at which the process makes a transition from \(i\) to \(j\) equals
$$\lim_{t \to \infty} \frac{R_{ij}(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}}{\mu_{ii}}$$
(b) Let \(R_j(t)\) be the number of visits to state \(j\) by time \(t\). Thus
$$\begin{align}
\sum_i R_{ij}(t) &= R_j(t) \\
\sum_i \lim_{t \to \infty} \frac{R_{ij}(t)}{t} &= \lim_{t \to \infty}\frac{R_j(t)}{t} \\
\sum_{i} P_{ij}/\mu_{ii} &= 1/\mu_{jj}
\end{align}$$
(c) Define cycle as in part (a) and the reward in a cycle to be 0 if the transition from \(i\) is not into \(j\) and \(T_{ij}\) the time taken for transition if the transition from \(i\) is into \(j\). Thus
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[T_{ij}]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}$$
(d) Define cycle as in last part and the reward in a cycle as 0 if the transition from \(i\) is not into \(j\) and \(min(x, T_{ij})\) if the transition from \(i\) is into \(j\). Thus
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[min(x, T_{ij})]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x) $$


4.49 For an ergodic semi-Markov process derive an expression, as \(t \to \infty\), for the limiting conditional probability that the next state visited after \(t\) is state \(j\), given \(X(t) = i\).


By Problem 4.48(c), the proportion of time the process is in state \(i\) and headed for \(j\) is \(P_{ij}\eta_{ij}/\mu_{ii}\), while the proportion of time in state \(i\) is \(\sum_k P_{ik}\eta_{ik}/\mu_{ii}\). Hence
$$\lim_{t \to \infty} P\{\text{next state is } j \mid X(t) = i\} = \frac{P_{ij}\eta_{ij}}{\sum_k P_{ik}\eta_{ik}}$$


4.50 A taxi alternates between three locations. When it reaches location 1, it is equally likely to go next to either 2 or 3. When it reaches 2, it will next go to 1 with probability 1/3 and to 3 with probability 2/3. From 3 it always goes to 1. The mean times between locations \(i\) and \(j\) are \(t_{12} = 20, t_{13} = 30, t_{23} = 30 (t_{ij} = t_{ji})\).
(a) What is the (limiting) probability that the taxi’s most recent stop was at location \(i, i = 1, 2, 3\) ?
(b) What is the (limiting) probability that the taxi is heading for location 2?
(c) What fraction of time is the taxi traveling from 2 to 3? Note: Upon arrival at a location the taxi immediately departs.


The transition matrix is \(\begin{bmatrix} 0 & 1/2 & 1/2 \\ 1/3 & 0 & 2/3\\ 1 & 0 & 0\end{bmatrix} \)
And the stationary probabilities are \(\pi_1 = 6/14, \pi_2 = 3/14, \pi_3 = 5/14\).
Since \(\mu_i = \sum_j P_{ij}t_{ij}\), we have \(\mu_1 = 25, \mu_2 = 80/3, \mu_3 = 30\).
(a) From Proposition 4.8.3, \(P_i = \pi_i\mu_i/\sum_j \pi_j\mu_j\), giving \(P_1 = 15/38, P_2 = 8/38, P_3 = 15/38\).
(b) From Problem 4.48(c), we have \(P_{12}\eta_{12}P_1/\mu_1 = 3/19\).
(c) Similar to (b), \(P_{23}\eta_{23}P_2/\mu_2 = 3/19\).
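A numerical check of the semi-Markov computations (note the \(P_i\) must sum to 1):

```python
import numpy as np

# Taxi problem: embedded chain, mean sojourn times, and the limiting
# probabilities of Proposition 4.8.3.
P = np.array([[0, 1/2, 1/2],
              [1/3, 0, 2/3],
              [1, 0, 0]])
t = np.array([[0, 20, 30],
              [20, 0, 30],
              [30, 30, 0]])

# stationary distribution of the embedded chain (eigenvector at eigenvalue 1)
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
assert np.allclose(pi, [6/14, 3/14, 5/14])

mu = (P * t).sum(axis=1)                  # mean times between stops
assert np.allclose(mu, [25, 80/3, 30])

Pi = pi * mu / (pi * mu).sum()            # (a) most recent stop
assert np.allclose(Pi, [15/38, 8/38, 15/38])

# (b) heading for location 2 (only possible from 1); (c) traveling 2 -> 3
total = (pi * mu).sum()
assert abs(pi[0] * P[0, 1] * t[0, 1] / total - 3/19) < 1e-12
assert abs(pi[1] * P[1, 2] * t[1, 2] / total - 3/19) < 1e-12
print(np.round(Pi, 4))
```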