## Solutions to Stochastic Processes Ch.5

Solutions to Stochastic Processes, Sheldon M. Ross, Second Edition (pdf)
Since there is no official solution manual for this book, I wrote these solutions myself. Some solutions were adapted from the web; their original authors often cannot be identified, so they are not credited individually. Many thanks to those authors! I hope these solutions are helpful, but no correctness or accuracy is guaranteed. Comments are welcome. Excerpts and links may be used, provided that full and clear credit is given.

5.1 A population of organisms consists of both male and female members. In a small colony any particular male is likely to mate with any particular female in any time interval of length $$h$$, with probability $$\lambda h + o(h)$$. Each mating immediately produces one offspring, equally likely to be male or female. Let $$N_1(t)$$ and $$N_2(t)$$ denote the number of males and females in the population at time $$t$$. Derive the parameters of the continuous-time Markov chain $$\{N_1(t), N_2(t)\}$$.

$$q[(n, m), (n+1, m)] = q[(n,m),(n, m+1)] = \frac{\lambda mn}{2}$$

5.2 Suppose that a one-celled organism can be in one of two states–either $$A$$ or $$B$$. An individual in state $$A$$ will change to state $$B$$ at an exponential rate $$\alpha$$, an individual in state $$B$$ divides into two new individuals of type $$A$$ at an exponential rate $$\beta$$. Define an appropriate continuous-time Markov chain for a population of such organisms and determine the appropriate parameters for this model.

\begin{align} q[(n, m), (n-1, m+1)] &= \alpha n\\ q[(n, m), (n+2, m-1)] &= \beta m \end{align}

5.4 For a pure birth process with birth parameters $$\lambda_n, n \geq 0$$, compute the mean, variance, and moment generating function of the time it takes the population to go from size $$1$$ to size $$N$$.

Let $$T_i$$ denote the time for the population to grow from size $$i$$ to size $$i+1$$. Then the $$T_i$$ are independent with $$T_i \sim \text{Exponential}(\lambda_i)$$, and the time to go from size 1 to size $$N$$ is $$\sum_{i=1}^{N-1}T_i$$.
$$E\left[\sum_{i=1}^{N-1}T_i\right] = \sum_{i=1}^{N-1} E[T_i] = \sum_{i=1}^{N-1} 1/\lambda_i \\ Var\left(\sum_{i=1}^{N-1}T_i\right) = \sum_{i=1}^{N-1} Var(T_i) = \sum_{i=1}^{N-1} 1/\lambda_i^2 \\ \phi(t) = \prod_{i=1}^{N-1}\frac{\lambda_i}{\lambda_i - t}$$
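As a numerical sanity check, the three formulas can be compared against one another: the derivatives of the MGF at 0 must reproduce the mean and the second moment. A minimal sketch, assuming a Yule process $$\lambda_i = i\lambda$$ with $$\lambda = 2$$ and $$N = 5$$ (arbitrary choices for illustration):

```python
# Assumed birth rates for the check: a Yule process lambda_i = i * lam,
# with lam = 2 and N = 5 chosen arbitrarily.
lam, N = 2.0, 5
rates = [lam * i for i in range(1, N)]        # lambda_1 .. lambda_{N-1}

mean = sum(1.0 / r for r in rates)            # sum of 1/lambda_i
var = sum(1.0 / r ** 2 for r in rates)        # sum of 1/lambda_i^2

def mgf(t):
    """phi(t) = prod lambda_i / (lambda_i - t), valid for t < min(rates)."""
    out = 1.0
    for r in rates:
        out *= r / (r - t)
    return out

# Numerical derivatives of the MGF at 0 recover E[T] and E[T^2] = var + mean^2.
h = 1e-5
d1 = (mgf(h) - mgf(-h)) / (2 * h)
d2 = (mgf(h) - 2 * mgf(0.0) + mgf(-h)) / h ** 2
```

Here `d1` agrees with `mean` and `d2` with `var + mean**2` up to finite-difference error.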

5.6 Verify the formula
$$A(t) = a_0 + \int_0^t X(s)ds,$$
given in Example 5.3(B).

Treat this as a renewal reward process in which every individual earns reward at rate 1 per unit time, so the total reward earned by time $$t$$ is the total increase in age. The initial individual enters with age $$a_0$$ and ages for time $$t$$, while each of the other $$X(s) - 1$$ individuals alive at time $$s$$ ages at rate 1. Hence
$$A(t) = a_0 + t + \int_0^t (X(s) - 1) ds = a_0 + \int_0^t X(s)ds$$

5.7 Consider a Yule process starting with a single individual and suppose that with probability $$P(s)$$ an individual born at time $$s$$ will be robust. Compute the distribution of the number of robust individuals born in $$(0, t)$$.

Let $$N(t)$$ denote the number of individuals born in $$(0, t)$$, and $$R(t)$$ of which are robust. Then
\begin{align} &P\{R(t) = k\} \\ =& \sum_{n=k}^{\infty}P\{R(t) = k|N(t) = n\}P\{N(t) = n\}\\ =& \sum_{n=k}^{\infty}{n \choose k}\left[\int_0^t f(s)P(s)ds\right]^k\left[1 - \int_0^t f(s)P(s)ds\right]^{n-k} e^{-\lambda t}(1 - e^{-\lambda t})^{n} \end{align}
where $$f(s) = \frac{\lambda e^{-\lambda(t-s)}}{1 - e^{-\lambda t}}$$ is the conditional density of each birth time given $$N(t) = n$$.

5.9 Prove Lemma 5.4.2.

\begin{align} P_{ij}(t+s) &= P\{X(t+s) = j | X(0) = i\} \\ &= \sum_{k=0}^{\infty} P\{X(t+s) = j, X(t) = k | X(0) = i\} \\ &= \sum_{k=0}^{\infty} P\{X(t+s) = j| X(t) = k, X(0) = i\}P\{X(t) = k | X(0) = i\}\\ &= \sum_{k=0}^{\infty} P\{X(t+s) = j| X(t) = k\}P\{X(t) = k | X(0) = i\}\\ &= \sum_{k=0}^{\infty} P_{kj}(s)P_{ik}(t)\\ \end{align}

5.10 Let $$P(t) = P_{00}(t)$$.
(a) Find $$\lim_{t \to 0}\frac{1 - P(t)}{t}$$.
(b) Show that $$P(t)P(s) \leq P(t+s) \leq 1 - P(s) + P(s)P(t)$$.
(c) Show $$|P(t) - P(s)| \leq 1 - P(t-s), \quad s < t$$, and conclude that $$P$$ is continuous.

(a) Since $$1 - P(t)$$ is the probability that the process has left state 0 by time $$t$$, $$\lim_{t \to 0}\frac{1 - P(t)}{t} = v_0$$
(b) \begin{align} P(t)P(s) &\leq \sum_{k=0}^{\infty}P_{0k}(t)P_{k0}(s) = P(t+s)\\ &= \sum_{k=1}^{\infty}P_{0k}(s)P_{k0}(t) + P_{00}(t)P_{00}(s) \\ &\leq \sum_{k=1}^{\infty}P_{0k}(s) + P(s)P(t) \\ &= 1 - P(s) + P(s)P(t) \end{align}
(c) From (b) we have
$$P(s)P(t-s) \leq P(t) \leq 1 - P(t-s) + P(t-s)P(s)\\ P(s)(P(t-s) - 1) \leq P(t) - P(s) \leq (1 - P(s))(1 - P(t-s)) \\ P(t-s) - 1 \leq P(t) - P(s) \leq 1 - P(t-s)$$
From the inequality, we have
$$\lim_{t \to s}|P(t) - P(s)| \leq \lim_{t \to s} 1 - P(t-s) = 1 - P(0) = 0$$

5.11 For the Yule process
(a) Verify that
$$P_{ij}(t) = {j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}$$
satisfies the forward and backward equations.
(b) Suppose that $$X(0) = 1$$ and that at time $$T$$ the process stops and is replaced by an emigration process in which departures occur in a Poisson process of rate $$\mu$$. Let $$\tau$$ denote the time taken after $$T$$ for the population to vanish. Find the density function of $$\tau$$ and show that $$E[\tau] = e^{\lambda T}/\mu$$.

(a) \begin{align} P_{ij}^{\prime}(t) &= {j-1 \choose i-1} \lambda e^{-i\lambda t} [-i(1 - e^{-\lambda t})^{j-i} + (j-i)e^{-\lambda t}(1 - e^{-\lambda t})^{j-i-1}] \\ &= {j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i)\\ &\sum_{k \neq i} q_{ik}P_{kj}(t) - v_iP_{ij}(t) \\ =&q_{i,i+1}P_{i+1, j}(t) - v_iP_{ij}(t) \\ =&i\lambda[{j - 1 \choose i}e^{-i\lambda t}e^{-\lambda t}(1 - e^{-\lambda t})^{j-i -1} - {j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}]\\ =&i\lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1} [{j-1 \choose i}e^{-\lambda t} - {j-1 \choose i - 1}(1 - e^{-\lambda t})] \\ =&{j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i)\\ &\sum_{k \neq j} q_{kj}P_{ik}(t) - v_jP_{ij}(t) \\ =&q_{j-1,j}P_{i, j-1}(t) - v_jP_{ij}(t) \\ =&\lambda[(j-1){j - 2 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-1-i} - j{j - 1 \choose i - 1}e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i}]\\ =&\lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1} [(j-1){j-2 \choose i - 1} - j{j-1 \choose i - 1}(1 - e^{-\lambda t})] \\ =&{j-1 \choose i-1} \lambda e^{-i\lambda t}(1 - e^{-\lambda t})^{j-i-1}(je^{-\lambda t} - i)\\ \end{align}
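The algebra above is easy to get wrong, so it is worth verifying numerically that the closed form satisfies both equations. The sketch below differentiates $$P_{ij}(t)$$ by central differences; the rate $$\lambda = 1.5$$ and the states $$i = 2, j = 5$$ are arbitrary choices:

```python
from math import comb, exp

# Assumed rate and states for the check.
lam = 1.5
i, j = 2, 5

def P(a, b, t):
    """Yule transition probability P_ab(t), for b >= a >= 1."""
    if b < a:
        return 0.0
    return comb(b - 1, a - 1) * exp(-a * lam * t) * (1 - exp(-lam * t)) ** (b - a)

t, h = 0.7, 1e-6
deriv = (P(i, j, t + h) - P(i, j, t - h)) / (2 * h)   # central difference

# Backward: P'_ij = q_{i,i+1} P_{i+1,j} - v_i P_ij, with q_{i,i+1} = v_i = i*lam.
backward = i * lam * (P(i + 1, j, t) - P(i, j, t))
# Forward: P'_ij = q_{j-1,j} P_{i,j-1} - v_j P_ij = (j-1)*lam*P_{i,j-1} - j*lam*P_ij.
forward = (j - 1) * lam * P(i, j - 1, t) - j * lam * P(i, j, t)
```

All three quantities agree to within the finite-difference error.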
(b) Given $$X(T) = n$$, $$\tau$$ is the sum of $$n$$ i.i.d. exponential ($$\mu$$) interdeparture times, i.e. gamma ($$n, \mu$$). Hence the density of $$\tau$$ is \begin{align} f_{\tau}(t) &= \sum_{n=1}^{\infty} f_{\tau}(t | X(T) = n)P\{X(T) = n\} \\ &= \sum_{n=1}^{\infty} \mu e^{-\mu t}\frac{(\mu t)^{n-1}}{(n-1)!}e^{-\lambda T}(1 - e^{-\lambda T})^{n-1} \end{align}
Let $$t_i$$ denote the time between the $$(i-1)$$st and $$i$$th departures. Then by Wald's equation,
$$E[\tau] = E[\sum_{i=1}^{X(T)} t_i] = E[X(T)]E[t_i] = e^{\lambda T}/\mu$$

5.12 Suppose that the “state” of the system can be modeled as a two-state continuous-time Markov chain with transition rates $$v_0 = \lambda, v_1 = \mu$$. When the state of the system is $$i$$, “events” occur in accordance with a Poisson process with rate $$\alpha_i, i = 0, 1$$. Let $$N(t)$$ denote the number of events in $$(0, t)$$.
(a) Find $$\lim_{t \to \infty}N(t)/t$$.
(b) If the initial state is state 0, find $$E[N(t)]$$.

(a) Define a renewal reward process as follows: a renewal occurs each time the process enters state 0, and the reward earned in a cycle is the number of events in that cycle. The length $$X_n$$ of the nth cycle is the sum of the times spent in states 0 and 1, say $$X_{0n}$$ and $$X_{1n}$$ respectively, so $$E[X_n] = E[X_{0n}] + E[X_{1n}] = \lambda^{-1} + \mu^{-1}$$. Further, if $$R_n$$ is the reward in the nth cycle, with $$R_{0n}$$ and $$R_{1n}$$ earned in states 0 and 1 respectively,
\begin{align} E[R_n] &= E[R_{0n}] + E[R_{1n}] = E[E[R_{0n}| X_{0n}] + E[R_{1n}|X_{1n}]] \\ &= E[\alpha_0X_{0n}] + E[\alpha_1X_{1n}]\\ &= \alpha_0/\lambda + \alpha_1/\mu. \end{align}Thus
$$\lim_{t \to \infty} \frac{N(t)}{t} = \lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{\alpha_0\mu + \alpha_1\lambda}{\lambda + \mu}$$

(b) Let $$T_i(t)$$ denote the time spent in state $$i$$ by the time $$t$$.
\begin{align} E[N(t)] &= \alpha_0E[T_0(t)] + \alpha_1(t - E[T_0(t)])\\ &= (\alpha_0 - \alpha_1)E[T_0(t)] + \alpha_1t \\ &= \alpha_1t + (\alpha_0 - \alpha_1)\int_0^t P_{00}(s)ds \end{align}
By the forward equation, $$P_{00}^{\prime}(t) = -(\lambda + \mu)P_{00}(t) + \mu$$, with initial condition $$P_{00}(0) = 1$$. Solving, we get
\begin{align} P_{00}(t) &= \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}e^{-(\lambda + \mu)t} \\ E[N(t)] &= \frac{\alpha_0\mu + \alpha_1\lambda}{\lambda + \mu}t + \frac{\alpha_1 - \alpha_0}{(\lambda + \mu)^2}\lambda(e^{-(\lambda + \mu)t} - 1). \end{align}
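A quick check of this closed form against the integral representation $$E[N(t)] = \alpha_1 t + (\alpha_0 - \alpha_1)\int_0^t P_{00}(s)ds$$, with parameter values assumed purely for the test:

```python
from math import exp

# Assumed parameters: v0 = lam, v1 = mu, and event rates alpha_0, alpha_1.
lam, mu = 0.8, 1.3
a0, a1 = 2.0, 5.0
t = 3.0

def P00(s):
    return mu / (lam + mu) + lam / (lam + mu) * exp(-(lam + mu) * s)

# Closed form derived above.
EN = (a0 * mu + a1 * lam) / (lam + mu) * t \
     + (a1 - a0) * lam / (lam + mu) ** 2 * (exp(-(lam + mu) * t) - 1)

# Midpoint-rule evaluation of E[N(t)] = a1*t + (a0 - a1) * integral of P00.
n = 100_000
ds = t / n
integral = sum(P00((k + 0.5) * ds) for k in range(n)) * ds
EN_num = a1 * t + (a0 - a1) * integral
```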

5.13 Consider a birth and death process with birth rates $$\{\lambda_n\}$$ and death rates $$\{\mu_n\}$$. Starting in state $$i$$, find the probability that the first $$k$$ events are all births.

$$\prod_{n=i}^{i+k-1}\frac{\lambda_n}{\lambda_n + \mu_n}$$

5.15 Consider a population in which each individual independently gives birth at an exponential rate $$\lambda$$ and dies at an exponential rate $$\mu$$. In addition, new members enter the population in accordance with a Poisson process with rate $$\theta$$. Let $$X(t)$$ denote the population size at time $$t$$.
(a) What type of process is $$\{X(t), t \geq 0\}$$?
(b) What are its parameters?
(c) Find $$E[X(t)|X(0)=i]$$.

(a)(b) This is a linear birth/death process with immigration, having parameters $$\mu_n = n\mu$$ and $$\lambda_n = n\lambda + \theta$$.
(c) Note that
$$E[X(t+h)|X(0)] = E[X(t)|X(0)] + (\lambda - \mu)E[X(t)|X(0)]h + \theta h + o(h)$$
Defining $$M(t) \equiv E[X(t)|X(0)]$$ we get the differential equation:
$$M^{\prime}(t) = (\lambda - \mu)M(t) + \theta$$ Solving this with $$M(0) = i$$ gives
$$M(t) = \left\{\begin{array}{ll} \theta t + i \quad \lambda = \mu \\ (i+\frac{\theta}{\lambda - \mu})e^{(\lambda-\mu)t}-\frac{\theta}{\lambda-\mu} \quad \lambda \neq \mu \\ \end{array}\right.$$
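The $$\lambda \neq \mu$$ case can be checked by integrating the ODE $$M^{\prime} = (\lambda - \mu)M + \theta$$ numerically; a sketch with assumed parameter values:

```python
from math import exp

# Assumed parameters with lam != mu; i0 is the initial population X(0) = i.
lam, mu, theta, i0 = 1.2, 0.7, 0.5, 3

def M_closed(t):
    """Closed form (i + theta/(lam-mu)) e^{(lam-mu)t} - theta/(lam-mu)."""
    return (i0 + theta / (lam - mu)) * exp((lam - mu) * t) - theta / (lam - mu)

# Euler integration of M'(t) = (lam - mu) M(t) + theta, M(0) = i0.
T, n = 2.0, 200_000
dt = T / n
M = float(i0)
for _ in range(n):
    M += ((lam - mu) * M + theta) * dt
```

With this step size the Euler iterate agrees with the closed form to well under one percent.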

5.16 In Example 5.4(D), find the variance of the number of males in the population at time $$t$$.

\begin{align} &E[Y^2(t+h)|X(t), Y(t)]\\ =& [Y(t) + 1]^2\lambda(1-p)X(t)h + [Y(t) - 1]^2vY(t)h\\ +& Y^2(t)[1 -\lambda(1-p)X(t)h - vY(t)h] + o(h)\\ =& Y^2(t) - 2vhY^2(t) + 2\lambda(1-p)hY(t)X(t) + vY(t)h + \lambda(1-p)X(t)h + o(h) \end{align}
Let $$M_{Y2}(t)$$ denote $$E[Y^2(t)]$$. Taking expectations above, subtracting $$M_{Y2}(t)$$, dividing by $$h$$, and letting $$h \to 0$$,
$$M_{Y2}^{\prime}(t) = 2\lambda(1-p)M_{XY}(t) + \lambda(1-p)M_X(t) - 2vM_{Y2}(t) + vM_Y(t)$$

5.17 Let $$A$$ be a specified set of states of a continuous-time Markov chain and $$T_i(t)$$ denote the amount of time spent in $$A$$ during the time interval $$[0,t]$$ given that the chain begins in state $$i$$. Let $$Y_1, \dots, Y_n$$ be independent exponential random variables with mean $$\lambda$$. Suppose the $$Y_i$$ are independent of the Markov chain, and set $$t_i(n) = E[T_i(Y_1 + \dots + Y_n)]$$.
(a) Derive a set of linear equations for $$t_i(1), i \geq 0$$.
(b) Derive a set of linear equations for $$t_i(n)$$ in terms of the other $$t_j(n)$$ and $$t_i(n-1)$$.
(c) When $$n$$ is large, for what value of $$\lambda$$ is $$t_i(n)$$ a good approximation of $$E[T_i(t)]$$?

5.18 Consider a continuous-time Markov chain with $$X(0) = 0$$. Let $$A$$ denote a set of states that does not include 0 and set $$T = \min\{t > 0: X(t) \in A\}$$. Suppose that $$T$$ is finite with probability 1. Set $$q_i = \sum_{j \in A}q_{ij}$$, and consider the random variable $$H = \int_0^Tq_{X(t)}dt$$, called the random hazard.
(a) Find the hazard rate function of $$H$$. That is, find $$\lim_{h \to 0} P\{s < H < s+h|H>s\}/h$$. (Hint: Condition on the state of the chain at the time $$\tau$$ when $$\int_0^{\tau}q_{X(t)}dt = s$$.)
(b) What is the distribution of $$H$$?

5.19 Consider a continuous-time Markov chain with stationary probabilities $$\{P_i, i \geq 0\}$$, and let $$T$$ denote the first time that the chain has been in state 0 for $$t$$ consecutive time units. Find $$E[T|X(0)=0]$$.

5.20 Each individual in a biological population is assumed to give birth at an exponential rate $$\lambda$$ and to die at an exponential rate $$\mu$$. In addition, there is an exponential rate of increase $$\theta$$ due to immigration. However, immigration is not allowed when the population size is $$N$$ or larger.
(a) Set this up as a birth and death model.
(b) If $$N=3, \theta=\lambda=1, \mu=2$$, determine the proportion of time that immigration is restricted.

5.21 A small barbershop, operated by a single barber, has room for at most two customers. Potential customers arrive at a Poisson rate of three per hour, and the successive service times are independent exponential random variables with mean 1/4 hour.
(a) What is the average number of customers in the shop?
(b) What is the proportion of potential customers that enter the shop?
(c) If the barber could work twice as fast, how much more business would she do?

5.22 Find the limiting probabilities for the M/M/s system and determine the condition needed for these to exist.

5.23 If $$\{X(t), t \geq 0\}$$ and $$\{Y(t), t \geq 0\}$$ are independent time-reversible Markov chains, show that the process $$\{X(t), Y(t), t \geq 0\}$$ is also.

5.24 Consider two M/M/1 queues with respective parameters $$\lambda_i, \mu_i, i=1,2$$. Suppose they both share the same waiting room, which has finite capacity $$N$$. (That is, whenever this room is full all potential arrivals to either queue are lost.) Compute the limiting probability that there will be $$n$$ people at the first queue (1 being served and $$n-1$$ in the waiting room when $$n > 0$$) and $$m$$ at the second. (Hint: Use the result of Problem 5.23)

5.25 What can you say about the departure process of the stationary M/M/1 queue having finite capacity?

5.26 In the stochastic population model of Section 5.6.2:
(a) Show that $$P(\underline{n})q(\underline{n},D_j\underline{n}) = P(D_j\underline{n})q(D_j\underline{n},\underline{n})$$, where $$P(\underline{n})$$ is given by (5.6.4) with $$\alpha_j = (\lambda/jv)(v/\mu)^j$$.
(b) Let $$D(t)$$ denote the number of families that die out in $$(0,t)$$. Assuming that the process is in steady state at time $$t = 0$$, what type of stochastic process is $$\{D(t), t \geq 0\}$$? What if the population is initially empty at $$t = 0$$?

5.27 Complete the proof of the conjecture in the queueing network model of Section 5.7.1

5.28 $$N$$ customers move about among $$r$$ servers. The service times at server $$i$$ are exponential at rate $$\mu_i$$, and when a customer leaves server $$i$$ it joins the queue (or enters service if no one is waiting) at server $$j, j \neq i$$, with probability $$1/(r-1)$$. Let the state be $$(n_1, \dots, n_r)$$ when there are $$n_i$$ customers at server $$i, i = 1, \dots, r$$. Show the corresponding continuous-time Markov chain is time reversible and find the limiting probabilities.

5.29 Consider a time-reversible continuous-time Markov chain having parameters $$v_i, P_{ij}$$ and having limiting probabilities $$P_j, j \geq 0$$. Choose some state, say state 0, and consider the new Markov chain, which makes state 0 an absorbing state. That is, reset $$v_0$$ to equal 0. Suppose now at time points chosen according to a Poisson process with rate $$\lambda$$, Markov chains, all of the above type (having 0 as an absorbing state), are started with the initial states chosen to be $$j$$ with probability $$P_{0j}$$. All the existing chains are assumed to be independent. Let $$N_j(t)$$ denote the number of chains in state $$j, j > 0$$, at time $$t$$.
(a) Argue that if there are no initial chains, then $$N_j(t), j > 0$$, are independent Poisson random variables.
(b) In steady state argue that the vector process $$\{(N_1(t), N_2(t), \dots)\}$$ is time reversible with stationary probabilities
$$P(\underline{n}) = \prod_{j=1}^{\infty}e^{-\alpha_j}\frac{\alpha_j^{n_j}}{n_j!}, \quad \text{for } \underline{n} = (n_1, n_2, \dots),$$
where $$\alpha_j = \lambda P_j/P_0v_0$$.

5.30 Consider an $$M/M/\infty$$ queue with channels (servers) numbered $$1, 2, \dots$$. On arrival, a customer will choose the lowest numbered channel that is free. Thus, we can think of all arrivals as occurring at channel 1. Those who find channel 1 busy overflow and become arrivals at channel 2. Those finding both channels 1 and 2 busy overflow channel 2 and become arrivals at channel 3, and so on.
(a) What fraction of time is channel 1 busy?
(b) By considering the corresponding M/M/2 loss system, determine the fraction of time that channel 2 is busy.
(c) Write down an expression for the fraction of time channel $$c$$ is busy for arbitrary $$c$$.
(d) What is the overflow rate from channel $$c$$ to channel $$c+1$$? Is the corresponding overflow process a Poisson process? A renewal process? Explain briefly.
(e) If the service distribution were general rather than exponential, which (if any) of your answers to (a)-(d) would change? Briefly explain.

5.31 Prove Theorem 5.7.1

5.32 (a) Prove that a stationary Markov process is reversible if, and only if, its transition rates satisfy
$$q(j_1, j_2)q(j_2, j_3)\dots q(j_{n-1}, j_n)q(j_n, j_1)=q(j_1, j_n)q(j_n, j_{n-1})\dots q(j_3, j_2)q(j_2, j_1)$$ for any finite sequence of states $$j_1, j_2, \dots, j_n$$.
(b) Argue that it suffices to verify that the equality in (a) holds for sequences of distinct states.
(c) Suppose that the stream of customers arriving at a queue forms a Poisson process of rate $$v$$ and that there are two servers who possibly differ in efficiency. Specifically, suppose that a customer’s service time at server $$i$$ is exponentially distributed with rate $$\mu_i$$, for $$i=1, 2$$, where $$\mu_1 + \mu_2 > v$$. If a customer arrives to find both servers free, he is equally likely to be allocated to either server. Define an appropriate continuous-time Markov chain for this model, show that it is time reversible, and determine the limiting probabilities.

5.33 The work in a queueing system at any time is defined as the sum of the remaining service times of all customers in the system at that time. For the M/G/1 in steady state compute the mean and variance of the work in the system.

5.36 Consider the two-state Markov chain of Example 5.8(A), with $$X(0) = 0$$.
(a) Compute $$Cov(X(s), X(t))$$.
(b) Let $$S_0(t)$$ denote the occupation time of state 0 by $$t$$. Use (a) and (5.8.4) to compute $$Var(S_0(t))$$.

5.37 Let $$Y_i = X_{(i)} – X_{(i-1)}, i = 1, \dots, n+1$$ where $$X_{(0)} = 0$$, $$X_{(n+1)} = t$$, and $$X_{(1)} \leq X_{(2)} \leq \dots \leq X_{(n)}$$ are the ordered values of a set of $$n$$ independent uniform $$(0,t)$$ random variables. Argue that $$P\{Y_i \leq y_i, i=1, \dots, n+1\}$$ is a symmetric function of $$y_1, \dots, y_n$$.

## Solutions to Stochastic Processes Ch.4


4.1 A store that stocks a certain commodity uses the following $$(s, S)$$ ordering policy; if its supply at the beginning of a time period is $$x$$, then it orders
$$\left\{ \begin{array}{ll} 0 \quad x \geq s \\ S-x \quad x < s \\ \end{array} \right.$$
The order is immediately filled. The daily demands are independent and equal $$j$$ with probability $$\alpha_j$$. All demands that cannot be immediately met are lost. Let $$X_n$$ denote the inventory level at the end of the nth time period. Argue that $$\{X_n, n \geq 1\}$$ is a Markov chain and compute its transition probabilities.

Let $$Y_i$$ denote the demand of the $$i$$th day. Since demands that cannot be met are lost, the inventory cannot go negative, so
$$X_n = \left\{ \begin{array}{ll} (X_{n-1} - Y_n)^+ \quad X_{n-1} \geq s \\ (S-Y_n)^+ \quad X_{n-1} < s \\ \end{array} \right.$$
Since the $$Y_i$$ are i.i.d., $$\{X_n, n \geq 1\}$$ is a Markov chain, and its transition probabilities are
$$P_{ij} = \left\{ \begin{array}{ll} \alpha_{i-j} \quad i \geq s, 1 \leq j \leq i \\ \sum_{k=i}^{\infty}\alpha_k \quad i \geq s, j = 0 \\ \alpha_{S-j} \quad i < s, 1 \leq j \leq S \\ \sum_{k=S}^{\infty}\alpha_k \quad i < s, j = 0 \\ 0 \quad \text{otherwise} \end{array} \right.$$

4.2 For a Markov chain prove that
$$P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} = P\{X_n = j | X_{n_k} = i_k\}$$
whenever $$n_1 < n_2 < \dots < n_k < n$$.

\begin{align} &P\{X_n = j | X_{n_1} = i_1, \dots, X_{n_k} = i_k\} \\ =&\frac{P_{i_kj}^{n-n_k}\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} – n_l}}{\prod_{l=1}^{k-1}P_{i_li_{l+1}}^{n_{l+1} – n_l}} \\ =&P_{i_kj}^{n-n_k} \\ =&P\{X_n = j | X_{n_k} = i_k\} \end{align}

4.3 Prove that if the number of states is $$n$$, and if state $$j$$ is accessible from state $$i$$, then it is accessible in $$n$$ or fewer steps.

$$j$$ is accessible from $$i$$ if, for some $$k \geq 0$$, $$P_{ij}^k > 0$$. Now
$$P_{ij}^k = \sum\prod_{m=0}^{k-1} P_{i_mi_{m+1}}$$
where the sum is taken over all sequences $$(i_0, i_1, \dots, i_k)\in\{1, \dots, n\}^{k+1}$$ of states with $$i_0 = i$$ and $$i_k = j$$. If $$P_{ij}^k > 0$$, at least one term is positive, say $$\prod_{m=0}^{k-1}P_{i_mi_{m+1}} > 0$$. If a state $$s$$ occurs twice, say $$i_a = i_b = s$$ with $$a < b$$ and $$(a, b) \neq (0, k)$$, then the shortened sequence $$(i_0, \dots, i_{a-1}, i_b, \dots, i_k)$$ also has positive probability. Repeating this, the sequence $$i_0, \dots, i_k$$ can be reduced to one, say $$j_0, \dots, j_r$$, in which no state is repeated (except possibly the endpoints). For $$i \neq j$$ this gives $$r \leq n-1$$, so $$j$$ is accessible in at most $$n - 1$$ steps. If $$i = j$$, the repetition at the endpoints cannot be removed, which allows $$r = n$$, but there are no other repetitions.

4.4 Show that $$P_{ij}^n = \sum_{k=0}^n f_{ij}^kP_{jj}^{n-k}$$

\begin{align} &P_{ij}^n\\ =&\sum_{k=0}^n P\{\text{visit state j at step n }|\text{first visit state j at step k}\}P\{\text{first visit j at step k}\} \\ =&\sum_{k=0}^n f_{ij}^kP_{jj}^{n-k} \end{align}

4.5 For states $$i, j, k, k \neq j$$, let
$$P_{ij/k}^n = P\{X_n = j, X_l \neq k, l = 1, \dots, n-1|X_0 = i\}.$$
(a) Explain in words what $$P_{ij/k}^n$$ represents.
(b) Prove that, for $$i \neq j, P_{ij}^n = \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k}$$.

(a) $$P_{ij/k}^n$$ is the probability of being in $$j$$ at time $$n$$, starting in $$i$$ at time 0, while avoiding $$k$$.
(b) Let $$N$$ denote the last time before $$n$$ at which the chain is in state $$i$$. Then since $$0 \leq N \leq n$$,
\begin{align} &P_{ij}^n = P\{X_n = j | X_0 = i\} \\ =& \sum_{k=0}^n P\{X_n = j, N=k|X_0 = i\} \\ =& \sum_{k=0}^n P\{X_n = j, X_k = i, X_l \neq i: k + 1 \leq l \leq n | X_0 = i\} \\ =& \sum_{k=0}^n P\{X_n = j, X_l \neq i : k + 1 \leq l \leq n|X_0 = i, X_k = i\}P\{X_k = i|X_0=i\}\\ =& \sum_{k=0}^n P_{ii}^kP_{ij/i}^{n-k} \end{align}

4.6 Show that the symmetric random walk is recurrent in two dimensions and transient in three dimensions.

The walk can return to the origin only at even times. For $$d = 2$$, after rotating coordinates by 45° the walk becomes two independent one-dimensional symmetric walks, so by Stirling's approximation
$$P_{00}^{2n} = \left[{2n \choose n}\left(\frac{1}{2}\right)^{2n}\right]^2 \sim \frac{1}{\pi n}$$
A similar computation in three dimensions gives $$P_{00}^{2n} = O(n^{-3/2})$$. Then we get
$$\sum_{n=1}^{\infty} P_{00}^{2n} \left\{ \begin{array}{ll} = \infty \quad d=2 \text{ (terms of order } 1/(\pi n)) \\ < \infty \quad d=3 \text{ (terms of order } n^{-3/2}) \\ \end{array} \right.$$Thus, by Proposition 4.2.3, the walk is recurrent in two dimensions and transient in three.
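The two-dimensional identity can be checked exactly in a few lines; the value $$n = 500$$ below is an arbitrary choice:

```python
from math import comb, pi

def p_return_2d(n):
    """Exact P_00^{2n} for the simple symmetric walk on Z^2: rotating the axes
    by 45 degrees splits the walk into two independent 1-d walks, so the
    probability is (C(2n, n) / 4^n) squared."""
    a = comb(2 * n, n) / 4.0 ** n
    return a * a

n = 500
exact = p_return_2d(n)
approx = 1.0 / (pi * n)               # the 1/(pi n) asymptotic used above
rel_err = abs(exact - approx) / approx
```

The relative error is of order $$1/n$$, so at $$n = 500$$ the asymptotic is already accurate to a fraction of a percent.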

4.7 For the symmetric random walk starting at 0:
(b) Let $$N_n$$ denote the number of returns by time $$n$$. Show that
$$E[N_{2n}] = (2n + 1){2n \choose n}(\frac{1}{2})^{2n} - 1$$
(c) Use (b) and Stirling’s approximation to show that for $$n$$ large $$E[N_n]$$ is proportional to $$\sqrt{n}$$.

(a) From Equation 3.7.2, we know,
$$P\{Z_1 \neq 0, \dots, Z_{2n-1} \neq 0, Z_{2n} = 0\} = \frac{{2n \choose n}(\frac{1}{2})^{2n}}{2n-1}$$
\begin{align} E[T] &= \sum_{k=1}^{\infty}\frac{2k}{2k-1}{2k \choose k}(\frac{1}{2})^{2k}\\ &= \sum_{k=0}^{\infty} {2k \choose k}(\frac{1}{2})^{2k} \\ &= \frac{1}{\sqrt{1 - 4/4}} = \infty \end{align}
where the second equality uses $$\frac{2k}{2k-1}{2k \choose k}(\frac{1}{2})^{2k} = {2k-2 \choose k-1}(\frac{1}{2})^{2k-2}$$ and the third uses $$\sum_{k}{2k \choose k}x^k = (1-4x)^{-1/2}$$.
(b) Let $$I_{2k}$$ denote the indicator of whether the walk is at 0 at step $$2k$$. Then,
$$E[N_{2n}] = \sum_{k=1}^n E[I_{2k}]= \sum_{k=1}^n P_{00}^{2k}$$
Obviously, it holds true for $$n = 0$$. Assume it holds for $$n = k$$, when $$n = k+1$$,
\begin{align} E[N_{2k+2}] &= E[N_{2k}] + P_{00}^{2k+2} \\ &= (2k+1){2k \choose k}(\frac{1}{2})^{2k} - 1 + {2k+2 \choose k+1}(\frac{1}{2})^{2k+2}\\ &= (2k + 3){2k+2 \choose k+1}(\frac{1}{2})^{2k+2} - 1 \end{align}
(c) By Stirling's approximation, $${2n \choose n}(\frac{1}{2})^{2n} \sim \frac{1}{\sqrt{\pi n}}$$, so $$E[N_{2n}] \sim \frac{2n+1}{\sqrt{\pi n}} - 1 \sim 2\sqrt{\frac{n}{\pi}}$$. Hence $$E[N_n] \sim \sqrt{\frac{2n}{\pi}}$$, which is proportional to $$\sqrt{n}$$.
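The identity in (b) can be confirmed exactly in rational arithmetic for small $$n$$:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """E[N_2n] as the sum of return probabilities P_00^{2k}, k = 1..n."""
    return sum(Fraction(comb(2 * k, k), 4 ** k) for k in range(1, n + 1))

def rhs(n):
    """The closed form (2n + 1) C(2n, n) (1/2)^{2n} - 1."""
    return (2 * n + 1) * Fraction(comb(2 * n, n), 4 ** n) - 1

checks = all(lhs(n) == rhs(n) for n in range(12))
```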

4.8 Let $$X_1, X_2, \dots$$ be independent random variables such that $$P\{X_i = j\} = \alpha_j, j \geq 0$$. Say that a record occurs at time $$n$$ if $$X_n > max(X_1, \dots, X_{n-1})$$, where $$X_0 = -\infty$$, and if a record does occur at time $$n$$ call $$X_n$$ the record value. Let $$R_i$$ denote the ith record value.
(a) Argue that $$\{R_i, i \geq 1\}$$ is a Markov chain and compute its transition probabilities.
(b) Let $$T_i$$ denote the time between the ith and $$(i + 1)$$st record. Is $$\{T_i, i \geq 1\}$$ a Markov chain? What about $$\{(R_i, T_i), i \geq 1\}$$? Compute transition probabilities where appropriate.
(c) Let $$S_n = \sum_{i=1}^n T_i, n \geq 1$$. Argue that $$\{S_n, n \geq 1\}$$ is a Markov chain and find its transition probabilities.

(a) $$P_{ij} = \left\{ \begin{array}{ll} 0 \quad i \geq j \\ \alpha_j/\sum_{k=i+1}^{\infty} \alpha_k \quad i < j \\ \end{array} \right.$$

4.9 For a Markov chain $$\{X_n, n \geq 0\}$$, show that
$$P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} = P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\}$$

\begin{align} & P\{X_k = i_k|X_j = i_j, \text{for all } j \neq k\} \\ =& \frac{P\{X_k = i_k, \text{for all } k\} }{ P\{X_j = i_j, \text{for all } j \neq k\}} \\ =& \frac{\prod_{j} P_{i_ji_{j+1}}}{\left[\prod_{j \neq k-1, k} P_{i_ji_{j+1}}\right]P^2_{i_{k-1}i_{k+1}} } \\ =& \frac{P\{X_{k} = i_{k} | X_{k-1} = i_{k-1}\}P\{X_{k+1} = i_{k+1} | X_{k} = i_{k}\} }{P\{X_{k+1} = i_{k+1} | X_{k-1} = i_{k-1}\}} \\ =&\frac{P\{X_{k} = i_{k} | X_{k-1} = i_{k-1}\}P\{X_{k+1} = i_{k+1} | X_{k} = i_{k}, X_{k-1} = i_{k-1}\} P\{X_{k-1} = i_{k-1}\}}{P\{X_{k+1} = i_{k+1} | X_{k-1} = i_{k-1}\}P\{X_{k-1} = i_{k-1}\} } \\ =& P\{X_k = i_k|X_{k-1} = i_{k-1}, X_{k+1} = i_{k+1}\} \end{align}

4.11 If $$f_{ii} < 1$$ and $$f_{jj} < 1$$, show that:
(a) $$\sum_{n=1}^{\infty} P_{ij}^n < \infty$$
(b) $$f_{ij} = \frac{\sum_{n=1}^{\infty} P_{ij}^n}{1 + \sum_{n=1}^{\infty}P_{jj}^n}$$

(a) Since $$f_{jj} < 1$$, state $$j$$ is transient, so the expected number of visits to $$j$$ is finite from any starting state; in particular, starting from $$i$$, $$\sum_{n=1}^{\infty}P_{ij}^n < \infty$$.
(b) \begin{align} &E[\text{# of visits j starting from i}] \\ =& \sum_{n=1}^{\infty} P_{ij}^n \\ =&\sum_{n=1}^{\infty}E[\text{# of visits j starting from i}|\text{first visit j at step n}]f_{ij}^n \\ =& \sum_{n=1}^{\infty}f_{ij}^n(1 + E[\text{# of visits j starting from j}]) \\ =& f_{ij}(1 + \sum_{n=1}^{\infty}P_{jj}^n) \end{align}

4.12 A transition probability matrix $$P$$ is said to be doubly stochastic if
$$\sum_i P_{ij} = 1 \quad \text{for all } j.$$
That is, the column sums all equal 1. If a doubly stochastic chain has $$n$$ states and is ergodic, calculate its limiting probabilities.

A doubly stochastic matrix has column sums equal to 1 as well as row sums, which suggests that the stationary distribution is uniform over the states.
Indeed $$\pi = (1/n, \dots, 1/n)$$ satisfies $$\pi = \pi P$$, and by the uniqueness of the limiting distribution of an ergodic chain, the limiting probabilities are $$\pi_j = 1/n$$.
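A small sketch with an assumed 3-state doubly stochastic matrix (not one from the text) confirming that the uniform vector is stationary:

```python
# An assumed 3-state doubly stochastic matrix: every row and every
# column sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]
n = len(P)

row_sums = [sum(row) for row in P]
col_sums = [sum(P[i][j] for i in range(n)) for j in range(n)]

# The uniform vector satisfies pi = pi P.
pi = [1.0 / n] * n
pi_P = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
```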

4.15 In the $$M/G/1$$ system (Example 4.3(A)) suppose that $$\rho < 1$$ and thus the stationary probabilities exist. Compute $$\pi^{\prime}(s)$$ and find, by taking the limit as $$s \to 1, \sum_0^{\infty}i\pi_i$$.

$$\pi^{\prime}(s) = (1 - A^{\prime}(1))\frac{A(s) - A^2(s) + s(s-1)A^{\prime}(s)}{[s-A(s)]^2}$$
Both numerator and denominator vanish to second order at $$s = 1$$. Expanding $$A(s)$$ about $$s = 1$$, with $$A(1) = 1$$, $$A^{\prime}(1) = \rho = \lambda E[S]$$, and $$A^{\prime\prime}(1) = \lambda^2E[S^2]$$ (equivalently, applying L'Hôpital's rule twice),
$$\sum_0^{\infty}i\pi_i = \lim_{s \to 1} \pi^{\prime}(s) = \rho + \frac{\lambda^2E[S^2]}{2(1-\rho)}$$
which is the mean queue length at departure epochs given by the Pollaczek-Khinchine formula.

4.18 Jobs arrive at a processing center in accordance with a Poisson process with rate $$\lambda$$. However, the center has waiting space for only $$N$$ jobs and so an arriving job finding $$N$$ others waiting goes away. At most 1 job per day can be processed, and processing of this job must start at the beginning of the day. Thus, if there are any jobs waiting for processing at the beginning of a day, then one of them is processed that day, and if no jobs are waiting at the beginning of a day then no jobs are processed that day. Let $$X_n$$ denote the number of jobs at the center at the beginning of day $$n$$.
(a) Find the transition probabilities of the Markov chain $$\{X_n, n \geq 0\}$$.
(b) Is this chain ergodic? Explain.
(c) Write the equations for the stationary probabilities.

Let $$p(j) = \lambda^je^{-\lambda}/j!$$
(a) \begin{align} P_{0j} &= \left\{ \begin{array}{ll} p(j) \quad 0 \leq j < N \\ \sum_{k=N}^{\infty}p(k) \quad j = N \\ \end{array} \right. \\ P_{ij} &= \left\{ \begin{array}{ll} p(j-i+1) \quad i - 1 \leq j < N \\ \sum_{k=N-i+1}^{\infty}p(k) \quad j = N \\ \end{array} \right. \quad (1 \leq i \leq N) \end{align}
(b) The Markov chain is irreducible, since $$P_{0j} > 0$$ for all $$j$$ and $$P_{j0}^{j} = (p(0))^j > 0$$. It is aperiodic, since $$P_{00} > 0$$. A finite-state Markov chain that is irreducible and aperiodic is ergodic (since it is not possible for all states to be transient or for any state to be null recurrent).
(c) There is no particularly elegant way to write these equations. Since $$\pi_j = \sum_{k=0}^{j+1}\pi_kP_{kj}$$ for $$j < N$$, they can be rewritten as a recursion:
$$\pi_{j+1} = \frac{\pi_j(1 - P_{jj}) - \sum_{k=0}^{j-1}\pi_kP_{kj}}{P_{j+1, j}}$$
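The recursion can be exercised numerically. A sketch assuming $$\lambda = 1$$ and $$N = 3$$ (arbitrary choices), which builds the matrix from part (a), runs the recursion from an unnormalized $$\pi_0 = 1$$, and checks stationarity:

```python
from math import exp, factorial

# Assumed values for the check: lam (arrival rate) and capacity N are arbitrary.
lam, N = 1.0, 3

def p(j):
    """Poisson(lam) pmf."""
    return lam ** j * exp(-lam) / factorial(j)

def tail(m):
    """P{Poisson(lam) >= m}."""
    return 1.0 - sum(p(k) for k in range(m))

# Transition matrix from part (a), states 0..N.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for j in range(N):
    P[0][j] = p(j)
P[0][N] = tail(N)
for i in range(1, N + 1):
    for j in range(i - 1, N):
        P[i][j] = p(j - i + 1)
    P[i][N] = tail(N - i + 1)

# Recursion from part (c), started from an unnormalized pi_0 = 1.
pi = [1.0]
for j in range(N):
    nxt = (pi[j] * (1 - P[j][j]) - sum(pi[k] * P[k][j] for k in range(j))) / P[j + 1][j]
    pi.append(nxt)
total = sum(pi)
pi = [x / total for x in pi]

# Stationarity check: pi P = pi.
residual = max(abs(sum(pi[i] * P[i][j] for i in range(N + 1)) - pi[j])
               for j in range(N + 1))
```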

4.19 Let $$\pi_j, j \geq 0$$, be the stationary probabilities for a specified Markov chain.

(a) Complete the following statement: $$\pi_iP_{ij}$$ is the proportion of all transitions that . . . .
Let $$A$$ denote a set of states and let $$A^c$$ denote the remaining states.
(b) Finish the following statement: $$\sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij}$$ is the proportion of all transitions that . . . .
(c) Let $$N_n(A, A^c)$$ denote the number of the first $$n$$ transitions that are from a state in $$A$$ to one in $$A^c$$; similarly, let $$N_n(A^c, A)$$ denote the number that are from a state in $$A^c$$ to one in $$A$$. Argue that
$$|N_n(A, A^c) - N_n(A^c, A) | \leq 1$$
(d) Prove and interpret the following result:
$$\sum_{j \in A^c}\sum_{i \in A} \pi_iP_{ij} = \sum_{j \in A^c}\sum_{i \in A} \pi_jP_{ji}$$

(a) from $$i$$ to $$j$$
(b) from $$A^c$$ to $$A$$
(c) Since every state is either in $$A$$ or in $$A^c$$, between any two transitions from $$A$$ to $$A^c$$ there must be a transition from $$A^c$$ back to $$A$$, and vice versa; the two counts therefore differ by at most 1.
(d) Dividing the inequality in (c) by $$n$$ and letting $$n \to \infty$$ shows that, in the long run, the proportion of all transitions that move the chain from $$A$$ to $$A^c$$ equals the proportion that move it from $$A^c$$ to $$A$$.

4.22 Compute the expected number of plays, starting in $$i$$, in the gambler’s ruin problem, until the gambler reaches either 0 or $$N$$.

From Example 4.4(A) we know,
$$E[B] = \left\{\begin{array}{ll} \frac{1}{2p-1}\left\{\frac{N[1 - (q/p)^i]}{1 - (q/p)^N} - i\right\} \quad p \neq 1/2 \\ i(N-i) \quad p = 1/2 \\ \end{array}\right.$$
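The $$p \neq 1/2$$ formula can be checked against the first-step equations $$E_i = 1 + pE_{i+1} + qE_{i-1}$$, $$E_0 = E_N = 0$$. The sketch below solves them by a shooting method, with assumed values $$p = 0.6$$, $$N = 10$$:

```python
# Assumed values for the check: p and N are arbitrary (p != 1/2 here).
p, N = 0.6, 10
q = 1 - p

def sweep(x):
    """Forward substitution of E_{k+1} = (E_k - 1 - q E_{k-1})/p with E_0 = 0, E_1 = x."""
    E = [0.0, x]
    for k in range(1, N):
        E.append((E[k] - 1 - q * E[k - 1]) / p)
    return E

# E_N is affine in the unknown E_1 = x; pick x so the boundary E_N = 0 holds.
f0, f1 = sweep(0.0)[N], sweep(1.0)[N]
x = -f0 / (f1 - f0)
E = sweep(x)

r = q / p
def closed_form(i):
    return (N * (1 - r ** i) / (1 - r ** N) - i) / (2 * p - 1)

max_err = max(abs(E[i] - closed_form(i)) for i in range(N + 1))
```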

4.23 In the gambler’s ruin problem show that
$$P\{$$she wins the next gamble|present fortune is $$i$$, she eventually reaches $$N\}$$
$$=\left\{ \begin{array}{ll} p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\ (i+1)/2i \quad p = 1/2 \\ \end{array} \right.$$

Let $$A$$ denote the event that she wins the next gamble and $$B$$ the event that she eventually reaches $$N$$, both given that her present fortune is $$i$$. Then,
\begin{align} P\{A|B\} &= \frac{P\{AB\}}{P\{B\}}\\ &=\left\{\begin{array}{ll} \frac{p[1- (q/p)^{i+1}]/[1-(q/p)^N]}{[1- (q/p)^{i}]/[1-(q/p)^N]} \quad p \neq 1/2 \\ \frac{(i+1)/2N}{i/N} \quad p = 1/2 \\ \end{array} \right.\\ &=\left\{ \begin{array}{ll} p[1- (q/p)^{i+1}]/[1-(q/p)^i] \quad p \neq 1/2 \\ (i+1)/2i \quad p = 1/2 \\ \end{array} \right. \end{align}

4.24 Let $$T = \{1, \dots, t\}$$ denote the transient states of a Markov chain, and let $$Q$$ be, as in Section 4.4, the matrix of transition probabilities from states in $$T$$ to states in $$T$$. Let $$m_{ij}(n)$$ denote the expected amount of time spent in state $$j$$ during the first $$n$$ transitions given that the chain begins in state $$i$$, for $$i$$ and $$j$$ in $$T$$. Let $$M_n$$ be the matrix whose element in row $$i$$, column $$j$$ is $$m_{ij}(n)$$.
(a) Show that $$M_n = I + Q + Q^2 + \dots + Q^n$$.
(b) Show that $$M_n – I + Q^{n+1} = Q[I + Q + Q^2 + \dots + Q^n]$$.
(c) Show that $$M_n = (I – Q)^{-1}(I – Q^{n+1})$$.

(a) The element of $$Q^{k}$$ in row $$i$$, column $$j$$ is the probability of being in $$j$$ at time $$k$$ without having left the transient states, so $$m_{ij}(n) = I_{\{i=j\}} + \sum_{k=1}^n (Q^k)_{ij}$$, which is exactly the $$(i,j)$$ element of $$I + Q + \dots + Q^n$$.
(b) Multiply the identity in (a) by $$Q$$ on the left: $$QM_n = Q + Q^2 + \dots + Q^{n+1} = M_n - I + Q^{n+1}$$.
(c) From (b), $$(I - Q)M_n = I - Q^{n+1}$$, and $$I - Q$$ is invertible since $$Q^n \to 0$$; hence $$M_n = (I - Q)^{-1}(I - Q^{n+1})$$.

4.25 Consider the gambler’s ruin problem with $$N = 6$$ and $$p = 0.7$$. Starting in state 3, determine:
(a) the expected number of visits to state 5.
(b) the expected number of visits to state 1.
(c) the expected number of visits to state 5 in the first 7 transitions.
(d) the probability of ever visiting state 1.

Calculate $$M = (I – Q)^{-1}$$, then
(a) $$m_{3,5} = 1.3243$$ (b) $$m_{3,1} = 0.2432$$ (d) $$f_{3,1} = m_{3,1}/m_{1,1} = 0.1717$$
(c) From Problem 4.24 (c), compute $$M_{7} = M(I – Q^8)$$, then the (3, 5) element of $$M_7$$ is $$0.9932$$.
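A short script reproduces these numbers, summing the matrix series $$M = \sum_{k \geq 0} Q^k$$ term by term (equivalent to inverting $$I - Q$$ here, since $$Q^k \to 0$$):

```python
# Gambler's ruin with N = 6, p = 0.7; transient states 1..5, start at 3.
p, q = 0.7, 0.3
n = 5                                       # number of transient states

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Q[i][j] = one-step probability between transient states i+1 and j+1
Q = [[0.0] * n for _ in range(n)]
for i in range(n):
    if i + 1 < n:
        Q[i][i + 1] = p
    if i - 1 >= 0:
        Q[i][i - 1] = q

I = [[float(i == j) for j in range(n)] for i in range(n)]
M = [row[:] for row in I]                   # accumulates (I - Q)^{-1}
M7 = [row[:] for row in I]                  # accumulates I + Q + ... + Q^7
Qk = [row[:] for row in I]
for k in range(1, 500):
    Qk = matmul(Qk, Q)
    for i in range(n):
        for j in range(n):
            M[i][j] += Qk[i][j]
            if k <= 7:
                M7[i][j] += Qk[i][j]

print(round(M[2][4], 4))                    # (a) m_{3,5} = 1.3243
print(round(M[2][0], 4))                    # (b) m_{3,1} = 0.2432
print(round(M7[2][4], 4))                   # (c) = 0.9932
print(round(M[2][0] / M[0][0], 4))          # (d) f_{3,1} = 0.1717
```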

4.26 Consider the Markov chain with states $$0, 1, \dots, n$$ and transition probabilities
$$P_{0,1} = 1 = P_{n, n-1}, \quad P_{i, i+1} = p = 1 – P_{i, i-1}, \quad 0 < i < n.$$
Let $$\mu_{i,n}$$ denote the mean time to go from state $$i$$ to state $$n$$.
(a) Derive a set of linear equations for the $$\mu_{i, n}$$.
(b) Let $$m_i$$ denote the mean time to go from state $$i$$ to state $$i+1$$. Derive a set of equations for $$m_i, i=0, \dots, n-1$$, and show how they can be solved recursively, first for $$i=0$$, then $$i = 1$$, and so on.
(c) What is the relation between $$\mu_{i, n}$$ and the $$m_j$$.
Starting at state 0, say that an excursion ends when the chain either returns to 0 or reaches state $$n$$. Let $$X_j$$ denote the number of transitions in the jth excursion (that is, the one that begins at the jth return to 0), $$j \geq 1$$
(d) Find $$E[X_j]$$
(Hint: Relate it to the mean time of a gambler’s ruin problem)
(e) Let $$N$$ denote the first excursion that ends in state $$n$$, and find $$E[N]$$
(f) Find $$\mu_{0,n}$$.
(g) Find $$\mu_{i,n}$$.

(a) $$\mu_{i, n} = 1 + p\mu_{i+1,n} + (1-p)\mu_{i-1, n}, \quad 0 < i < n; \qquad \mu_{0,n} = 1 + \mu_{1,n}, \quad \mu_{n,n} = 0$$
(b) \begin{align} m_i &= p + (1-p)(1 + m_{i-1} + m_{i})\\ m_i + \frac{1}{1 – 2p} &= \frac{1-p}{p}(m_{i-1} + \frac{1}{1-2p}) \\ m_0 &= 1, m_1 = \frac{2-p}{p} \end{align}
(c) $$\mu_{i, n} = \sum_{j=i}^{n-1}m_j$$
(d) Each excursion consists of the first step $$0 \to 1$$ plus a gambler’s ruin starting from state 1, so with $$q = 1-p$$, $$E[X_j] = 1 + \frac{1}{2p-1}\left\{\frac{n[1 - (q/p)]}{1 - (q/p)^n} - 1\right\}$$
(e) $$N$$ is a geometric random variable with parameter $$a = (1 - q/p)/[1 - (q/p)^n]$$, the probability that the gambler’s ruin from state 1 ends at $$n$$; thus $$E[N] = 1/a$$.
(f) We can compute $$\mu_{0, n}$$ in two ways: by Wald’s equation, $$\mu_{0,n} = E[N]E[X_1]$$, or from (c),
$$\mu_{0,n} = \sum_{j=0}^{n-1}m_j = \frac{2p(p-1)}{(2p - 1)^2}[1 - (\frac{1-p}{p})^n] - \frac{n}{1-2p}$$
(g) $$\frac{2p(p-1)}{(2p-1)^2}[(\frac{1-p}{p})^i – (\frac{1-p}{p})^n] – \frac{n-i}{1-2p}$$
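A quick consistency check of (b), (c), and (g), with arbitrary illustrative values $$p = 0.7, n = 10$$; the recursion below, $$m_i = 1/p + \frac{1-p}{p}m_{i-1}$$, is the relation in (b) solved for $$m_i$$:

```python
# Compute m_i from the recursion and compare the partial sums
# mu_{i,n} = sum_{j=i}^{n-1} m_j with the closed form in (g).
p, n = 0.7, 10
r = (1 - p) / p

m = [1.0]                                  # m_0 = 1 since P_{0,1} = 1
for i in range(1, n):
    m.append(1 / p + r * m[-1])

def mu_closed(i):
    return (2 * p * (p - 1) / (2 * p - 1) ** 2) * (r ** i - r ** n) \
           - (n - i) / (1 - 2 * p)

for i in range(n):
    assert abs(sum(m[i:]) - mu_closed(i)) < 1e-9
print("mu_{i,n} = sum_{j=i}^{n-1} m_j matches the closed form")
```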

4.27 Consider a particle that moves along a set of $$m+1$$ nodes, labeled $$0, 1, \dots, m$$. At each move it either goes one step in the clockwise direction with probability $$p$$ or one step in the counterclockwise direction with probability $$1-p$$. It continues moving until all the nodes $$1, 2, \dots, m$$ have been visited at least once. Starting at node 0, find the probability that node $$i$$ is the last node visited, $$i = 1, \dots, m$$.

As shown in Example 1.9(B),
\begin{align} &P\{i \text{ is the last node visited}\}\\ =&(1-f_{m-1, m})P\{\text{visit i – 1 before i + 1}\} + f_{1,m-1}P\{\text{visit i + 1 before i – 1}\} \\ =&(1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 – f_{m-i, m-1}) \\ \end{align}

4.28 In Problem 4.27, find the expected number of additional steps it takes to return to the initial position after all nodes have been visited.

\begin{align} P_i &= (1-f_{m-1, m})f_{m-i, m-1} + f_{1, m-1}(1 – f_{m-i, m-1}) \\ q &= 1 – p\\ B_i &= \frac{1}{2p-1}\{ \frac{(m+1)[1 – (q/p)^i]}{1 – (q/p)^{m+1}} – i\} \\ E &= \sum_{i=1}^{m} B_iP_i \end{align}

4.29 Each day one of $$n$$ possible elements is requested, the ith one with probability $$P_i, i \geq 1, \sum_{i=1}^n P_i = 1$$. These elements are at all times arranged in an ordered list that is revised as follows: the element selected is moved to the front of the list with the relative positions of all the other elements remaining unchanged. Defined the state at any time to be the list ordering at that time.
(a) Argue that the above is a Markov chain.
(b) For any state $$i_1, \dots, i_n$$(which is a permutation of $$1,2, \dots, n$$), let $$\pi(i_1, \dots, i_n)$$ denote the limiting probability. Argue that
$$\pi(i_1, \dots, i_n) = P_{i_1}\frac{P_{i_2}}{1 – P_{i_1}}\dots\frac{P_{i_{n-1}}}{1 – P_{i_1} – \dots – P_{i_{n-2}}}$$

4.31 A spider hunting a fly moves between locations 1 and 2 according to a Markov chain with transition matrix $$\begin{bmatrix} 0.7 & 0.3 \\ 0.3 & 0.7\end{bmatrix}$$ starting in location 1. The fly, unaware of the spider, starts in location 2 and moves according to a Markov chain with transition matrix $$\begin{bmatrix} 0.4 & 0.6 \\ 0.6 & 0.4\end{bmatrix}$$. The spider catches the fly and the hunt ends whenever they meet in the same location.
Show that the progress of the hunt, except for knowing the location where it ends, can be described by a three-state Markov chain where one absorbing state represents hunt ended and the other two that the spider and fly are at different locations. Obtain the transition matrix for this chain.
(a) Find the probability that at time $$n$$ the spider and fly are both at their initial locations.
(b) What is the average duration of the hunt?

Let state 1 denote spider at 1, fly at 2; state 2 denote spider at 2, fly at 1; state 3 denote that they are at the same location. Then the transition matrix is $$P = \begin{bmatrix} 0.28 & 0.18 & 0.54 \\ 0.18 & 0.28 & 0.54\\ 0 & 0 & 1\end{bmatrix}$$
(a) By diagonalizing the transition matrix, we can compute
$$P^n = \begin{bmatrix} 1 & -1 & 1 \\ 1 & 1 & 1\\ 1 & 0 & 0\end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.1^n & 0\\ 0 & 0 & 0.46^n\end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \\ -0.5 & 0.5 & 0\\ 0.5 & 0.5 & -1\end{bmatrix}$$
thus, $$(0.1^n + 0.46^n)/2$$
(b) From either transient state the hunt ends at the next step with probability 0.54, so the duration is a geometric random variable with mean $$1/0.54 = 50/27$$
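Both claims are easy to confirm numerically:

```python
# P^n(1,1) should equal (0.1^n + 0.46^n)/2 for every n, and the hunt ends at
# each step with probability 0.54 from either transient state.
P = [[0.28, 0.18, 0.54],
     [0.18, 0.28, 0.54],
     [0.00, 0.00, 1.00]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Pn = [[float(i == j) for j in range(3)] for i in range(3)]
for n in range(1, 11):
    Pn = matmul(Pn, P)
    assert abs(Pn[0][0] - (0.1 ** n + 0.46 ** n) / 2) < 1e-12

print(1 / 0.54)     # (b) mean duration 50/27 ≈ 1.8519
```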

4.32 Consider a simple random walk on the integer points in which at each step a particle moves one step in the positive direction with probability $$p$$, one step in the negative direction with probability $$p$$, and remains in the same place with probability $$q = 1 – 2p(0 < p < 1/2)$$. Suppose an absorbing barrier is placed at the origin–that is, $$P_{00}=1$$–and a reflecting barrier at $$N$$–that is, $$P_{N, N-1} = 1$$–and that the particle starts at $$n (0 < n < N)$$.
Show that the probability of absorption is 1, and find the mean number of steps.

Let $$A_i$$ denote the probability of absorption for a particle starting at $$i$$. Then
$$A_i = pA_{i-1} + pA_{i+1} + (1-2p)A_i, \quad A_0 = 1, \quad A_{N} = A_{N-1}$$
Solve the equations we get, $$A_i = 1$$ for all $$i$$.
Let $$T_i$$ denote the expected number of steps starting at $$i$$, then
\begin{align} E[T_i] &= 1 + pE[T_{i-1}] + pE[T_{i+1}] + (1-2p)E[T_i],\\ E[T_0] &= 0, \\ E[T_N] &= 1 + E[T_{N-1}]\end{align}
Solving the equations above, we get $$E[T_n] = \frac{n(2N - n + 2p -1)}{2p}$$
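The closed form can be checked against the defining equations; the values $$N = 12, p = 0.3$$ are arbitrary (any $$0 < p < 1/2$$ works):

```python
# Verify that T(i) = i(2N - i + 2p - 1)/(2p) satisfies the boundary conditions
# and the interior recurrence for the absorbing/reflecting random walk.
N, p = 12, 0.3

def T(i):
    return i * (2 * N - i + 2 * p - 1) / (2 * p)

assert T(0) == 0                               # absorbed at the origin
assert abs(T(N) - (1 + T(N - 1))) < 1e-9       # reflecting barrier at N
for i in range(1, N):
    assert abs(T(i) - (1 + p * T(i - 1) + p * T(i + 1)
                       + (1 - 2 * p) * T(i))) < 1e-9
print("E[T_n] formula satisfies the equations")
```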

4.33 Given that $$\{X_n, n \geq 0\}$$ is a branching process:
(a) Argue that either $$X_n$$ converges to 0 or infinity.
(b) Show that
$$Var(X_n|X_0 = 1) = \left\{\begin{array}{ll} \sigma^2\mu^{n-1} \frac{\mu^n – 1}{\mu – 1} \quad \mu \neq 1 \\ n\sigma^2 \quad \mu = 1 \\ \end{array} \right.$$
where $$\mu$$ and $$\sigma^2$$ are the mean and variance of the number of offspring an individual has.

(a) When $$P_0 > 0$$ every positive state is transient, and any finite set of transient states $$\{1, 2, \dots, n\}$$ is visited only finitely often; hence the population either dies out or its size converges to infinity.
(b) \begin{align} &Var(X_n|X_0 = 1)\\ =& E[Var(X_n|X_{n-1}, X_0=1)] + Var(E[X_n|X_{n-1}, X_0 = 1]) \\ =& \sigma^2 E[X_{n-1}|X_0=1] + Var(\mu X_{n-1}|X_0 = 1) \\ =& \sigma^2\mu^{n-1} + \mu^2Var(X_{n-1}|X_0=1) \\ =& \sigma^2(\mu^{n-1} + \mu^n + \dots + \mu^{2n-2}) \\ =& \left\{\begin{array}{ll} \sigma^2\mu^{n-1} \frac{\mu^n – 1}{\mu – 1} \quad \mu \neq 1 \\ n\sigma^2 \quad \mu = 1 \\ \end{array} \right. \end{align}
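The closed form can be checked against the one-step recursion used in the derivation (the $$\mu, \sigma^2$$ pairs below are arbitrary illustrative values):

```python
# Iterate Var(X_n) = sigma^2 mu^{n-1} + mu^2 Var(X_{n-1}) and compare with
# the closed form, for both mu != 1 and mu = 1.
for mu, sigma2 in [(1.3, 0.4), (0.8, 1.1), (1.0, 0.7)]:
    var = 0.0                                   # Var(X_0) = 0 since X_0 = 1
    for n in range(1, 12):
        var = sigma2 * mu ** (n - 1) + mu ** 2 * var
        closed = (n * sigma2 if mu == 1.0
                  else sigma2 * mu ** (n - 1) * (mu ** n - 1) / (mu - 1))
        assert abs(var - closed) < 1e-9
print("variance formula verified")
```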

4.34 In a branching process the number of offspring per individual has a binomial distribution with parameters $$2, p$$. Starting with a single individual, calculate:
(a) the extinction probability;
(b) the probability that the population becomes extinct for the first time in the third generation.
Suppose that, instead of starting with a single individual, the initial population size $$Z_0$$ is a random variable that is Poisson distributed with mean $$\lambda$$. Show that, in this case, the extinction probability is given, for $$p > 1/2$$, by
$$exp\{\lambda(1-2p)/p^2\}$$

(a) $$\pi_0 = \sum_{k=0}^2 \pi_0^k{2 \choose k}p^k(1-p)^{2-k}\\ \pi_0 = \left\{\begin{array}{ll} 1 \quad p \leq 0.5 \\ (1 – 1/p)^2 \quad p > 0.5 \\ \end{array} \right.$$
(b) Let $$\phi_n(s) = E[s^{X_n}]$$. Then $$\phi_n(s) = \phi_1(\phi_{n-1}(s))$$, and it’s easy to see that $$\phi_1(s) = (sp + 1 -p)^2, P\{X_n = 0\} = \phi_n(0)$$. Thus,
\begin{align} &\phi_3(0) – \phi_2(0) \\ =& \phi_1(\phi_1(\phi_1(0))) – \phi_1(\phi_1(0))\\ =& 4p^2(1-p)^4 + 6p^3(1-p)^5 + 6p^4(1-p)^6 + 4p^5(1-p)^7 + p^6(1-p)^8 \end{align}
(c) \begin{align} &\sum_{k=0}^{\infty} (1 – 1/p)^{2k} \frac{\lambda^ke^{-\lambda}}{k!} \\ =& e^{\lambda(1-2p)/p^2} \sum_{k=0}^{\infty} \frac{[\lambda(1 – 1/p)^2]^k}{k!}e^{-\lambda(1 – 1/p)^2 }\\ =&exp\{\lambda(1-2p)/p^2\} \end{align}
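A numerical check of (a) and (b) via the generating function $$\phi_1(s) = (sp + 1 - p)^2$$, with an assumed $$p = 0.7$$:

```python
# (a): iterating phi from 0 converges to the extinction probability;
# (b): phi_3(0) - phi_2(0) should match the expanded polynomial.
p = 0.7
q = 1 - p
phi = lambda s: (s * p + q) ** 2

s = 0.0
for _ in range(500):
    s = phi(s)
assert abs(s - (1 - 1 / p) ** 2) < 1e-12       # extinction prob for p > 1/2

direct = phi(phi(phi(0))) - phi(phi(0))
poly = (4 * p**2 * q**4 + 6 * p**3 * q**5 + 6 * p**4 * q**6
        + 4 * p**5 * q**7 + p**6 * q**8)
assert abs(direct - poly) < 1e-12
print(direct)                                   # ≈ 0.0221
```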

4.35 Consider a branching process in which the number of offspring per individual is Poisson distributed with mean $$\lambda, \lambda > 1$$. Let $$\pi_0$$ denote the probability that, starting with a single individual, the population eventually becomes extinct. Also, let $$a, a < 1$$, be such that $$ae^{-a} = \lambda e^{-\lambda}$$
(a) Show that $$a = \lambda\pi_0$$.
(b) Show that, conditional on eventual extinction, the branching process follows the same probability law as the branching process in which the number of offspring per individual is Poisson with mean $$a$$.

(a) \begin{align} \pi_0 &= \sum_{k=0}^{\infty} \frac{\pi_0^k\lambda^k}{k!} e^{-\lambda}\\ &= e^{\lambda\pi_0-\lambda}\sum_{k=0}^{\infty}\frac{(\lambda\pi_0)^k}{k!}e^{-\lambda\pi_0}\\ &= e^{\lambda\pi_0-\lambda} \\ \end{align}
Thus,
$$ae^{-a} = \lambda e^{-\lambda} = \lambda\pi_0e^{-\lambda\pi_0}$$
(b) \begin{align} &P\{X = k| \text{eventually die out}\} \\ =&\frac{P\{X=k\}\pi_0^k}{\pi_0} \\ =&\frac{(\pi_0\lambda)^{k-1}\lambda e^{-\lambda}}{k!}\\ =&\frac{a^ke^{-a}}{k!} \end{align}

4.36 For the Markov chain model of Section 4.6.1, namely,
$$P_{ij} = \frac{1}{i – 1}, \quad j = 1, \dots, i-1, \quad i > 1,$$
suppose that the initial state is $$N \equiv {n \choose m}$$, where $$n>m$$. Show that when $$n, m$$ and $$n-m$$ are large the number of steps to reach 1 from state $$N$$ has approximately a Poisson distribution with mean
$$m[c\log\frac{c}{c-1} + \log(c-1)],$$
where $$c = n/m$$. (Hint: Use Stirling’s approximation.)

From Proposition 4.6.2 (iii), the number of steps $$T_N$$ to reach 1 is approximately Poisson with mean $$\log N = \log {n \choose m}$$.
\begin{align} \log {n \choose m} &= \log n! – \log m! – \log (n-m)! \\ &\sim n\log n – n – m\log m + m -(n-m)\log(n-m) + (n-m) \\ &= n\log\frac{n}{n-m} + m\log\frac{n-m}{m}\\ &= m[c\log\frac{c}{c-1} + \log(c-1)] \end{align}
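A numerical illustration with assumed $$n = 6000, m = 2000$$ (so $$c = 3$$), computing the exact $$\log {n \choose m}$$ via the log-gamma function:

```python
# The Stirling approximation drops only O(log n) terms, so the relative
# error is well under 1% at this scale.
import math

n, m = 6000, 2000
c = n / m
exact = math.lgamma(n + 1) - math.lgamma(m + 1) - math.lgamma(n - m + 1)
approx = m * (c * math.log(c / (c - 1)) + math.log(c - 1))
print(exact, approx)
assert abs(exact - approx) / exact < 0.005
```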

4.37 For any infinite sequence $$x_1, x_2, \dots$$, we say that a new long run begins each time the sequence changes direction. That is, if the sequence starts 5, 2, 4, 5, 6, 9, 3, 4, then there are three long runs–namely, (5, 2), (4, 5, 6, 9), and (3, 4). Let $$X_1, X_2, \dots$$ be independent uniform (0, 1) random variables and let $$I_n$$ denote the initial value of the nth long run. Argue that $$\{I_n, n \geq 1\}$$ is a Markov chain having a continuous state space with transition probability density given by
$$p(y|x) = e^{1-x} + e^x – e^{|y-x|} – 1.$$

Similarly to Section 4.6.2, when an ascending run ends (ascend changes to descend),
\begin{align} &P\{I_{n+1} \in (y, y+dy), L_n = m | I_n = x\} \\ =&\left\{\begin{array}{ll} \frac{(1-x)^{m-1}}{(m-1)!}dy \quad y < x \\ \frac{(1-x)^{m-1}}{(m-1)!}dy[1 - (\frac{y-x}{1-x})^{m-1}] \quad y > x \\ \end{array}\right. \end{align}
When a descending run ends (descend changes to ascend),
\begin{align} &P\{I_{n+1} \in (y+dy), L_n = m | I_n = x\} \\ =&\frac{x^{m-1}}{(m-1)!}dyP\{min(X_1, \dots, X_{m-1}) < y | X_i < x, i = 1, \dots, m-1\} \\ =&\left\{\begin{array}{ll} \frac{x^{m-1}}{(m-1)!}dy[1 – (\frac{x-y}{x})^{m-1}] \quad y < x \\ \frac{x^{m-1}}{(m-1)!}dy \quad y > x \\ \end{array}\right. \end{align}Thus,
\begin{align} p(y|x) &= \left\{\begin{array}{ll} \sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!} + \frac{x^{m-1}}{(m-1)!}[1 - (\frac{x-y}{x})^{m-1}]\} \quad y < x \\ \sum_{m=2}^{\infty} \{\frac{(1-x)^{m-1}}{(m-1)!}[1 - (\frac{y-x}{1-x})^{m-1}] + \frac{x^{m-1}}{(m-1)!}\} \quad y > x \\ \end{array}\right. \\ &= e^{1-x} + e^x - e^{|y-x|} - 1 \end{align}

4.38 Suppose in Example 4.7(B) that if the Markov chain is in state $$i$$ and the random variable distributed according to $$q_{ij}$$ takes on the value $$j$$, then the next state is set equal to $$j$$ with probability $$a_j/(a_j + a_i)$$ and equal to $$i$$ otherwise. Show that the limiting probabilities for this chain are $$\pi_j = a_j/\sum_j a_j$$.

From Example 4.7(B), we can verify the chain is time reversible with stationary probabilities $$\pi_j$$ by verifying:
$$\pi_iP_{ij} = \frac{a_i}{\sum_k a_k}\frac{q_{ij}a_j}{a_i+a_j} = \pi_jP_{ji}$$
Since the above equation is immediate (using the symmetry $$q_{ij} = q_{ji}$$ from Example 4.7(B)), the same reasoning shows that $$\pi_j = a_j/\sum_j a_j$$ are the limiting probabilities.

4.39 Find the transition probabilities for the Markov chain of Example 4.3(D) and show that it is time reversible.

$$P_{ij} = \sum_{k=0}^{min(i,j)} {i \choose k}(1-p)^kp^{i-k}\frac{e^{-\lambda}\lambda^{j-k}}{(j-k)!}$$
Thus by the formula $$\pi_j = e^{-\lambda/p}(\lambda/p)^j/j!$$, we get $$\pi_iP_{ij} = \pi_jP_{ji}$$

4.41 A particle moves among $$n$$ locations that are arranged in a circle (with the neighbors of location $$n$$ being $$n-1$$ and 1). At each step, it moves one position in the clockwise direction with probability $$p$$ or one position in the counterclockwise direction with probability $$1 - p$$
(a) Find the transition probabilities of the reverse chain.
(b) Is the chain time reversible?

(a) The stationary probabilities are $$\pi_i = 1/n$$, so the reversed chain has transition probabilities $$P^*_{ij} = \pi_jP_{ji}/\pi_i = P_{ji}$$: it moves one position counterclockwise with probability $$p$$ and clockwise with probability $$1-p$$.
(b) Reversibility requires $$P^*_{ij} = P_{ij}$$, so the chain is time reversible only when $$p = 1/2$$ (for $$n > 2$$).

4.42 Consider the Markov chain with states $$0, 1, \dots, n$$ and with transition probabilities
$$P_{0,1} = P_{n, n-1} = 1, \quad P_{i, i+1} = p_i = 1 – P_{i, i-1}, \quad i = 1, \dots, n-1.$$
Show that this Markov chain is of the type considered in Proposition 4.7.1 and find its stationary probabilities.

The chain is a random walk on the weighted linear graph $$0 - 1 - \dots - n$$: choose symmetric edge weights $$w_{i,i+1}$$ so that $$P_{i,i+1} = w_{i,i+1}/(w_{i-1,i} + w_{i,i+1})$$, which holds for $$w_{0,1} = 1$$ and $$w_{i,i+1} = \prod_{k=1}^{i}\frac{p_k}{1-p_k}$$. By Proposition 4.7.1 the stationary probabilities are
$$\pi_i = \frac{\sum_j w_{ij}}{\sum_i\sum_j w_{ij}} = \frac{w_{i-1,i} + w_{i,i+1}}{2\sum_{j=0}^{n-1}w_{j,j+1}},$$
with the convention $$w_{-1,0} = w_{n,n+1} = 0$$.
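A numerical check with arbitrary illustrative $$p_i$$, verifying that the resulting $$\pi$$ is stationary:

```python
# Build symmetric edge weights with w_{i,i+1}/(w_{i-1,i} + w_{i,i+1}) = p_i,
# form pi from the weighted-graph formula, and verify pi P = pi.
p = [0.0, 0.3, 0.6, 0.2, 0.7, 0.0]      # p_1..p_4 arbitrary; p[0], p[5] unused
n = len(p) - 1                           # states 0..n

w = [1.0]                                # w[i] = weight of edge (i, i+1)
for i in range(1, n):
    w.append(w[-1] * p[i] / (1 - p[i]))

deg = [w[0]] + [w[i - 1] + w[i] for i in range(1, n)] + [w[n - 1]]
pi = [d / sum(deg) for d in deg]

P = [[0.0] * (n + 1) for _ in range(n + 1)]
P[0][1] = P[n][n - 1] = 1.0
for i in range(1, n):
    P[i][i + 1], P[i][i - 1] = p[i], 1 - p[i]

for j in range(n + 1):
    assert abs(sum(pi[i] * P[i][j] for i in range(n + 1)) - pi[j]) < 1e-12
print("pi is stationary")
```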

4.44 Consider a time-reversible Markov chain with transition probabilities $$P_{ij}$$ and limiting probabilities $$\pi_i$$, and now consider the same chain truncated to the states $$0, 1, \dots, M$$. That is, for the truncated chain its transition probabilities $$P_{ij}$$ are$$\bar{P_{ij}} = \left\{ \begin{array}{ll} P_{ij} + \sum_{k > M}P_{ik} \quad 0 \leq i \leq M, j = i \\ P_{ij} \quad 0 \leq i \neq j \leq M \\ 0 \quad \text{otherwise}. \end{array} \right.$$
Show that the truncated chain is also time reversible and has limiting probabilities given by$$\bar{\pi_i} = \frac{\pi_i}{\sum_{i=0}^M\pi_i}$$

\begin{align} \bar{\pi_i}\bar{P_{ij}} &= \frac{\pi_i}{\sum_{i=0}^{M}\pi_i}(P_{ij} + I_{\{i=j\}}\sum_{k>M}P_{ik}) \\ &= \frac{\pi_j}{\sum_{i=0}^{M}\pi_i}(P_{ji} + I_{\{i=j\}}\sum_{k>M}P_{jk}) \\ &= \bar{\pi_j}\bar{P_{ji}} \end{align}

4.45 Show that a finite state, ergodic Markov chain such that $$P_{ij} > 0$$ for all $$i \neq j$$ is time reversible if, and only if,
$$P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji} \quad \text{for all } i, j, k$$

Necessity follows by multiplying the detailed balance equations $$\pi_iP_{ij} = \pi_jP_{ji}$$, $$\pi_jP_{jk} = \pi_kP_{kj}$$, $$\pi_kP_{ki} = \pi_iP_{ik}$$. For sufficiency, fix a state $$i$$ and define $$\pi_j = cP_{ij}/P_{ji}$$, where $$c$$ is chosen so that $$\sum_j \pi_j = 1$$. Then $$\pi_jP_{jk}= \pi_kP_{kj}$$ if and only if
$$\frac{cP_{ij}P_{jk}}{P_{ji}} = \frac{cP_{ik}P_{kj}}{P_{ki}}$$
if and only if $$P_{ij}P_{jk}P_{ki} = P_{ik}P_{kj}P_{ji}$$, which holds by assumption; hence the chain is time reversible.

4.47 $$M$$ balls are initially distributed among $$m$$ urns. At each stage one of the balls is selected at random, taken from whichever urn it is in, and placed, at random, in one of the other $$m-1$$ urns. Consider the Markov chain whose state at any time is the vector $$n_1, \dots, n_m$$, where $$n_i$$ denotes the number of balls in urn $$i$$. Guess at the limiting probabilities for this Markov chain and then verify your guess and show at the same time that the Markov chain is time reversible.

Guess the limiting distribution is the Multinomial distribution:
$$\pi(n_1, n_2, \dots, n_m) = \frac{M!}{n_1!\dots n_m!m^M}$$ where $$n_i \geq 0, n_1 + \dots + n_m = M$$. Now for two vectors $$(n_1, n_2, \dots, n_m)$$ and $$(n_1, n_2, \dots, n_i+1, \dots, n_j -1, \dots, n_m)$$, if $$i \neq j, n_j > 0$$, then the transition probability is
\begin{align} &P((n_1, \dots, n_i + 1, \dots, n_j – 1, \dots, n_m ),(n_1, \dots, n_m))\\ =&\frac{n_i + 1}{M}\frac{1}{m-1}\end{align}
Detailed balance holds since $$\pi(n_1, \dots, n_i+1, \dots, n_j-1, \dots, n_m)/\pi(n_1, \dots, n_m) = n_j/(n_i+1)$$, which verifies time reversibility with the guessed probabilities. Summing over all neighboring states also confirms stationarity:
\begin{align} \sum_{1 \leq i \neq j \leq m} &\pi(n_1, \dots, n_i + 1, \dots, n_j – 1, \dots, n_m)\\ &P((n_1, \dots, n_i + 1, \dots, n_j – 1, \dots, n_m ),(n_1, \dots, n_m)) \\ =&\sum_{1 \leq i \neq j \leq m} \frac{M!}{n_1!\dots (n_i+1)!\dots(n_j-1)!\dots n_m!m^M}\frac{n_i+1}{M}\frac{1}{m-1} \\ =& \frac{M!}{n_1!\dots n_m!m^M} \sum_{1 \leq i \neq j \leq m} \frac{n_j}{M(m-1)}\\ =& \pi(n_1, \dots, n_m) \end{align}
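Detailed balance can be verified exhaustively for small instances; $$M = 4$$ balls and $$m = 3$$ urns below are illustrative choices:

```python
# Enumerate all states with n_1 + ... + n_m = M and check
# pi(s) P(s, s') = pi(s') P(s', s) for every one-ball move.
from itertools import product
from math import factorial

M, m = 4, 3

def pi(state):
    num = factorial(M)
    for c in state:
        num //= factorial(c)
    return num / m ** M

states = [s for s in product(range(M + 1), repeat=m) if sum(s) == M]
for s in states:
    for i in range(m):
        for j in range(m):
            if i == j or s[j] == 0:
                continue
            s2 = list(s); s2[j] -= 1; s2[i] += 1; s2 = tuple(s2)
            P_fwd = (s[j] / M) * (1 / (m - 1))     # move a ball from urn j to i
            P_back = (s2[i] / M) * (1 / (m - 1))   # move it back
            assert abs(pi(s) * P_fwd - pi(s2) * P_back) < 1e-12
print("detailed balance holds")
```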

4.48 For an ergodic semi-Markov process.
(a) Compute the rate at which the process makes a transition from $$i$$ into $$j$$.
(b) Show that $$\sum_{i} P_{ij}/\mu_{ii} = 1/\mu_{jj}.$$
(c) Show that the proportion of time that the process is in state $$i$$ and headed for state $$j$$ is $$P_{ij}\eta_{ij}/\mu_{ii}$$, where $$\eta_{ij} = \int_0^{\infty}\bar{F_{ij}}(t)dt.$$
(d) Show that the proportion of time that the state is $$i$$ and will next be $$j$$ within a time $$x$$ is$$\frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x),$$ where $$F_{i,j}^e$$ is the equilibrium distribution of $$F_{ij}$$.

(a) Define a (delayed) renewal reward process: a renewal occurs each time state $$i$$ is entered, and the reward $$R_n$$ of the $$n$$th cycle equals 1 if the state visited after $$i$$ in that cycle is $$j$$, and 0 otherwise. Let $$R_{ij}(t)$$ be the total number of transitions from $$i$$ to $$j$$ by time $$t$$. We have
$$\sum_{n=0}^{N(t)}R_n \leq R_{ij}(t) \leq \sum_{n=0}^{N(t) + 1}R_n \leq \sum_{n=0}^{N(t)} R_n + 1.$$ Thus the rate at which the process makes a transition from $$i$$ to $$j$$ equals
$$\lim_{t \to \infty} \frac{R_{ij}(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}}{\mu_{ii}}$$
(b) Let $$R_j(t)$$ be the number of visits to state $$j$$ by time $$t$$. Thus
\begin{align} \sum_i R_{ij}(t) &= R_j(t) \\ \sum_i \lim_{t \to \infty} \frac{R_{ij}(t)}{t} &= \lim_{t \to \infty}\frac{R_j(t)}{t} \\ \sum_{i} P_{ij}/\mu_{ii} &= 1/\mu_{jj} \end{align}
(c) Define cycle as in part (a) and the reward in a cycle to be 0 if the transition from $$i$$ is not into $$j$$ and $$T_{ij}$$ the time taken for transition if the transition from $$i$$ is into $$j$$. Thus
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[T_{ij}]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}$$
(d) Define cycle as in last part and the reward in a cycle as 0 if the transition from $$i$$ is not into $$j$$ and $$min(x, T_{ij})$$ if the transition from $$i$$ is into $$j$$. Thus
$$\lim_{t \to \infty} \frac{R(t)}{t} = \frac{E[R]}{E[X]} = \frac{P_{ij}E[min(x, T_{ij})]}{\mu_{ii}} = \frac{P_{ij}\eta_{ij}}{\mu_{ii}}F_{i,j}^e(x)$$

4.49 For an ergodic semi-Markov process derive an expression, as $$t \to \infty$$, for the limiting conditional probability that the next state visited after $$t$$ is state $$j$$, given $$X(t) = i$$.

By Problem 4.48(c), the proportion of time the process is in state $$i$$ and headed for $$j$$ is $$P_{ij}\eta_{ij}/\mu_{ii}$$, while the proportion of time in state $$i$$ is $$\sum_k P_{ik}\eta_{ik}/\mu_{ii}$$. Dividing,
$$\lim_{t \to \infty} P\{\text{next state is } j | X(t) = i\} = \frac{P_{ij}\eta_{ij}}{\sum_k P_{ik}\eta_{ik}}$$

4.50 A taxi alternates between three locations. When it reaches location 1, it is equally likely to go next to either 2 or 3. When it reaches 2, it will next go to 1 with probability 1/3 and to 3 with probability 2/3. From 3 it always goes to 1. The mean times between locations $$i$$ and $$j$$ are $$t_{12} = 20, t_{13} = 30, t_{23} = 30 (t_{ij} = t_{ji})$$.
(a) What is the (limiting) probability that the taxi’s most recent stop was at location $$i, i = 1, 2, 3$$ ?
(b) What is the (limiting) probability that the taxi is heading for location 2?
(c) What fraction of time is the taxi traveling from 2 to 3? Note: Upon arrival at a location the taxi immediately departs.

The transition matrix is $$\begin{bmatrix} 0 & 1/2 & 1/2 \\ 1/3 & 0 & 2/3\\ 1 & 0 & 0\end{bmatrix}$$
And the stationary probabilities are $$\pi_1 = 6/14, \pi_2 = 3/14, \pi_3 = 5/14$$.
Since $$\mu_i = \sum_j P_{ij}t_{ij}$$, we have $$\mu_1 = 25, \mu_2 = 80/3, \mu_3 = 30$$.
(a) From Proposition 4.8.3, we have $$P_1 = 15/38, P_2 = 8/38, P_3 = 15/38$$ (note $$\pi_3\mu_3 = (5/14)(30) = (6/14)(25) = \pi_1\mu_1$$, so $$P_3 = P_1$$).
(b) From Problem 4.48(c), we have $$P_{12}\eta_{12}P_1/\mu_1 = 3/19$$.
(c) Similar to (b), $$P_{23}\eta_{23}P_2/\mu_2 = 3/19$$.
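All of these numbers can be confirmed with exact rational arithmetic (the check also shows $$P_3 = 15/38$$, equal to $$P_1$$):

```python
# Verify the stationary probabilities, the mu_i, and all three answers.
from fractions import Fraction as F

P = [[F(0), F(1, 2), F(1, 2)],
     [F(1, 3), F(0), F(2, 3)],
     [F(1), F(0), F(0)]]
t = {(0, 1): 20, (0, 2): 30, (1, 2): 30}        # mean travel times t_{ij}
eta = lambda i, j: F(t[min(i, j), max(i, j)])

pi = [F(6, 14), F(3, 14), F(5, 14)]             # stationary probabilities
for j in range(3):
    assert sum(pi[i] * P[i][j] for i in range(3)) == pi[j]

mu = [sum(P[i][j] * eta(i, j) for j in range(3) if P[i][j]) for i in range(3)]
assert mu == [F(25), F(80, 3), F(30)]

total = sum(pi[i] * mu[i] for i in range(3))
Pstate = [pi[i] * mu[i] / total for i in range(3)]
assert Pstate == [F(15, 38), F(4, 19), F(15, 38)]        # (a)

assert pi[0] * P[0][1] * eta(0, 1) / total == F(3, 19)   # (b)
assert pi[1] * P[1][2] * eta(1, 2) / total == F(3, 19)   # (c)
print("all checks pass")
```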

## Solutions to Stochastic Processes Ch.3


3.1 Is it true that:
(a) $$N(t) < n$$ if and only if $$S_n > t$$ ?
(b) $$N(t) \leq n$$ if and only if $$S_n \geq t$$ ?
(c) $$N(t) > n$$ if and only if $$S_n < t$$ ?

(a) True, (b)(c) False.

3.2 In defining a renewal process we suppose that $$F(\infty)$$, the probability that an interarrival time is finite, equals 1. If $$F(\infty) < 1$$, then after each renewal there is a positive probability $$1 – F(\infty)$$ that there will be no further renewals. Argue that when $$F(\infty) < 1$$ the total number of renewals, call it $$N(\infty)$$, is such that $$1 + N(\infty)$$ has a geometric distribution with mean $$1/(1 – F(\infty))$$.

$$P\{1 + N(\infty) = k\} = F^{k-1}(\infty)(1 – F(\infty))$$

3.3 Express in words what the random variable $$X_{N(t) + 1}$$ represents (Hints: It is the length of which renewal interval?). Show that$$P\{X_{N(t)+1} \geq x\} \geq \bar{F}(x).$$
Compute the above exactly when $$F(x) = 1 – e^{-\lambda x}$$.

$$X_{N(t)+1}$$ represents the length of the renewal interval containing $$t$$.
\begin{align} P\{X_{N(t)+1} \geq x\} &= \int_0^{\infty} P\{X_{N(t)+1} \geq x | S_{N(t)} = s \}dF_{S_{N(t)}}(s)\\ &= \int_0^{\infty} P\{X_{N(t)+1} \geq x | X_{N(t) + 1} > t – s \}dF_{S_{N(t)}}(s) \\ &= \int_0^{\infty} \frac{P\{X_{N(t)+1} \geq x, X_{N(t) + 1} > t – s \}}{P\{X_{N(t) + 1} > t – s \}}dF_{S_{N(t)}}(s) \\ &= \int_0^{\infty} \frac{1 – F(max\{x, t-s\})}{1 – F(t-s)}dF_{S_{N(t)}}(s) \\ &= \int_0^{\infty} min\{\frac{1 – F(x)}{1 – F(t-s)}, \frac{1 – F(t-s)}{1 – F(t-s)}\}dF_{S_{N(t)}}(s) \\ &= \int_0^{\infty} min\{\frac{1 – F(x)}{1 – F(t-s)}, 1\}dF_{S_{N(t)}}(s) \\ &\geq \int_0^{\infty} 1 – F(x)dF_{S_{N(t)}}(s) \\ &= \bar{F}(x) \end{align}
When $$F(x) = 1 - e^{-\lambda x}$$, use $$dF_{S_{N(t)}}(s) = e^{-\lambda(t-s)}dm(s)$$ on $$(0, t]$$ together with the atom $$P\{S_{N(t)} = 0\} = e^{-\lambda t}$$; for $$x \leq t$$,
\begin{align} P\{X_{N(t)+1} \geq x\} &= \int_0^{t-x} dF_{S_{N(t)}}(s) + \int_{t-x}^t e^{-\lambda x} dm(s) \\ &= e^{-\lambda x} + \lambda x e^{-\lambda x} \\ &= (1 + \lambda x)e^{-\lambda x} \end{align}
(and $$(1 + \lambda t)e^{-\lambda x}$$ for $$x > t$$).
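A Monte Carlo check of the exponential case, with assumed illustrative values $$\lambda = 1, t = 10, x = 2$$:

```python
# Simulate a Poisson process, find the interarrival interval containing t,
# and compare the tail of its length with (1 + lam*x) e^{-lam*x}.
import math
import random

random.seed(7)
lam, t, x = 1.0, 10.0, 2.0
trials = 200_000

hits = 0
for _ in range(trials):
    s = 0.0
    while True:
        gap = random.expovariate(lam)
        if s + gap > t:               # this interval contains t
            hits += gap >= x
            break
        s += gap

print(hits / trials, (1 + lam * x) * math.exp(-lam * x))
```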

3.4 Prove the renewal equation
$$m(t) = F(t) + \int_0^t m(t-x)dF(x)$$

\begin{align} m(t) &= E[N(t)] = \int_0^{\infty} E[N(t)|X_1 = x]dF(x) \\ &= \int_0^t E[1 + N(t-x)]dF(x) \\ &= F(t) + \int_0^t m(t-x)dF(x) \end{align}
since $$E[N(t)|X_1 = x] = 0$$ for $$x > t$$.

3.5 Prove that the renewal function $$m(t), 0 \leq t < \infty$$ uniquely determines the interarrival distribution $$F$$.

The Laplace transform of $$F_n$$, the distribution of $$S_n$$, is
$$\phi_{F_n}(\lambda) = \int_0^{\infty} e^{-\lambda t}dF_n(t) = E[e^{-\lambda S_n}] = [\phi_F(\lambda)]^n$$
The Laplace transform of $$m(t)$$ is
\begin{align} U(\lambda) &= \int_0^{\infty} e^{-\lambda t}dm(t) \\ &= \sum_{n=1}^{\infty} \int_0^{\infty} e^{-\lambda t}dF_n(t) \\ &= \sum_{n=1}^{\infty}[\phi_F(\lambda)]^n = \frac{\phi_F(\lambda)}{1 - \phi_F(\lambda)} \\ \phi_F(\lambda) &= \frac{U(\lambda)}{1 + U(\lambda)} \end{align}
The distribution $$F$$ is uniquely determined by its Laplace transform $$\phi_F$$, which is uniquely determined by the renewal function.
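A minimal illustration of the identity $$\phi_F = U/(1+U)$$ in the exponential case, where both transforms are known in closed form:

```python
# For F exponential(mu): phi_F(s) = mu/(mu + s), m(t) = mu t, and the
# transform of m is U(s) = mu/s; the identity then recovers phi_F from m.
mu = 2.5
for s in (0.3, 1.0, 4.2):
    phi = mu / (mu + s)
    U = mu / s
    assert abs(U / (1 + U) - phi) < 1e-12
print("phi_F = U/(1 + U) verified for the exponential case")
```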

3.6 Let $$\{N(t), t \geq 0\}$$ be a renewal process and suppose that for all $$n$$ and $$t$$, conditional on the event that $$N(t) = n$$, the event times $$S_1, \dots, S_n$$ are distributed as the order statistics of a set of independent uniform $$(0, t)$$ random variables. Show that $$\{N(t), t \geq 0\}$$ is a Poisson process.
(Hint: Consider $$E[N(s)|N(t)]$$ and then use the result of Problem 3.5)

\begin{align} E[N(s) | N(t) = n] &= E[\sum_{i=1}^n I(U_{(i)} \leq s)] \\ &= E[\sum_{i=1}^n I(U_{i} \leq s)] \\ &= \sum_{i=1}^n P\{U_i \leq s\} = \frac{ns}{t}\\ m(s) &= E[E[N(s)|N(t)]] = \frac{s}{t}m(t)\\ \end{align}Solving the equation (for $$s \leq t$$), we get $$m(s) = \lambda s$$ for some constant $$\lambda > 0$$, which by Problem 3.5 uniquely determines the interarrival distribution: $$F(x) = 1 - e^{-\lambda x}$$. Hence $$\{N(t), t \geq 0\}$$ is a Poisson process.

3.8 The random variables $$X_1, \dots, X_n$$ are said to be exchangeable if $$X_{i_1}, \dots, X_{i_n}$$ has the same joint distribution as $$X_1, \dots, X_n$$ whenever $$i_1, \dots, i_n$$ is a permutation of $$1, \dots, n$$. That is, they are exchangeable if the joint distribution function $$P\{X_1 \leq x_1, \dots, X_n \leq x_n \}$$ is a symmetric function of $$x_1, \dots, x_n$$. Let $$X_1, X_2, \dots$$ denote the interarrival times of a renewal process.
(a) Argue that conditional on $$N(t) = n, X_1, \dots, X_n$$ are exchangeable. Would $$X_1, \dots, X_n, X_{n+1}$$ be exchangeable (conditional on $$N(t) = n$$) ?
(b) Use (a) to prove that for $$n > 0$$
$$E[\frac{X_1 + \dots + X_{N(t)}}{N(t)}|N(t) = n] = E[X_1|N(t) = n].$$
(c) Prove that
$$E[\frac{X_1 + \dots + X_{N(t)}}{N(t)}|N(t) > 0] = E[X_1|X_1 < t] .$$

(a) \begin{align} &P\{X_1 \leq x_1, \dots, X_n \leq x_n, N(t) = n\} \\ =& \int_{y_1 \leq x_1}\dots \int_{y_n \leq x_n} P\{X_{n+1} > t – \sum_{i=1}^n y_i\}dF(y_1)\dots dF(y_n) \\ =& \int_0^1 \dots \int_0^1 I\{y_1 \leq x_1, \dots, y_n \leq x_n\} \bar{F}(t – \sum_{i=1}^n y_i)dF(y_1)\dots dF(y_n) \\ \end{align}By Fubini’s theorem, the order change of integration doesn’t change the result.
Since the conditional distribution of $$X_{n+1}$$, the interval containing $$t$$, is not identical to that of the others, $$X_1, \dots, X_n, X_{n+1}$$ are not exchangeable.
(b) Since they’re exchangeable,
\begin{align} &E[\frac{X_1 + \dots + X_{N(t)}}{N(t)}|N(t) = n]\\ =&E[\frac{X_1 + \dots + X_n}{n}|N(t) = n] \\ =&\frac{1}{n}\sum_{i=1}^{n} E[X_i|N(t) = n] \\ =& E[X_1|N(t) = n] \end{align}

(c) \begin{align} &E[\frac{X_1 + \dots + X_{N(t)}}{N(t)}|N(t) > 0] \\ =& \sum_{n=1}^{\infty} E[\frac{X_1 + \dots + X_{N(t)}}{N(t)}|N(t) = n]P\{N(t) = n | N(t) > 0\}\\ =& \sum_{n=1}^{\infty} E[X_1|N(t) = n]P\{N(t) = n | N(t) > 0\}\\ =& E[X_1|N(t) > 0] = E[X_1|X_1 < t] \end{align}

3.9 Consider a single-server bank in which potential customers arrive at a Poisson rate $$\lambda$$. However, an arrival only enters the bank if the server is free when he or she arrives. Let $$G$$ denote the service distribution.
(a) At what rate do customers enter the bank?
(b) What fraction of potential customers enter the bank?
(c) What fraction of time is the server busy?

(a) This is a renewal process whose renewals occur at customers’ entrances. Let $$\mu_G$$ denote the mean service time; the mean interarrival time of the renewal process is $$\mu_G$$ plus the mean time until the next arrival after service completion, which is $$1/\lambda$$ by the memoryless property. Thus the rate is $$1/(\mu_G + 1/\lambda) = \lambda/(1 + \lambda\mu_G)$$
(b) Each cycle begins when a customer enters the bank. During that customer’s service, $$\lambda\mu_G$$ potential customers arrive on average, so each cycle contains on average $$1 + \lambda\mu_G$$ potential customers, of whom exactly one enters. The fraction is $$1/(1 + \lambda\mu_G)$$
(c) $$\mu_G/(1/\lambda + \mu_G) = \lambda\mu_G/(1 + \lambda\mu_G)$$

3.10 Let $$X_1, X_2, \dots$$ be independent and identically distributed with $$E[X_i] < \infty$$. Also let $$N_1, N_2, \dots$$ be independent and identically distributed stopping times for the sequence $$X_1, X_2, \dots$$ with $$E[N_i] < \infty$$. Observe the $$X_i$$ sequentially, stopping at $$N_1$$. Now start sampling the remaining $$X_i$$–acting as if the sample was just beginning with $$X_{N_i + 1}$$–stopping after an additional $$N_2$$(Thus, for instance, $$X_1 + \dots + X_{N_1}$$ has the same distribution as $$X_{N_1+1} + \dots + X_{N_1 + N_2}$$). Now start sampling the remaining $$X_i$$–again acting as if the sample was just beginning–and stop after an additional $$N_3$$, and so on.
(a) Let
$$S_1 = \sum_{i=1}^{N_1}X_i, \quad S_2 = \sum_{i=N_1 + 1}^{N_1 + N_2}X_i, \quad\dots,\quad S_m = \sum_{i=N_1+\dots+N_{m-1}+1}^{N_1 + \dots + N_m}X_i$$
Use the strong law of large numbers to compute
$$\lim_{m \to \infty} (\frac{S_1 + \dots + S_m}{N_1 + \dots + N_m}).$$
(b) Writing
$$\frac{S_1 + \dots +S_m }{N_1 + \dots + N_m } = \frac{S_1 + \dots +S_m}{m}\frac{m}{N_1 + \dots + N_m},$$
derive another expression for the limit in part (a).
(c) Equate the two expression to obtain Wald’s equation.

(a) $$\lim_{m \to \infty} (\frac{S_1 + \dots + S_m}{N_1 + \dots + N_m}) = E[X]$$
(b) $$\lim_{m \to \infty} (\frac{S_1 + \dots + S_m}{N_1 + \dots + N_m}) = E[S]/E[N]$$
(c) $$E[S] = E[N]E[X]$$

3.11 Consider a miner trapped in a room that contains three doors. Door 1 leads her to freedom after two days’ travel; door 2 returns her to her room after four days’ journey; and door 3 returns her to her room after eight days’ journey. Suppose at all times she is equally likely to choose any of the three doors, and let $$T$$ denote the time it takes the miner to become free.
(a) Define a sequence of independent and identically distributed random variables $$X_1, X_2, \dots$$ and a stopping time $$N$$ such that
$$T = \sum_{i=1}^N X_i.$$
Note: You may have to imagine that the miner continues to randomly choose doors even after she reaches safety.
(b) Use Wald’s equation to find $$E[T]$$.
(c) Compute $$E[\sum_{i=1}^N X_i|N=n]$$ and note that it is not equal to $$E[\sum_{i=1}^n X_i]$$.
(d) Use part (c) for a second derivation of $$E[T]$$.

(a) $$P\{X=2\} = P\{X=4\} = P\{X=8\} = 1/3$$, and $$N = \min\{n : X_n = 2\}$$.
(b) Since $$N \sim Geometric(1/3)$$, then $$E[N] = 3$$. Hence $$E[T] = E[N]E[X] = 14$$.
(c) \begin{align} E[\sum_{i=1}^N X_i|N=n ] &= E[\sum_{i=1}^N X_i|X_1 \neq 2, \dots, X_{n-1}\neq 2, X_n = 2] \\&=(n-1)E[X_i|X_i\neq 2] + 2 = 6n – 4 \\ E[\sum_{i=1}^N X_i] &= nE[X_i] = 14n/3 \end{align}
(d) $$E[T] = E[E[\sum_{i=1}^N X_i|N=n]] = 6E[N] – 4 = 14$$
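A Monte Carlo check that $$E[T] = 14$$, simulating the door choices directly:

```python
# Each trial repeats random door choices until door 1 (two days) is picked,
# accumulating the total travel time.
import random

random.seed(1)

def escape_time():
    t = 0
    while True:
        d = random.choice([2, 4, 8])   # door travel times, each prob 1/3
        t += d
        if d == 2:                     # door 1: she is free
            return t

trials = 200_000
est = sum(escape_time() for _ in range(trials)) / trials
print(est)                             # close to 14
```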

3.12 Show how Blackwell’s theorem follows from the key renewal theorem.

Let $$h(x) = 1_{\{x \in [0, a]\}}$$; then $$h$$ is directly Riemann integrable, and since $$\int_0^t h(t-x)dm(x) = m(t) - m(t-a)$$ for $$t > a$$, the key renewal theorem gives
$$\lim_{t \to \infty} [m(t) - m(t-a)] = \frac{1}{\mu}\int_0^{\infty}h(x)dx = \frac{a}{\mu}$$

3.13 A process is in one of $$n$$ states, $$1, 2, \dots, n$$. Initially it is in state 1, where it remains for an amount of time having distribution $$F_1$$. After leaving state 1 it goes to state 2, where it remains for a time having distribution $$F_2$$. When it leaves 2 it goes to state 3, and so on. From state $$n$$ it returns to 1 and starts over. Find
$$\lim_{t \to \infty} P\{\text{process is in state i at time t}\}.$$
Assume that $$H$$, the distribution of time between entrances to state 1, is nonlattice and has finite mean.

Let $$\mu_i$$ denote the mean of $$F_i, i = 1, 2, \dots, n$$. Successive entrances to state 1 form a renewal process with interarrival distribution $$H$$, so the mean cycle length is $$\mu_H = \sum_{j=1}^n \mu_j$$, and during each cycle the process spends a mean time $$\mu_i$$ in state $$i$$. By the theory of regenerative processes,
$$\lim_{t \to \infty} P\{\text{process is in state i at time t}\} = \frac{\mu_i}{\sum_{j=1}^n \mu_j}$$

3.14 Let $$A(t)$$ and $$Y(t)$$ denote the age and excess at $$t$$ of a renewal process. Fill in the missing terms:
(a) $$A(t) > x \leftrightarrow 0$$ events in the interval____?
(b) $$Y(t) > x \leftrightarrow 0$$ events in the interval____?
(c) $$P\{Y(t) > x\} = P\{A(\ ) > \ \ \}$$
(d) Compute the joint distribution of $$A(t)$$ and $$Y(t)$$ for a Poisson process.

(a) $$(t-x, t)$$
(b) $$(t, t+x)$$
(c) $$P\{Y(t) > x\} = P\{A(t+x) > x \}$$
(d) $$P\{A(t) > x, Y(t) > y\} = e^{-\lambda(x+y)}$$

3.15 Let $$A(t)$$ and $$Y(t)$$ denote respectively the age and excess at $$t$$. Find:
(a) $$P\{Y(t) > x|A(t) = s\}$$.
(b) $$P\{Y(t) > x|A(t + x/2) = s\}$$.
(c) $$P\{Y(t) > x|A(t + x) > s\}$$ for a Poisson process.
(d) $$P\{Y(t) > x, A(t) > y\}$$.
(e) If $$\mu < \infty$$, show that, with probability 1, $$A(t)/t \to 0$$ as $$t \to \infty$$.

(a) $$\bar{F}(x+s)/\bar{F}(s)$$ (Formal solution should be conditional on $$S_{N(t)}$$)
(b) If $$s < x/2$$, a renewal occurred in $$(t, t+x/2]$$ and the probability is 0. For $$s \geq x/2$$, $$P\{Y(t) > x|A(t + x/2) = s\} =\bar{F}(x/2+s)/\bar{F}(s)$$
(c) $$P\{Y(t) > x|A(t + x) > s\} = e^{-\lambda s}$$
(d) $$e^{-\lambda(x+y)}$$
(e) \begin{align} \lim_{t \to \infty} \frac{A(t)}{t} &= \lim_{t \to \infty} \frac{t – S_{N(t)}}{t} \\ &= 1 – \lim_{t \to \infty} \frac{S_{N(t)}}{N(t)}\frac{N(t)}{t} \\ &= 1 – \frac{\mu}{\mu} = 0 \end{align}

3.16 Consider a renewal process whose interarrival distribution is the gamma distribution with parameters $$(n, \lambda)$$. Use Proposition 3.4.6 to show that
$$\lim_{t \to \infty} E[Y(t)] = \frac{n+1}{2\lambda}.$$
Now explain how this could have been obtained without any computations.

Since $$X \sim Gamma(n, \lambda)$$, then $$E[X] = n/\lambda, E[X^2] = n(n+1)/\lambda^2$$, then
$$\lim_{t \to \infty} E[Y(t)] = \frac{E[X^2]}{2E[X]} = \frac{n+1}{2\lambda}.$$
A gamma $$(n, \lambda)$$ interarrival time is the sum of $$n$$ independent exponential($$\lambda$$) phases, so the renewal epochs are every $$n$$th event of a rate-$$\lambda$$ Poisson process. For large $$t$$, the phase in progress at $$t$$ is equally likely to be any of the $$n$$ phases, so the excess consists on average of $$(n-1)/2$$ complete phases plus the memoryless residual of the current phase; the age $$A(t)$$ has the same limit by symmetry. Hence,
$$\lim_{t \to \infty} E[Y(t)] = \frac{n-1}{2\lambda} + \frac{1}{\lambda} = \frac{n+1}{2\lambda}$$
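The limit can be checked numerically; below is a Monte Carlo sketch (the parameter values $$n=3, \lambda=2$$ and $$t=50$$ are mine, chosen so that $$t$$ is effectively "large"):

```python
import random

random.seed(2)

# Renewal process with Gamma(n, lam) interarrivals; estimate E[Y(t)].
n, lam, t = 3, 2.0, 50.0

def excess_at(t):
    s = 0.0
    while True:
        s += random.gammavariate(n, 1.0 / lam)  # one interarrival time
        if s > t:
            return s - t  # excess (residual) life at t

trials = 100_000
avg_excess = sum(excess_at(t) for _ in range(trials)) / trials
print(avg_excess)  # close to (n + 1) / (2 * lam) = 1.0
```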

3.18 In Problem 3.9 suppose that potential customers arrive in accordance with a renewal process having interarrival distribution $$F$$. Would the number of events by time $$t$$ constitute a (possibly delayed) renewal process if an event corresponds to a customer:
(a) entering the bank?
(b) leaving the bank?
What if $$F$$ were exponential?

Let $$X_i$$ denote the length of the i-th service and let $$Y_i$$ denote the time from the end of i-th service until the start of the $$(i + 1)$$th service. Let $$Y_0$$ denote the time when first arrival enters the bank (and gets service). Note that $$X_i$$ and $$Y_i$$ may be dependent when the arrival is not a Poisson process.
(a) In this case, each cycle consists of $$Z_i = X_i + Y_i, i = 1, 2, \dots$$ and $$Z_0 = Y_0$$. Since $$X_i, Y_i$$ are independent of $$X_j, Y_j, j = 1, \dots, i − 1, \{Z_i\}_{i\in N}$$ are i.i.d copies. We thus have a delayed renewal process.
(b) In this case, $$Z_i = Y_{i−1}+X_i$$. When $$X_i, Y_i$$ are dependent, $$\{Z_i\}_{i \in N}$$ are not i.i.d. copies. We do not have a (delayed) renewal process.
If $$F$$ is exponential, then (a) is still a delayed renewal process, (b) is a renewal process.

3.19 Prove Equation (3.5.3).

\begin{align} &P\{S_{N(t)} \leq s\} \\ =& \sum_{n=0}^{\infty} P\{S_n \leq s, S_{n+1} > t\} \\ =& \bar{G}(t) + \sum_{n=1}^{\infty} \int_0^{\infty} P\{S_n \leq s, S_{n+1} > t | S_n = y\}dG*F_{n-1}(y) \\ =& \bar{G}(t) + \sum_{n=1}^{\infty} \int_0^s \bar{F}(t-y)dG*F_{n-1}(y) \\ =& \bar{G}(t) + \int_0^s \bar{F}(t-y)d(\sum_{n=1}^{\infty} G*F_{n-1}(y)) \\ =& \bar{G}(t) + \int_0^s \bar{F}(t-y)dm_D(y) \end{align}

3.20 Consider successive flips of a fair coin.
(a) Compute the mean number of flips until the pattern HHTHHTT appears.
(b) Which pattern requires a larger expected time to occur: HHTT or HTHT?

(a) Since HHTHHTT has no proper overlap with itself, $$E = 2^7 = 128$$.
(b) HTHT, since it overlaps with itself: $$E[N_{HTHT}] = 2^4 + 2^2 = 20 > 2^4 = 16 = E[N_{HHTT}]$$.
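These expected waiting times are easy to check by simulation; the helper below is a sketch of mine, not from the text:

```python
import random

random.seed(3)

# Estimate the expected number of fair-coin flips until a pattern appears:
# E[HHTT] = 2^4 = 16, while E[HTHT] = 2^4 + 2^2 = 20 due to self-overlap.
def flips_until(pattern, trials=100_000):
    total = 0
    for _ in range(trials):
        window = ""
        n = 0
        while window != pattern:
            # keep only the last len(pattern) flips
            window = (window + random.choice("HT"))[-len(pattern):]
            n += 1
        total += n
    return total / trials

a = flips_until("HHTT")
b = flips_until("HTHT")
print(a, b)  # close to 16 and 20 respectively
```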

3.21 On each bet a gambler, independently of the past, either wins or loses 1 unit with respective probabilities $$p$$ and $$1-p$$. Suppose the gambler’s strategy is to quit playing the first time she wins $$k$$ consecutive bets. At the moment she quits:
(a) find her expected winnings.
(b) find the expected number of bets she has won.

Let $$X_i$$ denote the winnings on the $$i$$th bet, $$Y_i$$ the indicator of winning the $$i$$th bet, and $$N$$ the number of bets until she quits. Then $$N$$ is a stopping time for the sequences $$X_i$$ and $$Y_i$$, and $$E[N] = \sum_{i=1}^kp^{-i}$$.
(a) $$E[N]E[X] = (2p-1)\sum_{i=1}^kp^{-i}$$
(b) $$E[N]E[Y] = \sum_{i=0}^{k-1}p^{-i}$$

3.22 Consider successive flips of a coin having probability $$p$$ of landing heads. Find the expected number of flips until the following sequences appear.
(a) A = HHTTHH.
(b) B = HTHTT.
Suppose now that $$p = 1/2$$.
(c) Find $$P\{A \text{ occurs before } B\}$$.
(d) Find the expected number of flips until either A or B occurs.

(a) $$p^{-1} + p^{-2} + p^{-4}(1-p)^{-2}$$
(b) $$p^{-2}(1-p)^{-3}$$
(c)(d) \begin{align} E[N_{B|A}] &= E[N_{HTHTT}] - E[N_{H}] = p^{-2}q^{-3} - p^{-1} \\ E[N_{A|B}] &= E[N_{A}]= p^{-1} + p^{-2} + p^{-4}q^{-2} \\ E[N_{A}] &= E[M] + E[N_A - M] \\ &= E[M] + E[N_A - M|\text{B before A}](1 - P_A) \\ &= E[M] + E[N_{A|B}](1-P_A). \\ E[N_B] &= E[M] + E[N_{B|A}]P_A\\ \end{align}where $$P_A = P\{\text{A before B}\}, M=\min(N_A, N_B), p=q=1/2$$. Solving the equations above, we get
$$P_A = \frac{E[N_B] + E[N_{A|B}] – E[N_A]}{E[N_{A|B}] + E[N_{B|A}]} = \frac{8}{25} \\ E[M] = E[N_B] – E[N_{B|A}]P_A = \frac{112}{5}\\$$
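A Monte Carlo sketch of parts (c) and (d) (the variable names and trial count are mine):

```python
import random

random.seed(4)

# Race two patterns on a fair coin: P{A = HHTTHH before B = HTHTT} = 8/25
# and E[min(N_A, N_B)] = 112/5 = 22.4.
A, B = "HHTTHH", "HTHTT"
trials = 200_000
wins_A = 0
total_flips = 0
for _ in range(trials):
    window = ""
    n = 0
    while True:
        window = (window + random.choice("HT"))[-6:]  # last 6 flips suffice
        n += 1
        if window.endswith(A):
            wins_A += 1
            break
        if window.endswith(B):
            break
    total_flips += n

p_A = wins_A / trials
mean_min = total_flips / trials
print(p_A, mean_min)  # close to 0.32 and 22.4
```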

3.23 A coin having probability $$p$$ of landing heads is flipped $$k$$ times. Additional flips of the coin are then made until the pattern of the first $$k$$ is repeated (possibly by using some of the first $$k$$ flips). Show that the expected number of additional flips after the initial $$k$$ is $$2^k$$.

Let $$m$$ denote the number of heads in the first $$k$$ flips. Then
\begin{align} E[T] &= \sum_{i=0}^k E[T|m=i]P\{m=i\} \\ &= \sum_{i=0}^k p^{-i}(1-p)^{-k+i}{k \choose i} p^i(1-p)^{k-i} \\ &= 2^k \end{align}
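Notably the answer $$2^k$$ does not depend on $$p$$, which can be checked by simulation (the setup and parameter values below are mine):

```python
import random

random.seed(5)

# Flip k times, then keep flipping until the last k flips reproduce the
# initial k-pattern; the expected number of additional flips is 2^k.
def additional_flips(k, p, trials=100_000):
    total = 0
    for _ in range(trials):
        flip = lambda: "H" if random.random() < p else "T"
        first = "".join(flip() for _ in range(k))
        window, n = first, 0
        while True:
            window = window[1:] + flip()  # slide the k-flip window
            n += 1
            if window == first:
                break
        total += n
    return total / trials

avg = additional_flips(4, 0.3)
print(avg)  # close to 2^4 = 16 even though the coin is biased
```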

3.25 Consider a delayed renewal process $$\{N_D(t), t \geq 0\}$$ whose first interarrival has distribution $$G$$ and the others have distribution $$F$$. Let $$m_D(t) = E[N_D(t)]$$.
(a) Prove that
$$m_D(t) = G(t) + \int_0^t m(t-x)dG(x),$$
where $$m(t) = \sum_{n=1}^{\infty}F_n(t)$$
(b) Let $$A_D(t)$$ denote the age at time $$t$$. Show that if $$F$$ is nonlattice with $$\int x^2 dF(x) < \infty$$ and $$t\bar{G}(t) \to 0$$ as $$t \to \infty$$, then
$$E[A_D(t)] \to \frac{\int_0^{\infty} x^2dF(x)}{2\int_0^{\infty}xdF(x)}$$
(c) Show that if $$G$$ has a finite mean, then $$t\bar{G}(t) \to 0$$ as $$t \to \infty$$.

(a) \begin{align} m_D(t) &= \int_0^t E[N_D(t) | X_1 = x]dG(x)\\ &= \int_0^t [1 + m(t - x)]dG(x) \\ &= G(t) + \int_0^t m(t - x)dG(x) \end{align}
(b) \begin{align} &E[A_D(t)]\\ =& E[A_D(t)|S_{N(t)} = 0]\bar{G}(t) + \int_0^t E[A_D(t) | S_{N(t)} = y]\bar{F}(t-y)dm_D(y)\\ =& E[t|X>t]\bar{G}(t) + \int_0^t E[t-y|X>t-y]\bar{F}(t-y)dm_D(y) \\ =& \int_0^{\infty}E[t|X>t]\bar{F}(t)dt/\mu \quad (\text{from Proposition 3.5.1(v)}) \\ =& \frac{\int_0^{\infty} x^2dF(x)}{2\int_0^{\infty}xdF(x)} \end{align}
(c) Since $$G$$ has finite mean $$\int_0^{\infty} x dG(x) < \infty$$, we have
$$t\bar{G}(t) = t\int_t^{\infty} dG(x) \leq \int_t^{\infty} x dG(x) \to 0 \quad \text{as } t \to \infty.$$

3.26 Prove Blackwell’s theorem for renewal reward processes. That is, assuming that the cycle distribution is not lattice, show that, as $$t \to \infty$$,
$$E[\text{reward in } (t, t+a)] \to a\frac{E[\text{reward in cycle}]}{E[\text{time of cycle}]}$$
Assume that any relevant function is directly Riemann integrable.

By the key renewal theorem (as in Proposition 3.6.1(ii)), $$E[R(t)] - t\frac{E[R]}{E[X]}$$ converges to a constant as $$t \to \infty$$; hence
\begin{align} &E[\text{reward in } (t, t+a)]\\ =& E[R(t+a)] - E[R(t)] \\ \to & (t+a)\frac{E[R]}{E[X]} - t\frac{E[R]}{E[X]} \\ =& a\frac{E[\text{reward in cycle}]}{E[\text{time of cycle}]} \end{align}

3.28 In Example 3.6(C) suppose that the renewal process of arrivals is a Poisson process with mean $$\mu$$. Let $$N^*$$ denote that value of $$N$$ that minimizes the long-run average cost if a train leaves whenever there are $$N$$ travelers waiting. Another type of policy is to have a train depart every $$T$$ time units. Compute the long-run average cost under this policy and let $$T^*$$ denote the value of $$T$$ that minimizes it. Show that the policy that departs whenever $$N^*$$ travelers are waiting leads to a smaller average cost than the one that departs every $$T^*$$ time units.

From 3.6(c), we get the average cost for policy 1 is
$$E_1 = \frac{c(N-1)}{2} + \frac{K\mu}{N}$$
Setting the derivative with respect to $$N$$ to zero, we get the minimum value $$E_1^* = \sqrt{2Kc\mu} - c/2$$, attained at $$N = N^* = \sqrt{2K\mu/c}$$.
For policy 2,
$$E_2 = \frac{cT\mu}{2} + \frac{K}{T}$$
Setting the derivative with respect to $$T$$ to zero, we get the minimum value $$E_2^* = \sqrt{2Kc\mu}$$, attained at $$T = T^* = \sqrt{2K/(c\mu)}$$.
Since $$c > 0$$, we get the result.
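A numeric sketch comparing the two policies (the parameter values $$c=2, K=9, \mu=4$$ are mine, chosen so the optima land on the search grids):

```python
import math

# Grid-search both average-cost functions and compare with the closed forms
# E1* = sqrt(2Kc*mu) - c/2 and E2* = sqrt(2Kc*mu): the N-policy wins by c/2.
c, K, mu = 2.0, 9.0, 4.0

def cost_N(N):
    return c * (N - 1) / 2 + K * mu / N   # dispatch when N travelers wait

def cost_T(T):
    return c * T * mu / 2 + K / T         # dispatch every T time units

best_N = min(cost_N(N) for N in range(1, 200))
best_T = min(cost_T(T / 100) for T in range(1, 2000))

print(best_N, math.sqrt(2 * K * c * mu) - c / 2)  # 11.0 and 11.0
print(best_T, math.sqrt(2 * K * c * mu))          # 12.0 and 12.0
```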

3.29 The life of a car is a random variable with distribution $$F$$. An individual has a policy of trading in his car either when it fails or reaches the age of $$A$$. Let $$R(A)$$ denote the resale value of an A-year-old car. There is no resale value of a failed car. Let $$C_1$$ denote the cost of a new car and suppose that an additional cost $$C_2$$ is incurred whenever the car fails.
(a) Say that a cycle begins each time a new car is purchased. Compute the long-run average cost per unit time.
(b) Say that a cycle begins each time a car in use fails. Compute the long-run average cost per unit time.
Note. In both (a) and (b) you are expected to compute the ratio of the expected cost incurred in a cycle to the expected time of a cycle. The answer should, of course, be the same in both parts.

(a) $$E[\text{time per cycle}] = \int_0^A xdF(x) + A(1 - F(A)) \\ E[\text{cost per cycle}] = C_1 - \bar{F}(A)R(A) + F(A)C_2\\ \lim_{t \to \infty} \frac{E[\text{total cost}]}{t} = \frac{C_1 - \bar{F}(A)R(A) + F(A)C_2 }{\int_0^A xdF(x) + A(1 - F(A)) }$$
(b) The probability of using a car until it fails is $$F(A)$$, so the number of cars in a cycle, $$N$$, is a geometric variable with parameter $$F(A)$$ and $$E[N] = 1/F(A)$$. Then
\begin{align} E[\text{time per cycle}] &= E[(N-1)A] + \int_0^A xd\frac{F(x)}{F(A)}\\ &= \frac{A\bar{F}(A)}{F(A)} + \int_0^A xd\frac{F(x)}{F(A)} \\ E[\text{cost per cycle}] &= E[NC_1 - (N-1)R(A) + C_2]\\ &= \frac{C_1 - \bar{F}(A)R(A)}{F(A)} + C_2\end{align}

3.30 Suppose in Example 3.3(A) that a coin’s probability of landing heads is a beta random variable with parameters $$n$$ and $$m$$, that is, the probability density is
$$f(p) = Cp^{n-1}(1 – p)^{m-1} \quad 0 \leq p \leq 1.$$
Consider the policy that flips each newly chosen coin until $$m$$ consecutive flips land tails, then discards that coin and does the same with a new one. For this policy show that, with probability 1, the long-run proportion of flips that land heads is 1.

Similar to 3.3(A), with a cycle being the flips of one coin (until $$m$$ consecutive tails appear), keeping only the $$i = m$$ term of the sum gives
\begin{align} &E[\text{number of flips in a cycle}] \\ = &\int_0^1 \left(\sum_{i=1}^m (1-p)^{-i}\right) Cp^{n-1}(1-p)^{m-1} dp \\ \geq &C\int_0^1 (1-p)^{-m} p^{n-1}(1-p)^{m-1} dp \\ = &C\int_0^1 \frac{p^{n-1}}{1-p} dp = \infty \end{align}
On the other hand, by Wald's equation the expected number of tails in a cycle for a $$p$$-coin is $$(1-p)\sum_{i=1}^m (1-p)^{-i}$$, so
$$E[\text{number of tails in a cycle}] = C\int_0^1 p^{n-1}\sum_{i=1}^m (1-p)^{m-i} dp \leq mC\int_0^1 p^{n-1}dp < \infty$$
By the renewal reward theorem, the long-run proportion of flips that land tails is the ratio of these two expectations, which is 0; hence, with probability 1, the proportion of flips that land heads is 1.

3.31 A system consisting of four components is said to work whenever both at least one of components 1 and 2 work and at least one of components 3 and 4 work. Suppose that component $$i$$ alternates between working and being failed in accordance with a nonlattice alternating renewal process with distributions $$F_i$$ and $$G_i, i = 1, 2, 3, 4$$. If these alternating renewal processes are independent, find $$\lim_{t \to \infty} P\{\text{system is working at time } t\}$$.

Let $$\mu_{F_i}, \mu_{G_i}$$ denote the expected working time and expected repair time of the $$i$$th component. As $$t \to \infty$$, the probability that at least one of components 1 and 2 works is
$$P_{12} = 1 - \frac{\mu_{G_1}\mu_{G_2}}{(\mu_{G_1} + \mu_{F_1})(\mu_{G_2} + \mu_{F_2})}$$
Similarly, $$P_{34} = 1 - \frac{\mu_{G_3}\mu_{G_4}}{(\mu_{G_3} + \mu_{F_3})(\mu_{G_4} + \mu_{F_4})}$$
$$\lim_{t \to \infty} P\{\text{system is working at time } t\} = P_{12}P_{34}$$

3.32 Consider a single-server queueing system having Poisson arrivals at rate $$\lambda$$ and service distribution $$G$$ with mean $$\mu_G$$. Suppose that $$\lambda\mu_G < 1$$.
(a) Find $$P_0$$, the proportion of time the system is empty.
(b) Say the system is busy whenever it is nonempty (and so the server is busy). Compute the expected length of a busy period.
(c) Use part (b) and Wald’s equation to compute the expected number of customers served in a busy period.

(a) $$P_0 = 1 – \lambda\mu_G$$
(b) $$E[B] = \frac{\mu_G}{1 – \lambda\mu_G}$$
(c) $$E[C] = \frac{1}{1 – \lambda\mu_G}$$
For details, see “Introduction to Probability Models” by Sheldon M. Ross, 10th edition, Section 8.5.
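The busy-period mean in (b) can be checked by simulation. The sketch below (names and parameter values are mine) takes the service distribution to be exponential for simplicity, i.e. an M/M/1 queue, which is one instance of the M/G/1 formula $$E[B] = \mu_G/(1-\lambda\mu_G)$$:

```python
import random

random.seed(7)

lam, mu_G = 1.0, 0.5          # arrival rate and mean service time

def busy_period():
    # A busy period starts when a customer arrives to an empty system;
    # track the remaining workload until the server next goes idle.
    t, work = 0.0, random.expovariate(1.0 / mu_G)
    while True:
        next_arrival = random.expovariate(lam)
        if next_arrival >= work:      # system empties before next arrival
            return t + work
        t += next_arrival
        work = work - next_arrival + random.expovariate(1.0 / mu_G)

trials = 200_000
avg = sum(busy_period() for _ in range(trials)) / trials
print(avg)  # close to 0.5 / (1 - 1.0 * 0.5) = 1.0
```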

## Solutions to Stochastic Processes Ch.2


2.1 Show that Definition 2.1.1 of a Poisson process implies Definition 2.1.2.

\begin{align} &\ \forall t_1 < t_2, s > 0 \\ &P\{N(t_2+s) – N(t_1+s) = n\}\\ = &P\{N(t_2) – N(t_1) = n\} = e^{-\lambda(t_2-t_1)}\frac{\lambda^n(t_2-t_1)^n}{n!} \end{align}
Thus, the process has stationary increments.
$$P\{N(h) = 1\} = \lambda h + \lambda h(e^{-\lambda h} - 1)\\ \lim_{h \to 0} \frac{\lambda h(e^{-\lambda h} - 1) }{h} = 0$$
Thus, $$P\{N(h) = 1\} = \lambda h + o(h)$$.
$$P\{N(h) \geq 2\} = 1 - P\{N(h) = 1\} - P\{N(h) = 0\} = o(h)$$
Thus, $$P\{N(h) \geq 2\} = o(h)$$.

2.2 For another approach to proving that Definition 2.1.2 implies Definition 2.1.1:
(a) Prove, using Definition 2.1.2, that
$$P_0(t + s) = P_0(t)P_0(s)$$
(b) Use (a) to infer that the interarrival times $$X_1, X_2, \dots$$ are independent exponential random variables with rate $$\lambda$$.
(c) Use (b) to show that $$N(t)$$ is Poisson distributed with mean $$\lambda t$$.

(a)$$P_0(t+s) = P\{N(t) = 0, N(s)=0\} = P_0(t)P_0(s)$$
(b)\begin{align} 1 - F_X(t) &= P_0(t) = \lim_{\Delta t \to 0} P_0(\Delta t)^{t/\Delta t} \\ &= \lim_{\Delta t \to 0} (1 - \lambda\Delta t + o(\Delta t))^{t/\Delta t} \\ &= e^{-\lambda t}\\ F_X(t) &= 1 - e^{-\lambda t}\\ \end{align}
(c) From Problem 1.29 we know, $$\sum_1^n X_i \sim \Gamma(n, \lambda)$$ when all $$X_i$$ are i.i.d and $$X_i \sim Exponential(\lambda), i=1, 2, \dots$$ Let $$f_n(x)$$ denote the PDF of the sum of $$n \ X_i$$. Thus,
\begin{align} P\{N(t) = n\} &= P\{\sum_1^n X_i < t, \sum_1^{n+1} X_i > t\} \\ &= \int_0^t f_n(x)e^{-\lambda(t-x)} dx \\ &= e^{-\lambda t}\frac{(\lambda t)^n}{n!} \end{align}

2.3 For a Poisson process show, for $$s < t$$, that
$$P\{N(s) = k | N(t) = n\} = {n \choose k}(\frac{s}{t})^k(1 - \frac{s}{t})^{n-k}, \quad k=0, 1, \dots, n.$$

\begin{align} P\{N(s)=k|N(t)=n\} &= \frac{ P\{N(s)=k, N(t-s)=n-k\} }{ P\{N(t)=n\} } \\ &= \frac{ P\{N(s)=k\}P\{N(t-s)=n-k\} }{ P\{N(t)=n\} } \\ &= {n \choose k}(\frac{s}{t})^k(1 - \frac{s}{t})^{n-k}, \quad k=0, 1, \dots, n. \end{align}

2.4 Let $$\{N(t), t \geq 0\}$$ be a Poisson process with rate $$\lambda$$. Calculate $$E[N(t)N(t+s)]$$.

\begin{align} E[N(t)N(t+s)] &= E[N(t)(N(t) + N(s))] \\ &= E[N^2(t)] + E[N(t)N(s)] \\ &= \sum_0^{\infty} n^2 e^{-\lambda t}\frac{(\lambda t)^n}{n!} + E[N(t)]E[N(s)] \\ &= \lambda t[\sum_0^{\infty} (n-1 + 1) e^{-\lambda t}\frac{(\lambda t)^{n-1}}{(n-1)!}] + \lambda^2ts \\ &= \lambda^2t(t+s) + \lambda t \\ \end{align}
The second mixed raw moment $$E[N(s)N(t)]$$ is called the autocorrelation function of the stochastic process. For a Poisson process with rate $$\lambda$$ it is
$$E[N(s)N(t)] = \lambda^2 st + \lambda \min\{s, t\}, \quad s,t\geq 0$$
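A Monte Carlo sketch of this moment (the rate and time points are mine):

```python
import random

random.seed(8)

# Estimate E[N(t)N(t+s)] for a rate-lam Poisson process by generating the
# exponential interarrival times directly.
lam, t, s = 2.0, 1.5, 1.0

def counts():
    clock, n_t, n_ts = 0.0, 0, 0
    while True:
        clock += random.expovariate(lam)
        if clock > t + s:
            return n_t, n_ts
        n_ts += 1          # event in [0, t+s]
        if clock <= t:
            n_t += 1       # event in [0, t]

trials = 200_000
avg = 0.0
for _ in range(trials):
    a, b = counts()
    avg += a * b
avg /= trials
print(avg)  # close to lam^2*t*(t+s) + lam*t = 4*1.5*2.5 + 3 = 18
```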

2.5 Suppose that $$\{N_1(t), t \geq 0\}$$ and $$\{N_2(t), t \geq 0\}$$ are independent Poisson process with rates $$\lambda_1$$ and $$\lambda_2$$. Show that $$\{N_1(t) + N_2(t), t \geq 0\}$$ is a Poisson process with rate $$\lambda_1 + \lambda_2$$. Also, show that the probability that the first event of the combined process comes from $$\{N_1(t), t \geq 0\}$$ is $$\lambda_1 /(\lambda_1 + \lambda_2)$$, independently of the time of the event.

Let $$N(t) = N_1(t) + N_2(t)$$, then
\begin{align} &P\{N(t) = n\}\\= &\sum_{i=0}^n P\{N_1(t) = i\}P\{N_2(t) = n- i\} \\ =&\sum_{i=0}^n e^{-\lambda_1 t}\frac{(\lambda_1 t)^i}{i!} e^{-\lambda_2 t}\frac{(\lambda_2 t)^{n-i}}{(n-i)!} \\ =&e^{-(\lambda_1 + \lambda_2)t}\frac{(\lambda_1+\lambda_2)^nt^n}{n!} \sum_{i=0}^n {n \choose i} (\frac{\lambda_1}{\lambda_1 + \lambda_2})^i(\frac{\lambda_2}{\lambda_1 + \lambda_2} )^{n-i} \\ =&e^{-(\lambda_1 + \lambda_2)t}\frac{(\lambda_1+\lambda_2)^nt^n}{n!} \end{align}
Let $$X_i$$ denote the waiting time until the first event of process $$i$$, so $$X_i \sim \text{Exponential}(\lambda_i), i=1,2$$. The probability that the first event of the combined process comes from process 1 is
$$P\{X_1 < X_2\} = \int_0^{\infty} \lambda_1e^{-\lambda_1 t} e^{-\lambda_2 t} dt = \frac{\lambda_1}{\lambda_1 + \lambda_2}$$
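This probability is easy to verify empirically (the rates chosen below are mine):

```python
import random

random.seed(9)

# The first event of the merged process comes from process 1 with
# probability lam1 / (lam1 + lam2), independently of its time.
lam1, lam2 = 1.0, 3.0
trials = 200_000
from_1 = sum(
    random.expovariate(lam1) < random.expovariate(lam2) for _ in range(trials)
)
frac = from_1 / trials
print(frac)  # close to 1 / (1 + 3) = 0.25
```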

2.10 Buses arrive at a certain stop according to a Poisson process with rate $$\lambda$$. If you take the bus from that stop then it takes a time $$R$$, measured from the time at which you enter the bus, to arrive home. If you walk from the bus stop then it takes a time $$W$$ to arrive home. Suppose that your policy when arriving at the bus stop is to wait up to a time s, and if a bus has not yet arrived by that time then you walk home.
(a) Compute the expected time from when you arrive at the bus stop until you reach home.
(b) Show that if $$W < 1/\lambda + R$$ then the expected time of part (a) is minimized by letting $$s=0$$; if $$W > 1/\lambda + R$$ then it is minimized by letting $$s=\infty$$ (that is, you continue to wait for the bus), and when $$W = 1/\lambda + R$$ all values of $$s$$ give the same expected time.
(c) Give an intuitive explanation of why we need only consider the case $$s=0$$ and $$s=\infty$$ when minimizing the expected time.

(a) \begin{align} E &= (s+W)P\{N(s) = 0\} + \int_0^s (t+R)\lambda e^{-\lambda t} dt \\ &= (W-R-\frac{1}{\lambda})e^{-\lambda s} + R + \frac{1}{\lambda} \end{align}
(b) When $$W < 1/\lambda + R$$ the expectation monotonically increases as $$s$$ increases, the expected time is minimized when $$s$$ equals to its minimum value 0.
When $$W > 1/\lambda + R$$ the expectation monotonically decreases as $$s$$ increases.
When $$W = 1/\lambda + R$$ the expectation always equals $$R + \frac{1}{\lambda}$$.
(c) Because the interarrival times of bus is exponential distributed which is memoryless. The time until the first bus come after waited $$s$$ is just the “same” as if you just came the stop. That is to say, if you decide to leave after waited a $$s$$ time, you should leave at the beginning; and if you decide not to leave at the beginning, you should continue to wait for the bus.
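The closed form in (a) can be compared against simulation; the parameter values below are mine:

```python
import math
import random

random.seed(10)

# Expected time home: (W - R - 1/lam) * exp(-lam*s) + R + 1/lam.
lam, R, W, s = 0.5, 3.0, 6.0, 2.0

def time_home():
    bus = random.expovariate(lam)          # first bus arrival time
    return bus + R if bus <= s else s + W  # ride if a bus came, else walk

trials = 200_000
avg = sum(time_home() for _ in range(trials)) / trials
formula = (W - R - 1 / lam) * math.exp(-lam * s) + R + 1 / lam
print(avg, formula)  # the two values agree closely
```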

2.11 Cars pass a certain street location according to a Poisson process with rate $$\lambda$$. A person wanting to cross the street at that location waits until she can see that no cars will come by in the next $$T$$ time units. Find the expected time that the person waits before starting to cross. (Note, for instance, that if no cars will be passing in the first $$T$$ time units then the waiting time is 0).

Condition on the first arrival time $$X$$, and the total expected waiting time
\begin{align} E[W] &= E[E[W|X]] \\ &= \int_0^T (E[W] + x)\lambda e^{-\lambda x} dx\\ &= E[W] + \frac{1}{\lambda} -e^{-\lambda T}(T + E[W] + \frac{1}{\lambda}) \\ E[W] &= \frac{e^{\lambda T} -1}{\lambda} - T \end{align}
In the second line, consider the first arrival time as a part of the second, then the conditional state is identical the initial state, due to the memoryless property.
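A direct simulation of the waiting person (the parameter values are mine) confirms the formula:

```python
import math
import random

random.seed(11)

# The person waits through every interarrival gap shorter than T and starts
# crossing at the beginning of the first gap of length at least T.
lam, T = 1.0, 1.5

def wait_time():
    w, gap = 0.0, random.expovariate(lam)
    while gap < T:          # a car comes within T: wait it out, look again
        w += gap
        gap = random.expovariate(lam)
    return w

trials = 200_000
avg = sum(wait_time() for _ in range(trials)) / trials
expected = (math.exp(lam * T) - 1) / lam - T
print(avg, expected)  # the two values agree closely
```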

2.12 Events, occurring according to a Poisson process with rate $$\lambda$$, are registered by a counter. However, each time an event is registered the counter becomes inoperative for the next $$b$$ units of time and does not register any new events that might occur during that interval. Let $$R(t)$$ denote the number of events that occur by time $$t$$ and are registered.
(a) Find the probability that the first $$k$$ events are all registered.
(b) For $$t \geq (n-1)b$$, find $$P\{R(t) \geq n\}$$.

(a) Since the interarrival times are exponential with parameter $$\lambda$$, the first $$k$$ events are all registered exactly when each of the $$k-1$$ interarrival intervals exceeds $$b$$, which has probability $$e^{-\lambda b(k-1)}$$.
(b) By the memoryless property of the exponential distribution, $$R(t) \geq n$$ exactly when $$N(t-(n-1)b) \geq n$$ for the underlying Poisson process: the time of the $$n$$th registration is the sum of $$n$$ exponential waits plus $$(n-1)b$$ units of dead time. Thus,
$$P\{R(t) \geq n\} = P\{N(t-(n-1)b) \geq n\} = e^{-\mu}\sum_{k=n}^{\infty} \frac{\mu^k}{k!}$$
where $$\mu = \lambda(t – (n-1)b)$$.

2.13 Suppose that shocks occur according to a Poisson process with rate $$\lambda$$, and suppose that each shock, independently, causes the system to fail with probability $$p$$. Let $$N$$ denote the number of shocks that it takes for the system to fail and let $$T$$ denote the time of failure. Find $$P\{N=n|T=t\}$$.

The process can be considered as two independent Poisson counter, the failure $$N_1(t)$$ with parameter $$\lambda p$$ and the non-failure $$N_2(t)$$ with $$\lambda (1-p)$$. Then,
$$P\{N=n|T=t\} = P\{N_2(t) = n-1\} = e^{-\lambda(1-p)t}\frac{(\lambda(1-p)t )^{n-1}}{(n-1)!}$$

2.16 The number of trials to be performed is a Poisson random variable with mean $$\lambda$$. Each trial has $$n$$ possible outcomes, outcome $$i$$ occurring with probability $$P_i, \sum_1^n P_i = 1$$. Let $$X_j$$ denote the number of outcomes that occur exactly $$j$$ times, $$j=0, 1, \dots$$. Compute $$E[X_j]$$ and $$Var(X_j)$$.

Let $$Y_i$$ denote the number of occurrences of the $$i$$th outcome. From Proposition 2.3.2, $$Y_i \sim \text{Poisson}(\lambda P_i)$$, and the $$Y_i$$ are independent. Thus $$X_j = \sum_{i=1}^n 1_{\{Y_i = j\}}$$ is the sum of $$n$$ independent Bernoulli indicators.
$$E[X_j] = \sum_{i=1}^n P\{Y_i = j\} = \sum_{i=1}^n e^{-\lambda P_i} \frac{(\lambda P_i)^j}{j!}$$
$$Var(X_j) = \sum_{i=1}^n P\{Y_i = j\}(1 – P\{Y_i = j\} )$$

2.17 Let $$X_1, X_2, \dots, X_n$$ be independent continuous random variables with common density function $$f$$. Let $$X_{(i)}$$ denote the ith smallest of $$X_1, \dots, X_n$$.
(a) Note that in order for $$X_{(i)}$$ to equal $$x$$, exactly $$i-1$$ of the $$X$$’s must be less than $$x$$, one must equal $$x$$, and other $$n-i$$ must all be greater than $$x$$. Using this fact argue that the density function of $$X_{(i)}$$ is given by
$$f_{X_{(i)}}(x) = \frac{n!}{(i-1)!(n-i)!}(F(x))^{i-1}(\bar{F}(x))^{n-i}f(x).$$
(b) $$X_{(i)}$$ will be less than $$x$$ if, and only if, how many of the $$X$$’s are less than $$x$$?
(c) Use (b) to obtain an expression for $$P\{X_{(i)} \leq x\}$$.
(d) Using (a) and (c) establish the identity for $$0 \leq y \leq 1$$
$$\sum_{k=i}^n {n \choose k}y^k(1-y)^{n-k} = \int_0^y \frac{n!}{(i-1)!(n-i)!}x^{i-1}(1-x)^{n-i}dx.$$
(e) Let $$S_i$$ denote the time of the ith event of the Poisson process $$\{N(t), t \geq 0\}$$. Find$$E[S_i|N(t)=n] = \left\{ \begin{array}{ll} \underline{\ \ } \quad i \leq n \\ \underline{\ \ } \quad i > n \\ \end{array} \right. \\$$

(a) \begin{align} f_{X_{(i)}}(x) &= {n \choose i} (F(x))^{i-1}(\bar{F}(x))^{n-i} {i \choose 1}f(x) \\ &= \frac{n!}{(i-1)!(n-i)!}(F(x))^{i-1}(\bar{F}(x))^{n-i}f(x)\\ \end{align}
(b) $$\geq i$$
(c) $$P\{X_{(i)} \leq x\} = \sum_{k=i}^n{n \choose k}(F(x))^k(\bar{F}(x))^{n-k}$$
(d) \begin{align} &P\{X_{(i)} \leq x\}\\ = &\sum_{k=i}^n{n \choose k}(F(x))^k(\bar{F}(x))^{n-k} \\ = & \int_0^{F(x)}\frac{n!}{(i-1)!(n-i)!}(F(t))^{i-1}(\bar{F}(t))^{n-i}dF(t) \end{align}
(e) From Theorem 2.3.1 we know the distribution of $$(S_1, \dots, S_n | N(t) = n)$$ is identical to $$U_{(1)}, \dots, U_{(n)}$$.
Thus, when $$i \leq n$$,
\begin{align} E[S_i|N(t)=n] &= E[U_{(i)}] \\ &= \int_0^t x \frac{n!}{(i-1)!(n-i)!}(\frac{x}{t})^{i-1}(1 - \frac{x}{t})^{n-i}\frac{1}{t} dx \\ &= \frac{it}{n + 1} \int_0^t \frac{(n+1)!}{i!(n-i)!}(\frac{x}{t})^i(1 - \frac{x}{t})^{n-i} d \frac{x}{t} \\ &= \frac{it}{n + 1} \sum_{k=i+1}^{n+1}{n+1 \choose k} 1^k(1 - 1)^{n+1-k}\\ &= \frac{it}{n + 1} \end{align}
When $$i > n$$, by the memoryless property the wait after time $$t$$ for the remaining events is unaffected by what happened before $$t$$. Hence $$S_i$$ is $$t$$ plus the sum of $$i - n$$ exponential variables with parameter $$\lambda$$. Thus,
$$E[S_i|N(t)=n] = \left\{ \begin{array}{ll} \frac{it}{n + 1} \quad i \leq n \\ t + \frac{i-n}{\lambda} \quad i > n \\ \end{array} \right. \\$$
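The case $$i \leq n$$ can be checked by conditioning through rejection sampling (all parameter values below are mine):

```python
import random

random.seed(12)

# Estimate E[S_i | N(t) = n] = i*t/(n+1) by keeping only Poisson paths
# with exactly n events in [0, t].
lam, t, n, i = 1.0, 5.0, 4, 2
total, kept = 0.0, 0
while kept < 50_000:
    arrivals = []
    clock = 0.0
    while True:
        clock += random.expovariate(lam)
        if clock > t:
            break
        arrivals.append(clock)
    if len(arrivals) == n:          # condition on N(t) = n
        total += arrivals[i - 1]    # S_i, the i-th arrival time
        kept += 1
avg = total / kept
print(avg)  # close to i*t/(n+1) = 2*5/5 = 2.0
```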

2.18 Let $$U_{(1)}, \dots, U_{(n)}$$ denote the order statistics of a set of $$n$$ uniform $$(0, 1)$$ random variables. Show that given $$U_{(n)} = y, U_{(1)}, \dots, U_{(n-1)}$$ are distributed as the order statistics of a set of $$n-1$$ uniform $$(0, y)$$ random variables.

\begin{align} &f(U_{(1)}, \dots, U_{(n-1)} | U_{(n)} = y ) \\ =& (n-1)!\prod_{i \neq (n)} f(y_i|y_{(n)}) \\ =&\frac{(n-1)!}{y^{n-1}} \end{align}

2.19 Busloads of customer arrive at an infinite server queue at a Poisson rate $$\lambda$$. Let $$G$$ denote the service distribution. A bus contains $$j$$ customers with probability $$\alpha_j, j=1, \dots$$. Let $$X(t)$$ denote the number of customers that have been served by time $$t$$.
(a) $$E[X(t)] = ?$$
(b) Is $$X(t)$$ Poisson distributed?

(a) Let $$N_j(t)$$ denote the number of buses carrying $$j$$ customers whose passengers have all been served by time $$t$$, and let $$X_j(t)$$ denote the number of customers served from such buses. Then the $$N_j(t)$$ are independent Poisson variables with mean $$\lambda \alpha_j \int_0^t G(y)dy$$.
$$E[X(t)] = \sum E[X_j(t)] = \sum E[E[X_j(t)|N_j(t)]] = \lambda \int_0^t G(y)dy \sum j\alpha_j$$
(b) No. For $$j=2$$, $$P\{X_2(h) = 2\} = P\{N_2(h) = 1\} \neq o(h)$$, while a Poisson process must satisfy $$P\{N(h) \geq 2\} = o(h)$$.

2.20 Suppose that each event of a Poisson process with rate $$\lambda$$ is classified as being either type $$1, 2, \dots, k$$. If the event occurs at $$s$$, then, independently of all else, it is classified as type $$i$$ with probability $$P_i(s), i = 1, \dots, k, \sum_1^k P_i(s) = 1$$. Let $$N_i(t)$$ denote the number of type $$i$$ arrivals in $$[0, t]$$. Show that the $$N_i(t), i = 1, \dots, k$$ are independent and $$N_i(t)$$ is Poisson distributed with mean $$\lambda \int_0^t P_i(s)ds$$.

Similar to the proof of Proposition 2.3.2, the probability that an event occurring in $$[0, t]$$ is a type-$$i$$ event is $$p_i = \frac{1}{t}\int_0^t P_i(s)ds$$, independently of the other events. Hence, conditioning on $$N(t)$$,
\begin{align} &P\{N_1(t) = n_1, \dots, N_k(t) = n_k\}\\ =& P\{N_1(t) = n_1, \dots, N_k(t) = n_k | N(t) = n\}P\{N(t) = n\} \\ =& \frac{n!}{n_1!\dots n_k!}p_1^{n_1}\dots p_k^{n_k}e^{-\lambda t}\frac{(\lambda t)^n}{n!}\\ =& \prod_{i=1}^k e^{-\lambda tp_i}\frac{(\lambda tp_i)^{n_i}}{n_i!} \end{align} where $$n = \sum_{i=1}^k n_i$$. The joint distribution factors, so the $$N_i(t)$$ are independent Poisson with means $$\lambda t p_i = \lambda\int_0^t P_i(s)ds$$.

2.21 Individuals enter the system in accordance with a Poisson process having rate $$\lambda$$. Each arrival independently makes its way through the states of the system. Let $$\alpha_i(s)$$ denote the probability that an individual is in state $$i$$ a time $$s$$ after it arrived. Let $$N_i(t)$$ denote the number of individuals in state $$i$$ at time $$t$$. Show that the $$N_i(t), i \geq 1$$, are independent and $$N_i(t)$$ is Poisson with mean equal to
$$\lambda E$$[amount of time an individual is in state i during its first t units in the system]

As shown in Problem 2.20, the $$N_i(t), i \geq 1$$, are independent Poisson with mean $$\lambda \int_0^t \alpha_i(t-s)ds = \lambda \int_0^t \alpha_i(u)du$$, and the latter integral is the expected amount of time an individual is in state $$i$$ during its first $$t$$ units in the system.

2.23 For the model of Example 2.3(C), find
(a) $$Var[D(t)]$$
(b) $$Cov[D(t), D(t+s)]$$

(a) Let $$X_i$$ denote the ith damage, which are i.i.d. And
$$E[X_i] = \frac{E[D]}{\alpha t}(1 - e^{-\alpha t}) \\ E[X_i^2] = \frac{E[D^2]}{2\alpha t}(1 - e^{-2\alpha t})$$ Hence,
\begin{align} E[D^2(t)] &= E[E[D^2(t)|N(t) = n]] \\ &= E[E[\sum_{i=1}^n D_i^2e^{-2\alpha(t-S_i)} + \sum_{i \neq j} D_iD_je^{-\alpha(t - S_i)}e^{-\alpha(t - S_j)} | N(t) = n]] \\ &= E[nE[X_i^2]] + E[n(n-1)E^2[X_i]] \\ &= E[N(t)]E[X_i^2] + E[N^2(t) - N(t)]E^2[X_i] \\ \end{align} Since $$E[N(t)] = \lambda t, E[N^2(t)] = \lambda^2t^2 + \lambda t$$, we have
$$Var[D(t)] = E[D^2(t)] - E^2[D(t)] = \frac{\lambda E[D^2]}{2\alpha}(1 - e^{-2\alpha t})$$
(b) Write $$D(t+s) = D(t)e^{-\alpha s} + \tilde{D}(s)$$, where $$\tilde{D}(s)$$, the discounted damage from shocks in $$(t, t+s]$$, is independent of $$D(t)$$ and distributed as $$D(s)$$. Then \begin{align} &Cov[D(t), D(t+s)]\\ = &E[D(t)D(t+s)] - E[D(t)]E[D(t+s)] \\ = &E[D^2(t)e^{-\alpha s} + \tilde{D}(s)D(t)] - E[D(t)]E[D(t)e^{-\alpha s} + \tilde{D}(s)] \\ = &e^{-\alpha s} (E[D^2(t)] - E^2[D(t)]) + E[\tilde{D}(s)]E[D(t)] - E[D(t)]E[\tilde{D}(s)] \\ = &\frac{\lambda E[D^2]}{2\alpha}(e^{-\alpha s} - e^{-\alpha (2t + s)}) \end{align}
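The variance formula in (a) can be checked with a shot-noise simulation; the damage distribution (exponential with mean 1, so $$E[D^2] = 2$$) and the parameter values are mine:

```python
import math
import random

random.seed(13)

# Simulate D(t) = sum over shocks of D_i * exp(-alpha*(t - S_i)) and
# compare mean and variance with the closed forms.
lam, alpha, t = 2.0, 0.5, 3.0

def damage_at(t):
    total, clock = 0.0, 0.0
    while True:
        clock += random.expovariate(lam)   # shock times S_i
        if clock > t:
            return total
        total += random.expovariate(1.0) * math.exp(-alpha * (t - clock))

trials = 200_000
vals = [damage_at(t) for _ in range(trials)]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
print(mean, var)  # close to 4*(1 - e^{-1.5}) and 4*(1 - e^{-3})
```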

2.25 Suppose that events occur in accordance with a Poisson process with rate $$\lambda$$, and that an event occurring at time $$s$$, independent of the past, contributes a random amount having distribution $$F_s, s \geq 0$$. Show that $$W$$, the sum of all contributions by time $$t$$, is a compound Poisson random variable. That is, show $$W$$ has the same distribution as $$\sum_{i=1}^N X_i$$, where the $$X_i$$ are independent of $$N$$, a Poisson random variable. Identify the distribution of the $$X_i$$ and the mean of $$N$$.

The distribution function of $$X_i$$ is $$F(x) = \frac{1}{t}\int_0^t F_s(x)ds$$, the mean of $$N$$ is $$\lambda t$$.

2.27 Compute the moment generating function of $$D(t)$$ in Example 2.3(C).
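A sketch of a solution, using the representation $$D(t) = \sum_{i=1}^{N(t)} D_ie^{-\alpha(t - S_i)}$$ from Example 2.3(C) and the fact that, given $$N(t) = n$$, the unordered event times are i.i.d. uniform on $$(0, t)$$ (here $$D$$ denotes a generic damage variable):
\begin{align} E[e^{uD(t)}] &= \sum_{n=0}^{\infty} E[e^{uD(t)}|N(t)=n]P\{N(t)=n\} \\ &= \sum_{n=0}^{\infty} \left(\frac{1}{t}\int_0^t E[e^{uDe^{-\alpha(t-s)}}]ds\right)^n e^{-\lambda t}\frac{(\lambda t)^n}{n!} \\ &= \exp\left\{\lambda\int_0^t \left(E[e^{uDe^{-\alpha s}}] - 1\right)ds\right\} \end{align}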

2.28 Prove Lemma 2.3.3

By symmetry, $$E[Y_i | Y_1+\dots +Y_n = y]$$ is the same for every $$i$$, so
$$E[Y_1+\dots +Y_n | Y_1+\dots +Y_n = y ] = y \\ nE[Y_1| Y_1+\dots +Y_n = y ] = y \\ E[Y_1+\dots +Y_k | Y_1+\dots +Y_n = y ] = \frac{k}{n} y \\$$

2.29 Complete the proof that for nonhomogeneous Poisson process $$N(t+s) -N(t)$$ is Poisson with mean $$m(t+s) – m(t)$$.

\begin{align} P_n(s+h) &= P\{N(t+s+h) - N(t) =n\} \\ &= P_n(s)[1 - \lambda(t+s)h + o(h)] + P_{n-1}(s)[\lambda(t+s)h + o(h)] \\ P_n^{\prime}(s) + \lambda(t+s)P_n &= \lambda(t+s)P_{n-1} \\ \frac{d}{ds}(e^{\int\lambda(t+s)ds}P_n(s)) &= e^{\int\lambda(t+s)ds}\lambda(t+s)P_{n-1}(s)\\ \end{align}
When $$n=0$$, $$P_0(s) = e^{-[m(t+s)-m(t)]}$$ solves the equation. Assuming $$P_{k-1}(s) = e^{-[m(t+s)-m(t)]}\frac{[m(t+s)-m(t)]^{k-1}}{(k-1)!}$$, integrating the last equation shows the same Poisson form holds for $$n = k$$. By induction, $$N(t+s) - N(t)$$ is Poisson with mean $$m(t+s) - m(t)$$.

2.30 Let $$T_1, T_2, \dots$$ denote the interarrival times of events of a nonhomogeneous Poisson process having intensity function $$\lambda(t)$$
(a) Are the $$T_i$$ independent?
(b) Are the $$T_i$$ identically distributed?
(c) Find the distribution of $$T_1$$.
(d) Find the distribution of $$T_2$$.

(a) \begin{align} P\{T_1 > t\} &= P\{N(t) = 0\} = e^{-m(t)} \\ P\{T_2 > t | T_1 = s\} &= P\{N(t+s) - N(s) = 0 | T_1 = s\} \\ &= e^{-[m(t+s) - m(s)]} \end{align}Thus, the $$T_i$$ are not independent.
(b) No, $$T_2$$ depends on $$T_1$$.
(c)$$F_{T_1}(t) = 1-e^{-m(t)}$$
(d) $$F_{T_2}(t) =1- \int_0^{\infty} \lambda(s)e^{-m(t+s)}ds$$

2.31 Consider a nonhomogeneous Poisson process $$\{N(t), t \geq 0\}$$, where $$\lambda(t) > 0$$ for all $$t$$. Let
$$N^{*}(t) = N(m^{-1}(t))$$
Show that $$\{N^{*}(t), t \geq 0\}$$ is a Poisson process with rate $$\lambda = 1$$.

Suppose that $$t_1, t_2, \dots$$ is a sequence of points in $$[0, \infty)$$ with $$0 \leq t_1 < t_2 < \dots$$. Since $$m^{-1}$$ is strictly increasing, we have $$0 \leq u_1 < u_2 < \dots$$, where $$u_i = m^{-1}(t_i)$$. Because the nonhomogeneous Poisson process has independent increments over the disjoint intervals $$(u_i, u_{i+1}]$$, the increments of $$N^{*}$$ over $$(t_i, t_{i+1}]$$ are also independent.
Suppose that $$t, h \in [0, \infty)$$, and let $$u_1 = m^{-1}(t), u_2 = m^{-1}(t+h)$$. Hence, $$N^*(t+h) – N^*(t) = N(u_2) – N(u_1)$$ has the Poisson distribution with parameter $$m(u_2) – m(u_1) = h$$.

2.32 (a) Let $$\{N(t), t \geq 0\}$$ be a nonhomogeneous Poisson process with mean value function $$m(t)$$. Given $$N(t) = n$$, show that the unordered set of arrival times has the same distribution as $$n$$ independent and identically distributed random variables having distribution function$$F(x) = \left\{ \begin{array}{ll} \frac{m(x)}{m(t)} \quad x \leq t \\ 1 \quad x > t \\ \end{array} \right.$$
(b) Suppose that workers incur accidents in accordance with a nonhomogeneous Poisson process with mean value function $$m(t)$$. Suppose further that each injured person is out of work for a random amount of time having distribution $$F$$. Let $$X(t)$$ be the number of workers who are out of work at time $$t$$. Compute $$E[X(t)]$$ and $$Var(X(t))$$.

(a) Similarly to the proof of Theorem 2.3.1, let $$S_i$$ denote the ith event,
\begin{align} &P\{t_i < S_i < t_i + h_i, i = 1, \dots, n|N(t) = n\} \\ =&\frac{\prod[m(t_i + h_i) - m(t_i)]e^{-m(t)}}{e^{-m(t)}[m(t)]^n/n!} = \frac{n!\prod[m(t_i + h_i) - m(t_i)]}{[m(t)]^n} \end{align}Dividing both sides by $$h_1 \cdots h_n$$ and letting $$h_i \to 0$$:
$$f_{S_1\dots S_n}(t_1,\dots,t_n|N(t) = n) = n!\prod[\lambda(t_i)/m(t)]$$
which is the joint density of the order statistics of $$n$$ i.i.d. random variables with density $$\lambda(x)/m(t)$$ on $$[0, t]$$, i.e. with distribution $$F$$.
(b) By part (a), given $$N(t)$$ the accident times are i.i.d. with density $$\lambda(s)/m(t)$$ on $$[0, t]$$, so the probability that a given injured worker is still out of work at time $$t$$ is
$$p = \int_0^t \bar{F}(t-s)d\frac{m(s)}{m(t)} = \int_0^t \bar{F}(t-s)\frac{\lambda(s)}{m(t)}ds$$Thus, we have
\begin{align} E[X(t)] &= E[E[X(t)|N(t)]] \\ &= m(t)\int_0^t \bar{F}(t-s)\frac{\lambda(s)}{m(t)}ds \\ &= \int_0^t \bar{F}(t-s)\lambda(s)ds \\ Var(X(t)) &= E[Var(X(t)|N(t))] + Var(E[X(t)|N(t)]) \\ &= m(t)p(1-p) + p^2Var(N(t)) \\ &= m(t)p(1-p) + p^2m(t) = m(t)p \\ &= \int_0^t \bar{F}(t-s)\lambda(s)ds \end{align}
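A simulation check that $$E[X(t)]$$ and $$Var(X(t))$$ agree, with illustrative choices $$\lambda(s) = 2s$$, $$t = 2$$, and Exponential(1) recovery times, where $$\int_0^2 e^{-(2-s)}\,2s\,ds = 2 + 2e^{-2}$$ in closed form:

```python
import math
import random

rng = random.Random(1)
lam = lambda s: 2 * s            # illustrative accident intensity; m(t) = t^2
t, lam_max = 2.0, 4.0
Fbar = lambda x: math.exp(-x)    # out-of-work durations ~ Exponential(1)

def X_t():
    """One realization of X(t): workers injured before t and still out at t."""
    s, out = 0.0, 0
    while True:
        s += rng.expovariate(lam_max)
        if s > t:
            return out
        if rng.random() < lam(s) / lam_max:  # an accident occurs at time s
            if rng.random() < Fbar(t - s):   # that worker is still out at time t
                out += 1

target = 2 + 2 * math.exp(-2)    # closed form of int_0^2 e^{-(2-s)} * 2s ds
samples = [X_t() for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var, target)         # mean and var both close to the integral
```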

2.33 A two-dimensional Poisson process is a process of events in the plane such that (i) for any region of area $$A$$, the number of events in $$A$$ is Poisson distributed with mean $$\lambda A$$, and (ii) the numbers of events in nonoverlapping regions are independent. Consider a fixed point, and let $$X$$ denote the distance from that point to its nearest event, where distance is measured in the usual Euclidean manner. Show that
(a) $$P\{X > t\} = e^{-\lambda\pi t^2}$$.
(b) $$E[X] = 1 / (2\sqrt{\lambda})$$.
(c) Let $$R_i, i \geq 1$$ denote the distance from an arbitrary point to the ith closest event to it. Show that, with $$R_0 = 0, \pi R_i^2 – \pi R_{i-1}^2, i \geq 1$$ are independent exponential random variables, each with rate $$\lambda$$.

(a) $$X > t$$ iff no event falls in the disk of radius $$t$$ around the point (area $$A = \pi t^2$$), so $$P\{X > t\} = e^{-\lambda A}\frac{(\lambda A)^0}{0!} = e^{-\lambda\pi t^2}$$
(b) \begin{align} E[X] &= \int_0^{\infty} t d(1 – e^{-\lambda\pi t^2} ) \\ &= -te^{-\lambda \pi t^2}|_0^{\infty} + \frac{1}{\sqrt{\lambda \pi}}\int_0^{\infty} e^{-x^2} dx \\ &= \frac{1}{2\sqrt{\lambda}} \end{align}
(c) Let $$D_i = \pi R_i^2 - \pi R_{i-1}^2$$. Given $$R_1, \dots, R_{i-1}$$, the event $$\{D_i > x\}$$ occurs iff the annulus of area $$x$$ lying just outside the disk of radius $$R_{i-1}$$ contains no events, so
$$P\{D_i > x | R_1, \dots, R_{i-1}\} = e^{-\lambda x}\frac{(\lambda x)^0}{0!} = e^{-\lambda x}$$
Since this does not depend on the earlier values, the $$D_i$$ are independent Exponential($$\lambda$$) random variables.
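A simulation check of (b) and (c) with $$\lambda = 1$$ (illustrative): the mean nearest distance should be $$1/(2\sqrt{\lambda}) = 0.5$$, and $$\pi R_1^2$$ should be Exponential(1), hence have mean $$1$$. Events are generated only in a disk of radius $$3$$, which misses the true nearest event with negligible probability $$e^{-9\pi}$$:

```python
import math
import random

rng = random.Random(7)
lam, R = 1.0, 3.0                       # unit-rate process; window: disk of radius 3

def poisson(mu):
    """Poisson(mu) sample via unit-rate exponential interarrivals."""
    k, s = 0, rng.expovariate(1.0)
    while s < mu:
        k += 1
        s += rng.expovariate(1.0)
    return k

def nearest_distance():
    """Distance from the origin to the nearest event inside the disk."""
    best = R                            # no event within R has prob e^{-9 pi}: negligible
    for _ in range(poisson(lam * math.pi * R * R)):
        best = min(best, R * math.sqrt(rng.random()))  # radius of a uniform point in the disk
    return best

xs = [nearest_distance() for _ in range(50_000)]
mean_X = sum(xs) / len(xs)
mean_area = sum(math.pi * x * x for x in xs) / len(xs)
print(mean_X, 1 / (2 * math.sqrt(lam)))   # E[X] = 0.5
print(mean_area)                          # pi * X^2 ~ Exponential(1), so mean near 1
```

The radius of a uniform point in a disk of radius $$R$$ is $$R\sqrt{U}$$, since its CDF is $$r^2/R^2$$.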

2.34 Repeat Problem 2.25 when the events occur according to a nonhomogeneous Poisson process with intensity function $$\lambda(t), t \geq 0$$.

As shown in Problem 2.32, the distribution function of $$X_i$$ is $$F(x) = \int_0^t F_s(x)\frac{\lambda(s)}{m(t)} ds$$, and the mean of $$N$$ is $$m(t)$$.

2.35 Let $$\{N(t), t \geq 0\}$$ be a nonhomogeneous Poisson process with intensity function $$\lambda(t), t \geq 0$$. However, suppose one starts observing the process at a random time $$\tau$$ having distribution function $$F$$. Let $$N^*(t) = N(\tau + t) – N(\tau)$$ denote the number of events that occur in the first $$t$$ time units of observation
(a) Does the process $$\{N^*(t), t \geq 0\}$$ possess independent increments?
(b) Repeat (a) when $$\{N(t), t \geq 0\}$$ is a Poisson process.

(a) No, not in general. Given $$\tau = s$$, the increment $$N^*(t+h) - N^*(t) = N(s+t+h) - N(s+t)$$ is Poisson with mean $$m(s+t+h) - m(s+t)$$, which depends on $$s$$. Since observing early increments of $$N^*$$ carries information about $$\tau$$, the unconditional increments are dependent unless $$\lambda(t)$$ is constant.
(b) Yes. For a Poisson process with rate $$\lambda$$, every increment of $$N^*$$ over an interval of length $$h$$ is Poisson with mean $$\lambda h$$ regardless of $$\tau$$, and increments over non-overlapping intervals are conditionally independent given $$\tau$$. Since their conditional distributions do not involve $$\tau$$, they are unconditionally independent, and $$\{N^*(t), t \geq 0\}$$ is again a Poisson process with rate $$\lambda$$.
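Conditional on $$\tau$$, non-overlapping increments of $$N^*$$ are independent Poissons, so the unconditional covariance of two such increments reduces to the covariance of their conditional means. A small exact computation (illustrative choices: $$\lambda(t) = t$$, so $$m(t) = t^2/2$$, with $$\tau$$ uniform on $$\{0, 1\}$$) evaluates it for the first two unit-length increments:

```python
# Conditional on tau, I1 = N*(1) and I2 = N*(2) - N*(1) are independent
# Poissons, so Cov(I1, I2) = Cov(E[I1|tau], E[I2|tau]).
# Illustrative: lambda(t) = t  =>  m(t) = t^2 / 2, tau uniform on {0, 1}.
m = lambda t: t * t / 2
taus = [0.0, 1.0]
mu1 = [m(s + 1) - m(s) for s in taus]        # E[I1 | tau = s]
mu2 = [m(s + 2) - m(s + 1) for s in taus]    # E[I2 | tau = s]
e1, e2 = sum(mu1) / 2, sum(mu2) / 2
cov = sum(a * b for a, b in zip(mu1, mu2)) / 2 - e1 * e2
print(cov)    # 0.25: nonzero covariance, so these increments are dependent
```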

2.36 Let $$C$$ denote the number of customers served in an $$M/G/1$$ busy period. Find
(a) $$E[C]$$.
(b) $$Var(C)$$.

(a) Condition on the first service time $$S$$: the $$N$$ customers arriving during that service ($$N|S \sim \text{Poisson}(\lambda S)$$) can each be viewed as starting an independent busy period of their own, so $$C = 1 + \sum_{i=1}^{N} C_i$$ with the $$C_i$$ i.i.d. copies of $$C$$. Taking expectations, $$E[C] = 1 + \lambda\mu_G E[C]$$, hence
$$E[C] = \frac{1}{1 - \lambda\mu_G}$$
(b) From the same recursion, with $$E[N] = \lambda\mu_G$$ and $$Var(N) = \lambda\mu_G + \lambda^2Var(S)$$,
\begin{align} Var(C) &= E[N]Var(C) + Var(N)E[C]^2 \\ \Rightarrow Var(C) &= \frac{\lambda\mu_G + \lambda^2Var(S)}{(1 - \lambda\mu_G)^3} \end{align}
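The branching-process view is easy to simulate. The sketch below assumes an illustrative deterministic service time $$S = 0.5$$ with $$\lambda = 1$$, so $$\rho = \lambda\mu_G = 0.5$$, $$E[C] = 1/(1-\rho) = 2$$, and, since $$Var(S) = 0$$, the standard busy-period variance $$(\lambda\mu_G + \lambda^2Var(S))/(1-\lambda\mu_G)^3$$ reduces to $$\rho/(1-\rho)^3 = 4$$:

```python
import random

rng = random.Random(3)
lam, S = 1.0, 0.5              # arrival rate and (assumed) deterministic service time
rho = lam * S                  # traffic intensity rho = lambda * mu_G = 0.5

def poisson(mu):
    """Poisson(mu) sample via unit-rate exponential interarrivals."""
    k, s = 0, rng.expovariate(1.0)
    while s < mu:
        k += 1
        s += rng.expovariate(1.0)
    return k

def busy_period_customers():
    """Customers in one busy period, viewed as a branching process:
    each customer's offspring are the Poisson(lam*S) arrivals during its service."""
    total, pending = 0, 1
    while pending:
        pending -= 1
        total += 1
        pending += poisson(lam * S)
    return total

n = 100_000
cs = [busy_period_customers() for _ in range(n)]
mean = sum(cs) / n
var = sum((c - mean) ** 2 for c in cs) / n
print(mean, 1 / (1 - rho))           # E[C] = 2
print(var, rho / (1 - rho) ** 3)     # Var(C) = 4 here, since Var(S) = 0
```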

2.37 Let $$\{X(t), t \geq 0\}$$ be a compound Poisson process with $$X(t) = \sum_{i=1}^{N(t)}X_i$$, and suppose that the $$X_i$$ can only assume a finite set of possible values. Argue that, for $$t$$ large, the distribution of $$X(t)$$ is approximately normal.

(i) A standardized Poisson variable converges to a standard normal as $$\lambda \to \infty$$:
Let $$X \sim \text{Poisson}(\lambda)$$, then
\begin{align} Y &= \frac{X-\lambda}{\sqrt{\lambda}} \\ M_Y &= E[\exp\{t\frac{X-\lambda}{\sqrt{\lambda}} \}] \\ &= \exp\{-t\sqrt{\lambda}\}\exp\{\lambda(e^{t/\sqrt{\lambda}} - 1)\} \\ &= \exp\{-t\sqrt{\lambda} + \lambda(\sum_{i=0}^{\infty} \frac{(t/\sqrt{\lambda})^i}{i!} - 1)\} \\ &= \exp\{t^2/2 + t^3/(6\sqrt{\lambda}) + \dots\}\\ \lim_{\lambda \to \infty} M_Y &= \exp\{t^2/2\} \end{align}
This limit is the MGF of $$N(0, 1)$$; thus $$Y$$ is approximately $$N(0, 1)$$, and $$X$$ is approximately $$N(\lambda, \lambda)$$ for large $$\lambda$$.
(ii) Let $$\alpha_1, \dots, \alpha_k$$ denote the possible values of the $$X_i$$ and $$p_j = P\{X_i = \alpha_j\}$$. By the decomposition of a compound Poisson process, $$X(t) = \sum_j \alpha_j N_j(t)$$, where the $$N_j(t)$$ are independent Poisson processes with rates $$\lambda p_j$$. When $$t$$ is large, each $$N_j(t)$$, having mean $$\lambda p_j t$$, is approximately normal by (i), and a linear combination of finitely many independent normal variables is again normal.
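A quick numeric illustration of (i), not part of the original solution: compare the exact Poisson CDF at $$\lambda + \sqrt{\lambda}$$ with the standard normal CDF at $$z = 1$$ (the mean $$\lambda = 400$$ is an arbitrary large value):

```python
import math

lam = 400.0                             # illustrative large Poisson mean
k_max = int(lam + math.sqrt(lam))       # evaluate the CDF at lambda + sqrt(lambda)

# exact Poisson CDF; lgamma keeps k! from overflowing
log_pmf = lambda k: -lam + k * math.log(lam) - math.lgamma(k + 1)
cdf = sum(math.exp(log_pmf(k)) for k in range(k_max + 1))

phi1 = 0.5 * (1 + math.erf(1 / math.sqrt(2)))   # standard normal CDF at z = 1
print(cdf, phi1)                        # close, as the MGF argument predicts
```

The small remaining gap is the usual continuity-correction effect of order $$1/\sqrt{\lambda}$$.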

2.38 Let $$\{X(t), t \geq 0\}$$ be a compound Poisson process with $$X(t) = \sum_{i=1}^{N(t)}X_i$$, and suppose that $$\lambda = 1$$ and $$P\{X_i = j\} = j/10, j = 1,2,3,4$$. Calculate $$P\{X(4) = 20\}$$.

$$X(4)$$ is a compound Poisson random variable with Poisson parameter $$\lambda t = 4$$. From Corollary 2.5.4 we know,
\begin{align} P\{ X(t) = 20 \} &= P_{20} = \frac{1}{5} \sum_{j = 1}^{4} \frac{j^2}{10} P_{20 – j} \\ &= \dots \end{align}
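The elided recursion is easy to carry out numerically. A minimal sketch (the cutoff 200 is just a truncation point far in the tail; the printed sanity checks use $$E[X(4)] = \lambda t\,E[X_1] = 4 \cdot 3 = 12$$):

```python
import math

lt = 4.0                                    # lambda * t = 1 * 4
alpha = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}    # P{X_i = j} = j / 10

P = [math.exp(-lt)]                         # P_0 = P{N(4) = 0} = e^{-4}
for n in range(1, 201):                     # P_n = (lt/n) * sum_j j * alpha_j * P_{n-j}
    P.append(lt / n * sum(j * a * P[n - j] for j, a in alpha.items() if j <= n))

print(P[20])                                          # P{X(4) = 20}
print(sum(P), sum(n * p for n, p in enumerate(P)))    # sanity: near 1 and near 12
```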

2.39 Compute $$Cov(X(s), X(t))$$ for a compound Poisson process.

Suppose $$s \leq t$$. By independent increments, $$Cov(X(s), X(t)-X(s)) = 0$$, so
\begin{align} Cov(X(s), X(t)) &= Cov(X(s), X(t)-X(s)) + Cov(X(s), X(s)) \\ &= Var(X(s)) = \lambda s E[X_1^2] \end{align} The case $$t \leq s$$ is symmetric, thus we have
$$Cov(X(s), X(t)) = \lambda \min(s,t)E[X_1^2]$$
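A simulation cross-check with illustrative parameters ($$\lambda = 2$$, $$X_i$$ uniform on $$\{1,2,3\}$$, so $$E[X_1^2] = 14/3$$), sampling the pair $$(X(1), X(2))$$ via independent increments:

```python
import random

rng = random.Random(5)
lam = 2.0
vals = (1, 2, 3)                      # X_i uniform on {1,2,3}: E[X_1^2] = 14/3

def poisson(mu):
    """Poisson(mu) sample via unit-rate exponential interarrivals."""
    k, s = 0, rng.expovariate(1.0)
    while s < mu:
        k += 1
        s += rng.expovariate(1.0)
    return k

def pair(s=1.0, t=2.0):
    """One sample of (X(s), X(t)), built from independent increments."""
    xs = sum(rng.choice(vals) for _ in range(poisson(lam * s)))
    xt = xs + sum(rng.choice(vals) for _ in range(poisson(lam * (t - s))))
    return xs, xt

n = 200_000
data = [pair() for _ in range(n)]
mx = sum(a for a, _ in data) / n
my = sum(b for _, b in data) / n
cov = sum((a - mx) * (b - my) for a, b in data) / n
print(cov, lam * 1.0 * 14 / 3)        # Cov(X(1), X(2)) = lam * min(1,2) * E[X_1^2]
```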

2.40 Give an example of a counting process $$\{N(t), t \geq 0\}$$ that is not a Poisson process but which has the property that conditional on $$N(t) = n$$ the first $$n$$ event times are distributed as the order statistics from a set of $$n$$ independent uniform $$(0,t)$$ random variables.

A conditional Poisson process does not have independent increments, so it is not a Poisson process. But given $$N(t) = n$$, the arrival times are distributed as the order statistics of $$n$$ independent uniform $$(0,t)$$ random variables. Refer to the solution of Problem 2.41 in the textbook for details.

## Derivation of Hypergeometric Distribution's Expectation and Variance

Let $$X$$ denote the number of white balls selected when $$n$$ balls are chosen at random from an urn containing $$N$$ balls, $$K$$ of which are white. Then $$X$$ has the hypergeometric distribution. A detailed description can be found on Wikipedia, but the derivation of the expectation and variance is omitted there. I’ll show the derivation from two different approaches.

First, Just Do It!
Obviously,
\begin{align} P\{X=i\} &= \frac{{K \choose i}{N-K \choose n-i}}{{N \choose n}} \\ N &\in \{0, 1, 2, \dots\} \\ K &\in \{0, 1, 2, \dots, N\} \\ n &\in \{0, 1, 2, \dots, N\} \\ i &\in \{\max(0, n+K-N), \dots, \min(n, K)\}\\ \\ E[X] &= \sum_i iP\{X=i\} = \sum_i \frac{i{K \choose i}{N-K \choose n-i}}{{N \choose n}} \\ &= K\sum_i \frac{{K-1 \choose i-1}{N-K \choose n-i}}{{N \choose n}} \end{align}
The remaining sum resembles the total probability of a hypergeometric random variable (which equals $$1$$). Rewriting $${N \choose n} = \frac{N}{n}{N-1 \choose n-1}$$, we get,
\begin{align} E[X] &= K\sum \frac{{K-1 \choose i-1}{(N-1)-(K-1) \choose (n-1)-(i-1)}}{\frac{N}{n}{N-1 \choose n-1}} \\ &= \frac{Kn}{N} \sum \frac{{K-1 \choose i-1}{(N-1)-(K-1) \choose (n-1)-(i-1)}}{{N-1 \choose n-1}} \\ &= \frac{Kn}{N} \\ \end{align}
We can apply the same reduction to $$E[X(X-1)]$$ and get,
\begin{align} E[X(X-1)] &= \frac{K(K-1)n(n-1)}{N(N-1)} \sum \frac{{K-2 \choose i-2}{(N-2)-(K-2) \choose (n-2)-(i-2)}}{{N-2 \choose n-2}} \\ &= \frac{K(K-1)n(n-1)}{N(N-1)} \\ Var(X) &= E[X^2] – E^2[X] = E[X(X-1)] + E[X] – E^2[X]\\ &= \frac{Kn(N-K)(N-n)}{N^2(N-1)} \end{align}
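Both closed forms are easy to verify numerically from the pmf; the urn sizes below are arbitrary:

```python
import math

N, K, n = 20, 7, 5                    # arbitrary urn: 20 balls, 7 white, draw 5
lo, hi = max(0, n + K - N), min(n, K)
pmf = {i: math.comb(K, i) * math.comb(N - K, n - i) / math.comb(N, n)
       for i in range(lo, hi + 1)}

mean = sum(i * p for i, p in pmf.items())
var = sum(i * i * p for i, p in pmf.items()) - mean ** 2
print(mean, K * n / N)                                        # both 1.75
print(var, K * n * (N - K) * (N - n) / (N**2 * (N - 1)))      # both about 0.898
```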

This direct computation is tedious. Though brute force, it works. Next I want to introduce a smarter way.

Second, Divide and Conquer!

Let $$X_i = 1$$ if the $$i$$th of the $$N$$ balls drawn is white and $$X_i = 0$$ otherwise. By symmetry, $$P(X_i = 1) = \frac{K}{N}$$, independent of the position $$i$$ and of whether the balls are replaced after being drawn. So we get,
\begin{align} E[X_i] &= \frac{K}{N}\\ Var(X_i) &= \frac{K(N-K)}{N^2} \end{align}
And we can easily get the expectation,
$$E[X] = \sum_{i=1}^{n}E[X_i] = \frac{Kn}{N}$$
Although all the $$X_i$$ have the same distribution, they are not independent. To calculate the variance, we first need their covariances.
\begin{align} Cov(X_i, X_j) &= E[X_i X_j] – E[X_i]E[X_j] \\ &= P\{X_i = 1, X_j = 1\} – \frac{K^2}{N^2}\\ &= P\{X_i = 1 | X_j = 1\}P\{X_j = 1\} – \frac{K^2}{N^2}\\ &= \frac{K-1}{N-1} \frac{K}{N} – \frac{K^2}{N^2}\\ &= -\frac{K(N-K)}{N^2(N-1)}\\ Var(X) &= \sum Var(X_i) + \sum_{i\neq j}Cov(X_i, X_j) \\ &= n\frac{K(N-K)}{N^2} – n(n-1)\frac{K(N-K)}{N^2(N-1)} \\ &= \frac{Kn(N-K)(N-n)}{N^2(N-1)} \end{align}
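The indicator argument can also be cross-checked by direct Monte Carlo sampling without replacement (arbitrary urn sizes):

```python
import random

rng = random.Random(11)
N, K, n = 20, 7, 5                    # arbitrary urn: 7 white among 20, draw 5
balls = [1] * K + [0] * (N - K)       # 1 marks a white ball

reps = 200_000
xs = [sum(rng.sample(balls, n)) for _ in range(reps)]   # sample() draws without replacement
mean = sum(xs) / reps
var = sum((x - mean) ** 2 for x in xs) / reps
print(mean, K * n / N)                                        # near 1.75
print(var, K * n * (N - K) * (N - n) / (N**2 * (N - 1)))      # near 0.898
```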

A Little More 🙂

Since every $$X_i$$ has the same distribution, independent of the position $$i$$ and of whether earlier balls are removed, we can also conclude that drawing lots is a fair game: regardless of the order in which you draw, you win with equal probability.