
Dragon Notes


\( \newcommand{bvec}[1]{\overrightarrow{\boldsymbol{#1}}} \newcommand{bnvec}[1]{\overrightarrow{\boldsymbol{\mathrm{#1}}}} \newcommand{uvec}[1]{\widehat{\boldsymbol{#1}}} \newcommand{vec}[1]{\overrightarrow{#1}} \newcommand{\parallelsum}{\mathbin{\|}} \) \( \newcommand{s}[1]{\small{#1}} \newcommand{t}[1]{\text{#1}} \newcommand{tb}[1]{\textbf{#1}} \newcommand{ns}[1]{\normalsize{#1}} \newcommand{ss}[1]{\scriptsize{#1}} \newcommand{vpl}[]{\vphantom{\large{\int^{\int}}}} \newcommand{vplup}[]{\vphantom{A^{A^{A^A}}}} \newcommand{vplLup}[]{\vphantom{A^{A^{A^{A{^A{^A}}}}}}} \newcommand{vpLup}[]{\vphantom{A^{A^{A^{A^{A^{A^{A^A}}}}}}}} \newcommand{up}[]{\vplup} \newcommand{Up}[]{\vplLup} \newcommand{Uup}[]{\vpLup} \newcommand{vpL}[]{\vphantom{\Large{\int^{\int}}}} \newcommand{lrg}[1]{\class{lrg}{#1}} \newcommand{sml}[1]{\class{sml}{#1}} \newcommand{qq}[2]{{#1}_{\t{#2}}} \newcommand{ts}[2]{\t{#1}_{\t{#2}}} \) \( \newcommand{ds}[]{\displaystyle} \newcommand{dsup}[]{\displaystyle\vplup} \newcommand{u}[1]{\underline{#1}} \newcommand{tu}[1]{\underline{\text{#1}}} \newcommand{tbu}[1]{\underline{\bf{\text{#1}}}} \newcommand{bxred}[1]{\class{bxred}{#1}} \newcommand{Bxred}[1]{\class{bxred2}{#1}} \newcommand{lrpar}[1]{\left({#1}\right)} \newcommand{lrbra}[1]{\left[{#1}\right]} \newcommand{lrabs}[1]{\left|{#1}\right|} \newcommand{bnlr}[2]{\bn{#1}\left(\bn{#2}\right)} \newcommand{nblr}[2]{\bn{#1}(\bn{#2})} \newcommand{real}[1]{\Ree\{{#1}\}} \newcommand{Real}[1]{\Ree\left\{{#1}\right\}} \newcommand{abss}[1]{\|{#1}\|} \newcommand{umin}[1]{\underset{{#1}}{\t{min}}} \newcommand{umax}[1]{\underset{{#1}}{\t{max}}} \newcommand{und}[2]{\underset{{#1}}{{#2}}} \) \( \newcommand{bn}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{bns}[2]{\bn{#1}_{\t{#2}}} \newcommand{b}[1]{\boldsymbol{#1}} \newcommand{bb}[1]{[\bn{#1}]} \) \( \newcommand{abs}[1]{\left|{#1}\right|} \newcommand{ra}[]{\rightarrow} \newcommand{Ra}[]{\Rightarrow} \newcommand{Lra}[]{\Leftrightarrow} \newcommand{rai}[]{\rightarrow\infty} \newcommand{ub}[2]{\underbrace{{#1}}_{#2}} \newcommand{ob}[2]{\overbrace{{#1}}^{#2}} \newcommand{lfrac}[2]{\large{\frac{#1}{#2}}\normalsize{}} \newcommand{sfrac}[2]{\small{\frac{#1}{#2}}\normalsize{}} \newcommand{Cos}[1]{\cos{\left({#1}\right)}} \newcommand{Sin}[1]{\sin{\left({#1}\right)}} \newcommand{Frac}[2]{\left({\frac{#1}{#2}}\right)} \newcommand{LFrac}[2]{\large{{\left({\frac{#1}{#2}}\right)}}\normalsize{}} \newcommand{Sinf}[2]{\sin{\left(\frac{#1}{#2}\right)}} \newcommand{Cosf}[2]{\cos{\left(\frac{#1}{#2}\right)}} \newcommand{atan}[1]{\tan^{-1}({#1})} \newcommand{Atan}[1]{\tan^{-1}\left({#1}\right)} \newcommand{intlim}[2]{\int\limits_{#1}^{#2}} \newcommand{lmt}[2]{\lim_{{#1}\rightarrow{#2}}} \newcommand{ilim}[1]{\lim_{{#1}\rightarrow\infty}} \newcommand{zlim}[1]{\lim_{{#1}\rightarrow 0}} \newcommand{Pr}[]{\t{Pr}} \newcommand{prop}[]{\propto} \newcommand{ln}[1]{\t{ln}({#1})} \newcommand{Ln}[1]{\t{ln}\left({#1}\right)} \newcommand{min}[2]{\t{min}({#1},{#2})} \newcommand{Min}[2]{\t{min}\left({#1},{#2}\right)} \newcommand{max}[2]{\t{max}({#1},{#2})} \newcommand{Max}[2]{\t{max}\left({#1},{#2}\right)} \newcommand{pfrac}[2]{\frac{\partial{#1}}{\partial{#2}}} \newcommand{pd}[]{\partial} \newcommand{zisum}[1]{\sum_{{#1}=0}^{\infty}} \newcommand{iisum}[1]{\sum_{{#1}=-\infty}^{\infty}} \newcommand{var}[1]{\t{var}({#1})} \newcommand{exp}[1]{\t{exp}\left({#1}\right)} \newcommand{mtx}[2]{\left[\begin{matrix}{#1}\\{#2}\end{matrix}\right]} \newcommand{nmtx}[2]{\begin{matrix}{#1}\\{#2}\end{matrix}} 
\newcommand{nmttx}[3]{\begin{matrix}\begin{align} {#1}& \\ {#2}& \\ {#3}& \\ \end{align}\end{matrix}} \newcommand{amttx}[3]{\begin{matrix} {#1} \\ {#2} \\ {#3} \\ \end{matrix}} \newcommand{nmtttx}[4]{\begin{matrix}{#1}\\{#2}\\{#3}\\{#4}\end{matrix}} \newcommand{mtxx}[4]{\left[\begin{matrix}\begin{align}&{#1}&\hspace{-20px}{#2}\\&{#3}&\hspace{-20px}{#4}\end{align}\end{matrix}\right]} \newcommand{mtxxx}[9]{\begin{matrix}\begin{align} &{#1}&\hspace{-20px}{#2}&&\hspace{-20px}{#3}\\ &{#4}&\hspace{-20px}{#5}&&\hspace{-20px}{#6}\\ &{#7}&\hspace{-20px}{#8}&&\hspace{-20px}{#9} \end{align}\end{matrix}} \newcommand{amtxxx}[9]{ \amttx{#1}{#4}{#7}\hspace{10px} \amttx{#2}{#5}{#8}\hspace{10px} \amttx{#3}{#6}{#9}} \) \( \newcommand{ph}[1]{\phantom{#1}} \newcommand{vph}[1]{\vphantom{#1}} \newcommand{mtxxxx}[8]{\begin{matrix}\begin{align} & {#1}&\hspace{-17px}{#2} &&\hspace{-20px}{#3} &&\hspace{-20px}{#4} \\ & {#5}&\hspace{-17px}{#6} &&\hspace{-20px}{#7} &&\hspace{-20px}{#8} \\ \mtxxxxCont} \newcommand{\mtxxxxCont}[8]{ & {#1}&\hspace{-17px}{#2} &&\hspace{-20px}{#3} &&\hspace{-20px}{#4}\\ & {#5}&\hspace{-17px}{#6} &&\hspace{-20px}{#7} &&\hspace{-20px}{#8} \end{align}\end{matrix}} \newcommand{mtXxxx}[4]{\begin{matrix}{#1}\\{#2}\\{#3}\\{#4}\end{matrix}} \newcommand{cov}[1]{\t{cov}({#1})} \newcommand{Cov}[1]{\t{cov}\left({#1}\right)} \newcommand{var}[1]{\t{var}({#1})} \newcommand{Var}[1]{\t{var}\left({#1}\right)} \newcommand{pnint}[]{\int_{-\infty}^{\infty}} \newcommand{floor}[1]{\left\lfloor {#1} \right\rfloor} \) \( \newcommand{adeg}[1]{\angle{({#1}^{\t{o}})}} \newcommand{Ree}[]{\mathcal{Re}} \newcommand{Im}[]{\mathcal{Im}} \newcommand{deg}[1]{{#1}^{\t{o}}} \newcommand{adegg}[1]{\angle{{#1}^{\t{o}}}} \newcommand{ang}[1]{\angle{\left({#1}\right)}} \newcommand{bkt}[1]{\langle{#1}\rangle} \) \( \newcommand{\hs}[1]{\hspace{#1}} \)

  UNDER CONSTRUCTION

Randomness & Probability:
Solved Problems




\(\bb{Pr1}\ \)Find PDF of \(Y=e^{X}\) for \(X\sim N(0,1)\)
\(S_Y =\{y:y>0\}\). Let \(\ y=e^x\); then, \(x=\ln{y}\Rightarrow g^{-1}(y)=\ln{y}\)
\(\ds p_Y(y)=p_X(\ln{y})\left|\frac{d\ln{y}}{dy}\right|= \left\{ \begin{array}{c} \begin{align} & p_X(\ln{y})\sfrac{1}{y},\ \ y > 0 \\ & 0, \hspace{86px} y\leq 0 \end{align} \end{array} \right.\) \(\ds =\bxred{\left\{ \begin{array}{c} \begin{align} & \sfrac{1}{\sqrt{2\pi}y}\t{exp}[-\sfrac{1}{2}(\ln{y})^2],\ \ y > 0 \\ & 0, \hspace{167px} y\leq 0 \end{align} \end{array} \right.}\)
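As a sanity check, the boxed density can be verified by simulation - a minimal numpy sketch (seed, sample size, and bin grid are arbitrary choices):

```python
# Monte Carlo check of p_Y(y) for Y = exp(X), X ~ N(0,1)
import numpy as np

rng = np.random.default_rng(0)
y = np.exp(rng.standard_normal(1_000_000))       # samples of Y = e^X

edges = np.linspace(0.05, 5, 201)
counts, _ = np.histogram(y, bins=edges)
emp = counts / (len(y) * np.diff(edges))         # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])
ref = np.exp(-0.5 * np.log(centers)**2) / (np.sqrt(2*np.pi) * centers)

print(np.max(np.abs(emp - ref)))                 # ~0.01; shrinks with more samples
```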
\(\bb{Pr2}\ \)Find PDF of \(Y=X^2\) for \(X\sim N(0,1)\vplLup\)
The solutions to \(y=x^2\) are \(x_1=-\sqrt{y}=g^{-1}_1(y),\) and \(x_2=\sqrt{y}=g^{-1}_2(y)\). We must add the PDFs (since both the \(x\)-intervals map into the same \(y\)-interval and the \(x\)-intervals are disjoint) to yield
\(p_Y(y)=p_X(g^{-1}_1(y))\left|\lfrac{dg^{-1}_1(y)}{dy}\right|+p_X(g^{-1}_2(y))\left|\lfrac{dg^{-1}_2(y)}{dy}\right|\) \(= \left\{ \begin{array}{c} \begin{align} & \small{}\left[\frac{1}{\sqrt{2\pi}}e^{-y/2}\right]\frac{1}{2\sqrt{y}}+\left[\frac{1}{\sqrt{2\pi}}e^{-y/2}\right]\frac{1}{2\sqrt{y}},\ \ y \geq 0 \\ & \small{} 0, \hspace{265px} y < 0 \end{align} \end{array} \right. \)\(= \small{} \bxred{\left\{ \begin{array}{c} \begin{align} & \frac{1}{\sqrt{2\pi y}}e^{-y/2},\ \ y \geq 0 \\ & \small{} 0, \hspace{78px} y < 0 \end{align} \end{array} \right.} \)
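The same Monte Carlo check works here - both roots \(\pm\sqrt{y}\) feed the histogram (again a sketch; the lower cutoff avoids the integrable singularity at \(y=0\)):

```python
# Monte Carlo check of p_Y(y) for Y = X^2, X ~ N(0,1) (chi-squared, 1 dof)
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(1_000_000) ** 2

edges = np.linspace(0.2, 6, 201)                 # avoid the singularity at y = 0
counts, _ = np.histogram(y, bins=edges)
emp = counts / (len(y) * np.diff(edges))         # empirical density per bin
centers = 0.5 * (edges[:-1] + edges[1:])
ref = np.exp(-centers / 2) / np.sqrt(2 * np.pi * centers)

print(np.max(np.abs(emp - ref)))                 # small away from y = 0
```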
\(\bb{Pr3}\ \)Consider a Rayleigh rv input to a device that limits its output (e.g., a thermometer with a maximum reading; all temperatures above this maximum read as the maximum). Determine \(p_Y(y)\).
The effect of the device can be represented by the transformation\(\vplup\)
\(\vplup y=g(x)= \left\{ \begin{array}{c} \begin{align} & x,\hspace{40px} 0\leq x < x_{\t{max}} \\ & x_{\t{max}}, \hspace{11px} \ x\geq x_{\t{max}} \end{align} \end{array} \right. \)
\(p_Y(y)=0\) for \(y<0\) since \(X\) can only take on non-negative values. For \(0\leq y < x_{\t{max}},\ g^{-1}(y)=y.\) Lastly, for \(y\geq x_{\t{max}}\), we have an infinite number of solutions \(x\in [x_{\t{max}},\infty)\) (left fig.). In region two, we have
\(\vplup p_Y(y)=p_X(g^{-1}(y))\left|\lfrac{dg^{-1}(y)}{dy}\right|=p_X(y)\)
In region three, \(Y\) cannot exceed \(x_{\t{max}}\) and so \(y=x_{\t{max}}\) is the only possible value. Its probability is equal to the probability that \(X\geq x_{\t{max}}\):
\(\vplup \ds P[Y=x_{\t{max}}]=\int_{x_{\t{max}}}^{\infty}p_X(x)dx\)
Since, as shown in bot-left fig, the \(x\)-interval \([x_{\t{max}},\infty)\) is mapped into the single \(y\)-point \(y=x_{\t{max}}\), and this probability is nonzero, we represent it in the PDF by using an impulse as
\(\ds \vplup p_Y(y)=\left[\int_{x_{\t{max}}}^{\infty}p_X(x)dx\right] \delta (y-x_{\t{max}}),\ \ y=x_{\t{max}}\)
Since it contains an impulse, \(p_Y(y)\) is the PDF of a mixed rv. Finally, for \(x\geq 0\), the Rayleigh PDF for \(\sigma^2=1\) is
\(\ds \vplup p_X(x)=x\exp{-x^2/2},\ \ \t{so that the PDF of }Y\t{ becomes (shown in bot-right fig)}\)
\(\ds \vplup p_Y(y)= \left\{ \begin{array}{c} \begin{align} & 0, && y<0 \\ & y\exp{-y^2/2}, && 0\leq y < x_{\t{max}} \\ & \left[\int_{x_{\t{max}}}^{\infty}x\exp{-x^2/2}dx\right] \delta (y-x_{\t{max}}), && y=x_{\t{max}} \\ & 0, && y > x_{\t{max}} \end{align} \end{array} \right. \)
\(=\bxred{ \left\{ \begin{array}{c} \begin{align} & 0, && y<0 \\ & y\exp{-y^2/2}, && 0\leq y < x_{\t{max}} \\ & \exp{-x_{\t{max}}^2/2}\delta (y-x_{\t{max}}), && y=x_{\t{max}} \hspace{89px}\\ & 0, && y>x_{\t{max}} \end{align} \end{array} \right.} \)
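A simulation sketch of the limiter (with an assumed \(x_{\t{max}}=2\) and \(\sigma^2=1\)) recovers the point mass \(P[Y=x_{\t{max}}]=\t{exp}(-x_{\t{max}}^2/2)\):

```python
# Clipped Rayleigh: verify the point mass at x_max
import numpy as np

rng = np.random.default_rng(2)
x_max = 2.0
x = rng.rayleigh(scale=1.0, size=1_000_000)      # Rayleigh with sigma^2 = 1
y = np.minimum(x, x_max)                          # the limiting device

print((y == x_max).mean())                        # empirical point mass
print(np.exp(-x_max**2 / 2))                      # predicted: exp(-2) ~ 0.1353
```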

\(\bb{Pr4}\ \)Let \((X,Y)\) have a standard bivariate Gaussian PDF. Given the transformation
\(\ds \mtx{W}{Z}=\mtxx{\sigma_W}{0}{0}{\sigma_Z}\mtx{X}{Y}+\mtx{\mu_W}{\mu_Z}\vphantom{\frac{A^{A^A}}{A}}\),
determine the transformed PDF \(p_{W,Z}(w,z)\).


[Sol] First solve for \(x,y\) as

\(\ds x=\frac{w-\mu_W}{\sigma_W},\quad y=\frac{z-\mu_Z}{\sigma_Z}\)
The inverse Jacobian matrix becomes
\(\ds \frac{\partial (x,y)}{\partial (w,z)}=\mtxx{1/\sigma_W}{0}{0}{1/\sigma_Z}\)
Applying \(\bb{15.2}\) to \(\bb{6}\), we obtain
\(\ds p_{W,Z}(w,z)=\frac{1}{2\pi\sqrt{1-\rho^2}}\cdot\t{exp}\lrbra{-\frac{1}{2(1-\rho^2)}\lrpar{\Frac{w-\mu_W}{\sigma_W}^2-2\rho\Frac{w-\mu_W}{\sigma_W}\Frac{z-\mu_Z}{\sigma_Z}+\Frac{z-\mu_Z}{\sigma_Z}^2}}\frac{1}{\sigma_W\sigma_Z}\Rightarrow\)
\(\ds \bxred{p_{W,Z}(w,z)=\frac{1}{2\pi\sqrt{(1-\rho^2)\sigma_W^2\sigma_Z^2}}\ \t{exp}\lrbra{-\frac{1}{2(1-\rho^2)}\lrpar{\Frac{w-\mu_W}{\sigma_W}^2-2\rho\Frac{w-\mu_W}{\sigma_W}\Frac{z-\mu_Z}{\sigma_Z}+\Frac{z-\mu_Z}{\sigma_Z}^2}}}\)
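A numerical spot-check of the boxed density against scipy's multivariate normal (the parameter values below are arbitrary assumptions; \(\rho\) is the correlation of the standard pair \((X,Y)\)):

```python
# Compare the boxed p_{W,Z}(w,z) with scipy.stats.multivariate_normal
import numpy as np
from scipy.stats import multivariate_normal

mu_w, mu_z, s_w, s_z, rho = 1.0, -0.5, 2.0, 1.5, 0.6
C = np.array([[s_w**2,      rho*s_w*s_z],
              [rho*s_w*s_z, s_z**2     ]])

def p_wz(w, z):                                   # the boxed formula
    u, v = (w - mu_w) / s_w, (z - mu_z) / s_z
    q = (u*u - 2*rho*u*v + v*v) / (1 - rho**2)
    return np.exp(-q / 2) / (2*np.pi*np.sqrt((1 - rho**2) * s_w**2 * s_z**2))

w, z = 0.3, 1.1
print(p_wz(w, z))                                 # identical values
print(multivariate_normal.pdf([w, z], mean=[mu_w, mu_z], cov=C))
```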

\(\bb{Pr5}\ \)[Dart worthy?] Suppose you're challenged to a dart-throwing contest by a champion player. The champion scores a bullseye 85% of the time. Suppose also that you aren't very skilled at dart-throwing, and that your darts are equally likely to land anywhere on the dartboard. Assume the bullseye radius is \(1/4\) of the dartboard's - hence your unconditional odds of a bullseye are \((1/4)^2=1/16=6.25\%\).
\(\vplLup\)To make the challenge reasonable, the champion proposes the following. He gets one shot per game. For you, if your dart lands outside the region \(|x|\leq\Delta x/2\), then you get to go again - until your dart lands within the region \(|x|\leq \Delta x/2\), as shown in (a) below. You even get to pick the value of \(\Delta x\) (note: only darts in the chosen region are counted). You may thus reason it best to exclude regions outside the bullseye - and choose as in (b).
\(\vplLup\)The champion bets you that, even with this aid, you will not outscore him. To solidify his confidence, he offers to return double any amount you bet - so if you stake \($5\) and win, you get \($10\).
\(\vplLup\)  Should you take this bet? If so, at which value of \(\Delta x\) do odds turn to your favor?

[Sol] To find the probability of your throw scoring a bullseye, we recall that your dart is equally likely to land anywhere on the dartboard. Hence, we have
\(\ds P[\t{bullseye}|-\Delta x/2\leq X \leq \Delta x /2]=\frac{P[\t{bullseye,}-\Delta x/2\leq X \leq \Delta x/2]}{P[-\Delta x/2\leq X \leq \Delta x/2]}\)
\(\vplup\)Since \(\Delta x/2\) is small, we can assume it to be \(\ll 1/4\); therefore, the cross-hatched regions can be approximated by rectangles and so
\(\ds \begin{align} P[\t{bullseye}|-\Delta x/2\leq X \leq \Delta x/2] &= \frac{\vphantom{\int^{A^A}}P[\t{double cross-hatched region}]}{P[\t{double cross-hatched region}]+P[\t{single cross-hatched region}]} \\ &= \frac{\Delta x(1/2)/\pi}{\Delta x(2)/\pi}\quad \t{(probability = rectangle area / dartboard area)} \\ &= \bxred{0.25 < 0.85} \end{align}\)
(The approximations are due to the use of rectangular approximations to the cross-hatched regions, which become exact as \(\Delta x\rightarrow 0\))
\(\vplup\)Hence, the champion will have a higher probability of winning for any \(\Delta x\), no matter how small it is chosen.
Even with rewards doubled (or tripled), you end up at a loss.
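A Monte Carlo sketch of the bet (assuming unit dartboard radius, bullseye radius \(1/4\), and an arbitrary small \(\Delta x\)) confirms the conditional probability stays near \(0.25\):

```python
# Dart bet: P[bullseye | dart lands in the strip |x| <= dx/2] ~ 0.25
import numpy as np

rng = np.random.default_rng(3)
n, dx = 4_000_000, 0.05
pts = rng.uniform(-1, 1, size=(n, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1]    # uniform on the unit disk

counted = pts[np.abs(pts[:, 0]) <= dx / 2]        # only these throws score
bull = np.hypot(counted[:, 0], counted[:, 1]) <= 0.25
print(bull.mean())                                # ~0.25 < 0.85 for any small dx
```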
\(\bb{Pr6}\ \)[rv Decorrelation] Decorrelate rv's \(X_1,X_2\), whose joint PMF is given as
\(\ds \begin{array}{c|cccc|c} & x_2=-8 & x_2=0 & x_2=2 & x_2=6 & p_{X_1}[x_1] \\ \hline x_1=-8 & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{4} \\ x_1=0 & \frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} \\ x_1=2 & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ x_1=6 & 0 & 0 & \frac{1}{4} & 0 & \frac{1}{4} \\ \hline p_{X_2}[x_2] & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \end{array}\)

[Sol] First determine the covariance matrix \(\bn{C}_X\) and then \(\bn{A}\) such that \(\bn{Y}=\bn{AX}\) consists of uncorrelated variables. From the table, we have
\(\ds \begin{align} \vplLup E_{X_1}[X_1] &= E_{X_2}[X_2]=0 \\ E_{X_1}[X_1^2] &= E_{X_2}[X_2^2]=26 \\ E_{X_1,X_2}[X_1X_2] &= 6 \\ &\Rightarrow \\ \var{X_1} &= \var{X_2}=26 \\ \cov{X_1,X_2} &= 6 \\ &\Rightarrow \end{align}\)
\(\ds \boxed{\bn{C}_X=\mtxx{26}{6}{6}{26}}\)
  \(\vplup\)Next, find the eigenvectors:
\(\ds \t{det}\left(\mtxx{26-\lambda}{6}{6}{26-\lambda}\right)=0 \Rightarrow (26-\lambda)(26-\lambda)-36=0 \Rightarrow\)
\(\vplup\boxed{\lambda_1=20,\ \lambda_2=32}\)

  \(\vplup\)Solving for the corresponding eigenvectors and normalizing yields
\(\ds \vplup (\bn{C}_X-\lambda_1\bn{I})\bn{v}_1=\mtxx{6}{\hspace{10px}6}{6}{\hspace{10px}6}\mtx{v_1}{v_2}=\mtx{0}{0}\Rightarrow \boxed{\bn{v}_1=\mtx{1/\sqrt{2}}{-1/\sqrt{2}}}\)
\(\ds \vplup (\bn{C}_X-\lambda_2\bn{I})\bn{v}_2=\mtxx{-6}{6}{6}{-6}\mtx{v_1}{v_2}=\mtx{0}{0}\Rightarrow \boxed{\bn{v}_2=\mtx{1/\sqrt{2}}{1/\sqrt{2}}}\)

  \(\vplup\)The modal matrix becomes
\(\ds \bn{V}=[\bn{v_1}\ \ \bn{v_2}]=\mtxx{1/\sqrt{2}}{1/\sqrt{2}}{-1/\sqrt{2}}{1/\sqrt{2}},\t{ and therefore}\)
\(\ds \bn{V}^T=\boxed{\bn{A}=\mtxx{1/\sqrt{2}}{-1/\sqrt{2}}{1/\sqrt{2}}{1/\sqrt{2}}}\)
  \(\vplup\)Hence, the transformed r-vector \(\bn{Y}=\bn{AX}\) is explicitly
\(\ds \vplLup^{\vplup}\vplup \bxred{Y_1=\frac{1}{\sqrt{2}}X_1-\frac{1}{\sqrt{2}}X_2} \quad\) \(\ds \vplup \bxred{Y_2=\frac{1}{\sqrt{2}}X_1+\frac{1}{\sqrt{2}}X_2}\)
  \(\vplup\)\(Y_1\) and \(Y_2\) are hence uncorrelated rv's with
\(\ds \vplup E_{\bn{Y}}[\bn{Y}] = E_{\bn{Y}}[\bn{AX}]=\bn{A}E_{\bn{X}}[\bn{X}]=\bn{0}\)
\(\bn{C}_Y = \bn{A}\bn{C}_X\bn{A}^T = \bn{V}^T\bn{C}_X\bn{V} = \bn{\Lambda} = \mtxx{20}{0}{0}{32}\)
  \(\vplLup\)Note that the solution works by rotating every datapoint by a fixed angle - effectively rotating the regression line and changing its slope (the
  correlation coefficient). The \(\bn{A}\) found here - and, in general, any such decorrelating \(\bn{A}\) - is a rotation matrix:
\(\ds \bn{A}=\mtxx{\cos{\theta}}{-\sin{\theta}}{\sin{\theta}}{\cos{\theta}}\)
  \(\vplup\) - where \(\theta=\pi/4\) in this example. As shown in the figure below, the values of \(\bn{X}\) become the values of \(\bn{Y}\) via a \(\deg{45}\) rotation.
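The decorrelation can be reproduced numerically with numpy's symmetric eigensolver (a sketch; eigenvector signs may differ from the hand solution, which doesn't affect decorrelation):

```python
# Decorrelate X via the eigendecomposition of C_X
import numpy as np

C_X = np.array([[26., 6.],
                [6., 26.]])
lam, V = np.linalg.eigh(C_X)      # eigenvalues in ascending order: 20, 32
A = V.T                           # rows = normalized eigenvectors

C_Y = A @ C_X @ A.T
print(lam)                        # [20. 32.]
print(np.round(C_Y, 12))          # diag(20, 32) -> Y1, Y2 uncorrelated
```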

\(\bb{Pr7}\ \)[Bivariate Gaussian realization] Obtain a bivariate Gaussian realization \((W,Z)\) by transforming the standard bivariate Gaussian \((X,Y)\)

[Sol] Given \(X,Y\sim N(0,1)\), the distribution \(N\lrpar{[\mu_W\ \mu_Z]^T,\bn{C}_{W,Z}}\) can be obtained by applying the affine transformation

\(\ds \mtx{W}{Z}=\bn{G}\mtx{X}{Y}+\mtx{a}{b}\quad \bb{1^*}\)
  \(\vplup\) The mean vector and covariance matrix of \([W\ Z]^T\) transform according to

\(\ds E\lrbra{\mtx{W}{Z}}=\bn{G}E\lrbra{\mtx{X}{Y}}+\mtx{a}{b}\)
\(\ds \vplup \bn{C}_{W,Z}=\bn{G}\bn{C}_{X,Y}\bn{G}^T\quad \bb{2^*}\)
  Then, \((W,Z)\) with a given mean \([\mu_W\ \mu_Z]^T\) and covariance matrix \(\bn{C}_{W,Z}\) can be obtained by applying \(\bb{1^*}\) with a suitable \(\bn{G}\) and \([a\ b]^T\) so that

\(\ds E\lrbra{\mtx{W}{Z}}=\mtx{\mu_W}{\mu_Z},\quad \bn{C}_{W,Z}=\mtxx{\sigma_W^2}{\rho\sigma_W\sigma_Z}{\rho\sigma_Z\sigma_W}{\sigma_Z^2}\quad \bb{3^*}\)
  Since \((X,Y)\) are zero-mean, \(E[[X\ Y]^T]=0\) - and so we choose \(a=\mu_W\) and \(b=\mu_Z\). Also, since \(X\) & \(Y\) are independent, hence uncorrelated, and
  with unity variances, we have
\(\ds \bn{C}_{X,Y}=\mtxx{1}{\hspace{10px} 0}{0}{\hspace{10px}1}=\bn{I}\)
  To obtain \(\bn{C}_{W,Z}\) using \(\bb{2^*}\), construct \(\bn{G}\) as follows; let \(\bn{G}\) be a lower triangular matrix
\(\ds \bn{G}=\mtxx{a}{\hspace{10px}0}{b}{\hspace{10px}c}\)
\(\Rightarrow\)
\(\bn{G}\bn{G}^T=\mtxx{a}{\hspace{10px}0}{b}{\hspace{10px}c}\mtxx{a}{\hspace{10px}b}{0}{\hspace{10px}c}=\mtxx{a^2}{ab}{ab}{b^2+c^2}\)
  Next, equate the elements of \(\bn{C}_{W,Z}\) in \(\bb{3^*}\) to those of \(\bn{G}\bn{G}^T\) above:
\(\ds a=\sigma_W,\quad b=\rho\sigma_Z,\quad c=\sigma_Z\sqrt{1-\rho^2}\)
  Hence, we have
\(\ds \bn{G}=\mtxx{\sigma_W}{0}{\rho\sigma_Z}{\sigma_Z\sqrt{1-\rho^2}}\)
  - and the complete transformation is

\(\ds \bxred{\mtx{W}{Z}=\mtxx{\sigma_W}{0}{\rho\sigma_Z}{\sigma_Z\sqrt{1-\rho^2}}\mtx{X}{Y}+\mtx{\mu_W}{\mu_Z}}\)
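This \(\bn{G}\) is the lower Cholesky factor of \(\bn{C}_{W,Z}\). A sampling sketch (with arbitrary assumed parameter values) confirms the transformation:

```python
# Generate correlated (W, Z) from iid N(0,1) pairs via the boxed G
import numpy as np

rng = np.random.default_rng(4)
mu_w, mu_z, s_w, s_z, rho = 1.0, 2.0, 3.0, 0.5, -0.7
G = np.array([[s_w,       0.0                      ],
              [rho * s_z, s_z * np.sqrt(1 - rho**2)]])

xy = rng.standard_normal((2, 500_000))            # rows: X, Y ~ iid N(0,1)
wz = G @ xy + np.array([[mu_w], [mu_z]])

print(wz.mean(axis=1))   # ~ [1.0, 2.0]
print(np.cov(wz))        # ~ [[9.0, -1.05], [-1.05, 0.25]] = C_{W,Z}
```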

\(\bb{Pr8}\ \)[Second-order joint moments of multivariate Gaussian PDF]
Derive the second-order moments of \(\bn{X}\sim N(\bn{0},\bn{C})\). The characteristic function is
\[\phi_\bn{X}(\bn{\omega})=\exp{-\frac{1}{2}\bn{\omega}^T\bn{C\omega}}\]
[Sol]
\(\ds\vplup \t{Let }Q(\bn{\omega})=\bn{\omega}^T\bn{C\omega}=\sum_{m=1}^{N}\sum_{n=1}^{N}\omega_m\omega_n\bb{C}_{mn}\)
  - where \(\bb{C}_{mn}=c_{mn}\) for brevity. Note that \(Q\) is a quadratic form. Hence, with \(l_i=l_j=1\) and the other \(l\t{'s}=0\), we have

\(\ds\vplup \begin{align} E_{X_1,X_2,...,X_N}[X_1^{l_1}X_2^{l_2}...X_N^{l_N}]&=\frac{1}{j^{l_1+l_2+...+l_N}}\frac{\partial^{l_1+l_2+...+l_N}} {\partial \omega_1^{l_1}\partial \omega_2^{l_2}...\partial \omega_N^{l_N}}\phi_{X_1,X_2,...,X_N}(\omega_1,\omega_2,...,\omega_N) \vert {}_{\omega_1=\omega_2=...=\omega_N=0}\Rightarrow \\ E_{X_i,X_j}[X_iX_j]&=\frac{1}{j^2}\frac{\partial^2}{\partial\omega_i\partial\omega_j}\exp{-Q(\bn{\omega})/2}{\vert}_{\bn{\omega}=0}\\ &\Rightarrow \\ \frac{\partial\t{exp}[-Q(\bn{\omega})/2]}{\partial\omega_i} &= -\frac{1}{2}\frac{\partial Q(\bn{\omega})}{\partial \omega_i}\exp{-Q(\bn{\omega})/2}\\ \frac{\partial^2\t{exp}[-Q(\bn{\omega})/2]}{\partial\omega_i\partial\omega_j}&=\frac{1}{4}\frac{\partial Q(\bn{\omega})}{\partial \omega_i} \frac{\partial Q(\bn{\omega})}{\partial\omega_j}\exp{-Q(\bn{\omega})/2}-\frac{1}{2}\frac{\partial^2 Q(\bn{\omega})}{\partial\omega_i\partial\omega_j}\exp{-Q(\bn{\omega})/2}\ \bb{1^*}\t{. But,}\\ \frac{\partial Q(\bn{\omega})}{\partial\omega_i}{\vert}_{\bn{\omega}=0} &= \sum_{m=1}^{N}\sum_{n=1}^{N}\frac{\partial (\omega_m\omega_n)}{\partial\omega_i}\bb{C}_{mn}{\vert}_{\bn{\omega}=0}\\ &= \sum_{m=1}^{N}\sum_{n=1}^{N} \lrbra{\omega_m\frac{\partial\omega_n}{\partial\omega_i}\bb{C}_{mn}+\omega_n\frac{\partial\omega_m}{\partial\omega_i}\bb{C}_{mn}}_{\bn{\omega}=0}=0\ \bb{2^*}\t{, and also}\\ \frac{\partial^2Q(\bn{\omega})}{\partial\omega_i\partial\omega_j}{\vert}_{\bn{\omega}=0}&=\sum_{m=1}^{N}\sum_{n=1}^{N}\frac{\partial^2(\omega_m\omega_n)}{\partial\omega_i\partial\omega_j}\bb{C}_{mn}{\vert}_{\bn{\omega}=0}\t{. But}\\ \frac{\partial (\omega_m\omega_n)}{\partial\omega_i} &= \omega_m\frac{\partial\omega_n}{\partial\omega_i}+\omega_n\frac{\partial\omega_m}{\partial\omega_i}\\ &= \omega_m\delta_{ni}+\omega_n\delta_{mi}\t{,}\\ \t{where }\delta_{ij}\t{ is the Kronecker}&\t{ delta, defined to be }1\t{ if }i=j\t{ and }0\t{ otherwise. Hence,}\\ \frac{\partial^2(\omega_m\omega_n)}{\partial\omega_i\partial\omega_j} &= \delta_{mj}\delta_{ni}+\delta_{nj}\delta_{mi} \end{align}\)
  and \(\delta_{mj}\delta_{ni}=1\) if \((m,n)=(j,i)\) and \(=0\) otherwise, and \(\delta_{nj}\delta_{mi}=1\) if \((m,n)=(i,j)\) and \(=0\) otherwise. Thus,
\(\ds \vplup \begin{align} \frac{\partial^2 Q(\bn{\omega})}{\partial\omega_i\partial\omega_j}\vert{}_{\bn{\omega}=0} &= c_{ji}+c_{ij}\\ &= 2c_{ij}\ \ (\bn{C}^T=\bn{C})\end{align}\)
  Then, we have the expected result from \(\bb{1^*}\) and \(\bb{2^*}\) that
\(\ds \vplup \begin{align} E_{X_i,X_j}[X_iX_j] &=\frac{1}{j^2}\lrbra{-\frac{1}{2}\frac{\partial^2 Q(\bn{\omega})}{\partial\omega_i\partial\omega_j}\exp{-Q(\bn{\omega})/2} }\vert {}_{\bn{\omega}=\bn{0}}\\ &= \frac{1}{j^2}\lrpar{-\frac{1}{2}}(2c_{ij})=c_{ij}=\bb{C}_{ij}\end{align}\)
  Lastly, we extend the characteristic function approach to determining the PDF for a sum of IID rv's. Letting \(Y=\sum_{i=1}^{N}X_i\), the char. function of \(Y\) is
  defined by
\(\ds \vplup \phi_Y(\omega)=E_Y[\t{exp}(j\omega Y)]\)
  With \(g(X_1,X_2,...,X_N)=\t{exp}[j\omega\sum_{i=1}^{N}X_i]\), evaluating real and imaginary integrals separately, we have
\(\ds \vplup \begin{align} E_{X_1,X_2,...,X_N}[g(X_1,X_2,...,X_N)] &= \pnint\pnint\cdots\pnint g(x_1,x_2,...,x_N)p_{X_1,X_2,...,X_N}(x_1,x_2,...,x_N)dx_1dx_2...dx_N \Rightarrow\\ \phi_Y(\omega) &= E_{X_1,X_2,...,X_N}\lrbra{\exp{j\omega\sum_{i=1}^{N}X_i}}\\ &= E_{X_1,X_2,...,X_N}\lrbra{\prod_{i=1}^{N}\t{exp}(j\omega X_i) }.\t{ Now, since } X_i\t{'s are IID, we have that}\\ \phi_Y(\omega) &= \prod_{i=1}^{N}E_{X_i}[\t{exp}(j\omega X_i)]\t{ (independence)}\\ &= \prod_{i=1}^{N}\phi_{X_i}(\omega) \\ &= [\phi_X(\omega)]^N\t{ (identically distributed)} \end{align}\)
  where \(\vplup\phi_X(\omega)\) is the common char. function of the rv's. To finally obtain the PDF of the sum rv, we apply the inverse Fourier transform:
\[\bxred{p_Y(y)=\frac{1}{2\pi}\pnint [\phi_X(\omega)]^N\t{exp}(-j\omega y)d\omega}\]
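As a concrete instance of the boxed inversion (an assumed example, not from the original): for \(X\sim\t{Uniform}(-1/2,1/2)\), \(\phi_X(\omega)=\sin(\omega/2)/(\omega/2)\), and the sum of \(N=2\) such rv's is triangular on \((-1,1)\), so \(p_Y(0)=1\) and \(p_Y(1/2)=1/2\) - recovered below by direct quadrature on a truncated grid:

```python
# Invert [phi_X(w)]^N numerically for X ~ Uniform(-1/2, 1/2), N = 2
import numpy as np

def phi_X(w):
    return np.sinc(w / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x) = sin(w/2)/(w/2)

w = np.linspace(-200, 200, 400_001)   # truncated integration grid
for y0 in (0.0, 0.5):
    p = np.trapz(phi_X(w)**2 * np.exp(-1j * w * y0), w).real / (2 * np.pi)
    print(y0, p)                      # ~1.0 and ~0.5 (triangular PDF)
```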
\(\bb{Pr9}\ \)[Series/parallel system failure rates]

\(\s{\vplup^{\vplup}}\)Let RVs \(\s{}X\) & \(\s{}Y\) equal the times of failure of two systems \(\s{}S_1\) & \(\s{}S_2\), respectively, and \(\s{}Z\) the time of failure of a joint system \(\s{}S\) comprised of \(\s{}S_1\) & \(\s{}S_2\).
\(\hspace{1px}\)Let \(\s{}A=\) series-connected \(\s{}S_1\) & \(\s{}S_2\), \(\s{}B=\) parallel-connected \(\s{}S_1\) & \(\s{}S_2\) – both as shown below. Assume \(\s{}X\) & \(\s{}Y\) are independent.
   \(\vplup\) Determine the PDF and CDF of \(\s{}Z\) for \(\s{}S\) in \(\s{}A\) & \(\s{}B\).

[Sol]
The distribution \(\s{}F_X(t)\) is the probability that \(\s{}S_1\) will fail prior to time \(\s{}t\) (assuming operation starts at \(\s{}t=0\)); same with \(\s{}F_Y(t)\) for \(\s{}S_2\). The joint distribution \(\s{}F_{X,Y}(t_1,t_2)\) equals the probability that \(\s{}S_1\) fails prior to \(\s{}t_1\) and \(\s{}S_2\) fails prior to \(\s{}t_2\).
Series:
Two systems are said to be connected in series if the combined system \(\s{}S\) fails when at least one of them fails. It then follows that \(\s{}Z\) is the smaller of the two numbers \(\s{}X\) & \(\s{}Y\); hence,
\(\s{}\ds Z^{\t{SER}}=\min{X}{Y}\)
Parallel:
\(\s{}S\) fails only when both \(\s{}S_1\) & \(\s{}S_2\) fail; thus,
\(\s{}Z^{\t{PAR}}=\max{X}{Y}\)
PDF & CDF (series)
For a given \(\s{}z\), the region \(\s{}D_z\) in the \(\s{}xy\)-plane is such that \(\s{}\min{x}{y} \leq z\) – or, \(\s{}x\leq z\) or \(\s{}y\leq z\). Hence, to find \(\s{}F_Z(z)\), it suffices to determine the mass in \(\s{}D_z;\) \(\s{}F_Z(z)\) is thus (see fig)
\(\s{}\ds F_Z(z) = F_X(z)+F_Y(z)-F_{X,Y}(z,z);\)
\(\s{}[\{X,Y\} = \t{indep.}] \rightarrow \bxred{F_Z^{\t{SER}}(z)=F_X(z)+F_Y(z)-F_X(z)F_Y(z)} \hspace{173px}\)
\(\s{}\ds \vplup \Rightarrow f_Z(z) = f_X(z) + f_Y(z) - f_X(z)F_Y(z)-F_X(z)f_Y(z)\Rightarrow\)
\(\s{}\ds \bxred{f_Z^{\t{SER}}(z)=f_X(z)[1-F_Y(z)]+f_Y(z)[1-F_X(z)]}\)
PDF & CDF (parallel)
\(\s{}D_z\) in the \(\s{}xy\)-plane is such that \(\s{}\max{x}{y}\leq z\) – or, \(\s{}x\leq z\) and \(\s{}y\leq z\). The mass of this region equals \(\s{}F_{X,Y}(z,z)\) (see fig); hence,
\(\s{}\ds F_Z(z) = F_{X,Y}(z,z)\rightarrow\)
\(\s{}\bxred{F_Z^{\t{PAR}}(z)=F_X(z)F_Y(z)}\)
\(\s{}\ds \Rightarrow f_Z(z)=\pfrac{F_{X,Y}(z,z)}{x}+\pfrac{F_{X,Y}(z,z)}{y}=\intlim{-\infty}{z}f_{X,Y}(z,y)dy + \intlim{-\infty}{z}f_{X,Y}(x,z)dx\Rightarrow\)
\(\s{}\bxred{f_Z^{\t{PAR}}(z)=f_X(z)F_Y(z)+f_Y(z)F_X(z)}\)
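With independent exponential failure times (an assumed example), the boxed CDFs are easy to verify by simulation: for \(X\sim\t{Exp}(a)\), \(Y\sim\t{Exp}(b)\), the series system gives \(F_Z(z)=1-e^{-(a+b)z}\) and the parallel one \(F_Z(z)=(1-e^{-az})(1-e^{-bz})\):

```python
# Series (min) and parallel (max) failure-time CDFs for exponential X, Y
import numpy as np

rng = np.random.default_rng(5)
a, b, z = 1.0, 2.0, 0.7
x = rng.exponential(1 / a, 1_000_000)             # X ~ Exp(a)
y = rng.exponential(1 / b, 1_000_000)             # Y ~ Exp(b)

print((np.minimum(x, y) <= z).mean(), 1 - np.exp(-(a + b) * z))
print((np.maximum(x, y) <= z).mean(), (1 - np.exp(-a*z)) * (1 - np.exp(-b*z)))
```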

\(\bb{Pr10}\ \)[White Noise Power Spectral Density]
Derive the power spectral density of the white noise random process.

[Sol] Since \(X[n]\) is white noise, it has a zero mean and ACS \(r_X[k]=\sigma^2\delta[k]\). Then,
\(\ds\begin{align}P_X(f) &= \ilim{M}\frac{1}{2M+1}E\lrbra{\sum_{n=-M}^{M}X[n]\exp{j2\pi fn}\sum_{m=-M}^{M}X[m]\exp{-j2\pi fm}} \\ &= \ilim{M}\frac{1}{2M+1}\sum_{n=-M}^{M}\sum_{m=-M}^{M}\underbrace{E[X[n]X[m]]}_{r_X[m-n]}\t{exp}[-j2\pi f(m-n)]\quad \bb{1^*} \\ &= \ilim{M}\frac{1}{2M+1}\sum_{n=-M}^{M}\sum_{m=-M}^{M}\sigma^2 \delta[m-n]\t{exp}[-j2\pi f(m-n)] \\ &= \ilim{M}\frac{1}{2M+1}\sum_{n=-M}^{M}\sigma^2 \\ &= \ilim{M}\sigma^2 =\sigma^2 \end{align}\)
Hence, the PSD for white noise is
\(\ds \bxred{P_X(f)=\sigma^2 \quad -1/2\leq f \leq 1/2}\)
- illustrating that white noise contains equal contributions of average power at all frequencies.

[Sol 2] A more straightforward approach utilizes knowledge of the ACS. From \(\bb{1^*}\) we see
\(\ds P_X(f)=\ilim{M}\frac{1}{2M+1}\sum_{n=-M}^{M}\sum_{m=-M}^{M}r_X[m-n]\t{exp}[-j2\pi f(m-n)]\quad \bb{2^*}\)
This can be simplified using
\(\ds \sum_{n=-M}^{M}\sum_{m=-M}^{M}g[m-n]=\sum_{k=-2M}^{2M}(2M+1-|k|)g[k]\)
which results from considering \(g[m-n]\) as an element of the \((2M+1)\times(2M+1)\) matrix \(\bn{G}\) with elements \(\bb{G}_{mn}=g[m-n]\) for \(m=-M,...,M\) and \(n=-M,...,M\) and then summing all the elements. Using the relationship in \(\bb{2^*}\) produces
\(\ds \begin{align}P_X(f) &= \ilim{M}\frac{1}{2M+1}\sum_{k=-2M}^{2M}(2M+1-|k|)r_X[k]\exp{-j2\pi fk} \\ &= \ilim{M}\sum_{k=-2M}^{2M}\lrpar{1-\frac{|k|}{2M+1}}r_X[k]\exp{-j2\pi fk} \end{align}\)
Assuming that \(\sum_{k=-\infty}^{\infty}|r_X[k]|<\infty\), the limit can be shown to produce the final result
\(\ds P_X(f)=\sum_{k=-\infty}^{\infty}r_X[k]\exp{-j2\pi fk}\)
which states that the PSD is the discrete-time Fourier transform of the ACS. Since \(r_X[k]=\sigma^2 \delta [k]\),
\(\ds P_X(f) = \sum_{k=-\infty}^{\infty}\sigma^2 \delta [k]\exp{-j2\pi fk} \Rightarrow\)
\(\ds \bxred{P_X(f)=\sigma^2 \quad -1/2\leq f \leq 1/2}\)
- in agreement with the previous solution. \(P_X(f)\) is shown in the figure below; as can be seen, the total average power in \(X[n]\), which is \(r_X[0]=\sigma^2\), is given by the area under the PSD curve.
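A quick numerical illustration (a sketch; record length and trial count are arbitrary): averaged periodograms of simulated white Gaussian noise flatten toward the constant level \(\sigma^2\):

```python
# Averaged periodogram of white noise -> flat PSD at sigma^2
import numpy as np

rng = np.random.default_rng(6)
sigma2, n, trials = 2.0, 256, 2000
x = np.sqrt(sigma2) * rng.standard_normal((trials, n))

per = np.abs(np.fft.fft(x, axis=1))**2 / n        # periodogram of each trial
avg = per.mean(axis=0)                            # average over realizations
print(avg.mean(), avg.std())                      # level ~2.0, small ripple
```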


\(\bb{Pr11}\ \)[Linear prediction of MA random process, one-step]
Find the optimal predictor \(\hat{X}[n_0+1]\) and the minimum mean-square error \(\ts{mse}{min}=\sigma_U^2\), given the zero-mean WSS rp
\(\ds X[n]=U[n]-bU[n-1],\ \ \abs{b}<1\)

[Sol] The solution shall be executed as follows:

[1]: Write the \(z\)-transform of the ACS as
\(\ds\mathcal{P}_X(z)=\frac{\sigma_U^2}{\mathcal{A}(z)\mathcal{A}(z^{-1})},\)
\(\ds\t{where }\mathcal{P}_X(z)=\iisum{k}r_X[k]z^{-k},\quad \mathcal{A}(z)=1-\sum_{k=1}^{\infty}a[k]z^{-k}\)
assuming \(\mathcal{A}(z)\) has all its zeros inside the \(z\)-plane unit circle (the sequence is stable and causal)

[2]: For the impulse response, the solution of
\(\ds r_X[l+1]=\zisum{k}h[k]r_X[l-k],\quad l=0,1,...\)
\(\ds\t{is }h_{\t{opt}}[k]=a[k+1],\quad k=0,1,...\)

[3]: The optimal predictor is then
\(\ds\hat{X}[n_0+1]=\zisum{k}a[k+1]X[n_0-k],\quad \t{with }\ts{mse}{min}=\sigma_U^2\)

First determine the PSD. Since the system function is \(\mathcal{H}(z)=1-bz^{-1}\), the frequency response follows as \(H(f)=1-b\ \t{exp}(-j2\pi f)\). Then,
\(P_X(f)=H(f)\cdot H^*(f)\sigma_U^2=(1-b\t{exp}(-j2\pi f))(1-b\t{exp}(j2\pi f))\sigma_U^2\)
Replacing \(\t{exp}(j2\pi f)\) by \(z\), we have
\(\ds\mathcal{P}_X(z)=(1-bz^{-1})(1-bz)\sigma_U^2\Rightarrow\)
\(\ds\boxed{\mathcal{A}(z)=\frac{1}{1-bz^{-1}}}\)
To convert to \(1-\sum_{k=1}^{\infty}a[k]z^{-k}\), take the inverse \(z\)-transform, assuming a stable and causal sequence:
\(\ds\mathcal{Z}^{-1}\{\mathcal{A}(z)\}=\left\{\nmtx{b^k,\ \ k\geq 0}{0,\ \ k<0}\right.\)
and so \(a[k]=-b^k\) for \(k\geq1\). The optimal predictor is thus
\(\ds\begin{align}\hat{X}[n_0+1] &= \zisum{k}a[k+1]X[n_0-k] \\ &= \zisum{k}(-b^{k+1})X[n_0-k] \Rightarrow \end{align}\)
\(\ds\bxred{\hat{X}[n_0+1]= -bX[n_0]-b^2X[n_0-1]-b^3X[n_0-2]-...,\ \ \ \ts{mse}{min}=\sigma_U^2}\)
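A simulation sketch of the predictor (assumed \(b=0.8\), \(\sigma_U^2=1\); the infinite sum is truncated at \(K=50\) terms, negligible since \(|b|<1\)):

```python
# One-step prediction of the MA(1) process X[n] = U[n] - b U[n-1]
import numpy as np

rng = np.random.default_rng(7)
b, n, K = 0.8, 200_000, 50
u = rng.standard_normal(n)                        # U[n], sigma_U^2 = 1
x = u.copy()
x[1:] -= b * u[:-1]                               # the MA(1) process

coeffs = -b ** np.arange(1, K + 1)                # a[k+1] = -b^(k+1)
xhat = np.convolve(x, coeffs)                     # xhat[m] = sum_k coeffs[k] x[m-k]
err = x[K:] - xhat[K - 1:n - 1]                   # X[n0+1] - Xhat[n0+1]
print(np.mean(err**2))                            # ~1.0 = sigma_U^2 = mse_min
```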





Dragon Notes,   Est. 2018

By OverLordGoldDragon