# Dragon Notes

# Randomness & Probability: Gaussian Distribution

## Multivariate Gaussian PDF

A Gaussian random vector, $$\bn{X}\sim\mathcal{N}(\bn{\mu},\bn{C})$$, is an $$N\times 1$$ rv $$[X_1\ X_2\ ...\ X_N]^T$$ with a joint PDF given by the multivariate Gaussian PDF
$$p_{\bn{X}}(\bn{x})=\lfrac{1}{(2\pi)^{N/2}\t{det}^{1/2}(\bn{C})}\t{exp}\lrbra{-\frac{1}{2}(\bn{x}-\bn{\mu})^T\bn{C}^{-1}(\bn{x}-\bn{\mu})},$$

$$\ds\bn{C}=\left[\mtxxxx{\var{X_1}}{\cov{X_1,X_2}}{...}{\cov{X_1,X_N}}{\cov{X_2,X_1}}{\var{X_2}}{...}{\cov{X_2,X_N}}{\vdots}{\vdots} {\ddots}{\vdots}{\cov{X_N,X_1}}{\cov{X_N,X_2}}{...}{\cov{X_N,X_N}}\right],$$ $$\ds\ \bn{\mu}=\lrbra{\nmtttx{\mu_1}{\mu_2}{\vdots}{\mu_N}}=\lrbra{\nmtttx{E_{X_1}[X_1]}{E_{X_2}[X_2]}{\vdots}{E_{X_N}[X_N]}}$$
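As a quick numerical sanity check (a sketch, not part of the notes; the name `gaussian_pdf` and the values are ad hoc), the PDF above can be evaluated directly with NumPy. For a diagonal $$\bn{C}$$ the components are independent, so the joint PDF must factor into a product of univariate Gaussian PDFs:

```python
import numpy as np

def gaussian_pdf(x, mu, C):
    """Evaluate the multivariate Gaussian PDF p_X(x) at an N-vector x."""
    N = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(C))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(C, diff)) / norm)

# Diagonal C -> independent components -> joint PDF is a product
# of two univariate Gaussian PDFs with variances 2.0 and 0.5.
mu = np.array([1.0, -2.0])
C = np.diag([2.0, 0.5])
x = np.array([0.5, -1.0])
p1 = np.exp(-(x[0] - mu[0])**2 / (2 * 2.0)) / np.sqrt(2 * np.pi * 2.0)
p2 = np.exp(-(x[1] - mu[1])**2 / (2 * 0.5)) / np.sqrt(2 * np.pi * 0.5)
print(gaussian_pdf(x, mu, C), p1 * p2)  # the two values agree
```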

: Only the first two moments, $$\bn{\mu}$$ and $$\bn{C}$$, are required to specify the entire PDF
: Uncorrelated$${}^*$$ $$\ra$$ independent (* - all rv's)
: A linear transformation of $$\bn{X}$$ produces another Gaussian vector. If $$\bn{Y}=\bn{GX}$$, then $$\bn{Y}\sim\mathcal{N}(\bn{G\mu}, \bn{GCG}^T)$$ -- ($$\bn{G} = M\times N$$ matrix, $$M\leq N$$)

## Gaussian Random Process
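The linear-transformation property ($$\bn{Y}=\bn{GX}$$ is Gaussian with mean $$\bn{G\mu}$$ and covariance $$\bn{GCG}^T$$) can be sketched with a Monte Carlo check (illustrative values, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 0.0, -1.0])
C = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])      # N x N covariance (positive definite)
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])     # M x N with M <= N

# Draw many realizations of X (one per row) and transform each: Y = G X
X = rng.multivariate_normal(mu, C, size=200_000)
Y = X @ G.T

# Sample statistics of Y approach G mu and G C G^T
print(Y.mean(axis=0), G @ mu)
print(np.cov(Y.T))
print(G @ C @ G.T)
```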

A random process (rp) is Gaussian if every finite set of samples, $$\bn{X}=[X[n_1]\ X[n_2]\ ...\ X[n_K]]^T$$, has a multivariate Gaussian PDF, for all $$\{n_1,n_2,...,n_K\}$$ and all $$K$$. Its properties follow:

: Uncorrelated $$\ra$$ independent

(a Gaussian rp with uncorrelated samples has independent samples)

: WSS $$\ra$$ SSS

(a WSS Gaussian rp is also stationary)

: $$\t{LSI}\{$$Gaussian rp$$\}$$ $$\ra$$ Gaussian rp

(any linear transformation of a Gaussian rp produces another Gaussian rp)

: $$X[n]\ra\lrbra{LSI}\ra Y[n]$$: if $$X[n]$$ is a WSS Gaussian rp with mean $$\mu_X$$ and ACS $$r_X[k]$$, then $$Y[n]$$ is a WSS Gaussian rp with $$\mu_Y=\mu_X H(0)$$ and $$P_Y(f)=\abs{H(f)}^2P_X(f)$$.

(LSI = linear, shift-invariant filter)

For $$N$$ successive output samples $$\bn{Y}=[Y[0]\ Y[1]\ ...\ Y[N-1]]^T$$,
$$\ds p_{\bn{Y}}(\bn{y})=\frac{1}{(2\pi)^{N/2}\t{det}^{1/2}(\bn{C}_Y)}\t{exp}\lrbra{-\sfrac{1}{2}(\bn{y}-\bn{\mu}_Y)^T\bn{C}_Y^{-1}(\bn{y}-\bn{\mu}_Y)},$$
$$\dsup\bn{\mu}_Y=[\mu_X H(0)\ \cdots\ \mu_X H(0)]^T,$$
$$\ds[\bn{C}_Y]_{mn}=r_Y[m-n]-(\mu_X H(0))^2 = \int_{-1/2}^{1/2}\abs{H(f)}^2 P_X(f)\t{exp}(j2\pi f(m-n))df - (\mu_X H(0))^2$$
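A minimal sketch of these relations (assumed example, not from the notes: a short FIR filter, which is LSI, driven by white unit-variance Gaussian noise). For white input $$P_X(f)=\sigma^2$$, the integral above gives $$[\bn{C}_Y]_{mm}=\sigma^2\sum_k h^2[k]$$, and $$H(0)=\sum_k h[k]$$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_X = 2.0
h = np.array([0.5, 0.3, 0.2])            # FIR taps; H(0) = h.sum()
x = mu_X + rng.standard_normal(500_000)  # white WSS Gaussian rp, variance 1
y = np.convolve(x, h, mode="valid")      # LSI filtering

# mu_Y = mu_X * H(0); for white unit-variance input the output
# variance (the diagonal of C_Y) equals sum(h^2)
print(y.mean(), mu_X * h.sum())
print(y.var(), (h**2).sum())
```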
: $$\t{(Higher-order)} = \sum\t{(Second-order)}$$;
(higher-order joint moments of a multivariate Gaussian PDF can be expressed in terms of first- and second-order moments)

- For rv $$\dsup\bn{X}=[X_1\ X_2\ X_3\ X_4]^T$$ with $$\bn{\mu}_X=\bn{0}$$,
$$\ds E[X_1X_2X_3X_4]=E[X_1X_2]E[X_3X_4]+ E[X_1X_3]E[X_2X_4]+E[X_1X_4]E[X_2X_3]$$
- For rp $$\dsup X[n]$$ with $$\mu_X = 0$$,
$$\ds\begin{align} E[X[n_1]X[n_2]X[n_3]X[n_4]] &= E[X[n_1]X[n_2]]\ E[X[n_3]X[n_4]] \\ &+ \hspace{2px} E[X[n_1]X[n_3]]\ E[X[n_2]X[n_4]] \\ &+ \hspace{2px} E[X[n_1]X[n_4]]\ E[X[n_2]X[n_3]] \end{align}$$
- and if furthermore $$X[n]$$ is WSS,
$$\ds\begin{align} E[X[n_1]X[n_2]X[n_3]X[n_4]] &= r_X[n_2-n_1]r_X[n_4-n_3] + r_X[n_3-n_1]r_X[n_4-n_2] \\ &+ \hspace{2px}r_X[n_4-n_1]r_X[n_3-n_2]\end{align}$$
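The fourth-moment factorization above (Isserlis' theorem for the zero-mean vector case) can be sketched with a Monte Carlo check; the covariance matrix below is an illustrative choice, not from the notes:

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.array([[1.0, 0.4, 0.2, 0.1],
              [0.4, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)

# E[X1 X2 X3 X4] vs. the sum of the three pairwise-covariance products
lhs = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(lhs, rhs)  # rhs = 0.23
```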