Dragon Notes


\( \newcommand{bvec}[1]{\overrightarrow{\boldsymbol{#1}}} \newcommand{bnvec}[1]{\overrightarrow{\boldsymbol{\mathrm{#1}}}} \newcommand{uvec}[1]{\widehat{\boldsymbol{#1}}} \newcommand{vec}[1]{\overrightarrow{#1}} \newcommand{\parallelsum}{\mathbin{\|}} \) \( \newcommand{s}[1]{\small{#1}} \newcommand{t}[1]{\text{#1}} \newcommand{tb}[1]{\textbf{#1}} \newcommand{ns}[1]{\normalsize{#1}} \newcommand{ss}[1]{\scriptsize{#1}} \newcommand{vpl}[]{\vphantom{\large{\int^{\int}}}} \newcommand{vplup}[]{\vphantom{A^{A^{A^A}}}} \newcommand{vplLup}[]{\vphantom{A^{A^{A^{A{^A{^A}}}}}}} \newcommand{vpLup}[]{\vphantom{A^{A^{A^{A^{A^{A^{A^A}}}}}}}} \newcommand{up}[]{\vplup} \newcommand{Up}[]{\vplLup} \newcommand{Uup}[]{\vpLup} \newcommand{vpL}[]{\vphantom{\Large{\int^{\int}}}} \newcommand{lrg}[1]{\class{lrg}{#1}} \newcommand{sml}[1]{\class{sml}{#1}} \newcommand{qq}[2]{{#1}_{\t{#2}}} \newcommand{ts}[2]{\t{#1}_{\t{#2}}} \) \( \newcommand{ds}[]{\displaystyle} \newcommand{dsup}[]{\displaystyle\vplup} \newcommand{u}[1]{\underline{#1}} \newcommand{tu}[1]{\underline{\text{#1}}} \newcommand{tbu}[1]{\underline{\bf{\text{#1}}}} \newcommand{bxred}[1]{\class{bxred}{#1}} \newcommand{Bxred}[1]{\class{bxred2}{#1}} \newcommand{lrpar}[1]{\left({#1}\right)} \newcommand{lrbra}[1]{\left[{#1}\right]} \newcommand{lrabs}[1]{\left|{#1}\right|} \newcommand{bnlr}[2]{\bn{#1}\left(\bn{#2}\right)} \newcommand{nblr}[2]{\bn{#1}(\bn{#2})} \newcommand{real}[1]{\Ree\{{#1}\}} \newcommand{Real}[1]{\Ree\left\{{#1}\right\}} \) \( \newcommand{bn}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{bns}[2]{\bn{#1}_{\t{#2}}} \newcommand{b}[1]{\boldsymbol{#1}} \newcommand{bb}[1]{[\bn{#1}]} \) \( \newcommand{abs}[1]{\left|{#1}\right|} \newcommand{ra}[]{\rightarrow} \newcommand{Ra}[]{\Rightarrow} \newcommand{Lra}[]{\Leftrightarrow} \newcommand{rai}[]{\rightarrow\infty} \newcommand{ub}[2]{\underbrace{{#1}}_{#2}} \newcommand{ob}[2]{\overbrace{{#1}}^{#2}} \newcommand{lfrac}[2]{\large{\frac{#1}{#2}}\normalsize{}} 
\newcommand{sfrac}[2]{\small{\frac{#1}{#2}}\normalsize{}} \newcommand{Cos}[1]{\cos{\left({#1}\right)}} \newcommand{Sin}[1]{\sin{\left({#1}\right)}} \newcommand{Frac}[2]{\left({\frac{#1}{#2}}\right)} \newcommand{LFrac}[2]{\large{{\left({\frac{#1}{#2}}\right)}}\normalsize{}} \newcommand{Sinf}[2]{\sin{\left(\frac{#1}{#2}\right)}} \newcommand{Cosf}[2]{\cos{\left(\frac{#1}{#2}\right)}} \newcommand{atan}[1]{\tan^{-1}({#1})} \newcommand{Atan}[1]{\tan^{-1}\left({#1}\right)} \newcommand{intlim}[2]{\int\limits_{#1}^{#2}} \newcommand{lmt}[2]{\lim_{{#1}\rightarrow{#2}}} \newcommand{ilim}[1]{\lim_{{#1}\rightarrow\infty}} \newcommand{zlim}[1]{\lim_{{#1}\rightarrow 0}} \newcommand{Pr}[]{\t{Pr}} \newcommand{prop}[]{\propto} \newcommand{ln}[1]{\t{ln}({#1})} \newcommand{Ln}[1]{\t{ln}\left({#1}\right)} \newcommand{min}[2]{\t{min}({#1},{#2})} \newcommand{Min}[2]{\t{min}\left({#1},{#2}\right)} \newcommand{max}[2]{\t{max}({#1},{#2})} \newcommand{Max}[2]{\t{max}\left({#1},{#2}\right)} \newcommand{pfrac}[2]{\frac{\partial{#1}}{\partial{#2}}} \newcommand{pd}[]{\partial} \newcommand{zisum}[1]{\sum_{{#1}=0}^{\infty}} \newcommand{iisum}[1]{\sum_{{#1}=-\infty}^{\infty}} \newcommand{var}[1]{\t{var}({#1})} \newcommand{exp}[1]{\t{exp}\left({#1}\right)} \newcommand{mtx}[2]{\left[\begin{matrix}{#1}\\{#2}\end{matrix}\right]} \newcommand{nmtx}[2]{\begin{matrix}{#1}\\{#2}\end{matrix}} \newcommand{nmttx}[3]{\begin{matrix}\begin{align} {#1}& \\ {#2}& \\ {#3}& \\ \end{align}\end{matrix}} \newcommand{amttx}[3]{\begin{matrix} {#1} \\ {#2} \\ {#3} \\ \end{matrix}} \newcommand{nmtttx}[4]{\begin{matrix}{#1}\\{#2}\\{#3}\\{#4}\end{matrix}} \newcommand{mtxx}[4]{\left[\begin{matrix}\begin{align}&{#1}&\hspace{-20px}{#2}\\&{#3}&\hspace{-20px}{#4}\end{align}\end{matrix}\right]} \newcommand{mtxxx}[9]{\begin{matrix}\begin{align} &{#1}&\hspace{-20px}{#2}&&\hspace{-20px}{#3}\\ &{#4}&\hspace{-20px}{#5}&&\hspace{-20px}{#6}\\ &{#7}&\hspace{-20px}{#8}&&\hspace{-20px}{#9} \end{align}\end{matrix}} \newcommand{amtxxx}[9]{ 
\amttx{#1}{#4}{#7}\hspace{10px} \amttx{#2}{#5}{#8}\hspace{10px} \amttx{#3}{#6}{#9}} \) \( \newcommand{ph}[1]{\phantom{#1}} \newcommand{vph}[1]{\vphantom{#1}} \newcommand{mtxxxx}[8]{\begin{matrix}\begin{align} & {#1}&\hspace{-17px}{#2} &&\hspace{-20px}{#3} &&\hspace{-20px}{#4} \\ & {#5}&\hspace{-17px}{#6} &&\hspace{-20px}{#7} &&\hspace{-20px}{#8} \\ \mtxxxxCont} \newcommand{\mtxxxxCont}[8]{ & {#1}&\hspace{-17px}{#2} &&\hspace{-20px}{#3} &&\hspace{-20px}{#4}\\ & {#5}&\hspace{-17px}{#6} &&\hspace{-20px}{#7} &&\hspace{-20px}{#8} \end{align}\end{matrix}} \newcommand{mtXxxx}[4]{\begin{matrix}{#1}\\{#2}\\{#3}\\{#4}\end{matrix}} \newcommand{cov}[1]{\t{cov}({#1})} \newcommand{Cov}[1]{\t{cov}\left({#1}\right)} \newcommand{var}[1]{\t{var}({#1})} \newcommand{Var}[1]{\t{var}\left({#1}\right)} \newcommand{pnint}[]{\int_{-\infty}^{\infty}} \newcommand{floor}[1]{\left\lfloor {#1} \right\rfloor} \) \( \newcommand{adeg}[1]{\angle{({#1}^{\t{o}})}} \newcommand{Ree}[]{\mathcal{Re}} \newcommand{Im}[]{\mathcal{Im}} \newcommand{deg}[1]{{#1}^{\t{o}}} \newcommand{adegg}[1]{\angle{{#1}^{\t{o}}}} \newcommand{ang}[1]{\angle{\left({#1}\right)}} \newcommand{bkt}[1]{\langle{#1}\rangle} \) \( \newcommand{\hs}[1]{\hspace{#1}} \)

  UNDER CONSTRUCTION

[Math] Derivations


\(\bb{1}\) Stuff

\(\bb{2}\) Let \(x^* \equiv \t{fixed point},\ \ \eta (t)=x(t)-x^* \equiv \t{small perturbation away from } x^*\). Differentiating,
\(\ds \begin{align} \dot{\eta} &= \frac{d}{dt}(x-x^*)=\dot{x}\quad (x^* = \t{const.}) \\ \Rightarrow \dot{\eta} &= \dot{x}=f(x)=f(x^* + \eta) \end{align} \)
  Applying Taylor expansion,
\(f(x^* + \eta )=f(x^*) + \eta f'(x^*)+O({\eta}^2),\)
\(O({\eta}^2)= \t{quadratically small terms in }\eta \)
  \(f(x^*)=0\) since \(x^*\) is a fixed point, hence
\(\ds \dot{\eta} = \eta f'(x^*) + O({\eta}^2) \)
  If \(f'(x^*)\neq 0\), the \(O({\eta}^2)\) terms can be neglected, and we can write
\(\ds \boxed{\dot{\eta}\approx \eta f'(x^*)}\)
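  The linearization can be sanity-checked numerically. The sketch below assumes an example system \(f(x)=x-x^3\) (not from the notes), which has a fixed point at \(x^*=1\) with \(f'(x^*)=-2\); the error of the linear approximation should shrink as \(O(\eta^2)\):

```python
# Check eta-dot = f(x* + eta) against the linearization eta * f'(x*).
# Example system (an assumption, not from the notes): f(x) = x - x^3,
# fixed point x* = 1, f'(x*) = 1 - 3*x*^2 = -2.

def f(x):
    return x - x**3

x_star = 1.0                      # fixed point: f(1) = 0
fprime = 1.0 - 3.0 * x_star**2    # f'(x*) = -2

for eta in (1e-1, 1e-2, 1e-3):
    exact = f(x_star + eta)       # true eta-dot
    linear = eta * fprime         # linearized eta-dot
    # the discrepancy is the O(eta^2) remainder (here exactly -3*eta^2 - eta^3)
    print(eta, exact, linear, abs(exact - linear))
```

  Shrinking \(\eta\) tenfold shrinks the discrepancy roughly a hundredfold, consistent with the neglected quadratic terms.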

\(\bb{3}\) If \(X\t{~bin}(M,p)\), the expected value is
\(\ds \begin{align} E[X] &= \sum_{k=0}^{M}kp_X[k] \\ &= \sum_{k=0}^{M}k{M \choose k}p^k(1-p)^{M-k} \\ &= \sum_{k=0}^{M}k\frac{M!}{(M-k)!k!}p^k(1-p)^{M-k} \\ &= Mp\sum_{k=1}^{M}\frac{(M-1)!}{(M-k)!(k-1)!}p^{k-1}(1-p)^{M-1-(k-1)}; \\ \t{let }M'=M-1, &\t{ and } k'=k-1. \t{ Then,}\\ E[X] &= Mp\sum_{k'=0}^{M'}\frac{M'!}{(M'-k')!k'!}p^{k'}(1-p)^{M'-k'} \\ &= Mp\sum_{k'=0}^{M'}{M' \choose k'}p^{k'}(1-p)^{M'-k'} = \boxed{Mp = E[X]} \end{align}\)
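  The closed form \(E[X]=Mp\) can be checked by evaluating the defining sum directly; a minimal sketch, with \(M=10,\ p=0.3\) as assumed example values:

```python
from math import comb

def binom_mean(M, p):
    # E[X] evaluated directly from the pmf sum (illustrative helper)
    return sum(k * comb(M, k) * p**k * (1 - p)**(M - k) for k in range(M + 1))

print(binom_mean(10, 0.3))  # agrees with M*p = 3.0
```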
\(\bb{4}\) If \(X\t{~Ber}(p)\), the expected value is
\(\ds \begin{align} E[X] &= \sum_{k=0}^{1}kp_X[k] \\ &= 0\cdot (1-p) + 1\cdot p \\ &= \boxed{p = E[X]} \end{align}\)

\(\bb{5}\) If \(X\t{~geom}(p)\), the expected value is
\(\ds \begin{align} E[X] &= \sum_{k=1}^{\infty}k(1-p)^{k-1}p;\ \t{Let } q=1-p.\t{ Then,} \\ E[X] &= p\sum_{k=1}^{\infty}\frac{d}{dq}q^k \\ &= p\frac{d}{dq}\sum_{k=1}^{\infty}q^k \\ &= p\frac{d}{dq}\Frac{q}{1-q},\ 0 < q < 1 \\ &= p\frac{(1-q)-q(-1)}{(1-q)^2} \\ &= p\frac{1}{(1-q)^2} = \boxed{1/p = E[X]} \end{align}\)
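  A truncated evaluation of the defining sum confirms \(E[X]=1/p\); the cutoff below is an assumed example value (the tail decays geometrically, so the truncation error is negligible):

```python
def geom_mean(p, terms=10_000):
    # E[X] from the pmf sum, truncated at `terms` (tail is ~ (1-p)^terms)
    return sum(k * (1 - p)**(k - 1) * p for k in range(1, terms + 1))

print(geom_mean(0.25))  # agrees with 1/p = 4.0
```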
\(\bb{6}\) If \(X\t{~Pois}(\lambda)\), the expected value is
\(\ds \begin{align} E[X] &= e^{-\lambda}\sum_{k=1}^{\infty}k\frac{\lambda^k}{k!} \\ &= \lambda e^{-\lambda}\sum_{k=1}^{\infty}k\frac{\lambda^{k-1}}{k!} \\ &= \lambda e^{-\lambda}\sum_{k=1}^{\infty}\frac{1}{k!}\frac{d\lambda^k}{d\lambda} \\ &= \lambda e^{-\lambda}\frac{d}{d\lambda}\sum_{k=1}^{\infty}\frac{\lambda^k}{k!} \\ &= \lambda e^{-\lambda}\frac{d}{d\lambda}(e^{\lambda}-1) \\ &= \lambda e^{-\lambda}e^{\lambda} = \boxed{\lambda =E[X]}\vplup \end{align}\)
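  The same truncated-sum check works here; \(\lambda=2.5\) and the cutoff are assumed example values (the factorial in the denominator makes the tail vanish rapidly):

```python
from math import exp, factorial

def pois_mean(lam, terms=100):
    # E[X] from the pmf sum, truncated at `terms`
    return sum(k * lam**k * exp(-lam) / factorial(k)
               for k in range(1, terms + 1))

print(pois_mean(2.5))  # agrees with lambda = 2.5
```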

\(\bb{7}\) Assume \(g\) is a one-to-one function. If \(Y=g(X)\), where \(g\) is monotonically increasing, then there is a single solution for \(x\) in \(y=g(x)\).
  Thus,
\(\ds \begin{align} F_Y(y) &= P[g(X)\leq y] \\ &= P[X\leq g^{-1}(y)] \\ &= F_X(g^{-1}(y)) \end{align} \)
  But \(p_Y(y)=dF_Y(y)/dy\) so that
\(\ds \begin{align} p_Y(y) &= \frac{d}{dy}F_X(g^{-1}(y)) \\ &= \left.\frac{dF_X(x)}{dx}\right|_{x=g^{-1}(y)}\frac{dg^{-1}(y)}{dy} \\ &= p_X(g^{-1}(y))\frac{dg^{-1}(y)}{dy} \end{align}\)
  If \(g\) is monotonically decreasing, then
\(\ds \begin{align} F_Y(y) &= P[g(X)\leq y] \\ &= P[X\geq g^{-1}(y)] \\ &= 1 - P[X\leq g^{-1}(y)] \\ &= 1 - F_X(g^{-1}(y)), \t{ and} \\ p_Y(y) &= \frac{dF_Y(y)}{dy}=-\frac{d}{dy}F_X(g^{-1}(y)) \\ &= -p_X(g^{-1}(y))\frac{dg^{-1}(y)}{dy} \end{align} \)
  Since \(g\) is monotonically decreasing, so is \(g^{-1}\); hence \(dg^{-1}(y)/dy\) is negative. Thus, both cases are subsumed by the formula
\(\ds \boxed{p_Y(y)=p_X(g^{-1}(y))\left|\frac{dg^{-1}}{dy}\right|}\)
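  The boxed formula can be sanity-checked numerically. The sketch below assumes \(X\) standard normal and \(g(x)=e^x\) (example choices, not from the notes), so \(g^{-1}(y)=\ln y\) and \(dg^{-1}/dy = 1/y\); a valid density must integrate to 1:

```python
from math import exp, log, pi, sqrt

def p_X(x):
    # standard normal density (assumed example distribution)
    return exp(-x * x / 2) / sqrt(2 * pi)

def p_Y(y):
    # p_Y(y) = p_X(g^{-1}(y)) * |d g^{-1}/dy|  with  g(x) = e^x
    return p_X(log(y)) * abs(1.0 / y)

# trapezoid rule on a log-spaced grid over y in (e^-10, e^10);
# the result should be ~1 (the clipped tails are negligible)
lo, hi, n = -10.0, 10.0, 20000
ys = [exp(lo + (hi - lo) * i / n) for i in range(n + 1)]
total = sum(0.5 * (p_Y(ys[i]) + p_Y(ys[i + 1])) * (ys[i + 1] - ys[i])
            for i in range(n))
print(total)  # close to 1
```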

\(\bb{8}\) Given the state and output equations
\(\ds\begin{matrix}\bn{\dot{x}}=\bn{Ax}+\bn{Bu},\\ \bn{y}=\bn{Cx}+\bn{Du},\end{matrix}\)

  take the Laplace transform assuming zero initial conditions:
\(\ds\begin{align}s\bn{X}(s)&=\bn{AX}(s)+\bn{BU}(s)\\ \bn{Y}(s)&=\bn{CX}(s)+\bn{DU}(s)\end{align}\)\(\ \ \bb{1^*}\)
  Solving for \(\bn{X}(s)\),
\(\ds \boxed{\bn{X}(s)=(s\bn{I}-\bn{A})^{-1}\bn{BU}(s)}\)

  Substituting into \(\bb{1^*}\) yields
\(\ds \bn{Y}(s)=\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{BU}(s)+\bn{DU}(s)=\boxed{[\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{B}+\bn{D}]\bn{U}(s)}\)
  Assuming \(\bn{U}(s)=U(s)\) and \(\bn{Y}(s)=Y(s)\) are scalar functions, the transfer function follows as the ratio of output to input:
\(\ds \boxed{\frac{Y(s)}{U(s)}=T(s)=\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{B}+\bn{D}}\)
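  As an illustration, the boxed formula can be evaluated directly for a two-state SISO system. The matrices below are assumed example values (not from the notes), chosen so that \(T(s)=1/(s^2+3s+2)\):

```python
def transfer_2x2(A, B, C, D, s):
    # T(s) = C (sI - A)^{-1} B + D for a 2-state SISO system,
    # using the explicit 2x2 inverse (pure-Python sketch)
    m = [[s - A[0][0], -A[0][1]],
         [-A[1][0], s - A[1][1]]]          # sI - A
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[ m[1][1] / det, -m[0][1] / det],
           [-m[1][0] / det,  m[0][0] / det]]
    v = [inv[0][0] * B[0] + inv[0][1] * B[1],   # (sI - A)^{-1} B
         inv[1][0] * B[0] + inv[1][1] * B[1]]
    return C[0] * v[0] + C[1] * v[1] + D        # C v + D

# assumed example: controllable-canonical form of 1/(s^2 + 3s + 2)
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0
print(transfer_2x2(A, B, C, D, 1.0))  # 1/(1 + 3 + 2) = 1/6
```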





Dragon Notes,   Est. 2018

By OverLordGoldDragon