\(\bb{1}\) Stuff
\(\bb{2}\) Let \(x^* \equiv \t{fixed point},\ \ \eta (t)=x(t)-x^* \equiv \t{small perturbation away from } x^*\). Differentiating,
\(\ds \begin{align}
\dot{\eta} &= \frac{d}{dt}(x-x^*)=\dot{x}\quad (x^* = \t{const.}) \\
\Rightarrow \dot{\eta} &= \dot{x}=f(x)=f(x^* + \eta)
\end{align} \)
Applying Taylor expansion,
\(f(x^* + \eta )=f(x^*) + \eta f'(x^*)+O({\eta}^2),\)
\(O({\eta}^2)= \t{quadratically small terms in }\eta \)
\(f(x^*)=0\) since \(x^*\) is a fixed point, hence
\(\ds \dot{\eta} = \eta f'(x^*) + O({\eta}^2) \)
If \(f'(x^*)\neq 0\), the \(O({\eta}^2)\) terms can be neglected, and we can write
\(\ds \boxed{\dot{\eta}\approx \eta f'(x^*)}\)
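The linearization can be sanity-checked numerically. The sketch below uses the illustrative choice \(f(x)=x(1-x)\) with fixed point \(x^*=1\) (so \(f'(x^*)=-1\)); these are assumptions for the example, not from the derivation above. It Euler-integrates the full perturbation equation and its linearization side by side:

```python
# Numerical check of the linearization (illustrative f and fixed point).
import math

def f(x):
    return x * (1.0 - x)        # logistic vector field; fixed points 0 and 1

x_star = 1.0                    # stable fixed point
fp = -1.0                       # f'(x*) = 1 - 2*x_star = -1

# Euler-integrate both equations from the same small perturbation
dt, steps = 1e-3, 2000
eta_full, eta_lin = 0.01, 0.01
for _ in range(steps):
    eta_full += dt * f(x_star + eta_full)   # exact:      d(eta)/dt = f(x* + eta)
    eta_lin  += dt * fp * eta_lin           # linearized: d(eta)/dt = f'(x*) eta

# For small eta both should track 0.01 * exp(f'(x*) t)
t = dt * steps
print(eta_full, eta_lin, 0.01 * math.exp(fp * t))
```

The two trajectories agree to a few parts in a thousand, and the gap shrinks as the initial perturbation does, which is exactly the content of dropping the \(O(\eta^2)\) terms.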
\(\bb{3}\) If \(X\t{~bin}(M,p)\), the expected value is
\(\ds \begin{align}
E[X] &= \sum_{k=0}^{M}kp_X[k] \\
&= \sum_{k=0}^{M}k{M \choose k}p^k(1-p)^{M-k} \\
&= \sum_{k=0}^{M}k\frac{M!}{(M-k)!k!}p^k(1-p)^{M-k} \\
&= Mp\sum_{k=1}^{M}\frac{(M-1)!}{(M-k)!(k-1)!}p^{k-1}(1-p)^{M-1-(k-1)}; \\
\t{let }M'=M-1, &\t{ and } k'=k-1. \t{ Then,}\\
E[X] &= Mp\sum_{k'=0}^{M'}\frac{M'!}{(M'-k')!k'!}p^{k'}(1-p)^{M'-k'} \\
&= Mp\sum_{k'=0}^{M'}{M' \choose k'}p^{k'}(1-p)^{M'-k'} = \boxed{Mp = E[X]}
\end{align}\)
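The result \(E[X]=Mp\) is easy to verify by summing the pmf directly (the values \(M=12\), \(p=0.3\) below are illustrative):

```python
# Direct-summation check of E[X] = M*p for the binomial pmf.
from math import comb

M, p = 12, 0.3
mean = sum(k * comb(M, k) * p**k * (1 - p)**(M - k) for k in range(M + 1))
print(mean)   # -> M*p = 3.6 (up to floating-point roundoff)
```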
\(\bb{4}\) If \(X\t{~Ber}(p)\), the expected value is
\(\ds \begin{align}
E[X] &= \sum_{k=0}^{1}kp_X[k] \\
&= 0\cdot (1-p) + 1\cdot p \\
&= \boxed{p = E[X]}
\end{align}\)
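An empirical check: the sample mean of Bernoulli draws should converge to \(p\) (the value of \(p\), the sample size, and the seed below are illustrative):

```python
# Monte Carlo check that the Bernoulli mean is p.
import random

random.seed(0)                 # fixed seed for reproducibility
p, n = 0.3, 200_000
mean = sum(1 if random.random() < p else 0 for _ in range(n)) / n
print(mean)   # -> close to p = 0.3
```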
\(\bb{5}\) If \(X\t{~geom}(p)\), the expected value is
\(\ds \begin{align}
\hspace{50px} E[X] &= \sum_{k=1}^{\infty}k(1-p)^{k-1}p;\ \t{Let } q=1-p.\t{ Then,} \\
E[X] &= p\sum_{k=1}^{\infty}\frac{d}{dq}q^k \\
&= p\frac{d}{dq}\sum_{k=1}^{\infty}q^k \\
&= p\frac{d}{dq}\frac{q}{1-q},\ 0 < q < 1 \\
&= p\frac{(1-q)-q(-1)}{(1-q)^2}
\end{align}\)
\(\ds \hspace{66px}= p\frac{1}{(1-q)^2} = \boxed{1/ p=E[X]}\vplup \hspace{264px}\)
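Truncating the series numerically confirms \(E[X]=1/p\) (the value \(p=0.25\) and the cutoff are illustrative; the geometric tail decays fast enough that the truncation error is negligible):

```python
# Truncated-series check of E[X] = 1/p for the geometric pmf.
p = 0.25
mean = sum(k * (1 - p)**(k - 1) * p for k in range(1, 2000))
print(mean)   # -> close to 1/p = 4
```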
\(\bb{6}\) If \(X\t{~Pois}(\lambda)\), the expected value is
\(\ds \begin{align}
E[X] &= e^{-\lambda}\sum_{k=1}^{\infty}k\frac{\lambda^k}{k!} \\
&= \lambda e^{-\lambda}\sum_{k=1}^{\infty}k\frac{\lambda^{k-1}}{k!} \\
&= \lambda e^{-\lambda}\sum_{k=1}^{\infty}\frac{1}{k!}\frac{d\lambda^k}{d\lambda} \\
&= \lambda e^{-\lambda}\frac{d}{d\lambda}\sum_{k=1}^{\infty}\frac{\lambda^k}{k!} \\
&= \lambda e^{-\lambda}\frac{d}{d\lambda}e^{\lambda} \hspace{200px} \\
&= \lambda e^{-\lambda}e^{\lambda} = \boxed{\lambda =E[X]}\vplup
\end{align}\)
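The same truncated-series check works for the Poisson mean (the value \(\lambda=2.5\) and the cutoff are illustrative; the \(\lambda^k/k!\) tail vanishes rapidly):

```python
# Truncated-series check of E[X] = lambda for the Poisson pmf.
from math import exp, factorial

lam = 2.5
mean = exp(-lam) * sum(k * lam**k / factorial(k) for k in range(1, 60))
print(mean)   # -> close to lambda = 2.5
```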
\(\bb{7}\) Assume \(g\) is a one-to-one function. If \(Y=g(X)\), where \(g\) is monotonically increasing, then there is a single solution for \(x\) in \(y=g(x)\).
Thus,
\(\ds \begin{align}
F_Y(y) &= P[g(X)\leq y] \\
&= P[X\leq g^{-1}(y)] \\
&= F_X(g^{-1}(y))
\end{align} \)
But \(p_Y(y)=dF_Y(y)/dy\) so that
\(\ds \begin{align} p_Y(y)
&= \frac{d}{dy}F_X(g^{-1}(y)) \\
&= \left.\frac{dF_X(x)}{dx}\right|_{x=g^{-1}(y)}\frac{dg^{-1}(y)}{dy} \\
&= p_X(g^{-1}(y))\frac{dg^{-1}(y)}{dy}
\end{align}\)
If \(g\) is monotonically decreasing, then
\(\ds \begin{align} F_Y(y)
&= P[g(X)\leq y] \\
&= P[X\geq g^{-1}(y)] \\
&= 1 - P[X\leq g^{-1}(y)] \\
&= 1 - F_X(g^{-1}(y)), \t{ and} \\ p_Y(y)
&= \frac{dF_Y(y)}{dy}=-\frac{d}{dy}F_X(g^{-1}(y)) \\
&= -p_X(g^{-1}(y))\frac{dg^{-1}(y)}{dy}
\end{align} \)
Since \(g\) is monotonically decreasing, so is \(g^{-1}\); hence \(dg^{-1}(y)/dy\) is negative. Thus, both cases are subsumed by the formula
\(\ds \boxed{p_Y(y)=p_X(g^{-1}(y))\left|\frac{dg^{-1}}{dy}\right|}\)
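A numerical spot-check of the boxed formula: take \(X\sim N(0,1)\) and the monotone map \(g(x)=e^x\), so \(g^{-1}(y)=\ln y\) (these choices are illustrative, not from the derivation). The formula's \(p_Y(y)\) should match a finite difference of \(F_Y(y)=F_X(\ln y)\):

```python
# Spot-check of p_Y(y) = p_X(g^{-1}(y)) |dg^{-1}/dy| for g(x) = exp(x).
from math import erf, exp, log, pi, sqrt

def p_X(x):                        # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

def F_X(x):                        # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def p_Y(y):                        # boxed formula, with dg^{-1}/dy = 1/y
    return p_X(log(y)) * abs(1.0 / y)

# Central finite difference of F_Y(y) = F_X(ln y) at an arbitrary point
y, h = 1.7, 1e-6
fd = (F_X(log(y + h)) - F_X(log(y - h))) / (2 * h)
print(p_Y(y), fd)   # the two values agree closely
```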
\(\bb{8}\) Given the state and output equations
\(\ds\begin{matrix}\bn{\dot{x}}=\bn{Ax}+\bn{Bu},\\ \bn{y}=\bn{Cx}+\bn{Du},\end{matrix}\)
take the Laplace transform assuming zero initial conditions:
\(\ds\begin{align}s\bn{X}(s)&=\bn{AX}(s)+\bn{BU}(s)\\ \bn{Y}(s)&=\bn{CX}(s)+\bn{DU}(s)\end{align}\)\(\ \ \bb{1^*}\)
Solving for \(\bn{X}(s)\),
\(\ds \boxed{\bn{X}(s)=(s\bn{I}-\bn{A})^{-1}\bn{BU}(s)}\)
Substituting into \(\bb{1^*}\) yields
\(\ds \bn{Y}(s)=\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{BU}(s)+\bn{DU}(s)=\boxed{[\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{B}+\bn{D}]\bn{U}(s)=\bn{Y}(s)}\)
Assuming \(\bn{U}(s)=U(s)\) and \(\bn{Y}(s)=Y(s)\) are scalar functions, the transfer function is the ratio of output to input:
\(\ds \boxed{\frac{Y(s)}{U(s)}=T(s)=\bn{C}(s\bn{I}-\bn{A})^{-1}\bn{B}+\bn{D}}\)
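The formula can be checked on a concrete system. The double-integrator matrices below are an illustrative example (not from the text); its known transfer function is \(T(s)=1/s^2\), which the state-space formula should reproduce at any \(s\):

```python
# Evaluate T(s) = C (sI - A)^{-1} B + D for a double integrator.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def T(s):
    # scalar transfer function from the state-space formula
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B + D)[0, 0]

# Compare against the known transfer function 1/s^2
for s in (1.0, 2.0, 5.0):
    print(T(s), 1.0 / s**2)
```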