\documentclass[11pt]{article}
\usepackage[margin=1in]{geometry} % See geometry.pdf to learn the layout options. There are lots.
\geometry{letterpaper} % ... or a4paper or a5paper or ...
%\geometry{landscape} % Activate for for rotated page geometry
%\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{graphicx}
\usepackage{bbold}
\usepackage{scrextend}
\usepackage{mathdots}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{amsmath}
\DeclareGraphicsRule{.tif}{png}{.png}{`convert #1 `dirname #1`/`basename #1 .tif`.png}
\newcommand{\soln}[1]{\fbox{ \begin{minipage}{6in}{#1}\end{minipage}}}
\begin{document}
\section*{\vspace{3in}
\begin{center}
Applications of Differential Equations in Linear Algebra \\ \vspace{1in}
\textit{Russell Chamberlain, Dalton Cook}\\Course: Linear Algebra
\end{center}
}
\newpage
\section{Review of Class Topics}
\subsection{Homogeneous Equations of Matrices}
\begin{addmargin}[.25in]{.25in}
Solutions to the homogeneous equation $A\vec{x} = \vec{0}$ are pivotal to solving systems of differential equations. \\[.05in]
\soln{\textbf{Definition:} A system of linear equations is said to be homogeneous if it can be written in the form $A\vec{x} = \vec{0}$, where $A$ is an $m \times n$ matrix and $\vec{0}$ is the zero vector in $\mathbb{R}^{m}$. Such a system always has at least one solution, namely $\vec{x} = \vec{0}$. This zero solution is usually called the trivial solution. Any solution that is not the zero vector but still satisfies $A\vec{x} = \vec{0}$ is called a non-trivial solution.
}\\[.05in]
The non-trivial solutions to a homogeneous equation can be found by augmenting $A$ with a column of zeros and employing \textbf{Gaussian Elimination}. For this project we were interested in finding the non-trivial solutions to homogeneous equations. A homogeneous equation has non-trivial solutions exactly when the Gaussian-reduced, augmented matrix has free variables.\\[.05in]
\soln{\textbf{Example:}
\begin{align*}
\begin{pmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \end{pmatrix} & \; \; \begin{matrix} r_{2}'=-4r_{1}+r_{2} \\ r_{3}'=-7r_{1}+r_{3} \end{matrix} \hspace{-10pt} \rightarrow
\begin{pmatrix}
1 & 2 & 3 \\
0 & -3 & -6 \\
0 & -6 & -12 \end{pmatrix} & \; \; \hspace{-10pt} r_{3}'=-2r_{2}+r_{3} \rightarrow
\begin{pmatrix}
1 & 2 & 3 \\
0 & -3 & -6 \\
0 & 0 & 0 \end{pmatrix} & \; \;\\
\end{align*}
The rightmost matrix above is the echelon form of $A$ produced by \textbf{Gaussian Elimination}.}
The reduced form of the matrix tells us that there is a free variable, $x_3$, which gives us a \textbf{non-trivial solution}. We know the matrix above has a free variable because it has pivots only in columns one and two; the third column has no pivot. Since this system has a free variable, we know that it has a non-trivial solution according to p. 43 of the text, which states:
\textit{The \textbf{homogeneous equation} $A\vec{x} = \vec{0}$ has a non-trivial solution if and only if the equation has at least one \textbf{free variable}}\\[.05in]
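This by-hand computation is easy to check numerically. Below is a small Python/NumPy sketch (our choice of tooling for illustration; the project itself used Maple): the matrix has rank 2 with 3 columns, so one free variable exists, and back-substituting with $x_3 = 1$ yields a non-trivial vector that $A$ sends to zero.

```python
import numpy as np

# The matrix from the Gaussian elimination example above.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Rank 2 < 3 columns means one free variable, hence
# non-trivial solutions to A x = 0 exist.
rank = np.linalg.matrix_rank(A)

# From the echelon form: -3 x2 - 6 x3 = 0 and x1 + 2 x2 + 3 x3 = 0.
# Choosing the free variable x3 = 1 gives x = (1, -2, 1).
x = np.array([1.0, -2.0, 1.0])
residual = A @ x          # should be the zero vector
```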
\end{addmargin}
\subsection{Determinants and Theorem 8}
\begin{addmargin}[.25in]{.25in}
\soln{\textbf{Definition:} A determinant is an operation performed on a matrix. It is a measure of certain properties of the entries of a matrix, expressed as a single number. The general form of the $det(A)$ operation is the \textbf{co-factor expansion}. From p. 165 of our text: ``For $n \geq 2$ the \textbf{determinant} of an $n \times n$ matrix $A = [a_{ij}]$ is the sum of $n$ terms of the form $\pm a_{1j}det(A_{1j})$, with plus and minus signs alternating, where the entries $a_{11}, a_{12}, ...,a_{1n}$ are from the first row of $A ... [det(A)] = \sum\limits_{j=1}^n (-1)^{1+j}\,a_{1j}\,det(A_{1j})$''}\\[.05in]
The general form is referred to as the \textbf{co-factor expansion}. For matrices of size $n < 4$ there are formulas which are a little simpler to apply; these are specific to the size of the matrix. \\
\soln{\textbf{Formulas for $det(A_{2\times 2})$ and $det(A_{3 \times 3})$:}\\
\begin{center}
$det(A_{2 \times 2}) =
\begin{vmatrix}
a_{11} & a_{12}\\
a_{21} & a_{22}\\
\end{vmatrix} = a_{11} \cdot a_{22} - a_{21} \cdot a_{12}$\\
$det(A_{3 \times 3}) =
\begin{vmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}\\
\end{vmatrix} = (a_{11} \cdot a_{22} \cdot a_{33} + a_{12} \cdot a_{23} \cdot a_{31} + a_{13} \cdot a_{21} \cdot a_{32}) - (a_{13} \cdot a_{22} \cdot a_{31} + a_{11} \cdot a_{23} \cdot a_{32} + a_{12} \cdot a_{21} \cdot a_{33})$
\end{center}}
\soln{\textbf{Cofactor Expansion Example:}\\ \begin{center} $det(A) = \begin{vmatrix}
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 3\\
\end{vmatrix} = (-1)^{1+1} \cdot 1 \cdot
\begin{vmatrix}
2 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 3\\
\end{vmatrix} $ \end{center}
Now we have reduced $det(A_{4\times 4})$ to $C \cdot det(A'_{3 \times 3})$, where $C = (-1)^{2} \cdot 1 = 1$, and we have a formula for the $3 \times 3$ determinant. \begin{center}
$1 \cdot det(A') = \begin{vmatrix}
2 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 3\\
\end{vmatrix} = (a_{11} \cdot a_{22} \cdot a_{33} + a_{12} \cdot a_{23} \cdot a_{31} + a_{13} \cdot a_{21} \cdot a_{32}) - (a_{13} \cdot a_{22} \cdot a_{31} + a_{11} \cdot a_{23} \cdot a_{32} + a_{12} \cdot a_{21} \cdot a_{33})$ \end{center}
Notice that each multiplicative term in this expression contains a zero except for the first (the main diagonal). Substituting our entries into this formula gives:\begin{center}
$2 \cdot 1 \cdot 3 = 6$ \end{center}
We can check our answer using Theorem 2 from p. 167 of the course textbook: ``If $A$ is a triangular matrix [zeroes above or below the diagonal], then $det(A)$ is the product of the entries on the main diagonal of $A$.'' Since our original matrix has entries \textit{only} on the diagonal we can be doubly sure it is triangular, so $det(A) = 1 \cdot 2 \cdot 1 \cdot 3 = 6$.
}\\[.05in]
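The cofactor computation and the Theorem 2 shortcut can both be verified numerically. A short Python/NumPy sketch (our tooling choice; the project used Maple for verification):

```python
import numpy as np

# The diagonal matrix from the cofactor-expansion example.
A = np.diag([1.0, 2.0, 1.0, 3.0])

det_A = np.linalg.det(A)          # numerical determinant

# Theorem 2: for a triangular matrix, det(A) is the product
# of the main-diagonal entries.
diag_product = float(np.prod(np.diag(A)))
```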
Whether a determinant is non-zero tells us a lot about a matrix: for example, \textbf{a matrix with non-zero determinant is invertible}. This is a very important property because it allows us to tap into a set of properties fundamental to linear algebra. \\[.05in]
\soln{\textbf{Theorem 8:}
(excerpt from course textbook p. 112)\\
``Let $A$ be a square $n \times n$ matrix. Then the following statements are equivalent. That is, for a given $A$, the statements are either all true or all false''. \vspace{.025in}
\begin{addmargin}{.25in}
a. $A$ is an invertible matrix\\
c. $A$ has $n$ pivot positions\\
d. The equation $A\vec{x} = \vec{0}$ has only the trivial solution \\
e. The columns of $A$ form a linearly independent set.\\
h. The columns of $A$ span $\mathbb{R}^{n}$.
\end{addmargin}
Only the most pertinent elements of this theorem have been quoted here. The theorem is useful not only for the information it provides when true; it also tells us a substantial amount when it is false. The relations between invertibility, span, and the non-trivial solutions of the homogeneous equation are essential to the computation of eigenvectors and eigenvalues.
}\\[.025in]
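The equivalences in Theorem 8 can be illustrated on a concrete matrix. A hedged Python/NumPy sketch (tool choice is ours) checks three of the statements at once for the $2 \times 2$ matrix reused in the next subsection:

```python
import numpy as np

A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])

# (a.) invertible <=> det(A) != 0
invertible = abs(np.linalg.det(A)) > 1e-12

# (c.) n pivot positions <=> full rank
full_pivots = np.linalg.matrix_rank(A) == 2

# (d.) A x = 0 has only the trivial solution: for an invertible A,
# the unique solution returned by solve() is the zero vector.
x = np.linalg.solve(A, np.zeros(2))
```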
\end{addmargin}
\subsection{Eigenvectors and Eigenvalues}
\begin{addmargin}[.25in]{.25in}
\soln{\textbf{Definition:} An \textbf{eigenvector} of an $n \times n$ matrix $A$ is a nonzero vector $\vec{x}$ such that $A\vec{x} = \lambda \vec{x}$ for some scalar $\lambda$. A scalar $\lambda$ is called an \textbf{eigenvalue} of $A$ if there is a nontrivial solution $\vec{x}$ of $A\vec{x} = \lambda \vec{x}$; such an $\vec{x}$ is called an eigenvector corresponding to $\lambda$. }\\[.05in]
\soln{\textbf{Example:} To find the \textbf{eigenvalues} of an $n \times n$ matrix, proceed as follows:\vspace{.02in}
\begin{center}
$A = \begin{pmatrix}
2 & 7 \\
-1 & -6 \end{pmatrix} , \; \lambda I = \begin{pmatrix}\lambda & 0 \\
0 & \lambda \end{pmatrix} , A-\lambda I = \begin{pmatrix} 2-\lambda & 7\\ -1 & -6 - \lambda \end{pmatrix}$ \end{center}
To find the non-complex eigenvalues of the matrix $A$ we want non-trivial solutions. From Theorem 8 we know that a matrix is not invertible exactly when its determinant is 0 (a.), and in that case the homogeneous equation $A\vec{x} = \vec{0}$ does not have only the trivial solution (d.): \begin{center}
$det(A-\lambda I) = (2-\lambda)(-6-\lambda) - (-1 \cdot 7) = \lambda^{2} + 4\lambda - 5 = (\lambda + 5)(\lambda - 1) = 0$
\end{center}
This looks suspiciously like a factored quadratic from precalculus, and we can interpret it the same way. The real eigenvalues of this matrix are $\lambda = -5$ and $\lambda = 1$. \bigskip \\
To find the \textbf{eigenvectors} we use the eigenvalues found above, $\lambda = -5$ and $\lambda = 1$. Let $\vec{v} =
\begin{pmatrix}
v_1 \\
v_2 \end{pmatrix}$. Then $(A - (-5)I)\vec{v} = (A + 5I)\vec{v} = \vec{0}$ gives us: \newline
\begin{center}
\bigskip
$\begin{pmatrix}
2 + 5 & 7 \\
-1 & -6 + 5 \end{pmatrix}$ $\begin{pmatrix}
v_1 \\
v_2 \end{pmatrix}$ = $\begin{pmatrix}
0 \\
0 \end{pmatrix}$ \newline \end{center}
Then we get the system:\newline
\begin{center}
$\begin{matrix}
7v_1 + 7v_2 = 0 \\
-v_1 - v_2 = 0 \end{matrix} \; \; r_{2}' = (1/7)r_{1} + r_{2} \rightarrow \begin{matrix}
7v_1 + 7v_2 = 0 \\
0 = 0 \end{matrix}$
\end{center}
\bigskip
It is clear that we have one free variable $(v_2)$ for the eigenvalue $-5$. Solving this system produces $v_1 = -v_2$, which makes our eigenvector at $\lambda = -5$ equal to $ \begin{pmatrix}
-1 \\
1 \end{pmatrix}$. Computing for the eigenvalue $\lambda = 1$, using the same steps we used to solve for $\lambda = -5$, we get the eigenvector $ \begin{pmatrix}
-7 \\
1 \end{pmatrix}$. Our eigenvectors are linearly independent and span all of $\mathbb{R}^2$. Note that all linear combinations $C_1 \cdot \begin{pmatrix}
-1 \\
1 \end{pmatrix} + C_2 \cdot \begin{pmatrix}
-7 \\
1 \end{pmatrix}$ are valid solutions. }\vspace{.05in}
In differential equations both complex and non-complex solutions are allowed; we will see those applications in the following section.
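The eigenvalues and eigenvectors computed by hand above can be verified numerically. A Python/NumPy sketch (our choice of tool; Maple was used in the project itself):

```python
import numpy as np

A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])

# numpy returns eigenvalues (in no guaranteed order) and
# unit-length eigenvectors as the columns of V.
eigvals, V = np.linalg.eig(A)

# Each column satisfies A v = lambda v, matching the definition.
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)

# Our hand-computed eigenvectors are scalar multiples of numpy's.
v_neg5 = np.array([-1.0, 1.0])   # for lambda = -5
v_pos1 = np.array([-7.0, 1.0])   # for lambda = 1
```

Any nonzero scalar multiple of an eigenvector is also an eigenvector, which is why NumPy's normalized columns and our integer vectors both count as correct answers.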
\end{addmargin}
\section{Systems of Differential Equations}
\subsection{Definitions}
\soln{\textbf{Differential Equations:} equations involving derivatives of an unknown function. \\[.025in]
\textbf{Ordinary Differential Equations:} differential equations which contain no partial derivatives are referred to as \textbf{ordinary} differential equations.\\[.025in]
\textbf{Linear Differential Equations:} differential equations which take the form $y' + Py = Q$; in other words, equations that are linear (no powers above 1) in the unknown function and its derivatives.\\[.025in]
\textbf{Order:} the order of a differential equation is the order of the highest derivative in that equation. For example, an equation containing second derivatives is second order; one containing only first derivatives is first order.\\[.025in]
\textbf{System of Differential Equations:} any system of equations whose members contain derivatives.\\[.025in]
\textbf{Note:} Most examples and concepts will be related to \textbf{Systems of Linear First Order Differential Equations.}}
\subsection{Fundamental Set of Solutions}
We have a system of linear differential equations:
\begin{align*}
x_1' &= a_{11}x_1 + \cdots + a_{1n}x_n \\
x_2' &= a_{21}x_1 + \cdots + a_{2n}x_n \\
&\;\;\vdots \\
x_n' &= a_{n1}x_1 + \cdots + a_{nn}x_n
\end{align*}
These equations look very similar to the matrix equation $ \vec{\textbf{x}}'(t)=\textbf{A} \vec{\textbf{x}}(t)$ \\
In fact we can treat these systems just like any other system of linear equations: the terms $a_{ij}$ become the entries of a matrix $A$, and we solve $A \cdot \vec{x} = \vec{x}\,'$ like any other linear system, remembering that $\vec{x}$ is related to $\vec{x}\,'$ not just by this equation but also by differentiation (more about this in the coming sections). A basis of solutions obtained this way is called a \textbf{fundamental set of solutions} for the system.\\[.025in]
\soln{ \textbf{Example:} \\
Let our system be:
\begin{addmargin}[.5in]{.5in}
\begin{align*}
& x_1' = \ 4x_1 & \longrightarrow & \ x_1 = \ C_1e^{4t} \\
& x_2' = \ -x_2 & \longrightarrow & \ x_2 = \ C_2e^{-t} & \longrightarrow & \begin{pmatrix} \
4 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & 3 \end{pmatrix} \cdot & \begin{pmatrix}
x_1(t)\\
x_2(t)\\
x_3(t) \end{pmatrix} & = \begin{pmatrix}
x_1'(t)\\
x_2'(t)\\
x_3'(t)\\
\end{pmatrix}\\
& x_3' = \ 3x_3 & \longrightarrow & \ x_3 = \ C_3e^{3t}
\end{align*}
\end{addmargin}
Here $x_n$ and $x_n'$ are functions of $t$. To the left of the first arrow we see the derivative of each numbered function, and to its right the solution obtained using $y' = ky \rightarrow y = Ce^{kt}$ from calculus. Remember that our use of $x$ and $y$ in this case denotes a function, not a single variable. To the right of the second arrow is the linear-algebra interpretation of this system. Because each $x_n'$ involves only the term $a_{nn}x_n$, this system is said to be \textbf{decoupled}, and our matrix $A$ is a \textbf{diagonal} matrix.
}
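For a decoupled system the claimed solutions can be checked directly: the analytic derivative $kCe^{kt}$ must equal $A\vec{x}(t)$ at every $t$. A small Python/NumPy sketch (the tooling is our assumption; the constants $C_n$ are arbitrary values chosen for the check):

```python
import numpy as np

A = np.diag([4.0, -1.0, 3.0])       # the diagonal matrix from the example
k = np.array([4.0, -1.0, 3.0])      # growth rates k_n
C = np.array([1.0, 2.0, -1.0])      # arbitrary constants for the check

# At each sample t, x_n(t) = C_n e^{k_n t} and x_n'(t) = k_n C_n e^{k_n t};
# the system x' = A x holds because A is diagonal with entries k_n.
for t in np.linspace(0.0, 2.0, 9):
    x = C * np.exp(k * t)
    x_prime = k * C * np.exp(k * t)
    assert np.allclose(A @ x, x_prime)
```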
\subsection{Initial Value Problem}
In our example above, we have constructed a matrix equation which gives us the fundamental set of solutions, which forms a basis for the set of all solutions, but what about a specific, unique solution? The \textbf{initial value problem} can be solved if the initial value $\vec{x}(0) = \vec{x}_0$ is known.\\
\soln{\textbf{Example:}
Suppose we have a known value for each $x_n(0)$:
\begin{addmargin}[1in]{0in}
$x_1(0) = 2$\\
$x_2(0) = -2$ \\
$x_3(0) = 1$
\end{addmargin}
Plugging this into our matrix equation from 2.2, we have \vspace{.025in}
\begin{addmargin}[1in]{0in}
$ \begin{pmatrix} \
4 & 0 & 0\\
0 & -1 & 0\\
0 & 0 & 3 \end{pmatrix} \cdot \begin{pmatrix}
2\\
-2\\
1
\end{pmatrix} = \begin{pmatrix}
8\\
2\\
3 \end{pmatrix} $
\end{addmargin} \vspace{.025in}
Recall that our functions are all of the form $Ce^{kt}$, with derivatives $kCe^{kt}$, and that at $t = 0$ we have $e^{kt} = 1$ for all $k$, so $x_n'(0) = k_n C_n$ for each $x_n$: \vspace{.025in}
\begin{addmargin}[1in]{0in}
$\begin{pmatrix}
4C_1(1)\\
-1C_2(1)\\
3C_3(1)
\end{pmatrix} = \begin{pmatrix}
8\\
2\\
3 \end{pmatrix} $
\end{addmargin} \vspace{.025in}
Solving our simple system gives: \vspace{.025in}
\begin{addmargin}[1in]{0in}
$C_1 = 2$\\
$C_2 = -2$\\
$C_3 = 1$
\end{addmargin} \vspace{.025in}
}
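Since $x_n(t) = C_n e^{k_n t}$ implies $x_n(0) = C_n$, the constants can also be read off directly from the initial values, and the derivative relation $\vec{x}\,'(0) = A\vec{x}(0)$ cross-checks them. A Python/NumPy sketch (tool choice is ours):

```python
import numpy as np

A = np.diag([4.0, -1.0, 3.0])
x0 = np.array([2.0, -2.0, 1.0])    # given initial values x_n(0)

# x_n(0) = C_n e^0 = C_n, so the constants equal the initial values.
C = x0.copy()

# Cross-check: x'(0) = A x(0) must equal k_n C_n componentwise.
k = np.array([4.0, -1.0, 3.0])
x_prime_0 = A @ x0
```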
\subsection{Eigenvalues and Differential Equations}
For the above example, the solution of $\vec{x}\,' = A \vec{x}$ is more or less evident, there being no interaction between our functions, but what if our system looked more like this (example from [3])?
\begin{addmargin}[1in]{0in}
$y_1' = 3y_1 + 2y_2 $\\
$y_2' = 6y_1 - 1y_2$
\end{addmargin}
then our $A$ matrix would be:\vspace{.025in}
\begin{addmargin}[1in]{0in}
$\begin{pmatrix}
3 & 2\\
6 & -1 \end{pmatrix}$.
\end{addmargin}
We'll employ eigenvalues to help us toward a solution. Computing the eigenvalues in Maple or by hand, we have:
\begin{addmargin}[1in]{0in}
$\lambda_1 = -3$ and $ \lambda_2 = 5$
\end{addmargin}
and eigenvectors
\begin{addmargin}[1in]{0in}
$ \vec{v}_1 = \begin{pmatrix}
1\\
-3 \end{pmatrix} \mbox{ and } \vec{v}_2 = \begin{pmatrix}
1\\
1 \end{pmatrix} $
\end{addmargin} \vspace{.025in}
Since our $A$ isn't diagonal, wouldn't it be cool if we could make it diagonal? It turns out we can! First, we'll construct a matrix $V$ whose columns are our two eigenvectors
\begin{addmargin}[1in]{0in}
$V = \begin{pmatrix}
1 & 1\\
-3 & 1 \end{pmatrix} $
\end{addmargin}
and we'll also need its inverse. The determinant of $V$ using the $2 \times 2$ formula is $(1 \cdot 1) - (-3 \cdot 1) = 4$, so we know from Theorem 8 (section 1.2) that a non-zero determinant implies invertibility, and we can be confident that our matrix $V$ has an inverse,
\begin{addmargin}[1in]{0in}
$V^{-1} = \begin{pmatrix}
1/4 & -1/4\\
3/4 & 1/4 \end{pmatrix}$
\end{addmargin}Now we can apply some matrix multiplication to get
\begin{addmargin}[1in]{0in}
$V^{-1}AV = \begin{pmatrix}
-3 & 0 \\
0 & 5 \end{pmatrix} $
\end{addmargin}To explain the underlying theory of this would be beyond the scope of this project. Let it suffice to say that we have constructed a new, simpler system that nonetheless behaves as the old one did. Since it is a different system we probably shouldn't call its variables $y_1$ and $y_2$ anymore; let's use $w_1$ and $w_2$.
\begin{addmargin}[1in]{0in} $\begin{pmatrix}
w_1'\\
w_2' \end{pmatrix} = \begin{pmatrix}
-3 & 0 \\
0 & 5 \end{pmatrix} \begin{pmatrix}
w_1\\
w_2 \end{pmatrix} $ \end{addmargin}
and so we have \begin{addmargin}[1in]{0in}
$w_1' = -3w_1$ where $w_1 = C_1e^{-3t}$\\
$w_2' = 5 w_2 $ where $w_2 = C_2e^{5t} $
\end{addmargin}
Plugging the $\vec{y} = V\vec{w}$ relationship into our original system, we have \begin{addmargin}[1in]{0in}$ \begin{pmatrix}
y_1\\
y_2 \end{pmatrix} = \begin{pmatrix}
1 & 1\\
-3 & 1 \end{pmatrix} \begin{pmatrix}
w_1\\
w_2 \end{pmatrix} $ \end{addmargin}
and so finally our solution is \begin{addmargin}[1in]{0in}
$y_1 = w_1 + w_2 = C_1e^{-3t} + C_2e^{5t}$\\
$y_2 = -3w_1 + w_2 = -3C_1e^{-3t} + C_2e^{5t}$ \end{addmargin}
Arriving at our solution in this manner, we have used what is known as the superposition of two solutions. As long as we can be confident the two systems produce the same result, a solution in one system is as good as a solution in the other.
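The whole diagonalization argument can be checked end to end in a few lines of Python/NumPy (our tool choice; the project used Maple): $V^{-1}AV$ should come out diagonal with the eigenvalues on its diagonal, and the recovered $\vec{y}(t) = V\vec{w}(t)$ should satisfy $\vec{y}\,' = A\vec{y}$ for any constants.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [6.0, -1.0]])
V = np.array([[1.0, 1.0],
              [-3.0, 1.0]])          # columns are the eigenvectors

D = np.linalg.inv(V) @ A @ V          # should be diag(-3, 5)

# Build y(t) = V w(t) with w_n = C_n e^{lambda_n t} and verify y' = A y.
C = np.array([1.0, 2.0])              # arbitrary constants for the check
lam = np.array([-3.0, 5.0])
for t in np.linspace(0.0, 1.0, 5):
    w = C * np.exp(lam * t)
    w_prime = lam * w                 # derivative of each w_n
    y = V @ w
    y_prime = V @ w_prime
    assert np.allclose(A @ y, y_prime)
```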
\section{Annotated Sources}
\subsection*{1. Linear Algebra and Its Applications (Course Text)}
David C. Lay, Fourth Edition\\
Use: Source material for review topics and application of Linear Algebra to Differential Equations.
\subsection*{2. Mathematical Methods in the Physical Sciences}
Mary L. Boas, Third Edition\\
Use: Definitions and examples regarding Differential Equations.
\subsection*{3. Elementary Linear Algebra}
Larson/Edwards/Falvo, Fifth Edition\\
Use: Applications of Linear Algebra in Differential Equations, section 7.4.
\subsection*{Maple 18}
Use: Employed to verify any results used in examples.
\subsection*{Paul's Online Notes}
\texttt{http://tutorial.math.lamar.edu/Classes/DE/LA\_Eigen.aspx} \\
Use: Additional examples
\subsection*{Special thanks to:}
\begin{addmargin}[.25in]{.25in}
\subsubsection*{Dr Vicky Klima for \LaTeX \; formatting help and `soln' command.}
\subsubsection*{Dr Sarah Greenwald for consultation regarding \LaTeX \; formatting and Linear Algebra concepts.}
\subsubsection*{Andrew Zeidell for consultation on Differential Equations concepts and source recommendations.}
\end{addmargin}
\end{document}