Systems of ODEs

As we saw in one of the previous labs, we can write any second order linear ODE $$ \frac{d^2y}{dt^2}+a(t)\frac{dy}{dt} + b(t) y = f(t)$$ as a system of two first order ODEs using the change of variables $z_0 = y$ and $z_1 = \frac{dy}{dt}$. If we do so, we get the system \begin{align} \frac{dz_0}{dt} & = z_1 \\ \frac{dz_1}{dt} &= f(t) - a(t)z_1-b(t)z_0. \end{align} As it turns out, there is nothing special about this being a second order ODE, or about it being linear. A similar process will convert any $n$th order ODE into a system of $n$ first order ODEs.
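This change of variables is easy to carry out numerically. Below is a minimal sketch using `scipy.integrate.solve_ivp`, for the arbitrarily chosen example $y'' + 3y' + 2y = 0$ with $y(0)=1$, $y'(0)=0$, whose exact solution is $2e^{-t} - e^{-2t}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example (my choice): y'' + 3y' + 2y = 0, i.e. a(t) = 3, b(t) = 2, f(t) = 0.
# With z0 = y and z1 = dy/dt, the equivalent first order system is
#   dz0/dt = z1
#   dz1/dt = f(t) - a(t)*z1 - b(t)*z0
def rhs(t, z):
    z0, z1 = z
    return [z1, 0.0 - 3.0 * z1 - 2.0 * z0]

# Initial conditions y(0) = 1, y'(0) = 0; the exact solution is 2e^{-t} - e^{-2t}.
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True)

t = np.linspace(0.0, 5.0, 50)
exact = 2.0 * np.exp(-t) - np.exp(-2.0 * t)
print(np.max(np.abs(sol.sol(t)[0] - exact)))  # small numerical error
```

The first component of the numerical solution tracks $y(t)$, and the second tracks $y'(t)$, just as the change of variables suggests.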

Systems of first order differential equations arise far more often than as mere rewritings of higher order ODEs, and their study is an active area of research. In many cases we cannot find explicit solutions to a system of ODEs, even when the system is linear. When the system is linear with constant coefficients, however, we can find a solution using tools from linear algebra.

2x2 linear systems of ODEs

We will spend the rest of this course exploring how to solve systems of two first order linear ODEs with constant coefficients. The procedure for larger systems is effectively the same.

A 2x2 linear system of ODEs is a system of the form \begin{align} \frac{dy_1}{dt} &= a y_1 + b y_2 \\ \frac{dy_2}{dt} &= cy_1 + dy_2, \end{align} where $a,b,c,d$ are real constants. Here we have two state variables, one independent variable, and four parameters. We can rewrite this system using matrices and vectors as $$\frac{d}{dt}\begin{bmatrix}y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix}y_1 \\ y_2 \end{bmatrix}, $$ or more compactly as $$ \frac{d\vec{y}}{dt} = A \vec{y}.$$ If $A$ were a constant scalar and $y$ were a scalar function, we would know how to solve this equation, and the solution would simply be $$ y(t) = C e^{At}.$$ We would like to do something similar in the case of matrices and vectors. To do so, we need to make sense of what $e^{A}$ means when $A$ is a matrix.

Definition: If $A$ is a square matrix, we define $$e^{A} = \sum_{n= 0}^{\infty} \frac{A^n}{n!}= I + A + \frac{1}{2}A^2 + ... ,$$ which is the usual Taylor series expansion for the exponential function, applied to $A$.
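A minimal sketch of this definition in code: the helper `expm_taylor` below (a name of my choosing) sums a truncated version of the series and compares the result against SciPy's built-in `scipy.linalg.expm` for an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(A, terms=30):
    """Approximate e^A by summing the first `terms` terms of the series."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n          # builds A^n / n! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary example matrix
print(np.allclose(expm_taylor(A), expm(A)))  # True
```

For matrices of modest norm the series converges quickly; in practice one uses `expm` rather than a raw truncated series.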

We can use this definition to easily show that $\frac{d}{dt} e^{At} = A e^{At}.$ Therefore, the general solution to the equation $$\frac{d\vec{y}}{dt} = A \vec{y},$$ is $\vec{y} = e^{At}\vec{C}$ for some arbitrary vector $\vec{C}$. Finding the solution therefore comes down to being able to evaluate $e^{At}$ for the matrix $A$.
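One can sanity-check the identity $\frac{d}{dt} e^{At} = A e^{At}$ numerically with a central finite difference; the matrix and the point $t$ below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # arbitrary example matrix
t, h = 0.7, 1e-6

# Central difference approximation of d/dt e^{At} at time t
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2.0 * h)
print(np.allclose(deriv, A @ expm(A * t), atol=1e-6))  # True
```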

Example: If $A$ is a diagonal matrix, we can write $$A = \begin{bmatrix}a & 0 \\ 0 & d \end{bmatrix}.$$ Then we see that, for any $n \in \mathbb{N}$, $$ (At)^n = \begin{bmatrix} (at)^n & 0 \\ 0 & (dt)^n \end{bmatrix}. $$ Therefore, we have \begin{align} e^{At} &= I + At + \frac{1}{2}(At)^2 + ...\\ &= \begin{bmatrix} 1 + at + \frac{1}{2}(at)^2 + ... & 0 \\ 0 & 1 + dt + \frac{1}{2}(dt)^2 + ... \end{bmatrix} \\ &= \begin{bmatrix} e^{at} & 0 \\ 0 & e^{dt} \end{bmatrix}. \end{align}
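The diagonal case can be checked directly against SciPy; the entries $a$, $d$ and the time $t$ below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

a, d, t = 1.5, -0.5, 2.0          # arbitrary example values
A = np.diag([a, d])

# For a diagonal matrix, e^{At} just exponentiates each diagonal entry.
print(np.allclose(expm(A * t), np.diag([np.exp(a * t), np.exp(d * t)])))  # True
```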

If $A$ is not diagonal, then we can sometimes diagonalize it by finding the eigenvectors of $A$.

Diagonalization: Let $A$ be a 2x2 matrix that has two linearly independent eigenvectors $\vec{u}$ and $\vec{v}$ with eigenvalues $\lambda_1$ and $\lambda_2$. Recall that $\vec{u}$ is an eigenvector with eigenvalue $\lambda_1$ if $\vec{u}$ is nonzero and $$ A\vec{u} = \lambda_1 \vec{u}.$$ If we let $P = [\vec{u}|\vec{v}]$ (i.e., $P$ is the matrix with the entries of $\vec{u}$ in the first column and the entries of $\vec{v}$ in the second column), then \begin{align}P^{-1}A P &= P^{-1}[A\vec{u} | A\vec{v}] \\ &= P^{-1}[ \lambda_1 \vec{u} | \lambda_2 \vec{v}] \\ &= [\lambda_1 P^{-1}\vec{u} | \lambda_2 P^{-1}\vec{v}]\\ &= \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix},\end{align} where the last step uses the fact that $P^{-1}\vec{u}$ and $P^{-1}\vec{v}$ are the standard basis vectors, since $P\vec{e}_1 = \vec{u}$ and $P\vec{e}_2 = \vec{v}$. If we call this diagonal matrix $D$, then we have $P^{-1}AP = D,$ or, rearranging, $A = P D P^{-1}.$
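Numerically, `numpy.linalg.eig` produces the eigenvalues and a matrix $P$ of eigenvectors in one call. A quick sketch verifying $P^{-1}AP = D$ for an example matrix of my choosing:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # arbitrary example; eigenvalues -1 and -2
lam, P = np.linalg.eig(A)                  # columns of P are the eigenvectors u and v
D = np.diag(lam)

# Verify P^{-1} A P = D, equivalently A = P D P^{-1}
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```

Note that `eig` normalizes each eigenvector to unit length, but any nonzero scaling of the columns works equally well for diagonalization.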

Using this in the definition of the matrix exponential, we have \begin{align} e^{At} &= e^{PDP^{-1}t} \\ & = I + PDP^{-1}t + \frac{1}{2}(PDP^{-1}t)^2 + ...\\ &= PP^{-1} + P(Dt)P^{-1}+\frac{1}{2}P(Dt)^2 P^{-1} + ...\\ & = P \left(\sum_{n=0}^{\infty} \frac{(Dt)^n}{n!}\right)P^{-1}\\ &= P e^{Dt}P^{-1}, \end{align} where we used the fact that $(PDP^{-1})^n = PD^nP^{-1}$, since the intermediate factors $P^{-1}P$ cancel. Therefore, if $A$ is diagonalizable, the solution to the linear system is $$\vec{y} = P e^{Dt}P^{-1}\vec{C}.$$ Since $P^{-1}\vec{C}$ is just an arbitrary vector, we can rename it $\vec{C}$, and the general solution is \begin{align} \vec{y}(t) &= P e^{Dt}\vec{C} \\ &= [\vec{u} | \vec{v} ] \begin{bmatrix} e^{\lambda_1 t} & 0\\ 0 & e^{\lambda_2 t}\end{bmatrix}\begin{bmatrix}C_1 \\ C_2 \end{bmatrix}\\ & = [\vec{u} | \vec{v} ] \begin{bmatrix} C_1e^{\lambda_1 t} \\ C_2e^{\lambda_2 t}\end{bmatrix} \\ & = C_1 e^{\lambda_1 t}\vec{u} + C_2e^{\lambda_2 t}\vec{v}. \end{align}
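The resulting formulas can be verified numerically; the matrix, the time $t$, and the constants $C_1, C_2$ below are all arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # arbitrary example; eigenvalues 5 and 2
lam, P = np.linalg.eig(A)
t = 0.3

# Check e^{At} = P e^{Dt} P^{-1}
via_eig = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)
print(np.allclose(via_eig, expm(A * t)))  # True

# Check the general solution y(t) = C1 e^{lambda1 t} u + C2 e^{lambda2 t} v,
# which should agree with e^{At} applied to y(0) = C1 u + C2 v.
C1, C2 = 1.0, -2.0                        # arbitrary constants
y0 = C1 * P[:, 0] + C2 * P[:, 1]
y_t = C1 * np.exp(lam[0] * t) * P[:, 0] + C2 * np.exp(lam[1] * t) * P[:, 1]
print(np.allclose(y_t, expm(A * t) @ y0))  # True
```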

Practice Problems:

  1. Diagonalize the matrix $$A = \begin{bmatrix} 1& 2 \\ 2 & 1 \end{bmatrix}$$
  2. Find the matrix exponential of $$ A = \begin{bmatrix} 2 & 2 \\ 3 & 1 \end{bmatrix}$$
  3. Find the matrix exponential of $$ A = \begin{bmatrix} 2 & -1 \\ 0 & -4\end{bmatrix}$$
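If you would like to check your hand computations, `scipy.linalg.expm` evaluates the matrix exponential directly. For example, for the matrix in problem 2 you can compare it against the eigen-decomposition route described above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 2.0], [3.0, 1.0]])   # the matrix from problem 2
numeric = expm(A)

# Compare against P e^D P^{-1} computed from the eigen-decomposition.
lam, P = np.linalg.eig(A)
print(np.allclose(P @ np.diag(np.exp(lam)) @ np.linalg.inv(P), numeric))  # True
```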