Section 3.3 Continuous Dynamical Systems
A dynamical system is a pair \((X,R)\) where \(X\) is the set of states a system can be in and \(R\) is a rule for how the system evolves or changes. We will look at some dynamical systems where the rule of evolution describes how the state of the system changes in terms of a continuous parameter. Let’s look at some examples.
Example 3.3.1.
When examining an electrical circuit with a resistor, a capacitor, and an inductor, it is useful to look at how the current (a measure of the flow of electricity) changes as a function of time. The dynamical system in this case would consist of \((X,R)\) where \(X\) is the set of possible current functions of \(t\) and \(R\) is the rule given by the differential equation \(L \dfrac{d^2 I}{dt^2} + R \dfrac{d I}{dt} +\frac{1}{C} I =0\text{.}\) In this equation the current is \(I(t)\) and the constants \(R\text{,}\) \(L\text{,}\) and \(C\) are the resistance, inductance, and capacitance of the individual components of the circuit.
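As a quick illustration (not part of the original example), a computer algebra system can solve this equation symbolically. Here is a minimal sketch assuming SymPy is available; the symbol names are my own choices.

```python
import sympy as sp

t = sp.symbols('t')
L, R, C = sp.symbols('L R C', positive=True)  # inductance, resistance, capacitance
I = sp.Function('I')

# L I'' + R I' + (1/C) I = 0
ode = sp.Eq(L * I(t).diff(t, 2) + R * I(t).diff(t) + I(t) / C, 0)
print(sp.dsolve(ode, I(t)))  # general solution, built from the roots of L r^2 + R r + 1/C = 0
```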
Example 3.3.2.
If you are looking at the position of an object moving under the force of gravity and under air resistance, your dynamical system would consist of \((X,R)\) where
- \(X\) is the set of vectors of the form \(\vec{w}(t)=\colvec{x(t) \\y(t) \\z(t)}\) where \(x(t), y(t), z(t)\) are continuous functions of \(t\)
- \(R\) is the rule of evolution given by \(m \dfrac{d^2 \vec{w}}{dt^2} = -\omega \dfrac{d\vec{w}}{dt} + mg \colvec{0 \\0 \\-1}\)
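If you want to see a trajectory for this example, here is a minimal numerical sketch assuming NumPy and SciPy; the mass, drag coefficient, and initial condition are made-up values, not part of the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, omega, g = 1.0, 0.1, 9.8   # hypothetical mass, drag coefficient, gravitational constant

def rhs(t, state):
    # state = (x, y, z, vx, vy, vz);  m w'' = -omega w' + m g (0, 0, -1)
    vel = state[3:]
    acc = -(omega / m) * vel + g * np.array([0.0, 0.0, -1.0])
    return np.concatenate([vel, acc])

state0 = [0.0, 0.0, 0.0, 10.0, 0.0, 10.0]    # made-up initial position and velocity
sol = solve_ivp(rhs, (0.0, 2.0), state0, dense_output=True)
print(sol.sol(2.0)[:3])                      # approximate position at t = 2
```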
Example 3.3.3. Wave Equation.
Many different physical phenomena satisfy a very famous differential equation:
\begin{equation*}
\dfrac{\partial^2 g}{\partial t^2}= c^2 \left(\dfrac{\partial ^2 g}{\partial x^2}+\dfrac{\partial ^2 g}{\partial y^2}+\dfrac{\partial ^2 g}{\partial z^2}\right)
\end{equation*}
The state of the system is given by some function \(g(x,y,z,t)\) that may vary in both space and time coordinates. This kind of system is called a partial differential equation since there is no single derivative of a multivariable function; the change in our system depends on the various partial derivatives of the function.
Example 3.3.4. Heat Equation.
Many different physical phenomena satisfy another very famous differential equation:
\begin{equation*}
\dfrac{\partial g}{\partial t}= \alpha \left(\dfrac{\partial ^2 g}{\partial x^2}+\dfrac{\partial ^2 g}{\partial y^2}+\dfrac{\partial ^2 g}{\partial z^2}\right)
\end{equation*}
The state of the system is given by some function \(g(x,y,z,t)\) that may vary in both space and time coordinates. This is another partial differential equation.
Activity 3.3.1.
For this activity, we want to look at the following 2D continuous dynamical system.
\begin{equation*}
\dfrac{dx}{dt}= -2x
\end{equation*}
\begin{equation*}
\dfrac{dy}{dt}=\frac{1}{3}y
\end{equation*}
(a)
What would a solution look like to this system?
(b)
Give a solution to this system.
(c)
Give all possible solutions to this system.
(d)
What is the solution with \(x(0)=1\) and \(y(0)=-1\text{?}\)
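One way to check your answers to this activity is numerically. Here is a sketch assuming SciPy, using the initial condition from part (d); your closed-form solution evaluated at \(t=1\) should agree with the printed values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, v):
    x, y = v
    return [-2.0 * x, y / 3.0]

x0, y0 = 1.0, -1.0                     # the initial condition from part (d)
sol = solve_ivp(rhs, (0.0, 1.0), [x0, y0])
print(sol.y[:, -1])                    # compare with your formulas evaluated at t = 1
```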
None of the work in the previous problem seems like linear algebra, so why are we doing this? The answer is that we can expand our notion of what a “vector” is and use the idea that we would like to express solutions to these systems as linear combinations of our “nice” solutions.
Subsection 3.3.1 Linear Systems of Ordinary Differential Equations
In this subsection, we will look at systems of linear ODEs of the form:
\begin{align*}
\dfrac{d x_1}{dt} \amp= a_{11} x_1 \amp+ \ldots \amp+ a_{1n} x_n \\
\dfrac{d x_2}{dt} \amp= a_{21} x_1 \amp+ \ldots \amp+ a_{2n} x_n \\
\vdots \amp \ddots \amp \amp \vdots \\
\dfrac{d x_n}{dt} \amp= a_{n1} x_1 \amp+ \ldots \amp+ a_{nn} x_n
\end{align*}
with initial values given by \(\vec{x}(0)=\vec{x}_0 = \colvec{x_1(0) \\ x_2(0) \\ \vdots \\x_n(0)}\text{.}\)
Notice that this system has the following properties:
- no forcing term (the right hand side of the system does not explicitly depend on \(t\))
- constant coefficients
- linearity (any linear combination of solutions is again a solution)
If you look around at other books and online resources, you will see that the solution to the system given by \(\dfrac{d \vec{x}}{dt} = A \vec{x}\) where \(\vec{x}=\colvec{x_1(t) \\ x_2(t) \\ \vdots \\x_n(t)}\text{,}\) will be of the form \(\vec{x}(t) = \exp(At) \vec{x}_0\text{.}\) The term \(\exp(At)\) is called the matrix exponential of \(A\text{.}\)
The solution to a linear continuous dynamical system therefore involves evaluating a matrix exponential. This is not a straightforward task, and no single evaluation algorithm is reliable in every situation. In fact, one of the most cited papers in all of applied mathematics, written by Moler (founder of MATLAB) and Van Loan in 1978, is titled Nineteen Dubious Ways to Compute the Exponential of a Matrix. This question was so important in computational science and applied mathematics that the same authors revisited and updated the paper twenty-five years later under the apt title Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. In short, the papers’ primary contribution is to show that there is no single accepted way to evaluate a matrix exponential for all matrices; the appropriate algorithm depends on the matrix. Dr. Beauregard’s research takes a particular interest in symplectic approximations as they preserve fundamental physical quantities.
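In practice we usually call a library routine rather than form \(\exp(At)\) by hand. Here is a minimal sketch using SciPy's matrix exponential routine, with the matrix that appears in Example 3.3.5 below and a made-up initial condition.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.25, -0.75],
              [-0.75, 1.25]])
x0 = np.array([1.0, 2.0])     # made-up initial condition

t = 0.5
x_t = expm(A * t) @ x0        # x(t) = exp(At) x0
print(x_t)
```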
Let’s start with the same assumption we did for discrete dynamical systems: We will assume that we have a continuous dynamical system given by an \(n \times n\) matrix \(A\) (with rule given by \(\dfrac{d \vec{x}}{dt} = A \vec{x}\)) and that we can find a set of \(n\) linearly independent eigenvectors of \(A\text{,}\) \(\{\vec{v}_1,\vec{v}_2, \ldots, \vec{v}_n\}\text{,}\) with eigenvalues \(\{\lambda_1,\lambda_2, \ldots, \lambda_n\}\text{.}\) Further, let’s define two matrices
\begin{equation*}
D=diag(\lambda_1,\lambda_2, \ldots, \lambda_n)=
\begin{bmatrix} \lambda_1 \amp 0 \amp \cdots \amp 0\\ 0 \amp \lambda_2 \amp \cdots \amp 0\\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda_n \end{bmatrix}
\end{equation*}
and \(V=[\vec{v}_1 , \ldots , \vec{v}_n]\text{.}\) From our work on change of coordinates, you should recognize that \(A = V D V^{-1}\text{.}\) So,
\begin{align*}
A^k =\amp (V D V^{-1})(V D V^{-1})\ldots (V D V^{-1})\\
= \amp V D^k V^{-1}
\end{align*}
where \(D^k = diag(\lambda_1^k,\lambda_2^k, \ldots, \lambda_n^k)\text{.}\) We can use our knowledge of power series to write exponentials using
\begin{equation*}
e^\alpha = \sum_{k=0}^\infty \frac{\alpha^k}{k!}
\end{equation*}
Notice that all this definition requires in order to apply to a matrix is that powers of the matrix make sense and that scalar multiplication by \(\frac{1}{k!}\) makes sense. So we can define the matrix exponential as
\begin{equation*}
e^{At} = V \sum_{k=0}^\infty \frac{(Dt)^k}{k!} V^{-1}= V e^{Dt} V^{-1}= V diag(e^{\lambda_1 t},e^{\lambda_2 t}, \ldots, e^{\lambda_n t}) V^{-1}
\end{equation*}
Note here that \(e^{At} \vec{x}_0\) will be a vector (by matrix vector product) and thus our solution to \(\dfrac{d \vec{x}}{dt} = A \vec{x}\) is given by \(\vec{x}(t) = e^{At}\vec{x}_0= V e^{Dt} V^{-1} \vec{x}_0 \text{.}\)
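To see the series definition in action, here is a rough numerical sketch (assuming NumPy and SciPy) that truncates the power series and compares the result with SciPy's built-in matrix exponential; the matrix is the one from Example 3.3.5 below, and the truncated series itself is only for illustration (it is one of the "dubious ways" for badly scaled matrices).

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

def expm_series(A, t, terms=30):
    """Approximate exp(At) by summing the first `terms` terms of the power series."""
    n = A.shape[0]
    result = np.zeros_like(A, dtype=float)
    power = np.eye(n)                 # (At)^0
    for k in range(terms):
        result += power / factorial(k)
        power = power @ (A * t)       # advance to (At)^(k+1)
    return result

A = np.array([[1.25, -0.75], [-0.75, 1.25]])
print(np.allclose(expm_series(A, 0.5), expm(A * 0.5)))   # True for this small example
```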
This looks a bit like our solutions to the discrete dynamical systems, but still different. The vector \(V^{-1} \vec{x}_0\) is a solution to what matrix equation? If \(\vec{c}=V^{-1}\vec{x}_0\text{,}\) then \(\vec{c}\) is the solution to \(V \vec{c} =\vec{x}_0\text{!!!}\) You should recognize that \(\vec{c}\) is the vector of coefficients for the vector equation \(c_1 \vec{v}_1 +c_2 \vec{v}_2 +\ldots + c_n \vec{v}_n = \vec{x}_0\text{!}\) The vector \(\vec{c}=V^{-1}\vec{x}_0\) comes from writing the initial condition of our system as a linear combination of the eigenvectors of \(A\text{!}\)
Our solutions to \(\dfrac{d \vec{x}}{dt} = A \vec{x}\) are of the form
\begin{equation*}
\vec{x}(t) = e^{At}\vec{x}_0= V e^{Dt} \vec{c} = \sum_{j=1}^n c_j e^{\lambda_j t} \vec{v}_j
\end{equation*}
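Here is a minimal numerical sketch of this eigenvector form of the solution (assuming NumPy and SciPy; the matrix and initial condition are illustrative choices): solve \(V\vec{c}=\vec{x}_0\text{,}\) scale each eigenvector by \(e^{\lambda_j t}\text{,}\) and compare with \(e^{At}\vec{x}_0\text{.}\)

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.25, -0.75], [-0.75, 1.25]])
x0 = np.array([1.0, 2.0])     # made-up initial condition
t = 0.7

lams, V = np.linalg.eig(A)            # columns of V are eigenvectors of A
c = np.linalg.solve(V, x0)            # coefficients of x0 in the eigenvector basis
x_t = (V * np.exp(lams * t)) @ c      # equals sum_j c_j e^{lambda_j t} v_j

print(np.allclose(x_t, expm(A * t) @ x0))   # the two formulas agree
```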
Look at how much of this is determined by the algebra of problems you already know how to do. Which parts of this will determine the long term behavior of solutions? When will you have fixed point(s)? When will the fixed point(s) be attractors? repellors? saddles?
Example 3.3.5.
Let \(A=\begin{bmatrix} 1.25 \amp -0.75 \\ -0.75 \amp 1.25 \end{bmatrix}\text{.}\) As we have seen in our previous work, \(A\) has eigenvalues of \(\lambda_1=1/2\) and \(\lambda_2=2\) with a choice of eigenvectors given by \(\vec{v_1}=\colvec{1\\1}\) and \(\vec{v}_2=\colvec{1 \\-1}\text{.}\) The system of differential equations that corresponds to this matrix \(A\) is given by:
\begin{align*}
\dfrac{d x_1}{dt} =\amp 1.25 x_1 -0.75 x_2 \\
\dfrac{d x_2}{dt} =\amp -0.75 x_1 +1.25 x_2
\end{align*}
Using our tools from earlier, we can see that the solutions of this system can be written in the vector form
\begin{align*}
\vec{x}(t) =\amp \colvec{x_1(t)\\x_2(t)} = \sum_{j=1}^2 c_j e^{\lambda_j t} \vec{v}_j\\
=\amp c_1 e^{\frac{1}{2}t}\colvec{1\\1}+c_2 e^{2t} \colvec{1 \\-1}
\end{align*}
If we wanted to find the particular solution with \(\vec{x}(0)=\colvec{1\\2}\text{,}\) then we need to solve
\begin{equation*}
\colvec{1\\2} =c_1 \colvec{1\\1}+c_2 \colvec{1 \\-1} \text{,}
\end{equation*}
which has a solution of \(c_1=\frac{3}{2}\) and \(c_2= -\frac{1}{2}\text{.}\) So the particular solution with \(\vec{x}(0)=\colvec{1\\2}\) is
\begin{equation*}
\vec{x}=\frac{3}{2} e^{\frac{1}{2}t}\colvec{1\\1}-\frac{1}{2} e^{2t} \colvec{1 \\-1}\text{.}
\end{equation*}
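A quick numerical check of this particular solution (a sketch assuming SciPy): integrate \(\dfrac{d\vec{x}}{dt}=A\vec{x}\) from \(\vec{x}(0)=\colvec{1\\2}\) and compare with the closed form.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.25, -0.75], [-0.75, 1.25]])

def closed_form(t):
    # the particular solution found above: (3/2) e^{t/2} (1,1) - (1/2) e^{2t} (1,-1)
    return 1.5 * np.exp(0.5 * t) * np.array([1.0, 1.0]) \
         - 0.5 * np.exp(2.0 * t) * np.array([1.0, -1.0])

sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), [1.0, 2.0], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1], closed_form(1.0))   # the two should agree to several digits
```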
Activity 3.3.2.
To spark a deeper interest in the study of linear systems, let us borrow a classic example about relationships, first presented by Steven Strogatz in 1988 and further illustrated by J. C. Sprott in 2004.
We all know the story of Romeo and Juliet. In our situation, Romeo is desperately in love with Juliet, but Juliet is more fickle than what Shakespeare had in mind. In fact, the more Romeo loves Juliet, the more Juliet wants to run away and hide. This discourages Romeo, and he starts to love Juliet less; strangely, this is exactly the moment that Juliet finds Romeo more attractive and begins to love him. Romeo then notices that Juliet is warming up to him and begins to warm up to her as well. Will the eager beaver (Romeo) ever find true love with the ever fickle and cautious lover (Juliet)?
Let \(R(t)\) and \(J(t)\) be the love (or hate, when negative) that Romeo and Juliet, respectively, have for the other at time \(t\text{.}\) Let \(a, b, c, d \gt 0\text{.}\) The love/hate relationship is governed by the dynamical system:
\begin{align*}
\dfrac{dR}{dt}=\amp aR+bJ\\
\dfrac{dJ}{dt}=\amp -cR+dJ
\end{align*}
Consider the case where \(a=0\text{,}\) \(b = c = d = 1\) and answer the following:
- Determine the eigenvalues and eigenvectors for the coefficient matrix.
- Write down the general solution using the eigenvalues and eigenvectors.
- Use Euler’s formula to simplify the result completely and determine the real-valued solution.
- Plot \(R(t)\) and \(J(t)\) over time. Plot \(R(t)\) versus \(J(t)\text{.}\) Contrast this to a phase portrait.
- How does the situation change if \(a = 3\text{?}\) What of \(a \gt 3\text{?}\)
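If you would like to explore this activity numerically, the following sketch (assuming NumPy, SciPy, and Matplotlib; the initial condition is a made-up choice) computes the eigenvalues for the \(a=0\) case and plots \(R(t)\) and \(J(t)\) so you can compare against your hand work.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, d = 0.0, 1.0, 1.0, 1.0
A = np.array([[a, b], [-c, d]])
print(np.linalg.eig(A))               # eigenvalues/eigenvectors to compare with your hand work

sol = solve_ivp(lambda t, v: A @ v, (0.0, 10.0), [1.0, 0.0],
                dense_output=True, max_step=0.05)
ts = np.linspace(0.0, 10.0, 400)
R, J = sol.sol(ts)

plt.plot(ts, R, label='R(t)')
plt.plot(ts, J, label='J(t)')
plt.legend()
plt.show()
```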
Activity 3.3.3.
(a)
For each of the matrices below, state the general solution for the system of differential equations given by \(\dfrac{d\vec{x}}{dt}=A\vec{x}\) and find the particular solution with \(\vec{x}(1)=\colvec{-1\\2}\text{.}\)
- \(\displaystyle A_1=\begin{bmatrix} 1 \amp 2 \\ 2 \amp 3 \end{bmatrix}\)
- \(\displaystyle A_2=\begin{bmatrix} 1 \amp 1 \\ -1 \amp 1 \end{bmatrix}\)
- \(\displaystyle A_3=\begin{bmatrix} 0.5 \amp 1.5 \\ 1.5 \amp 0.5 \end{bmatrix}\)
- \(\displaystyle A_4=\begin{bmatrix} 3 \amp 2 \\ 1 \amp 1 \end{bmatrix}\)
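After working these by hand, a short sketch like the following (assuming NumPy) can be used to check your eigenvalues and eigenvectors.

```python
import numpy as np

matrices = {
    'A1': np.array([[1.0, 2.0], [2.0, 3.0]]),
    'A2': np.array([[1.0, 1.0], [-1.0, 1.0]]),
    'A3': np.array([[0.5, 1.5], [1.5, 0.5]]),
    'A4': np.array([[3.0, 2.0], [1.0, 1.0]]),
}
for name, A in matrices.items():
    lams, V = np.linalg.eig(A)
    print(name, lams)   # eigenvalues; the corresponding eigenvectors are the columns of V
```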
Subsection 3.3.2 Converting Higher Order DEs to Systems
Since we have such a nice description and clean algebra for solving systems of differential equations of the form \(\dfrac{d \vec{x}}{dt}=A \vec{x}\text{,}\) it is common to rewrite other types of problems as systems of first order differential equations. For example, if we consider the damped harmonic oscillator (an object moving on a spring with friction), then the system follows the differential equation
\begin{equation*}
m \dfrac{d^2 x}{dt^2} = -\alpha \dfrac{d x}{dt}- k x
\end{equation*}
where \(x(t)\) is the function of time that measures the position of the object (as measured from the rest length of the spring), \(m\) is the mass of the object, \(k\) is the spring constant, and \(\alpha\) is the coefficient of friction for the object. This is called a second order differential equation because it involves a second derivative of the unknown function, but we can rewrite it as a first order differential equation for the vector \(\vec{y}(t) =\colvec{\dfrac{dx}{dt}(t) \\ x(t)}\text{.}\) In particular,
\begin{equation*}
\dfrac{d\vec{y}}{dt}=\colvec{-\frac{\alpha}{m} \dfrac{dx}{dt}- \frac{k}{m} x \\ \dfrac{dx}{dt}}
\end{equation*}
This takes the form \(\dfrac{d\vec{y}}{dt}=A\vec{y}\) for \(A=\begin{bmatrix} -\frac{\alpha}{m} \amp - \frac{k}{m} \\ 1 \amp 0 \end{bmatrix}\text{.}\) Thus, by analyzing the second component of our solution \(\vec{y}(t)\text{,}\) we can give the solution to \(m \dfrac{d^2 x}{dt^2} = -\alpha \dfrac{d x}{dt}- k x\text{.}\)
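A minimal numerical sketch of this conversion (assuming SciPy; the mass, friction coefficient, spring constant, and initial condition are made-up values) builds the matrix above and integrates the first order system.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, alpha, k = 1.0, 0.5, 4.0          # hypothetical mass, friction coefficient, spring constant
A = np.array([[-alpha / m, -k / m],
              [1.0, 0.0]])

y0 = [0.0, 1.0]                       # y = (dx/dt, x): start at rest with x(0) = 1
sol = solve_ivp(lambda t, y: A @ y, (0.0, 10.0), y0, dense_output=True)
print(sol.sol(10.0)[1])               # the second component is x(10)
```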
Example 3.3.6.
If we consider the second-order, ordinary differential equation given by
\begin{equation*}
\dfrac{d^2 x}{dt^2} = - 5 \dfrac{d x}{dt}- 2 x
\end{equation*}
then we are trying to find a scalar function \(x(t)\) that satisfies the second order equation. The corresponding first-order, vector differential equation will be
\begin{equation*}
\dfrac{d\vec{y}}{dt}=\colvec{-5 \dfrac{dx}{dt}- 2 x \\ \dfrac{dx}{dt}}= \begin{bmatrix} -5 \amp - 2 \\ 1 \amp 0 \end{bmatrix} \colvec{\frac{dx}{dt} \\ x}
\end{equation*}
The corresponding matrix \(\begin{bmatrix} -5 \amp -2 \\ 1 \amp 0 \end{bmatrix}\) has eigenvalues \(\frac{-5 \pm \sqrt{17}}{2}\) with corresponding eigenvectors \(\colvec{-5\pm \sqrt{17} \\ 2}\text{.}\) So the general solution to our vector equation will be
\begin{equation*}
\vec{y}(t)=c_1 e^{(\frac{-5 + \sqrt{17}}{2})t} \colvec{-5+ \sqrt{17} \\ 2}+ c_2 e^{(\frac{-5 - \sqrt{17}}{2})t} \colvec{-5- \sqrt{17} \\ 2}
\end{equation*}
The solution to
\begin{equation*}
\dfrac{d^2 x}{dt^2} = - 5 \dfrac{d x}{dt}- 2 x
\end{equation*}
will be the second component of our vector solution, namely \(x(t)=c_1 e^{(\frac{-5 + \sqrt{17}}{2})t} (2) + c_2 e^{(\frac{-5 - \sqrt{17}}{2})t} (2)\text{.}\) You can write this a little more simply by absorbing the factors of \(2\) into \(c_1\) and \(c_2\) to get \(x(t)=c_1 e^{(\frac{-5 + \sqrt{17}}{2})t} + c_2 e^{(\frac{-5 - \sqrt{17}}{2})t} \text{.}\)
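As a sanity check, here is a sketch (assuming SciPy, with made-up initial data \(x(0)=1\) and \(\dfrac{dx}{dt}(0)=0\)) that integrates the first-order system and compares its second component with the closed form.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-5.0, -2.0], [1.0, 0.0]])
r1, r2 = (-5 + np.sqrt(17)) / 2, (-5 - np.sqrt(17)) / 2

# made-up initial data: x(0) = 1 and dx/dt(0) = 0, so c1 + c2 = 1 and r1 c1 + r2 c2 = 0
c1 = -r2 / (r1 - r2)
c2 = 1.0 - c1

sol = solve_ivp(lambda t, y: A @ y, (0.0, 3.0), [0.0, 1.0], rtol=1e-9, atol=1e-12)
x_numeric = sol.y[1, -1]
x_closed = c1 * np.exp(r1 * 3.0) + c2 * np.exp(r2 * 3.0)
print(x_numeric, x_closed)            # these should agree to several digits
```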
What is the long term behavior of these solutions? Does the behavior depend on \(c_1\) and \(c_2\text{?}\)