
2.5 Asymptotic behaviour

We shall discuss in this section the long term behaviour of solutions of the linear system

\begin{displaymath}x' = Ax \, . \qquad (2.6)
\end{displaymath}

For any $x_{0} \in R^{n}$, the solution of (2.6) subject to the initial condition $x(0) = x_{0}$ is given by

\begin{displaymath}x(t) = e^{At} x_{0} \, .
\end{displaymath}

Clearly, for each fixed t, $e^{At}$ can be viewed as a mapping from $R^{n}$ to $R^{n}$. The one-parameter family of mappings

\begin{displaymath}e^{At} : R^{n} \to R^{n}, \quad t \in R \qquad (2.7)
\end{displaymath}

is called the flow of the linear system (2.6).

It is easy to verify that the flow $\phi_{t} = e^{At}$ satisfies

1.
$\phi_{0} = I$,

2.
$\phi_{t+s} = \phi_{t} \circ \phi_{s}, \quad \forall \, t,
s \in R$ (semigroup property).
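Both properties of the flow can be checked numerically. The sketch below (assuming NumPy and SciPy are available; the matrix A is an arbitrary illustrative choice) computes $\phi_{t} = e^{At}$ with `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative matrix; any real square A works.
A = np.array([[-2.0, -1.0],
              [1.0, -2.0]])

def phi(t):
    """Flow of x' = Ax: phi_t = e^{At}."""
    return expm(A * t)

# Property 1: phi_0 = I.
assert np.allclose(phi(0.0), np.eye(2))

# Property 2 (semigroup): phi_{t+s} = phi_t composed with phi_s.
t, s = 0.7, 1.3
assert np.allclose(phi(t + s), phi(t) @ phi(s))
```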


Definition 2.5.1:     System (2.6) is called

1.
hyperbolic if all the eigenvalues of A have nonzero real parts;

2.
a sink (source) if all the eigenvalues of A have negative (positive) real parts;

3.
a saddle if it is hyperbolic but neither a sink nor a source.
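Definition 2.5.1 translates directly into a test on the real parts of the eigenvalues. A minimal sketch in Python (the function name `classify` and the tolerance are illustrative choices, not from the text):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify x' = Ax per Definition 2.5.1 by eigenvalue real parts."""
    re = np.real(np.linalg.eigvals(A))
    if np.all(re < -tol):
        return "sink"
    if np.all(re > tol):
        return "source"
    if np.all(np.abs(re) > tol):
        return "saddle"       # hyperbolic, but mixed signs
    return "nonhyperbolic"    # some eigenvalue on the imaginary axis

assert classify(np.diag([-1.0, -2.0])) == "sink"
assert classify(np.diag([1.0, -2.0])) == "saddle"
assert classify(np.array([[0.0, -1.0], [1.0, 0.0]])) == "nonhyperbolic"
```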

From Theorem 2.4.4, we see that the following results hold.


Theorem 2.5.1:     The origin is a sink of (2.6) iff $\forall \; x_{0} \in R^{n}$

\begin{displaymath}\lim_{t \to \infty} e^{At} x_{0} = 0 \, .
\end{displaymath}


Theorem 2.5.2:     The origin is a source of (2.6) iff $\forall \; x_{0} \in R^{n}\backslash \{ 0 \}$

\begin{displaymath}\lim_{t \to \infty} \mid e^{At} x_{0} \mid \, = \infty \, .
\end{displaymath}
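Both theorems can be illustrated numerically. Here the illustrative matrix has eigenvalues $-2 \pm i$, so the origin is a sink for $x' = Ax$ and a source for $x' = -Ax$ (a sketch assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative sink: eigenvalues of A_sink are -2 +/- i (negative real parts).
A_sink = np.array([[-2.0, -1.0],
                   [1.0, -2.0]])
x0 = np.array([3.0, -4.0])

# Theorem 2.5.1: e^{At} x0 -> 0 as t -> infinity.
assert np.linalg.norm(expm(A_sink * 10.0) @ x0) < 1e-6

# Theorem 2.5.2: for the source x' = -A_sink x (eigenvalues 2 +/- i),
# |e^{At} x0| -> infinity.
assert np.linalg.norm(expm(-A_sink * 10.0) @ x0) > 1e6
```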

In general, let $\lambda_{j}$, $j = 1, \ldots, \gamma$, be the distinct eigenvalues of A with multiplicities $n_{j}$, where $\sum_{j=1}^{\gamma} n_{j} = n$. Then corresponding to each $\lambda_{j}$ there are $m_{j}$ linearly independent eigenvectors and $n_{j}$ linearly independent generalized eigenvectors, with $m_{j} \leq n_{j}$. The number $m_{j}$ is called the index of $\lambda_{j}$. Clearly, A is diagonalizable iff $m_{j} = n_{j}$ for all $j = 1, \ldots, \gamma$. It is easy to see that the real parts of the eigenvalues of A determine the asymptotic behaviour of solutions of (2.6). Thus, we give the following definition.


Definition 2.5.2:     Let $\lambda_{j} = \alpha_{j} + i \beta_{j}$, $j=1, \ldots , n$, be the eigenvalues of A repeated according to their multiplicity and $w_{j} = u_{j} + i v_{j}$, $j=1, \ldots , n$, be the corresponding generalized eigenvectors, where $\beta_{j} = 0$ and $v_{j} = 0$ if $\lambda_{j}$ is a real eigenvalue. Then we define the stable subspace $E^{s}$, centre subspace $E^{c}$ and unstable subspace $E^{u}$ of (2.6) by

\begin{displaymath}E^{s} = {\rm span} \; \{ u_{j}, v_{j} \, \mid \,
\alpha_{j} < 0 \}
\end{displaymath}


\begin{displaymath}E^{c} = {\rm span} \; \{ u_{j}, v_{j} \, \mid \,
\alpha_{j} = 0 \}
\end{displaymath}


\begin{displaymath}E^{u} = {\rm span} \; \{ u_{j}, v_{j} \, \mid \,
\alpha_{j} > 0 \} \, .
\end{displaymath}

Clearly, $R^{n} = E^{s} \oplus E^{c} \oplus E^{u}$; the origin is a sink when $E^{s} = R^{n}$, a source when $E^{u} = R^{n}$, and a saddle when $E^{c} = \{ 0 \}$ but neither $E^{s}$ nor $E^{u}$ is all of $R^{n}$.
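Since $\dim E^{s}$, $\dim E^{c}$ and $\dim E^{u}$ equal the numbers of eigenvalues (counted with multiplicity) with negative, zero and positive real parts, the direct-sum decomposition can be checked by counting. A small sketch (the function name `subspace_dims` is an illustrative choice):

```python
import numpy as np

def subspace_dims(A, tol=1e-9):
    """Dimensions of E^s, E^c, E^u for x' = Ax, counted from the
    real parts of the eigenvalues (with multiplicity)."""
    re = np.linalg.eigvals(A).real
    return (int(np.sum(re < -tol)),
            int(np.sum(np.abs(re) <= tol)),
            int(np.sum(re > tol)))

# Block matrix with eigenvalues -2 +/- i and 3.
A = np.array([[-2.0, -1.0, 0.0],
              [1.0, -2.0, 0.0],
              [0.0, 0.0, 3.0]])
ds, dc, du = subspace_dims(A)
assert (ds, dc, du) == (2, 0, 1)   # dims sum to n = 3, as R^3 = E^s + E^c + E^u
```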

From linear algebra, we know that a generalized eigenspace E of A, i.e. the span of the generalized eigenvectors corresponding to an eigenvalue $\lambda$, is A-invariant, i.e. $AE \subset E$. For the linear system (2.6), we have a similar concept.


Definition 2.5.4:     A subspace $E
\subset R^{n}$ is said to be an invariant subspace of (2.6) if $e^{At} E \subset E$ for all $t \in R$.


Theorem 2.5.3:     The stable subspace Es, centre subspace Ec and unstable subspace Eu are invariant subspaces of (2.6).


Proof:     Let $x_{0} \in E^{s}$. Then $x_{0}$ is a linear combination of the (real and imaginary parts of the) generalized eigenvectors corresponding to eigenvalues with negative real parts. Each generalized eigenspace of A is invariant under A, hence under every power of A, and hence under $e^{At} = \sum_{k=0}^{\infty} (At)^{k}/k!$. Thus for all $t \in R$, $e^{At} x_{0} \in E^{s}$ and therefore $e^{At} E^{s} \subset E^{s}$, i.e. Es is invariant. Similarly, we can show that Ec and Eu are invariant subspaces of (2.6).
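The invariance of $E^{s}$ can also be observed numerically for a hypothetical block matrix whose stable subspace is the $x_{1}x_{2}$-plane (a sketch assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical block matrix: eigenvalues -2 +/- i and 3, so E^s is the
# x1x2-plane and E^u is the x3-axis.
A = np.array([[-2.0, -1.0, 0.0],
              [1.0, -2.0, 0.0],
              [0.0, 0.0, 3.0]])
Es = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])   # basis of E^s as columns

for t in (-1.0, 0.5, 3.0):
    image = expm(A * t) @ Es
    # Third coordinate stays zero: e^{At} E^s remains inside the x1x2-plane.
    assert np.allclose(image[2, :], 0.0)
```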


Theorem 2.5.4:     If the origin is a saddle point of (2.6), then $R^{n} = E^{s} \oplus E^{u}$ and

1.
$\forall \; x_{0} \in E^{s}$, $e^{At} x_{0} \in E^{s}$ for all $t \in R$ and $\lim_{t \to \infty} e^{At} x_{0} = 0$.

2.
$\forall \; x_{0} \in E^{u}$, $e^{At} x_{0} \in E^{u}$ for all $t \in R$ and $\lim_{t \to \infty} \mid e^{At} x_{0} \mid \, = \infty$ if $x_{0} \neq 0$.


Example 2.5.1:     Discuss the asymptotic behaviour of solutions of (2.6) with

\begin{displaymath}A = \left[ \begin{array}{rrl}
-2 & -1 & 0 \\
1 & -2 & 0 \\
0 & 0 & 3 \end{array} \right] \, .
\end{displaymath}


Solution:     A has eigenvalues $\lambda_{1} = - 2 + i$ (together with its conjugate $-2-i$) and $\lambda_{2} = 3$, with corresponding eigenvectors

$w_{1} = u_{1} + i v_{1} = \left( \begin{array}{c}
0 \\ 1 \\ 0 \end{array} \right) + i \left(
\begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right)$ and $w_{2} = \left( \begin{array}{c}
0 \\ 0 \\ 1 \end{array} \right)$. Then

\begin{displaymath}E^{s} = {\rm span} \; \left\{ \left( \begin{array}{c}
0 \\ 1 \\ 0 \end{array} \right), \left( \begin{array}{c}
1 \\ 0 \\ 0 \end{array} \right) \right\} = x_{1} x_{2}\mbox{-plane},
\end{displaymath}


\begin{displaymath}E^{u} = {\rm span} \; \left\{ \left( \begin{array}{c}
0 \\ 0 \\ 1 \end{array} \right) \right\} = \, \mbox{the}
\quad x_{3}\mbox{-axis, and}
\end{displaymath}

the origin is a saddle point. The phase portrait is shown on the right.
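Theorem 2.5.4 can be checked numerically for this A: solutions starting in $E^{s}$ decay to the origin, while those starting in $E^{u} \backslash \{0\}$ blow up (a sketch assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, -1.0, 0.0],
              [1.0, -2.0, 0.0],
              [0.0, 0.0, 3.0]])

xs = np.array([1.0, 2.0, 0.0])   # initial point in E^s (x1x2-plane)
xu = np.array([0.0, 0.0, 1.0])   # initial point in E^u (x3-axis)

# Solutions starting in E^s decay to the origin ...
assert np.linalg.norm(expm(A * 10.0) @ xs) < 1e-6
# ... while solutions starting in E^u \ {0} blow up (here |x(10)| = e^{30}).
assert np.linalg.norm(expm(A * 10.0) @ xu) > 1e6
```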



Example 2.5.2:     Discuss the asymptotic behaviour of solutions of (2.6) with

\begin{displaymath}A = \left[ \begin{array}{lrl}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 2 \end{array} \right] \, .
\end{displaymath}


Solution:     A has eigenvalues $\lambda_{1} = 2$ and $\lambda_{2} = i$ (together with its conjugate -i), with eigenvectors w1 = (0,0,1)T and w2 = (0,1,0)T +i (1,0,0)T.

\begin{displaymath}E^{u} = {\rm span} \; \{ (0,0,1)^{T} \} =
x_{3}\mbox{-axis}, \quad E^{c} = {\rm span} \; \{
(0,1,0)^{T}, (1,0,0)^{T} \} = x_{1} x_{2}\mbox{-plane}.
\end{displaymath}

The flow $e^{At}$ is nonhyperbolic. All solutions starting in Ec are bounded, and if $x(0) \neq 0$ they are nontrivial periodic solutions with period $2 \pi$, i.e. $x(t +2 \pi ) = x(t)$ for all $t \in R$. In general, solutions in Ec need not be bounded, e.g.

\begin{displaymath}A = \left[ \begin{array}{cc}
0 & 0 \\ 1 & 0 \end{array} \right], \quad
\left\{ \begin{array}{l} x_{1}' = 0 \\ x_{2}' = x_{1} \end{array} \right.
\quad \Longrightarrow \quad
\left\{ \begin{array}{l} x_{1} \equiv c_{1} \\ x_{2} \equiv c_{1}t + c_{2}. \end{array} \right.
\end{displaymath}
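For this nilpotent matrix, $A^{2} = 0$, so the exponential series terminates at $e^{At} = I + At$ and the linear growth of $x_{2}$ is visible directly (a sketch assuming SciPy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# Nilpotent matrix: A @ A = 0, both eigenvalues are 0, so E^c = R^2.
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])

t = 5.0
# Since A^2 = 0, the series for e^{At} terminates: e^{At} = I + At.
assert np.allclose(expm(A * t), np.array([[1.0, 0.0],
                                          [t, 1.0]]))

# Solution through (c1, c2): x(t) = (c1, c1*t + c2), unbounded if c1 != 0.
c1, c2 = 1.0, 0.0
x = expm(A * t) @ np.array([c1, c2])
assert np.allclose(x, [c1, c1 * t + c2])
```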




Since the flow $e^{At}$ is determined by the eigenvalues and the generalized eigenspaces of A, many apparently different linear systems may have flows with identical asymptotic behaviour. Also, for any k > 0, $e^{At}$ and $e^{kAt}$ have the same phase portrait in the phase space $R^{n}$. Thus it is natural to classify all the linear systems by the following definition.


Definition 2.5.5:     Two linear systems $x' = A_{1}x$ and $x' = A_{2}x$ in $R^{n}$ are said to be linearly equivalent if there exist k > 0 and a nonsingular matrix P such that

\begin{displaymath}A_{2} = k P^{-1} A_{1} P \, .
\end{displaymath}

By this definition, all linear systems in $R^{n}$ can be classified into a finite number of equivalence classes. For example, there are 14 equivalence classes in $R^{2}$.
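Linear equivalence rescales the spectrum: if $A_{2} = kP^{-1}A_{1}P$, the eigenvalues of $A_{2}$ are k times those of $A_{1}$, so since k > 0 the sink/source/saddle type is preserved. A quick numerical check (the particular $A_{1}$, P and k are illustrative choices):

```python
import numpy as np

# Illustrative data: A1 has eigenvalues -2 +/- i.
A1 = np.array([[-2.0, -1.0],
               [1.0, -2.0]])
P = np.array([[1.0, 2.0],
              [3.0, 5.0]])   # det = -1, so P is nonsingular
k = 3.0

A2 = k * np.linalg.inv(P) @ A1 @ P

# Eigenvalues of A2 are k times those of A1 (here -6 +/- 3i), so the
# sign pattern of the real parts, hence the type, is unchanged.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A2)),
                   np.sort_complex(k * np.linalg.eigvals(A1)))
```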

The 14 equivalence classes in $R^{2}$

[Table of the 14 equivalence classes, grouped into hyperbolic systems ($Re (\lambda_{i}) \neq 0$) and nonhyperbolic systems ($Re (\lambda_{1}) Re( \lambda_{2}) = 0$, beginning with 10. centre (0)); the remaining entries are not recoverable from this copy.]


Note: The number or symbol in brackets ( ) is the number of nontrivial invariant subspaces.

