
2.4 Multiple eigenvalues

Consider the IVP

\begin{displaymath}x' = Ax, \quad x(0) = x_{0} \qquad (2.5)
\end{displaymath}

for the case when A has multiple eigenvalues.


Definition 2.4.1     Let $\lambda$ be an eigenvalue of the $n \times n$ matrix A of multiplicity $m \leq n$. Then for $k=1, \ldots ,
m$, any non-zero solution v of

\begin{displaymath}(A-\lambda I)^{k} v=0
\end{displaymath}

is called a generalized eigenvector of A.


Exercise 2.4.1:     Let $v_{1}$ be a non-zero solution of

\begin{displaymath}(A-\lambda I) v = 0
\end{displaymath}

and $v_{2}$ be a non-zero solution of

\begin{displaymath}(A-\lambda I) v = v_{1} \, .
\end{displaymath}

Then show that $v_{1}$ and $v_{2}$ are linearly independent generalized eigenvectors.
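A quick numerical illustration of this chain of vectors (not a substitute for the proof), using the hypothetical $2 \times 2$ matrix $A = \left[ \begin{array}{cc} 2 & 1 \\ 0 & 2 \end{array} \right]$ with the double eigenvalue $\lambda = 2$:

```python
# Illustration of Exercise 2.4.1 for the (hypothetical, illustrative)
# matrix A = [[2, 1], [0, 2]], eigenvalue lambda = 2.
A = [[2.0, 1.0], [0.0, 2.0]]
lam = 2.0
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]        # B = A - lambda*I

def apply(M, v):
    """2x2 matrix-vector product."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

v1 = [1.0, 0.0]    # ordinary eigenvector:   (A - lam I) v1 = 0
v2 = [0.0, 1.0]    # generalized eigenvector: (A - lam I) v2 = v1

assert apply(B, v1) == [0.0, 0.0]
assert apply(B, v2) == v1
# linear independence: det of the matrix with columns v1, v2 is nonzero
det = v1[0]*v2[1] - v1[1]*v2[0]
assert det != 0.0
```

The assertions check exactly the two defining equations of the exercise and the independence of the resulting pair.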


Lemma 2.4.1:     Let A be a real $n \times n$ matrix with real eigenvalues $\lambda_{1}, \ldots , \lambda_{n}$ repeated according to their multiplicity. Then there exist n generalized eigenvectors $\{ v_{1}, \ldots , v_{n} \}$ of A, such that $P = [ v_{1}, \ldots ,
v_{n} ]$ is nonsingular and A = S + N with

\begin{displaymath}P^{-1} SP = {\rm diag} \; [\lambda_{j}, \; j = 1,2, \ldots ,
n ] \, , \quad SN = NS \quad \mbox{and} \quad N^{k+1} = 0 \quad \mbox{for some}
\quad k < n \, .
\end{displaymath}


Proof: (omitted)


Remark:      $J = P^{-1} AP$ is called the Jordan canonical form of A.


Theorem 2.4.1:     Let A be a real $n \times n$ matrix with real eigenvalues $\lambda_{1}, \ldots , \lambda_{n}$ repeated according to their multiplicity. Then the IVP (2.5) has a unique solution given by

\begin{displaymath}x(t) = P {\rm diag} \; [e^{\lambda_{j}t}] P^{-1} \left[ I
+ Nt +\cdots + \frac{N^{k}t^{k}}{k!} \right] x_{0} \, .
\end{displaymath}


Corollary 2.4.1:     If $\lambda$ is a real eigenvalue of multiplicity n of an $n \times n$ matrix A, then

\begin{displaymath}S = {\rm diag} \; [ \lambda ] = \lambda I, \quad N = A -
\lambda I
\end{displaymath}

and

\begin{displaymath}x(t) = e^{\lambda t} \left[ I+Nt+ \cdots + \frac{N^{k}
t^{k}}{k!} \right] x_{0}
\end{displaymath}

is a solution of the IVP (2.5).


Example 2.4.1:     Solve the IVP (2.5) with

\begin{displaymath}A = \left[ \begin{array}{lrrr}
0 & -2 & -1 & -1 \\
1 & 2 & 1 & 1 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \end{array} \right] \, .
\end{displaymath}


Solution:     It is easy to solve $\det
(A-\lambda I) = 0$ and find that A has the eigenvalue $\lambda =1$ of multiplicity 4. Thus $S=I$ and

\begin{displaymath}N=A-S= \left[ \begin{array}{rrrr}
-1 & -2 & -1 & -1 \\
1 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \end{array} \right]
\end{displaymath}


\begin{displaymath}N^{2} = \left[ \begin{array}{rrrr}
-1 & -1 & -1 & -1 \\
0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 \end{array} \right] \quad \mbox{and} \quad
N^{3} = 0 \, .
\end{displaymath}

Thus by Corollary 2.4.1,

\begin{eqnarray*}x(t) & = & e^{t} \left[ I+Nt+ \frac{N^{2}t^{2}}{2!} \right]
x_{0} \\
& = & e^{t} \left[ \begin{array}{cccc}
1-t-\frac{t^{2}}{2} & -2t-\frac{t^{2}}{2} & -t-\frac{t^{2}}{2} & -t-\frac{t^{2}}{2} \\
t & 1+t & t & t \\
\frac{t^{2}}{2} & t+\frac{t^{2}}{2} & 1+\frac{t^{2}}{2} & \frac{t^{2}}{2} \\
0 & 0 & 0 & 1 \end{array} \right] x_{0} \, .
\end{eqnarray*}
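As a sanity check (not part of the original argument), the closed form above can be compared numerically with a truncated Taylor series of $e^{At}$. The following Python sketch copies A from the example; the time value and tolerance are illustrative:

```python
# Numerical check of Example 2.4.1: e^{At} = e^t (I + Nt + N^2 t^2 / 2),
# since S = I and N = A - I is nilpotent with N^3 = 0.
import math

n = 4
A = [[0, -2, -1, -1],
     [1,  2,  1,  1],
     [0,  1,  1,  0],
     [0,  0,  0,  1]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def mscale(c, X):
    return [[c * X[i][j] for j in range(n)] for i in range(n)]

I = [[float(i == j) for j in range(n)] for i in range(n)]
N = madd(A, mscale(-1.0, I))               # N = A - S with S = I
N2 = mmul(N, N)
N3 = mmul(N2, N)
assert max(abs(v) for row in N3 for v in row) < 1e-12   # N^3 = 0

t = 0.7
closed = mscale(math.exp(t),
                madd(madd(I, mscale(t, N)), mscale(t * t / 2.0, N2)))

# brute-force partial sum of e^{At} = sum_k (At)^k / k!
series, term = I, I
for k in range(1, 30):
    term = mmul(term, mscale(t / k, A))
    series = madd(series, term)

err = max(abs(closed[i][j] - series[i][j])
          for i in range(n) for j in range(n))
assert err < 1e-9
```

Because N is nilpotent, the series for $e^{Nt}$ terminates after three terms, which is why the closed form is exact rather than an approximation.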


In the general case, we must first determine a basis of generalized eigenvectors for $R^{n}$ and compute $S = P {\rm diag} \; [ \lambda_{j}] P^{-1}$ and N = A - S.


Example 2.4.2:     Solve the IVP (2.5) with

\begin{displaymath}A = \left[ \begin{array}{rll}
1 & 0 & 0 \\
-1 & 2 & 0 \\
1 & 1 & 2 \end{array} \right] \, .
\end{displaymath}


Solution:     A has eigenvalues $\lambda_{1} = 1$, $\lambda_{2} = \lambda_{3} = 2$. It is easy to find the corresponding eigenvectors

\begin{displaymath}v_{1} = \left[ \begin{array}{r}
1 \\ 1 \\ -2 \end{array} \right] \quad \mbox{and} \quad
v_{2} = \left[ \begin{array}{c}
0 \\ 0 \\ 1 \end{array} \right]
\end{displaymath}

and one generalized eigenvector corresponding to $\lambda = 2$, namely $v_{3} = \left[ \begin{array}{c}
0 \\ 1 \\ 0 \end{array} \right]$, which is a solution of

\begin{displaymath}\left[ \begin{array}{rll}
-1 & 0 & 0 \\
-1 & 0 & 0 \\
1 & 1 & 0 \end{array} \right] v = \left[ \begin{array}{c}
0 \\ 0 \\ 1 \end{array} \right] \, .
\end{displaymath}

Thus

\begin{displaymath}P = \left[ \begin{array}{rll}
1 & 0 & 0 \\
1 & 0 & 1 \\
-2 & 1 & 0 \end{array} \right] \quad \mbox{and} \quad
P^{-1} = \left[ \begin{array}{rll}
1 & 0 & 0 \\
2 & 0 & 1 \\
-1 & 1 & 0 \end{array} \right] \, .
\end{displaymath}


\begin{displaymath}S = P \left[ \begin{array}{lll}
1 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & 2 \end{array} \right] P^{-1} = \left[ \begin{array}{rll}
1 & 0 & 0 \\
-1 & 2 & 0 \\
2 & 0 & 2 \end{array} \right] \, .
\end{displaymath}


\begin{displaymath}N = A -S = \left[ \begin{array}{rll}
0 & 0 & 0 \\
0 & 0 & 0 \\
-1 & 1 & 0 \end{array} \right] \, , \quad N^{2} = 0 \, .
\end{displaymath}

Thus by Theorem 2.4.1

\begin{eqnarray*}x(t) & = & P \left[ \begin{array}{lll}
e^{t} & 0 & 0 \\
0 & e^{2t} & 0 \\
0 & 0 & e^{2t} \end{array} \right] P^{-1} \left[ I + Nt \right] x_{0} \\
& = & \left[ \begin{array}{ccc}
e^{t} & 0 & 0 \\
e^{t} - e^{2t} & e^{2t} & 0 \\
-2e^{t} + (2-t)e^{2t} & te^{2t} & e^{2t} \end{array}\right] x_{0} \, .
\end{eqnarray*}
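The decomposition in this example can likewise be checked numerically. The Python sketch below (matrices copied from the example; time value and tolerance illustrative) verifies $A = S + N$, $SN = NS$, $N^{2} = 0$, and spot-checks the $(3,1)$ entry of the solution matrix against a truncated Taylor series of $e^{At}$:

```python
# Numerical check of Example 2.4.2: the S-N decomposition and the
# entry -2 e^t + (2 - t) e^{2t} of the solution matrix.
import math

n = 3
A = [[ 1, 0, 0],
     [-1, 2, 0],
     [ 1, 1, 2]]
S = [[ 1, 0, 0],
     [-1, 2, 0],
     [ 2, 0, 2]]
N = [[ 0, 0, 0],
     [ 0, 0, 0],
     [-1, 1, 0]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

assert all(A[i][j] == S[i][j] + N[i][j] for i in range(n) for j in range(n))
assert mmul(S, N) == mmul(N, S)                    # S and N commute
assert mmul(N, N) == [[0] * n for _ in range(n)]   # N^2 = 0

t = 0.5
# brute-force partial sum of e^{At}
series = [[float(i == j) for j in range(n)] for i in range(n)]
term = [row[:] for row in series]
for k in range(1, 30):
    term = mmul(term, [[t / k * A[i][j] for j in range(n)] for i in range(n)])
    series = [[series[i][j] + term[i][j] for j in range(n)] for i in range(n)]

entry = -2.0 * math.exp(t) + (2.0 - t) * math.exp(2.0 * t)  # (3,1) entry
assert abs(series[2][0] - entry) < 1e-9
```

The commutativity $SN = NS$ is what justifies writing $e^{At} = e^{St} e^{Nt}$ in the first place.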


In the case of multiple complex eigenvalues, we have the following lemma, whose proof can be found in Hirsch and Smale, Appendix III.


Lemma 2.4.2     Let A be a $2m \times 2m$ matrix with complex eigenvalues $\lambda_{j} = \alpha_{j} + i \beta_{j}$ and $\ol{\lambda}_{j} = \alpha_{j} - i\beta_{j}$, $j = 1, 2, \ldots , m$, repeated according to their multiplicity. Then there exist 2m generalized eigenvectors $w_{j} = u_{j} + iv_{j}$ and $\ol{w}_{j} = u_{j}
- iv_{j}$, $j=1,
\ldots , m$ such that $P = [ u_{1} v_{1}
\ldots u_{m} v_{m}]$ is nonsingular and

A = S+N

with

\begin{displaymath}P^{-1} SP = {\rm diag} \; \left\{ \left[ \begin{array}{rl}
\alpha_{j} & \beta_{j} \\
-\beta_{j} & \alpha_{j} \end{array} \right] \right\} \, ,
\quad N^{k} = 0 \quad \mbox{for some} \quad k \leq 2m \, ,
\end{displaymath}

and SN=NS.


Remark:      $J = P^{-1} AP$ is called the Jordan canonical form of A.


Theorem 2.4.2:     Under the assumptions of Lemma 2.4.2, the IVP (2.5) has a solution

\begin{displaymath}x(t) = P {\rm diag} \; \left\{ e^{\alpha_{j}t} \left[
\begin{array}{rl}
\cos \beta_{j}t & \sin \beta_{j}t \\
-\sin \beta_{j}t & \cos \beta_{j}t \end{array} \right] \right\} P^{-1}
\left[ I + Nt + \cdots + \frac{N^{k-1}
t^{k-1}}{(k-1)!} \right] x_{0} \, .
\end{displaymath}


Example 2.4.3:     Solve the IVP (2.5) with

\begin{displaymath}A = \left[ \begin{array}{lrlr}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
2 & 0 & 1 & 0 \end{array} \right] \, .
\end{displaymath}


Solution:     A has eigenvalues $\lambda =
i$ and $\ol{\lambda} = - i$, each of multiplicity 2.

Solve $(A-iI)w=0$ and get the eigenvector

\begin{displaymath}w_{1} = \left( \begin{array}{c}
0 \\ 0 \\ i \\ 1 \end{array} \right) = \left( \begin{array}{c}
0 \\ 0 \\ 0 \\ 1 \end{array} \right) + i \left(
\begin{array}{c}
0 \\ 0 \\ 1 \\ 0 \end{array} \right) \, .
\end{displaymath}

Solve $(A-iI) w = iw_{1}$ (so that $(A-iI)^{2}w = 0$) to get the generalized eigenvector

\begin{displaymath}w_{2} = \left( \begin{array}{c}
i \\ 1 \\ 0 \\ 1 \end{array} \right) = \left( \begin{array}{c}
0 \\ 1 \\ 0 \\ 1 \end{array} \right) + i \left(
\begin{array}{c}
1 \\ 0 \\ 0 \\ 0 \end{array} \right)
\end{displaymath}


\begin{displaymath}P = \left[ \begin{array}{cccc}
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \end{array} \right] \quad \mbox{and} \quad
P^{-1} = \left[ \begin{array}{rrrr}
0 & -1 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \end{array} \right]
\end{displaymath}


\begin{displaymath}S = P \left[ \begin{array}{rlrl}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0 \end{array} \right] P^{-1} = \left[ \begin{array}{rrrr}
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & -1 \\
1 & 0 & 1 & 0 \end{array} \right]
\end{displaymath}


\begin{displaymath}N=A-S= \left[ \begin{array}{rrrr}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \end{array} \right] \, , \quad N^{2} = 0.
\end{displaymath}

Thus the solution to the IVP (2.5) is

\begin{eqnarray*}x(t) & = & P \left[ \begin{array}{rrrr}
\cos t & \sin t & 0 & 0 \\
-\sin t & \cos t & 0 & 0 \\
0 & 0 & \cos t & \sin t \\
0 & 0 & -\sin t & \cos t \end{array} \right] P^{-1} \left[ I + Nt \right] x_{0} \\
& = & \left[ \begin{array}{rrrr}
\cos t & -\sin t & 0 & 0 \\
\sin t & \cos t & 0 & 0 \\
-t \sin t & \sin t - t\cos t & \cos t & -\sin t \\
\sin t + t \cos t & -t \sin t & \sin t & \cos t \end{array}\right] x_{0}.
\end{eqnarray*}
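The final matrix in this example can also be checked numerically. The Python sketch below (matrices copied from the example; time value and tolerance illustrative) compares the closed-form solution matrix built from $\cos t$ and $\sin t$ with a truncated Taylor series of $e^{At}$:

```python
# Numerical check of Example 2.4.3: the trigonometric closed form
# of e^{At} against a brute-force partial sum of the exponential series.
import math

n = 4
A = [[0, -1, 0,  0],
     [1,  0, 0,  0],
     [0,  0, 0, -1],
     [2,  0, 1,  0]]

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

t = 0.6
c, s = math.cos(t), math.sin(t)
closed = [[c,          -s,         0.0,  0.0],
          [s,           c,         0.0,  0.0],
          [-t * s,      s - t * c,   c,   -s],
          [s + t * c,  -t * s,       s,    c]]

# brute-force partial sum of e^{At}
series = [[float(i == j) for j in range(n)] for i in range(n)]
term = [row[:] for row in series]
for k in range(1, 30):
    term = mmul(term, [[t / k * A[i][j] for j in range(n)] for i in range(n)])
    series = [[series[i][j] + term[i][j] for j in range(n)] for i in range(n)]

err = max(abs(closed[i][j] - series[i][j])
          for i in range(n) for j in range(n))
assert err < 1e-9
```

Note that every entry is of the form $t^{k} \cos t$ or $t^{k} \sin t$ with $k \leq 1$, anticipating Theorem 2.4.4 below with $\alpha = 0$, $\beta = 1$.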


In the general case, we have the following result (Hirsch and Smale, p. 133).


Lemma 2.4.3     Let A be a real matrix with real eigenvalues $\lambda_{j}$, $j =
1, \ldots , k$ and complex eigenvalues $\lambda_{k+s} = \alpha_{k+s} + i \beta_{k+s}$, $\ol{\lambda}_{k+s} = \alpha_{k+s} - i \beta_{k+s}$, $s=1, \ldots , m$, repeated according to their multiplicity, where k+2m=n. Then there exist generalized eigenvectors $v_{1}, \ldots , v_{k}$, $w_{k+1} = u_{k+1} + i v_{k+1}$, $\ol{w}_{k+1} =
u_{k+1} - i v_{k+1}, \ldots , w_{k+m} = u_{k+m} + i
v_{k+m}$, $\ol{w}_{k+m} = u_{k+m} - i v_{k+m}$, such that the matrix

\begin{displaymath}P = [ v_{1}, \ldots ,v_{k}, u_{k+1}, v_{k+1}, \ldots
, u_{k+m}, v_{k+m} ]
\end{displaymath}

is nonsingular and

A = S+N

with

\begin{displaymath}P^{-1} SP = {\rm diag} \; [ \lambda_{1}, \ldots ,
\lambda_{k}, \, B_{k+1}, \ldots , B_{k+m} ]
\end{displaymath}

where $B_{j} = \left[ \begin{array}{rl}
\alpha_{j} & \beta_{j} \\
- \beta_{j} & \alpha_{j} \end{array} \right]$, $j =
k+1, \ldots , k+m$, $N^{s} = 0$ for some $s
\leq n$, and $SN=NS$.


Remark:     The matrix $J = P^{-1} AP$ is called the Jordan canonical form of A.


Theorem 2.4.3:     Under the assumption of Lemma 2.4.3 the IVP (2.5) has a solution

\begin{displaymath}x(t) = P {\rm diag} \; \Biggl\{ e^{\lambda_{1}t} , \ldots ,
e^{\lambda_{k}t} , \; e^{\alpha_{k+1}t} \left[ \begin{array}{rl}
\cos \beta_{k+1}t & \sin \beta_{k+1}t \\
-\sin \beta_{k+1}t & \cos \beta_{k+1}t \end{array}\right] , \ldots ,
e^{\alpha_{k+m} t} \left[ \begin{array}{rl}
\cos \beta_{k+m} t & \sin \beta_{k+m} t \\
-\sin \beta_{k+m} t & \cos \beta_{k+m} t \end{array}\right] \Biggr\} P^{-1}
\left[ I + Nt + \cdots +
\frac{N^{s-1} t^{s-1}}{(s-1)!} \right] x_{0} \, .
\end{displaymath}

Based upon the above theorem, we arrive at the following important conclusion.


Theorem 2.4.4:     Each coordinate in the solution x(t) of the IVP (2.5) is a linear combination of functions of the form

\begin{displaymath}t^{k} e^{\alpha t} \cos \beta t \quad \mbox{or} \quad
t^{k} e^{\alpha t} \sin \beta t, \quad 0 \leq k \leq n -1 ,
\end{displaymath}

where $\lambda = \alpha + i \beta$ is an eigenvalue of the matrix A, $\alpha \in R$, $\beta \geq 0$.

