2.13.3 Calculation of Eigenvectors

Example:

\[\tilde{A}=\left(\begin{array}{cc}1&-2\\-2&1\end{array}\right)\qquad\mbox{eigenvalues:}\quad\lambda_1=3,\;\lambda_2=-1\]
\[\left(\begin{array}{cc}1&-2\\-2&1\end{array}\right)\vect{\alpha_1\\\alpha_2}=3\cdot\vect{\alpha_1\\\alpha_2}\]

\begin{eqnarray*} &\Rightarrow&\left.\begin{array}{ccc} \alpha_1-2\alpha_2=3\alpha_1&\Rightarrow&\alpha_1+\alpha_2=0\\ -2\alpha_1+\alpha_2=3\alpha_2&\Rightarrow&\alpha_1+\alpha_2=0\end{array} \right\}\;\alpha_1+\alpha_2=0\;\mbox{or}\;\alpha_1=-\alpha_2\\ &\Rightarrow&\vec{x}=\vect{\alpha_1\\\alpha_2}=\alpha_1\vect{1\\-1}\quad\mbox{$\alpha_1$ arbitrary $\rightarrow$}\;\begin{array}{l}\mbox{eigenvectors are determined only}\\\mbox{up to an arbitrary factor.}\end{array}\\ \mbox{e.g.}\qquad \alpha_1&=&1 \qquad \vec{x}=\vect{1\\-1}\;\mbox{is an eigenvector of $\tilde A$} \end{eqnarray*}

\begin{eqnarray*}\left(\begin{array}{cc}1&-2\\-2&1\end{array}\right)\vect{\alpha_1\\\alpha_2}&=&-1\cdot\vect{\alpha_1\\\alpha_2}\;\rightarrow\;\begin{array}{c}\alpha_1-2\alpha_2=-\alpha_1\\\\-2\alpha_1+\alpha_2=-\alpha_2\end{array}\\&\rightarrow&\left.\begin{array}{c}\alpha_1-\alpha_2=0\\\alpha_1-\alpha_2=0\end{array}\right\}\rightarrow\;\alpha_1=\alpha_2\end{eqnarray*}

\begin{eqnarray*}\Rightarrow\;\vec{x}&=&\vect{\alpha_1\\\alpha_2}=\alpha_1\vect{1\\1}\;\mbox{is the other linearly independent eigenvector}\\ \rightarrow \;\vec{x}_1&=&\alpha\vect{1\\-1},\quad\vec{x}_2=\beta\vect{1\\1}\;\mbox{are two linearly independent eigenvectors of the matrix $\tilde A$}\end{eqnarray*}
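The two eigenpairs derived above can be checked numerically. A minimal sketch in plain Python (the helper `matvec` is only for illustration, not part of the lecture notes):

```python
# Check that x1 = (1, -1) and x2 = (1, 1) are eigenvectors of
# A = [[1, -2], [-2, 1]] with eigenvalues 3 and -1.

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, -2],
     [-2, 1]]

# Each pair (lambda, x) must satisfy A x = lambda x.
for lam, x in [(3, [1, -1]), (-1, [1, 1])]:
    assert matvec(A, x) == [lam * xi for xi in x], (lam, x)

print("both eigenpairs verified")
```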

In general, for the calculation of the eigenvectors of an \(N\times N\) matrix \(\tilde A\):

Example:

\[ \tilde{A}=\left(\begin{array}{cccc}1&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 1&0&0&1\end{array}\right)\,\rightarrow\,\begin{array}{c}P(\lambda)=(1-\lambda)^2\lambda(\lambda-2)\\\\ \mbox{eigenvalues: }\lambda_1=1,\;\lambda_2=0,\;\lambda_3=2\end{array}\]

Eigenvectors:

\[\vec{x}=\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}\;\rightarrow\;\left(\tilde A\right)\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}=1\cdot\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}\]

\begin{eqnarray*}&\Rightarrow&\left.\begin{array}{rcl}\alpha_1+\alpha_4&=&\alpha_1\\ \alpha_2&=&\alpha_2\quad\mbox{(arbitrary)}\\ \alpha_3&=&\alpha_3\quad\mbox{(arbitrary)}\\ \alpha_1+\alpha_4&=&\alpha_4 \end{array}\right\}\quad \alpha_4=\alpha_1=0\end{eqnarray*}

\begin{eqnarray*} &\rightarrow&\vec{x}=\vect{0\\\alpha_2\\\alpha_3\\0}\;\mbox{is an eigenvector with $\alpha_2,\alpha_3$ arbitrary! $N=4$, $k=2$ $\rightarrow\;\begin{array}{l}\alpha_1,\alpha_4\;\mbox{fixed}\\\alpha_2,\alpha_3\;\mbox{arbitrary}\end{array}$}\\ &\rightarrow&\left(\tilde A\right)\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}=0\cdot\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}\;\Rightarrow\;\begin{array}{r}\alpha_1+\alpha_4=0\\\alpha_2=0\\\alpha_3=0\\\alpha_1+\alpha_4=0\end{array}\;\Rightarrow\;\alpha_4=-\alpha_1\\ &&\rightarrow\;\vec{x}=\vect{\alpha_1\\0\\0\\-\alpha_1}=\alpha_1\vect{1\\0\\0\\-1}\qquad\alpha_1\;\mbox{arbitrary}\\ &&\left(\tilde A\right)\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}=2\vect{\alpha_1\\\alpha_2\\\alpha_3\\\alpha_4}\;\rightarrow\;\begin{array}{rclcl} \alpha_1+\alpha_4&=&2\alpha_1&&\\ \alpha_2&=&2\alpha_2&\rightarrow&\alpha_2=0\\ \alpha_3&=&2\alpha_3&\rightarrow&\alpha_3=0\\ \alpha_1+\alpha_4&=&2\alpha_4&\rightarrow&\alpha_4=\alpha_1 \end{array}\\ &&\rightarrow\;\vec{x}=\alpha\vect{1\\0\\0\\1} \end{eqnarray*}

Summary: \begin{eqnarray*}\lambda_1= 1 \quad \mbox{(2 times)}&,&\; \vec{x}=\alpha_2\vect{0\\1\\0\\0}+\alpha_3\vect{0\\0\\1\\0}=\vect{0\\\alpha_2\\\alpha_3\\0}\\ \lambda_2=0&,&\;\vec{x}=\alpha_1\vect{1\\0\\0\\-1}\\ \\ \lambda_3=2&,&\;\vec{x}=\alpha_1\vect{1\\0\\0\\1}\end{eqnarray*}
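All four eigenpairs from this summary can again be verified numerically. A plain-Python sketch (the helper `matvec` is just for illustration):

```python
# Verify the eigenpairs of the 4x4 example matrix A.

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1]]

eigenpairs = [
    (1, [0, 1, 0, 0]),   # lambda = 1, first basis vector of the 2-dim eigenspace
    (1, [0, 0, 1, 0]),   # lambda = 1, second basis vector
    (0, [1, 0, 0, -1]),  # lambda = 0
    (2, [1, 0, 0, 1]),   # lambda = 2
]

# Each pair (lambda, x) must satisfy A x = lambda x.
for lam, x in eigenpairs:
    assert matvec(A, x) == [lam * xi for xi in x], (lam, x)

print("all four eigenpairs verified")
```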

Thus:

\[ \left[\frac{1}{\sqrt{2}}\vect{1\\0\\0\\-1},\frac{1}{\sqrt{2}}\vect{1\\0\\0\\1},\vect{0\\1\\0\\0},\vect{0\\0\\1\\0}\right]\]

is a set of 4 linearly independent eigenvectors of \(\tilde A\). The factor \(\frac{1}{\sqrt{2}}\) normalizes the first two vectors so that \(|\vec{x}|=1\).
In general: if \(\tilde A\) is an \(N\times N\) matrix, there are up to \(N\) linearly independent eigenvectors, which may be difficult to find!
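The normalization by \(1/\sqrt{2}\) is easy to confirm; a plain-Python sketch:

```python
# Check that the factor 1/sqrt(2) normalizes the eigenvectors
# (1, 0, 0, -1) and (1, 0, 0, 1) to unit length.
import math

def norm(x):
    """Euclidean length of a vector."""
    return math.sqrt(sum(xi * xi for xi in x))

for x in ([1, 0, 0, -1], [1, 0, 0, 1]):
    unit = [xi / math.sqrt(2) for xi in x]
    assert abs(norm(unit) - 1.0) < 1e-12

print("both vectors normalized to length 1")
```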
Diagonal matrices: \begin{eqnarray*} \tilde A&=&\left(\begin{array}{cccc}\alpha_1&&&\\&\alpha_2&&\\&&\ddots&\\&&&\alpha_N\end{array}\right)\;\mbox{then $P(\lambda)=(\alpha_1-\lambda)\cdot(\alpha_2-\lambda)\cdot\ldots\cdot(\alpha_N-\lambda)$.}\\ P(\lambda)&=&0\;\rightarrow\;\lambda_1=\alpha_1,\ldots,\lambda_N=\alpha_N\\\\ \mbox{thus: }\det(\tilde A)&=&\lambda_1\cdot\ldots\cdot\lambda_N=\prod_{j=1}^N \lambda_j \end{eqnarray*}
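The relation \(\det(\tilde A)=\prod_j\lambda_j\) can also be illustrated on the \(4\times4\) example above: its eigenvalues \(1,1,0,2\) multiply to \(0\), so the determinant must vanish. A plain-Python sketch (the recursive helper `det` is a simple Laplace expansion, used only for illustration):

```python
# Check det(A) = product of eigenvalues on the 4x4 example matrix,
# whose eigenvalues are 1, 1, 0, 2 (product: 0).

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

A = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1]]

product = 1
for lam in [1, 1, 0, 2]:
    product *= lam

assert det(A) == product == 0
print("det(A) =", det(A))
```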

\(\Rightarrow\) we will see that, for certain matrices, such a diagonal form can be achieved by a transformation (\(\Rightarrow\) exercises!)




© J. Carstensen (Math for MS)