A method of finding the eigenvector that I don't fully understand

Let $$A=\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & t \\ \end{pmatrix}$$
which has a known eigenvalue $\lambda$.

Find the corresponding eigenvector


Over the last few months, I have come across several ways to do this:

(1)

$$Av=\lambda v$$

$$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & t \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix}= \lambda \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix}$$

This is the longer, more time-consuming method.
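Written out componentwise, it means solving the system

$$\begin{aligned} a x_1 + b x_2 + c x_3 &= \lambda x_1,\\ d x_1 + e x_2 + f x_3 &= \lambda x_2,\\ g x_1 + h x_2 + t x_3 &= \lambda x_3 \end{aligned}$$

for $x_1, x_2, x_3$, which are only determined up to a common scalar factor.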


(2)

Some do it this way:

$$(A-\lambda I)v=0$$

$$\begin{pmatrix} a-\lambda & b & c \\ d & e-\lambda & f \\ g & h & t-\lambda \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix}=0$$

They mostly reduce the matrix to (reduced) row echelon form to speed the process up.
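As a quick illustration of method (2), here is a small SymPy sketch (using the $3\times 3$ example matrix given further below in the question, with $\lambda=3$) that row-reduces $A-\lambda I$ and reads off its kernel:

```python
# Method (2) as a computation: row-reduce A - lambda*I and read off its kernel.
# Uses the 3x3 example matrix from further below in the question, with lambda = 3.
import sympy as sp

A = sp.Matrix([[4, 1, -1],
               [-4, -1, 4],
               [0, -1, 5]])
lam = 3

M = A - lam * sp.eye(3)
print(M.rref())        # reduced row echelon form; one row becomes zero (rank 2)
print(M.nullspace())   # one basis vector of the kernel, a scalar multiple of (1, -2, -1)
```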


(3)

Most recently I came across a method that is the fastest so far, but I don't understand how it works. Sometimes it gives me the correct answer and sometimes a wrong one. Here is how it goes (it is similar to taking the cross product of two vectors):

It amounts to evaluating the formal determinant

$$\begin{bmatrix} i & j & k \\ d & e-\lambda & f \\ g & h & t-\lambda \\ \end{bmatrix}$$

Or

$$\begin{bmatrix} i & j & k \\ a-\lambda & b & c \\ d & e-\lambda & f \\ \end{bmatrix}$$

Or

$$\begin{bmatrix} i & j & k \\ a-\lambda & b & c \\ g & h & t-\lambda \\ \end{bmatrix}$$

and so on, using any two rows of $A-\lambda I$.

How does this work? And why does it sometimes fail? Why does it not work with some matrices?

Although it saves a lot of time, it has been unreliable for me, which I believe is because I do not understand it well enough to apply it correctly.


Example :

Let

$$A=\begin{pmatrix} 4 & 1 & -1 \\ -4 & -1 & 4 \\ 0 & -1 & 5 \\ \end{pmatrix}$$

$\lambda=3$

$$\begin{bmatrix} i & j & k \\ 4-3 & 1 & -1 \\ 0 & -1 & 5-3 \\ \end{bmatrix}$$

Eigenvector:

$$\begin{pmatrix} 1 \\ -2 \\ -1 \\ \end{pmatrix}$$

In any case, the evaluation goes like this:
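Expanding the formal determinant along its first row,
$$i\,\bigl(1\cdot 2-(-1)\cdot(-1)\bigr)-j\,\bigl(1\cdot 2-(-1)\cdot 0\bigr)+k\,\bigl(1\cdot(-1)-1\cdot 0\bigr)=i-2j-k,$$
and the coefficients of $i$, $j$, $k$ are exactly the entries of the eigenvector $(1,-2,-1)$ above.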

Answers collected from the web:

The shortcut is a method to (more or less) quickly solve full-rank $(n-1)\times n$ linear systems (i.e., ones where all $n-1$ rows are linearly independent). It is IMO awkwardly presented by writing the determinant of a matrix in which some entries are “unit vectors” (which makes no sense; matrix entries must be scalars). The proper way to present it is as follows. Given an $(n-1)\times n$ matrix $A'$ of coefficients, form the map $\def\R{\Bbb R}\R^n\to\R$ given by
$$
\gamma:(x_1,\ldots,x_n)\mapsto
\begin{vmatrix} a_{1,1}&a_{1,2}&\cdots&a_{1,n}\\
a_{2,1}&a_{2,2}&\cdots&a_{2,n}\\ \vdots & \vdots & \ddots & \vdots \\
a_{n-1,1}&a_{n-1,2}&\cdots&a_{n-1,n} \\
x_1 & x_2 & \cdots & x_n
\end{vmatrix}.
$$
By multilinearity of the determinant in its rows, this is a linear function, so there are coefficients $c_1,\ldots,c_n$ such that this function equals $\gamma:(x_1,\ldots,x_n)\mapsto c_1x_1+\ldots+c_nx_n$ (these are the cofactors by which the $x_i$ get multiplied in the determinant, and also the coefficients of those “unit vectors”). By the alternating property of the determinant, one has $\gamma(a_{i,1},a_{i,2},\ldots,a_{i,n})=0$ for $i=1,2,\ldots,n-1$, which can be written as
$$
\begin{pmatrix} a_{1,1}&a_{1,2}&\cdots&a_{1,n}\\
\vdots & \vdots & \ddots & \vdots \\
a_{n-1,1}&a_{n-1,2}&\cdots&a_{n-1,n}
\end{pmatrix} \cdot
\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n\end{pmatrix}
= \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}
$$
Thus the column vector with entries $c_1,\ldots,c_n$ is in the kernel of $A'$. If $A'$ has rank $n-1$, this kernel is $1$-dimensional, and the function $\gamma$ is nonzero (since the $n-1$ independent rows can be completed to an invertible matrix), so we have found a nonzero vector spanning $\ker A'$. However, should $A'$ have rank less than $n-1$, then not only is the kernel of dimension at least$~2$, so that one vector would not suffice to span it, but one also has $\gamma=0$, so the vector we have found is the zero vector; the method does not work in this case.
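To make this concrete with the example from the question: taking the first and third rows of $A-3I$, namely $(1,1,-1)$ and $(0,-1,2)$, the function is
$$\gamma(x_1,x_2,x_3)=\begin{vmatrix} 1&1&-1\\ 0&-1&2\\ x_1&x_2&x_3 \end{vmatrix}=x_1-2x_2-x_3,$$
so $(c_1,c_2,c_3)=(1,-2,-1)$, and indeed $\gamma(1,1,-1)=0$ and $\gamma(0,-1,2)=0$, which is just the statement that $(1,-2,-1)$ lies in the kernel of $A'$. (For $3\times3$ determinants, putting the variable row first, as in the question, rather than last amounts to two row swaps, so it does not change the result.)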

The relation with eigenvalues is as follows. The fact that $\lambda$ is an eigenvalue of $A$ means that $A-\lambda I$ is non-invertible; it has rank at most $n-1$. If one supposes it has rank exactly $n-1$, then there is some row that is a linear combination of the others, which are independent. The $1$-dimensional kernel of $A-\lambda I$ (the eigenspace for$~\lambda$) will not change if one removes such a row, and the kernel of this simplified matrix $A'$ can be found by the above method.
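In the example from the question, $A-3I=\begin{pmatrix} 1&1&-1\\ -4&-4&4\\ 0&-1&2 \end{pmatrix}$ has rank $2$: the second row is $-4$ times the first, so deleting it leaves a $2\times 3$ matrix $A'$ of rank $2$ with the same kernel, and its two rows are exactly the pair used in the shortcut above.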

The method will not always work. One thing that can go wrong is selecting the wrong row. The dependence among the rows could be due to a relation that involves only a proper subset of the rows (for instance a row could be zero, or two rows could be proportional, but in general it could be a hard-to-spot circumstance). Then removing one of the other rows could make the rank of $A'$ drop to $n-2$, and the method will not work; here a different choice of row to delete would work instead. However, if the rank of $A-\lambda I$ was less than $n-1$ to begin with, then no choice of row will make the method work; one always finds just the zero vector. This happens precisely when the geometric multiplicity of $\lambda$ as an eigenvalue is at least two; the method will never let you compute such higher-dimensional eigenspaces.
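A concrete instance of this failure: for
$$A=\begin{pmatrix} 2&1&1\\ 1&2&1\\ 1&1&2 \end{pmatrix},\qquad \lambda=1,$$
the matrix $A-\lambda I$ has all entries equal to $1$ and hence rank $1$; the cross product of any two of its rows is the zero vector, while the eigenspace for $\lambda=1$ is the $2$-dimensional plane $x_1+x_2+x_3=0$, which no single vector produced by the shortcut can describe.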

It looks like you are computing the cross product of two rows of the matrix from the second method. This will give a vector that is orthogonal to those two rows. An eigenvector will be orthogonal to all rows, since it is in the null space of the matrix.

So, if the two chosen vectors span the row-space, the third method gives an eigenvector.
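A quick NumPy sketch of this claim, using the matrix from the question:

```python
# Check numerically: the cross product of two rows of A - 3I is orthogonal
# to those rows and is an eigenvector of A for the eigenvalue 3.
import numpy as np

A = np.array([[4, 1, -1],
              [-4, -1, 4],
              [0, -1, 5]], dtype=float)
lam = 3.0
M = A - lam * np.eye(3)

v = np.cross(M[0], M[2])   # rows 1 and 3 of A - 3I
print(v)                   # [ 1. -2. -1.]
print(M[0] @ v, M[2] @ v)  # 0.0 0.0  (orthogonal to both rows)
print(A @ v - lam * v)     # [0. 0. 0.]  so A v = 3 v
```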

Your method $(3)$ has nothing to do with eigenvectors per se. Instead it is a quick method for solving homogeneous systems of rank $2$ in three variables; but it can only be used in this case, namely rank $2$ and dimension $3$.

Given such a system
$$a_{i1}x_1 +a_{i2} x_2+a_{i3}x_3=0\qquad(1\leq i\leq m)\ ,\tag{1}$$
where usually $m=2$ or $m=3$, it is easy to spot two rows ${\bf a}_i$ and ${\bf a}_j$ that are linearly independent. The corresponding equations can be visualized geometrically as requiring
$${\bf a}_i\cdot{\bf x}=0,\quad {\bf a}_j\cdot{\bf x}=0\ .$$
This means that we are looking for vectors ${\bf x}$ that are orthogonal to ${\bf a}_i$ as well as to ${\bf a}_j$. A prototypical such vector is the cross product $${\bf p}:={\bf a}_i\times {\bf a}_j\ne{\bf 0}\ ,$$
and from basic principles of linear algebra it then follows that the general solution of $(1)$ is given by
$${\bf x}\ =\ c\>{\bf p}\qquad(c\in{\mathbb R})\ .$$
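In the example from the question one can take ${\bf a}_i=(1,1,-1)$ and ${\bf a}_j=(0,-1,2)$, the first and third rows of $A-3I$, giving
$${\bf p}={\bf a}_i\times{\bf a}_j=(1,-2,-1)\ ,$$
so every eigenvector for $\lambda=3$ has the form $c\,(1,-2,-1)$ with $c\ne 0$.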