Eigenvector and its corresponding eigenvalue

For the following square matrix:

$$ \left( \begin{array}{ccc} 3 & 0 & 1 \\
-4 & 1 & 2 \\
-6 & 0 & -2 \end{array} \right)$$

Decide which, if any, of the following vectors are eigenvectors of
that matrix and give the corresponding eigenvalue.

$$\left( \begin{array}{c} 2 \\ 2 \\ -1 \end{array} \right)\qquad
\left( \begin{array}{c} -1 \\ 0 \\ 2 \end{array} \right)\qquad
\left( \begin{array}{c} -1 \\ 1 \\ 3 \end{array} \right)\qquad
\left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right)\qquad
\left( \begin{array}{c} 3 \\ 2 \\ 1 \end{array} \right)$$

If I’ve understood correctly, I must first multiply the matrix by each vector. If the result is a multiple of that vector, then it’s an eigenvector. Only the fourth vector passes this test. But how should I calculate its corresponding eigenvalue?
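For what it’s worth, here is a quick numerical check of exactly this procedure (a numpy sketch; the library choice and variable names are mine, not part of the question): multiply the matrix by each candidate vector and test whether the result is parallel to it.

```python
import numpy as np

A = np.array([[ 3, 0,  1],
              [-4, 1,  2],
              [-6, 0, -2]])

candidates = [
    np.array([ 2, 2, -1]),
    np.array([-1, 0,  2]),
    np.array([-1, 1,  3]),
    np.array([ 0, 1,  0]),
    np.array([ 3, 2,  1]),
]

for v in candidates:
    Av = A @ v
    # v is an eigenvector iff Av is a scalar multiple of v; for 3-vectors,
    # "parallel" is equivalent to a zero cross product.
    if np.allclose(np.cross(Av, v), 0):
        i = np.nonzero(v)[0][0]          # recover the eigenvalue from any nonzero component
        print(v, "is an eigenvector with eigenvalue", Av[i] / v[i])
    else:
        print(v, "is not an eigenvector")
```

Only the fourth candidate should be reported as an eigenvector, with eigenvalue $1$.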


Gigili, this is how both the eigenvalues and the eigenvectors are usually calculated. Calculate the determinant:

$$\det(\lambda I-A)=\begin{vmatrix}\lambda-3&0&-1\\4&\lambda-1&-2\\6&0&\lambda+2\end{vmatrix}=(\lambda-1)(\lambda-3)(\lambda+2)+6(\lambda-1)=\lambda(\lambda-1)^2\Longrightarrow$$

$$\Longrightarrow \lambda=0\,,\,1\,\,\,\text{are the eigenvalues of the matrix}$$
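As a quick numerical cross-check of these eigenvalues (a numpy sketch, not part of the original computation):

```python
import numpy as np

A = np.array([[ 3, 0,  1],
              [-4, 1,  2],
              [-6, 0, -2]])

# numpy returns the eigenvalues in no particular order; expect values close to 0, 1, 1
# (lambda = 1 is a double root, so tiny rounding errors or a small imaginary part may appear).
print(np.linalg.eigvals(A))
```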

Now, to find the eigenvectors corresponding to each eigenvalue, you form a homogeneous linear system by substituting $\,\lambda\,$ in the above matrix with the corresponding value. Call the unknowns $\,x,y,z\,$ and note that, since the determinant is zero, the system always has non-trivial solutions (why?):

$$\lambda=0:\;\;\;\;\begin{cases}-3x&&\;\;-z=0\\\;\;\;4x&-y&-2z=0\\\;\;\;6x&&+2z=0\end{cases}\;\;\;\;\Longrightarrow\;\;\;\begin{cases}z=-3x\\y=4x-2z=10x\end{cases}$$

Thus, any eigenvector corresponding to the eigenvalue $\,\lambda=0\,$ has the form

$$\begin{pmatrix}x\\10x\\-3x\end{pmatrix}\,\,,\,\,\text{for example}\,\,\,\begin{pmatrix}1\\10\\-3\end{pmatrix}$$
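A one-line check of this eigenvector (a numpy sketch, using the matrix $A$ from the question):

```python
import numpy as np

A = np.array([[ 3, 0,  1],
              [-4, 1,  2],
              [-6, 0, -2]])
v = np.array([1, 10, -3])

print(A @ v)  # [0 0 0], i.e. A v = 0 = 0 * v, so v is an eigenvector for lambda = 0
```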

Try now to mimic the above for the eigenvalue $\,\lambda=1\,$. Once again you get a system of rank 2, and thus again only one linearly independent eigenvector.
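If you want to verify your $\lambda=1$ computation afterwards, here is a sympy sketch (the symbolic check is my addition, not part of the answer):

```python
import sympy as sp

A = sp.Matrix([[ 3, 0,  1],
               [-4, 1,  2],
               [-6, 0, -2]])
M = 1 * sp.eye(3) - A       # lambda*I - A with lambda = 1

print(M.rank())             # 2, so the nullity is 3 - 2 = 1
print(M.nullspace())        # [Matrix([[0], [1], [0]])]: one independent eigenvector
```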

If $\mathbf{M}\mathbf{v} = \lambda \mathbf{v}$ for a scalar $\lambda$, then $\mathbf{v}$ is an eigenvector of $\mathbf{M}$, and $\lambda$ is its corresponding eigenvalue.

To expand on DonAntonio’s answer: an eigenvalue and eigenvector of an $n \times n$ matrix $A$ are defined as, respectively, a scalar $\lambda$ and a vector $x \neq 0$ such that $Ax = \lambda x$. But this is equivalent to $Ax - \lambda x = (A-\lambda I)x = 0$. We now want to solve this equation for a non-zero $x$. If $A-\lambda I$ were invertible, the only solution would be $x = (A-\lambda I)^{-1} 0 = 0$, which is not what we want. Instead, we require the matrix $A-\lambda I$ to be singular.

Recall that a matrix is singular if and only if its determinant is zero, so we calculate the determinant of $A - \lambda I$ and equate it to zero:
$$
0 = \det(A-\lambda I) = \pm(\lambda - \lambda_1)^{m_1} \cdots (\lambda - \lambda_k)^{m_k}
$$
This is called the characteristic equation for $A$. The determinant is a polynomial of degree $n$ in $\lambda$, with $k \leq n$ distinct roots. These roots are the eigenvalues of $A$, since exactly these values of $\lambda$ make $A-\lambda I$ singular. The exponents $m_1$ through $m_k$ are called the algebraic multiplicities of the corresponding eigenvalues; they give an upper bound on the dimensions of the eigenspaces, which I will explain next.
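For concreteness, here is how the characteristic polynomial and the algebraic multiplicities can be read off with sympy, reusing the matrix from the question (a sketch, not part of the answer):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[ 3, 0,  1],
               [-4, 1,  2],
               [-6, 0, -2]])

p = A.charpoly(lam)                 # sympy's characteristic polynomial, det(lambda*I - A)
print(sp.factor(p.as_expr()))       # lambda*(lambda - 1)**2
print(sp.roots(p.as_expr(), lam))   # {0: 1, 1: 2}: eigenvalue -> algebraic multiplicity
```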

So now we have a matrix $A - \lambda_i I$ that is singular, and we want to solve $(A-\lambda_i I)x_i = 0$. Every non-zero vector $x_i$ that solves this equation is an eigenvector of $A$ corresponding to the eigenvalue $\lambda_i$. But this vector is not unique! Note that if $x_i$ is an eigenvector, then so is $c x_i$ for every scalar $c \neq 0$. It can even happen that there are two or more linearly independent eigenvectors for the same eigenvalue. The space of all solutions of this equation is called the eigenspace of $A$ for the eigenvalue $\lambda_i$. It is the same as the null space of $A-\lambda_i I$, whose dimension is given by the rank-nullity theorem. This theorem (look it up in your linear algebra book 😉 ) states that the number of columns of a matrix equals the sum of its rank (the dimension of the row space) and its nullity (the dimension of its null space). It can also be shown that the dimension of the eigenspace is necessarily less than or equal to the algebraic multiplicity of the eigenvalue. This dimension is called the geometric multiplicity of $\lambda_i$.
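To illustrate this rank-nullity bookkeeping on the matrix from the question for $\lambda_i = 1$ (a sympy sketch; here the algebraic multiplicity is $2$ but the geometric multiplicity is only $1$):

```python
import sympy as sp

A = sp.Matrix([[ 3, 0,  1],
               [-4, 1,  2],
               [-6, 0, -2]])
M = A - 1 * sp.eye(3)        # A - lambda_i*I with lambda_i = 1
n = A.shape[0]

print(M.rank())              # 2: the rank of A - I
print(n - M.rank())          # 1: the nullity, i.e. the dimension of the eigenspace
print(len(M.nullspace()))    # 1: one linearly independent eigenvector, namely (0, 1, 0)
```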

So, to summarize the calculation of eigenvalues and corresponding eigenvectors:

  • Write down the characteristic equation for $A$:
    $$
    \det(A-\lambda I) = 0.
    $$
  • Solve the characteristic equation. The solutions $\lambda_i$ are the eigenvalues of $A$.
  • Write down the system $(A-\lambda_i I) x = 0$ for each eigenvalue and solve it for the vector $x$ (by Gaussian elimination or something like that). The solutions $x_i$ are the eigenvectors of $A$ corresponding to $\lambda_i$. The subspace of all eigenvectors corresponding to $\lambda_i$ is the eigenspace of $A$ for $\lambda_i$. (A short sympy sketch of this whole recipe follows below.)
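Here is the whole recipe carried out in one short sympy routine (the function name and structure are my own, not from the answer):

```python
import sympy as sp

def eigen_summary(A: sp.Matrix):
    lam = sp.symbols('lambda')
    n = A.shape[0]
    # Step 1: the characteristic equation det(A - lambda*I) = 0.
    p = (A - lam * sp.eye(n)).det()
    # Step 2: its roots are the eigenvalues, with their algebraic multiplicities.
    for eigenvalue, alg_mult in sp.roots(p, lam).items():
        # Step 3: solve (A - lambda_i*I)x = 0; the nullspace is the eigenspace.
        basis = (A - eigenvalue * sp.eye(n)).nullspace()
        print(f"lambda = {eigenvalue}: algebraic multiplicity {alg_mult}, "
              f"geometric multiplicity {len(basis)}, eigenspace basis {[list(b) for b in basis]}")

eigen_summary(sp.Matrix([[3, 0, 1], [-4, 1, 2], [-6, 0, -2]]))
# Expected output (order may vary):
#   lambda = 0: ... geometric multiplicity 1, eigenspace basis [[-1/3, -10/3, 1]]  (a multiple of (1, 10, -3))
#   lambda = 1: ... geometric multiplicity 1, eigenspace basis [[0, 1, 0]]
```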

Hope that helped a bit. 😉