# What do the eigenvectors of an adjacency matrix tell us?

The principal eigenvector of a graph's adjacency matrix gives a notion of vertex centrality, usually called eigenvector centrality.
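As a quick illustration of this claim, here is a minimal sketch (the 4-vertex graph is a hypothetical example) that extracts the principal eigenvector by power iteration; by Perron-Frobenius, repeatedly applying the adjacency matrix to a positive start vector converges to it, and its entries rank the vertices:

```python
# Hypothetical example graph: a triangle 0-1-2 with a pendant vertex 3
# attached to vertex 2.
A = [
    [0, 1, 1, 0],  # vertex 0 linked to 1, 2
    [1, 0, 1, 0],  # vertex 1 linked to 0, 2
    [1, 1, 0, 1],  # vertex 2 linked to 0, 1, 3
    [0, 0, 1, 0],  # vertex 3 linked to 2 only
]

def principal_eigenvector(A, iters=200):
    """Power iteration: v <- A v, renormalized each step."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

centrality = principal_eigenvector(A)
# Vertex 2, which has the highest degree and sits between the triangle
# and the pendant, receives the largest centrality score.
```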

What do the second, third, etc. eigenvectors tell us?

Motivation: A standard information retrieval technique (LSI) uses a truncated SVD as a low-rank approximation of a matrix. If we truncate to rank 1, then we essentially have a PageRank algorithm. I was wondering if there are ways of interpreting the corrections introduced by higher eigenvectors.

Something similar to what the moments of a distribution tell us (e.g. first moment gives us the mean, second tells us the variance, third gives us skewness, etc).

#### Answers

The second eigenvalue of a Markov chain has meaning: it governs, for example, the convergence rate and stability of algorithms that compute the equilibrium distribution. See *The Second Eigenvalue of the Google Matrix*, by Haveliwala and Kamvar, for a discussion of how to compute its value.
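To make the convergence claim concrete, here is a small sketch with a hypothetical two-state chain whose eigenvalues are 1 and 0.7 (the trace is 1.7): the distance between the iterated distribution and the stationary one shrinks by a factor of |λ₂| at every step.

```python
# Hypothetical 2-state transition matrix; eigenvalues are 1 and 0.7.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2 / 3, 1 / 3]  # stationary distribution: pi P = pi

def step(x, P):
    """One step of x <- x P (row-vector convention)."""
    n = len(P)
    return [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]

x = [1.0, 0.0]  # start concentrated on state 0
errors = []
for _ in range(20):
    errors.append(abs(x[0] - pi[0]) + abs(x[1] - pi[1]))
    x = step(x, P)

# For a 2-state chain the error lies entirely in the second
# eigenspace, so successive errors shrink by exactly |lambda_2| = 0.7.
ratios = [errors[k + 1] / errors[k] for k in range(10)]
```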

The second eigenvector, however, is harder to interpret. The Google matrix is irreducible and aperiodic by construction (the damping/teleportation term guarantees this), so it has a unique equilibrium distribution: up to scaling, the only vector v with P v = v is the first eigenvector. The second eigenvector therefore cannot represent a probability distribution; its entries sum to zero, so some of them must be negative. According to *Fluctuation Induced Almost Invariant Sets*, by Schwartz and Billings, when a Markov chain represents a linear dynamical system, the second eigenvector is a good way of finding almost invariant sets: subsets of the pages in which a perfect stochastic surfer would spend a long time "trapped" before leaving for other parts of the web. The signs of the entries of the second eigenvector (and of the third, and possibly later ones) can be used to find these almost invariant sets.
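The sign-based splitting can be sketched as follows, using a hypothetical 4-state chain in which states {0, 1} and {2, 3} form two clusters with only a 5% chance of crossing between them; the second right eigenvector comes out with one sign per cluster:

```python
# Hypothetical 4-state chain with two weakly coupled clusters.
P = [[0.45, 0.45, 0.05, 0.05],
     [0.45, 0.45, 0.05, 0.05],
     [0.05, 0.05, 0.45, 0.45],
     [0.05, 0.05, 0.45, 0.45]]

def second_right_eigenvector(P, iters=200):
    # For a row-stochastic P the top right eigenvector is all-ones,
    # so power iteration started from (and re-projected onto) the
    # zero-sum subspace converges to the second right eigenvector.
    n = len(P)
    v = [1.0, 0.5, -0.5, -1.0]  # zero-sum start vector
    for _ in range(iters):
        w = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
        mean = sum(w) / n
        w = [x - mean for x in w]           # re-project onto zero sum
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

v2 = second_right_eigenvector(P)
# Entries 0 and 1 share one sign, entries 2 and 3 the other: the two
# sign groups are exactly the almost invariant sets.
clusters = ([i for i in range(4) if v2[i] > 0],
            [i for i in range(4) if v2[i] < 0])
```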