
I’m reading about the AHP method (Analytic Hierarchy Process). On page 2 of this document, it says:

> Given the priorities of the alternatives and given the matrix of preferences for each alternative over every other alternative, what meaning do we attach to the vector obtained by weighting the preferences by the corresponding priorities of the alternatives and adding? It is another priority vector for the alternatives. We can use it again to derive another priority vector ad infinitum. Even then what is the limit priority and what is the real priority vector to be associated with the alternatives? It all comes down to this: What condition must a priority vector satisfy to remain invariant under the hierarchic composition principle? A priority vector must reproduce itself on a ratio scale because it is ratios that preserve the strength of preferences. Thus a necessary condition that the priority vector should satisfy is not only that it should belong to a ratio scale, which means that it should remain invariant under multiplication by a positive constant c, but also that it should be invariant under hierarchic composition for its own judgment matrix so that one does not keep getting new priority vectors from that matrix. In sum, a priority vector x must satisfy the relation Ax = cx, c > 0.

To let you quickly grasp what AHP is all about, check this simple tutorial.


The matrix of preferences for each alternative over every other alternative is obvious to me. Ideally, such a matrix should satisfy the property $a_{ij}=a_{ik}a_{kj}$, because if I prefer A to B two times, and B to C three times, then I should prefer A to C six times (it makes sense, I guess, but it's a very informal rule). OK, but the quote I gave says:

> what meaning do we attach to the vector obtained by weighting the preferences by the corresponding priorities of the alternatives and adding? It is another priority vector for the alternatives.

I’m not quite sure what it means. Alternatives can be apple, banana, cherry. Preferences are just the numbers in the matrix of preferences, just like here.

But what are ‘corresponding priorities of the alternatives’?

I’d say that to obtain a priority vector (i.e. to find out which fruit is preferred the most), one could just:

1) divide every element in a given column by the sum of the elements in that column (normalization);

2) calculate the average of the elements in each row of the matrix obtained in step 1).

The obtained vector is the priority vector, I believe.
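The two-step procedure above (column normalization, then row averaging) can be sketched in a few lines of NumPy. The 3×3 matrix below is a hypothetical example for (apple, banana, cherry), chosen so that the informal rule $a_{ij}=a_{ik}a_{kj}$ holds:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for (apple, banana, cherry):
# apple is preferred 2x over banana and 6x over cherry; banana 3x over cherry.
A = np.array([
    [1.0, 2.0, 6.0],
    [1/2, 1.0, 3.0],
    [1/6, 1/3, 1.0],
])

# Step 1: divide every element of each column by that column's sum.
col_normalized = A / A.sum(axis=0)

# Step 2: average the elements in each row.
priority = col_normalized.mean(axis=1)

print(priority)  # [0.6 0.3 0.1] -- the priority vector, summing to 1
```

Because this particular matrix is consistent, every normalized column is already the same vector, so the row averages recover it exactly.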

But in the quoted text it gets worse: the author describes raising the matrix to consecutive powers. Why do we multiply the priority matrix by itself? It says the result of this multiplication is ‘another priority vector of alternatives’. Why? Haven’t we just lost some information by doing this?

I mean, we can always multiply matrices, but it should be justified. In the case of the priority matrix I can’t see the justification. Later in the document I’ve quoted, the author uses the Perron–Frobenius theorem and other sophisticated methods. I’d be grateful for an intuitive, clear explanation of what’s going on here.
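For intuition about the “multiply again and again” step: it is power iteration, which converges (for a positive matrix) to the Perron eigenvector. A minimal sketch, reusing a hypothetical consistent 3×3 matrix:

```python
import numpy as np

# Repeatedly applying A to any positive start vector (and renormalizing)
# converges to the Perron eigenvector -- this is why "composing priorities
# over and over" stabilizes on one limit priority vector.
A = np.array([
    [1.0, 2.0, 6.0],
    [1/2, 1.0, 3.0],
    [1/6, 1/3, 1.0],
])
x = np.ones(3)
for _ in range(50):
    x = A @ x
    x = x / x.sum()  # keep the iterate normalized so it stays a priority vector

print(x)  # the limit priority vector, here [0.6 0.3 0.1]
```

For a consistent matrix the iteration stabilizes after a single step; for an inconsistent one it still converges, to the principal eigenvector.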

And finally: **WHY** is the eigenvector $w$ corresponding to the maximum eigenvalue $\lambda_{\max}$ of the pairwise comparison matrix $A$ the final expression of the preferences between the investigated elements?

More articles on AHP method that might help you with answering my questions:

http://books.google.com/books?id=wct10TlbbIUC&printsec=frontcover&hl=eng&redir_esc=y#v=onepage&q&f=false

http://www.booksites.net/download/coyle/student_files/AHP_Technique.pdf

http://www.isahp.org/2001Proceedings/Papers/065-P.pdf

For example, what’s the relationship between Perron-Frobenius theorem and this method?


$A \in \mathbb{R}^{n \times n}$ is called a *pairwise comparison matrix*, if it satisfies the following three properties:

$(1)$ $a_{i,j}>0$;

$(2)$ $a_{i,i}=1$;

$(3)$ $a_{i,j} = 1/a_{j,i}$,

for all $i,j=1,\dots,n$. Of course $(1)$ and $(3)$ together imply $(2)$.

That means a pairwise comparison matrix is a positive matrix of the following shape:

$$
A= \begin{bmatrix}
1 & a_{1,2} & a_{1,3} & \dots & a_{1,n} \\
1/a_{1,2} & 1 & a_{2,3} & \dots & a_{2,n} \\
1/a_{1,3} & 1/a_{2,3} & 1 & \dots & a_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1/a_{1,n} & 1/a_{2,n} & 1/a_{3,n} & \dots & 1
\end{bmatrix}.
$$

The motivation behind the definition is that the entries of $A$ represent pairwise comparisons: if alternative $a$ is $2$ times better than $b$, then $b$ is $1/2$ times as good as $a$. Since we use only positive quantities, the measure of $a$ against itself is always $1$. In general, the $i$th alternative is $a_{i,j}$ times better than the $j$th alternative.

If $A$ also satisfies $a_{i,k}a_{k,j}=a_{i,j}$ for all $i,j,k=1,\dots,n$, then $A$ is called *consistent*; otherwise $A$ is *inconsistent*. This property expresses cardinal transitivity.

Note that the properties $(1)\!-\!(3)$ are very natural, so it is easy to compare alternatives in that way, but it is hard to maintain consistency for all triplets.

It is easy to see (prove it!) that $A$ is consistent if and only if there exists a positive vector $w\in\mathbb{R}^n$ such that $a_{i,j}=w_i/w_j$ for all $i,j=1,\dots,n$.
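One direction of this equivalence is easy to check numerically: a matrix built from ratios $w_i/w_j$ is consistent by construction, since $a_{i,k}a_{k,j}=(w_i/w_k)(w_k/w_j)=w_i/w_j$. A small sketch with an arbitrary assumed weight vector:

```python
import numpy as np

# Assumed weight vector; A[i, j] = w[i] / w[j] is then consistent by construction.
w = np.array([3.0, 1.5, 0.5])
A = np.outer(w, 1.0 / w)

# Verify a_{i,j} = a_{i,k} a_{k,j} for every triple (i, j, k).
n = len(w)
consistent = all(
    np.isclose(A[i, j], A[i, k] * A[k, j])
    for i in range(n) for j in range(n) for k in range(n)
)
print(consistent)  # True
```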

By the Perron–Frobenius theorem we know that $A$ has an eigenvalue $\lambda_{\max}$ equal to the spectral radius of $A$, and the components of the corresponding eigenvector $v$ are nonzero and have the same sign, so we may assume that $v$ is positive.

Another easy remark (prove it!): if $A$ is consistent, then the eigenvector $v$ corresponding to $\lambda_{\max}$ has the property that $a_{i,j}=v_i/v_j$ for all $i,j=1,\dots,n$. This eigenvector is called the *Perron eigenvector* or *principal eigenvector*.

*In general* we call a positive vector $w$ a *weight vector* if it coincides with the Perron eigenvector when the matrix is consistent; it represents, “somehow,” the preferences of the decision maker.

In AHP the *eigenvector method (EM)* means that we calculate the Perron eigenvector of the matrix, and this is the weight vector. But in general there are other methods with which we can find weight vectors (for example, *distance minimization*).

Finally, I give an example of the eigenvector method with $4$ alternatives.

Let $(\text{apple},\text{banana},\text{pear},\text{orange})$ be the list of alternatives, and after the decision maker made the pairwise comparisons we have the following matrix:

$$
A= \begin{bmatrix}
1 & 4 & 2 & 5 \\
1/4 & 1 & 1/4 & 3 \\
1/2 & 4 & 1 & 4 \\
1/5 & 1/3 & 1/4 & 1
\end{bmatrix}.
$$

For example, apple is $4$ times better than banana and $2$ times better than pear. The $\lambda_{\max}$ of $A$ is the following:

$$\lambda_{\max} \approx 4.170149768.$$

The Perron eigenvector is:

$$
w= \begin{bmatrix} 6.884563466 & 1.859400323 & 4.693747683 & 1.0 \end{bmatrix}^T.
$$

This gives the preference order $ \text{orange} \precsim \text{banana} \precsim \text{pear} \precsim \text{apple}$.

In the example $A$ is inconsistent. AHP measures the inconsistency with the *consistency ratio* $CR$:

$$
CR := \frac{\lambda_{\max}-n}{n-1}.
$$

A matrix is acceptable if $CR<0.1$. In the example above, $CR=0.05671658933$.
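The whole worked example can be reproduced with NumPy; this sketch computes $\lambda_{\max}$, the Perron eigenvector (scaled so the last component is $1$, as in the text), and $CR$ with the formula above:

```python
import numpy as np

# The 4x4 comparison matrix for (apple, banana, pear, orange) from the example.
A = np.array([
    [1.0, 4.0, 2.0, 5.0],
    [1/4, 1.0, 1/4, 3.0],
    [1/2, 4.0, 1.0, 4.0],
    [1/5, 1/3, 1/4, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)      # index of the dominant (Perron) eigenvalue
lam_max = eigvals[k].real
w = eigvecs[:, k].real
w = w / w[-1]                    # rescale so the last component is 1

CR = (lam_max - len(A)) / (len(A) - 1)
print(lam_max)  # ~ 4.1701
print(w)        # ~ [6.8846, 1.8594, 4.6937, 1.0]
print(CR)       # ~ 0.0567, below the 0.1 threshold, so A is acceptable
```

Note that the Perron eigenvector is only determined up to a positive scalar (the "ratio scale" of the quoted text), which is why any positive rescaling such as `w / w[-1]` is legitimate.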
