I’m reading about the AHP method (Analytic Hierarchy Process). On page 2 of this document, it says:

> Given the priorities of the alternatives and given the matrix of preferences for each alternative over every other alternative, what meaning do we attach to the vector obtained by weighting the preferences by the corresponding priorities of the alternatives and adding? It is another priority vector for the alternatives. We can use it again to derive another priority vector ad infinitum. Even then what is the limit priority and what is the real priority vector to be associated with the alternatives? It all comes down to this: What condition must a priority vector satisfy to remain invariant under the hierarchic composition principle? A priority vector must reproduce itself on a ratio scale because it is ratios that preserve the strength of preferences. Thus a necessary condition that the priority vector should satisfy is not only that it should belong to a ratio scale, which means that it should remain invariant under multiplication by a positive constant c, but also that it should be invariant under hierarchic composition for its own judgment matrix so that one does not keep getting new priority vectors from that matrix. In sum, a priority vector x must satisfy the relation Ax = cx, c > 0.

To let you quickly grasp what AHP is all about, check this simple tutorial.

The matrix of preferences for each alternative over every other alternative is obvious to me. Ideally, such a matrix should satisfy the property $a_{ij}=a_{ik}a_{kj}$ (because if I prefer A to B two times, and B to C three times, then I should prefer A to C six times; it makes sense, I guess, but it's a very informal rule). OK, but in the quote I gave, it says:

> what meaning do we attach to the vector obtained by weighting the preferences by the corresponding priorities of the alternatives and adding? It is another priority vector for the alternatives.

I'm not quite sure what it means. The alternatives can be apple, banana, cherry. The preferences are just the numbers in the matrix of preferences, just like here

But what are ‘corresponding priorities of the alternatives’?

I’d say to obtain a priority vector (i.e. to find out which fruit is preferred the most) one could just

1) divide every element in a given column by the sum of elements in that column (normalization)

2) calculate average of elements in each row of the matrix obtained in step 1).

The obtained vector is the priority vector, I believe.
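The two-step procedure above can be sketched in NumPy. The 3×3 matrix below is a made-up illustration for (apple, banana, cherry), using the 2×, 3×, 6× preferences mentioned earlier:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for (apple, banana, cherry):
# apple is 2x better than banana, banana 3x better than cherry,
# and (consistently) apple 6x better than cherry.
A = np.array([[1.0, 2.0, 6.0],
              [1/2, 1.0, 3.0],
              [1/6, 1/3, 1.0]])

# Step 1: divide every element in a column by that column's sum.
normalized = A / A.sum(axis=0)

# Step 2: average the elements in each row of the normalized matrix.
priority = normalized.mean(axis=1)

print(priority)  # relative priority of apple, banana, cherry
```

Because this example matrix happens to be consistent, every normalized column is the same, and the priority vector comes out proportional to $(6, 3, 1)$.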

But in the quoted text it gets worse: the author describes raising the matrix to consecutive powers. Why do we multiply the preference matrix by itself? It says the result of this multiplication is 'another priority vector of alternatives'. Why? Haven't we just lost some information by doing this?

I mean, we can always multiply matrices, but it should be justified. In the case of the preference matrix I can't see the justification. Later in the document I've quoted, the author uses the Perron-Frobenius theorem and other sophisticated methods. I'd be grateful for an intuitive, clear explanation of what's going on here.

And finally: **WHY** is the eigenvector $w$ corresponding to the maximum eigenvalue $\lambda_{\max}$ of the pairwise comparison matrix $A$ the final expression of the preferences between the investigated elements?

More articles on AHP method that might help you with answering my questions:

http://books.google.com/books?id=wct10TlbbIUC&printsec=frontcover&hl=eng&redir_esc=y#v=onepage&q&f=false

http://www.booksites.net/download/coyle/student_files/AHP_Technique.pdf

http://www.isahp.org/2001Proceedings/Papers/065-P.pdf

For example, what's the relationship between the Perron-Frobenius theorem and this method?

$A \in \mathbb{R}^{n \times n}$ is called a *pairwise comparison matrix*, if it satisfies the following three properties:

$(1)$ $a_{i,j}>0$;

$(2)$ $a_{i,i}=1$;

$(3)$ $a_{i,j} = 1/a_{j,i}$,

for all $i,j=1,\dots,n$. Of course $(1)$ and $(3)$ together imply $(2)$.

That means a pairwise comparison matrix is a positive matrix in the following shape:

$$
A= \begin{bmatrix}
1 & a_{1,2} & a_{1,3} & \dots & a_{1,n} \\
1/a_{1,2} & 1 & a_{2,3} & \dots & a_{2,n} \\
1/a_{1,3} & 1/a_{2,3} & 1 & \dots & a_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1/a_{1,n} & 1/a_{2,n} & 1/a_{3,n} & \dots & 1 \\
\end{bmatrix}.
$$

The motivation behind the definition is that the entries of $A$ represent pairwise comparisons: if alternative $a$ is $2$ times better than $b$, then $b$ is $1/2$ times better than $a$. Only positive quantities are used, and comparing an alternative with itself always gives $1$. In general, the $i$th alternative is $a_{i,j}$ times better than the $j$th alternative.

If $A$ also satisfies $a_{i,k}a_{k,j}=a_{i,j}$ for all $i,j,k=1,\dots,n$, then $A$ is called *consistent*; otherwise $A$ is *inconsistent*. This condition expresses cardinal transitivity.

Note that the properties $(1)\!-\!(3)$ are very natural, so it is easy to compare alternatives in this way, but it is hard to maintain consistency across all triplets.

It is easy to see (prove it!) that $A$ is consistent if and only if there exists a positive vector $w\in\mathbb{R}^n$ such that $a_{i,j}=w_i/w_j$ for all $i,j=1,\dots,n$.
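One direction of this equivalence is easy to check numerically. The sketch below builds $a_{i,j}=w_i/w_j$ from an arbitrary positive weight vector $w$ (the values are illustrative) and verifies properties $(1)\!-\!(3)$, consistency, and that $w$ is an eigenvector of $A$ with eigenvalue $n$:

```python
import itertools

import numpy as np

# An arbitrary positive weight vector (illustrative values).
w = np.array([5.0, 2.0, 1.0, 0.5])
n = len(w)

# Build the comparison matrix a_ij = w_i / w_j.
A = np.outer(w, 1.0 / w)

# A satisfies (1)-(3): positive entries, unit diagonal, reciprocal.
assert np.all(A > 0)
assert np.allclose(np.diag(A), 1.0)
assert np.allclose(A, 1.0 / A.T)

# A is consistent: a_ik * a_kj = a_ij for every triplet (i, j, k).
for i, j, k in itertools.product(range(n), repeat=3):
    assert np.isclose(A[i, k] * A[k, j], A[i, j])

# w is an eigenvector of A with eigenvalue n, since
# (A w)_i = sum_j (w_i / w_j) w_j = n * w_i.
assert np.allclose(A @ w, n * w)
```

The last assertion also shows why $\lambda_{\max}=n$ exactly when $A$ is consistent, which is what makes $(\lambda_{\max}-n)/(n-1)$ a sensible inconsistency measure later on.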

By the Perron–Frobenius theorem we know that $A$ has an eigenvalue $\lambda_{\max}$ equal to the spectral radius of $A$, and that the components of the corresponding eigenvector $v$ are nonzero and have the same sign, so we may assume $v$ is positive.

Another easy remark (prove it!): if $A$ is consistent, then the eigenvector $v$ corresponding to $\lambda_{\max}$ satisfies $a_{i,j}=v_i/v_j$ for all $i,j=1,\dots,n$. This eigenvector is called the *Perron eigenvector* or *principal eigenvector*.

*In general*, a positive vector $w$ is called a *weight vector* if it represents, in some sense, the preferences of the decision maker; when the matrix is consistent, the weight vector is the Perron eigenvector.

In AHP the *eigenvector method (EM)* means that we calculate the Perron eigenvector of the matrix and take it as the weight vector. But in general there are other methods with which one can find weight vectors (for example *distance minimization*).

Finally, I give an example of the eigenvector method with $4$ alternatives.

Let $(\text{apple},\text{banana},\text{pear},\text{orange})$ be the list of alternatives, and after the decision maker made the pairwise comparisons we have the following matrix:

$$
A= \begin{bmatrix}
1 & 4 & 2 & 5 \\
1/4 & 1 & 1/4 & 3 \\
1/2 & 4 & 1 & 4 \\
1/5 & 1/3 & 1/4 & 1
\end{bmatrix}.
$$

For example, apple is $4$ times better than banana and $2$ times better than pear. The $\lambda_{\max}$ of $A$ is the following:

$$\lambda_{\max} \approx 4.170149768.$$

The Perron eigenvector is:

$$
w= \begin{bmatrix} 6.884563466 & 1.859400323 & 4.693747683 & 1.0 \end{bmatrix}^T.
$$

This gives the preference order $ \text{orange} \precsim \text{banana} \precsim \text{pear} \precsim \text{apple}$.

In the example $A$ is inconsistent. AHP measures the inconsistency with the *consistency ratio* $CR$:

$$
CR := \frac{\lambda_{\max}-n}{n-1}.
$$

A matrix is acceptable if $CR<0.1$. In the example above $CR=0.05671658933$.
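The whole example can be reproduced numerically. Here is a sketch in NumPy (the matrix and the quoted values come from the example above; using `numpy.linalg.eig` is just one convenient way to obtain the Perron eigenvector):

```python
import numpy as np

# The pairwise comparison matrix of the example
# (apple, banana, pear, orange).
A = np.array([[1,   4,   2,   5  ],
              [1/4, 1,   1/4, 3  ],
              [1/2, 4,   1,   4  ],
              [1/5, 1/3, 1/4, 1  ]])
n = A.shape[0]

# Eigenvector method: take the eigenvector of the largest eigenvalue.
# By Perron-Frobenius the largest eigenvalue is real and positive.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals[k].real

w = eigvecs[:, k].real
w = w / w[-1]            # scale so the last component is 1, as in the text

CR = (lam_max - n) / (n - 1)

print(lam_max)  # ≈ 4.170149768
print(w)        # ≈ [6.8846, 1.8594, 4.6937, 1.0]
print(CR)       # ≈ 0.0567, below the 0.1 acceptance threshold
```

Since the Perron eigenvector is only defined up to a positive scalar (a ratio scale), any normalization, such as fixing the last component to $1$, gives the same preference order.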
