Articles on matrices

What is the significance of reversing the polarity of the negative eigenvalues of a symmetric matrix?

Consider a full-rank $n\times n$ symmetric matrix $A$ (coming from a set of physical measurements). I do an eigendecomposition of this matrix as $$A = E V E^T.$$ Most of the eigenvalues are positive, while a few are negative but with much smaller magnitude than the maximum eigenvalue. I want to convert this […]
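A minimal numpy sketch of this sign flip (the function name is my own): replacing the eigenvalues $V$ by $|V|$ turns $A = EVE^T$ into $E|V|E^T = (A^2)^{1/2}$, which is the symmetric positive semidefinite factor of the polar decomposition of $A$.

```python
import numpy as np

def flip_negative_eigenvalues(A):
    """Replace each negative eigenvalue of symmetric A by its absolute
    value, i.e. return E |V| E^T for the eigendecomposition A = E V E^T."""
    w, E = np.linalg.eigh(A)        # eigh: for symmetric/Hermitian matrices
    return (E * np.abs(w)) @ E.T    # scales column i of E by |w_i|

# example: eigenvalues of [[0, 2], [2, 0]] are +2 and -2;
# flipping the sign of -2 yields 2 * I
A = np.array([[0.0, 2.0], [2.0, 0.0]])
B = flip_negative_eigenvalues(A)
```

Note that this is different from *clipping* negative eigenvalues to zero, which would instead give the nearest positive semidefinite matrix in Frobenius norm.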

What does a homomorphism $\phi: M_k \to M_n$ look like?

Let $\phi : M_k(\Bbb{C}) \to M_n(\Bbb{C})$ be a homomorphism of $C^*$-algebras. We know that $\phi$ decomposes as a direct sum of irreducible representations, each of them equivalent to the identity representation (because $M_k$ has no non-trivial invariant subspaces), together with a null part. So this means that $$\phi(a)= u \begin{bmatrix} a & & & \\ & \ddots & & \\ & & a & \\ & & & 0 \end{bmatrix} u^*$$ where $u$ is some unitary. […]
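A small numpy sketch of this block form (function and parameter names are mine): place several diagonal copies of $a$ inside an $n \times n$ matrix and zero-pad the rest, which is the null part; conjugation by a unitary $u$ would then give the general case.

```python
import numpy as np

def phi(a, copies, n):
    """Embed the k x k matrix `a` as `copies` diagonal blocks of an
    n x n matrix, zero elsewhere (the null part). This is a
    *-homomorphism M_k -> M_n (non-unital when copies * k < n)."""
    k = a.shape[0]
    assert copies * k <= n
    out = np.zeros((n, n), dtype=complex)
    for i in range(copies):
        out[i * k:(i + 1) * k, i * k:(i + 1) * k] = a
    return out

# multiplicativity check: phi(a) phi(b) = phi(ab)
rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
```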

An algorithm to find the largest eigenvalue and one of its eigenvectors of a symmetric tridiagonal matrix?

Can anyone describe a known specialized algorithm that finds (or quite accurately approximates) the largest eigenvalue and its eigenvector of a symmetric tridiagonal matrix of the form $$\left( \begin{array}{cccc} x_1 & y_1 & & \\ y_1 & x_2 & \ddots & \\ & \ddots & \ddots & y_{n-1} \\ & & y_{n-1} & x_n \end{array} \right),$$ where $y_1,\dots,y_{n-1}$ are positive numbers? Is […]
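One classical specialized method is bisection on the Sturm-sequence count, which exploits the tridiagonal structure directly; here is a sketch (my own implementation, $O(n)$ per bisection step), with Gershgorin discs supplying the initial bracket:

```python
import numpy as np

def count_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that lie below x, via the sign count
    of the LDL^T (Sturm sequence) recursion."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = -1e-300          # tiny perturbation to avoid division by zero
        if q < 0:
            count += 1
    return count

def largest_eigenvalue(d, e, tol=1e-12):
    """Bisect on the Sturm count until the largest eigenvalue is bracketed."""
    d, e = np.asarray(d, float), np.asarray(e, float)
    n = len(d)
    r = np.zeros(n)              # Gershgorin radii
    r[:-1] += np.abs(e)
    r[1:] += np.abs(e)
    lo, hi = (d - r).min(), (d + r).max()
    while hi - lo > tol * max(1.0, abs(hi)):
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) < n:   # some eigenvalue >= mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Once the eigenvalue is pinned down, an eigenvector can be recovered by inverse iteration with the shifted tridiagonal matrix; in practice `scipy.linalg.eigh_tridiagonal` packages both steps.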

Dominant Eigenvalues

I’m trying to understand dominant eigenvalues and I found this website that has an explanation of them (Definition 9.2). In the example, the power method is used to find the dominant eigenvector, which corresponds to the eigenvalue $1$. When I calculate the eigenvalues and eigenvectors of the matrix in the example, I get this […]
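For reference, a minimal power-method sketch (my own code, not the website's): repeated multiplication by $A$ amplifies the component along the dominant eigenvector, and the Rayleigh quotient estimates the dominant eigenvalue.

```python
import numpy as np

def power_method(A, iters=1000, tol=1e-12):
    """Power iteration: normalize A x repeatedly; the iterate converges
    (in direction) to the dominant eigenvector when the dominant
    eigenvalue is strictly larger in magnitude than the rest."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam_old = 0.0
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
        lam = x @ A @ x              # Rayleigh quotient estimate
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

lam, x = power_method(np.diag([2.0, 1.0]))
```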

Prove that $AB=BA$ if $A, B$ are diagonal matrices

Could you confirm my proof? Let $A, B$ be two diagonal matrices of order $n$. Then both $AB$ and $BA$ are defined and are of the same order $n$ (i.e., the sizes match). Also, $A_{ij} = B_{ij} = 0$ whenever $i\ne j$. Consider the case $i\ne j$: $$(AB)_{ij} = \sum_{k = 1}^n A_{ik} B_{kj} […]$$
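A quick numerical illustration of the key step in the proof: for diagonal $A$ and $B$, the sum $\sum_k A_{ik}B_{kj}$ collapses to $A_{ii}B_{ii}$ when $i = j$ and to $0$ otherwise, and the same expression is obtained for $BA$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)
A, B = np.diag(a), np.diag(b)   # two random diagonal matrices

# the matrices commute, and the product is again diagonal
# with entries A_ii * B_ii
assert np.allclose(A @ B, B @ A)
assert np.allclose(A @ B, np.diag(a * b))
```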

Algorithm for reversion of power series?

Given a function $f(x)$ of the form $$f(x) = \frac{x}{a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 + \cdots + a_n x^n},$$ let $A$ be an arbitrary infinite lower triangular matrix with ones on the diagonal: $$A = \left(\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 \\ 1 & 1 […]
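For comparison, here is a generic term-by-term reversion sketch (my own code, not the matrix-based method the question asks about): the coefficient equations of $f(g(x)) = x$ are triangular, so each coefficient of the compositional inverse $g$ can be solved in turn.

```python
import numpy as np

def mul_trunc(a, b, m):
    """Product of two power series, truncated after the x^m term."""
    return np.convolve(a[:m + 1], b[:m + 1])[:m + 1]

def revert_series(f, n):
    """Compositional inverse: given coefficients f (with f[0] = 0,
    f[1] != 0, len(f) >= n + 1), return g with f(g(x)) = x + O(x^{n+1})."""
    g = np.zeros(n + 1)
    g[1] = 1.0 / f[1]
    for m in range(2, n + 1):
        # coefficient of x^m in f(g(x)), with g[m] still set to 0
        pw = np.zeros(n + 1)
        pw[0] = 1.0                  # pw holds g(x)^k, truncated
        coeff = 0.0
        for k in range(1, m + 1):
            pw = mul_trunc(pw, g, m)
            coeff += f[k] * pw[m]
        g[m] = -coeff / f[1]         # choose g[m] to cancel that coefficient
    return g

# f(x) = x - x^2 reverts to the Catalan generating function
# g(x) = x + x^2 + 2x^3 + 5x^4 + ...
g = revert_series(np.array([0.0, 1.0, -1.0, 0.0, 0.0]), 4)
```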

Investigations about the trace form

Let us (again) consider the bilinear form $\beta(A,B)=\operatorname{Tr}(AB)$ for $A,B \in \mathbb{F}^{n,n}$ (square matrices over a field $\mathbb{F}$). I am interested in finding the largest subspace $U \subset \mathbb{F}^{n,n}$ such that $\beta(A,A)=\operatorname{Tr}(A^2)=0$ for all $A \in U$.
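A small numerical illustration over $\mathbb{C}$ with $n = 3$ (my own example): the strictly upper triangular matrices together with the diagonal direction $\operatorname{diag}(1, i, 0)$ (which satisfies $\operatorname{Tr}(D^2) = 1 + i^2 + 0 = 0$) span a $4$-dimensional subspace, i.e. $\lfloor n^2/2 \rfloor$, on which the form vanishes identically.

```python
import numpy as np

n = 3
basis = []
# strictly upper triangular basis matrices: Tr(N^2) = 0 for any span element
for i in range(n):
    for j in range(i + 1, n):
        E = np.zeros((n, n), complex)
        E[i, j] = 1.0
        basis.append(E)
# an isotropic diagonal direction: 1^2 + (1j)^2 + 0^2 = 0
basis.append(np.diag([1.0, 1.0j, 0.0]))

# Tr(A^2) vanishes on random elements of the span: the cross term
# Tr(N D) is zero because N D has zero diagonal
rng = np.random.default_rng(0)
for _ in range(5):
    c = rng.standard_normal(len(basis)) + 1j * rng.standard_normal(len(basis))
    A = sum(ci * Bi for ci, Bi in zip(c, basis))
    assert abs(np.trace(A @ A)) < 1e-10
```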

Why is the gradient of $\log\det X$ equal to $X^{-1}$, and where did the trace $\operatorname{tr}()$ go?

I’m studying Boyd & Vandenberghe’s “Convex Optimization” and encountered a problem on page 642. By definition, the derivative $Df(x)$ appears in the first-order approximation $$f(x)+Df(x)(z-x).$$ When $f$ is real-valued (i.e., $f: \mathbb{R}^n\to \mathbb{R}$), the gradient is $$\nabla{f(x)}=Df(x)^{T}.$$ See the original text below: But when discussing the gradient of the function $f(X)=\log\det X$, the authors say “we can identify $X^{-1}$ […]
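A finite-difference sanity check of the identity (my own script): the directional derivative of $\log\det X$ along $H$ should equal $\operatorname{tr}(X^{-1}H)$, which is the inner product $\langle X^{-1}, H\rangle$, so the gradient is $X^{-1}$ and the trace is absorbed into the inner product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)        # symmetric positive definite, det > 0
H = rng.standard_normal((n, n))    # arbitrary direction

eps = 1e-6
# central finite difference of log det along H
fd = (np.linalg.slogdet(X + eps * H)[1]
      - np.linalg.slogdet(X - eps * H)[1]) / (2 * eps)
# the claimed directional derivative: tr(X^{-1} H)
analytic = np.trace(np.linalg.solve(X, H))
```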

Minimum eigenvalue and singular value of a square matrix

How can one show that the relationship $\left| \lambda_{\min} \right| \geq \sigma_{\min}$ holds between the minimum-magnitude eigenvalue and the minimum singular value of a square matrix $A \in \mathbb{C}^{n \times n}$?
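An empirical check of the inequality (my own script). The underlying reason: for invertible $A$, $1/|\lambda_{\min}(A)|$ is the spectral radius of $A^{-1}$, which is bounded by $\|A^{-1}\|_2 = 1/\sigma_{\min}(A)$, and the singular case gives equality with both sides zero.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

lam = np.linalg.eigvals(A)                     # eigenvalues
sig = np.linalg.svd(A, compute_uv=False)       # singular values

# smallest |eigenvalue| dominates smallest singular value
assert np.abs(lam).min() >= sig.min() - 1e-12
```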

Can axis/angle notation match all possible orientations of a rotation matrix?

The rotation group is isomorphic to the special orthogonal group $SO(3)$, so a rotation matrix can represent all the possible rotation transformations of Euclidean space $\mathbb{R}^3$ obtainable by the operation of composition. The axis/angle notation describes any rotation that can be obtained by rotating a solid object around an axis passing through the origin of the reference frame […]
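The standard bridge between the two descriptions is Rodrigues' rotation formula, which builds the matrix for a given axis and angle; a minimal sketch (function name is mine):

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2,
    where K is the skew-symmetric cross-product matrix of the unit axis."""
    u = np.asarray(axis, float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

# rotating by 90 degrees about the z-axis sends the x-axis to the y-axis
R = axis_angle_to_matrix([0.0, 0.0, 1.0], np.pi / 2)
```

Going the other way (matrix to axis/angle) uses $\operatorname{tr}(R) = 1 + 2\cos\theta$, with the identity ($\theta = 0$) as the degenerate case where the axis is undetermined, which is the crux of the question.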