Articles on SVD

Singular Value Decomposition of Rank 1 matrix

I am trying to understand singular value decomposition. I get the general definition and how to solve for the singular values and form the SVD of a given matrix; however, I came across the following problem and realized that I did not fully understand how SVD works: Let $0\ne u\in \mathbb{R}^{m}$. Determine an SVD for […]
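The excerpt above is truncated, but a minimal numerical sketch of the rank-1 case illustrates the structure the question is after: a rank-1 matrix $uv^\top$ has exactly one nonzero singular value, equal to $\|u\|\,\|v\|$. The sizes and seed below are arbitrary choices, assuming NumPy:

```python
import numpy as np

# Sketch (not the original problem, which is truncated above): for a
# rank-1 matrix A = u v^T, the SVD has a single nonzero singular value
# equal to ||u|| * ||v||.
rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(3)
A = np.outer(u, v)

s = np.linalg.svd(A, compute_uv=False)
print(s)                                      # only s[0] is (numerically) nonzero
print(np.linalg.norm(u) * np.linalg.norm(v))  # matches s[0]
```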

Singular vector of random Gaussian matrix

Suppose $\Omega$ is a Gaussian matrix with entries distributed i.i.d. according to the normal distribution $\mathcal{N}(0,1)$. Let $U \Sigma V^{\mathsf T}$ be its singular value decomposition. What would be the distribution of the column (or row) vectors of $U$ and $V$? Would it be Gaussian or anything closely related?
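A quick empirical sketch of the standard answer (the singular-vector factors of an i.i.d. Gaussian matrix are Haar-distributed, so each column of $U$ is uniform on the unit sphere, not Gaussian): a uniform vector on the sphere in $\mathbb{R}^m$ has coordinates with second moment $1/m$. The sizes, seed, and trial count are my own choices, assuming NumPy:

```python
import numpy as np

# Empirical check: the first column of U behaves like a uniformly random
# unit vector in R^m, whose coordinates have second moment 1/m.
rng = np.random.default_rng(1)
m, n, trials = 8, 5, 2000

first_coords = []
for _ in range(trials):
    G = rng.standard_normal((m, n))
    U, S, Vt = np.linalg.svd(G)
    first_coords.append(U[0, 0])

print(np.mean(np.square(first_coords)), 1.0 / m)  # these should be close
```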

$B - A \in S^n_{++}$ and $I - A^{1/2}B^{-1}A^{1/2} \in S^n_{++}$ equivalent?

Define $S^n_{++}$ to be the set of all positive definite matrices. That is, if $A \in S^n_{++}$, then $A$ is a positive definite matrix. Now suppose that $A,B \in S^n_{++}$ are two positive definite matrices. How to prove that $$B - A \in S^n_{++}$$ if and only if $$I - A^{1/2}B^{-1}A^{1/2} \in S^n_{++}?$$
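A hedged numerical check of the claimed equivalence (a sanity check, not a proof): draw random symmetric positive definite $A$ and $B$ and compare the sign of the smallest eigenvalue of $B - A$ with that of $I - A^{1/2}B^{-1}A^{1/2}$. The construction of the random matrices below is my own choice, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(n):
    """A random, comfortably positive definite symmetric matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def sqrtm_sym(A):
    """Symmetric square root via the eigendecomposition of A."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

for _ in range(5):
    n = 4
    A, B = random_spd(n), random_spd(n)
    Ah = sqrtm_sym(A)
    lhs = np.linalg.eigvalsh(B - A).min() > 0
    rhs = np.linalg.eigvalsh(np.eye(n) - Ah @ np.linalg.inv(B) @ Ah).min() > 0
    print(lhs, rhs)   # the two booleans should always agree
```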

Why the SVD of $X$ is preferred to the eigendecomposition of $XX^\top$ in PCA

In this post J.M. has mentioned that … In fact, using the SVD to perform PCA makes much better sense numerically than forming the covariance matrix to begin with, since the formation of $XX^\top$ can cause loss of precision. This is detailed in books on numerical linear algebra, but I’ll leave you with an example […]
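The loss-of-precision point can be seen on a tiny example (my own, not the one from the linked post): when a column scale $\varepsilon$ is below the square root of machine epsilon, $\varepsilon^2$ is lost as soon as the Gram matrix is formed, while the SVD of $X$ still recovers it. Assuming NumPy:

```python
import numpy as np

# Lauchli-style example: the small singular value eps survives the SVD of X
# but is destroyed by forming the Gram matrix, because 1 + eps**2 rounds to 1
# in double precision.
eps = 1e-10
X = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])

s_svd = np.linalg.svd(X, compute_uv=False)
s_gram = np.sqrt(np.abs(np.linalg.eigvalsh(X.T @ X)))[::-1]

print(s_svd)    # second singular value ~ 1e-10, correctly recovered
print(s_gram)   # second value collapses to 0
```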

How can you explain the Singular Value Decomposition to Non-specialists?

I am giving a presentation in two days about a search engine I have been making the past summer, and my research involved the use of singular value decompositions, or in other words, $A=U\Sigma V^T$. I took a high school course on Linear Algebra last year, but the course was not very thorough, and though […]

Recovering eigenvectors from SVD

I am dealing with a problem similar to principal component analysis. That is, I have a matrix and I want to recover the ‘most efficient basis’ to explain the matrix variability. With a square matrix these are the eigenvectors, weighted by the eigenvalues. Originally, I was dealing with square matrices, and I used eigendecomposition to recover […]
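A short sketch of the relationship the question is after: for any matrix $X$, the left singular vectors of $X$ are eigenvectors of $XX^\top$, and the eigenvalues are the squared singular values (the vectors agree up to sign). The sizes and seed below are arbitrary, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4))

U, S, Vt = np.linalg.svd(X, full_matrices=False)
w, Q = np.linalg.eigh(X @ X.T)      # ascending eigenvalues of X X^T
w, Q = w[::-1], Q[:, ::-1]          # reorder to match the descending S

print(np.allclose(w[:4], S**2))     # eigenvalues = squared singular values
print(np.allclose(np.abs(Q[:, :4].T @ U), np.eye(4), atol=1e-6))  # same vectors, up to sign
```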

Condition number of a product of two matrices

Given two square matrices $A$ and $B$, is the following inequality $$\operatorname{cond}(AB) \leq \operatorname{cond}(A)\operatorname{cond}(B),$$ where $\operatorname{cond}$ is the condition number, true? Is it still true for rectangular matrices? I know this is true: $$||AB|| \leq ||A|| \cdot ||B||$$ The definition of the condition number of a matrix is as follows: $$\operatorname{cond}(A)=||A|| \cdot ||A^{-1}||$$
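The square case follows from submultiplicativity applied to $AB$ and $(AB)^{-1} = B^{-1}A^{-1}$, but a cheap numerical check is a reasonable first step. The sketch below uses the 2-norm condition number (NumPy's default) on random invertible matrices; sizes and seed are my own choices:

```python
import numpy as np

# Sanity check of cond(AB) <= cond(A) * cond(B) on random square matrices.
rng = np.random.default_rng(4)
for _ in range(5):
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))
    lhs = np.linalg.cond(A @ B)
    rhs = np.linalg.cond(A) * np.linalg.cond(B)
    print(lhs <= rhs * (1 + 1e-12), lhs, rhs)
```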

Maximizing the trace

Say I have the following maximization: $\max_R \operatorname{trace}(RZ)$ subject to $R^TR = I_n$, where $R$ is an $n \times n$ orthogonal matrix. Also, the SVD of $Z$ is $Z = USV^T$. I’m trying to find the optimal $R^*$, which intuitively I know is equal to $VU^T$, since $\operatorname{trace}(RZ) = \operatorname{trace}(VU^T USV^T) = \operatorname{trace}(S)$. I […]
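A small numerical sketch of the claimed maximizer (this is the orthogonal Procrustes-type argument): with $Z = USV^T$, taking $R^* = VU^T$ gives $\operatorname{trace}(R^*Z) = \operatorname{trace}(S)$, and random orthogonal matrices never do better. Seed and size are arbitrary, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
Z = rng.standard_normal((n, n))
U, S, Vt = np.linalg.svd(Z)

R_star = Vt.T @ U.T                      # R* = V U^T
print(np.trace(R_star @ Z), S.sum())     # these two agree

# Compare against random orthogonal matrices (QR of a Gaussian matrix).
for _ in range(5):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    print(np.trace(Q @ Z) <= S.sum() + 1e-9)
```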

Understanding a derivation of the SVD

Here’s an attempt to motivate the SVD. Let $A \in \mathbb R^{m \times n}$. It’s natural to ask, in what direction does $A$ have the most “impact”. In other words, for which unit vector $v$ is $\| A v \|_2$ the largest? Denote this unit vector as $v_1$. Let $\sigma_1 = \| A v_1 \|_2$, […]
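The motivating step can be checked numerically: $\sigma_1 = \max_{\|v\|_2 = 1} \|Av\|_2$, attained at the first right singular vector $v_1$. The sketch below compares against random unit vectors; sizes and seed are my own choices, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3))

U, S, Vt = np.linalg.svd(A)
v1, sigma1 = Vt[0], S[0]                 # first right singular vector and sigma_1
print(np.isclose(np.linalg.norm(A @ v1), sigma1))

# No random unit vector beats sigma_1.
for _ in range(1000):
    v = rng.standard_normal(3)
    v /= np.linalg.norm(v)
    assert np.linalg.norm(A @ v) <= sigma1 + 1e-12
```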

Why does spectral norm equal the largest singular value?

This may be a trivial question yet I was unable to find an answer: $$\left \| A \right \| _2=\sqrt{\lambda_{\text{max}}(A^*A)}=\sigma_{\text{max}}(A)$$ where the spectral norm $\left \| A \right \| _2$ of a complex matrix $A$ is defined as $$\max \left\{ \|Ax\|_2 : \|x\|_2 = 1 \right\}$$ How does one prove the first and the second […]
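Before the proof, a quick numerical confirmation that the three quantities coincide on a random complex matrix (size and seed are arbitrary, assuming NumPy, where `ord=2` gives the spectral norm):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

spec_norm = np.linalg.norm(A, ord=2)                       # ||A||_2
lam_max = np.linalg.eigvalsh(A.conj().T @ A).max()         # lambda_max(A^* A)
sigma_max = np.linalg.svd(A, compute_uv=False)[0]          # sigma_max(A)

print(spec_norm, np.sqrt(lam_max), sigma_max)              # all (numerically) equal
```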