Articles on the pseudoinverse

Prove that a matrix that is not positive definite has real and positive eigenvalues

I have a $2\times2$ matrix $J$ of rank $2$ and a $2\times2$ diagonal positive definite matrix $A$. Denote by $J^+$ a pseudoinverse of $J$. I can find many counterexamples for which $J^+AJ$ is not positive definite (e.g. $J=\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)$ and $A=\left(\begin{smallmatrix}1&0\\0&2\end{smallmatrix}\right)$), but for all of them $J^+AJ$ has real and positive eigenvalues. So, I was wondering […]
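The observation is easy to check numerically. A minimal NumPy sketch using the counterexample from the question (note that here $J$ is invertible, so $J^+ = J^{-1}$ and $J^+AJ$ is similar to $A$, which explains the real positive spectrum):

```python
import numpy as np

J = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A = np.diag([1.0, 2.0])

Jp = np.linalg.pinv(J)   # J has rank 2, so J^+ coincides with J^{-1}
M = Jp @ A @ J           # similar to A, hence same (real, positive) eigenvalues

eigs = np.linalg.eigvals(M)
assert np.allclose(eigs.imag, 0.0) and np.all(eigs.real > 0)

# M itself is not symmetric, and its symmetric part has a negative
# eigenvalue, so M is indeed not positive definite
sym = (M + M.T) / 2
assert np.linalg.eigvalsh(sym).min() < 0
```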

How to optimize a singular covariance-weighted residual?

Definitions: $$v(x)\equiv\{g_1(x),g_2(x),\ldots,g_n(x)\}^T$$ $$C\equiv \operatorname{cov}(v)=\langle vv^T \rangle -\langle v\rangle \langle v^T \rangle =\int f(x)v(x)v(x)^T \, dx-\int f(x)v(x) \, dx \int f(x')v(x')^T \, dx'$$ $$R(x)\equiv v(x)^T C^\dagger v(x)$$ $z$ is an implicit parameter of $f(x)$ and of all the $g_i(x)$'s. How can one go about optimizing $R$ with respect to $z$ if $C$ is singular? For a non-singular $D$, […]
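One practical route is to replace $C^{-1}$ with the Moore-Penrose inverse $C^\dagger$ and then optimize $R$ over $z$ with a scan or a 1-D optimizer. A sketch with NumPy, where the basis functions $g_i$, the weight $f(x)$, and the grid are all illustrative placeholders, not taken from the question:

```python
import numpy as np

def v(x, z):
    # hypothetical basis functions g_i(x; z); the third component is a
    # linear combination of the first two, so cov(v) is singular on purpose
    return np.array([np.sin(z * x), np.cos(z * x), np.sin(z * x) + np.cos(z * x)])

def R_at(x, z, xs, fs):
    V = np.array([v(xi, z) for xi in xs])      # samples of v(x) on the grid
    mean = fs @ V                              # <v> under the weights f
    C = (V.T * fs) @ V - np.outer(mean, mean)  # cov(v); singular by construction
    Cp = np.linalg.pinv(C)                     # Moore-Penrose C^dagger
    vx = v(x, z)
    return vx @ Cp @ vx

xs = np.linspace(0.0, 1.0, 200)
fs = np.full_like(xs, 1.0 / len(xs))           # uniform weight f(x) on [0, 1]
vals = [R_at(0.3, z, xs, fs) for z in (0.5, 1.0, 2.0)]
# C^dagger is positive semidefinite, so R is finite and nonnegative
assert all(np.isfinite(r) and r >= -1e-9 for r in vals)
```

One caveat worth keeping in mind: the pseudoinverse of a nearly singular covariance is discontinuous in the matrix entries, so derivative-free optimization over $z$ tends to be more robust than gradient-based methods here.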

Can't understand this pseudo-inverse relation.

In the answer to a different question, a curious matrix relation came up: $M$ is symmetric and non-singular, $G$ is non-symmetric and singular. Theorem: When $M$ is positive/negative definite, or more generally, if $G$ and $(G^\dagger G) M (G^\dagger G)$ have the same rank, then $$G^{T}\left(GMG^{T}\right)^{\dagger}G=\left((G^\dagger G)M(G^\dagger G)\right)^\dagger$$ I have no idea how to prove […]
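Before attempting the algebra, it can help to confirm the identity numerically. A sketch with a random positive definite $M$ and a random singular $G$ (recall that $G^\dagger G$ is the orthogonal projector onto the row space of $G$):

```python
import numpy as np

rng = np.random.default_rng(0)

# random symmetric positive definite M
X = rng.standard_normal((4, 4))
M = X @ X.T + 4 * np.eye(4)

# random singular G of rank 2
G = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))

Gp = np.linalg.pinv(G)
P = Gp @ G                      # orthogonal projector onto row space of G

lhs = G.T @ np.linalg.pinv(G @ M @ G.T) @ G
rhs = np.linalg.pinv(P @ M @ P)
assert np.allclose(lhs, rhs)    # the claimed identity holds numerically
```

The check also hints at a proof strategy: writing $G=U\Sigma V^T$ in reduced SVD form, both sides collapse to $V\,(V^TMV)^{-1}V^T$ when $V^TMV$ is invertible, which definiteness of $M$ guarantees.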

'Stable' Ways To Invert A Matrix

So let's say I need to invert a matrix that is generally dense and poorly conditioned. What are some ways I can get an accurate inverse? Here are my candidates: inverse via SVD, inverse via Cholesky decomposition, inverse via LU decomposition, inverse via QR decomposition. Are there any other methods I am missing? Of […]
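A common recommendation is to avoid forming an explicit inverse at all and instead solve linear systems with a factorization; when the matrix is nearly singular, a truncated SVD gives a regularized alternative. A small NumPy comparison on the notoriously ill-conditioned Hilbert matrix (a sketch, not a full benchmark):

```python
import numpy as np

# 8x8 Hilbert matrix, condition number around 1e10
n = 8
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = np.ones(n)

# Preferred: never form H^{-1}; solve via pivoted LU factorization
x_lu = np.linalg.solve(H, b)

# Alternative: SVD with truncation of tiny singular values
U, s, Vt = np.linalg.svd(H)
tol = s[0] * n * np.finfo(float).eps
s_inv = np.where(s > tol, 1.0 / s, 0.0)
x_svd = Vt.T @ (s_inv * (U.T @ b))

# Both approaches keep the residual small even though the solution
# itself is extremely sensitive to perturbations of b
assert np.linalg.norm(H @ x_lu - b) < 1e-4
assert np.linalg.norm(H @ x_svd - b) < 1e-4
```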

Block matrix pseudoinverse: symmetry of the inverse of a symmetric matrix

In the wiki page for block matrix pseudoinverses, there is a formula $$ \begin{pmatrix}A & B \\ C & D\end{pmatrix}^{-1}=\begin{pmatrix} (A-BD^{-1}C)^{-1} & -A^{-1}B(D-CA^{-1}B)^{-1}\\ -D^{-1}C(A-BD^{-1}C)^{-1} & (D-CA^{-1}B)^{-1} \end{pmatrix}. $$ Let's call $M=\begin{pmatrix}A & B \\ C & D\end{pmatrix}$ and $N$ the matrix on the RHS above. I can verify that $MN=I$ but I am stuck when […]
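Whether $NM=I$ also holds can be checked numerically before attempting the algebra (for square matrices it follows automatically: a square matrix with a right inverse has that same matrix as its two-sided inverse). A sketch with random blocks, where the diagonal shift is only there to keep $A$, $D$, and both Schur complements safely invertible:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 5 * np.eye(2)
B = rng.standard_normal((2, 2))
C = rng.standard_normal((2, 2))
D = rng.standard_normal((2, 2)) + 5 * np.eye(2)

M = np.block([[A, B], [C, D]])
inv = np.linalg.inv

SA = inv(A - B @ inv(D) @ C)    # inverse of the Schur complement of D
SD = inv(D - C @ inv(A) @ B)    # inverse of the Schur complement of A
N = np.block([[SA,              -inv(A) @ B @ SD],
              [-inv(D) @ C @ SA, SD]])

# N is a two-sided inverse of M
assert np.allclose(M @ N, np.eye(4))
assert np.allclose(N @ M, np.eye(4))
```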

If $A$ is a non-square matrix with orthonormal columns, what is $A^+$?

If a matrix has orthonormal columns, they must be linearly independent, so $A^+ = (A^T A)^{-1} A^T$. Also, the fact that its columns are orthonormal gives $A^T A = I$. Therefore, $$A^+ = (A^T A)^{-1} A^T = (I)^{-1}A^T = A^T$$ Thus, $A^+ = A^T$. Am I correct? Thank you.
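The argument checks out, and it is easy to confirm numerically. A sketch where the orthonormal-column matrix comes from a reduced QR factorization of a random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# reduced QR of a random 5x3 matrix gives a 5x3 Q with orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))
A = Q

assert np.allclose(A.T @ A, np.eye(3))       # orthonormal columns
assert np.allclose(np.linalg.pinv(A), A.T)   # hence A^+ = A^T
```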

Explain why $x^+=A^+b$ is the shortest possible solution to $A^TA\hat{x}=A^Tb$

I was going through the chapter on the pseudoinverse in Introduction to Linear Algebra by Strang, and it says: The vector $x^+=A^+b$ is the shortest possible solution to $A^TA\hat{x}=A^Tb$. Reason: The difference $\hat{x}-x^+$ is in the nullspace of $A^TA$. This is also the nullspace of $A$, orthogonal to $x^+$. I get that it is essentially saying […]
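A small numerical illustration of Strang's reasoning (the rank-deficient $A$ and the nullspace vector below are chosen by hand for this sketch): every solution of the normal equations is $x^+$ plus a nullspace vector, and since $x^+$ is orthogonal to the nullspace, Pythagoras makes it the shortest.

```python
import numpy as np

# rank-deficient A: third column = first column + second column
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x_plus = np.linalg.pinv(A) @ b

n = np.array([1.0, 1.0, -1.0])     # a nullspace vector: A @ n = 0
assert np.allclose(A @ n, 0.0)

# shifting by a nullspace vector still satisfies the normal equations
x_other = x_plus + 0.7 * n
assert np.allclose(A.T @ A @ x_other, A.T @ b)

# x_plus is orthogonal to the nullspace, so by Pythagoras it is shortest:
# ||x_other||^2 = ||x_plus||^2 + ||0.7 n||^2
assert abs(x_plus @ n) < 1e-10
assert np.linalg.norm(x_other) > np.linalg.norm(x_plus)
```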

What forms does the Moore-Penrose inverse take under systems with full rank, full column rank, and full row rank?

The normal equations $(A'A)x = A'b$ give a solution to the least squares problem. When $A$ has full column rank, $x = (A'A)^{-1}A'b$ is the least squares solution. How can we show that the Moore-Penrose inverse solves the least squares problem and hence is equal to $(A'A)^{-1}A'$? Also, what happens for a rank-deficient matrix? […]
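The two full-rank special cases can be stated and verified directly: with full column rank $A^+=(A^TA)^{-1}A^T$ (a left inverse), with full row rank $A^+=A^T(AA^T)^{-1}$ (a right inverse), and for a square invertible $A$ both reduce to $A^{-1}$. A quick NumPy check (random matrices have full rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(3)

# full column rank (tall): A^+ = (A^T A)^{-1} A^T, a left inverse
A = rng.standard_normal((5, 3))
Ap = np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(Ap, np.linalg.pinv(A))
assert np.allclose(Ap @ A, np.eye(3))

# full row rank (wide): A^+ = A^T (A A^T)^{-1}, a right inverse
B = rng.standard_normal((3, 5))
Bp = B.T @ np.linalg.inv(B @ B.T)
assert np.allclose(Bp, np.linalg.pinv(B))
assert np.allclose(B @ Bp, np.eye(3))
```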

Generalized inverse of a matrix and convergence for a singular matrix

Please help me understand what the generalized inverse of a matrix is and how it helps with convergence in linear regression when the design matrix is singular.
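A concrete illustration of the regression case: when columns of the design matrix are collinear, $X^TX$ is singular and the textbook formula $(X^TX)^{-1}X^Ty$ breaks down, but the pseudoinverse still returns the minimum-norm least-squares coefficients. A sketch with a deliberately duplicated column (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# design matrix with a duplicated regressor -> X^T X is singular
x1 = rng.standard_normal(20)
X = np.column_stack([np.ones(20), x1, x1])     # perfectly collinear columns
y = 1.0 + 2.0 * x1 + 0.1 * rng.standard_normal(20)

# (X^T X)^{-1} does not exist, but the pseudoinverse gives the
# minimum-norm least-squares coefficients
beta = np.linalg.pinv(X) @ y
assert np.isfinite(beta).all()

# lstsq uses the same SVD-based minimum-norm solution
beta2, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta, beta2)
```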

What is step by step logic of pinv (pseudoinverse)?

So we have a matrix $A$ of size $M \times N$ with elements $a_{i,j}$. What is a step-by-step algorithm that returns the Moore-Penrose inverse $A^+$ for a given $A$ (on the level of manipulations/operations with the $a_{i,j}$ elements, not vectors)?
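The standard recipe goes through the SVD: factor $A=U\Sigma V^T$, invert the singular values above a tolerance, and reassemble $A^+=V\Sigma^+U^T$. A sketch that delegates the SVD itself to NumPy (a fully element-level SVD, e.g. Golub-Kahan bidiagonalization plus QR iteration, is considerably longer):

```python
import numpy as np

def pinv_via_svd(A, rtol=1e-12):
    """Moore-Penrose inverse: A = U diag(s) V^T  =>  A^+ = V diag(s^+) U^T,
    inverting singular values above a tolerance and zeroing the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = rtol * max(A.shape) * (s[0] if s.size else 0.0)
    s_plus = np.array([1.0 / si if si > cutoff else 0.0 for si in s])
    return Vt.T @ np.diag(s_plus) @ U.T

A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # rank 1
Ap = pinv_via_svd(A)
assert np.allclose(Ap, np.linalg.pinv(A))

# the four Penrose conditions characterize A^+
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose((A @ Ap).T, A @ Ap)
assert np.allclose((Ap @ A).T, Ap @ A)
```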