Let $\Sigma$ be a symmetric positive definite matrix of dimensions $n \times n$. Is there a numerically robust way of checking whether it can be decomposed as $\Sigma = \mathcal{D} + v^t v$, where $v$ has dimensions $r \times n$ with $r < n-1$ and $\mathcal{D}$ is diagonal with positive elements?

So far, given $\Sigma$, I have been checking for the minimal $k$ for which a positive diagonal matrix $\mathcal{D}$ with $\mathrm{rank}(\Sigma -\mathcal{D}) = k$ exists.

I should add that even when a solution exists, it may be non-unique.

Example:

$$
\Sigma = \left(
\begin{array}{cccc}
6 & 6 & 7 & 0 \\
6 & 11 & 12 & -3 \\
7 & 12 & 20 & -6 \\
0 & -3 & -6 & 9 \\
\end{array}
\right) \, \text{ where } \mathcal{D} = \left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 0 & 0 & 4 \\
\end{array}
\right) \text{ and }
v^t = \left(
\begin{array}{cc}
2 & 1 \\
3 & 0 \\
4 & -1 \\
-1 & 2 \\
\end{array}
\right)
$$
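The example can be checked directly: in exact integer arithmetic, $\mathcal{D} + v^t\,(v^t)^T$ must reproduce $\Sigma$. A quick plain-Python sanity check (the matrices are copied from the display above):

```python
# Check that Sigma = D + vt * vt^T for the example in the question.
Sigma = [[6, 6, 7, 0], [6, 11, 12, -3], [7, 12, 20, -6], [0, -3, -6, 9]]
D = [1, 2, 3, 4]                          # diagonal entries of the 4x4 D
vt = [[2, 1], [3, 0], [4, -1], [-1, 2]]   # the 4x2 factor v^t

recon = [[(D[i] if i == j else 0) + sum(vt[i][k] * vt[j][k] for k in range(2))
          for j in range(4)] for i in range(4)]
print(recon == Sigma)  # True
```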

$\Sigma$ admits a different representation as well; in fact, it is one member of a one-parameter family:

$$
\mathcal{D} = \mathrm{diag}\left(\frac{31}{39}, \frac{403}{203}, \frac{79}{29}, \frac{84}{19} \right) \text{ and } v^t = \left(
\begin{array}{cc}
0 & 679 \\
-78 & 696 \\
-156 & 812 \\
97 & 0 \\
\end{array}
\right) \cdot \left(
\begin{array}{cc}
\sqrt{2522} & 0 \\
0 & 2 \sqrt{19691} \\
\end{array}
\right)^{-1}
$$

I have two questions. Why is there a parametric family of solutions? Can a solution be found by methods of linear algebra?

Thank you

If you accept numerical accuracy on the order of $[10^{-14},10^{-10}]$, then the following Linear Matrix Inequality (LMI) approach is robust:

I have tried to solve the related problem: finding a diagonal matrix $D\succ 0$ such that $$S:=\Sigma-D \succeq 0,\quad \operatorname{rank}(S)\leq n-2.$$ Rank constraints are always problematic (non-convex), since going up (increasing the rank) is easy but going down is typically very hard (the rank of a matrix is lower semi-continuous).

Anyway, I have set up a simple LMI problem with a diagonal matrix $D$ as follows

$$ \min_{\displaystyle\substack{ D\succ 0,\\S\succeq 0}} \ \ \operatorname{tr}(S)$$

In other words, I simply minimized the sum of the eigenvalues of $S$, hoping that some of them would come quite close to zero. Much to my surprise, two of them usually turned out to be arbitrarily close to zero (a few cases, so far, ended up with only one). This is also equivalent to maximizing the entries of $D$ while keeping $S$ positive semi-definite.
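To make the equivalence explicit (a one-line derivation using only $S = \Sigma - D$ and linearity of the trace):

$$
\operatorname{tr}(S) = \operatorname{tr}(\Sigma) - \operatorname{tr}(D), \qquad \operatorname{tr}(S) = \sum_{i=1}^{n} \lambda_i(S) \ge 0,
$$

so with $\operatorname{tr}(\Sigma)$ fixed, minimizing $\operatorname{tr}(S)$ is exactly maximizing $\operatorname{tr}(D)$, i.e. pushing the diagonal entries of $D$ up for as long as $S \succeq 0$ permits.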

If you have MATLAB and Robust Control Toolbox you can test it for yourself with the following code (any LMI solver would do):

```
% Generate a random positive definite test matrix
n = 5;
Sr = rand(n);
Sr = 2*(Sr+Sr');                          % symmetrize
Sr = 2.5*max(abs(eig(Sr)))*eye(n) + Sr;   % shift to make it positive definite
% Uncomment the next line for the particular matrix in the question
%Sr = [6 6 7 0;6 11 12 -3;7 12 20 -6;0 -3 -6 9]; n = 4;
% Declare the LMI D < Sr, i.e. S = Sr - D > 0, with D diagonal
setlmis([])
D = lmivar(1,[ones(n,1) zeros(n,1)]);     % n scalar blocks => diagonal D
lmiterm([1 1 1 D],1,1);                   % left-hand side: D
lmiterm([-1 1 1 0],Sr);                   % right-hand side: Sr
lmis = getlmis;
c = mat2dec(lmis,-eye(n));                % objective: minimize -trace(D)
[copt,xopt] = mincx(lmis,c,[1e-12 200 1e9 20 0]);
Dnum = dec2mat(lmis,xopt,D);
disp('The eigenvalues of the difference')
eig(Sr-Dnum)
disp('The rank of the difference')
rank(Sr-Dnum)
disp('The reciprocal condition number of the difference')
rcond(Sr-Dnum)
```

But, as we can expect, $(S_r-D_{num})$ is numerically a full-rank matrix; however, its reciprocal condition number (`rcond`) is generally around $10^{-14}$.

Finally, I have tried your matrix and ended up with

$$
D = \begin{pmatrix}
0.75669 & 0 & 0 & 0 \\
0 & 2.1392 & 0 & 0 \\
0 & 0 & 2.6752 & 0 \\
0 & 0 & 0 & 4.4885
\end{pmatrix}
$$
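This $D$ can be sanity-checked without MATLAB: if two eigenvalues of $\Sigma - D$ are (near) zero, its determinant must be tiny. A plain-Python sketch, not part of the original answer (the `det` helper is an ad-hoc Gaussian elimination, and the matrices are copied from above):

```python
# If Sigma - D has two near-zero eigenvalues, det(Sigma - D) must be tiny.
Sigma = [[6, 6, 7, 0], [6, 11, 12, -3], [7, 12, 20, -6], [0, -3, -6, 9]]
D = [0.75669, 2.1392, 2.6752, 4.4885]  # diagonal entries from the LMI solver

M = [[Sigma[i][j] - (D[i] if i == j else 0) for j in range(4)] for i in range(4)]

def det(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    n = len(A)
    d = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if abs(A[p][k]) < 1e-300:
            return 0.0
        if p != k:
            A[k], A[p] = A[p], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

print(det(M))  # near zero: Sigma - D is numerically rank-deficient
```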

After making sure that $m$ of the eigenvalues are indeed close to zero, obtaining the low-rank variable is simply numerical massaging. (This part can be substantially improved, in my opinion.)

```
[K,L,M] = svd(Sr-Dnum);
M = M';
m = 2;            % how many singular values are close to zero
K = K(:,1:n-m);   % keep the n-m dominant singular directions
L = L(1:n-m,1:n-m);
M = M(1:n-m,:);
T = K*L*M;        % best rank-(n-m) approximation of Sr-Dnum
disp('The rank of the truncated matrix T')
rank(T)           % indeed n-m
V = K*sqrt(L);    % low-rank factor: T = V*V' since Sr-Dnum is symmetric
% Testing the result
disp('The error of \Sigma - D - v^t v')
Sr-Dnum-V*V'
```
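As an aside, for the exact integer example in the question the low-rank factor can also be recovered without an SVD, by peeling off rank-one outer products (a pivoted-Cholesky-style sweep). This is a plain-Python sketch under that exact-arithmetic assumption, not the answer's MATLAB method:

```python
import math

# Exact example from the question: Sigma = D + v^t (v^t)^T, D = diag(1,2,3,4).
Sigma = [[6, 6, 7, 0], [6, 11, 12, -3], [7, 12, 20, -6], [0, -3, -6, 9]]
D = [1, 2, 3, 4]
n = 4

# S = Sigma - D should be positive semidefinite of rank 2.
S = [[Sigma[i][j] - (D[i] if i == j else 0) for j in range(n)] for i in range(n)]

# Peel off rank-one pieces: each step removes w*w^T built from the column
# with the largest remaining diagonal entry (pivoted Cholesky).
factors = []
R = [row[:] for row in S]
for _ in range(n):
    piv = max(range(n), key=lambda i: R[i][i])
    if R[piv][piv] < 1e-12:
        break                      # remaining residual is (numerically) zero
    w = [R[i][piv] / math.sqrt(R[piv][piv]) for i in range(n)]
    factors.append(w)
    for i in range(n):
        for j in range(n):
            R[i][j] -= w[i] * w[j]

print(len(factors))  # 2: the numerical rank of Sigma - D

# Reconstruction check: D + sum_k w_k w_k^T == Sigma (up to rounding).
recon = [[(D[i] if i == j else 0) + sum(w[i] * w[j] for w in factors)
          for j in range(n)] for i in range(n)]
err = max(abs(recon[i][j] - Sigma[i][j]) for i in range(n) for j in range(n))
print(err)
```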

Numerical result for your variable $v^t$ (rounded off further by the display)

$$
v^t =
\begin{pmatrix}
-1.826 & 1.3817 \\
-2.9418 & 0.45479 \\
-4.1423 & -0.40799 \\
1.2817 & 1.6938
\end{pmatrix}
$$

We also have a necessary condition by using the orthogonal complement $V_\perp$ of $v^t$:

$$

V_\perp^T\Sigma V_\perp = V_\perp^T D V_\perp

$$

This identity also suffers from numerical errors (around $10^{-12}$). Nevertheless, if you are searching for a quick-and-dirty approximation, this comes pretty close. By fiddling with the solution $D_{num}$ (which means manual labor), you can even obtain exact results.
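For the question's exact example, this necessary condition can be verified in integer arithmetic. A plain-Python sketch; the two complement vectors below were worked out by hand for this particular $v^t$ and are purely illustrative:

```python
# Check V_perp^T Sigma V_perp == V_perp^T D V_perp for the exact example.
# The columns of Vp are orthogonal to both columns (2,3,4,-1), (1,0,-1,2)
# of v^t, so V_perp^T (Sigma - D) V_perp = 0 exactly.
Sigma = [[6, 6, 7, 0], [6, 11, 12, -3], [7, 12, 20, -6], [0, -3, -6, 9]]
D = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]
Vp = [[1, -6], [-2, 5], [1, 0], [0, 3]]   # columns span the complement of v^t

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

lhs = matmul(matmul(transpose(Vp), Sigma), Vp)
rhs = matmul(matmul(transpose(Vp), D), Vp)
print(lhs == rhs)  # True
```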

Regarding the theory, I don't know why the number of eigenvalues that come close to zero is $n-2$ in general but not more. I'll try to come back to this if I have some time later.

I don't think an analytical solution would lead to a direct result, since the rank depends so delicately on the matrix entries; it would be really tough (and great, of course) to obtain anything other than approximations.

Lastly, I did not understand what you meant by the $1$-parameter family, so I don't have any comments on that :).
