Direct summand of skew-symmetric and symmetric matrices

Let $W_1$ be the subspace of $\mathcal{M}_{n \times n}(\mathbb{F})$ that consists of all $n \times n$ skew-symmetric matrices with entries from $\mathbb{F}$, and let $W_2$ be the subspace of $\mathcal{M}_{n \times n}(\mathbb{F})$ consisting of all symmetric $n \times n$ matrices. Prove that $\mathcal{M}_{n \times n}(\mathbb{F}) = W_1 \oplus W_2$.

I couldn’t really figure out why every $n \times n$ matrix should be expressible as the sum of an $n \times n$ symmetric matrix and an $n \times n$ skew-symmetric matrix (which is what the sum condition $W_1 + W_2 = \mathcal{M}_{n \times n}(\mathbb{F})$ requires). Browsing online, I found that $$ M = \frac{1}{2}(M + M^{t}) + \frac{1}{2}(M-M^{t}),$$ where $\frac{1}{2}(M+M^{t}) \in W_2$, $\frac{1}{2}(M-M^t) \in W_1$, and $M \in \mathcal{M}_{n \times n}(\mathbb{F})$.

This reminds me of the formula by which any function may be written as the sum of an even function and an odd function: $$f(x) = \frac{f(x)+f(-x)}{2} + \frac{f(x)-f(-x)}{2}.$$

However, it bothers me to use a formula unless I know how it was derived. If anyone could show me why a skew-symmetric matrix may be represented as $\frac{1}{2}(M-M^t)$ and why a symmetric matrix may be represented as $\frac{1}{2}(M+M^t)$, then I may sleep better tonight.

Thanks.

Answers

Rather than asking why a symmetric matrix may be represented as $\frac{1}{2}(M+M^t)$, you need to ask yourself the following three things:

  1. Is $\frac{1}{2}(M+M^t)$ symmetric, for every $M$?
  2. Is $\frac{1}{2}(M-M^t)$ skew-symmetric, for every $M$?
  3. Is the sum of these equal to $M$?

After you have answered “yes” to the above three questions, you will have proven that $\mathcal{M}_{n \times n}(\mathbb{F})=W_1+W_2$. Then you merely prove that $W_1\cap W_2=\{0\}$, and then you have $\mathcal{M}_{n \times n}(\mathbb{F})=W_1\oplus W_2$.
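For completeness, here is how those checks go, using only that transposition is linear and that $(M^t)^t = M$:
$$\Bigl(\tfrac{1}{2}(M+M^t)\Bigr)^t = \tfrac{1}{2}(M^t+M) = \tfrac{1}{2}(M+M^t), \qquad \Bigl(\tfrac{1}{2}(M-M^t)\Bigr)^t = \tfrac{1}{2}(M^t-M) = -\tfrac{1}{2}(M-M^t),$$
and the two pieces visibly add up to $M$. For the intersection: if $A \in W_1 \cap W_2$, then $A^t = A$ and $A^t = -A$, so $2A = 0$ and hence $A = 0$ (this uses that $2$ is invertible in $\mathbb{F}$, which the factor $\frac{1}{2}$ in the formulas already presupposes).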


Ultimately, there are many different matrices $M$ that all have the same symmetrization $\frac{1}{2}(M+M^t)$, so the formula is not a canonical way of “representing” a given symmetric matrix. The reason those two particular formulas are used is that they make all three questions above have the answer “yes”.
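One can also derive the two formulas instead of merely verifying them. Suppose a decomposition $M = A + S$ with $A \in W_1$ and $S \in W_2$ exists; transposing gives a second equation:
$$M = A + S, \qquad M^t = A^t + S^t = -A + S.$$
Adding and subtracting these yields $S = \frac{1}{2}(M+M^t)$ and $A = \frac{1}{2}(M-M^t)$, which shows at once that the decomposition exists and that it is unique.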

There is nothing mysterious about these decompositions. Any linear operator $T$ on a vector space over a field $F$ not of characteristic $2$ that is an involution ($T^2=I$) is diagonalisable with eigenvalues $1$ and $-1$ only, and the space is therefore the direct sum of the eigenspaces for $1$ and for $-1$. One may take $T$, for instance, to be transposition of matrices, or argument reversal $f\mapsto(x\mapsto f(-x))$ in a vector space of functions. The two eigenspaces then consist of the symmetric and the anti-symmetric objects, respectively.
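Concretely, writing $E_{\lambda}$ for the eigenspace of $T$ with eigenvalue $\lambda$, every vector $v$ splits as
$$v = \underbrace{\tfrac{1}{2}(v+Tv)}_{\in\,E_1} + \underbrace{\tfrac{1}{2}(v-Tv)}_{\in\,E_{-1}}, \qquad\text{since}\quad T\bigl(\tfrac{1}{2}(v \pm Tv)\bigr) = \tfrac{1}{2}(Tv \pm v) = \pm\tfrac{1}{2}(v \pm Tv).$$
Taking $T$ to be matrix transposition recovers the symmetric/skew-symmetric decomposition above, and taking $T$ to be argument reversal recovers the even/odd decomposition of functions.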

In fact, in situations like this (operators satisfying a polynomial equation, where the polynomial splits into distinct linear factors corresponding to the potential eigenspaces), the projections onto the eigenspaces can be expressed as polynomials in $T$. In the case at hand this means one can express symmetrising and anti-symmetrising operations in terms of $T$ itself, and that is exactly what the expressions giving the components in $W_1$ and $W_2$ do.

While in general such expressions can be complicated, they are quite simple for involutions: the projection onto the eigenspace for $1$ is $P_+=\frac12(T+I)$ and the projection onto the eigenspace for $-1$ is $P_-=\frac12(-T+I)$. One easily checks that $T\cdot P_+=P_+$ and $T\cdot P_-=-P_-$, confirming that on the images of these projections $T$ acts as $1$ and as $-1$, respectively. Of course one also has $P_++P_-=I$, so a matrix can be reconstructed from its symmetrised and anti-symmetrised parts. The fact that all this closely resembles decomposing a function into an even and an odd function is therefore no coincidence.
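A quick check of the projection identities, using only $T^2 = I$:
$$P_+^2 = \tfrac{1}{4}(T+I)^2 = \tfrac{1}{4}(T^2+2T+I) = \tfrac{1}{2}(T+I) = P_+, \qquad P_+P_- = \tfrac{1}{4}(T+I)(I-T) = \tfrac{1}{4}(I-T^2) = 0,$$
and similarly $P_-^2 = P_-$. Together with $P_+ + P_- = I$, these are exactly the identities defining a pair of complementary projections, i.e. a direct sum decomposition.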