
It is well-known to many that $\mathbb{C}$ can be represented by matrices of the form $\left[ \begin{array}{cc} a & b \\ -b & a \end{array} \right]$. For example, see this question or this question. It is also discussed in the Wikipedia article on the history of complex numbers. Apparently, there is even an introductory complex variables textbook by Copson from 1935 which uses such matrices to define complex numbers. This is mentioned in *Numbers* by Ebbinghaus et al. on page 69.

My question is simply this:

What is the history of this construction? Who first explained that complex numbers could be viewed as $2 \times 2$ matrices of the special form $\left[ \begin{array}{cc} a & b \\ -b & a \end{array} \right]$ ?

I realize this is just the regular representation of $\mathbb{C}$, and I realize such matrices are the matrices of a rotation composed with a dilation, but the question still remains: who found these first? References are appreciated.
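(Not part of the history question itself, but the claimed isomorphism is easy to spot-check numerically. A minimal sketch using NumPy, with the hypothetical helper name `as_matrix` chosen here for illustration:)

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Represent a + bi by the 2x2 real matrix [[a, b], [-b, a]]."""
    return np.array([[z.real, z.imag],
                     [-z.imag, z.real]])

# Spot-check that the map respects addition and multiplication,
# i.e. that it is a ring homomorphism on these sample values.
z, w = 3 + 4j, 1 - 2j
assert np.allclose(as_matrix(z) + as_matrix(w), as_matrix(z + w))
assert np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w))
```

Of course a numerical check is no substitute for the two-line algebraic verification, but it makes the correspondence concrete.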


This set of lecture notes from Wedderburn explicitly says that a complex scalar $\alpha + i\beta$ can be written as

\begin{equation}

\left(\begin{array}{lr} \alpha & -\beta \\ \beta & \alpha \end{array}\right)

\end{equation}

on page 101 of the PDF (this is page 108 of the document when viewed in a PDF viewer). These notes are from 1934, which is obviously only slightly earlier than your example. However, the notes themselves are based on lectures given at Princeton starting in 1920, and it would seem that this notation goes back to 1907 because in that year Wedderburn (in his thesis) showed that associative hypercomplex systems can be represented by matrices. I’ve been unable to find his thesis online to check if this representation is explicitly written out, but I will update this post if I do.

Going back even further, in 1858 Arthur Cayley published "A Memoir on the Theory of Matrices" in which he mentions matrix representations of quaternions. Specifically, in item #45 on page 32 of the PDF (or on page 17 when viewed in a PDF viewer), he makes a passing mention of the fact that matrices $M$, $N$, and $L$ such that $L^2 = -1$, $M^2 = -1$, and $N = LM = -ML$ satisfy a system of equations that is the same as the one the quaternions satisfy. I didn't see anything in the above paper by Cayley about representing complex numbers with matrices, though I've seen a few passing references to Cayley coming up with the idea in 1858, so it may be the consensus of the mathematical community that the credit should go to Cayley.

Today was the first day of class in my complex analysis course. I sometimes attempt new derivations in real time to keep it fresh. Today, we got to the point of asking what the reciprocal of $z=x+iy$ is. We said: let $w=a+ib$ and seek solutions of $wz=1$. This gives:

$$ wz = (a+ib)(x+iy) = ax-by+i(bx+ay) = 1+i(0).$$

Equating real and imaginary parts reveals:

$$ ax-by = 1 \qquad \& \qquad bx+ay = 0 $$

which is a system of linear equations with matrix form:

$$ \left[ \begin{array}{cc} x & -y \\ y & x \end{array}\right]\left[ \begin{array}{c} a \\ b \end{array}\right] =\left[ \begin{array}{c} 1 \\ 0 \end{array}\right] $$

We solve for $[a,b]^T$ by multiplying by the inverse of the $2 \times 2$ matrix for which we have the handy-dandy formula $\displaystyle \left[ \begin{array}{cc} x & -y \\ y & x \end{array}\right]^{-1} = \frac{1}{x^2+y^2}\left[ \begin{array}{cc} x & y \\ -y & x \end{array}\right]$. Thus,

$$ \left[ \begin{array}{c} a \\ b \end{array}\right] = \frac{1}{x^2+y^2}\left[ \begin{array}{cc} x & y \\ -y & x \end{array}\right]\left[ \begin{array}{c} 1 \\ 0 \end{array}\right] = \frac{1}{x^2+y^2}\left[ \begin{array}{c} x \\ -y \end{array}\right].$$

Therefore, $a = \frac{x}{x^2+y^2}$ and $b = \frac{-y}{x^2+y^2}$ so

$$\frac{1}{z}= \frac{x-iy}{x^2+y^2}.$$
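(An aside not in the original derivation: the whole computation can be sanity-checked numerically, since inverting the matrix of $z$ should produce exactly the matrix of $1/z$. A minimal sketch with NumPy, using the matrix convention $\left[\begin{smallmatrix} x & -y \\ y & x \end{smallmatrix}\right]$ from the linear system above; the helper name `as_matrix` is chosen here for illustration:)

```python
import numpy as np

def as_matrix(z: complex) -> np.ndarray:
    """Represent x + iy by the 2x2 real matrix [[x, -y], [y, x]]."""
    return np.array([[z.real, -z.imag],
                     [z.imag, z.real]])

z = 2 + 3j

# Inverting the matrix of z gives the matrix of 1/z.
assert np.allclose(np.linalg.inv(as_matrix(z)), as_matrix(1 / z))

# The closed form (x - iy)/(x^2 + y^2) agrees with 1/z.
x, y = z.real, z.imag
assert abs((x - 1j * y) / (x**2 + y**2) - 1 / z) < 1e-12
```

Note that $\det \left[\begin{smallmatrix} x & -y \\ y & x \end{smallmatrix}\right] = x^2 + y^2 = |z|^2$, which is why the matrix is invertible exactly when $z \neq 0$.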

I just found this a nice illustration of JHance's comment. In this routine calculation we stumble upon the $2 \times 2$ representations of both $z=x+iy$ and $1/z$. So perhaps the real question to ask is not when the matrix representation was first given, but rather when the algebra of small matrices was first known. I gather from yoknapatawpha's post of Cayley's 1858 paper that it may go back a few years before that work. Apparently, the term *matrix* (Latin for "womb") is due to Sylvester in 1850, as you may read at the history of matrices. This makes me think there may be some improvement on the Cayley answer.
