Showing the Lie Algebras $\mathfrak{su}(2)$ and $\mathfrak{sl}(2,\mathbb{R})$ are not isomorphic.

I am working through the exercises in Hall’s “Lie Groups, Lie Algebras, and Representations” and can’t complete Exercise 11 of Chapter 3. My aim was to demonstrate that there does not exist a vector space isomorphism $A$ between the two algebras that also preserves the commutator:
$$[AX, AY] = A[X, Y]$$
To this end I computed the following commutation relations on bases for the two algebras.

For the $\mathfrak{su}(2)$ basis matrices $e_1, e_2, e_3$ it holds that
$$[e_1, e_2] = 2e_3 \,\,\,\,\,\, [e_1, e_3] = -2e_2 \,\,\,\,\,\, [e_2, e_3] = 2e_1$$

For the $\mathfrak{sl}(2, \mathbb{R})$ basis matrices $f_1, f_2, f_3$ it holds that
$$[f_1, f_2] = 2f_2 \,\,\,\,\,\, [f_1, f_3] = -2f_3 \,\,\,\,\,\, [f_2, f_3] = f_1$$
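
The post never writes out the basis matrices themselves. One concrete choice that realizes exactly these relations (an assumption for checking purposes, not something fixed by the problem statement) is the anti-Hermitian matrices below for $\mathfrak{su}(2)$ and $f_1 = H$, $f_2 = E$, $f_3 = F$ for $\mathfrak{sl}(2,\mathbb{R})$. A quick sympy script confirms both sets of relations under that choice:

```python
import sympy as sp

I = sp.I

# Assumed basis for su(2): anti-Hermitian, traceless 2x2 matrices
e = [sp.Matrix([[I, 0], [0, -I]]),
     sp.Matrix([[0, 1], [-1, 0]]),
     sp.Matrix([[0, I], [I, 0]])]
# Assumed basis for sl(2, R): f1 = H, f2 = E, f3 = F
f = [sp.Matrix([[1, 0], [0, -1]]),
     sp.Matrix([[0, 1], [0, 0]]),
     sp.Matrix([[0, 0], [1, 0]])]

br = lambda X, Y: X * Y - Y * X  # matrix commutator

# su(2): [e1,e2] = 2e3, [e1,e3] = -2e2, [e2,e3] = 2e1
assert br(e[0], e[1]) == 2 * e[2]
assert br(e[0], e[2]) == -2 * e[1]
assert br(e[1], e[2]) == 2 * e[0]

# sl(2,R): [f1,f2] = 2f2, [f1,f3] = -2f3, [f2,f3] = f1
assert br(f[0], f[1]) == 2 * f[1]
assert br(f[0], f[2]) == -2 * f[2]
assert br(f[1], f[2]) == f[0]
print("all six commutation relations hold")
```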

It is clear that the linear bijection $(e_1, e_2, e_3) \mapsto (f_1, f_2, f_3)$ does not preserve these relations, nor does any permutation of the target matrices. However, I need to show that no invertible matrix satisfies
$$[AX, AY] = A[X, Y]$$
So from there I began to derive equations for the entries of $A$. They are ugly expressions in terms of the sub-determinants of $A$, and from them I can’t see a way to conclude that $A$ cannot exist. Is there an easier way to finish the proof than deriving the equations for $A$?

Note: I have looked up solutions for this problem and the only technique I see hinted at is to consider Killing forms (which have not yet been covered in this book).

Answers

Your approach works without problems if you write the condition $[AX,AY]=A[X,Y]$ for all $X,Y$ in terms of the $9$ coefficients of the matrix $A$. The resulting polynomial equations in these $9$ unknowns over $\mathbb{R}$ quickly yield $\det(A)=0$, a contradiction.

Another elementary argument is the following. $\mathfrak{sl}(2,\mathbb{R})$ has a $2$-dimensional subalgebra, e.g., $\mathfrak{a}=\langle f_1,f_2\rangle$, but $\mathfrak{su}(2)$ has no $2$-dimensional subalgebra. Hence they cannot be isomorphic.

This is a Q&A style answer not meant to be the final answer to the question. It completes the original technique for future readers. Thanks to Dietrich Burde for the motivation to continue with it.

As above, suppose $A : \mathfrak{su}(2) \to \mathfrak{sl}(2, \mathbb{R})$ is an isomorphism. Then
$$[AX, AY] = A[X, Y]$$

Let $A_{ij}$ denote the $j$-th coordinate of $Ae_i$ in the basis $f_1, f_2, f_3$, so that
$$Ae_i = \sum_j A_{ij} f_j$$
We use $[Ae_1, Ae_2] = A[e_1, e_2] = 2Ae_3$ to obtain
$$\begin{vmatrix} A_{12} & A_{22} \\ A_{13} & A_{23} \end{vmatrix}f_1 +
2\begin{vmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{vmatrix}f_2 -
2\begin{vmatrix} A_{11} & A_{21} \\ A_{13} & A_{23} \end{vmatrix}f_3 = 2 (A_{31}f_1 + A_{32}f_2 + A_{33}f_3)$$
(note the coefficient of $f_1$ carries no factor of $2$, since $[f_2, f_3] = f_1$). Comparing coefficients of $f_1, f_2, f_3$ gives
$$\begin{vmatrix} A_{12} & A_{22} \\ A_{13} & A_{23} \end{vmatrix} = 2A_{31}, \qquad
\begin{vmatrix} A_{11} & A_{21} \\ A_{12} & A_{22} \end{vmatrix} = A_{32}, \qquad
\begin{vmatrix} A_{11} & A_{21} \\ A_{13} & A_{23} \end{vmatrix} = -A_{33}$$
These $2 \times 2$ determinants are exactly the minors of the third column of the matrix of $A$ in our bases,
$$\tilde A = \begin{pmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{pmatrix}$$
whose $i$-th column holds the coordinates of $Ae_i$; $A$ is invertible if and only if $\det \tilde A \neq 0$. In the same way, the other two relations $[Ae_1, Ae_3] = -2Ae_2$ and $[Ae_2, Ae_3] = 2Ae_1$ determine the minors of the second and first columns:
$$\begin{vmatrix} A_{12} & A_{32} \\ A_{13} & A_{33} \end{vmatrix} = -2A_{21}, \qquad
\begin{vmatrix} A_{11} & A_{31} \\ A_{12} & A_{32} \end{vmatrix} = -A_{22}, \qquad
\begin{vmatrix} A_{11} & A_{31} \\ A_{13} & A_{33} \end{vmatrix} = A_{23}$$
$$\begin{vmatrix} A_{22} & A_{32} \\ A_{23} & A_{33} \end{vmatrix} = 2A_{11}, \qquad
\begin{vmatrix} A_{21} & A_{31} \\ A_{22} & A_{32} \end{vmatrix} = A_{12}, \qquad
\begin{vmatrix} A_{21} & A_{31} \\ A_{23} & A_{33} \end{vmatrix} = -A_{13}$$
Attaching the cofactor signs $(-1)^{r+c}$ to these minors, the nine equations say precisely that the cofactor matrix of $\tilde A$ is
$$C = \begin{pmatrix} 2A_{11} & 2A_{21} & 2A_{31} \\ A_{13} & A_{23} & A_{33} \\ A_{12} & A_{22} & A_{32} \end{pmatrix}$$
that is, row $1$ of $C$ is twice row $1$ of $\tilde A$, while rows $2$ and $3$ of $C$ are rows $3$ and $2$ of $\tilde A$. Expanding $\det \tilde A$ along the first row gives
$$\det(A) = \sum_{c} \tilde A_{1c} C_{1c} = 2\left(A_{11}^2 + A_{21}^2 + A_{31}^2\right)$$
while expanding the entries of row $2$ against the cofactors of row $3$ computes the determinant of a matrix with two equal rows, which is zero:
$$0 = \sum_{c} \tilde A_{2c} C_{3c} = A_{12}^2 + A_{22}^2 + A_{32}^2$$
Hence $A_{12} = A_{22} = A_{32} = 0$: the images $Ae_1, Ae_2, Ae_3$ have no $f_2$-component, so the second row of $\tilde A$ vanishes and $\det(A) = 0$. Hence, $A$ is not invertible, contradicting it being a vector space isomorphism.
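
Because the index bookkeeping above is fiddly, here is a sympy verification of the two facts the finish relies on: the nine coefficient equations amount to the stated cofactor matrix, and the row-1 and mismatched-row cofactor expansions behave as claimed. The coordinate conventions match the answer above; packaging the relations this way is my own addition, not part of the original solution.

```python
import sympy as sp

# A_ij = j-th coordinate of A e_i in the basis f1, f2, f3 (as above);
# Atilde, whose i-th column holds the coordinates of A e_i, is then A.T
a = sp.symbols('A11:14 A21:24 A31:34', real=True)
A = sp.Matrix(3, 3, list(a))
At = A.T

def br_sl2(x, y):
    # bracket of coordinate vectors, using [f1,f2]=2f2, [f1,f3]=-2f3, [f2,f3]=f1
    c12 = x[0] * y[1] - x[1] * y[0]
    c13 = x[0] * y[2] - x[2] * y[0]
    c23 = x[1] * y[2] - x[2] * y[1]
    return sp.Matrix([c23, 2 * c12, -2 * c13])  # f1-, f2-, f3-components

r = [A.row(i).T for i in range(3)]  # coordinates of A e_1, A e_2, A e_3

# the nine coefficient equations from [A e_i, A e_j] = A [e_i, e_j]
eqs = (list(br_sl2(r[0], r[1]) - 2 * r[2])    # [Ae1, Ae2] =  2 Ae3
     + list(br_sl2(r[0], r[2]) + 2 * r[1])    # [Ae1, Ae3] = -2 Ae2
     + list(br_sl2(r[1], r[2]) - 2 * r[0]))   # [Ae2, Ae3] =  2 Ae1

C = At.adjugate().T  # cofactor matrix of Atilde (adjugate = transposed cofactors)
S = sp.Matrix([[2, 0, 0], [0, 0, 1], [0, 1, 0]])

# every entry of C - S*Atilde is one of the nine equations, up to a factor
gens = {sp.expand(s * g) for g in eqs
        for s in (1, -1, sp.Rational(1, 2), sp.Rational(-1, 2))}
assert all(sp.expand(d) in gens for d in (C - S * At))

# Laplace expansion along row 1 gives det; expanding row 2 against the
# cofactors of row 3 gives the determinant of a matrix with a repeated row: 0
assert sp.expand(At.row(0).dot(C.row(0)) - At.det()) == 0
assert sp.expand(At.row(1).dot(C.row(2))) == 0
print("cofactor bookkeeping verified")
```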

This is a Q&A style answer not meant to be the final answer to the question. It fleshes out one of the techniques suggested by Dietrich Burde for future readers.

Another elementary argument is the following. $\mathfrak{sl}(2,\mathbb{R})$ has a $2$-dimensional subalgebra, e.g., $\mathfrak{a}=\langle f_1,f_2\rangle$, but $\mathfrak{su}(2)$ has no $2$-dimensional subalgebra. Hence they cannot be isomorphic.


$\mathfrak{sl}(2, \mathbb{R})$ has a two-dimensional subalgebra.

Consider matrices of the form $\alpha_1 f_1 + \alpha_2 f_2$. These clearly form a subspace of $\mathfrak{sl}(2, \mathbb{R})$; we must also show that it is closed under the bracket:
$$[\alpha_1 f_1 + \alpha_2 f_2, \beta_1 f_1 + \beta_2 f_2] = 2(\alpha_1\beta_2 - \alpha_2\beta_1)f_2$$
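
As a quick check, sympy reproduces this closure computation with a concrete realization of the basis (again assuming $f_1 = H$, $f_2 = E$; the post itself never fixes the matrices):

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('alpha1 alpha2 beta1 beta2', real=True)

f1 = sp.Matrix([[1, 0], [0, -1]])  # assumed realization: f1 = H
f2 = sp.Matrix([[0, 1], [0, 0]])   # assumed realization: f2 = E

X = a1 * f1 + a2 * f2
Y = b1 * f1 + b2 * f2
comm = X * Y - Y * X

# the bracket lands back in span{f1, f2}
assert (comm - 2 * (a1 * b2 - a2 * b1) * f2).applyfunc(sp.expand) == sp.zeros(2, 2)
print("span{f1, f2} is closed under the bracket")
```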

$\mathfrak{su}(2)$ does not have a two-dimensional subalgebra.

Consider a two-dimensional subspace with basis $g_1, g_2$. Then
$$[\alpha_1 g_1 + \alpha_2 g_2, \beta_1 g_1 + \beta_2 g_2] = (\alpha_1\beta_2 - \alpha_2\beta_1)[g_1, g_2]$$
so it suffices to show that $g_1, g_2$ cannot be chosen such that $[g_1, g_2]$ lies in the span of $g_1, g_2$. To this end let $g_1 = \sum_i a_i e_i$ and $g_2 = \sum_i b_i e_i$. A direct calculation shows that
$$[g_1, g_2] = \begin{vmatrix}
2 e_1 & a_1 & b_1 \\
2 e_2 & a_2 & b_2 \\
2 e_3 & a_3 & b_3
\end{vmatrix}$$
In other words, reading the coefficients $a_i$, $b_i$ as vectors in $\mathbb{R}^3$, the commutator of $g_1$ and $g_2$ is twice their cross product. Since $g_1, g_2$ are linearly independent, their cross product is nonzero and perpendicular to both, so $[g_1, g_2]$ does not lie in their span, and we are done.
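
The cross-product identity itself is easy to confirm symbolically, using the concrete $e_i$ assumed in the earlier check:

```python
import sympy as sp

I = sp.I
# assumed su(2) basis, as before
e = [sp.Matrix([[I, 0], [0, -I]]),
     sp.Matrix([[0, 1], [-1, 0]]),
     sp.Matrix([[0, I], [I, 0]])]

a = sp.Matrix(sp.symbols('a1:4', real=True))
b = sp.Matrix(sp.symbols('b1:4', real=True))

g1 = sum((a[i] * e[i] for i in range(3)), sp.zeros(2, 2))
g2 = sum((b[i] * e[i] for i in range(3)), sp.zeros(2, 2))

comm = g1 * g2 - g2 * g1
cross = a.cross(b)  # ordinary cross product of the coordinate vectors
twice = sum((2 * cross[i] * e[i] for i in range(3)), sp.zeros(2, 2))

# the commutator of g1 and g2 is twice their cross product
assert (comm - twice).applyfunc(sp.expand) == sp.zeros(2, 2)
print("[g1, g2] = 2 (a x b) . e verified")
```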

This is a Q&A style answer not meant to be the final answer to the question. It fleshes out one of the techniques suggested by Mariano Suárez-Alvarez for future readers.

An isomorphism $f:\mathfrak{su}(2) \to \mathfrak{sl}(2,\mathbb{R})$ has to map a diagonalizable element to a diagonalizable element.

It isn’t quite the same technique, but it is inspired by it. Instead I will use the fact that if an isomorphism existed between $\mathfrak{su}(2)$ and $\mathfrak{sl}(2, \mathbb{R})$, then the induced homomorphism on their adjoint representations would have to preserve diagonalizability and eigenvalues. This leads to a contradiction.

The following proposition is inspired by the question “Lie algebra homomorphisms preserve Jordan form”:

Suppose the Lie algebras $\mathfrak{g}, \mathfrak{h}$ are isomorphic, and denote the isomorphism by $\phi : \mathfrak{g} \to \mathfrak{h}$. Then for every diagonalizable $ad_X \in ad_\mathfrak{g}$, the image $\phi^*(ad_X) \in ad_\mathfrak{h}$ is diagonalizable (where $\phi^*$ is the induced homomorphism between the adjoint representations). In particular, if $\lambda_i$, $Y_i$ is an eigenvalue, eigenvector pair of $ad_X$, then $\lambda_i$, $\phi(Y_i)$ is an eigenvalue, eigenvector pair of $ad_{\phi(X)}$.

Suppose that $ad_X$ is diagonalizable with eigenvalues $\lambda_i$ and eigenvectors $Y_i$. Then
$$ad_X(Y_i) = \lambda_i Y_i$$
We want to show that $\phi(Y_i)$ is an eigenvector of $\phi^*(ad_X)$.

\begin{eqnarray*}
\phi^*(ad_X)(\phi(Y_i)) &=& ad_{\phi(X)}(\phi(Y_i)) \\
&=& [\phi(X), \phi(Y_i)] \\
&=& \phi([X, Y_i]) \\
&=& \phi(ad_X(Y_i)) \\
&=& \lambda_i\phi(Y_i) \\
\end{eqnarray*}
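
There is of course no isomorphism $\mathfrak{su}(2) \to \mathfrak{sl}(2,\mathbb{R})$ to test this on, but the proposition can be sanity-checked on an isomorphism we do have: conjugation by an element of $SL(2,\mathbb{R})$ is a Lie algebra automorphism of $\mathfrak{sl}(2,\mathbb{R})$. The matrices below are my own example, not part of the original answer:

```python
import sympy as sp

f1 = sp.Matrix([[1, 0], [0, -1]])  # assumed realization: f1 = H
f2 = sp.Matrix([[0, 1], [0, 0]])   # assumed realization: f2 = E

g = sp.Matrix([[1, 1], [0, 1]])    # det g = 1, so g lies in SL(2, R)
phi = lambda X: g * X * g.inv()    # inner automorphism X -> g X g^{-1}
br = lambda X, Y: X * Y - Y * X

# f2 is an eigenvector of ad_{f1} with eigenvalue 2: [f1, f2] = 2 f2 ...
assert br(f1, f2) == 2 * f2
# ... and phi(f2) is an eigenvector of ad_{phi(f1)} with the same eigenvalue
assert br(phi(f1), phi(f2)) == 2 * phi(f2)
print("the eigenvalue 2 survives the automorphism, as the proposition predicts")
```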

Now using the commutation relations stated in the problem we can calculate the adjoint representation of $\mathfrak{su}(2)$:
$$ ad_{e_1} =
\begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & -2 \\
0 & 2 & 0
\end{bmatrix} \,\,\,\,\,
ad_{e_2} =
\begin{bmatrix}
0 & 0 & 2 \\
0 & 0 & 0 \\
-2 & 0 & 0
\end{bmatrix}\,\,\,\,\,
ad_{e_3} =
\begin{bmatrix}
0 & -2 & 0 \\
2 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}$$

For $\mathfrak{sl}(2, \mathbb{R})$ we find:
$$ ad_{f_1} =
\begin{bmatrix}
0 & 0 & 0 \\
0 & 2 & 0 \\
0 & 0 & -2
\end{bmatrix} \,\,\,\,\,
ad_{f_2} =
\begin{bmatrix}
0 & 0 & 1 \\
-2 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\,\,\,\,\,
ad_{f_3} =
\begin{bmatrix}
0 & -1 & 0 \\
0 & 0 & 0 \\
2 & 0 & 0
\end{bmatrix}$$
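
These six matrices are mechanical to produce: the $j$-th column of $ad_X$ holds the coordinates of $[X, x_j]$. A short script (my own packaging of the structure constants above) regenerates and checks all of them:

```python
import sympy as sp

# structure constants: table[(i, j)] = coordinates of [x_i, x_j] for i < j
su2 = {(0, 1): (0, 0, 2), (0, 2): (0, -2, 0), (1, 2): (2, 0, 0)}
sl2 = {(0, 1): (0, 2, 0), (0, 2): (0, 0, -2), (1, 2): (1, 0, 0)}

def bracket(tbl, i, j):
    if i == j:
        return (0, 0, 0)
    if (i, j) in tbl:
        return tbl[(i, j)]
    return tuple(-c for c in tbl[(j, i)])  # antisymmetry

def ad(tbl, i):
    # column j of ad_{x_i} = coordinates of [x_i, x_j]
    return sp.Matrix.hstack(*[sp.Matrix(bracket(tbl, i, j)) for j in range(3)])

ad_e = [ad(su2, i) for i in range(3)]
ad_f = [ad(sl2, i) for i in range(3)]

assert ad_e[0] == sp.Matrix([[0, 0, 0], [0, 0, -2], [0, 2, 0]])
assert ad_e[1] == sp.Matrix([[0, 0, 2], [0, 0, 0], [-2, 0, 0]])
assert ad_e[2] == sp.Matrix([[0, -2, 0], [2, 0, 0], [0, 0, 0]])
assert ad_f[0] == sp.diag(0, 2, -2)
assert ad_f[1] == sp.Matrix([[0, 0, 1], [-2, 0, 0], [0, 0, 0]])
assert ad_f[2] == sp.Matrix([[0, -1, 0], [0, 0, 0], [2, 0, 0]])
assert sorted(ad_f[0].eigenvals()) == [-2, 0, 2]
print("adjoint matrices match the ones displayed above")
```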

Suppose $\phi$ is an isomorphism from $\mathfrak{sl}(2, \mathbb{R})$ to $\mathfrak{su}(2)$ and write
$$\phi(f_1) = a_1 e_1 + a_2 e_2 + a_3 e_3$$
so that $ad_{\phi(f_1)} = a_1\, ad_{e_1} + a_2\, ad_{e_2} + a_3\, ad_{e_3}$. Any real linear combination of the matrices $ad_{e_i}$ is skew-symmetric and therefore has purely imaginary eigenvalues. On the other hand, the matrix $ad_{f_1}$ has eigenvalues $0, -2, 2$. Consider an eigenvalue, eigenvector pair $-2$, $v$ of $ad_{f_1}$. By the proposition, $\phi(v)$ (nonzero, since $\phi$ is injective) would have to be an eigenvector of $ad_{\phi(f_1)}$ with the real nonzero eigenvalue $-2$, which is impossible. So we have a contradiction.
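
The skew-symmetry claim is also easy to verify directly: a generic real combination of the $ad_{e_i}$ is skew-symmetric, and its eigenvalues work out to $0$ and $\pm 2i\sqrt{a_1^2 + a_2^2 + a_3^2}$, so a real nonzero eigenvalue such as $-2$ can never occur:

```python
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3', real=True)

ad_e1 = sp.Matrix([[0, 0, 0], [0, 0, -2], [0, 2, 0]])
ad_e2 = sp.Matrix([[0, 0, 2], [0, 0, 0], [-2, 0, 0]])
ad_e3 = sp.Matrix([[0, -2, 0], [2, 0, 0], [0, 0, 0]])

M = a1 * ad_e1 + a2 * ad_e2 + a3 * ad_e3
assert M + M.T == sp.zeros(3, 3)  # skew-symmetric for all real a_i

# the spectrum is {0, +/- 2*i*sqrt(a1^2 + a2^2 + a3^2)}: purely imaginary
print(M.eigenvals())
```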