First, what I know is that given the basis:

$$e = \left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right),f = \left(\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right),h = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)$$

I want to find the ‘structure constants’, and furthermore to verify that the adjoint representation of $sl(2,F)$ with respect to the given basis yields $$ad \, h = \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{array}\right)$$ and similarly to find the matrices representing $ad \, e$ and $ad \, f$.

Now I know the structure constants (at least, the answer given) are $[e,f] = h$, $[e,h] = -2e$, and $[f,h] = 2f$. If I look at the structure constant formula $[x_i,x_j] = \sum_{k = 1}^{3} a_{ij}^kx_k$, where I let $x_1 = e$, $x_2 = f$, $x_3 = h$ so $i,j \in \{1,2,3\}$, I get things such as $(ad \, x_3)(x_1) = a_{31}^1x_1+a_{31}^2x_2+ a_{31}^3x_3 = 2x_1$.

So we must have $a_{31}^1 = 2$ (and $a_{31}^2 = a_{31}^3 = 0$). But then, since the structure constants are of the form $a_{ij}^k$, how come we come up with $2$ instead of $[h,e]=2e$? Furthermore, I also see that $(ad \, h)(h) = [h,h]=0$ and $(ad \, h)(f) = -2f$, so the $0, 2, -2$ that appear in the matrix $ad \, h$ all show up, but how is it that they are arranged as they are?

As a bonus, what is the major point of these “structure constants”? Why are they useful, especially as I can seemingly just calculate the Lie bracket to figure them out and don’t need to work out some summation?

Thanks for any help.

The adjoint ${\rm ad}\,h$ is the linear map $x\mapsto[h,x]$, abbreviated $[h,-]$. To determine the matrix of this linear map, we compute its effect on the basis vectors $e,f,h$:

$$\color{Red}{[h,}e\color{Red}{]}=\color{Blue}{2}e+\color{Blue}{0}f+\color{Blue}{0}h$$

$$\color{Red}{[h,}f\color{Red}{]}=\color{Blue}{0}e\color{Blue}{-2}f+\color{Blue}{0}h \tag{$\circ$}$$

$$\color{Red}{[h,}h\color{Red}{]}=\color{Blue}{0}e+\color{Blue}{0}f+\color{Blue}{0}h $$

Therefore the matrix of this linear map is given by

$${\rm ad}\,h=\begin{pmatrix}2 & \,0 & 0 \\ 0 & -2 & 0 \\ 0 & \,0 & 0\end{pmatrix} $$
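The calculation above can be checked mechanically. Here is a minimal sketch (not part of the original answer) using NumPy: take the bracket of $h$ with each basis matrix, express the result in the basis $e,f,h$ by solving a small linear system, and assemble the coefficient vectors as columns.

```python
import numpy as np

# Basis of sl(2) from the question, as 2x2 matrices.
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [e, f, h]

def bracket(x, y):
    """Lie bracket [x, y] = xy - yx of two matrices."""
    return x @ y - y @ x

def coords(m):
    """Coordinates of the matrix m in the basis e, f, h
    (flatten each basis matrix into a column and solve)."""
    A = np.column_stack([b.reshape(-1) for b in basis])
    c, *_ = np.linalg.lstsq(A, m.reshape(-1), rcond=None)
    return c

def ad(x):
    """Matrix of ad x = [x, -] with respect to the basis e, f, h:
    column j holds the coordinates of [x, basis_j]."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

print(ad(h))  # diag(2, -2, 0), matching the matrix above
```

The same `ad` function also produces the matrices of ${\rm ad}\,e$ and ${\rm ad}\,f$ asked about in the question.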

> how come we come up with $2$ instead of $[h,e]=2e$?

Constants and equations are different things. If we compute the Lie bracket of two basis vectors, the result is expressible as a linear combination of basis vectors. “Structure constants” refers to the coefficients of the basis vectors in such sums.

The coordinates of the vector $(1,0,0)\in\Bbb C^3$ are not $(1,0,0)$, the coordinates are the actual scalars $1,0,0$ in that order. Similarly the structure constants that appear when writing $[h,e]$ as a linear combination of $e,f,h$ are $2,0,0$ in that order.

> what is the major point of these “structure constants”? Why are they useful, especially as I can seemingly just calculate the Lie bracket to figure them out and don’t need to work out some summation?

Do you really want to compute $(\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix}) (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) - (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) (\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix})$ every single time you need $[h,e]$? That’s a lot of superfluous matrix multiplication when all you have to do instead is memorize the simple fact that $[h,e]=2e$. What if the elements of the Lie algebra are $8\times8$ matrices: would you rather compute every Lie bracket over and over again by hand for the rest of your life, or compute each one once and be done with it? And what if the elements of the Lie algebra aren’t matrices at all, just abstract vectors: in what sense are you “calculating” the Lie brackets then?

Not to mention, if you want to write the product of basis vectors as a linear combination of the basis vectors, then yes, you *do* need to “figure out some summation” one way or another. One might as well figure it out once, write down the appropriate coefficients of the basis vectors (the structure constants), and then reuse that information whenever it comes up again.

Suppose $R$ is a *not necessarily associative or unital* $S$-algebra (I am thinking in particular of $S$ a commutative domain or field, like $S=\Bbb Z,\Bbb Q,\Bbb R,\Bbb C$, but these facts are more general) which has basis elements $r_1,\cdots,r_n$. That is, every element is uniquely expressible as a sum $s_1r_1+\cdots+s_nr_n$ for scalars $s_1,\cdots,s_n\in S$. Then for each $1\le i,j\le n$ we can write each product $r_ir_j$ as an $S$-linear combination of basis elements, say as $r_ir_j=\sum_{k=1}^n c_{ij}^k r_k$. These **structure constants** $c_{ij}^k$ completely determine the structure of the ring. All you would need to do is write down the structure constants for another person to be able to compute anything in the ring. They would know every element is a combination of basis elements, and they’d be able to compute the product of two sums of basis elements using distributivity and these structure constants.
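To make the "another person could compute anything" point concrete, here is a small sketch (my own illustration, not from the original answer): given *only* a table of structure constants for $sl(2)$ in the basis $(e,f,h)$, and no matrices at all, bilinearity lets us bracket arbitrary elements.

```python
import numpy as np

# Structure constants of sl(2) in the basis (e, f, h), read off from
# [e,f] = h, [h,e] = 2e, [h,f] = -2f, plus antisymmetry ([x,x] = 0).
# c[i, j] holds the coordinates of [x_i, x_j] in the basis.
n = 3
c = np.zeros((n, n, n))
c[0, 1] = (0, 0, 1)    # [e,f] =  h
c[1, 0] = (0, 0, -1)   # [f,e] = -h
c[2, 0] = (2, 0, 0)    # [h,e] =  2e
c[0, 2] = (-2, 0, 0)   # [e,h] = -2e
c[2, 1] = (0, -2, 0)   # [h,f] = -2f
c[1, 2] = (0, 2, 0)    # [f,h] =  2f

def bracket(u, v):
    """Bracket of coordinate vectors u, v using only bilinearity:
    [sum u_i x_i, sum v_j x_j] = sum_{i,j} u_i v_j [x_i, x_j]."""
    return sum(u[i] * v[j] * c[i, j] for i in range(n) for j in range(n))

# [e + h, f] = [e,f] + [h,f] = h - 2f, i.e. coordinates (0, -2, 1):
print(bracket(np.array([1, 0, 1]), np.array([0, 1, 0])))
```

No matrix representation is needed anywhere: the table `c` is the whole algebra.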

For example, suppose I told you the structure constants of some nonassociative $\Bbb Z$-algebra I have, every element of which is $ax+by$ for some $a,b\in\Bbb Z$, are given by the following equations:

$$\begin{array}{ll} xx=x+y & xy=x \\ yx=y & yy=x-y \end{array}$$

If you want to see the constants more clearly, write it like this:

$$\begin{array}{ll} xx=\color{Blue}{1}x+\color{Blue}{1}y & xy=\color{Blue}{1}x+\color{Blue}{0}y \\ yx=\color{Blue}{0}x+\color{Blue}{1}y & yy=\color{Blue}{1}x\color{Blue}{-1}y \end{array}$$

Notice how this time the product of two basis elements can be a nontrivial combination of basis elements, instead of just a single term (at most) as in our nice $e,f,h$ situation. The act of rewriting the product of two basis elements as a linear combination of basis elements is the act of using structure constants. The constants and the equations are literally the same information. Indeed, writing a linear map as a matrix is the same idea: a priori you have a bunch of equations describing how applying the operator to a basis element yields something that can be written as a sum of basis elements, and then you collect all of the coefficients together in a matrix.

Can you use these equations to compute $(3x+2y)(2x-3y)$ as $ax+by$ for some $a,b\in\Bbb Z$? Sure you can; distribute and then use the equations. If I had omitted any of the four equations, would you still be able to calculate the product? Nope. So you see these structure constants are necessary and sufficient for doing calculations in the ring. In particular this applies to Lie algebras, since they are nonassociative, nonunital algebras over a field (the Lie bracket is the “multiplication” in the ring). If you wanted to store a Lie algebra in a computer and then query it later to do Lie bracket calculations, you would store the structure constants and then program the computer to distribute and evaluate products of basis elements using the structure constants.
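That storage-and-query idea can be sketched in a few lines of Python (my own illustration): encode the four equations of the toy $\Bbb Z$-algebra as a table, then multiply by distributing.

```python
# Structure constants of the toy Z-algebra above, basis (x, y):
# c[(i, j)] = coefficients of the basis elements in (basis_i)*(basis_j).
c = {
    (0, 0): (1, 1),   # xx = x + y
    (0, 1): (1, 0),   # xy = x
    (1, 0): (0, 1),   # yx = y
    (1, 1): (1, -1),  # yy = x - y
}

def multiply(u, v):
    """Product of u = u0*x + u1*y and v = v0*x + v1*y:
    distribute, then rewrite each basis product via the table."""
    w = [0, 0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                w[k] += u[i] * v[j] * c[(i, j)][k]
    return w

print(multiply([3, 2], [2, -3]))  # [-9, 16], i.e. -9x + 16y
```

Working it by hand gives the same thing: $(3x+2y)(2x-3y) = 6xx - 9xy + 4yx - 6yy = 6(x+y) - 9x + 4y - 6(x-y) = -9x + 16y$.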
