First, what I know is that we are given the basis:

$$e = \left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right),f = \left(\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right),h = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)$$

I want to find the ‘structure constants’, and furthermore to show that the adjoint representation of $\mathfrak{sl}(2,F)$ with respect to the given basis yields $$\mathrm{ad} \, h = \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{array}\right)$$ and then similarly to find the matrices representing $\mathrm{ad} \, e$ and $\mathrm{ad} \, f$.

Now I know that the structure constants (at least, the answer given) are $[e,f] = h$, $[e,h] = -2e$, and $[f,h] = 2f$. Looking at the structure-constant formula $[x_i,x_j] = \sum_{k = 1}^{3} a_{ij}^kx_k$, where I let $x_1 = e$, $x_2 = f$, $x_3 = h$, so $i,j \in \{1,2,3\}$, I get things such as $(\mathrm{ad} \, x_3)(x_1) = a_{31}^1x_1+a_{31}^2x_2+ a_{31}^3x_3 = 2x_1$.

So we can only have $a_{31}^1 = 2$. But then, since the structure constants are of the form $a_{ij}^k$, how is it that we come up with the number $2$ rather than the equation $[h,e]=2e$? Furthermore, I also see that $(\mathrm{ad} \, h)(h) = [h,h]=0$ and $(\mathrm{ad} \, h)(f) = -2f$, so the $0, 2, -2$ that appear in the matrix $\mathrm{ad} \, h$ do all show up — but how is it that they are arranged as they are?

As a bonus: what are the major points of these “structure constants”? Why are they useful, especially since I can seemingly just calculate the Lie bracket to figure them out and don’t need to work out some summation?

Thanks for any help.

The adjoint ${\rm ad}\,h$ is the linear map $x\mapsto[h,x]$, abbreviated $[h,-]$. To determine the matrix of this linear map, we compute its effect on the basis vectors $e,f,h$:

$$\color{Red}{[h,}e\color{Red}{]}=\color{Blue}{2}e+\color{Blue}{0}f+\color{Blue}{0}h$$

$$\color{Red}{[h,}f\color{Red}{]}=\color{Blue}{0}e\color{Blue}{-2}f+\color{Blue}{0}h \tag{$\circ$}$$

$$\color{Red}{[h,}h\color{Red}{]}=\color{Blue}{0}e+\color{Blue}{0}f+\color{Blue}{0}h $$

Therefore the matrix of this linear map is given by

$${\rm ad}\,h=\begin{pmatrix}2 & \,0 & 0 \\ 0 & -2 & 0 \\ 0 & \,0 & 0\end{pmatrix} $$
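As a sanity check, the computation above can be automated. The following is a minimal NumPy sketch (the helper names `bracket`, `coords`, and `ad` are my own, not from any source): it forms each bracket as a matrix commutator, reads off the coordinates in the basis $e,f,h$, and assembles them as the columns of the adjoint matrix.

```python
import numpy as np

# Standard basis of sl(2), as in the question.
e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [e, f, h]

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba."""
    return a @ b - b @ a

def coords(m):
    """Coordinates of a traceless 2x2 matrix [[a, b], [c, -a]] in the
    basis e, f, h: such a matrix equals b*e + c*f + a*h."""
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(x):
    """Matrix of ad x = [x, -] with respect to the basis e, f, h:
    column j holds the coordinates of [x, basis_j]."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

print(ad(h))  # diag(2, -2, 0), matching the matrix displayed above
print(ad(e))
print(ad(f))
```

Running this also answers the question's request for $\mathrm{ad}\,e$ and $\mathrm{ad}\,f$: their columns are exactly the coordinate vectors of $[e,-]$ and $[f,-]$ applied to each basis element.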

> How come we come up with $2$ instead of $[h,e]=2e$?

Constants and equations are different things. If we compute the Lie bracket of two basis vectors, the result is expressible as a linear combination of basis vectors. “Structure constants” are the coefficients of the basis vectors in such sums.

The coordinates of the vector $(1,0,0)\in\Bbb C^3$ are not $(1,0,0)$, the coordinates are the actual scalars $1,0,0$ in that order. Similarly the structure constants that appear when writing $[h,e]$ as a linear combination of $e,f,h$ are $2,0,0$ in that order.

> What are the major points of these “structure constants”? Why are they useful, especially as I can seemingly just calculate the Lie bracket to figure them out and don’t need to work out some summation?

Do you really want to compute $(\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix}) (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) - (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) (\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix})$ every single time you need $[h,e]$? That’s a lot of superfluous matrix multiplication when all you’d have to do instead is memorize the simple fact that $[h,e]=2e$. What if the elements of the Lie algebra are $8\times8$ matrices: would you rather compute every Lie bracket over and over again by hand for the rest of your life, or compute each one once and be done with it? And what if the elements of the Lie algebra aren’t matrices at all, just abstract vectors — in what sense are you “calculating” the Lie brackets then?

Not to mention, if you want to write the product of basis vectors as a linear combination of the basis vectors, then yes, you *do* need to “figure out some summation” one way or another. One might as well figure it out once, write down the appropriate coefficients of the basis vectors (the structure constants), and then reuse that information whenever it comes up again.

Suppose $R$ is a *not necessarily associative or unital* $S$-algebra (I am thinking in particular of $S$ a commutative domain or field, such as $S=\Bbb Z,\Bbb Q,\Bbb R,\Bbb C$, but these facts are more general) with basis elements $r_1,\cdots,r_n$. That is, every element is uniquely expressible as a sum $s_1r_1+\cdots+s_nr_n$ for scalars $s_1,\cdots,s_n\in S$. Then for each $1\le i,j\le n$ we can write the product $r_ir_j$ as an $S$-linear combination of basis elements, say $r_ir_j=\sum_{k=1}^n c_{ij}^k r_k$. These **structure constants** $c_{ij}^k$ completely determine the structure of the ring: writing down the structure constants is all you would need to do for another person to be able to compute anything in the ring. They would know every element is a combination of basis elements, and they could compute the product of two such combinations using distributivity and these structure constants.

For example, suppose I told you the structure constants of some nonassociative $\Bbb Z$-algebra I have, every element of which is $ax+by$ for some $a,b\in\Bbb Z$, are given by the following equations:

$$\begin{array}{ll} xx=x+y & xy=x \\ yx=y & yy=x-y \end{array}$$

If you want to see the constants more clearly, write it like this:

$$\begin{array}{ll} xx=\color{Blue}{1}x+\color{Blue}{1}y & xy=\color{Blue}{1}x+\color{Blue}{0}y \\ yx=\color{Blue}{0}x+\color{Blue}{1}y & yy=\color{Blue}{1}x\color{Blue}{-1}y \end{array}$$

Notice how this time the product of two basis elements can be a nontrivial combination of basis elements, instead of just a single term (at most) as in our nice $e,f,h$ situation. The act of rewriting the product of two basis elements as a linear combination of basis elements is the act of using structure constants. The constants and the equations are literally the same information. Indeed, writing a linear map as a matrix is the same idea: a priori you have a bunch of equations describing how applying the operator to a basis element yields something that can be written as a sum of basis elements, and then you collect all of the coefficients together in a matrix.

Can you use these equations to compute $(3x+2y)(2x-3y)$ as $ax+by$ for some $a,b\in\Bbb Z$? Sure you can: distribute and then use the equations. If I had omitted any of the four equations, would you still be able to calculate the product? No. So you see these structure constants are necessary and sufficient for doing calculations in the ring. In particular this applies to Lie algebras, since they are nonassociative, nonunital algebras over a field (the Lie bracket is the “multiplication” in the ring). If you wanted to store a Lie algebra in a computer and then query it later to do Lie-bracket calculations, you would store the structure constants and program the computer to distribute and then evaluate products of basis elements using them.
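That storage scheme can be sketched in a few lines. Below, the table `c` holds the structure constants of the toy $\Bbb Z$-algebra above (basis index $0$ for $x$, $1$ for $y$; the function name `multiply` is my own), and the product is computed exactly as described: distribute, then replace each $r_ir_j$ by its linear combination.

```python
# Structure constants c[(i, j)] = coefficients (of x, of y) in r_i * r_j,
# taken from the four equations in the text.
c = {
    (0, 0): (1, 1),   # xx = x + y
    (0, 1): (1, 0),   # xy = x
    (1, 0): (0, 1),   # yx = y
    (1, 1): (1, -1),  # yy = x - y
}

def multiply(u, v):
    """Product of u = (a, b), meaning a*x + b*y, with v = (a2, b2):
    distribute, then evaluate each basis product via the structure constants."""
    out = [0, 0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                out[k] += u[i] * v[j] * c[(i, j)][k]
    return tuple(out)

print(multiply((3, 2), (2, -3)))  # (-9, 16), i.e. -9x + 16y
```

This answers the exercise in the previous paragraph: $(3x+2y)(2x-3y) = 6xx - 9xy + 4yx - 6yy = -9x + 16y$.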
