I am horribly confused by the cluster of terminology and operations surrounding “change of basis” operations. Finding alternate references on this topic only seems to add to the confusion, as there doesn’t appear to be a consistent approach to defining and notating these operations. Perhaps someone will be able to clarify just one simple aspect of this, which is as follows.

Let $u = \{u_1, \dots, u_n\}$ and $w = \{w_1, \dots, w_n\}$ be bases for a vector space $V$. Then, necessarily, there exists a unique linear operator $T:V \rightarrow V$ such that $T(u_i) = w_i$. Now, the most natural thing in the world to call the matrix of this operator is the **change of basis matrix from $u$ to $w$**. Give this operator a vector in $u$ and it spits out a vector in $w$.

Now, whether it is correct I don’t know, but I’ve seen the matrix of this operator called the change of basis matrix from $w$ to $u$, reversing the target and source bases. This latter interpretation makes no sense to me, because the operator takes vectors in $u$ and produces vectors in $w$! I’ve seen this interpretation in more than one place, so it can’t just be a fluke. So… which is it?


The “change of basis matrix from $\beta$ to $\gamma$” or “change of coordinates matrix from $\beta$-coordinates to $\gamma$-coordinates” is the matrix $A$ with the property that for every vector $v\in V$,

$$A[v]_{\beta} = [v]_{\gamma},$$

where $[x]_{\alpha}$ is the coordinate vector of $x$ relative to $\alpha$. This matrix $A$ is obtained by considering the coordinate matrix of the *identity linear transformation*, from $V$-with-basis-$\beta$ to $V$-with-basis-$\gamma$; i.e., $[\mathrm{I}_V]_{\beta}^{\gamma}$.
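As a concrete check of the defining property $A[v]_{\beta} = [v]_{\gamma}$, here is a minimal NumPy sketch; the two bases of $\mathbb{R}^2$ are made up for illustration.

```python
import numpy as np

# Made-up bases of R^2 (columns are the basis vectors).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # beta  = {(1,0), (1,1)}
G = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # gamma = {(2,0), (0,1)}

# Since v = B @ [v]_beta and v = G @ [v]_gamma, the change-of-coordinates
# matrix from beta to gamma is A = G^{-1} B:
#   A [v]_beta = G^{-1} B [v]_beta = G^{-1} v = [v]_gamma.
A = np.linalg.solve(G, B)

v = np.array([3.0, 2.0])            # a vector, in standard coordinates
v_beta = np.linalg.solve(B, v)      # [v]_beta
v_gamma = np.linalg.solve(G, v)     # [v]_gamma

assert np.allclose(A @ v_beta, v_gamma)
```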

Now, you say you want to take $T\colon V\to V$ that sends $v_i$ to $w_i$, and consider “the matrix of this linear transformation”. *Which* matrix? With respect to *what* basis? The matrix of $T$ relative to $\beta$ and $\gamma$, $[T]_{\beta}^{\gamma}$, is just the identity matrix. So not that one.

Now, if you take $[T]_{\beta}^{\beta}$; i.e., you express the vectors $w_i$ in terms of the vectors $v_i$, what do you get? You get the matrix that takes $[x]_{\gamma}$ and gives you $[x]_{\beta}$; that is, you get the change-of-coordinates matrix from $\gamma$ to $\beta$. To see this, note, for example, that $[w_1]_{\gamma} = (1,0,0,\ldots,0)^t$, so $[T]_{\beta}^{\beta}[w_1]_\gamma$ is the first column of $[T]_{\beta}^{\beta}$, which is how you express $w_1$ in terms of $\beta$.

Which is why it would be the “change of basis matrix *from* $\gamma$ *to* $\beta$”. Because, as Qiaochu mentions in the answer I linked to, the “translation” of coordinate vectors achieved by this matrix goes “the other way”: it translates from $\gamma$-coordinates to $\beta$-coordinates, even though you “defined” $T$ as “going” from $\beta$ to $\gamma$.
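A small NumPy sketch of this point, with made-up bases of $\mathbb{R}^2$: the matrix whose columns express the $w_i$ in terms of $\beta$ (that is, $[T]_{\beta}^{\beta}$) translates $\gamma$-coordinates into $\beta$-coordinates.

```python
import numpy as np

# Made-up bases of R^2 (columns are the basis vectors).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # beta  = {v_1, v_2}
G = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # gamma = {w_1, w_2}

# [T]_beta^beta: column i holds the beta-coordinates of T(v_i) = w_i,
# i.e. each w_i expressed in terms of the v_j.
T_bb = np.linalg.solve(B, G)

# It carries gamma-coordinates to beta-coordinates:
x = np.array([1.0, -2.0])
x_beta = np.linalg.solve(B, x)    # [x]_beta
x_gamma = np.linalg.solve(G, x)   # [x]_gamma
assert np.allclose(T_bb @ x_gamma, x_beta)
```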

Maybe looking at the one-dimensional case will clarify the point of confusion. Both “seconds” and “minutes” are units of time, and each can be taken as a basis of a one-dimensional real vector space representing time.

If I ask, what is the factor that takes me from the basis {seconds} to the basis {minutes}? Then the answer is 60. (The (1 by 1) matrix consisting of the number 60 is the $T$ of the question.)

However, if I ask, 120 seconds is equal to how many minutes? Then the factor I need to apply is 1/60.

In either case, I am “going from seconds to minutes”, but in the first case I am changing the *basis elements themselves*, from the basis {seconds} to the basis {minutes}, while in the other case, I am converting a *fixed unit of time* from seconds to minutes. The matrices in the two cases are inverses of each other.
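The two procedures can be sketched in a couple of lines, using the numbers from this example:

```python
# Changing the basis element itself: T sends the basis vector
# "1 second" to "1 minute" (= 60 seconds), so as a 1x1 matrix T = 60.
T = 60.0

# Converting a fixed amount of time from second-coordinates to
# minute-coordinates uses the inverse factor.
coord_change = 1 / T

seconds = 120.0
minutes = coord_change * seconds   # 120 seconds = 2 minutes
assert minutes == 2.0
assert T * coord_change == 1.0     # the two matrices are inverses
```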

The difference in terminology depends on which of these procedures you think should be called the “change of basis matrix”.

If $(u_1,\ldots,u_n)$ and $(w_1,\ldots,w_n)$ are bases of $V$ then there is indeed a unique linear transformation $T:\ V\to V$ such that $T(u_i)=w_i$ $(1\leq i\leq n)$, but this transformation is of no help in understanding what is going on here.

What is at stake is the following: Any vector $x\in V$ has some coordinates $(x_1,\ldots, x_n)$ with respect to the “old” basis $(u_1,\ldots,u_n)$ and another set of coordinates $(\bar x_1,\ldots, \bar x_n)$ with respect to the “new” basis $(w_1,\ldots,w_n)$. The vectors $x$ do not move, but you want to know the connection between the $x_k$ and the $\bar x_i$.

The data about this coordinate transformation are stored in a matrix $T=(t_{ik})_{1\leq i\leq n,\ 1\leq k\leq n}$ in the following way: any “new” basis vector $w_i$ is a linear combination of the old basis vectors $u_k$, so there are (given) numbers $t_{ki}$ such that

$$w_i=\sum_{k=1}^n t_{ki} u_k\ .$$

This is to say that in the columns of $T$ we see the “old coordinates” of the “new” basis vectors. Now any vector $x\in V$ has “new coordinates” $\bar x_i$. Writing this out we have

$$x=\sum_{i=1}^n \bar x_i w_i= \sum_{i,k} \bar x_i t_{ki} u_k= \sum_{k=1}^n \Bigl(\sum_{i=1}^n {t_{ki} \bar x_i}\Bigr) u_k\ ,$$

and we see that the “old coordinates” $x_k$ of the same vector $x\in V$ are given by

$$x_k\ =\ \sum_{i=1}^n t_{ki}\bar x_i\ .$$

If we write our “coordinate vectors” as column vectors we therefore have the formula $x=T\ \bar x$.
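A quick NumPy check of the formula $x = T\,\bar x$, taking the “old” basis to be the standard basis of $\mathbb{R}^3$ (so a vector’s old coordinates are just its entries) and a made-up invertible $T$:

```python
import numpy as np

# Columns of T hold the "old" (here: standard) coordinates of the
# new basis vectors w_1, w_2, w_3.
T = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

x_new = np.array([2.0, -1.0, 3.0])   # new coordinates (x-bar)
x_old = T @ x_new                    # old coordinates: x = T x-bar

# Cross-check: recovering the new coordinates inverts T.
assert np.allclose(np.linalg.solve(T, x_old), x_new)
```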

One has to get accustomed to the fact that the symbol $x$ denotes at the same time the “geometric object” $x$ and its “coordinate vector” with respect to the “old basis”.
