Prove that if $S$ is a change of basis matrix, its columns are a basis for $\mathbb{R}^n$

Let’s say we have a basis $B$ of $\mathbb{R}^n$ consisting of vectors $\vec{v}_1$ through $\vec{v}_n$, and some other basis $C$ of $\mathbb{R}^n$. Then, would $[\vec{v}_1]_C$ through $[\vec{v}_n]_C$ be a basis for $\mathbb{R}^n$ as well? It seems obvious, but I am not sure how to go about this.

I am trying to prove this by creating a matrix $S$ that is a change of basis matrix, so it has $[\vec{v}_1]_C$ through $[\vec{v}_n]_C$ as its columns. So I get $CS = B$. I know that the kernels of both $C$ and $B$ are zero, since they are square matrices whose columns are linearly independent (being bases), but I am not sure what that tells me about $S$. Maybe there is a way to show that a matrix with linearly independent columns ($C$) times a matrix whose columns are not linearly independent ($S$) can never yield a matrix with linearly independent columns ($B$).
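As a quick sanity check (not a proof), here is a small numpy sketch of this setup; the two bases below are random matrices (hence almost surely invertible) chosen purely for illustration:

```python
import numpy as np

# Throwaway numerical illustration of the setup above (the bases are arbitrary).
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))   # columns: the basis vectors v_1, ..., v_n
C = rng.standard_normal((n, n))   # columns: the vectors of the other basis

# Column i of S is [v_i]_C, i.e. the solution x of C x = v_i.
S = np.linalg.solve(C, B)

print(np.allclose(C @ S, B))      # True: the relation C S = B holds
```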


Yes, the coordinate vectors of $\mathbf{v}_1,\ldots,\mathbf{v}_n$ with respect to the basis $C$ will be a basis for $\mathbb{R}^n$.

Note that you are abusing notation somewhat: if $B$ and $C$ are bases, then they aren’t matrices; what you really want is to have a basis $\beta$, and $B$ is the matrix whose columns are the vectors in $\beta$; and another basis $\gamma$, and $C$ is the matrix whose columns are the vectors in $\gamma$.

From what you have: since $CS = B$ and the kernel of $B$ is zero, the kernel of $S$ is zero as well: if $\mathbf{x}$ lies in the kernel of $S$, then
$$\mathbf{0} = C\mathbf{0} = C(S\mathbf{x}) = (CS)\mathbf{x} = B\mathbf{x}.$$
But since the kernel of $B$ is zero, that means that $\mathbf{x}=\mathbf{0}$. So the kernel of $S$ is zero, which means that its columns are linearly independent, hence a basis. But the columns of $S$ are the coordinate vectors of the elements of $\beta$ written in terms of $\gamma$, which gives your result.
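For illustration (this is not part of the proof), here is a small numpy sketch of the contrapositive: with an invertible $C$ and an $S$ that has dependent columns, the product $B = CS$ inherits a nonzero kernel vector. The specific matrices are random and made up for the example:

```python
import numpy as np

# Contrapositive sketch: if S had a nontrivial kernel, then B = C S would
# inherit it, so B's columns could not be linearly independent.
rng = np.random.default_rng(1)
n = 4
C = rng.standard_normal((n, n))            # random, hence almost surely invertible
S = rng.standard_normal((n, n))
S[:, -1] = S[:, 0]                         # force dependent columns in S
x = np.zeros(n)
x[0], x[-1] = 1.0, -1.0                    # a nonzero vector in ker(S)

B = C @ S
print(np.allclose(S @ x, 0))               # True: x is in ker(S)
print(np.allclose(B @ x, 0))               # True: the same x is in ker(B)
```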

For your final question, yes, you are correct. Here’s a general statement you can prove (using the same technique as above):

If $A$ is $n\times p$, $B$ is $p\times m$, and $C$ is $n\times m$ with $AB=C$, then $\mathrm{ker}(B)\subseteq\mathrm{ker}(C)$. That is, if $\mathbf{x}\in\mathbb{R}^m$ is in the kernel of $B$, then it is also in the kernel of $C$.
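Here is a small numerical sketch of that general statement, using the same letters as in the statement (the sizes and matrices are arbitrary, chosen only for illustration):

```python
import numpy as np

# Illustration of the general statement: a known kernel vector of B is also a
# kernel vector of C = A B.
rng = np.random.default_rng(2)
n, p, m = 5, 4, 3
A = rng.standard_normal((n, p))
B = rng.standard_normal((p, m))
B[:, 2] = B[:, 0] + B[:, 1]                # so x = (1, 1, -1) lies in ker(B)
x = np.array([1.0, 1.0, -1.0])

C = A @ B
print(np.allclose(B @ x, 0))               # True: x is in ker(B)
print(np.allclose(C @ x, 0))               # True: ker(B) is contained in ker(C)
```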

Another way of attacking the problem is to prove that $[\mathbf{v}_1]_{\gamma},\ldots,[\mathbf{v}_n]_{\gamma}$ are linearly independent directly. But this is easy: suppose that
$$\alpha_1[\mathbf{v}_1]_{\gamma} + \cdots + \alpha_n[\mathbf{v}_n]_{\gamma} = \mathbf{0}.$$ Then we have:
$$\mathbf{0} = \alpha_1[\mathbf{v}_1]_{\gamma} + \cdots + \alpha_n[\mathbf{v}_n]_{\gamma}
= [\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n]_{\gamma}.$$
But this means that $\alpha_1\mathbf{v}_1+\cdots+\alpha_n\mathbf{v}_n=\mathbf{0}$, and since $\mathbf{v}_1,\ldots,\mathbf{v}_n$ are linearly independent, that means $\alpha_1=\cdots=\alpha_n=0$.

In essence, it’s the same argument as you have above, but playing directly with the vectors instead of the associated change-of-basis matrices.
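If it helps to see it numerically, here is a small sketch of the identity used above, namely that taking coordinates with respect to $\gamma$ is linear; the bases and scalars below are random and purely illustrative:

```python
import numpy as np

# Sketch of the key identity: taking coordinates with respect to gamma (the
# columns of C) is linear, so the coordinates of a linear combination are the
# same linear combination of the coordinate vectors.
rng = np.random.default_rng(3)
n = 4
C = rng.standard_normal((n, n))            # columns: the basis gamma
V = rng.standard_normal((n, n))            # columns: v_1, ..., v_n
alpha = rng.standard_normal(n)

coords = np.linalg.solve(C, V)             # column i is [v_i]_gamma
lhs = np.linalg.solve(C, V @ alpha)        # [alpha_1 v_1 + ... + alpha_n v_n]_gamma
rhs = coords @ alpha                       # alpha_1 [v_1]_gamma + ... + alpha_n [v_n]_gamma
print(np.allclose(lhs, rhs))               # True
```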

Yes. If $CS = B$, then $S = C^{-1}B$, which is also an invertible matrix since $B$ and $C$ are invertible.
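A quick numerical sketch of this formula (with arbitrary random $B$ and $C$, hence almost surely invertible):

```python
import numpy as np

# Check that S = C^{-1} B satisfies C S = B and is itself invertible.
rng = np.random.default_rng(4)
n = 4
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

S = np.linalg.inv(C) @ B
print(np.allclose(C @ S, B))                   # True: S satisfies C S = B
print(np.linalg.matrix_rank(S) == n)           # True: S is invertible
```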

If $B$ and $C$ are bases, there is a change-of-basis matrix $S$ (or rather, a linear map $f$ associated to $S$) that sends $B$ to $C$.
But $f$ is not just a map that sends one particular basis to another; it sends any basis to another basis.
If $(e_1,\ldots,e_n)$ is the standard basis of $\mathbb{R}^n$, $f$ sends it to a new basis whose vectors are the column vectors of its matrix representation $S$.

The maps $f$ that send bases to bases are precisely the bijective linear maps from $\mathbb{R}^n$ to $\mathbb{R}^n$.
It is easy to check that if $f$ is a change of basis, then it is bijective; and if $f$ is bijective, it sends any basis to another basis.
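A small numerical sketch of these last two facts (the matrices below are random and purely illustrative):

```python
import numpy as np

# Sketch: an invertible matrix S sends the standard basis to its own columns,
# and sends any basis to another basis (the images stay linearly independent).
rng = np.random.default_rng(5)
n = 4
S = rng.standard_normal((n, n))                # random, hence almost surely invertible
E = np.eye(n)                                  # columns: the standard basis e_1, ..., e_n
print(np.allclose(S @ E, S))                   # True: f(e_i) is the i-th column of S

other_basis = rng.standard_normal((n, n))      # columns: some other basis
images = S @ other_basis                       # f applied to each basis vector
print(np.linalg.matrix_rank(images) == n)      # True: the images form a basis again
```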