Assuming $AB=I$ prove $BA=I$

Possible Duplicate:
If $AB = I$ then $BA = I$

Most introductory linear algebra texts define the inverse of a square matrix $A$ as follows:

The inverse of $A$, if it exists, is a matrix $B$ such that $AB=BA=I$.

That definition, in my opinion, is problematic. A few books (in my sample, fewer than 20%) give a different definition:

The inverse of $A$, if it exists, is a matrix $B$ such that $AB=I$. They then go on to prove that $BA=I$.

Do you know of a proof other than one that defines the inverse through determinants or uses rref?

Is there a general setting in algebra under which $ab=e$ leads to $ba=e$ where $e$ is the identity?

Answers collected from the web for "Assuming $AB=I$ prove $BA=I$"

Multiply both sides of $AB-I=0$ on the left by $B$ to get
$$
(BA-I)B=0\tag{1}
$$
Let $\{e_j\}$ be the standard basis for $\mathbb{R}^n$. Note that $\{Be_j\}$ are linearly independent: suppose that
$$
\sum_{j=1}^n a_jBe_j=0\tag{2}
$$
then, multiplying $(2)$ on the left by $A$ gives
$$
\sum_{j=1}^n a_je_j=0\tag{3}
$$
which implies that $a_j=0$ since $\{e_j\}$ is a basis. Thus, $\{Be_j\}$ is also a basis for $\mathbb{R}^n$.

Multiplying $(1)$ on the right by $e_j$ yields
$$
(BA-I)Be_j=0\tag{4}
$$
for each basis vector $Be_j$. Since $BA-I$ annihilates every element of the basis $\{Be_j\}$, it annihilates all of $\mathbb{R}^n$; hence $BA-I=0$, that is, $BA=I$.
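As a numerical sanity check, here is a minimal Python/numpy sketch (the dimension, seed, and variable names are arbitrary choices of mine): it solves $AB=I$ for $B$ and confirms that $BA=I$ follows.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))   # a random n x n matrix, almost surely invertible

# Find a right inverse B by solving A @ B = I.
B = np.linalg.solve(A, np.eye(n))

print(np.allclose(A @ B, np.eye(n)))  # True: AB = I by construction
print(np.allclose(B @ A, np.eye(n)))  # True: BA = I, as the proof guarantees
```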

Failure in Infinite Dimensions

Let $A$ and $B$ be operators on the space of infinite sequences: $B$ shifts a sequence right by one position, filling the first element with $0$, and $A$ shifts a sequence left by one position, dropping the first element.

Then $AB=I$, but $BA$ sets the first element of every sequence to $0$, so $BA\neq I$.
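Here is a minimal sketch of these two operators acting on finite prefixes of sequences (the function names `shift_left` and `shift_right` are mine):

```python
def shift_left(seq):
    """A: drop the first element (left shift)."""
    return seq[1:]

def shift_right(seq):
    """B: prepend a zero (right shift)."""
    return [0] + seq

x = [1, 2, 3, 4]
print(shift_left(shift_right(x)))  # [1, 2, 3, 4] -- AB acts as the identity
print(shift_right(shift_left(x)))  # [0, 2, 3, 4] -- BA zeroes the first entry
```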

Arguments that assume $A^{-1}$ or $B^{-1}$ exists, and that make no reference to the finite dimensionality of the vector space, usually fail on this counterexample.

Without the assumption that $A$ and $B$ are square matrices, we can find counterexamples. For example:
$$
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{array}\right)
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array}\right)
=
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array}\right)
$$
and
$$
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array}\right)
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \end{array}\right)
=
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \end{array}\right).
$$
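A minimal numpy sketch of this non-square pair (the variable names `P` and `Q` are mine):

```python
import numpy as np

# The 2x3 and 3x2 matrices from the display above.
P = np.array([[1, 0, 0],
              [0, 1, 0]])
Q = np.array([[1, 0],
              [0, 1],
              [0, 0]])

print(P @ Q)  # the 2x2 identity
print(Q @ P)  # a 3x3 matrix whose bottom-right entry is 0, not the identity
```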

For square matrices, this was proved in several ways in the question:

If $AB = I$ then $BA = I$

Is there a general setting in algebra under which $ab=e$ leads to $ba=e$, where $e$ is the identity?

Finiteness, finite-dimensionality, or rigidities that follow from them, such as:

  • a Dedekind-finite set not being infinite,
  • the double-dual $V^{**}$ being naturally isomorphic to $V$,
  • the antipode squaring to the identity ($S^2=\mathrm{id}$), and other fancier analogues (think here of phrases like rigid tensor categories with duals).

There is a duality between injective and surjective, or left and right, and you need some setting in which the transpose from one to the other is its own inverse. The linear algebra result for finite matrices ultimately rests on the same principle for functions on finite sets, and on the dimension of a finite-dimensional vector space being well-defined (which is closely related to the cardinality of a finite set being well-defined).

For $A$ and $B$ square, if one additionally assumes that both $A^{-1}$ and $B^{-1}$ exist, then $AB=I$ implies $(AB)^{-1}=B^{-1}A^{-1}=I$. Multiplying on the left by $B$ and on the right by $A$ gives $BB^{-1}A^{-1}A=BIA$, i.e. $BA=I$. (Note that this argument presupposes the inverses exist, so it does not establish the claim from $AB=I$ alone; compare the remark after the shift counterexample above.)