Short matrix algebra question

$A$ is a square matrix, and there is a matrix $D$ such that $AD=I$.

I need help to prove either (a) or (b) below:

(a) Show that for each $\vec b$, $A\vec x=\vec b$ has a unique solution.

OR

(b) Show that $A$ is invertible.


For (a), $AD\vec b=I\vec b = \vec b$, so obviously the given equation has at least one solution, $\vec x =D\vec b$. But how do I show that the solution is unique?

For (b), I’m guessing we should show either that $DA=I$ or that $DA=AD$ (and thus $D=A^{-1}$), but I’m not sure how to do this without assuming that $A$ is invertible, which is what we need to show.


Here is a proof which does not use group theory or vector spaces:

I will establish the equivalence of the following statements. The details are left for you to fill in. Once you have established (b) from the sketch below, establishing (a) is trivial.

The following are equivalent for any square matrix $A$:

  1. $A$ has a left inverse.
  2. $Ax = 0$ has only the trivial solution.
  3. The reduced row echelon form of $A$ is the identity matrix.
  4. $A$ is a product of elementary matrices.
  5. $A$ is invertible.
  6. $A$ has a right inverse.

$1\implies 2$: If $A$ has a left inverse $L$ then $Ax=0 \implies LAx=L0\implies Ix=x=0$.

$2\implies 3$: Since $Ax=0$ has only the trivial solution, Gauss-Jordan elimination must reduce the augmented matrix $[A\mid 0]$ to $[I\mid 0]$, so the reduced row echelon form of $A$ is $I$.

$3\implies 4$: Reducing $A$ to its reduced row echelon form $I$ can be written as $E_1\cdots E_kA=I$ for elementary matrices $E_i$. Each elementary matrix is invertible (and its inverse is also an elementary matrix), so $A=E_k^{-1}\cdots E_1^{-1}$, a product of elementary matrices.

$4\implies 5$: Each elementary matrix is invertible, and a product of invertible matrices is invertible.

$5\implies 6$: Trivial.

$6\implies 1$: If $R$ is a right inverse of $A$, then $AR=I$, so $A$ is a left inverse of $R$. By $1\implies 5$ applied to $R$, $R$ is invertible, and $AR=I$ forces $A=R^{-1}$; hence $RA=I$ and $R$ is a left inverse of $A$.
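To spell out how the chain settles the original question (a short sketch added here, not part of the original answer): the hypothesis $AD=I$ says exactly that $A$ has a right inverse, so $6\implies 5$ gives that $A$ is invertible, and then for every $\vec b$
$$A\vec x=\vec b \iff A^{-1}A\vec x=A^{-1}\vec b \iff \vec x=A^{-1}\vec b,$$
so a solution exists and is unique.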

Let’s use a determinant argument.

$AD=I \Longrightarrow \det(AD)=\det(I) \Longrightarrow \det(A)\det(D)=1.$

If $A$ or $D$ were non-invertible, then $\det(A)=0$ or $\det(D)=0$; by the above, this cannot be the case.

Therefore, both $A$ and $D$ are invertible and belong to $GL_n(\mathbb{F})$. The inverse is unique, so $D = A^{-1}$.

Showing that part (b) implies part (a): if the inverse $A^{-1}$ exists, then $x=A^{-1}b$ is a solution, and any solution satisfies $A^{-1}Ax = A^{-1}b \Longrightarrow x = A^{-1}b$, so it is unique.
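A small concrete instance, added here for illustration (not part of the original answer): take
$$A=\begin{pmatrix}1&1\\0&1\end{pmatrix},\qquad D=\begin{pmatrix}1&-1\\0&1\end{pmatrix},\qquad AD=\begin{pmatrix}1&0\\0&1\end{pmatrix}=I,$$
so $\det(A)\det(D)=1\cdot 1=1$, and indeed $D=A^{-1}$; the unique solution of $A\vec x=\vec b$ is $\vec x=D\vec b=(b_1-b_2,\ b_2)^{T}$.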

Claim: A square matrix $\,A\,$ is invertible iff $\,AD=I\,$ for some (square) matrix $\,D\,$

The proof of the above depends on the following:

Proposition: If $\,G\,$ is a group with unit $\,1\,$ and $\,x\in G\,$ satisfies $\,xx=x\,$, then $\,x=1\,$.
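For completeness, the standard one-line proof of the proposition (spelled out here, not in the original answer):
$$x = 1\cdot x = (x^{-1}x)\,x = x^{-1}(xx) = x^{-1}x = 1.$$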

So, using associativity:
$$(DA)(DA)=D(AD)A=DIA=DA\Longrightarrow DA=I=AD$$

Putting $\,D=A^{-1}\,$ gives us the usual notation for the inverse, and the above answers both questions positively:
$$(a) \,\,Ax=b\Longrightarrow A^{-1}Ax=A^{-1}b\Longrightarrow x=A^{-1}b$$
$(b)\,$ This is done above.

Added: Following the remarks in the comments, and since we cannot assume that any of $\,D,A,DA, AD\,$ is invertible (so talking about a group is not justified), I shall try an approach proposed by Rolando (assume the matrices are $\,n\times n\,$):

We’re given $\,AD=I\,$. Either in terms of matrices or of linear operators, this means that $\,\dim Im(AD)=n\,$ (the image is full, i.e. $\,AD\,$ is onto). Now we have a general claim whose proof is elementary:

Claim: If we have functions $\,f:A\to B\,$, $\,g:B\to C\,$, then $\,g\circ f\,$ onto $\,\Longrightarrow\, g\,$ onto.
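The elementary proof, spelled out here for convenience: given $\,c\in C\,$, since $\,g\circ f\,$ is onto there is $\,a\in A\,$ with $\,(g\circ f)(a)=c\,$, so $\,c=g\big(f(a)\big)\,$ lies in the image of $\,g\,$; hence $\,g\,$ is onto.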

From this (taking $\,f=D\,$, $\,g=A\,$) it follows that $\,A\,$ is onto, i.e. $\,\dim Im(A)=n\,$, and by rank-nullity $\,\dim\ker A=0\,$, which means $\,A\,$ is a bijection and, thus, invertible.

For any $b$ in your vector space, you have
$$AD(b)=I(b)=b$$
In particular $A$ is surjective and $D$ is injective (see the check below). Since, by the rank-nullity theorem, an endomorphism of a finite-dimensional vector space is surjective if and only if it is bijective, $A$ is invertible.
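A one-line justification of the injectivity of $D$, added here: if $D\vec x=D\vec y$, then
$$\vec x = I\vec x = AD\vec x = AD\vec y = I\vec y = \vec y.$$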

Since $AD=I$, the range of $A$ is the full vector space: for any vector $\vec v$ you can find a vector $\vec w$ so that $A\vec w=\vec v$: just use $\vec w=D\vec v$. Denote the dimension of the vector space by $d$ (I assume we are in a finite-dimensional vector space, because otherwise I would not know how to define a square matrix). Thus the range of $A$ has dimension $d$.

Now assume there are two vectors $\vec v_1\neq\vec v_2$ such that $A\vec v_1=A\vec v_2$. Then by linearity, $A(\vec v_1-\vec v_2)=\vec 0$. Now take the vector $\vec b_1 := \vec v_1-\vec v_2$ (which by assumption is nonzero) and extend it with $d-1$ other vectors to a basis $\vec b_1,\dots,\vec b_d$ of the vector space. Now look at the image of an arbitrary vector $\vec w=\sum_{i=1}^d w_i\vec b_i$: since $\vec b_1$ is, by construction, mapped to $\vec 0$, every vector is mapped to $\sum_{i=2}^d w_i A\vec b_i$. But the space spanned by those images has dimension at most $d-1$, which contradicts the previously shown fact that the range of $A$ is the whole vector space. Thus the assumption $\vec v_1\neq\vec v_2$ but $A\vec v_1=A\vec v_2$ cannot be true.
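A concrete illustration of this dimension count (my own example, not from the answer): for $A=\begin{pmatrix}1&0\\1&0\end{pmatrix}$ we have $A(0,1)^T=\vec 0$, so taking $\vec b_1=(0,1)^T$ and $\vec b_2=(1,0)^T$, every image $A\vec w$ is a multiple of $A\vec b_2=(1,1)^T$; the range has dimension $1<2$, and such an $A$ indeed has no right inverse.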

First, we need to prove that $A$ is invertible. We are given $ AD=I \Rightarrow |AD|=1 \Rightarrow |A||D|=1$ $\Rightarrow$ $|A| \neq 0$ and $|D|\neq 0\,.$ Now, since $|A|\neq 0$, $A$ is invertible.

The other step is to prove that $D=A^{-1}$. One way is the following (others appear in the answers above).

$ AD=I \Rightarrow AD=AA^{-1} \Rightarrow AD-AA^{-1}=0 \Rightarrow A(D-A^{-1})=0\,.$ Since $A$ is invertible, multiplying on the left by $A^{-1}$ gives $D-A^{-1}=0$, i.e. $D = A^{-1}$.
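A slightly shorter route to the same conclusion, added for illustration once $A^{-1}$ is known to exist:
$$D = ID = (A^{-1}A)D = A^{-1}(AD) = A^{-1}I = A^{-1}.$$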