
$A$ is a square matrix, and there is a matrix $D$ such that $AD=I$.

I need help to prove either (a) or (b) below:

(a) Show that for each $\vec b$, $A\vec x=\vec b$ has a unique solution.


OR

(b) Show that $A$ is invertible.

For (a), $AD\vec b=I\vec b = \vec b$, so obviously the given equation has at least one solution $\vec x =D\vec b$. But how to show that the solution is *unique*?

For (b), I’m guessing we should show either that $DA=I$ or that $DA=AD$ (and thus $D=A^{-1}$), but I’m not sure how to do this without assuming that $A$ is invertible, which is what we need to show.


Here is a proof which does not use group theory or vector spaces:

I will establish the equivalence of the following statements; the details are left for you to fill in. Once you have established (b) by the sketch below, establishing (a) is trivial.

The following are equivalent for any square matrix $A$:

- $A$ has a left inverse.
- $Ax = 0$ has only the trivial solution.
- The reduced row echelon form of $A$ is the identity matrix.
- $A$ is a product of elementary matrices.
- $A$ is invertible.
- $A$ has a right inverse.

$1\implies 2$: If $A$ has a left inverse $L$ then $Ax=0 \implies LAx=L0\implies Ix=x=0$.

$2\implies 3$: If $Ax=0$ has only the trivial solution, Gauss-Jordan elimination must reduce the augmented matrix of $Ax=0$ to that of $Ix=0$, so the reduced row echelon form of $A$ is $I$.

$3\implies 4$: Write the row reduction as $E_1\cdots E_kA=I$. Each elementary matrix is invertible, and its inverse is again an elementary matrix, so $A=E_k^{-1}\cdots E_1^{-1}$.

$4\implies 5$: Each elementary matrix is invertible.

$5\implies 6$: Trivial.

$6\implies 1$: If $R$ is a right inverse of $A$, then $A$ is a left inverse of $R$, so by $1\implies 5$, $R$ is invertible with inverse $A$. Hence $RA=I$, i.e. $R$ is also a left inverse of $A$.
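As a quick sanity check of the equivalence, here is a small numeric example. The matrices $A$ and $L$ below are illustrative choices of mine, not from the question:

```python
# Sanity check: on a concrete invertible 2x2 matrix, a left inverse
# turns out to be a right inverse as well, as the equivalence predicts.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]
A = [[1, 2], [3, 5]]     # det(A) = 1*5 - 2*3 = -1, so A is invertible
L = [[-5, 2], [3, -1]]   # a left inverse of A

print(matmul(L, A) == I2)  # True: L is a left inverse of A
print(matmul(A, L) == I2)  # True: the same L is also a right inverse
```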

Let’s use a determinant argument.

$AD=I \Longrightarrow \det(AD)=\det(I) \Longrightarrow \det(A)\det(D)=1.$

If $A$ or $D$ were non-invertible, then $\det(A)=0$ or $\det(D)=0$, which contradicts the equation above.

Therefore, both $A$ and $D$ are invertible and belong to $GL_n(\mathbb{F})$. The inverse is unique, so $D = A^{-1}$.

Showing part (b) implies part (a): if the inverse $A^{-1}$ exists, it is unique, so $A^{-1}Ax = A^{-1}b \Longrightarrow x = A^{-1}b$.
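The determinant argument can be checked numerically. The pair $A$, $D$ below is a hypothetical example of mine satisfying $AD=I$:

```python
# Check det(AD) = det(A)*det(D) = 1 on a sample pair with AD = I.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 5]]
D = [[-5, 2], [3, -1]]       # chosen so that AD = I

AD = matmul(A, D)
print(AD)                    # [[1, 0], [0, 1]]
print(det2(A) * det2(D))     # 1 -> neither determinant can be zero
```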

**Claim:** A square matrix $\,A\,$ is invertible iff $\,AD=I\,$ for some (square) matrix $\,D\,$.

The proof of the above depends on the following

**Proposition:** If $\,G\,$ is a group with unit $\,1\,$, $\,x\in G\,$, and $\,xx=x\,$, then $\,x=1\,$.

So using now associativity:

$$(DA)(DA)=D(AD)A=DIA=DA\Longrightarrow DA=I=AD$$

Putting $\,D=A^{-1}\,$ gives us the usual notation for the inverse, and the above answers both questions positively:

$$(a) \,\,Ax=b\Longrightarrow A^{-1}Ax=A^{-1}b\Longrightarrow x=A^{-1}b$$

$(b)\,$ is done above.
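The idempotency step $(DA)(DA)=DA$ can be illustrated numerically. The matrices below are a hypothetical pair of mine with $AD=I$:

```python
# With AD = I, the product E = DA satisfies E*E = D(AD)A = DA = E,
# and the argument above then forces E = I. Demonstrated on a 2x2 pair.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
D = [[-5, 2], [3, -1]]      # AD = I

E = matmul(D, A)            # E = DA
print(matmul(E, E) == E)    # True: (DA)(DA) = DA
print(E)                    # [[1, 0], [0, 1]], i.e. DA = I as well
```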

**Added:** Following the remarks in the comments, and since we cannot assume any of $\,D,A,DA,AD\,$ is invertible (so talking of a group is out of order), I shall try an approach proposed by Rolando (assume the matrices are $\,n\times n\,$):

We’re given $\,AD=I\,$. Either in terms of matrices or of linear operators, this means that $\,\dim \operatorname{Im}(AD)=n\,$ (the image is full, i.e. $\,AD\,$ is onto). Now we have a general claim whose proof is elementary:

**Claim:** If we have functions $\,f:A\to B\,\,,\,g:B\to C\,$ , then $\,g\circ f\,$ onto $\,\Longrightarrow\,g\,$ onto.

From this it follows that $\,A\,$ is onto, i.e. $\,\dim \operatorname{Im}(A)=n\,$; by rank-nullity, $\,\dim\ker A=0\,$, which means $\,A\,$ is also injective, hence a bijection and thus invertible.

For any $b$ in your vector space, you have

$$AD(b)=I(b)=b$$

In particular $A$ is surjective, and $D$ is injective. Since an endomorphism of a finite-dimensional vector space is surjective if and only if it is bijective, by the rank-nullity theorem, $A$ is invertible.

Since $AD=I$, the range of $A$ is the full vector space: For any vector $\vec v$ you can find a vector $\vec w$ so that $A\vec w=\vec v$: Just use $\vec w=D\vec v$. Denote the dimension of the vector space with $d$ (I assume we are in a finite-dimensional vector space, because otherwise I would not know how to define a square matrix). Thus the range of $A$ has dimension $d$.

Now assume there are two vectors $\vec v_1\neq\vec v_2$ with $A\vec v_1=A\vec v_2$. Then by linearity, $A(\vec v_1-\vec v_2)=\vec 0$. Take the vector $\vec b_1 := \vec v_1-\vec v_2$ (which by assumption is nonzero) and extend it with $d-1$ other vectors to a basis $\vec b_1,\dots,\vec b_d$ of the vector space. Now look at the image of an arbitrary vector $\vec w=\sum_i w_i\vec b_i$: since $\vec b_1$ is, by construction, mapped to $\vec 0$, every vector is mapped to $\sum_{i=\color{red}2}^d w_i A\vec b_i$. But the space spanned by those vectors has dimension at most $d-1$, which contradicts the previously shown fact that the range of $A$ is the whole vector space. Thus $\vec v_1\neq\vec v_2$ with $A\vec v_1=A\vec v_2$ cannot occur, so $A$ is injective, hence invertible.
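The kernel/range trade-off can be seen concretely. Here is a hypothetical singular $2\times 2$ matrix (my own example) whose nontrivial kernel forces its range to have dimension $1$, so it cannot be onto:

```python
# A singular matrix has a nonzero kernel vector, and its columns then
# span a space of dimension < d, so it cannot be surjective.

def matvec(M, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

B = [[1, 2], [2, 4]]        # det(B) = 0: second row is twice the first
b1 = [2, -1]                # a nonzero kernel vector: B @ b1 = 0

print(matvec(B, b1))        # [0, 0]
# Both columns of B are multiples of (1, 2), so the range is the line
# span{(1, 2)} of dimension 1 < 2, and no x solves B @ x = (1, 0).
```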

First, we need to prove that $A$ is invertible. Given $AD=I \Rightarrow |AD|=1 \Rightarrow |A||D|=1 \Rightarrow |A| \neq 0$ and $|D|\neq 0\,.$ Now, since $|A|\neq 0$, $A$ is invertible.

The other step is to prove that $D=A^{-1}$. One way (besides those in the other answers) is the following:

$AD=I \Rightarrow AD=AA^{-1} \Rightarrow AD-AA^{-1}=0 \Rightarrow A(D-A^{-1})=0\,.$ Multiplying on the left by $A^{-1}$, which exists by the first step, gives $D-A^{-1}=0$, i.e. $D=A^{-1}$.
