
This is out of my textbook, Axler's "Linear Algebra Done Right", which I am self-studying from. (*I have organized my thoughts with Roman numerals at the points where I would like some sort of response.*)

**Linear Dependence Lemma**: If $(v_{1},\ldots,v_{m})$ is linearly dependent in $V$ and $v_{1} \neq 0$, then there exists $j \in \{2,\ldots,m\}$ such that the following hold:

(a) $v_{j} \in span(v_{1},\ldots,v_{j-1})$;


*( I. Why does this need to be justified? Is it because $v_{j}$ is an “extra” vector, which would make this arbitrary set of linear combinations dependent?).*

(b) If the $j^{th}$ term is removed from $(v_{1},\ldots,v_{m})$, the span of the remaining list equals $span(v_{1},\ldots,v_{m})$.

*( II. My assumption is that this basically means that if we remove this extra vector, then we still have the same list of linear combinations).*

*(I also found the following proof a bit confusing and need some clarification).*

PROOF: Suppose $(v_{1},\ldots,v_{m})$ is linearly dependent in $V$ and $v_{1} \neq 0$. Then there exist $a_{1},\ldots,a_{m} \in \mathbb{F}$, not all $0$, such that $$a_{1}v_{1}+\cdots+a_{m}v_{m} = 0.$$

(So far so good, from what I know, this is just stating the opposite of Linear Independence, where the only choice of $a_{1},\ldots,a_{m} \in \mathbb{F}$ that satisfies $a_{1}v_{1}+\cdots+a_{m}v_{m} = 0$ is $a_1 =\cdots= a_{m} = 0$)

CONT: Not all of $a_{2},a_{3},\ldots,a_{m}$ can be $0$ (because $v_1 \neq 0$). Let $j$ be the largest element of $\{2,\ldots,m\}$ such that $a_{j} \neq 0$. Then $$ v_{j} = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1} ,$$

proving (a).

*( III. I will fill in the extra steps here because I feel that I may have the right idea).*

$Span(v_{1},\ldots,v_{m}) = 0$ for $j \in \{2,\ldots,m\} = a_{1}v_{1} + \cdots + a_{j}v_{j} = 0$.

Here I just solved for $v_j$, and got the result $v_{j} = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1} ,$ which corresponds to the above. $a_{j} \neq 0$ because we have $a_j^{-1}$ for each term, and $v_1 \neq 0$ because if we have $a_{1}v_{1}+\cdots+a_{j}v_{j} = 0$ then all the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ could be equal to $0$, if that was the case. I think I have an idea, but how exactly does this prove that $v_j$ is contained in the span of $(v_{1},\ldots,v_{j-1})$? Is it because $ -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1}$, is just a linear combination of vectors that is equal to $v_j$?

CONT: To prove (b), suppose that $u \in span(v_{1},\ldots,v_{m})$. Then there exist $c_{1},\ldots,c_m \in \mathbb{F}$ such that $$u = c_1v_1 + \cdots + c_mv_m.$$

In the equation above, we replace $v_j$ with the right side of 2.5, which shows that $u$ is in the span of the list obtained by removing the $j^{th}$ term from $(v_1,\ldots,v_m)$. Thus (b) holds. $\Box$

(**IV.** So how exactly does this work? I find this part the most confusing).

Sorry that this is such a long list, but I really want to fully understand everything I am learning, and I am pretty new to proving stuff, so I want to make sure that I improve that skill as well.


I. Why does (a) need to be justified?

Because every claim made needs to be. The essence of (a) is that if there is a linear dependence, then there is a specific index $j$ such that the blame for the linear dependence can be placed on $v_j$, for failing to be independent of its predecessors. A different way to prove this is to consider the list of sequences $()$, $(v_1)$, $(v_1,v_2)$, $\ldots$, $(v_1,v_2,\ldots,v_m)$; since the first one is linearly independent and the last one is linearly dependent (by assumption), there must be a first one in the list that is linearly dependent; if this is $(v_1,\ldots,v_j)$, then one can show $v_j\in span(v_1,\ldots,v_{j-1})$. The argument is similar to the proof given in the text (a linear dependence relation among $(v_1,\ldots,v_j)$ must involve $v_j$ with nonzero coefficient). An advantage of this approach is that it does not depend essentially on any choice (the linear dependency taken is unique up to a scalar multiple), and it puts the blame on the very first $v_j$ that causes linear dependence.
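A concrete instance (my own example, not from the text) may make this tangible: in $\mathbb{R}^2$ take $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(1,1)$. The prefixes $()$, $(v_1)$ and $(v_1,v_2)$ are linearly independent, while $(v_1,v_2,v_3)$ is linearly dependent, so the first dependent prefix gives $j=3$, and indeed

$$v_3 = v_1 + v_2 \in span(v_1,v_2).$$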

II. My assumption is that (b) basically means that if we remove this extra vector, then we still have the same list of linear combinations.

More precisely, the removal of $v_j$ does not affect the *set of vectors* that can be written as linear combinations, even though the linear combinations themselves look different (since $v_j$ is no longer allowed to appear).

III. I will fill in… $span(v_{1},\ldots,v_{m}) = 0$ for $j \in \{2,\ldots,m\} = a_{1}v_{1} + \cdots + a_{j}v_{j} = 0$.

Here I just solved for $v_j$, and got the result $v_j = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1} ,$ which corresponds to the above. $a_{j} \neq 0$ because we have $a_j^{-1}$ for each term,

You’ve got that a bit backwards. You could only solve for $v_j$ under the assumption $a_j\neq0$; you cannot conclude that from the fact that you just solved for $v_j$.

and $v_1 \neq 0$ because if we have $a_{1}v_{1}+\cdots+a_{j}v_{j} = 0$ then all the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ could be equal to $0$, if that was the case.

This is doubly beside the point. $v_1\neq0$ is given, you don’t need to prove that. On the other hand there is nothing absurd in the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ all being equal to $0$, except that the text had just proved it is not the case *using* the fact that $v_1\neq0$. But the author could have avoided the whole mess about $v_1$ by observing that $span()=\{0\}$.

I think I have an idea, but how exactly does this prove that $v_j$ is contained in the span of $(v_{1},\ldots,v_{j-1})$? Is it because $ -\frac{a_1}{a_j}v_1 – \cdots – \frac{a_{j-1}}{a_j}v_{j-1}$, is just a linear combination of vectors that is equal to $v_j$?

Precisely.
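Spelled out: by definition,

$$span(v_1,\ldots,v_{j-1}) = \{b_1v_1+\cdots+b_{j-1}v_{j-1} : b_1,\ldots,b_{j-1}\in\mathbb{F}\},$$

and choosing $b_i=-a_i/a_j$ exhibits $v_j$ as an element of this set.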

In the equation above, we replace $v_j$ with the right side of 2.5, which shows that $u$ is in the span of the list obtained by removing the $j^{th}$ term from $(v_1,\ldots,v_m)$. Thus (b) holds. $\Box$

IV. So how exactly does this work?

If you write down a linear combination of $v_1,\ldots,v_m$ it contains a single occurrence of $v_j$. If you replace that occurrence (within parentheses, as it gets multiplied by a scalar) by the right hand side of 2.5, then there is no longer any occurrence of $v_j$. You don’t directly get a linear combination of the remaining $v_i$, but once you work out the multiplication and collect like terms, you do get such a linear combination. For instance if $v_3=-5v_1+\frac32v_2$ then

$$\begin{align}
av_1+bv_2+cv_3+dv_4
&= av_1+bv_2+c(-5v_1+\tfrac32v_2)+dv_4\\
&= av_1+bv_2-5cv_1+\tfrac32cv_2+dv_4\\
&= (a-5c)v_1+(b+\tfrac32c)v_2+dv_4.
\end{align}$$
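If you like, the bookkeeping in this example can also be sanity-checked numerically (my own check, not part of the text); the vectors and scalars below are arbitrary choices satisfying $v_3=-5v_1+\frac32v_2$:

```python
import numpy as np

# Arbitrary vectors in R^3, chosen so that v3 = -5*v1 + (3/2)*v2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, -1.0])
v3 = -5 * v1 + 1.5 * v2
v4 = np.array([3.0, 3.0, 3.0])

# Arbitrary scalars.
a, b, c, d = 2.0, -1.0, 4.0, 0.5

# Original combination, using v3 ...
lhs = a * v1 + b * v2 + c * v3 + d * v4
# ... equals the combination with v3 substituted away and like terms collected.
rhs = (a - 5 * c) * v1 + (b + 1.5 * c) * v2 + d * v4

print(np.allclose(lhs, rhs))  # True
```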
