This is out of my textbook, Axler’s “Linear Algebra Done Right”, which I am self-studying from. (*I have organized my thoughts with Roman numerals at the points where I would like some sort of response.*)

**Linear Dependence Lemma**: If $(v_{1},\ldots,v_{m})$ is linearly dependent in $V$ and $v_{1} \neq 0$, then there exists $j \in \{2,\ldots,m\}$ such that the following hold:

(a) $v_{j} \in span(v_{1},\ldots,v_{j-1})$;

*( I. Why does this need to be justified? Is it because $v_{j}$ is an “extra” vector, which would make this arbitrary set of linear combinations dependent?).*

(b) If the $j^{th}$ term is removed from $(v_{1},\ldots,v_{m})$, the span of the remaining list equals $span(v_{1},\ldots,v_{m})$.

*( II. My assumption is that this basically means that if we remove this extra vector, then we still have the same list of linear combinations).*

*(I also found the following proof a bit confusing and need some clarification).*

PROOF: Suppose $(v_{1},\ldots,v_{m})$ is linearly dependent in $V$ and $v_{1} \neq 0$. Then there exist $a_{1},\ldots,a_{m} \in \mathbb{F}$, not all $0$, such that $$a_{1}v_{1}+\cdots+a_{m}v_{m} = 0.$$

(So far so good, from what I know, this is just stating the opposite of Linear Independence, where the only choice of $a_{1},\ldots,a_{m} \in \mathbb{F}$ that satisfies $a_{1}v_{1}+\cdots+a_{m}v_{m} = 0$ is $a_1 =\cdots= a_{m} = 0$)

CONT: Not all of $a_{2},a_{3},\ldots,a_{m}$ can be $0$ (because $v_1 \neq 0$). Let $j$ be the largest element of $\{2,\ldots,m\}$ such that $a_{j} \neq 0$. Then $$ v_{j} = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1},$$

proving (a).

*( III. I will fill in the extra steps here because I feel that I may have the right idea).*

$Span(v_{1},\ldots,v_{m}) = 0$ for $j \in \{2,\ldots,m\} = a_{1}v_{1} + \cdots + a_{j}v_{j} = 0$.

Here I just solved for $v_j$, and got the result $v_{j} = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1},$ which corresponds to the above. $a_{j} \neq 0$ because we have $a_j^{-1}$ for each term, and $v_1 \neq 0$ because if we have $a_{1}v_{1}+\cdots+a_{j}v_{j} = 0$ then all the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ could be equal to $0$, if that was the case. I think I have an idea, but how exactly does this prove that $v_j$ is contained in the span of $(v_{1},\ldots,v_{j-1})$? Is it because $-\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1}$ is just a linear combination of vectors that is equal to $v_j$?

CONT: To prove (b), suppose that $u \in span(v_{1},\ldots,v_{m})$. Then there exist $c_{1},\ldots,c_m \in \mathbb{F}$ such that $$u = c_1v_1 + \cdots + c_mv_m.$$

In the equation above, we replace $v_j$ with the right side of 2.5, which shows that $u$ is in the span of the list obtained by removing the $j^{th}$ term from $(v_1,\ldots,v_m)$. Thus (b) holds. $\Box$

(**IV.** So how exactly does this work? I find this part the most confusing).

Sorry that this is such a long list, but I really want to fully understand everything I am learning, and I am pretty new to proving stuff, so I want to make sure that I improve that skill as well.

I. Why does (a) need to be justified?

Because every claim made needs to be. The essence of (a) is that if there is a linear dependence, then there is a specific index $j$ such that the blame for the linear dependence can be given to $v_j$, for its failing to be independent of its predecessors. A different way to prove this is to consider the list of lists $()$, $(v_1)$, $(v_1,v_2)$, $\ldots$, $(v_1,v_2,\ldots,v_m)$; since the first one is linearly independent and the last one is linearly dependent (by assumption), there must be a first one in the list that is linearly dependent; if this is $(v_1,\ldots,v_j)$ then one can show $v_j\in span(v_1,\ldots,v_{j-1})$. The argument is similar to the proof given in the text (a linear dependence relation among $(v_1,\ldots,v_j)$ must involve $v_j$ with nonzero coefficient). An advantage of this approach is that it does not depend essentially on any choice (the linear dependency taken is unique up to a scalar multiple) and it puts the blame on the very first $v_j$ that causes linear dependence.
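This prefix-scanning argument is easy to experiment with. Below is a minimal Python sketch with a made-up example (the vectors and the `rank` helper are my own illustration, not from Axler): it row-reduces each prefix and reports the first one whose rank falls below its length.

```python
# Toy example (not from the text): scan the prefixes (), (v1), (v1,v2), ...
# and report the first linearly dependent one via a plain rank computation.
def rank(rows):
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-12), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

vs = [[1, 0, 0], [0, 1, 0], [2, 3, 0], [0, 0, 1]]  # v3 = 2*v1 + 3*v2

# A prefix (v1, ..., vk) is linearly dependent exactly when its rank is below k.
first_dep = next(k for k in range(1, len(vs) + 1) if rank(vs[:k]) < k)
print(first_dep)  # 3: (v1, v2, v3) is the first dependent prefix
```

The scan necessarily stops at the very first $j$ for which $v_j$ depends on its predecessors, which is the "canonical" $j$ this alternative proof produces.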

II. My assumption is that (b) basically means that if we remove this extra vector, then we still have the same list of linear combinations.

More precisely, the removal of $v_j$ does not affect the *set of vectors* that can be written as linear combinations, even though the linear combinations themselves look different (since $v_j$ is no longer allowed to appear).
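One way to see "same set of vectors, different-looking combinations" very concretely is to work over a finite field, where the span is a finite set that can be enumerated outright. The sketch below is my own toy illustration over $\mathbb{F}_2$ (not from the text): it lists *every* linear combination of each list and compares the two sets.

```python
from itertools import product

# Toy illustration over the field F_2 = {0, 1}: enumerate all linear
# combinations of each list of vectors and compare the resulting sets.
def span_f2(vecs):
    out = set()
    for coeffs in product([0, 1], repeat=len(vecs)):
        v = tuple(sum(c * x for c, x in zip(coeffs, col)) % 2
                  for col in zip(*vecs))
        out.add(v)
    return out

v1, v2, v4 = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 0, 1)
v3 = (1, 1, 0, 0)  # v3 = v1 + v2, the dependent term (j = 3)

# Removing v3 changes the available combinations but not the set they produce.
print(span_f2([v1, v2, v3, v4]) == span_f2([v1, v2, v4]))  # True
```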

III. I will fill in… $span(v_{1},\ldots,v_{m}) = 0$ for $j \in \{2,\ldots,m\} = a_{1}v_{1} + \cdots + a_{j}v_{j} = 0$.

Here I just solved for $v_j$, and got the result $v_j = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1},$ which corresponds to the above. $a_{j} \neq 0$ because we have $a_j^{-1}$ for each term,

You’ve got that a bit backwards. You could only solve for $v_j$ under the assumption $a_j\neq0$; you cannot conclude that from the fact that you just solved for $v_j$.

and $v_1 \neq 0$ because if we have $a_{1}v_{1}+\cdots+a_{j}v_{j} = 0$ then all the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ could be equal to $0$, if that was the case.

This is doubly beside the point. $v_1\neq0$ is given, you don’t need to prove that. On the other hand there is nothing absurd in the scalars $a_{2},\ldots,a_{m} \in \mathbb{F}$ all being equal to $0$, except that the text had just proved it is not the case *using* the fact that $v_1\neq0$. But the author could have avoided the whole mess about $v_1$ by observing that $span()=\{0\}$.

I think I have an idea, but how exactly does this prove that $v_j$ is contained in the span of $(v_{1},\ldots,v_{j-1})$? Is it because $-\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1}$ is just a linear combination of vectors that is equal to $v_j$?

Precisely.
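If it helps, the computation in the proof of (a) can be carried out on a concrete example. The sketch below is my own invented instance (not from the text): the dependence coefficients $a$ are chosen by hand, and the code just performs the division step from the proof.

```python
# Toy example: v3 is built as 2*v1 + 3*v2, so the dependence relation
# 2*v1 + 3*v2 - 1*v3 = 0 holds, i.e. a = (2, 3, -1), not all zero.
def add(u, v): return [x + y for x, y in zip(u, v)]
def scale(c, v): return [c * x for x in v]

v1 = [1.0, 0.0, 0.0]
v2 = [0.0, 1.0, 0.0]
v3 = add(scale(2, v1), scale(3, v2))

a = [2.0, 3.0, -1.0]
vs = [v1, v2, v3]

# j = the largest index with a_j != 0 (0-based here, so j = 2 means "v_3").
j = max(i for i, c in enumerate(a) if c != 0)

# Solve for v_j exactly as in the proof:
# v_j = -(a_1/a_j) v_1 - ... - (a_{j-1}/a_j) v_{j-1}
vj = [0.0] * len(vs[j])
for i in range(j):
    vj = add(vj, scale(-a[i] / a[j], vs[i]))

print(vj == vs[j])  # True: v_j equals a combination of its predecessors
```

The final comparison is the whole content of (a): the right-hand side built from $v_1,\ldots,v_{j-1}$ alone reproduces $v_j$, which is exactly what $v_j \in span(v_1,\ldots,v_{j-1})$ means.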

In the equation above, we replace $v_j$ with the right side of 2.5, which shows that $u$ is in the span of the list obtained by removing the $j^{th}$ term from $(v_1,\ldots,v_m)$. Thus (b) holds. $\Box$

IV. So how exactly does this work?

If you write down a linear combination of $v_1,\ldots,v_m$ it contains a single occurrence of $v_j$. If you replace that occurrence (within parentheses, as it gets multiplied by a scalar) by the right hand side of 2.5, then there is no longer any occurrence of $v_j$. You don’t directly get a linear combination of the remaining $v_i$, but once you work out the multiplication and collect like terms, you do get such a linear combination. For instance if $v_3=-5v_1+\frac32v_2$ then

$$\begin{align}
av_1+bv_2+cv_3+dv_4
&= av_1+bv_2+c(-5v_1+\frac32v_2)+dv_4\\
&= av_1+bv_2-5cv_1+\frac32cv_2+dv_4\\
&= (a-5c)v_1+(b+\frac32c)v_2+dv_4.
\end{align}$$
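The identity above can be checked with exact rational arithmetic; the coefficient values for $a,b,c,d$ below are arbitrary choices of mine for illustration.

```python
from fractions import Fraction as F

# Check the substitution identity from the worked example:
# v3 = -5 v1 + (3/2) v2, with hand-picked coefficients a, b, c, d.
a, b, c, d = F(7), F(-2), F(4), F(9)

v1 = [F(1), F(0), F(0), F(0)]
v2 = [F(0), F(1), F(0), F(0)]
v4 = [F(0), F(0), F(0), F(1)]
v3 = [-5 * x + F(3, 2) * y for x, y in zip(v1, v2)]

def comb(coeffs, vecs):
    """Linear combination: sum of coeffs[i] * vecs[i], coordinatewise."""
    return [sum(c * x for c, x in zip(coeffs, col)) for col in zip(*vecs)]

lhs = comb([a, b, c, d], [v1, v2, v3, v4])          # uses v3
rhs = comb([a - 5 * c, b + F(3, 2) * c, d], [v1, v2, v4])  # v3 eliminated

print(lhs == rhs)  # True: collecting like terms removed v3 without changing u
```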
