Linearly independent set can be completed to a basis

Suppose I have a linearly independent set in a finite-dimensional vector space $V$. How can I prove rigorously that it can be completed to a basis? More importantly, is the completion unique? I do not think the answer is yes; the following is my counterexample:

$((1,1), e_2)$ is a basis for $\mathbb{R}^2$, as is $(e_1, e_2)$. However, does $(e_2)$ count as a linearly independent set?

Answers

The completion is certainly not unique. Multiplying any one of the new vectors by a nonzero scalar other than $1$ does not affect the span or the linear independence, but it changes the basis.

To prove that you can extend any linearly independent set $S$ to a basis, you proceed by an iterative argument. If $S$ spans, you are done. Otherwise, the span of $S$ does not include some vector $v$. You claim that $S \cup \{v\}$ is linearly independent: write down the condition for linear independence and observe that if it fails, the coefficient of $v$ in the nontrivial relation must be nonzero (since $S$ itself is independent), so you can solve for $v$ in terms of elements of $S$ and deduce that $v$ was in the span of $S$ after all. Now ask whether $S \cup \{v\}$ spans; if so, you are done. If not, take some vector $w$ not in the span and consider $S \cup \{v, w\}$. Iterate this argument.

To prove in general that this iteration eventually terminates in a spanning set, you actually need Zorn’s Lemma. However, if you have a finite spanning set $B$, then you can pick the elements $v, w, \ldots$ from $B$, and the worst the iteration can do is exhaust all the elements of $B$.
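A minimal sketch of the finite case in Python with NumPy (the function `extend_to_basis`, the rank test, and the $\mathbb{R}^2$ example are my own illustration, not part of the answer):

```python
import numpy as np

def extend_to_basis(independent, spanning):
    # Grow the independent list by adding each candidate from the
    # finite spanning set that is not already in the current span.
    basis = list(independent)
    for v in spanning:
        # v lies outside the span exactly when appending it raises the rank
        if np.linalg.matrix_rank(np.array(basis + [v])) > len(basis):
            basis.append(v)
    return basis

# Extend {(1, 1)} to a basis of R^2 using the spanning set {e1, e2}.
print(extend_to_basis([np.array([1.0, 1.0])],
                      [np.array([1.0, 0.0]), np.array([0.0, 1.0])]))
# keeps (1, 1) and e1; e2 is skipped since the first two already span R^2
```

The rank comparison is exactly the "is $v$ already in the span of $S$" test from the argument above.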

For the sake of variety:

The extendibility of an independent set $X$ to a basis can be shown with the Steinitz exchange lemma (although this is a little backwards). There is a basis of $V$, say $B$, and $X$ is independent, hence by the exchange lemma there is a basis containing all elements of $X$ together with $\dim V - |X|$ elements from $B$.
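For reference, one common formulation of the exchange lemma (statements vary slightly between texts): if $u_1, \ldots, u_m$ are linearly independent in $V$ and $w_1, \ldots, w_n$ span $V$, then $m \le n$ and, after reordering the $w_i$, the list $u_1, \ldots, u_m, w_{m+1}, \ldots, w_n$ spans $V$.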

Take your original list $\{v_i\}$ and start tossing in vectors. If your space $V$ is finite-dimensional, you will eventually span the space (tossing in the vectors of any finite spanning set certainly does it). It remains to show that you can throw out some vectors to recover linear independence and still span the space. This can be done algorithmically, as follows:

Take the $k$th vector in your list, and see if it is in the span of the previous $k-1$. If it is, toss it out; otherwise, keep it. In this way, go through all the vectors in your list.

You can check that the resulting list is linearly independent, spans $V$, and that this process terminates (again, if $V$ is finite-dimensional).
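A sketch of this sifting pass in Python with NumPy (the name `sift` and the example vectors are my own choices; note the original independent list goes first, so none of its vectors gets tossed):

```python
import numpy as np

def sift(vectors):
    # Walk the list in order; keep the k-th vector only if it is not
    # already spanned by the previously kept ones.
    kept = []
    for v in vectors:
        if np.linalg.matrix_rank(np.array(kept + [v])) > len(kept):
            kept.append(v)
    return kept

# Original list first, then tossed-in standard basis vectors of R^3.
vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([0.0, 0.0, 1.0])]
print(sift(vecs))   # three survivors: a basis of R^3 containing (1, 1, 0)
```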

The completion is not unique. Pick any nonzero vector in $\mathbb{R}^3$ and consider the plane normal to it. You may complete by choosing *any* two vectors which span this plane.
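A quick numerical check of this claim (the vector $(0,0,1)$ and the two completing pairs are my own choices): each $3 \times 3$ matrix below has nonzero determinant, so both completions are bases.

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])            # the chosen vector
# Two different pairs of vectors, each spanning the plane normal to v.
pairs = [[np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])],
         [np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])]]
for pair in pairs:
    M = np.array([v] + pair)
    print(np.linalg.det(M))   # nonzero, so {v} together with pair is a basis
```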

The definition of finite-dimensional is that $V = \operatorname{sp} \{ v_k \}_{k=1}^m$ for some finite set of vectors $v_1, \ldots, v_m$.

Now suppose you have a linearly independent list $w_1, \ldots, w_p$. Augment this list to get $w_1, \ldots, w_p, v_1, \ldots, v_m$, and apply Gram-Schmidt orthogonalization to these vectors in the order given, discarding the zero vectors that arise whenever a vector depends on its predecessors. This will produce a collection of vectors $\hat{w}_1, \ldots, \hat{w}_p, \hat{v}_1, \ldots, \hat{v}_s$ with the property that $\operatorname{sp} \{ w_1, \ldots, w_p \} = \operatorname{sp} \{ \hat{w}_1, \ldots, \hat{w}_p \}$, and $V = \operatorname{sp} \{ \hat{w}_1, \ldots, \hat{w}_p, \hat{v}_1, \ldots, \hat{v}_s \}$. The dimension of $V$ will be, of course, $s + p$.

The completion is not unique: any basis of $\operatorname{sp} \{ \hat{v}_1, \ldots, \hat{v}_s \}$ will do.
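A sketch of this recipe in Python with NumPy (classical Gram-Schmidt with a numerical tolerance for discarding zero residuals; the function name and the example are mine):

```python
import numpy as np

def gram_schmidt_extend(ws, vs, tol=1e-10):
    # Orthonormalize ws + vs in the order given, discarding any vector
    # whose residual after projection is (numerically) zero.
    ortho = []
    for x in list(ws) + list(vs):
        r = np.asarray(x, dtype=float).copy()
        for q in ortho:
            r -= (r @ q) * q        # remove the component along q
        n = np.linalg.norm(r)
        if n > tol:                 # dependent vectors leave r ~ 0
            ortho.append(r / n)
    return ortho

# Extend w1 = (1, 1, 0) to an orthonormal basis of R^3, feeding in the
# spanning set {e1, e2, e3} afterwards.
ws = [np.array([1.0, 1.0, 0.0])]
vs = [np.eye(3)[k] for k in range(3)]
for q in gram_schmidt_extend(ws, vs):
    print(np.round(q, 3))
```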

Suppose $S := \{v_1, v_2, \ldots, v_m\}$ is a linearly independent set of vectors and $B$ is a basis. We will show that we get a new basis of $V$ by substituting $v_1$ for a suitable vector of $B$. There exist $w_1, w_2, \ldots, w_r \in B$ and nonzero scalars $a_1, a_2, \ldots, a_r \in \mathbb{R}$ such that $v_1 = a_1 w_1 + a_2 w_2 + \cdots + a_r w_r$. Then $w_1 = a_1^{-1} v_1 - a_1^{-1} a_2 w_2 - \cdots - a_1^{-1} a_r w_r$, and it is an easy check that $B_1 := \{v_1\} \cup B \setminus \{w_1\}$ is a basis of $V$. Next we can get a new basis $B_2$ from $B_1$ by substituting $v_2$ for a suitable vector different from $v_1$, using the same procedure: write $v_2 = b_1 u_1 + b_2 u_2 + \cdots + b_t u_t + a v_1$ with $u_1, \ldots, u_t \in B_1 \setminus \{v_1\}$, where at least one of $b_1, b_2, \ldots, b_t$ is nonzero (since $\{v_1, v_2\}$ is a linearly independent set). Suppose $b_1 \neq 0$ and define $B_2 := \{v_2\} \cup B_1 \setminus \{u_1\}$. This process stops after $m$ steps and yields a basis $B_m$ of $V$ which contains $S$ as a subset.
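A sketch of this exchange procedure in Python with NumPy (the function name `exchange_to_basis`, the numerical tolerance, and the example are my own illustration):

```python
import numpy as np

def exchange_to_basis(S, B):
    # Invariant: after step i, slots 0..i-1 of `basis` hold v_1,...,v_i;
    # the remaining slots still hold vectors of the original basis B.
    basis = [np.asarray(b, dtype=float) for b in B]
    for i, v in enumerate(S):
        v = np.asarray(v, dtype=float)
        # Coefficients c with v = sum_k c[k] * basis[k].
        c = np.linalg.solve(np.column_stack(basis), v)
        # Some coefficient beyond the v-slots must be nonzero, otherwise
        # v would depend on v_1,...,v_{i-1}, contradicting independence.
        j = next(k for k in range(i, len(basis)) if abs(c[k]) > 1e-12)
        basis[j] = v                               # the exchange step
        basis[i], basis[j] = basis[j], basis[i]    # park v in slot i
    return basis

# Exchange v1 = (1,1,0) and v2 = (0,1,1) into the standard basis of R^3.
S = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]
B = [np.eye(3)[k] for k in range(3)]
print(exchange_to_basis(S, B))   # a basis of R^3 containing v1 and v2
```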