Zero vector of a vector space

I know that every vector space needs to contain a zero vector. But all the vector spaces I’ve seen have the zero vector actually being zero (e.g. $\mathbf{0}=\langle0,0,\ldots,0\rangle$). Can’t the “zero vector” not involve zero, as long as it acts as the additive identity? If that’s the case then are there any graphical representations of a vector space that does not contain the origin?


Here’s an example. Let $V$ be the set of all $n$-tuples of strictly positive numbers $x_1,\ldots,x_n$ satisfying $x_1+\cdots+x_n=1$. Define “addition” of such vectors by

$$
(x_1,\ldots,x_n) \mathbin{\text{“}{+}\text{”}} (y_1,\ldots,y_n) = \frac{(x_1 y_1,\ldots,x_n y_n)}{x_1 y_1 + \cdots + x_n y_n }.
$$

This is a vector space whose zero element is
$$
\left( \frac 1 n , \ldots, \frac 1 n \right).
$$
The additive inverse of $(x_1,\ldots,x_n)$ is
$$
\frac{\left( \dfrac 1 {x_1}, \ldots, \dfrac 1 {x_n} \right)}{\dfrac 1 {x_1} + \cdots + \dfrac 1 {x_n}}.
$$
This operation is involved in a basic identity on conditional probabilities: $$ (\Pr(A_1),\ldots,\Pr(A_n)) \mathbin{\text{“}{+}\text{”}} k\cdot(\Pr(D\mid A_1),\ldots,\Pr(D\mid A_n)) = (\Pr(A_1\mid D),\ldots,\Pr(A_n\mid D)) $$
where $k$ is whatever it takes to make the sum of the entries $1$. However, in practice, one wouldn’t bother with $k$; just multiply term by term and then normalize.
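Here is a small numerical sketch (my own code, not part of the original answer) of this construction: vectors are $n$-tuples of positive numbers summing to $1$, "addition" is componentwise multiplication followed by normalization, and we check that $(1/n,\ldots,1/n)$ really acts as the additive identity and that the displayed formula gives inverses.

```python
# Simplex "vector space": vectors are positive n-tuples summing to 1;
# "addition" is componentwise multiplication followed by normalization.

def normalize(v):
    s = sum(v)
    return tuple(x / s for x in v)

def add(u, v):
    # "u + v" = normalized componentwise product
    return normalize(tuple(x * y for x, y in zip(u, v)))

def neg(v):
    # additive inverse: normalized componentwise reciprocals
    return normalize(tuple(1 / x for x in v))

n = 3
zero = tuple(1 / n for _ in range(n))  # the "zero vector" (1/n, ..., 1/n)
u = (0.5, 0.3, 0.2)

# zero is the additive identity: u "+" zero = u
assert all(abs(a - b) < 1e-12 for a, b in zip(add(u, zero), u))

# neg(u) is the additive inverse: u "+" (-u) = zero
assert all(abs(a - b) < 1e-12 for a, b in zip(add(u, neg(u)), zero))
```

Multiplying by the identity $(1/n,\ldots,1/n)$ scales every entry by the same factor, so normalization recovers the original tuple, exactly as the answer describes.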

Here’s a more down-to-earth example. Look at $\mathbb R^3$ and say you want to put the zero point at $\vec p = (2,3,7)$. Then define “addition” as follows:
$$
\vec a \mathbin{\text{“}{+}\text{”}} \vec b = \underbrace{\vec p + (\vec a - \vec p) + (\vec b - \vec p)}_{\begin{smallmatrix} \text{These are the usual} \\ \text{addition and subtraction.} \end{smallmatrix}}.
$$
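A minimal sketch (names are my own) of this shifted addition, verifying that $\vec p$ acts as the zero vector and that the reflection of $\vec a$ through $\vec p$ is its inverse:

```python
# Shifted-origin "addition" on R^3:  a "+" b = p + (a - p) + (b - p),
# so the chosen point p plays the role of the zero vector.

P = (2.0, 3.0, 7.0)  # the chosen "zero point"

def shifted_add(a, b, p=P):
    return tuple(pi + (ai - pi) + (bi - pi) for pi, ai, bi in zip(p, a, b))

a = (5.0, 1.0, 4.0)

# p is the additive identity for this operation
assert shifted_add(a, P) == a

# the additive inverse of a is its reflection through p, namely 2p - a
a_inv = tuple(2 * pi - ai for pi, ai in zip(P, a))
assert shifted_add(a, a_inv) == P
```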

Michael Hardy provides a very good answer. I want to explain what’s so exceptional about it.

If you have a vector space (let’s say finite dimensional), once you choose a basis for that vector space, and once you represent vectors in that basis, the zero vector will always be $(0,0,\ldots,0)$. Of course, the coordinates here are with respect to that basis.

We usually describe elements of $\mathbb R^n$ using coordinates that are of course the coordinates of the most obvious basis of $\mathbb R^n$. And the same for any subspace. So this question doesn’t come up there.

The exotic examples only happen when you use coordinates that are not really indigenous to the vector space. The coordinates may have some interesting mathematical structure, but one structure they will not have is the structure of the vector space they are representing. Calling them “coordinates” is almost a lie, since they don’t act like vector space coordinates at all, for instance, $(0,0,\ldots,0)$ is not the zero vector.

In linear algebra textbooks one sometimes encounters the example $V = (0, \infty)$, the set of positive reals, with “addition” defined by
$$
u \oplus v = uv
$$
and “scalar multiplication” defined by
$$
c \odot u = u^{c}.
$$
It’s straightforward to show that $(V, \oplus, \odot)$ is a vector space, but the zero vector (i.e., the identity element for $\oplus$) is $1$.
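A quick numerical sketch (my own code, not from any textbook) checking a few of the axioms for this space, including that $1$ acts as the zero vector:

```python
# V = (0, ∞) with "addition" u ⊕ v = uv and "scalar multiplication" c ⊙ u = u^c.
import math

def oplus(u, v):   # "addition" on V
    return u * v

def odot(c, u):    # "scalar multiplication" on V
    return u ** c

u, v = 2.0, 5.0
c, d = 3.0, -1.5

assert oplus(u, 1.0) == u               # 1 acts as the zero vector
assert oplus(u, 1.0 / u) == 1.0         # 1/u is the additive inverse of u

# c ⊙ (u ⊕ v) = (c ⊙ u) ⊕ (c ⊙ v)
assert math.isclose(odot(c, oplus(u, v)), oplus(odot(c, u), odot(c, v)))

# (c + d) ⊙ u = (c ⊙ u) ⊕ (d ⊙ u)
assert math.isclose(odot(c + d, u), oplus(odot(c, u), odot(d, u)))
```

The axioms hold because $u \mapsto \ln u$ carries $\oplus$ to ordinary addition and $\odot$ to ordinary scalar multiplication on $\mathbb R$.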

(The pleasure of “relabeling” this example so that it looks like a more familiar space is left as an exercise.)

The zero vector in a vector space depends on how you define the binary operation “addition” on your space.

For an example that can be easily visualized, consider the tangent space at a point $(a,b)$ of the plane $\mathbb{R}^2$. This is the space of “bound vectors” based at $(a,b)$: arrows from $(a,b)$ to $(a,b)+(c,d)$ for some $(c,d) \in \mathbb{R}^2$. The zero element of this vector space is the zero-length arrow at the base point $(a,b)$. Pictures from Lee’s “Introduction to Smooth Manifolds”:

(figure: bound vectors based at a point of the plane)

Or consider the tangent plane to a sphere at any point, which is also naturally a vector space:

(figure: the tangent plane to a sphere at a point)

Note however that it is generally meaningless to speak of “zero” without having a vector space structure (or some other algebraic structure with the notion of zero) in mind already.

In the plane of oriented segments, with addition defined by the parallelogram law, the zero vector is just a degenerate segment consisting of a single point: no zeros involved. Only after you introduce a basis and represent vectors with coordinates does the zero vector get represented by $(0,0)$.

That’s because once you establish an isomorphism of a vector space with $\bigoplus_{i\in I} F$ (with the standard addition and scaling), the zero vector is the only candidate that the zero of the “abstract” vector space could map to: there is nothing else satisfying $v+v=v$.
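A one-line justification of that last claim (the only assumptions are the vector space axioms): add $-v$ to both sides of $v + v = v$ and regroup,
$$
(v + v) + (-v) = v + (-v)
\;\Longrightarrow\;
v + \bigl(v + (-v)\bigr) = 0
\;\Longrightarrow\;
v + 0 = 0
\;\Longrightarrow\;
v = 0.
$$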

Since it is common practice to represent vector spaces this way, that is why you’ve seen it so often.

One more comment: in the specific vector space $\mathbb R^n$, the zero vector is the $n$-tuple $(0, 0, \ldots, 0)$. Any $n$-dimensional real vector space is isomorphic to $\mathbb R^n$: given a basis $e_1, e_2, \ldots, e_n$, take the isomorphism that maps $e_i$ to $(0, \ldots, 0, 1, 0, \ldots, 0)$, where the “$1$” is in the $i$th place. This is why we can always express vectors in the $(a, b, \ldots, z)$ notation.

However there exist infinite dimensional vector spaces for which we cannot use that notation.