
I was just wondering why we don’t ever define multiplication of vectors as individual component multiplication. That is, why doesn’t anybody ever define $\langle a_1,b_1 \rangle \cdot \langle a_2,b_2 \rangle$ to be $\langle a_1a_2, b_1b_2 \rangle$? Is the resulting vector just not geometrically interesting?


This is the Hadamard product, which is defined for matrices, and hence for column vectors. See the Wikipedia page: Hadamard product.
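For concreteness, here is a minimal Python sketch of the componentwise product on tuples of numbers (the `hadamard` helper is my own name, not a standard library function):

```python
def hadamard(u, v):
    """Componentwise (Hadamard) product of two equal-length tuples.

    Well-defined for tuples/arrays of numbers, but basis-dependent
    when the tuples are read as coordinate expansions of vectors.
    """
    if len(u) != len(v):
        raise ValueError("tuples must have the same length")
    return tuple(a * b for a, b in zip(u, v))

print(hadamard((1, 2, 3), (4, 5, 6)))  # (4, 10, 18)
```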

Unlike the usual operations of vector calculus, the product $\bullet$ you defined here is not covariant under Cartesian coordinate changes. This means that an equation involving $\bullet$ is not guaranteed to keep holding if both sides undergo an orthogonal coordinate change, such as a rotation of the axes.

For a two-dimensional example, consider the following equation:

\begin{equation}

\begin{bmatrix} 1 \\ 0 \end{bmatrix} \bullet \begin{bmatrix} 0 \\ 1\end{bmatrix} = \begin{bmatrix} 0 \\ 0\end{bmatrix}.

\end{equation}

If we rotate the plane by 45° counterclockwise, then

\begin{align}

\begin{bmatrix} 1 \\ 0 \end{bmatrix} \to \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},& & \begin{bmatrix} 0 \\ 1 \end{bmatrix} \to \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix},&&\begin{bmatrix} 0 \\ 0 \end{bmatrix} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix},

\end{align}

but

\begin{equation}\tag{!!}

\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \bullet \begin{bmatrix} -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}\end{bmatrix} \ne\begin{bmatrix} 0 \\ 0\end{bmatrix}.

\end{equation}

From the physicist’s point of view, then, this operation is ill-posed: a physically meaningful product should be independent of the particular coordinate system one chooses to describe physical space. This is not the case for the dot product and the cross product, which are independent of such a choice.
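The 45° computation above is easy to reproduce numerically. This Python sketch (helper names are mine) contrasts the componentwise product with the dot product under the same rotation:

```python
import math

def rotate(v, theta):
    """Rotate a 2D vector counterclockwise by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def hadamard(u, v):
    """The componentwise product in question."""
    return (u[0] * v[0], u[1] * v[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

e1, e2 = (1.0, 0.0), (0.0, 1.0)
theta = math.pi / 4  # 45 degrees counterclockwise

# Componentwise product computed before vs. after the rotation:
before = rotate(hadamard(e1, e2), theta)                # rotated (0, 0) -> (0, 0)
after = hadamard(rotate(e1, theta), rotate(e2, theta))  # (-1/2, 1/2)
print(before, after)  # the two disagree: not covariant

# The dot product is unchanged by the same rotation:
print(dot(e1, e2), dot(rotate(e1, theta), rotate(e2, theta)))  # both 0
```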

To elaborate yet a bit more on what Giuseppe Negro, James S. Cook and Michael Joyce have already said:

A vector is **not** a tuple of individual components. A vector is an element of some vector space.

When you’re writing a vector as such a tuple, you’re only referring to the *expansion* of the vector in some particular basis. But this basis is very often not even specified. Which is actually OK, because the “normal” vector operations don’t in fact depend on the choice, i.e. if you transformed all your vectors to be written out in some *other* basis, you would have all different numbers, but the same calculations would still yield correct results.

But that wouldn’t work for component-wise multiplication, as was already shown. This operation simply does *not* work on the vectors but on their basis representation, which is only well-defined for some fixed choice of basis, which is not what you’re actually interested in when studying vectors.

Of course, there are plenty of applications where you are in fact interested in tuples of numbers, but those aren’t vectors then. There’s nothing wrong with the Hadamard product, but it doesn’t work on vectors but on matrices^{1}. If you want to multiply components, then your objects may be called tuples or arrays or lists or whatever, but hardly vectors.

Unfortunately, many people have got this wrong, and that’s why e.g. C++ programmers are blessed with^{2} an `std::vector` class that is in fact for dynamic arrays, which are even less accurately vectors than static arrays.

^{1}Matrices suffer from a similar problem: many people use “matrix” and “linear mapping” as synonyms, but they aren’t in fact the same; matrices refer to a particular basis while linear mappings need no such thing.

^{2}Nothing against `std::vector`, it’s great – it’s just not a vector class, just like “functors” aren’t in fact functors.

I think the reason you don’t normally see it is just because it doesn’t really have an application in linear algebra.

If you look at $\mathbb{F}^n$ as a *ring* instead of a vector space over $\mathbb{F}$, then what you have suggested (coordinatewise multiplication) is exactly the product ring structure of the ring $\mathbb{F}^n$. It’s completely natural, and useful.

It’s just not mentioned in linear algebra because you are rarely thinking of $\mathbb{F}^n$ as a ring, you are usually focused upon its vector space identity.
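As a sketch of that product-ring structure (function names are mine), componentwise operations on tuples satisfy the ring axioms, and the ring even has zero divisors:

```python
def add(u, v):
    """Componentwise addition on F^n."""
    return tuple(a + b for a, b in zip(u, v))

def mul(u, v):
    """The product-ring multiplication on F^n: componentwise."""
    return tuple(a * b for a, b in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 9)

one = (1, 1, 1)  # multiplicative identity of the product ring
assert mul(a, one) == a
assert mul(a, b) == mul(b, a)                          # commutative
assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # distributive
# but not an integral domain: zero divisors exist
assert mul((1, 0, 0), (0, 1, 0)) == (0, 0, 0)
```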

I think a lot of people have probably been cognitively fooled into thinking of your product “like the cross product” or “like the inner product”. Those two are really useful in linear algebra, but the coordinatewise product does not compare.

I often see this product as an incorrect answer in my freshman mechanics course. If they told me they thought I wanted the direct product of $\mathbb{R}$ with itself then I suppose I would let them have their points back.

This multiplication has been on my mind lately. The algebra defined on $\mathbb{R}^2$ by this Hadamard product is isomorphic to the hyperbolic numbers $\mathbb{R} \oplus j\mathbb{R}$ where $j^2=1$. Let’s call your algebra $\mathcal{A}_1$ and the hyperbolic numbers $\mathcal{A}_2$.

The isomorphism is given by $\Phi: \mathcal{A}_1 \rightarrow \mathcal{A}_2$ with $\Phi(a,b) = \frac{1}{2}(a+b)+\frac{1}{2}j(b-a)$. Notice that the identity for $\mathcal{A}_1$ is $(1,1)$ and $\Phi(1,1)=1$. Furthermore, $\Phi^{-1}(x+jy) = (x-y, x+y)$ which allows us to see that $\Phi^{-1}(j) = (-1,1)$. In other words, $(-1,1)$ is the “$j$” for the Hadamard product.

The geometry of $\mathcal{A}_2$ is in part exposed by thinking about $j$-multiplication:

$$ j(x+jy) = jx+j^2y = y+jx $$

Multiplication by $j$ reflects about the line $y=x$. This is obviously different from the complex numbers $\mathbb{R} \oplus i\mathbb{R}$, where multiplication by $i$ maps $(x,y)$ to $(-y,x)$. See http://en.wikipedia.org/wiki/Split-complex_number for the geometry of these hyperbolic numbers.
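A quick Python spot-check of this isomorphism (function names are mine) confirms that $\Phi$ is multiplicative and sends $(-1,1)$ to $j$:

```python
def hadamard(p, q):
    """The product in question on R^2."""
    return (p[0] * q[0], p[1] * q[1])

def hyp_mul(z, w):
    """Hyperbolic-number product: (x + jy)(u + jv) = (xu + yv) + j(xv + yu), j^2 = 1.
    A pair (x, y) encodes x + jy."""
    x, y = z
    u, v = w
    return (x * u + y * v, x * v + y * u)

def phi(p):
    """Phi(a, b) = (a + b)/2 + j(b - a)/2, encoded as a pair."""
    a, b = p
    return ((a + b) / 2, (b - a) / 2)

# Spot-check that Phi is multiplicative: Phi(p . q) = Phi(p) * Phi(q)
for p in [(1, 2), (-3, 5), (0, 7)]:
    for q in [(4, 1), (2, -2), (6, 0)]:
        assert phi(hadamard(p, q)) == hyp_mul(phi(p), phi(q))

assert phi((1, 1)) == (1.0, 0.0)   # the identity (1, 1) maps to 1
assert phi((-1, 1)) == (0.0, 1.0)  # (-1, 1) plays the role of j
```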

This is really just Giuseppe Negro’s answer said differently, but your definition is flawed in the sense that it depends on the choice of a basis. So it’s not really a function of the two input vectors, but rather a function of the two vectors *and* a choice of basis for the vector space.

In Giuseppe’s example, he shows that if you expand two particularly chosen vectors in two different bases, you get two different values for your product using your definition.

Contrast that with the dot product. Let’s take any two vectors $v, w$ in a vector space and compute the dot product. You open your notebook and expand $v$ and $w$ in terms of *your* favorite orthonormal basis $e_1, \dots, e_n$, giving $v = v_1 e_1 + \cdots + v_n e_n$ and $w = w_1 e_1 + \cdots + w_n e_n$, so you conclude that $v \cdot w = v_1 w_1 + \cdots + v_n w_n$. I, on the other hand, open my notebook and expand $v$ and $w$ in *my* favorite orthonormal basis $f_1, \dots, f_n$, giving $v = v'_1 f_1 + \cdots + v'_n f_n$ and $w = w'_1 f_1 + \cdots + w'_n f_n$, so I conclude that $v \cdot w = v'_1 w'_1 + \cdots + v'_n w'_n$. The fact that we wind up with the same number for $v \cdot w$ (even though my $v'_i$ will not be the same as your $v_i$, and similarly for the coefficients of $w$) is what tells us that the dot product is really a function of the vectors $v$ and $w$ alone, not of their expansion coefficients in some particular basis.
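That basis-independence is easy to verify numerically. In this Python sketch (helper names are mine), the dot product of two fixed vectors agrees whether computed in the standard basis or in a basis rotated by 30°, while the componentwise product does not:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hadamard(u, v):
    """The componentwise product in question."""
    return tuple(a * b for a, b in zip(u, v))

# Coordinates of two vectors in the standard basis e1, e2:
v = (3.0, 1.0)
w = (-2.0, 4.0)

# Your notebook: standard-basis coordinates.
yours = dot(v, w)  # 3*(-2) + 1*4 = -2

# My notebook: the same vectors expanded in a basis rotated by 30 degrees.
theta = math.radians(30)

def coords_in_rotated_basis(u):
    """Components of u along f1 = R e1, f2 = R e2 (R = rotation by theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * u[0] + s * u[1], -s * u[0] + c * u[1])

mine = dot(coords_in_rotated_basis(v), coords_in_rotated_basis(w))
print(yours, mine)  # same number, up to rounding

# The componentwise product, by contrast, gives different tuples:
print(hadamard(v, w))
print(hadamard(coords_in_rotated_basis(v), coords_in_rotated_basis(w)))
```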
